Internet giants on the defensive from increasing calls to become censors

As the definitions of ‘hate speech’ and ‘fake news’ get broader, and the consequences of being associated with them become more severe, the internet is facing unprecedented levels of censorship.

Scott Bicheno

March 22, 2017


Last week a report by The Times (pay-walled) in the UK revealed that videos posted on YouTube by claimed ‘extremists’ were being served ads by well-known brands and UK public sector organisations. The subsequent outcry resulted in many of those advertisers pulling their ads from the platform entirely, noting their values don’t coincide with those represented in the offending videos.

Google was understandably concerned by this development, as advertising is the main source of its considerable revenue. First, UK MD Ronan Harris produced the now-customary damage-limitation blog, starting by listing all the great things Google does before conceding that it's not perfect, but will redouble its efforts to become so.

Then Chief Business Officer Philipp Schindler published a list of ‘expanded safeguards for advertisers’, with measures to protect ‘brand safety’ and to give advertisers greater control over where their ads end up. This is smart, as Google is now putting the ball in the advertisers’ court – making them responsible for this stuff. Since many of them didn’t even seem to be aware of what was happening until The Times reported on it, panicked changes at media buying agencies are presumably currently underway.

However, Google doesn’t want to stop there. In its zeal to placate precious advertisers the internet giant seems to be promising increased restrictions on what can be posted at all. “Finally, we won’t stop at taking down ads,” blogged Schindler. “The YouTube team is taking a hard look at our existing community guidelines to determine what content is allowed on the platform – not just what content can be monetized.”

At the same time Twitter – the place so many ‘extreme’ opinions are voiced constantly – has revealed it is banning users said to be promoting terrorism at almost three times the rate it was a year ago. But the other side of that coin, it seems, is state requests to silence legitimate journalism, with Turkey by far the worst offender on that count. You can read the full Twitter transparency report here.

The other big platform for sharing news and views is, of course, Facebook. This week the social media giant started rolling out a new feature aimed at tackling ‘fake news’, which became a prominent issue during last year’s US Presidential election. Facebook is very wisely doing everything it can to avoid becoming the active censor of content on its platform, recognising what a slippery slope that is, and is instead using ‘third party fact checkers’ to flag up ‘inaccurate’ stuff via a pop-up window.

Facebook CEO Mark Zuckerberg has recently spoken on the matter, defending his company from accusations that it likes fake news because it generates traffic. But he was quick to stress that defining ‘fake’ is not straightforward.

“It’s not always clear what is fake and what isn’t,” he said. “A lot of what people are calling fake news are just opinions that people disagree with. We need to make sure that we don’t get to a place where we’re not showing content or banning things from the service just because it hurts someone’s feelings or because someone doesn’t agree with it – I think that would actually hurt a lot of progress.”

Indeed, contentious rightwing news network Breitbart, which has itself been the subject of an advertising ban by a company concerned about brand alignment, has commented that some of Facebook’s fact-checkers appear to be far from impartial. The clear implication is that if the fact-checkers have an agenda, they’re likely to scrutinise ‘facts’ they disagree with more closely than those they instinctively approve of.

Meanwhile even the European Union’s own digital commissioner seems to think things are going too far when it comes to online censorship. Speaking to the FT after the German government announced a new bill threatening to impose fines of up to €50 million on social networks that fail to delete either ‘hate speech’ or ‘fake news’, Andrus Ansip invoked Orwell’s 1984, saying: “Fake news is bad, but a Ministry of Truth is even worse.” He also pointed to the recent triumph of a moderate over a claimed extremist in the Dutch general election as evidence that we should give people more credit for being able to filter bullshit themselves.

Some censorship is clearly desirable. Parents are grateful for the relative difficulty of finding adult content on YouTube and if people openly commit crimes via social networks then, of course, they should be answerable to the law. It’s also inevitable that companies like Google and Facebook will seek to placate both advertisers and regulatory authorities.

The danger of the current trend, of which this article addresses only a fraction, is the insidious reduction in the amount of subject matter considered to be ‘acceptable’ or ‘appropriate’. The regular use of quotation marks in this piece is designed to illustrate how subjective terms like ‘hate speech’ and ‘fake news’ are. It’s very difficult to identify the precise moment speech crosses from strident or opinionated into ‘hate’, and who determines whether news is ‘fake’ or not?

Another completely subjective term increasingly used to police public discourse is ‘offensive’. Not only is offence entirely in the eye of the beholder, but the freedom to cause it is also a cornerstone of free speech and individual empowerment. This principle is encapsulated by the quote often attributed to Voltaire, though actually coined by his biographer Evelyn Beatrice Hall: “I disapprove of what you say, but I will defend to the death your right to say it.”

One recent example of the current situation concerns Bill Leak, an Australian satirical cartoonist who never apparently let fear of online denunciation affect who or what he chose to ridicule. As a consequence his stuff was often deemed ‘offensive’ and drew frequent accusations of overstepping the mark. He died recently and one of his sons tried to defend his legacy via a piece in The Australian, which attracted the inevitable polarised shrillness on Twitter.

Increasingly, decisions on censorship seem to be made by the social media mob. The shrillest users making the most hysterical denunciations have, literally and figuratively, the loudest voices and seem to have disproportionate influence over what should be censored. Their arguments frequently fall short of even the most basic standards of logical and rhetorical rigour, but fear of public humiliation means they are seldom challenged.

Free-speech campaigning site Spiked covered the importance of unfettered satire in its Leak obituary and also expressed concern about Google’s plans to increase YouTube censorship. The internet giants are currently walking a free-speech tightrope, trying to balance the interests of their customers (advertisers) with those of their product (end-users). They have the greatest power to protect the silent majority from the tyranny of the shrill, but they must be prepared to make a lot of contentious decisions in order to do so.

About the Author(s)

Scott Bicheno

As the Editorial Director of Telecoms.com, Scott oversees all editorial activity on the site and also manages the Telecoms.com Intelligence arm, which focuses on analysis and bespoke content.
Scott has been covering the mobile phone and broader technology industries for over ten years. Prior to Telecoms.com, Scott was the primary smartphone specialist at industry analyst Strategy Analytics. Before that Scott was a technology journalist, covering the PC and telecoms sectors from a business perspective.
Follow him @scottbicheno