Scott Bicheno

September 25, 2018

Twitter wants your help with censorship

Social network Twitter continues to agonise over how it should censor its users and thinks getting them involved in the process might help.

While all social media companies, and indeed any involved in the publication of user-generated content, are under great pressure to eradicate horridness from their platforms, Twitter probably has the greatest volume and proportion of it. Content and exchanges can get pretty heated on Facebook and YouTube, but public conversation giant Twitter is where it seems to really kick off.

This puts Twitter in a tricky position: it wants people to use it as much as possible, but would ideally like them to only say nice, inoffensive things. Even the most rose-tinted view of human nature and interaction reveals this to be impossible, so Twitter must therefore decide where on the nice/horrid continuum to draw the line and start censoring.

To date this responsibility has been handled internally, with a degree of rubber-stamping from the Trust and Safety Council – a collection of individuals and groups that claim to be experts on the matter of online horridness and what to do about it. But this hasn’t been enough to calm suspicions that Twitter, along with the other tech giants, allows its own socio-political views to influence the selective enforcement of its own rules.

So now Twitter has decided to invite everyone to offer feedback every time it decides to implement a new layer of censorship. To date the term ‘hate’ has been a key factor in determining whether or not to censor and possibly ban a user. Twitter has attempted to define the term as any speech that attacks people according to race, gender, etc, but it has been widely accused of selectively enforcing that policy along exactly the same lines it claims to oppose, with members of some groups more likely to be punished than others.

Now Twitter wants to add the term ‘dehumanizing’ to its list of types of speech that aren’t allowed. “With this change, we want to expand our hateful conduct policy to include content that dehumanizes others based on their membership in an identifiable group, even when the material does not include a direct target,” explained Twitter in a blog post, adding that such language might make violence seem more acceptable.

Even leaving aside Twitter’s surrender to the Slippery Slope Fallacy, which is one of the main drivers behind the insidious spread of censorship into previously blameless areas of speech, this is arguably even more vague than ‘hate’. For example, does it include nicknames? Or, as the BBC asks, is dehumanizing language targeted at middle-aged white men just as hateful as that aimed at other identity groups?

Perhaps because it’s incapable of answering these crucial questions, Twitter wants everyone to tell it what they think of its definitions. A form on that blog post will be open for a couple of weeks and Twitter promises to bear this public feedback in mind when it next updates its rules. What isn’t clear is how transparent Twitter will be about the feedback or how much weight it will carry. What seems more likely is that this is an attempt to abdicate responsibility for its own decisions and deflect criticism of subsequent waves of censorship.


About the Author(s)

Scott Bicheno

As the Editorial Director of Telecoms.com, Scott oversees all editorial activity on the site and also manages the Telecoms.com Intelligence arm, which focuses on analysis and bespoke content.
Scott has been covering the mobile phone and broader technology industries for over ten years. Prior to Telecoms.com, Scott was the primary smartphone specialist at industry analyst Strategy Analytics. Before that Scott was a technology journalist, covering the PC and telecoms sectors from a business perspective.
Follow him @scottbicheno