The politics of technology
The latest mutiny at Google illustrates what a political game technology has become, and it’s only going to get more so.
April 2, 2019
Last week Google announced the creation of ‘An external advisory council to help advance the responsible development of AI.’ In so doing, Google was acknowledging a universal concern about the ethics of artificial intelligence, automation, social media and technology in general. It also seemed to be conceding that the answers to these concerns need to be universal too.
The Silicon Valley tech giants are frequently accused of a political bias that is hostile to conservative perspectives. Normally this wouldn’t matter, but since the likes of Google, Facebook and Twitter have so much control over how everyone gets their information and opinions, any bias in the way they exercise that control becomes a matter of public concern.
In an apparent attempt to demonstrate diversity of viewpoints in this new Advanced Technology External Advisory Council (ATEAC), Google included Kay Coles James, whom it describes as ‘a public policy expert with extensive experience working at the local, state and federal levels of government. She’s currently President of The Heritage Foundation, focusing on free enterprise, limited government, individual freedom and national defense.’
This decision upset over a thousand Google employees, however, who made their feelings publicly known via an article titled Googlers Against Transphobia and Hate. The piece accuses James of being ‘anti-trans, anti-LGBTQ, and anti-immigrant’ and links to three recent tweets of hers as evidence.
Beyond those tweets it’s hard to fully test the veracity of the allegations, but it does seem clear that they are largely political. The Equality Act, for example, is a piece of legislation currently being debated in the US House of Representatives, sponsored almost entirely by members of the Democratic Party. The legal status of transgender people is intrinsically political, as is immigration policy, and attitudes towards both tend to be similarly polarised.
The Googlers Against Transphobia certainly seem to hold fairly strong opinions, but what makes them noteworthy, apart from their numbers, is that they expect their employer to adhere to their political positions. Google has attempted to defend the appointment of James to one of the eight ATEAC positions by stressing the importance of diversity of thought.
Here’s what the Google dissidents think of that argument. “This is a weaponization of the language of diversity,” they wrote. “By appointing James to the ATEAC, Google elevates and endorses her views, implying that hers is a valid perspective worthy of inclusion in its decision making. This is unacceptable.”
This is just the latest internal insurrection Google has faced from its passionately political workforce. Every time a story emerges about Google working on a censored search engine for China, there is considerable disquiet among the rank and file, who in that case find themselves opposed to censorship. And then there was the case of James Damore, sacked by Google for trying to start an internal conversation about gender diversity at the company.
But Google’s struggles pale when compared to those of Facebook. Every time it seems to have just about recovered from the last crisis it finds itself in a new one. The latest was catalysed by the atrocity committed in New Zealand, in which a gunman killed 50 people praying in two mosques in Christchurch and live-streamed the act on Facebook.
Understandably, questions were immediately asked about how Facebook could have allowed that streaming to happen. While it acted quickly to ensure the video and any copies of it were taken down, Facebook was under massive pressure to implement measures to ensure such a thing couldn’t happen again. Its response has been to announce ‘a ban on praise, support and representation of white nationalism and white separatism on Facebook and Instagram.’
These kinds of ideologies are largely rejected by mainstream society for many good reasons, but ideologies they remain. Facebook is also moving against so-called ‘anti-vaxxers’, i.e. people who fear the side-effects of vaccines. They may well be misguided in that fear, but it is nonetheless an opinion and, at the time of writing, a legal one.
Finding itself under pressure to police ideologies and opinions on its platforms, Facebook seems to have realised this is an impossible task. For every ‘unacceptable’ position it acts against there are thousands waiting in the wings, and a simple extrapolation reveals a future Facebook in which very few points of view are permitted, which would presumably be bad for business. In apparent acknowledgment of that dilemma, Facebook recently called on governments to take the lead on censorship, but it should be careful what it wishes for.
Another category of content facing increasing calls for censorship is ‘conspiracy theories’, with a recent leak revealing how Facebook agonises over such decisions. Google-owned YouTube is also acting against such content, but seems to prefer sanctions that stop short of outright banning, such as the recent removal of videos published by activist Tommy Robinson from all search results.
Again, this puts technology companies in the position of arbiters of content that is often political in nature. How do you define a conspiracy theory anyway, and should all of them be censored? Should, for example, the MSNBC network in the US be sanctioned for aggressively pursuing a narrative of President Trump colluding with Russia to win the 2016 presidential election, when a two-year investigation failed to substantiate it? Is that not a conspiracy theory too?
The current era of political interference in internet platforms was probably started by the Cambridge Analytica scandal and subsequent allegations that the democratic process had been corrupted by social media manipulation. As technology increasingly determines how we view and interact with the world this problem is only going to get bigger and it’s hard to see how technology companies can possibly please all of the people all of the time.
Which brings us back to the ethical use of AI. The only hope internet platforms have of monitoring the billions of interactions they host per day is through AI-driven automation. But even that has to be programmed by people, who inevitably have their own personal views and ethics, and who also need to be responsive to public sentiment as it in turn reacts to events.
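To make that point concrete, here is a minimal, entirely hypothetical sketch in Python (the category names, threshold and function are invented for illustration and do not describe any platform’s real system). It simply shows where the human judgement lives: the banned categories and the removal threshold are editorial decisions made by people, and the automation merely applies them at scale.

```python
# Hypothetical sketch only: invented labels, thresholds and classifier output.
# The point is that the 'AI' part applies rules; people choose the rules.

from dataclasses import dataclass

# Policy decisions made by humans: which categories are actionable and how
# confident the model must be before a post is removed automatically.
BANNED_CATEGORIES = {"white_nationalism", "white_separatism"}
REMOVAL_THRESHOLD = 0.90

@dataclass
class ClassifierResult:
    category: str      # label predicted by some upstream model (assumed here)
    confidence: float  # model's confidence in that label, 0.0 to 1.0

def moderate(result: ClassifierResult) -> str:
    """Decide what happens to a post, given the classifier's output."""
    if result.category in BANNED_CATEGORIES:
        if result.confidence >= REMOVAL_THRESHOLD:
            return "remove"
        return "flag_for_human_review"
    return "allow"

if __name__ == "__main__":
    # Example outputs from the hypothetical classifier.
    examples = [
        ClassifierResult("white_nationalism", 0.95),
        ClassifierResult("white_nationalism", 0.60),
        ClassifierResult("vaccine_scepticism", 0.99),
    ]
    for r in examples:
        print(f"{r.category} ({r.confidence:.2f}) -> {moderate(r)}")
```

The last example is the telling one: posts labelled ‘vaccine_scepticism’ sail through, however confident the model is, unless a person decides to add that label to the banned list. That decision, not the automation around it, is the political act.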
As the US President has done so much to demonstrate, technology platforms are now where much of politics and public discussion takes place. At the same time they’re owned by commercial organizations with no legal requirement to serve the public. They have to balance pressure from professional politicians and from the politics of their own employees against the danger of alienating their users if they’re seen to be biased. Something’s got to give.
This dilemma was well illustrated in a recent episode of the Joe Rogan podcast featuring Twitter, in which CEO Jack Dorsey and his head of content moderation Vijaya Gadde defend themselves against accusations of bias from independent journalist Tim Pool.
UPDATE – 14:30. 4 April 2019: Just saw this tweet.