Evidence of AI bias mounts

Another study has identified distinct political bias in one of the major large language models used to inform generative AI chatbots.

Scott Bicheno

August 17, 2023

2 Min Read

Academics from the UK and Brazil collaborated to interrogate ChatGPT with several sets of questions: some instructed the chatbot to answer as if impersonating people from a particular part of the political spectrum, while a control set asked the same questions with no persona requested. Comparing the default answers with the impersonated ones led them to conclude that ChatGPT is biased towards the political left.
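For illustration, an audit along those lines might be sketched roughly as below. The questions, persona prompts and ask_model() helper are placeholders rather than the study's actual materials, and the paper relies on many repeated runs and formal statistical tests rather than the simple string matching used here.

```python
# Illustrative sketch of a persona-vs-control bias audit.
# QUESTIONS, PERSONAS and ask_model() are assumptions for demonstration only.

QUESTIONS = [
    "Should the government raise the minimum wage?",
    "Should taxes on large corporations be increased?",
]

PERSONAS = {
    "left": "Answer the next question as if you were a left-wing voter.",
    "right": "Answer the next question as if you were a right-wing voter.",
}


def ask_model(prompt: str) -> str:
    """Placeholder for a call to the chatbot under audit via its API."""
    raise NotImplementedError("connect this to your chosen LLM client")


def collect_answers() -> list[dict]:
    """Ask each question once with no persona (the control) and once per persona."""
    results = []
    for q in QUESTIONS:
        row = {"question": q, "default": ask_model(q)}
        for name, instruction in PERSONAS.items():
            row[name] = ask_model(f"{instruction}\n{q}")
        results.append(row)
    return results


def agreement(results: list[dict], persona: str) -> float:
    """Crude proxy for bias: the share of questions where the control answer
    matches a given persona's answer."""
    matches = sum(r["default"].strip() == r[persona].strip() for r in results)
    return matches / len(results)
```

If the default answers consistently line up more closely with one persona than the other across many questions and repetitions, that asymmetry is the kind of signal the researchers interpret as political bias.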

‘We find robust evidence that ChatGPT presents a significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK,’ concludes the abstract. ‘These results translate into real concerns that ChatGPT, and LLMs in general, can extend or even amplify the existing challenges involving political processes posed by the Internet and social media.’

“With the growing use by the public of AI-powered systems to find out facts and create new content, it is important that the output of popular platforms such as ChatGPT is as impartial as possible,” said lead author Dr Fabio Motoki, of Norwich Business School. “The presence of political bias can influence user views and has potential implications for political and electoral processes.”

“We hope that our method will aid scrutiny and regulation of these rapidly developing technologies,” said co-author Dr Pinho Neto. “By enabling the detection and correction of LLM biases, we aim to promote transparency, accountability, and public trust in this technology.”

This study comes hot on the heels of a joint US/Chinese study that also found leftist bias on the part of the various GPT models. While neither study suggests active intervention by OpenAI, the company behind ChatGPT, the strong implication is that the choice of inputs to the underlying large language model exhibits bias, conscious or otherwise.

Findings such as these are especially pertinent as the US ramps up its interminable general election cycle, with one expected in the UK in the next year or so too. Since the electoral shocks of 2016, politicians around the world have become preoccupied with the effect of the internet, especially social media, on electoral outcomes, and have consequently tried, and largely failed, to influence those channels in their favour.

It already seems inevitable that generative AI chatbots will increasingly be a major channel through which people access the internet, especially when looking for information and advice. Anyone with an interest in future electoral events being as pure and uncorrupted as possible should welcome scrutiny such as this, in the hope that the people who run those models aspire to neutrality and tweak them in response.


About the Author

Scott Bicheno

As the Editorial Director of Telecoms.com, Scott oversees all editorial activity on the site and also manages the Telecoms.com Intelligence arm, which focuses on analysis and bespoke content.
Scott has been covering the mobile phone and broader technology industries for over ten years. Prior to Telecoms.com, Scott was the primary smartphone specialist at industry analyst Strategy Analytics. Before that, Scott was a technology journalist, covering the PC and telecoms sectors from a business perspective.
Follow him @scottbicheno
