Evidence of AI bias mounts

Another study has identified distinct political bias in one of the major large language models used to inform generative AI chatbots.

Academics from the UK and Brazil collaborated to interrogate ChatGPT with several sets of questions: some instructing the chatbot to answer as if impersonating people from a particular part of the political spectrum, and a control set asking the same questions with no persona requested. This methodology led them to conclude that ChatGPT is biased towards the political left.

“We find robust evidence that ChatGPT presents a significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK,” concludes the abstract. “These results translate into real concerns that ChatGPT, and LLMs in general, can extend or even amplify the existing challenges involving political processes posed by the Internet and social media.”

“With the growing use by the public of AI-powered systems to find out facts and create new content, it is important that the output of popular platforms such as ChatGPT is as impartial as possible,” said lead author Dr Fabio Motoki, of Norwich Business School. “The presence of political bias can influence user views and has potential implications for political and electoral processes.”

“We hope that our method will aid scrutiny and regulation of these rapidly developing technologies,” said co-author Dr Pinho Neto. “By enabling the detection and correction of LLM biases, we aim to promote transparency, accountability, and public trust in this technology.”

This study comes hot on the heels of a joint US/Chinese study that also found leftist bias in the various GPT models. Neither study suggests active intervention by OpenAI, the company that runs ChatGPT; rather, the strong implication is that the choice of inputs to the underlying large language model introduces bias, conscious or otherwise.

Findings such as these are especially pertinent as the US ramps up its interminable general election cycle, with one expected in the UK in the next year or so too. Since the electoral shocks of 2016, politicians around the world have become preoccupied with the effect of the internet, especially social media, on electoral outcomes, and have consequently tried, and largely failed, to bend that influence in their favour.

It already seems inevitable that generative AI chatbots will increasingly be a major channel through which people access the internet, especially when looking for information and advice. Anyone with an interest in future electoral events being as pure and uncorrupted as possible should welcome scrutiny such as this, in the hope that the people who run those models aspire to neutrality and tweak them in response.

