Nearly half of UK generative AI users assume it always tells the truth

A new report by Deloitte looking at UK attitudes towards generative AI claims that 43% of those who have used apps like ChatGPT believe they always produce factually accurate answers.

Andrew Wooden

July 17, 2023

Ever since ChatGPT burst into mainstream consciousness, after a step-change in capabilities meant it started to sound a hell of a lot more human, the tech sector and then the wider world have been speculating about how far workplaces will be transformed, with predictions ranging from the optimistic to gloomy visions of millions of redundant jobs.

Deloitte’s 2023 Digital Consumer Trends research is based on a survey of 4,150 UK adults aged 16-75, and throws up a number of data points around how many people are using generative AI and what they currently think about it.

We’re told 52% had heard of generative AI and 26% had used it, 9% of whom did so every day. 70% played around with the tech for personal use, while 34% used it for education purposes. 19% apparently think generative AI always produces factually accurate responses, a figure that rises to 43% among those who have used the technology.

32% used generative AI of some description for work, though only 23% believe their employer would approve of them doing so. On this point, Costi Perricos, partner and global AI and data lead at Deloitte, said: “With millions of people using Generative AI tools in the workplace, potentially without permission, it is critical that employers offer appropriate guidelines and guardrails so that their people know how, when and where they can use the technology.

“Businesses will also need to consider how they communicate their own policies on Generative AI to customers and understand how their suppliers are using the technology to ensure transparency. People need to understand the risk and inaccuracies associated with content generated purely from AI, and where possible be informed when content, such as text, images or audio is AI-generated.”

In terms of the fears going around about workplace disruption, 64% of those who have heard of generative AI reckon it will reduce the number of jobs available in the future, and 48% are concerned that AI will replace some of their role in the workplace.

“Generative AI has the potential to be a powerful tool, but it is imperative that its risks are managed,” added Joanna Conway, partner and internet regulation lead at Deloitte Legal. “It is therefore unsurprising to see generative AI regulation emerging across the globe. Through clear and effective rules around data risk management and the key issues of safety, bias, accuracy and liability, policymakers should aim to encourage growth and productivity through AI in a safe and controlled way and to safeguard its users.”

It’s a survey, so it’s by no means an exhaustive picture of how the UK is using generative AI, but it does throw up some interesting findings, particularly around how much the technology seems to be trusted on factual accuracy, given there have already been plenty of examples of these systems producing odd and in some cases entirely incorrect responses.

For a deep dive into generative AI and its ethical landscape, check out our report published today, which collates some of the major debates and events around the subject in media, academic, and political circles.

 


About the Author(s)

Andrew Wooden

Andrew joins Telecoms.com on the back of an extensive career in tech journalism and content strategy.
