Gen Z is in a love-hate relationship with gen AI
Generation Z is using generative AI more than any other age group, despite being more wary than most about its potential pitfalls.
November 28, 2023
A study by Ofcom has found that 79% of teenage Internet users aged 13-17 use gen AI tools and services, compared to 31% of adults. Of those adults who have never used AI, 24% have no idea what it is.
OpenAI’s ChatGPT has been hogging the headlines, so it’s no surprise to learn from Ofcom that it’s the most popular gen AI tool among those surveyed.
Younger Internet users prefer to get their ChatGPT fix via Snapchat My AI. Ofcom says 51% of 7-17 year-olds use the ‘digital sidekick’, which is more or less just ChatGPT dressed up in Snapchat colours. ‘Oldies’ are more inclined to go directly to the source, with 23% of Internet users aged 16 and above professing their use of ChatGPT.
As for what all these people are using it for, the most popular answer is fun, which, if nothing else, is a damning indictment of what passes for fun in the 21st Century. Nonetheless, 58% of respondents enjoy just interacting with AI, compared to 33% who use it for work and 25% who use it to help with studying.
“Getting rapidly up to speed with new technology comes as second nature to Gen Z, and generative AI is no exception,” said Yih-Choung Teh, group director of strategy and research at Ofcom. “While children and teens are driving its early adoption, we’re also seeing older Internet users exploring its capabilities, both for work and for leisure.
“We also recognise that some people are concerned about what AI means for the future,” he continued. “As online safety regulator, we’re already working to build an in-depth understanding of the opportunities and risks of new and emerging technologies, so that innovation can thrive, while the safety of users is protected.”
Indeed, 58% of those surveyed said they are concerned about the potential impact of AI on society. Younger AI users are more cognisant than most, with 67% of 16-24 year-olds saying they are concerned.
As has been extensively documented by Telecoms.com and elsewhere, discussions on AI safety have been numerous and progress is being made to channel the power of AI for good, and rein in its potential to spread prejudice and misinformation.
Just this week, the UK National Cyber Security Centre (NCSC), together with the US Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA), published guidelines designed to help companies build AI systems – either from scratch or running on top of existing AI tools – that function as intended and do not reveal sensitive data to unauthorised parties.
The guidelines cover secure design, development, deployment, and operation and maintenance. They were developed with input from no fewer than 21 agencies and ministries around the world, including all members of the G7 as well as countries from the Global South.
“We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up,” said NCSC chief executive Lindy Cameron.
“These guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout.”
Between global cooperation on AI guardrails and Gen Z being well aware of AI’s dark side, there’s hope for the world yet – maybe.