Big tech pledges to undertake some AI Seoul-searching

A 16-strong group of AI tech companies has joined a global effort to keep close tabs on generative AI (GenAI) development.

Nick Wood

May 22, 2024


The announcement was made at a virtual get-together hosted by South Korea, and builds on the inaugural AI safety summit in the UK last November, which saw a group of 28 countries make broadly the same commitment – dubbed the Bletchley Declaration.

The new recruits to the cause include Amazon, Anthropic, Google, IBM, Meta, Microsoft, OpenAI and Samsung, among others. The full list of signatories is below.

This group has promised to publish safety frameworks – if they haven't done so already – so people can see how they measure the risks of their AI models, such as the risk of the technology being misused by bad actors.

These frameworks will also outline strategies for dealing with so-called "severe risks" that, unless adequately mitigated, would be "deemed intolerable".

In the most extreme circumstances, the companies have also committed to "not develop or deploy a model or system at all" if mitigations cannot keep risks below these thresholds.

"These commitments ensure the world's leading AI companies will provide transparency and accountability on their plans to develop safe AI," said UK Prime Minister Rishi Sunak. "It sets a precedent for global standards on AI safety that will unlock the benefits of this transformative technology."

The summit also saw 10 countries, plus the EU, ramp up their combined efforts to keep the GenAI genie under control, agreeing to set up a network of safety institutes that will pursue a globally coordinated approach to responsible AI development.

They also signed up to the Seoul Declaration, which aims to strengthen international cooperation on developing human-centric AI that is both trustworthy and responsible.

Also at the summit, the UK's Department for Science, Innovation and Technology (DSIT) announced £8.5 million of grants to fund research into how to protect society from the potential risks posed by AI.

The programme will be run out of the UK's AI Safety Institute by Shahar Avin, an AI safety researcher, and Christopher Summerfield, the Institute's research director. It will be delivered in partnership with UK Research and Innovation and The Alan Turing Institute.

The funding "will allow our Institute to partner with academia and industry to ensure we continue to be proactive in developing new approaches that can help us ensure AI continues to be a transformative force for good," said technology secretary Michelle Donelan.

The Seoul Summit concluded with 27 countries – including the UK, Korea, France, the US, and the United Arab Emirates – plus the EU, agreeing to develop proposals for assessing some of the potentially existential risks that AI poses.

Under the Seoul Ministerial Statement, signatories will draw up shared risk thresholds for frontier AI development and deployment. Thresholds might include agreeing when AI capabilities could pose 'severe risks' without appropriate mitigations. This could include helping bad actors to acquire or use chemical or biological weapons, and AI's ability to evade human oversight through deception and manipulation.

The raft of announcements coming out of this latest AI safety summit is a boon for the UK, which is trying to position itself as the driving force behind responsible AI development.

"It has been a productive two days of discussions which the UK and the Republic of Korea have built upon the 'Bletchley Effect' following our inaugural AI Safety Summit," said Donelan.

"The agreements we have reached in Seoul mark the beginning of phase two of our AI safety agenda, in which the world takes concrete steps to become more resilient to the risks of AI and begins a deepening of our understanding of the science that will underpin a shared approach to AI safety in the future," she said.

Indeed, it is all too easy to accuse governments of not following up their well-intentioned declarations with meaningful action, but some of these announcements do seem to put some meat on the bones. Getting household names from the AI sector on board also lends the effort a healthy dose of credibility.

Unfortunately for one of the new signatories, recent events have somewhat undermined this pledge to act responsibly.

For all the clever work being done at OpenAI, its routine public pratfalls inevitably chip away at its claim to being a trustworthy custodian of such potent technology.

It was on a high after launching its latest flagship multimodal model, GPT-4o, but the high didn't last: days later came the departure of OpenAI co-founder and chief scientist Ilya Sutskever – described by CEO Sam Altman as "easily one of the greatest minds of our generation, a guiding light of our field."

By the way, this is the same Ilya Sutskever who orchestrated Altman's firing last November, only to backtrack in the face of a massive rebellion by OpenAI's staff and support Altman's rehiring. Definitely a normal company.

Back to the present, and Jan Leike, OpenAI's head of superalignment – the company's term for keeping superintelligent AI under control – followed Sutskever out the door. He later went on a public and disconcerting rant about his former employer, claiming that "safety culture and processes have taken a backseat to shiny products."

Then, right on cue, one of OpenAI's new chatbot voices – 'Sky' – quickly became mired in controversy when it emerged that it sounded strikingly similar to Hollywood A-lister Scarlett Johansson.

OpenAI initially defended itself, insisting that "Sky's voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice."

However, in a statement issued to various press outlets, Johansson said Altman had approached her about the job, but she declined.

"When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference," she said.

Altman also managed to stitch himself up royally by tweeting the single word 'Her' shortly after Sky's release. Her is a 2013 film starring Joaquin Phoenix, whose character falls for a virtual assistant voiced by Johansson.

Sky has since been pulled and Altman has issued a public apology.

All this drama from a company on the leading edge of AI development, a company that insists it is undertaking its work responsibly.

France is due to host the next AI safety summit early next year. For the sake of humanity, let's hope OpenAI is a bit more grown up by then.

Full list of signatories:

  • Amazon

  • Anthropic

  • Cohere

  • Google / Google DeepMind

  • G42

  • IBM

  • Inflection AI

  • Meta

  • Microsoft

  • Mistral AI

  • Naver

  • OpenAI

  • Samsung Electronics

  • Technology Innovation Institute

  • xAI

  • Zhipu.ai

About the Author

Nick Wood

Nick is a freelancer who has covered the global telecoms industry for more than 15 years. His areas of expertise include operator strategies, M&A and emerging technologies, among others. He has contributed news and features to many well-known industry publications. Before that, he wrote daily news and regular features as deputy editor of Total Telecom. He has a first-class honours degree in journalism from the University of Westminster.

