Google, Microsoft, Anthropic and OpenAI launch AI safety body

Some of the biggest players in the AI sector have grouped together to set up the Frontier Model Forum – described as ‘an industry body focused on ensuring safe and responsible development of frontier AI models.’

Andrew Wooden

July 26, 2023


We’re told the Frontier Model Forum will draw on the expertise of its member companies – currently Anthropic, Google, Microsoft and OpenAI – to do things that ‘benefit the entire AI ecosystem’, such as advancing technical evaluations and benchmarks, and developing a public library of solutions for industry standards.

It defines frontier models as ‘large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models, and can perform a wide variety of tasks.’

The quintessential example would presumably be OpenAI’s own ChatGPT, which has done more than anything else to bring generative AI to the attention of mainstream media and politicians.

The core objectives of the body are listed as:

  • Advancing AI safety research to promote responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety.

  • Identifying best practices for the responsible development and deployment of frontier models, helping the public understand the nature, capabilities, limitations, and impact of the technology.

  • Collaborating with policymakers, academics, civil society and companies to share knowledge about trust and safety risks.

  • Supporting efforts to develop applications that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.

The body is open to other organisations willing to play ball with its stated aims, and over the coming months it will establish an advisory board to guide its strategy and priorities. As for what it actually plans to do next, we’re told that over the coming year it will work to identify best practices, advance AI safety research, and facilitate information sharing among companies and governments.

“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” said Brad Smith, Vice Chair & President, Microsoft. “This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”

Anna Makanju, Vice President of Global Affairs at OpenAI, added: “Advanced AI technologies have the potential to profoundly benefit society, and the ability to achieve this potential requires oversight and governance. It is vital that AI companies – especially those working on the most powerful models – align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible. This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety.”

These sentiments may not be unrelated to the attention governments are increasingly giving AI and the potentially damaging effects it could have on workplaces and society, as well as the benefits it could bring.

But self-regulation is a tricky one: the reason regulators have teeth is that they can stop companies doing things they might otherwise like to do that could be bad for a market or society. If these same companies are tasked with keeping themselves in check, you don’t have to be a cynic to point out the glaring conflict of interest that presents.

And then what about all the other firms, at home and abroad, developing AI that might in future care less about ‘safe and responsible development’ than about, say, being first to market with something that could make them a lot of money? Presumably they just wouldn’t join a body such as this.

It’s good that these particular firms, busy hurling cutting-edge AI models into the world, are thinking a bit more about the potential implications of doing so, especially as in some cases nobody seems fully cognisant of how the things actually work. But what form the actual self-imposed restraining force of this body will take remains to be seen.

