AI leaders publish desultory public statement addressing ‘risk of extinction’


As the artificial intelligence arms race continues to accelerate, so does the rush to signal concern about where it could lead.

The parallel trajectory of these two trends seems increasingly disingenuous, with the latter coming across as a crude attempt to sugar the potentially lethal pill represented by the former. We had Elon Musk, et al, calling for an implausible pause on AI development after OpenAI announced GPT-4, followed by purposeful rustling of papers by US, UK and EU regulators and politicians.

Now the people actively involved in this AI arms race have decided the time has come for them to voice their concerns. They chose to do so through an organisation called the Center for AI Safety (CAIS), which says it ‘exists to equip policymakers, business leaders, and the broader world with the understanding and tools necessary to manage AI risk.’

CAIS published a ‘Statement on AI Risk’ in which ‘AI experts and public figures express their concern about AI risk’ as follows: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

As a broad sentiment that’s hard to argue with. We’re against things that could wipe out humanity and are not afraid to say so publicly. As a useful, actionable statement, however, it’s hard to see what the point of it is. What does ‘mitigating the risk of extinction’ mean – that we should try to make it a bit less likely? There’s certainly no suggestion that one way to do so would be to pause the AI arms race.

Which comes as little surprise given that among the signatories are the CEOs of probably the two main drivers of current AI development – OpenAI and Google DeepMind. To be fair, there are plenty of apparently unaffiliated AI luminaries, such as Geoffrey Hinton (who recently rang AI alarm bells after leaving Google), but you have to wonder what they think this statement will achieve.

And talking of affiliations, it’s not clear what truly motivates CAIS. It doesn’t seem to have published any details of who funds it, with the only reference we could find on its website being the following, in its FAQ section: “CAIS is a nonprofit organization entirely supported by private contributions. Our policies and research directions are not determined by individual donors, ensuring that our focus remains on serving the public interest.”

That’s reassuring. And there’s obviously no chance that the definition of ‘the public interest’ could be in any way influenced by its benefactors either. Could it be that CAIS receives a significant proportion of those private contributions from the AI industry itself, which may have an interest in creating an exaggerated sense of oversight? We don’t know the answer but wouldn’t be asking that question if CAIS were more transparent.

The past few years have shown how bad the world is at mitigating societal-scale risks. In fact, it seems likely that a misguided attempt to do so is what led to the Covid pandemic. And as for nuclear war, Russia’s invasion of Ukraine and the response by the US and its allies have made that prospect more real than it has been for generations. So it’s difficult to feel optimistic about us doing a better job with AI.

Perhaps the example set by the telecoms industry offers the best way forward. Much of its technology is governed by global standards, developed in a collaborative and transparent manner. So long as the US and China persist with their technological cold war, however, the chance of any global consensus on AI seems remote, especially since it’s bound to be of increasing military significance. Still, we have to start somewhere and if this statement reinvigorates the public debate on AI then it will have served some small purpose.

