An open letter signed by many tech leaders is calling for a pause in training all artificial intelligence systems more powerful than GPT-4, but it’s probably futile.

Scott Bicheno

March 29, 2023

[Image: Terminator robot screenshot]

“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs,” opens the letter. Among the most prominent signatories are tech industry legends Elon Musk and Steve Wozniak, as well as a bunch of AI researchers and commentators.

The letter goes on to lament the current AI arms race, which has been catalysed by Microsoft-backed OpenAI and features the best efforts of Google and others to one-up each other. The letter reckons this frantic development is being conducted with insufficient planning and management, which risks unleashing “ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

The letter then poses a series of fundamental questions about the safety, desirability, and ethics of rendering human beings obsolete, which come across as simultaneously naïve and essential. The point is that there are no simple answers to these questions and no consensus about how to even go about trying to find them. The signatories fear that AI development is sprinting ahead of the ethical discussion, hence the call for a pause.

“Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” says the letter. “This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

There’s just so much to unpack in this letter and the issues it addresses. We hear endless talk in the letter and elsewhere about how we should ensure AI is only developed for the good of humanity, but who has the authority to determine that, and what should be done if it’s not? The signatories seem to think ‘governments’ are the answer, but they surely can’t believe that.

Even if they do, there’s no sign of consensus between governments. The EU, inevitably, wants loads of control over AI, but the UK recently decided a more laissez-faire approach is warranted, in order to “turbocharge growth”. Meanwhile, such a pivotal technology is always going to be the focus of competitive tension between superpowers US and China, with any pause by one almost certain to be exploited by the other.

The UK attitude of apparently embracing the upsides of AI while downplaying the downsides is the sort of thing this letter is concerned about. It coincides with the publication of some research from banking giant Goldman Sachs which, while largely bullish on the economic benefits of AI, also features the following paragraph in its executive summary.

If generative AI delivers on its promised capabilities, the labor market could face significant disruption. Using data on occupational tasks in both the US and Europe, we find that roughly two-thirds of current jobs are exposed to some degree of AI automation, and that generative AI could substitute up to one-fourth of current work. Extrapolating our estimates globally suggests that generative AI could expose the equivalent of 300mn full-time jobs to automation.

Surely that alone merits broader discussion, let alone the Terminator/Matrix implications of creating machines that are much cleverer than humans. Inevitably the letter has polarised opinions, as you can see from the selection of tweets below. We agree that you can’t put the technological genie back in the bottle but we do seem to be running out of time to establish global guardrails around the direction of AI development.

[Embedded tweets]


About the Author(s)

Scott Bicheno

As the Editorial Director of Telecoms.com, Scott oversees all editorial activity on the site and also manages the Telecoms.com Intelligence arm, which focuses on analysis and bespoke content.
Scott has been covering the mobile phone and broader technology industries for over ten years. Prior to Telecoms.com, Scott was the primary smartphone specialist at industry analyst Strategy Analytics. Before that Scott was a technology journalist, covering the PC and telecoms sectors from a business perspective.
Follow him @scottbicheno
