Amid warnings, UK defence lab forms Google AI pact


Nick Wood

June 15, 2023


A UK government defence lab has struck a partnership with Google Cloud that includes AI, conjuring forth exactly the sort of images of killer robots that people have been warning about.

To be fair to the UK Defence Science and Technology Laboratory (Dstl), its CEO Paul Hollinshead insisted in a statement that his lab is committed to “safe and responsible AI.”

Nonetheless, Dstl is very keen to work with Google to accelerate the adoption of AI in the UK defence sector, ergo the two have signed a memorandum of understanding (MoU) to that effect.

The agreement covers no fewer than five key areas, and the wording of the announcement is sufficiently ambiguous to allow room to speculate on exactly what purpose Dstl has in mind when it comes to using Google’s powerful AI tech.

First and foremost, Dstl will be able to access and use Google’s AI technologies, processes and people to learn how Google delivers AI solutions to end users.

Dstl will also get a look at some of the applications Google Cloud has developed for other industries to see how they might be used to “solve UK defence challenges,” which sounds a little unsettling.

As part of its ongoing effort to become what it calls an ‘AI ready’ organisation, Dstl will also train and upskill staff with Google-led learning and development opportunities. The partners will also share new ways of working and proven approaches in an effort to create what they hope will be a world-class AI research environment that can attract and retain top talent with its range of tools and infrastructure.

Last but not least, Dstl will be able to access the Google Cloud Marketplace and its wider partner ecosystem.

“As one of the most transformative and ubiquitous new technologies, AI has enormous potential to transform societies,” said Hollinshead.

Indeed, if you believe some of the warnings that have been issued about AI lately, it has the potential to transform societies into dystopias or, even worse, lifeless wastelands.

Such is the degree of AI angst that politicians are pushing hard to regulate it. The European Parliament on Wednesday officially adopted its negotiating position ahead of talks with Member States on the rules that will eventually govern AI.

It is pushing for a ban on using AI for biometric surveillance, emotion recognition, or predictive policing. Furthermore, it wants generative AI systems to disclose when content has been AI-generated. It also wants any AI systems that can be used to influence voters to be branded as high risk.

The EU said in a press release that its aim is to promote human-centric, trustworthy AI that protects health, safety, fundamental rights and democracy from some of the technology’s potentially harmful effects. Interestingly, while it does use the word ‘harmful’, it doesn’t specifically mention military use of AI, or AI systems designed to destroy and kill.

If anything, this week’s announcements from Dstl and the EU serve to highlight the degree of hype about AI – and how highly strung everyone has become about it – ever since people collectively decided that OpenAI’s ChatGPT large language model (LLM) was quite good.

For example, it’s worth noting that last July Dstl launched the Defence Centre for AI Research (DARC), which includes a department called the AI Research Centre for Defence (ARC-D). It was described in a press release at the time as “a centre of excellence which provides real focus to developing and applying AI ethically in defence contexts.”

However, back then, ChatGPT wasn’t providing people on the Internet with hours of entertainment, and the announcement attracted very little attention.

It is highly unlikely that would have been the case had the launch taken place this year.

Nowadays, every scrap of AI-related news is jumped on.

Just look at what happened at the Royal Aeronautical Society (RAeS) earlier this month, when a blog post recounting a presentation given by a US Air Force (USAF) colonel at one of its events caused a huge stir.

Col. Tucker ‘Cinco’ Hamilton explained that during a simulated exercise, an AI-controlled drone had turned on and killed its human operator, because the operator was preventing it from achieving its objective of destroying the enemy.

The blog was subsequently updated to clarify that Col. Hamilton was speaking hypothetically and that no such simulation had actually taken place. Unfortunately, that clarification was made after the original story had been picked up by mainstream media.

In light of Dstl’s AI partnership with Google, and with AI still firmly under the microscope – and with the EU Parliament appearing to skirt the issue of military AI use – there will doubtless be some people wondering when scenarios like Col. Hamilton’s will stop being hypothetical.


About the Author

Nick Wood

Nick is a freelancer who has covered the global telecoms industry for more than 15 years. Areas of expertise include operator strategies, M&As and emerging technologies, among others. As a freelancer, Nick has contributed news and features to many well-known industry publications. Before that, he wrote daily news and regular features as deputy editor of Total Telecom. He has a first-class honours degree in journalism from the University of Westminster.
