UK points LASR at AI-based threats to national security
The UK has stepped up its commitment to keeping the AI genie under control with the launch of another lab.
November 26, 2024
This one is called the Laboratory for AI Security Research, or LASR for short. It was unveiled by Pat McFadden, chancellor of the Duchy of Lancaster, at the NATO Cyber Defence Conference in London.
"The lab will pull together world-class industry, academic and government experts to assess the impact of AI on our national security," he said.
McFadden said the UK and its allies are in an AI arms race, and the new lab will help them stay a step ahead of their adversaries.
The official announcement was somewhat vague on the precise nature of the national security threats posed by AI. What is known, though, is that AI and machine learning (ML) are playing an increasing role in cyber attacks, not only helping hackers to automate their attacks, but also to adapt to cyber defences and evade detection.
The combination of AI with state-sponsored hacker groups and simmering geopolitical tensions is a heady mix, to put it mildly.
"While AI can amplify existing cyber threats, it can also create better cyber defence tools and presents opportunities for intelligence agencies to collect, analyse, and produce more useful intelligence," he said.
"AI is already revolutionising many parts of life – including national security. But as we develop this technology, there's a danger it could be weaponised against us," he continued. "Because our adversaries are also looking at how to use AI on the physical and cyber battlefield."
In a refreshing, albeit slightly unnerving break from tradition, the chancellor did in fact name names.
"Be in no doubt: the United Kingdom and others in this room are watching Russia. We know exactly what they are doing, and we are countering their attacks both publicly and behind the scenes."
The government has stumped up £8.2 million of initial funding for LASR, and has invited the private sector to chip in.
Various government departments are involved, including the Foreign, Commonwealth and Development Office (FCDO); the Department for Science, Innovation and Technology (DSIT); Government Communications Headquarters (GCHQ); the National Cyber Security Centre (NCSC); and the MOD's Defence Science and Technology Laboratory (Dstl).
The Alan Turing Institute will also contribute, as will the AI Safety Institute – which seems to cover a lot of the same ground as LASR. The University of Oxford, Queen's University Belfast, and tech-focused consultancy and workspace provider Plexal are also partners.
That the UK government feels the need to launch LASR seems to be a tacit acknowledgement that the global commitment to responsible AI development is little more than hot air.
Roughly this time last year, the previous government was hailing the Bletchley Declaration.
The multilateral pledge – signed by 28 countries and the EU, including the US, China, Japan, South Korea and Saudi Arabia – aims to put in place the guardrails necessary to ensure that AI is a force for good.
However, these commitments seem a little pointless if the prevailing wisdom is that those who are determined to destabilise their adversaries are already ignoring them.