UK gives regulators millions to prepare for AI
The UK government has decided to bung regulators £10 million to help make them ready for the rigours of handling everyone's favourite fast-growing technology buzz topic: artificial intelligence.
February 6, 2024
The move forms part of a broader AI funding allocation. It comes alongside another £90 million to be ploughed into nine new UK-based AI research hubs, a partnership with the US on responsible AI, and a handful of other AI initiatives in the low millions of pounds.
The funding announcement is part of the response to the government's consultation into its AI regulation white paper published almost a year ago, which essentially sets out the country's approach to AI regulation. Coincidentally, it also comes just days after the European Commission's Internal Market commissioner, Thierry Breton, announced that all 27 member states had endorsed the agreement reached in December on the upcoming Artificial Intelligence Act.
Essentially, it's an attempt to regulate AI within the EU, preventing the deployment of potentially harmful models, imposing transparency on general purpose AI systems, and creating rules for different levels of risk within AI. There is, of course, more to it than that – a lot more – but we'd need a nutshell the size of a KP factory to go into further detail at this stage. Suffice it to say, the act is now one step closer to actually coming into force, and as a result the EU is blowing its own trumpet as the world's best AI regulator. Hard.
"EU means AI!" Breton declared on Twitter late last week. Doubtless those in Westminster would beg to differ.
The timing of the two announcements may well have been coincidental; the EU has been working on the AI Act for years, and the UK government's white paper came out last March, though it had been kicking around ideas for some time before that. But it's pretty clear that the UK is desperate to prevent its erstwhile continental compadres from winning the AI regulation race. It has to be seen to be doing something too.
The UK government made it clear some years back that it would not set up a new AI-specific regulator and would instead lean on its existing regulatory bodies to come up with their own approaches to managing the technology within their own specific remits. We're talking Ofcom, the Competition and Markets Authority (CMA), the Health and Safety Executive, the Equality and Human Rights Commission and so forth.
Now it has decided they need financial help in order to do that, hence the £10 million in funding. The cash will be used to "prepare and upskill regulators to address the risks and harness the opportunities of this defining technology," the Department for Science, Innovation and Technology (DSIT) announced on Tuesday.
DSIT expects regulators to use the funding for research and practical tools to monitor and address risks in their sectors. "For example, this might include new technical tools for examining AI systems."
Key regulators, including Ofcom and the CMA, have been given until 30 April to publish their approach to managing AI.
"This approach to AI regulation will mean the UK can be more agile than competitor nations, while also leading on AI safety research and evaluation, charting a bold course for the UK to become a leader in safe, responsible AI innovation," DSIT said.
And for good measure it shared a raft of canned quotes from major players in the AI space – Microsoft, Google DeepMind, Amazon and others – essentially applauding the government for taking a stab at regulating AI.
Ultimately though, for all the global posturing, there may well be no real winners in this race. The EU, the UK and others around the world are arguably trying to regulate the unregulate-able.
White papers, action plans, agreements between EU member states and so on move at a snail's pace compared with the development of technology in general, and even more so with something like AI. For all its self-aggrandisement and global leadership talk, DSIT clearly recognises that.
"The technology is rapidly developing, and the risks and most appropriate mitigations, are still not fully understood," it admitted.
"The UK Government will not rush to legislate, or risk implementing 'quick-fix' rules that would soon become outdated or ineffective," DSIT said. "Instead, the Government's context-based approach means existing regulators are empowered to address AI risks in a targeted way."
Or to put it another way, we'll throw around a lot of ideas and share them with the world, but essentially, we'll just see how it pans out.