UK government throws around some ideas for AI rules

The UK Government has put forward proposals on the future regulation of AI, which would see various regulators apply six ‘principles’ to markets which are implementing such technologies.

Andrew Wooden

July 18, 2022

4 Min Read

The Government has sketched out its approach to regulating AI in a paper published today, which describes ‘proposed rules addressing future risks and opportunities so businesses are clear how they can develop and use AI systems and consumers are confident they are safe and robust.’

It has come up with six core principles that regulators in different sectors would have to enforce, which it claims are designed to focus on supporting growth and avoiding ‘unnecessary barriers being placed on businesses’. Applications of these rules could include sharing information on how businesses test AI reliability, and on how any related deployments are ‘safe and avoid unfair bias.’

Emphasis is placed on having different bodies enforce things relevant to their sectors, as opposed to what it describes as the more centralised way of keeping an eye on AI coming out of the EU – so Ofcom, the Competition and Markets Authority, the Information Commissioner’s Office, the Financial Conduct Authority and the Medicines and Healthcare products Regulatory Agency would all have to decide how to interpret and enforce the rules in their respective fields.

This will create ‘proportionate and adaptable regulation’, it reckons, and regulators will be encouraged to consider ‘lighter touch options’ which could include guidance and voluntary measures – which could presumably be ignored. The six key principles that make up the spirit of the proposals are:

  • Ensure that AI is used safely

  • Ensure that AI is technically secure and functions as designed

  • Make sure that AI is appropriately transparent and explainable

  • Consider fairness

  • Identify a legal person to be responsible for AI

  • Clarify routes to redress or contestability

There’s more detail in the published paper, and the government has also invited various types in the know about AI, as well as the regulators themselves, to give feedback on what it is suggesting. That feedback will be considered during the development of the forthcoming AI White Paper, which will explore how to put the principles into practice.

“We welcome these important early steps to establish a clear and coherent approach to regulating AI,” said Professor Dame Wendy Hall, Acting Chair of the AI Council. “This is critical to driving responsible innovation and supporting our AI ecosystem to thrive. The AI Council looks forward to working with government on the next steps to develop the White Paper.”

Digital Minister Damian Collins added: “We want to make sure the UK has the right rules to empower businesses and protect people as AI and the use of data keeps changing the ways we live and work. It is vital that our rules offer clarity to businesses, confidence to investors and boost public trust. Our flexible approach will help us shape the future of AI and cement our global position as a science and tech superpower.”

Having rules which amount to ‘be fair’ and ‘be safe’ seems so broad as to make you wonder how they could be effectively applied to anything in a legal setting, but there will probably be more for the regulators to get their teeth into once what the government is describing with these principles comes into clearer focus in the next white paper.

Whenever the UK government starts making a noise about how it’s going to crack down on, influence or regulate big tech sectors in its own way, the point has to be made that there are complications when most of the products are going to come out of other countries. It doesn’t seem likely that Ofcom or the CMA will have a significant impact on how the AI industry evolves in general.

But to be fair, it wasn’t very long ago that you’d get a blank look or a smirk out of any politician asked how AI is going to change society, so it’s worth giving them props for at least seeming to have had a good think about the issue.

AI applications are being integrated into an increasing number of sectors, and come as standard as part of some wider software deployments from big tech firms. It depends who you ask, but when it comes to the dangers that may be lying in wait down the road, we could be looking at anything from people losing jobs to a full-on takeover of the global military and governmental framework by a hyper-intelligent AI intent on the destruction of all redundant, fleshy human life on the planet. Perhaps Ofcom should have a rule about that.

