UK competition watchdog fleshes out approach to AI regulation

Accountability, flexibility and transparency are among the seven principles that will guide the Competition and Markets Authority (CMA) as it keeps tabs on AI.

The CMA has a slightly different remit from some of the other watchdogs that have weighed in on this year’s hottest tech topic. There is still an emphasis on consumer protection, but that protection flows from the structure and functioning of the market, rather than from rules on AI development itself.

Since its preliminary review of AI in May, the CMA has talked to 70 stakeholders – ranging from AI developers and businesses that use AI, to consumer and industry groups, and academics – to help inform its approach.

Based on this consultation, it has now drawn up a more detailed blueprint for how it thinks companies that develop foundation models (FMs) – which underpin generative AI services like OpenAI’s ChatGPT and Google Bard – should conduct themselves to safeguard competition and end users.

First and foremost, the CMA wants to make FM developers and the companies that deploy them accountable for the outputs their AI provides to consumers. This would give regulators recourse in the event that an AI spreads harmful or misleading information, and is the one principle that comes closest to steering AI development.

The next five principles – access, diversity, choice, flexibility and fair dealing – are all riffs on the theme of making sure that consumers and businesses have a choice about which FM they use, and have the freedom to switch between them as needed. The CMA could intervene if it uncovers evidence of anti-competitive conduct, including tying clients to long-term deals with onerous terms and conditions, or anti-competitive self-preferencing and bundling.

The seventh and final principle, transparency, is about making sure consumers and businesses are informed about the risks and limitations of FM-generated content. It is not clear whether the onus would be on AI developers or regulators to disseminate this information.

“The CMA’s role is to help shape these markets in ways that foster strong competition and effective consumer protection, delivering the best outcomes for people and businesses across the UK,” said CMA chief exec Sarah Cardell, in a statement. “In rapidly developing markets like these, it’s critical we put ourselves at the forefront of that thinking, rather than waiting for problems to emerge and only then stepping in with corrective measures.”

As well as consulting stakeholders, the CMA has established these principles in line with a UK government white paper published in late March, which set out its overarching position on AI regulation.

Rather than giving responsibility for AI governance to a single new regulator, the government has called on existing watchdogs – the CMA, the Health and Safety Executive, and the Equality and Human Rights Commission – to draw up their own context-specific approaches that address how AI relates to their sectors. It sounds like a calm and pragmatic approach – which is unusual for a central government – that avoids the confusion and upheaval that typically come with establishing and staffing a new regulator.

Over the next few months, the CMA plans to engage with Google, Meta, OpenAI, Microsoft, Nvidia and Anthropic to gauge the reception of its seven guiding principles. It will also talk to enterprises, consumer groups, academics, governments and other regulators. It aims to provide an update on its regulatory approach in early 2024.

“While I hope that our collaborative approach will help realise the maximum potential of this new technology, we are ready to intervene where necessary,” Cardell said.

