The European Commission is proposing new rules that will allow it to ban any form or implementation of AI it deems too risky.

Scott Bicheno

April 21, 2021

Having kept its head down for a bit following the Covid vaccine debacle, the EC is back with a vengeance, vowing to make up for lost time by embarking on a banning spree. At the top of the list is Artificial Intelligence, which it reckons is far too risky not to have the elite of European bureaucrats constantly passing judgment over it.

“On Artificial Intelligence, trust is a must, not a nice to have,” revealed Margrethe Vestager, EVP for a Europe fit for the Digital Age. “With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted. By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way. Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.”

“AI is a means, not an end,” pitched in Commissioner for Internal Market Thierry Breton. “It has been around for decades but has reached new capacities fueled by computing power. This offers immense potential in areas as diverse as health, transport, energy, agriculture, tourism or cyber security. It also presents a number of risks. Today’s proposals aim to strengthen Europe’s position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use.”

Yeah, sure. These rules follow an arbitrary risk scale, at the top of which is ‘unacceptable risk’, which will result in the offending bit of AI being banned. Anything that threatens the job security of Eurocrats will presumably be on that list. The big category, however, is ‘high-risk’, which covers the following use cases.

  • Critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;

  • Educational or vocational training, that may determine the access to education and professional course of someone’s life (e.g. scoring of exams);

  • Safety components of products (e.g. AI application in robot-assisted surgery);

  • Employment, workers management and access to self-employment (e.g. CV-sorting software for recruitment procedures);

  • Essential private and public services (e.g. credit scoring denying citizens opportunity to obtain a loan);

  • Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence);

  • Migration, asylum and border control management (e.g. verification of authenticity of travel documents);

  • Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).
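To be clear, the Commission hasn’t published anything as concrete as code, but if you wanted to caricature the proposed tiering logic in Python it might look something like the sketch below. The two tier names come from the proposal, the high-risk labels paraphrase the list above, and the banned practices are examples taken from the Commission’s own announcement (government social scoring, behaviour manipulation that causes harm); the `RiskTier` enum and the `classify_risk` function are entirely our illustration, not anything in the draft regulation.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "allowed, subject to strict obligations"
    OTHER = "lighter or no specific obligations"

# Use-case labels paraphrased from the Commission's high-risk list above.
HIGH_RISK_AREAS = {
    "critical infrastructure",
    "education and vocational training",
    "product safety components",
    "employment and worker management",
    "essential private and public services",
    "law enforcement",
    "migration, asylum and border control",
    "administration of justice and democratic processes",
}

# Practices the Commission's announcement flags as banned outright, e.g.
# government social scoring and behaviour manipulation that causes harm.
UNACCEPTABLE_PRACTICES = {
    "social scoring by governments",
    "behavioural manipulation causing harm",
}

def classify_risk(use_case: str) -> RiskTier:
    """Map a use-case label to a tier under the proposal (illustrative only)."""
    if use_case in UNACCEPTABLE_PRACTICES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_AREAS:
        return RiskTier.HIGH
    return RiskTier.OTHER

print(classify_risk("law enforcement"))                 # RiskTier.HIGH
print(classify_risk("social scoring by governments"))   # RiskTier.UNACCEPTABLE
```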

These are the hoops you have to jump through if the EC puts your AI stuff in the high-risk category:

  • Adequate risk assessment and mitigation systems;

  • High quality of the datasets feeding the system to minimise risks and discriminatory outcomes;

  • Logging of activity to ensure traceability of results;

  • Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;

  • Clear and adequate information to the user;

  • Appropriate human oversight measures to minimise risk;

  • High level of robustness, security and accuracy.
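None of those bullets maps onto a specific technical standard yet, but to give a flavour of what ‘logging of activity’ and ‘human oversight’ might mean at the coalface, here’s a minimal, entirely hypothetical sketch: a wrapper that records every decision to an audit log and flags low-confidence ones for human review. The `AuditedModel` class, its `confidence_floor` parameter and the assumed `(label, confidence)` model interface are our invention, not anything the EC prescribes.

```python
import json
import logging
import time
from typing import Any

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

class AuditedModel:
    """Wraps a model to log every decision (traceability) and flag
    low-confidence ones for human review (oversight). Illustrative only."""

    def __init__(self, model, confidence_floor: float = 0.8):
        self.model = model                      # assumed to return (label, confidence)
        self.confidence_floor = confidence_floor

    def predict(self, features: dict[str, Any]) -> dict[str, Any]:
        label, confidence = self.model(features)
        record = {
            "timestamp": time.time(),
            "inputs": features,
            "output": label,
            "confidence": confidence,
            "needs_human_review": confidence < self.confidence_floor,
        }
        audit_log.info(json.dumps(record))      # append-only trail for auditors
        return record

# Usage with a stand-in model that approves everything at 65% confidence:
model = AuditedModel(lambda f: ("approve", 0.65))
print(model.predict({"applicant_id": 42})["needs_human_review"])  # True
```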

While we defer to no-one when it comes to paranoia about machines taking over the world, unless the EC tightens these rules up a hell of a lot, they risk making it just too much of a drag to develop AI applications in the EU for all but the largest companies. The UK, of course, is now free to ignore these ridiculously vague regulations and do its own thing with AI. If recent precedent is anything to go by, that will result in us developing it much more quickly and the EU seeking to punish us in a feeble attempt to save face.

About the Author(s)

Scott Bicheno

As the Editorial Director of Telecoms.com, Scott oversees all editorial activity on the site and also manages the Telecoms.com Intelligence arm, which focuses on analysis and bespoke content.
Scott has been covering the mobile phone and broader technology industries for over ten years. Prior to Telecoms.com, Scott was the primary smartphone specialist at industry analyst Strategy Analytics. Before that, Scott was a technology journalist, covering the PC and telecoms sectors from a business perspective.
Follow him @scottbicheno
