Europe publishes stance on AI ethics, but don’t expect much

The European Commission has revealed its latest white paper detailing guidelines on an ethical and trustworthy approach to AI, but whether it actually means anything remains to be seen.

Jamie Davies

June 28, 2019


The guidelines themselves are now open for public comment, with the Gaggle of Red Tapers seeking feedback on how to improve them and make them more applicable to the world of today. In the meantime, the industry continues to operate under the semblance of oversight while, in reality, it remains a digital wild west.

Such is the top-line nature of the guidelines that you have to wonder whether any real effort has been made to work out how businesses would integrate the thinking. At the moment, the guidelines have little substance to them, simply stating the obvious, or at least what you would hope is obvious to the developers creating the algorithms and applications. They would have been useful two or three years ago, but now they read as a redundant statement. AI regulation needs action, not philosophical musing.

After reading the guidelines, the overriding sense is 'so what?'. What was the point of the exercise, aside from attracting a few headlines for the European Commission? There is nothing new in the document, just the Commission making a statement for the sake of making a statement.

The seven guidelines are as follows:

  1. Humans should have oversight of AI at all times

  2. AI systems need to be resilient and secure

  3. Governance measures should be introduced to protect privacy

  4. Transparency should be ensured

  5. Bias should be removed

  6. AI should benefit all

  7. Accountability for AI should be introduced

Having the guidelines is all well and good, as there needs to be a yardstick, but we would expect at the very least some sort of accountability model. It all seems a bit half-arsed at the moment, as numerous questions remain unanswered.

Firstly, how is the European Commission going to judge whether industry is following these guidelines? What will the metrics be? What punishments will there be for ignoring the principles or for negligent behaviour? And where are the reporting mechanisms for 'unethical' behaviour and complaints?

The next step for the Commission is to consult with industry and run various pilot programmes across the bloc. Once these initiatives are complete, there will be another consultation period before the Commission reviews the assessment lists for the key requirements in early 2020. At some point in the ill-defined future, Europe might have some rules on AI.

Considering the posturing of the last couple of months, during which Europe has promised it will lead the world on AI, this announcement seems nothing but superficial. These generic comments and guidelines should have been put out years ago; now is the time for action and the time for rules.

AI is already in the world and having a fundamental impact on our day-to-day lives. We might not realise it all the time, but it is increasingly interwoven into the services and products which we use each day. Now is the time for action from regulators, not posturing and pondering.
