EU human rights agency says we should tread carefully with AI

The EU Agency for Fundamental Rights (FRA) has issued a report warning of confusion about the impact AI and automation can have on people’s rights.

As if the whole topic of AI wasn’t already dystopian enough, the report is titled ‘Getting the future right’, as if the FRA reckons it already has. It warns that, while AI might be handy at times, it can also lead to discrimination and be hard to challenge. It calls on policymakers to provide more guidance on how existing rules apply to AI and to ensure any future AI laws protect fundamental rights.

“AI is not infallible, it is made by people – and humans can make mistakes,” said FRA Director Michael O’Flaherty. “That is why people need to be aware when AI is used, how it works and how to challenge automated decisions. The EU needs to clarify how existing rules apply to AI. And organisations need to assess how their technologies can interfere with people’s rights both in the development and use of AI. We have an opportunity to shape AI that not only respects our human and fundamental rights but that also protects and promotes them.”

Here are the specific things it wants all EU stakeholders to have a think about:

  • Make sure that AI respects ALL fundamental rights – AI can affect many rights – not just privacy or data protection. It can also discriminate or impede justice. Any future AI legislation has to consider this and create effective safeguards.
  • Guarantee that people can challenge decisions taken by AI – people need to know when AI is used and how it is used, as well as how and where to complain. Organisations using AI need to be able to explain how their systems take decisions.
  • Assess AI before and during its use to reduce negative impacts – private and public organisations should carry out assessments of how AI could harm fundamental rights.
  • Provide more guidance on data protection rules – the EU should further clarify how data protection rules apply to AI. More clarity is also needed on the implications of automated decision-making and the right to human review when AI is used.
  • Assess whether AI discriminates – awareness about the potential for AI to discriminate, and the impact of this, is relatively low. This calls for more research funding to look into the potentially discriminatory effects of AI so Europe can guard against it.
  • Create an effective oversight system – the EU should invest in a more ‘joined-up’ system to hold businesses and public administrations accountable when using AI. Authorities need to ensure that oversight bodies have adequate resources and skills to do the job.

That all seems fairly sensible, which raises the question of why this report was considered necessary. What safeguards are currently being put in place before we hand over our lives to some pitiless, amoral machine? Most of the time automation is used to make human beings redundant and thus save money. While the morality of doing so is, in itself, worthy of further examination, it should certainly not be used to shield those who employ it from liability if it results in negative outcomes.
