UK government grapples with bias in artificial intelligence

Artificial intelligence (AI) has enormous potential for good, but with applications processing data faster than we can comprehend, how do you protect against bias?

To address this issue, the Department for Digital, Culture, Media and Sport (DCMS) has unveiled the Centre for Data Ethics and Innovation, with one of its first projects focusing on programmed or learned bias in the algorithms which power AI.

“Technology is a force for good and continues to improve people’s lives but we must make sure it is developed in a safe and secure way,” said Digital Secretary Jeremy Wright. “Our Centre for Data Ethics and Innovation has been set up to help us achieve this aim and keep Britain at the forefront of technological development.

“I’m pleased its team of experts is undertaking an investigation into the potential for bias in algorithmic decision-making in areas including crime, justice and financial services. I look forward to seeing the centre’s future recommendations to help make sure we maximise the benefits of these powerful technologies for society.”

First up, the new centre will partner with the Cabinet Office's Race Disparity Unit to explore the potential for bias in crime and justice. As more applications emerge in the world of policing, assessing the likelihood of re-offending for instance, the lack of research into the potential for bias makes for a very dangerous scenario.

The algorithms in place might never demonstrate any bias, but deploying them without understanding the risk is incredibly irresponsible. When these applications are used to inform decisions about policing, probation and parole, the consequences are very real. Proceeding without safeguards against bias leaves outcomes to chance.

This is of course just one application of AI, but its use is becoming far more common. In recruitment, computer algorithms can be used to screen CVs and shortlist candidates; in financial services, data analysis has long been used to inform decisions about whether people can be granted loans. Unconscious bias can creep into both, with very detrimental outcomes. In the recruitment case, reports of gender bias have already circulated.

Technology giant Amazon is one firm which was caught unawares. In 2014, Amazon began building an application to review the CVs of the thousands of applicants it receives every week, giving each CV a rating between one and five stars. By 2015, it realised the application was not assessing CVs in a gender-neutral manner, favouring male applicants for more technical roles.

The complication perhaps arises when machine learning applications search for attributes which are traditionally associated with a role. For a machine, data is everything: patterns in historical data look like valid signals, and stereotypes appear in that data, so reproducing them seems a perfectly logical decision.
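The mechanism can be sketched in a few lines of Python, using entirely hypothetical data: a model that scores CV keywords by historical hire rates will treat any attribute correlated with past rejections, including proxies for gender, as a negative signal.

```python
# A minimal sketch of learned bias, using hypothetical historical data.
# Past (biased) decisions rejected applicants whose CVs mentioned a
# women's club, so the keyword becomes a proxy for gender.
from collections import Counter

history = [
    ({"python", "engineering"}, True),
    ({"java", "engineering"}, True),
    ({"python", "womens_chess_club"}, False),
    ({"java", "womens_chess_club"}, False),
]

def keyword_weights(history):
    """Score each keyword by its hire rate in the historical data."""
    hired, seen = Counter(), Counter()
    for keywords, was_hired in history:
        for kw in keywords:
            seen[kw] += 1
            hired[kw] += was_hired
    return {kw: hired[kw] / seen[kw] for kw in seen}

weights = keyword_weights(history)
# The model "learns" that womens_chess_club predicts rejection (weight
# 0.0 vs 1.0 for engineering), purely because past decisions were biased.
```

Nothing in the code mentions gender; the bias enters entirely through the labels, which is why it is so hard to spot by inspecting the algorithm alone.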

This type of conundrum is one of the main challenges with AI. Because these machines are driven by data and code, it is very difficult to translate ethics, morals, acceptable tolerances, nuance and societal influences into a language they understand. These are also limited applications, built for a single purpose. In the recruitment case, the application decides based on past attributes but has no ability to understand context. Here, the context is that sexism is not acceptable, but without the general knowledge or understanding of a human, how would the machine know?

This is the finely balanced equation which both industry and government have to assess. Without slowing the wheels of progress, how do you protect society and the economy from the known unknowns and unknown unknowns?

What is developing is a perfect catch-22. The known challenges are understood, but without a solution, progress carries risk. Then there are the unknown challenges, those which might be compounded by progress without anyone being aware until it is a complete disaster.

The Centre for Data Ethics and Innovation is an excellent idea which could benefit society in numerous ways. But it faces an almost impossible task.
