IBM unveils software to detect AI bias, but how do you know it isn’t also biased?
IBM has unveiled its latest offering, the Fairness 360 Kit, which will help identify any bias in AI decision making and recommend adjustments.
September 19, 2018
Bias is the one area of the burgeoning artificial intelligence segment which could prove to be its downfall. AI is supposed to be a set of technologies designed to make our lives easier, though the presence of bias in algorithms and outcomes could undermine mass-market acceptance. Why would anyone want to integrate technology which could be fundamentally flawed?
“IBM has led the industry and has driven the establishment of values such as trust and transparency in the development of new AI technologies,” said David Kenny, SVP of Cognitive Solutions at IBM. “It is time to bring these values to the table. We are providing companies that use AI with greater transparency and control to face the potential risk of erroneous decision-making.”
Research from the RAND Corporation, a paper entitled ‘An Intelligence in Our Image’, assesses some of the risks associated with bias. While some might be small and inconsequential, errant algorithms in infrastructure, defence systems or financial markets could cause significant damage on a global scale. Much of course depends on the purpose of the AI application, though incomplete data, subconscious bias on the part of the human programmer, or applying the application to a process or situation it was not designed for could all skew the technology. In short, there is a lot which could go wrong with this embryonic technology.
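To see how the ‘incomplete data’ risk plays out, consider a minimal sketch in Python. The data is entirely hypothetical and is not drawn from the RAND paper; the point is simply that a model which learns from skewed history reproduces that history.

```python
# Toy illustration, with entirely hypothetical data, of how skewed training
# data bakes bias into a model: this "model" learns nothing more than the
# historical approval rate for each group, then reproduces it.

from collections import defaultdict

# Past human decisions in which group "b" was systematically under-approved.
history = [
    ("a", True), ("a", True), ("a", False),
    ("b", False), ("b", False), ("b", True),
]

outcomes = defaultdict(list)
for group, approved in history:
    outcomes[group].append(approved)

# The learned "parameters" are just the historical rates: roughly 0.67 for
# group "a" and 0.33 for group "b". Yesterday's bias becomes tomorrow's model.
learned_rates = {g: sum(v) / len(v) for g, v in outcomes.items()}
print(learned_rates)
```

No programmer here set out to discriminate; the bias arrived with the data.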
According to the paper, as the breadth and depth of data increases, so does the pressure to extract insight from it. That pressure drives ever more complex algorithms to create value out of the information, which may be compounding the problem. Some might suggest it is unwise to move on to more complex AI applications when the basic ones have not yet been mastered.
The simple answer is not to use AI at all, though this is not a feasible solution. Other ideas include conducting regular audits of the algorithms and/or providing more transparency on how the decision-making process works. Unfortunately, given the increasingly complex nature of artificial intelligence, offering transparency, in theory the perfect solution, is a fairly useless path to travel: the general public, and even the organizations implementing the technology, will not understand it. In a recent IBM survey, 63% of respondents said they lack the capabilities and internal talent to manage this technology with confidence.
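What might such an audit actually check? One common, generic fairness metric (not IBM’s proprietary method) is the disparate impact ratio: the rate of favourable outcomes for an unprivileged group divided by the rate for a privileged one. The sketch below uses hypothetical loan decisions purely for illustration.

```python
# Minimal sketch of one algorithmic audit: the disparate impact ratio.
# All decision data here is hypothetical, purely for illustration.

def favourable_rate(decisions, attribute, value):
    """Share of people in one group who received a favourable outcome."""
    members = [d for d in decisions if d[attribute] == value]
    return sum(d["approved"] for d in members) / len(members) if members else 0.0

# Hypothetical loan decisions produced by some model under audit.
decisions = [
    {"gender": "f", "approved": True},  {"gender": "f", "approved": False},
    {"gender": "f", "approved": False}, {"gender": "m", "approved": True},
    {"gender": "m", "approved": True},  {"gender": "m", "approved": False},
]

ratio = favourable_rate(decisions, "gender", "f") / favourable_rate(decisions, "gender", "m")

# US regulators' "four-fifths rule" treats a ratio below 0.8 as a potential
# sign of adverse impact; this toy dataset comes out at 0.50.
print(f"disparate impact ratio: {ratio:.2f}")
```

An audit like this is cheap to run, but it only surfaces the symptom; explaining why the rates diverge is the hard part.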
Companies like IBM, Google and Amazon have been doing wonderful things to democratise AI, though this is part of the problem. In increasing the accessibility of AI, these companies are allowing organizations to use the technology without understanding it. Those organizations are ‘standing on the shoulders of giants’, yet they are helpless to identify when there is an issue with the bedrock technology because they had no hand in developing it.
IBM’s answer is to offer AI which can detect error and bias in other AI systems. The software service can be programmed to monitor the specific decision factors taken into consideration in any business workflow. It effectively monitors decision-making processes in real time, capturing potentially unfair results as they occur. The software will identify which factors tilted a decision one way or the other, the confidence in the recommendation, and the factors behind that confidence.
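IBM has not published the kit’s internals here, so the sketch below is only a hypothetical illustration of the pattern the article describes: wrap a model, log the factors behind each decision, and raise an alert when group-level approval rates drift apart. The ToyModel class, its .predict() method and the 0.8 threshold are all invented for the example, not IBM’s code.

```python
# Hypothetical sketch of real-time bias monitoring in the style the article
# describes. This is NOT IBM's Fairness 360 code; every name here is invented.

from collections import defaultdict

class FairnessMonitor:
    """Wraps a model, logs decision factors, and flags skewed outcome rates."""

    def __init__(self, model, protected_attribute, alert_threshold=0.8):
        self.model = model                      # assumed: exposes .predict(features)
        self.protected_attribute = protected_attribute
        self.alert_threshold = alert_threshold  # four-fifths-style trigger
        self.outcomes = defaultdict(list)       # group value -> list of 0/1 decisions

    def decide(self, features):
        decision, factor_weights = self.model.predict(features)
        group = features[self.protected_attribute]
        self.outcomes[group].append(int(decision))

        # Surface the factors that tilted this decision, as the article describes.
        print(f"decision={decision} for group={group}, drivers={factor_weights}")

        # Compare favourable-outcome rates across the groups seen so far.
        rates = {g: sum(o) / len(o) for g, o in self.outcomes.items()}
        if len(rates) > 1 and max(rates.values()) > 0:
            ratio = min(rates.values()) / max(rates.values())
            if ratio < self.alert_threshold:
                print(f"ALERT: outcome-rate ratio {ratio:.2f} below threshold")
        return decision

class ToyModel:
    """Hypothetical stand-in for a deployed decision model."""

    def predict(self, features):
        decision = features["income"] > 50_000
        # Invented factor weights, standing in for a real explanation method.
        return decision, {"income": 0.9, "age": 0.1}

monitor = FairnessMonitor(ToyModel(), protected_attribute="gender")
monitor.decide({"income": 60_000, "age": 30, "gender": "m"})
monitor.decide({"income": 40_000, "age": 45, "gender": "f"})
```

Run as-is, the second decision triggers the alert, since the two groups’ approval rates immediately diverge; a real monitor would of course need far more data before flagging anything.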
But here is the catch: is it sensible to identify errors in potentially faulty algorithms with another algorithm? Who is to say there is no fault or bias in the detection software, and what compounded issues could that cause?