IBM exiting facial recognition as police can’t use it responsibly

Jamie Davies

June 9, 2020

In a letter to Congress, IBM CEO Arvind Krishna has said the firm would halt the development and sale of facial recognition software, as it is not being deployed in a fair and reasonable manner.

As with the development of any technology, there is great potential for good but also for serious harm. The Law of Unintended Consequences lurks around every corner in the digital economy, and it seems Krishna is not confident enough in the moral code of government authorities to allow them to deploy and operate such technologies.

“IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” Krishna said in the letter.

“We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”

The letter, addressed to Senators Cory Booker and Kamala Harris, as well as Representatives Karen Bass, Hakeem Jeffries, and Jerrold Nadler, confirms IBM will no longer sell general purpose IBM facial recognition or analysis software. There is seemingly too much bias risk, too many untrustworthy individuals, and not enough accountability or transparency.

Perhaps the most interesting element of this story is the Law of Unintended Consequences, a definition of which is below:

A term used to describe a set of results that was not intended as an outcome. Though unintended consequences may be anticipated or unanticipated, they are the product of specific actions within the process. The concept of unintended consequences is part of the Six Sigma philosophy and encompasses three types of results: positive effects, potential problems that may result in a reduction of quality, and negative effects.

While academics and economists are wary of this dynamic in an evolving society, popular opinion and politicians rarely have the foresight to consider it in the real world.

This is the reason so many political or government initiatives fail; most are incredibly short-termist, with too little attention or thought afforded to anything beyond the initial splash in the societal pond. Ripples can gather momentum into waves that are far more consequential. This is seemingly what has happened here.

The unintended consequence in this example seems to be the use of facial recognition for mass surveillance, which has an iffy relationship with privacy rights, and the bias the technology has demonstrated in practice. This has been particularly consequential for African American and Asian citizens in the US.

At the end of 2019, the National Institute of Standards and Technology (NIST), an agency of the US Department of Commerce, found that the performance of commercially available facial recognition software varied wildly. Some applications being sold could misidentify members of certain demographic groups (by age, race, and ethnicity) up to 100 times more frequently than others. African American and Asian citizens were the most likely to be misidentified.
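To put that differential in concrete terms, below is a minimal sketch of how a false match rate (FMR) comparison between two demographic groups might be computed. The group names and figures are entirely hypothetical, invented for illustration; they are not NIST data.

```python
# Hypothetical illustration of a demographic false match rate (FMR) comparison.
# All numbers below are invented for illustration; they are not NIST results.

trials = {
    # group: (false matches observed, impostor comparisons attempted)
    "group_a": (3, 1_000_000),    # baseline demographic
    "group_b": (300, 1_000_000),  # worst-affected demographic
}

fmr = {group: fp / n for group, (fp, n) in trials.items()}

for group, rate in fmr.items():
    print(f"{group}: FMR = {rate:.6f} ({rate * 1_000_000:.0f} per million comparisons)")

# A 100x differential means the same software, at the same match threshold,
# wrongly matches faces from one group 100 times more often than another.
print(f"disparity ratio: {fmr['group_b'] / fmr['group_a']:.0f}x")
```

Even when the absolute rates look small, a ratio of that size means the burden of misidentification falls overwhelmingly on one group.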

No technology is ever perfect, but this is clearly an area of the industry which has not been given anywhere near enough attention. The algorithms demonstrate bias when applied in the real world and are not ready for deployment, yet there are already hundreds of commercial use cases.

Amazon has created a retail product built around automatically charging customers, the London Metropolitan Police Service has deployed the technology to tackle knife and violent crime at several locations around the city, and Facebook uses facial recognition to identify individuals in photos.

Some of these implementations could be considered harmless; others carry very serious risks, such as false arrest or imprisonment. However, the technology should not be deployed in any form until the flaws have been addressed; mission creep is a genuine threat, and governments have not proven trustworthy enough.

What is worth noting is that IBM is not alone in taking a stance against the aggressive implementation of facial recognition technologies.

The City of San Francisco has effectively legislated against the deployment of facial recognition, with only very specific cases falling outside the blanket ban. Microsoft has been calling for reform of privacy laws to take into account new technologies such as facial recognition. The Electronic Frontier Foundation has also been campaigning against the rushed application of facial recognition, with the privacy campaigners arguing that greater requirements for accountability and transparency are needed.

While not new, IBM’s stance is an important one. With its Watson AI engine having carved out a leadership position for IBM in the AI industry, this is a significant gesture: IBM could sell facial recognition technologies, but it is putting its responsibility to society ahead of its commercial ambitions.

A dialogue needs to be opened on the dangers of facial recognition technologies, as the consequences have not been properly considered. Realistically, the technology has been rushed to commercial deployment and should be clawed back. In every other segment of the technology world we talk about the five nines of reliability, but that benchmark has either been forgotten or ignored here, not to mention the principles of accountability and transparency, which are also absent.
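As a rough, back-of-the-envelope sketch of what that five-nines benchmark would mean at street level; the daily scan volume below is an assumption chosen purely for illustration:

```python
# What 99.999% accuracy ("five nines") would imply at street-level scale.
# The daily scan volume is a hypothetical assumption, for illustration only.

accuracy = 0.99999
error_rate = 1 - accuracy     # one error in every 100,000 identifications

daily_scans = 1_000_000       # hypothetical: faces scanned per day across a large city
expected_errors = daily_scans * error_rate

print(f"error rate: 1 in {1 / error_rate:,.0f}")
print(f"expected misidentifications per day: {expected_errors:.0f}")
# Even at five nines, a million scans a day yields roughly 10 misidentifications,
# any one of which could mean a wrongful stop or arrest.
```

And five nines is the standard the industry holds itself to elsewhere; the NIST findings suggest current facial recognition systems fall far short of it for some demographics.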

This is a case of too much, too fast.

Telecoms.com Poll:

Should companies/authorities be allowed to deploy facial recognition technologies?

  • No, it should never be allowed - it is an invasion of privacy (48%, 15 Votes)

  • No, it has been rushed to market and needs to be 99.999% accurate before deployment (23%, 7 Votes)

  • Yes, the consequences are being exaggerated (19%, 6 Votes)

  • Yes, the issues will be fixed during live deployments (10%, 3 Votes)

Total Voters: 31
