Google makes itself responsible for the ethical use of AI

Google has reacted to negative press and employee outrage over the use of its artificial intelligence technology by setting out a charter dictating its code of conduct, holding itself accountable for how its technology is used.

Jamie Davies

June 8, 2018


The update from CEO Sundar Pichai follows an internal revolt over Google’s involvement with Project Maven, a Pentagon initiative to study the use of AI in drone strikes, with more than 4,500 employees signing a letter demanding the termination of the contract. Google has since confirmed it will not renew the contract, which expires in 2019, though such was the backlash that Pichai has evidently found it pertinent to clarify Google’s position going forward.

“We recognize that such powerful technology raises equally powerful questions about its use,” said Pichai. “How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right. So today, we’re announcing seven principles to guide our work going forward. These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.”

The seven principles are as follows:

  • There has to be a clear benefit to society which ‘substantially’ outweighs the foreseeable risks and downsides

  • Avoid creating or reinforcing unfair bias

  • Built and tested for safety with renewed focus on avoiding unintended results that create risk of harm

  • Ensure any AI application is and will continue to be subject to human direction and control

  • Incorporate privacy design principles

  • Uphold high standards of scientific excellence

  • Only be made available for use cases which also uphold these same principles

The last point is important, as while it is all well and good to outline internal processes and accountability, the nature of the world is that technology can be used for purposes beyond its initial intent. By writing the final point into the charter, Google is committing itself to being held accountable for how its technology is used. Most companies would absolve themselves of such responsibility, but Google is asking itself to assess the primary purpose and likely use of each application, its uniqueness, its scale and the nature of Google’s own involvement. Should this prove to be more than window dressing, it is a refreshing change to see a company taking corporate responsibility for the repercussions of its work.

“We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas,” said Pichai. “These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue. These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.”

This is an important point to note. The letter from outraged Google employees last month demanded Pichai swear off military relationships completely, though the CEO has not gone that far. Google will not help develop offensive weapons intended to cause harm, but it will aid the military in other ways. This provides suitable wiggle room for the management team, while also allowing the business to maintain the hipster persona of the Google brand.

The importance of this message cannot be overstated. Not only will it impact employee retention (dozens have reportedly already walked out the door), but also the future prospects of the business. Google is not a product company; it is a software and services business, and therefore critically reliant on the people it can attract. Bright, young engineers entering the workforce for the first time have romantic ideas about contributing to the greater good while also making serious cash. Google currently offers the prospect of both, though initiatives like Project Maven undermine the ‘Don’t be Evil’ tagline of the business, and its appeal to the stereotypical software engineer.

Cynics will suggest this is merely a cover-up and a PR exercise, as Google will still be helping to develop military applications, albeit defensive ones, though we are inclined to believe this is a genuine attempt to manage an ethical rollout of AI. Google’s success is built on the people it attracts, and it attracts the right people through its outstanding brand and reputation. Protecting this image is paramount.
