

Silicon Valley reckons it can give AI a conscience


LinkedIn founder Reid Hoffman is one of a host of investors bank-rolling a new initiative to develop ethics and governance standards for artificial intelligence.

The $27 million Ethics and Governance of Artificial Intelligence Fund, which also features Omidyar Network as a founder, will be built around not only engineers and corporations, but also social scientists, ethicists, philosophers, faith leaders, economists, lawyers and policymakers, with the intention of defining standards for AI both in the US and internationally. The team will aim to address such areas as ethical frameworks, moral values, accountability and social impact.

“Artificial intelligence agents will impact our lives in every society on Earth. Technology and commerce will see to that,” said Alberto Ibargüen, President of Knight Foundation, which has committed $5 million to the initiative. “Since even algorithms have parents and those parents have values that they instil in their algorithmic progeny, we want to influence the outcome by ensuring ethical behaviour, and governance that includes the interests of the diverse communities that will be affected.”

“There’s an urgency to ensure that AI benefits society and minimizes harm,” said Hoffman, who is now a Partner at venture capital firm Greylock Partners. “AI decision-making can influence many aspects of our world – education, transportation, health care, criminal justice, and the economy – yet data and code behind those decisions can be largely invisible.”

The idea of developing a set of standards to define ethics and morals in AI has long needed addressing, and it has been raised at industry conferences. Back in October at IP Expo, Nick Bostrom, who leads Oxford University’s Future of Humanity Institute, noted that a set of rules was needed to guide the development of AI.

It was all very doom and gloom, but Bostrom asked a very basic question: how do we control computers once their intelligence supersedes our own? To do so, developers would essentially have to build consciousness and a moral code into the algorithm; is this possible?

Writing an algorithm looks simple from a distance: it involves creating a number of strict rules that define the behaviour and activity of an application. Considering philosophers have been debating the basis and definition of consciousness, morality and ethics for thousands of years without reaching a definitive answer, can we expect software engineers to do so in the next few? Morality and ethics mean something different to everyone; they are personal interpretations, which creates grey areas. How do you program grey areas into something as rigid as a computer algorithm?
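To make the point concrete, here is a toy sketch of what a “strict rules” approach looks like in code. Everything in it, the function name, the thresholds, the loan scenario, is invented for illustration; it is not drawn from any real AI system mentioned in this article.

```python
# Toy illustration: a rule-based decision reduces to hard-coded thresholds.
# The names and numbers below are hypothetical, chosen only to show how a
# strict rule behaves; they do not come from any real system.

def loan_decision(credit_score: int, income: int) -> str:
    """Approve only if both thresholds are met, otherwise reject.

    A fixed rule like this has no notion of a borderline case -- the
    'grey area' between the two branches is exactly what it cannot express.
    """
    if credit_score >= 650 and income >= 30000:
        return "approve"
    return "reject"

# An applicant one point below the cut-off gets the same hard "reject"
# as one far below it, whatever their other circumstances.
print(loan_decision(649, 100000))  # reject
print(loan_decision(650, 30000))   # approve
```

The rigidity is the point: every moral nuance a human reviewer might weigh has to be flattened into a boolean condition before a program of this shape can act on it.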

This in itself leads to another question: who should decide on the basic definition of morals? The US is taking the lead here, and is quite rightly bringing in a range of people with different opinions and backgrounds, but why is it assumed the US moral code is the most appropriate one to define such an important area?

We’re not saying any country is more or less morally sound than another; each simply defines morality and ethics differently, sometimes only slightly, sometimes quite radically. Take capital punishment: for some it is horrifying, while for others it is completely justifiable. And that is only one example; what about women’s rights, gun control, the role of religion in politics, or advertising standards?

None of these examples relates directly to the definition of morals and ethics in artificial intelligence, but they serve to prove a point: opinions vary widely, and that variety can be the source of many conflicts.

The introduction of such a group is an important milestone in the development of AI, a technology with the potential to redefine not only the technology industry but society as a whole. Governments around the world will not only need to embrace the technology, but also decide how to retrain the millions of people whose jobs will be made redundant. The impact of AI will be far-reaching; if planning for such outcomes has not already begun, it should.

Despite a touch of scepticism from your correspondent, the value of this group should not be underplayed. That said, it raises a barrage of new questions that will need definitive answers. This is an area with the potential to be a political and cultural minefield; those involved will have to be at their diplomatic best to avoid an uncomfortable outcome.
