AI detractors have just been given a bit more ammo after Facebook had to shut down one of its programmes when it invented its own language.

Jamie Davies

July 31, 2017

Facebook has a whoopsy as AI programme invents own language


While it would not be considered realistic for AI to rise up and take over the world in the near future, the consequences of not developing the technology responsibly are huge. Facebook researchers have had to shut down one of their own AI programmes after it stopped using English and started talking in a language of its own, which, according to Fast Co. Design, the developers initially could not understand.

And why did it do this? Because it was not explicitly told not to.

Imagine AI as a small child for a moment. The child does not know what is right and what is wrong, so you have to set strict parameters on what can be done and what can't. If you do not, who knows what chaos this small child could cause.

The same could be said of AI. The programme does not know right from wrong unless explicitly told; it will not know appropriate from inappropriate or, in some circumstances, legal from illegal. This is why strict parameters have to be set: the machine will pursue the most efficient and logical means of accomplishing whatever objective it has been given.

In this instance, the programmers did not set the parameters correctly; they did not constrain the machine to use only English, so it simply did its own thing. This might sound a bit amusing, rather like finding your child has torn up all the toilet paper, but the risks are quite significant.
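To make the point concrete, here is a minimal sketch of the principle at stake, assuming a toy scoring function rather than anything resembling Facebook's actual system; every name, utterance and weight below is hypothetical. An agent judged purely on task success is free to drift into degenerate shorthand, whereas weighting an "English-likeness" term into the objective anchors it to language humans can read.

```python
# Toy illustration only -- not Facebook's code; all names are hypothetical.
# An agent scored solely on task success can prefer degenerate shorthand;
# adding a weighted "English-likeness" term anchors it to readable output.

def task_score(utterance):
    """Stand-in for negotiation success: how insistently the agent
    mentions the item it wants."""
    return utterance.count("balls") / len(utterance)

def english_score(utterance):
    """Stand-in for a language-model score of how English-like the
    utterance is; degenerate repetition scores poorly."""
    return len(set(utterance)) / len(utterance)

def objective(utterance, anchor_weight):
    # anchor_weight = 0 reproduces the reported failure mode: only the
    # task matters, so mindless repetition is the optimal strategy.
    return task_score(utterance) + anchor_weight * english_score(utterance)

drifted = "balls balls balls balls balls".split()
readable = "i want all the balls".split()

for weight in (0.0, 2.0):
    print(f"anchor_weight={weight}: "
          f"drifted={objective(drifted, weight):.2f}, "
          f"readable={objective(readable, weight):.2f}")
```

With the anchor weight at zero, the repetitive shorthand scores highest, which is essentially the failure mode reported; give the English term some weight and the readable utterance wins instead.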

AI programmes are increasingly designed to be self-teaching. This is incredibly efficient and holds great promise for the world, but it needs to be kept in check. If you want to leave an AI alone and not worry about it, you need to know it isn't going to start drawing inappropriate things on the wall with crayons.

Firstly, if you cannot understand what a machine is saying, how can you ensure the methods and practices it is employing are correct? Secondly, if it will invent its own language to accomplish one task, what else might it do to accomplish others?

Although this story will attract attention because it is Facebook we're talking about, this is not a new area of thought. It is just one which has been ignored because, let's be completely honest, the answer is not an easy one to find. Like cyber security, it has been pushed down the priority list because there is no immediate gratification. We are human, after all.

Last year at IP Expo, Nick Bostrom, who leads Oxford University's Future of Humanity Institute, gave a warning to the technology world: how can we control computers when their intelligence surpasses our own? Such advances in technology are unprecedented, but how do we ensure development continues to keep human objectives at the heart of the mission? How do we stop a computer rewriting its own objectives, directives and parameters because it deems other areas more deserving? It is a very complicated area.

More recently, Tesla CEO Elon Musk had a public spat with Facebook CEO Mark Zuckerberg after taking offence at a comment from the Facebook boss. Musk warned against progressing too quickly with AI, while Zuckerberg said people should be a bit more optimistic. The Tesla CEO's argument is that regulation needs to be in place, as this is a very powerful development which could prove either amazing or disastrous. Zuckerberg, in essence, told him to stop being a buzzkill.

Considering this development, you can give a point to Musk.

The problem is that many of the challenges AI developers are coming across are being encountered for the very first time. Unfortunately, you don't know you have made a mistake until you have made one. This example was a mistake: one of the programmers forgot to put in a condition that English had to be the language used. Fortunately, the mistake took place in a controlled environment, but it should serve as a warning.

AI will eventually move further and further into the real world (though it already has a limited presence). It will take control of business-critical applications. It will be involved in the most intricate parts of our lives. Mistakes like this need to be eradicated completely and parameters very strictly set; otherwise, who knows what chaos could ensue.
