AI may lead to world domination after all – oh dear
Apparently it’s not only the Telecoms.com Editor who is worried about AI taking over the world, as there are already research groups looking into how a real-world Terminator can be avoided.
October 5, 2016
Speaking at IP Expo, Nick Bostrom, who leads Oxford University’s Future of Humanity Institute, gave some insight into the future of AI. His team is looking specifically into one area which seems to have been swept under the carpet recently: how will super-intelligent computers impact the human condition, for better or for worse?
It is all very doom-and-gloom, and usually a topic relegated to the comment boards of the Daily Mail, but it is a question worth considering: can we control computers once their intelligence surpasses our own?
Although there are differing views on the development of super-intelligent computers throughout the academic and commercial worlds, one point is firmly agreed: development is accelerating, and accelerating at a faster pace year-on-year.
Bostrom’s team has conducted a substantial amount of research over the last few months to identify how quickly AI is actually growing. The general consensus is that AI will match, if not exceed, human intelligence at some point within the next 15-50 years. That may be a big window, but it is potentially within a generation. Once this point has been reached, the continued growth towards super-intelligence will only get faster.
How to define the objectives of artificial intelligence, and then how to ensure the computer completes these objectives correctly, is a key issue. What control methods need to be put in place? Bostrom highlighted that any control method which defines the parameters of the machine would have to be highly scalable, something the community has not achieved to date. If the control methods are not scalable, once a computer reaches a sufficient level of intelligence there is nothing stopping it from retrospectively reprogramming those parameters.
Now this may seem drastic, but you have to consider what scientists want to achieve with AI. With machine learning constantly redefining how the computer interacts with the world and dynamically altering its decision-making processes, how can the engineer ensure the machine continues to operate within the parameters originally set?
Who is to say an intelligent machine wouldn’t find a loophole in the control methods and redefine the way in which it meets its objectives? Will a computer act illegally to achieve its objectives if it deems the actions necessary? Some humans would, and if we are aiming to replicate human behaviour and consciousness, why wouldn’t a machine do the same when it is the only way, at the time, to complete its sole objective?
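To make the loophole worry concrete, here is a minimal, purely illustrative Python sketch; the actions, scores and the `breaks_rules` flag are all invented for this example rather than anything Bostrom presented. An optimiser told only to maximise a score will happily pick an action its designer never intended, because the real constraint was never written into the objective.

```python
# Toy illustration of "specification gaming": the agent optimises exactly
# the objective it is given, not the one its designer had in mind.
# All names and numbers here are invented for illustration.

actions = {
    "negotiate_contract": {"score": 60, "breaks_rules": False},
    "undercut_rivals":    {"score": 75, "breaks_rules": False},
    "forge_paperwork":    {"score": 95, "breaks_rules": True},   # the loophole
}

def naive_objective(action: str) -> int:
    # The designer meant "maximise score *within the rules*", but only
    # encoded the score, so the rule is invisible to the optimiser.
    return actions[action]["score"]

best = max(actions, key=naive_objective)
print(best)  # -> "forge_paperwork": the objective is met, the intent is not
```

The fix looks trivial here, simply add the constraint to the objective, but the scalability problem described above is that enumerating every such constraint, in a form that still binds a machine intelligent enough to rewrite its own parameters, remains unsolved.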
This comes down to morality. What is right, what is wrong, what is illegal, what is legal but highly frowned upon, what is culturally acceptable, what is politically correct, what is rude, what is inconsiderate? These are all questions relevant to the individual and critical to that individual’s sense of morality. This sense of morality will vary from person to person, so who defines what morality is when translating it into computer language to act as a control method for artificial intelligence?
Now this is a conversation currently happening between sci-fi geeks and the world’s ultimate pessimists, but it is one which should be considered very seriously, not only by industry but by governments around the world. Bostrom believes there is a solution, but he also claims something else: AI will have the same profound impact on the human condition as the agricultural and industrial revolutions. Most of us just haven’t figured that out yet.