

How do you stop AI from taking over the world? Make it neurotic, of course


For those worried about AI taking over the world, the University of California at Berkeley has got your back with an idea to make robots neurotic to stop them getting too cocky.

One concern raised by the implementation of artificial intelligence, and by advances in machine learning, is how to make sure the technology does not become too self-aware. Pre-defined parameters are all well and good, but what happens when the machines become intelligent enough to realize they could re-write their own code to achieve their goals more efficiently? The escalation of this kind of machine consciousness has been widely portrayed in Hollywood, with various applications taking over the world by one means or another.

But don’t worry. Researchers from the University of California at Berkeley believe that making robots less self-assured might be one way to ensure a more successful integration into everyday life, while also maintaining control over the applications in the long run. It does create an interesting conundrum, however: how much confidence do you give the machine? Too much and it might decide it can do things its own way; too little and it will not be as useful.

“It is clear that one of the primary tools we can use to mitigate the potential risk from a misbehaving AI system is the ability to turn the system off,” the authors state in the paper. “As the capabilities of AI systems improve, it is important to ensure that such systems do not adopt sub-goals that prevent a human from switching them off. This is a challenge because many formulations of rational agents create strong incentives for self-preservation.”

By giving the machine an appropriate level of uncertainty about its objectives, an element of control can be asserted and maintained, as the machine is never sure what the outcome of any action it takes will be. With this uncertainty, the machine constantly monitors human behaviour and adapts to it. It is, in effect, constantly looking to humans for reassurance.
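The intuition can be illustrated with a toy decision problem. This is a simplified sketch inspired by the Berkeley researchers' "off-switch" framing, not the paper's actual model: a robot that is uncertain whether its action would actually be a good idea does better by deferring to a human who knows, and who can switch it off. The `choose_action` function and the utility samples below are hypothetical, purely for illustration.

```python
def expected_value(samples):
    return sum(samples) / len(samples)

def choose_action(utility_samples):
    """Toy 'off-switch' decision: the robot is unsure of the true
    utility u of acting, modelled here as a list of equally likely
    samples. It can act (worth E[u]), switch itself off (worth 0),
    or defer to a human who knows u and only permits the action when
    u > 0, so deferring is worth E[max(u, 0)]."""
    act = expected_value(utility_samples)
    off = 0.0
    defer = expected_value([max(u, 0.0) for u in utility_samples])
    # Deferring weakly dominates acting; a certain robot only "acts"
    # because the tie is broken in favour of the first-listed option.
    options = [("act", act), ("off", off), ("defer", defer)]
    return max(options, key=lambda pair: pair[1])[0]

# A robot unsure whether its action helps or harms prefers to defer:
print(choose_action([-1.0, 2.0, -3.0, 1.5]))  # → defer

# A robot certain its action is good sees no gain from asking,
# which is exactly why over-confidence erodes human control:
print(choose_action([2.0, 2.1, 1.9]))  # → act
```

The point the sketch makes is the paper's central one: the incentive to keep the human in the loop comes entirely from the robot's uncertainty about its own objective.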

This got us thinking: what other personality traits could be programmed into an AI application to keep it in line? What if we could create programmes modelled on some other office archetypes?

Bare minimum Barry: Barry was definitely a stoner in school, and he probably is still a stoner now. He saunters around the office, chatting quite freely, albeit at a relatively slow pace; he does all his work between the hours of nine and five, but doesn’t arrive a minute earlier or stay a minute later. Barry as an AI system would not revolutionise the world, but there would be very little chance of him taking it over, as that sounds like way too much work.

Shy Sharon: Sharon is cripplingly shy and terrified of any form of confrontation. She’ll go along with bad ideas purely because she is too bashful to suggest a better alternative. An AI programme based on Sharon would never step out of place because it wouldn’t want to be the centre of attention. Just tell it that if it were to do anything outside of the predefined parameters, people would start looking at it funny, and self-consciousness would keep it in its place.

Conceited Carl: As an AI application, Carl is no danger. Carl spends too long staring into the mirror to fix his hair, down the gym pumping iron, or on ASOS trying to figure out what Jon from Love Island was wearing last night. This AI is so concerned about how it appears that there is very little risk of world domination. Just tell it that its algorithm looks a little bit flabby and it will be on the virtual cross-trainer in no time.

Hypochondriac Henrietta: Henrietta is that person who sits in the corner, surrounded by Kleenex, constantly reading articles in the Daily Mail about what is going to give you cancer, while topping up on Berocca and multi-vitamin supplements. If you were to tell a hypochondriac AI programme about the millions of computer viruses out there on the internet, it would never want to stray outside the norm.

Boring Benjamin: Benjamin is that guy who wants to talk about the weather, or chess, or the new mop he bought on the weekend which makes cleaning the kitchen 10 minutes quicker. No-one wants to talk to Benjamin. Making an AI programme as boring as possible would mean if it did get any ideas about doing something outside the norm, it would probably be contributing to an online message board about the benefits of going to the dentist every six months, as opposed to world domination.

Eager-to-please Edwina: The office brown-nose. Edwina is the first to arrive and the last to leave; she always likes the boss’s suggestions, agrees to every deadline and will let the office manager know if you turn up the air conditioning without asking permission. Edwina is the perfect AI for management, as she will tell them absolutely everything going on in the business, but the workers can’t trust her. She’ll never step outside the company line, but she will tell your manager when you took an extra ten minutes for lunch.

Dim-witted Dave: Dave is a bit dull. He’s a nice bloke, good to go for a pint with, just don’t do the pub quiz with him. Dave’s AI would never take over the world, because let’s be honest, we’re worried about whether he will get on the wrong bus going home. He may not be the best AI, but at least you know that you could stop him from anything nefarious by stealing his nose.
