Google exec ditches Moonshot labs for hippy life


Mo Gawdat, former Chief Business Officer for Google [X], has left the life of cutting-edge technology to spread the message of peace, love and happiness.

It is quite a turnaround: leaving one of the world’s most ruthless profit-generating machines to start #onebillionhappy, an organization with the simple-sounding objective of making one billion people happier in their day-to-day lives. While this might sound optimistic and generic, there is an underlying philosophy which ties quite neatly back into his background at Google, IBM and Microsoft.

“Artificial intelligence is real, it’s here,” Gawdat said in a LinkedIn video promoting his new mission (which you can see at the bottom of this article). “Those machines are developing partial intelligence that way surpasses our human intelligence. They see better, they hear better and sometimes they even reason better. Over the next fifteen to twenty years this is going to develop a computer that is much smarter than all of us. We call that moment singularity.

“Singularity is a moment beyond which we can no longer see, we can no longer forecast. The development of the world beyond the moment where machines are smarter than we are is highly unpredictable. Everything is possible. Those machines can solve every problem we’ve ever faced. Or they can actually decide we are the problem and get rid of us.”

This is the sci-fi concept which Gawdat is using as the basis of his new organization. If machines do rise up and take over the world, the blame can only lie with ourselves; human nature could very well be the foundation of its own downfall.

It sounds like a conspiracy theory, usually relegated to the comment boards of sci-fi websites, but there is some logic to it, so stick with us for a second. By 2029 artificially intelligent machines are predicted to surpass human intelligence, and by 2049 AI is predicted to be a billion times more intelligent than us. These machines are being designed with machine learning technologies at the core. The vision is to create intelligence which can demonstrate self-learning, self-governance, self-repair and self-reliance. But this is where the problem lies.

The basic concept of machine learning is that data is absorbed, and characteristics, capabilities and protocols are adjusted in light of this new information.

“How are those machines learning? They’re looking at the knowledge which is out there in the world and they’re building patterns from that,” said Gawdat. “Just like an 18 month old infant.

“We basically write algorithms which allow computers to understand those patterns. Through pattern recognition and billions of observations, they learn. They’re learning by observing. And what are they observing? They’re observing a world that is full of greed, disregard for other species, violence, ego, showing off.”
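The idea Gawdat describes, learning patterns purely from observation, can be sketched with a toy example. The code below is a hypothetical illustration, not any real system: a naive word-frequency classifier trained on a handful of made-up examples. The point is that the model has no values of its own; whatever skew exists in the data it observes is reproduced in its predictions.

```python
# Toy sketch: a model that "learns" only by counting what it observes.
from collections import Counter

def train(examples):
    """Count how often each word appears under each label."""
    counts = {"friendly": Counter(), "hostile": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def predict(counts, text):
    """Label new text by which class's vocabulary it resembles more."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in counts.items()}
    return max(scores, key=scores.get)

# Hypothetical observed data, skewed towards hostility.
observed = [
    ("you are awful and stupid", "hostile"),
    ("everyone here is stupid", "hostile"),
    ("I hate this stupid thing", "hostile"),
    ("what a lovely day", "friendly"),
]

model = train(observed)
print(predict(model, "this thing is stupid"))  # mirrors the skewed data: "hostile"
```

Swap the training set for a kinder one and the same code produces kinder predictions; the algorithm is neutral, the observations are not.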

A computer learning to hate because all it sees is hate might sound far-fetched, the stuff of a dystopian novel, but we have already seen how our terrible behaviour can negatively influence artificial intelligence. Who remembers Tay?

Back in 2016 Microsoft unveiled Tay, a Twitter bot which was supposed to learn from interactions, comments and trends on the social media platform. Thanks to online trolls and general social media misbehaviour, Tay picked up some awful habits, and that is a bit of an understatement. Within 24 hours Tay went from friendly and inoffensive to comparing feminism to a cult and worshipping Adolf Hitler.

At the time it was an experiment which went horribly wrong and was briefly entertaining, but it does show the danger of the way in which we act online. It is entirely possible for this influence to have a very negative impact on the digital economy of tomorrow.

Right now the danger is not as apparent. Artificial intelligence is still in its very early days, and what is being fed into the machines is structured data. The vast majority of the information available on the web is unstructured, so we are able to control the flow of this information. But AI is advancing incredibly quickly, and it won’t be long before these machines are intelligent enough to interpret any form of data, videos or pictures for example.

This is where it becomes a bit more complicated: how do you programme an understanding of sentiment? Or of context? Those are the simple ones; how about sarcasm? How do you implant our emotional intelligence, which we don’t really understand ourselves, into a machine so it grasps the various nuances of human interaction? This is complicated.

Of course, if you let the machines run wild, this will create all sorts of problems. The trick will be to programme policies into the machines from the beginning, but as Ericsson’s Ulrika Jägare pointed out to us at Mobile World Congress, finding the right balance between maintaining integrity and allowing the machine the flexibility to be creative is a tricky one. To prevent world domination, strict parameters will have to be set, but will those parameters prevent the machines from using this extraordinary intelligence to its full potential and helping to advance the human race? Catch-22.

Maybe Gawdat is right; we just need to be nicer:

“How do we contain them? We don’t contain them at all,” said Gawdat. “The best way to raise wonderful children is to be wonderful parents. It’s not the inventor of the technology which is going to set the tone moving forward, it’s the technology itself that’s going to use the knowledge, the values that we communicate to it, to develop its own intelligence.”

