Two reasons why autonomous vehicles won’t work for a while

For some, the success of the self-driving car could be the point where we’ve made it. But perhaps that is much further away than we thought, and much further than the stakeholders would have you believe.

Jamie Davies

September 12, 2017

While at CloudSec 2017, we had the opportunity to sit down with Rik Ferguson, VP of Security Research at Trend Micro, who gave us a guided tour of the deep and dark corners of the internet of things. And while the technology is progressing well, little attention has been paid to the questions we don’t like the answers to; this could begin to cause problems before too long.

“Physics intervenes with romance,” said Ferguson. “It’s an important fact which people tend to forget.”

Now bear with us for a second; Ferguson’s point does make sense when you look at the entire picture.

The first reason we are not ready for autonomous vehicles is that we are unable to give away control. We are happy for computers to make decisions in some circumstances, the best route to the pub for instance, but not when it comes to anything that would be deemed a moral choice. And how would we react when a computer intentionally causes harm to a human?

A number of people will reference Isaac Asimov’s Three Laws of Robotics, which can be used as a basic foundation to dictate the behaviour of robots, meaning they would be unable to cause harm to a human. This is a very romantic idea, but don’t forget, physics gets in the way of romance.

Imagine a person jumps in front of an autonomous car travelling at 30 miles per hour on a busy street. In some circumstances, the only two options could be to (1) hit the person, or (2) swerve and crash the car, injuring the driver. Regardless of how quickly the AI component of the car reacts, there will be a braking distance. In some circumstances there is no alternative other than to injure, potentially fatally, one of the individuals involved. So which one?
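To put some rough numbers on Ferguson’s physics point, here is a minimal back-of-the-envelope sketch in Python. The friction coefficient and reaction time are illustrative assumptions of ours, not figures from Ferguson.

MPH_TO_MS = 0.44704   # miles per hour to metres per second
G = 9.81              # gravitational acceleration, m/s^2

def stopping_distance(speed_mph: float, friction: float = 0.7,
                      reaction_time_s: float = 0.1) -> float:
    """Reaction distance plus braking distance, in metres.

    Assumes dry-road friction of roughly 0.7 and a 0.1 second reaction
    time for the AI; both values are illustrative, not measured figures.
    """
    v = speed_mph * MPH_TO_MS
    reaction = v * reaction_time_s           # ground covered before braking starts
    braking = v ** 2 / (2 * friction * G)    # v^2 / 2a, with deceleration a = friction * g
    return reaction + braking

print(f"{stopping_distance(30):.1f} m")      # roughly 14 m at 30 mph

Even with a near-instant reaction, the car still needs well over a dozen metres of clear road to stop from 30 mph; no amount of clever software shortens that.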

This is a scenario which will have to be addressed, but it also leads to some more complicated questions.

Who is ready to allow the machine to make that decision? Who is ready to hand over control of such a moral conundrum to a computer? How do you programme such a decision into an algorithm? What data could the AI access to process such a decision? And if there is a fatality, should someone go to prison? The driver, who is not responsible for the direction of the vehicle? The person who accidentally stepped in front of the car, causing the crash? Or the programmer who wrote the algorithm?
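Nobody has a convincing body for that function yet. A purely hypothetical sketch (the names and options here are ours, for illustration only) shows how quickly the moral question surfaces in code:

from enum import Enum

class Action(Enum):
    BRAKE_AND_HIT = "hit the pedestrian"
    SWERVE_AND_CRASH = "crash, injuring the driver"

def choose(pedestrian_risk: float, driver_risk: float) -> Action:
    """Decide which harm the vehicle should prefer.

    Any rule written here, whether it minimises expected casualties,
    always protects the driver, or always protects the pedestrian, is a
    moral policy a programmer had to encode and someone is accountable for.
    """
    raise NotImplementedError("no answer society has agreed on")

Whatever replaces that NotImplementedError is exactly the decision the questions above are asking about.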

These are questions we, at Telecoms.com, do not have the answer to. Maybe someone does, but until there are answers accepted by the majority, can we put autonomous vehicles on the road?

The second reason is also a bit dark.

“Can we make sure these cars cannot be weaponised?” said Ferguson. “This is why security is so important. We have a record of not getting it right the first time, and messing things up.”

Regardless of what people actually say, security is built onto solutions and very rarely built into them from the ground up. The drive is to be first to market and capitalize on the fortunes; small problems can be fixed later. Security used to be one of those areas which could be fixed later, but as Ferguson points out, the connected era means it has to be right from the beginning.

“We’re at a tipping point where we could create a toxic legacy of connected devices,” said Ferguson.

Products often make it onto the market with small flaws, but these are usually fixed before too long: in the physical world through product recalls, and in the digital world through patches and system updates. For Ferguson, this approach is not good enough when it comes to autonomous vehicles, because if there is an opportunity to cause havoc, some nefarious individual will take advantage.

If there is a flaw in the system, you now have a two-ton lump of metal, with up to 60 litres of combustible liquid inside, being controlled by a dodgy character somewhere in the world. This is now a weapon.

The approach which needs to be taken is to make security a bigger priority. It is an issue now, but companies still seem to be ignoring it. Ferguson’s idea links back to batteries.

“One of the things I would love to see is a certified safe stamp,” said Ferguson. “Just like batteries for instance. People wouldn’t need to know the processes behind the stamp, but you could guarantee connected devices with the stamp have the right security credentials.”

Regulation is the only way to enforce such a certification, but once it is in place, it is unlikely people would purchase a stamp-less product. Perhaps forcing security regulation on the firms is the only way forward; the evidence to date suggests they can’t be left on their own to sort out the problem.

So it certainly wasn’t a conversation filled with fluffy bears and gummy drops, but Ferguson has a very good point: we aren’t ready for autonomous vehicles.
