
Jamie Davies

March 19, 2018

5 Min Read
If an autonomous vehicle kills someone, who is liable?

While we have all been getting excited about the prospect of autonomous vehicles, a couple of issues seem to have been overlooked. Now the legal community is addressing a big one: liability.

The technical challenges of autonomous vehicles are massive. Latency needs to be almost non-existent, coverage needs to be almost ubiquitous, and the AI components need to be almost perfectly intelligent while also demonstrating almost human-like logic and context. These are very complex problems, but with an army of technologists focused acutely on them, it might not be too long before they are solved. The challenges which remain are far more nuanced.

The insurance industry will need to be completely reconfigured, passing control to a machine will have to be accepted as the norm, and regulations will have to be updated to remain relevant. These are only some of the open issues, but the one we are going to address today is liability, which is finally being considered.

Dr John Kingston of the University of Brighton has recently written a paper examining, under current legal precedent, who would be liable should an autonomous vehicle become involved in an incident, and it is a complicated issue. The paper focuses on the following question:

“It is the year 2023, and for the first time, a self-driving car navigating city streets strikes and kills a pedestrian. A lawsuit is sure to follow. But exactly which laws will apply?”

This is a very complicated situation, and to understand it properly you need an understanding of how the law works. In most judicial systems, though this example is most accurate for the US, criminal law requires an actus reus (an action) and a mens rea (a mental intent). Actus reus could consist of a direct action or a failure to act, whereas mens rea could mean knowledge of the action or negligence. In road incidents this is already a highly contestable debate, but when a computer system is thrown in, the argument becomes much more difficult.

Where a potential criminal activity is concerned, there are three ways in which the law could be applied to AI. The first is perpetrator-via-another. In current law, this covers an offence committed by a person or animal which cannot be held mentally responsible, i.e. does not possess the mental capacity to consider the crime, so the responsible party is the instructor. Applied here, this would most likely be framed as negligence on the part of the programmer.

The second is known as natural-probable-consequence. In this example, the autonomous vehicle is simply carrying out its task, but has not been designed to deal with the consequences, or does not understand the frailty of those caught up in them. Here, accomplices to the crime are usually held accountable, so the user or the programmer would be liable. Kingston notes that in this case it would most likely be the programmer once again.

This is where it becomes more complicated, as the legal process would have to distinguish between machines which know that illegal activity is underway and ones which do not. If a machine does demonstrate such knowledge, mens rea can be established, which leads us to the final interpretation: direct liability.

Direct liability could apply when an autonomous vehicle has broken the law at some point during the incident (e.g. exceeding the speed limit), or failed to act (e.g. did not turn away from the person who was hit). Attributing mens rea, the mental intent, is much more complicated. But people today are still convicted of strict-liability offences, such as speeding, where no mental intent is required, so it should still be possible to hold the AI criminally liable.

If the final interpretation of the law is the one applied to incidents involving autonomous vehicles, who is held responsible, given that it is impossible to punish an AI? There are so many different scenarios that it becomes very complicated. If connectivity was poor, should the telco be responsible? Or perhaps it is still a problem with the code, and therefore the programmer? Does the user accept liability? Or should the blame lie with the pedestrian who crossed the road at an ill-judged moment?

In some circumstances, the AI might have made a recommendation which caused the incident, but that same recommendation might have avoided a different incident. There is the possibility of a no-win scenario: either the car hits the pedestrian crossing the road, veers right into a wall, or veers left into an oncoming vehicle. Whatever the decision, there will be an incident and a potential fatality.
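To make that dilemma concrete, a planner stripped down to its crudest form might score each available manoeuvre and pick the least bad one. The sketch below is purely illustrative; the option names and harm weights are invented for this article and bear no relation to any real system, but it shows why every branch of the decision tree can still end in harm.

```python
# Purely illustrative: a toy cost model for an unavoidable-collision
# scenario. Option names and harm weights are invented for this example.
OPTIONS = {
    "brake_straight": {"pedestrian": 0.9, "occupants": 0.1},
    "veer_right_into_wall": {"pedestrian": 0.0, "occupants": 0.8},
    "veer_left_into_traffic": {"pedestrian": 0.0, "occupants": 0.7, "third_parties": 0.7},
}

def total_harm(outcome: dict) -> float:
    """Sum the expected-harm scores for one manoeuvre."""
    return sum(outcome.values())

def least_bad_option(options: dict) -> str:
    """Pick the manoeuvre with the lowest total expected harm.
    In a genuine no-win scenario, every option scores above zero."""
    return min(options, key=lambda name: total_harm(options[name]))

print(least_bad_option(OPTIONS))  # whichever wins, someone is still hurt
```

Whichever option such a function returns, the liability questions above remain: the harm was, in some sense, chosen by the code.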

The final scenario to consider is nefarious actors. If a virus is placed into the autonomous vehicle and corrupts its software, does this clear all those otherwise involved from liability?

There are other elements to this discussion, such as whether AI for autonomous vehicles should be sold as a product or a service. A service model encourages recurring revenues, but also brings more direct liability. Products, on the other hand, require a warranty and the acceptance of a different type of responsibility. You can read the whole paper here.

Other areas which affect liability include whether the machine has access to the most recent information, whether all scenarios could be considered in the time available, and the limitations of the autonomous vehicle itself. Those limitations might include information we judge to be general knowledge, such as slowing down as you pass a school at 3pm. This might seem simple, but our brains store millions of such examples and retrieve them effortlessly.
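As a rough illustration of what encoding even one piece of that general knowledge involves, here is a hypothetical hand-written rule for the school example. The zone flag, times and speed figures are all assumptions made up for this sketch; a real vehicle would need thousands of such rules, or a learned equivalent.

```python
from datetime import time

# Hypothetical "general knowledge" rule: slow down when passing a school
# around the time pupils leave. All values are invented for illustration.
SCHOOL_RUN_START = time(14, 30)
SCHOOL_RUN_END = time(15, 30)
SCHOOL_ZONE_CAP_KPH = 20

def advised_speed(now: time, near_school: bool, posted_limit_kph: int) -> int:
    """Return a speed cap, tightening it during the school run."""
    if near_school and SCHOOL_RUN_START <= now <= SCHOOL_RUN_END:
        return min(posted_limit_kph, SCHOOL_ZONE_CAP_KPH)
    return posted_limit_kph

print(advised_speed(time(15, 0), near_school=True, posted_limit_kph=50))  # -> 20
```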

Hazard perception comes down to experience, and while this is theoretically possible with machine learning techniques, where do you store such a vast amount of information efficiently and economically? Housing a mini data centre in the vehicle might not be possible, so you are relying on the cloud, and therefore on latency. The different scenarios are far too numerous to count, and we take for granted the brain's capacity to store this information. For us it is simple, but for a machine it is not.
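One way to picture that trade-off is a lookup which consults a compact on-board model first and only falls back to the cloud when the round trip fits inside the time available. Everything below, from the 100ms budget to the function names, is an assumption made for illustration rather than a description of any real architecture.

```python
# Illustrative sketch of the on-board vs cloud trade-off. All names and
# numbers are assumptions, not a description of a real AV architecture.
LATENCY_BUDGET_MS = 100  # invented: time available to classify a hazard

ONBOARD_HAZARDS = {"pedestrian", "cyclist", "stopped_vehicle"}  # small local set

def classify_hazard(observation: str, cloud_round_trip_ms: int) -> str:
    """Prefer the compact on-board model; the richer cloud model only
    helps if its round trip fits inside the latency budget."""
    if observation in ONBOARD_HAZARDS:
        return f"{observation} (on-board model)"
    if cloud_round_trip_ms <= LATENCY_BUDGET_MS:
        return f"{observation} (cloud model)"
    return "unknown hazard: fall back to safe behaviour"

print(classify_hazard("escaped_livestock", cloud_round_trip_ms=250))
```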

Kingston concludes that all three interpretations of liability could be applied to the scenario, but this debate shows how far away we are from autonomous vehicles becoming a reality. The technical aspects might not be that far off, but we still haven't worked through the philosophical, ethical or moral implications of everything else.
