Liability and Risk in Programming Autonomous Vehicles

by Mark Weston, Head of Information Technology, Intellectual Property and Commercial Law at Hill Dickinson

Many readers will remember the Knight Industries Two Thousand (or KITT) from the 1980s – the fascinating self-aware car driven by the Knight Rider himself, the Hoff. That car was programmed using sophisticated artificial intelligence and machine learning. The best science fiction (and in the 1980s it was science fiction) has a habit of becoming science fact shortly afterwards. That time has come: autonomous vehicles are about to storm the world stage.

We are about to undergo a paradigm shift from passive, response-based systems in cars (such as cruise control, lane-change warning alarms, obstacle alarms and so on) to fully active systems: an autonomous vehicle processes inputs from multiple sensors, integrates them (in some implementations) with externally supplied data (for example, from sensors transmitting from road signs or from the road itself), and then takes a decision on what to do – how much to accelerate or brake, how far to turn the wheel, and so on.
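To make the passive-to-active distinction concrete, here is a minimal, purely illustrative sketch in Python. Every name in it – the Decision fields, the input keys, the placeholder logic – is a hypothetical assumption rather than any real vehicle API; the point is simply that the system itself selects and applies an action instead of merely sounding an alarm.

    from dataclasses import dataclass

    @dataclass
    class Decision:
        throttle: float        # 0.0 (none) to 1.0 (full)
        brake: float           # 0.0 (none) to 1.0 (full)
        steering_angle: float  # radians; positive = turn right

    def decide(sensor_inputs: dict, external_data: dict) -> Decision:
        """Fuse on-board sensor readings with externally supplied data and
        choose an action. Placeholder logic only: brake hard if anything
        reports an obstacle ahead, otherwise cruise."""
        obstacle_ahead = (
            sensor_inputs.get("obstacle_ahead", False)
            or external_data.get("hazard_reported", False)
        )
        if obstacle_ahead:
            return Decision(throttle=0.0, brake=1.0, steering_angle=0.0)
        return Decision(throttle=0.3, brake=0.0, steering_angle=0.0)

    # The on-board sensors see nothing, but a roadside transmitter reports a
    # hazard: a fully active system acts on the fused picture rather than
    # merely warning the driver.
    print(decide({"obstacle_ahead": False}, {"hazard_reported": True}))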

Electro-ethics of autonomous vehicles

One of the most pressing issues in technology law today involves just this kind of artificial intelligence and machine learning, and it raises philosophical questions with legal consequences. This is the field of so-called ‘electro-ethics’: the intersection of technology, law and moral philosophy. To enable machines to perform the sophisticated decision-making needed to complete complex tasks, software designers must develop sets of rules that will underpin the decisions made in any situation. It is impossible to program on a situation-by-situation basis, so higher-level guiding principles need to be programmed with sufficient clarity that any situation can be dealt with safely and properly.
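As an illustration of what such higher-level guiding principles might look like in code, consider the following hedged sketch in Python. The Action fields, the penalty weights and the principles themselves are all hypothetical: instead of enumerating road situations, the software scores every candidate manoeuvre against a small set of general principles and picks the least bad one.

    from dataclasses import dataclass

    @dataclass
    class Action:
        """A candidate manoeuvre and its predicted consequences (hypothetical model)."""
        name: str
        predicted_harm: float        # expected casualties; 0.0 = none
        breaks_traffic_law: bool
        requires_positive_act: bool  # does the car actively steer into someone?

    # High-level guiding principles, each returning a penalty for an action.
    # The weights encode their relative priority; no individual road
    # situation is ever enumerated.
    PRINCIPLES = [
        lambda a: a.predicted_harm * 100.0,                 # minimise harm above all
        lambda a: 10.0 if a.breaks_traffic_law else 0.0,    # obey the rules of the road
        lambda a: 1.0 if a.requires_positive_act else 0.0,  # prefer omission to action
    ]

    def choose_action(candidates: list[Action]) -> Action:
        """Pick the candidate with the lowest total penalty across all principles."""
        return min(candidates, key=lambda a: sum(p(a) for p in PRINCIPLES))

The interesting legal question, of course, is who chooses those weights – and on what moral basis.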

Most decisions taken by autonomous vehicles will be benign and straightforward – and this is true whether for two-dimensional car driving or for three-dimensional drone or plane piloting, although the latter is a topic for another article! For example, a self-driving or autonomous vehicle will use its programming to avoid collisions with other cars and obstacles. Occasionally, however, an autonomous vehicle will face what we may term an ‘extreme situation’, where whatever action it chooses will result in injury or loss of life. It is in these scenarios that the ethical underpinning of the programming will be squarely in the legal (and moral) spotlight.

For example, imagine a self-driving car travelling at 28mph in a 30mph zone. On the left-hand side of the road is a concrete wall; coming up on the right-hand side, an elderly man waits at a bus stop. Two teenage girls step into the road in front of the car, close enough that it cannot come to a stop before hitting them. The car faces three courses of action:

  A. It could brake as hard as possible before colliding with the two girls, risking their (probable) deaths.
  B. It could swerve sharply left, avoiding the girls but colliding with the wall, risking killing or injuring its occupant and possibly bouncing into the girls anyway.
  C. It could swerve right, colliding with the elderly man and probably causing his death.

What should it do? And how should the programmers – those who developed the decision trees and flows, those responsible for the original learning base, and those responsible for the algorithms that developed learning from that base – program the car?

The moral philosophy problem

This is effectively a moral philosophy problem, and the car will make its choice according to the rules of its software. However, not all of the world would agree on how to react, because not all of the world subscribes to the same moral philosophy. Different cultures and societies would choose different options in the above scenario. In Western liberal democracies, for example, the prevailing view would likely be to take option C and risk the death of the elderly man – a classically utilitarian response (“the greatest good for the greatest number”): one elderly man who has lived his life, weighed against two teenagers who have yet to live theirs, weighed against the single occupant of the car. In parts of the Middle East, however, the prevailing moral philosophy forbids any positive action that would take a life. Residents there would therefore expect the car to take option A and risk the deaths of the two girls, since choosing to steer into the wall or the elderly man would be a positive action towards taking a life, whereas continuing on course whilst braking as hard as possible is not – the girls stepped into the road of their own volition.
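To see how starkly the choice of philosophy changes the outcome, the bus-stop scenario can be encoded and run under both policies. This is a deliberately simplified, hypothetical sketch: the casualty estimates are invented for illustration and the function names are my own, not anyone’s production code.

    from dataclasses import dataclass

    @dataclass
    class Option:
        label: str
        expected_deaths: float
        positive_act: bool  # does the car actively steer into someone?

    # The three courses of action from the scenario (illustrative estimates).
    OPTIONS = [
        Option("A: brake, hit the two girls", expected_deaths=2.0, positive_act=False),
        Option("B: swerve left into the wall", expected_deaths=1.5, positive_act=True),
        Option("C: swerve right into the elderly man", expected_deaths=1.0, positive_act=True),
    ]

    def utilitarian(options):
        """The greatest good for the greatest number: minimise expected deaths."""
        return min(options, key=lambda o: o.expected_deaths)

    def no_positive_act(options):
        """Forbid any positive action that takes a life; among the permissible
        options, still minimise harm."""
        permissible = [o for o in options if not o.positive_act] or list(options)
        return min(permissible, key=lambda o: o.expected_deaths)

    print(utilitarian(OPTIONS).label)      # -> C: swerve right into the elderly man
    print(no_positive_act(OPTIONS).label)  # -> A: brake, hit the two girls

Identical inputs, two defensible rule sets, two different victims – which is precisely why the programming choice is a legal and moral question, not a purely technical one.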

It is possible that cars could come with different rules designed for different markets, or include a mechanism for switching between ethical rule sets. However, the import and export market complicates the picture: what happens, for example, if a manufacturer programs a car for the UK market and it is later exported to the Middle East?
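Continuing the sketch above (and reusing its hypothetical utilitarian and no_positive_act policies), a market-switching mechanism might be as crude as a lookup fixed at manufacture – which is exactly where the export problem bites, because the rule set travels with the car rather than with the jurisdiction it ends up in. The market codes below are invented for illustration.

    # Hypothetical mapping from target market to ethical policy, fixed at build time.
    POLICY_BY_MARKET = {
        "UK": utilitarian,               # minimise total harm
        "MIDDLE_EAST": no_positive_act,  # forbid positive acts that take a life
    }

    def configure_vehicle(market: str):
        """Bake the target market's ethical policy into the vehicle at manufacture."""
        return POLICY_BY_MARKET[market]

    decide_policy = configure_vehicle("UK")
    # If this car is later exported to the Middle East, it still carries the UK
    # rule set – and still chooses option C – unless it is re-flashed.
    print(decide_policy(OPTIONS).label)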

Legal considerations for autonomous vehicles

There are also legal consequences to consider. Someone (probably the relatives of anyone killed) is going to sue. Who are they going to sue? The manufacturer of the car? The occupant of the car? The owner of the car (if different)? The original programmers? The programmers who worked on the most recent software update? The programmers who programmed the original base? The programmers who developed the particular AI learning model? The owner of the computer in the car (if different), or the licensor of the various parts of the software?

What if the car’s programming fails because of a cyber attack? That is a whole other area of risk and liability. Who is responsible for preventing it? Again, a topic for another article!

Even if such events are as rare as one in fifty million, consider how many car journeys take place globally each year and you get the idea that this scenario will be encountered at some point. Law traditionally lags behind technology, but in this case failure to legislate or regulate could stifle these nascent technologies at birth, for fear of bearing the cost of crystallized risks. Technology changes very fast and the law needs to catch up.

Many countries are beginning to pass laws addressing liability. At the European level, the European Commission’s Digital Agenda for Europe recognizes the challenge facing the development of autonomous vehicle technology. The Commission has adopted a number of Communications on the topic since 2016 and has passed various enabling pieces of legislation since 2014, all aimed at ensuring consistent roadworthiness checking and documentation for vehicles. In the UK, the Department for Transport (DfT) and the Centre for Connected and Autonomous Vehicles (a joint policy unit of the DfT and the Department for Business, Energy and Industrial Strategy) are responsible for making Britain a leader in this area. They intend to take a “light touch” approach, which is probably the right way to go. Legislation is making its way through Parliament.

Need for universal and global standards

But it is early days on the legal front. Large global organizations may need to meet to agree universal standards for programming this software – either agreeing on the philosophical underpinning for the rules or on how to switch between them. That would help legislators who are struggling to keep up. As always, the technology is moving faster than the law. Watch this space – because it will be filled by an autonomous vehicle very soon!