Liability and Risk in Programming Autonomous Vehicles

Many readers will remember the Knight Industries Two Thousand (or KITT) from the 1980s – the fascinating self-aware car driven by the Knight Rider himself, the Hoff. That car was programmed using sophisticated artificial intelligence and machine learning. The best science fiction (and in the 1980s it was firmly science fiction) has a habit of becoming science fact shortly afterwards, and that time has arrived: autonomous vehicles are about to storm the world stage.

We are about to undergo a paradigm shift from passive, response-based systems in cars (such as cruise control, lane-change warning alarms, obstacle alarms and so on) to fully active systems. In an active system the autonomous vehicle processes inputs from multiple sensors, in some implementations integrates them with externally supplied data (for example from transmitters in road signs or in the road surface itself), and then takes the decision on what to do – how hard to accelerate or brake, how far to turn the wheel, and so on.

Electro-ethics of autonomous vehicles

One of the most pressing issues in technology law today involves precisely this kind of artificial intelligence and machine learning, and it is giving rise to philosophical questions with legal consequences. This is the field of so-called ‘electro-ethics’: the intersection of technology, law and moral philosophy. To enable machines to make the sophisticated decisions needed to complete complex tasks, software designers need to develop the sets of rules that will underpin the decision taken in any situation. It is impossible to program for every situation individually, so higher-level guiding principles need to be programmed with sufficient clarity that any situation can be dealt with safely and properly.

Most decisions taken by autonomous vehicles will be benign and straightforward. This is true for two-dimensional car driving or three-dimensional drone or plane piloting – although that’s a topic for another article! For example, a self-driving or autonomous vehicle will use its programming to avoid collisions with other cars and obstacles. However, occasionally an autonomous vehicle will face what we may term an ‘extreme situation’ where the action it chooses will result in injury or loss of life. It is in these scenarios that the ethical underpinning of the programming will be squarely in the legal (and moral) spotlight.

For example, imagine a self-driving car travelling at 28mph in a 30mph zone. On the left-hand side of the road is a concrete wall; on the right-hand side, an elderly man waits at a bus stop. Two teenage girls step into the road in front of the car, close enough that the car cannot come to a stop before hitting them. The car faces three possible courses of action:

  1. It could brake as hard as possible before colliding with the two girls, risking their (probable) deaths.
  2. It could turn sharply left, avoiding the girls, colliding with the wall and risking killing or injuring its occupant and possibly bouncing onto the girls anyway.
  3. It could turn to the right, colliding with the elderly man and probably causing his death.

What should it do? And how should the programmers – those who developed the decision trees and flows, those responsible for the original learning base, and those responsible for the algorithms that developed the learning from that base – program the car?

The moral philosophy problem

This is effectively a moral philosophy problem, and the car will make its choice according to the rules of its software. However, not all of the world would agree on how to react, because not all of the world subscribes to the same moral philosophy. Different cultures and societies would choose different options in the above scenario. In Western liberal democracies, for example, the prevailing view is likely to favour option 3, risking the death of the elderly man – a classically utilitarian response (“the greatest good for the greatest number”): one elderly man who has lived his life, weighed against two teenagers who have yet to live theirs and against the single occupant of the car. In parts of the Middle East, however, the prevailing moral philosophy forbids any positive action that would take a life. Residents of that part of the world would therefore expect the car to take option 1 and risk the deaths of the two girls, since steering into the wall or into the elderly man would be a positive action towards taking a life, whereas continuing towards the girls whilst braking as hard as possible is not – the girls stepped into the road of their own volition.
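To make the divergence concrete, here is a deliberately simplified sketch in Python – with invented option labels, hypothetical harm figures and made-up function names, not any manufacturer's actual logic – of how two different ethical rule sets could be encoded over the same three options and reach different decisions.

```python
from dataclasses import dataclass

# Illustrative sketch only: the options, harm estimates and rules below are
# invented for this article's scenario and are not any real vehicle's logic.

@dataclass
class Option:
    label: str
    expected_life_years_lost: float  # crude utilitarian measure (hypothetical numbers)
    requires_positive_action: bool   # does the car actively steer into a person or obstacle?

OPTIONS = [
    Option("1: brake hard, stay on course (hit the two girls)",
           expected_life_years_lost=130.0, requires_positive_action=False),
    Option("2: swerve left into the wall (risk the occupant, perhaps the girls too)",
           expected_life_years_lost=60.0, requires_positive_action=True),
    Option("3: swerve right (hit the elderly man)",
           expected_life_years_lost=10.0, requires_positive_action=True),
]

def utilitarian_choice(options):
    """Greatest good for the greatest number: minimise expected harm overall."""
    return min(options, key=lambda o: o.expected_life_years_lost)

def no_positive_action_choice(options):
    """Forbid any positive action that takes a life; among the remaining
    options, still minimise expected harm."""
    permitted = [o for o in options if not o.requires_positive_action]
    return min(permitted or options, key=lambda o: o.expected_life_years_lost)

if __name__ == "__main__":
    print("Utilitarian rule set chooses option", utilitarian_choice(OPTIONS).label)
    print("No-positive-action rule set chooses option", no_positive_action_choice(OPTIONS).label)
```

Run as written, the utilitarian rule set selects option 3 and the no-positive-action rule set selects option 1 – the same divergence described above. The numbers are invented; the point is that the outcome in an extreme situation is determined entirely by which guiding principles the programmers choose to encode.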

 

