Self-driving cars and other autonomous vehicles have been in the press recently. On BBC Breakfast on the 26th, when reviewing the papers, the presenters discussed an investigation being conducted by an insurer into the damages that might result from various car accident scenarios. The report raised the key question of who is responsible in the event of an accident. Separately, the BBC discussed the plan to introduce self-driving, or platooning, articulated lorries to UK roads (read here). The introduction of self-driving cars brings with it a host of questions.
The BBC Breakfast report specifically highlighted the need to consider responsibility in the event of an accident involving an autonomous vehicle. The guest, who had a legal background, suggested that under current law the person in control of the vehicle would be deemed responsible. As such, if I were sat in the driver’s seat of a self-driving car, I would be responsible. In fact, this seems to suggest that if I controlled the vehicle merely by setting its destination, which could be considered a form of control, then I would be responsible even if I were sat in a rear passenger seat. So would I be responsible for the self-driving taxi I used to get home?
In the event of a hacker compromising a car’s systems, responsibility seems rather clear: the hack would represent a criminal act, and the hacker would therefore be responsible. The other possibility is that the manufacturer failed to take sufficient security precautions to protect the vehicle from cyber-attack, leading to partial responsibility on their part. This seemingly simple picture is quickly complicated if we consider that the systems in a self-driving car are likely to be like other computer systems: they will need updating. So in the event of an accident caused by a car running out-of-date software, or a car compromised because it had not received the latest security patches, who would be responsible?
This brings us to what I consider the biggest question in the use of self-driving or autonomous cars: how will the car decide who lives or dies in the event of a serious accident? Consider this: an accident is unavoidable, but the car has a choice between crashing into a group of around 10 people, where serious injuries are likely, or crashing into a bike rider travelling in the opposite direction, resulting in near-certain death. Which should the car choose? Does changing the number of people in the group upwards to 100, or down to 5, make a difference? A variant of the above might be that the car can choose either to crash into the group of people or to sacrifice itself in a way that kills its own occupants, for example by crashing into the sea or over a cliff. Does the fact that the death would be of the car’s occupant, who is notionally in control of the car, make a difference to the car’s decision-making process?
The above questions and scenarios are very difficult for us as humans to answer and are likely to stimulate considerable debate, yet it will be human programmers who have to put together the code that makes these decisions. Will these programmers be responsible for the acts of the cars for which they provide the software?
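To make concrete what "putting together the code" might actually mean, here is a minimal, purely illustrative sketch in Python. Nothing here reflects any real manufacturer's system; the Outcome class, the weights, and the choose_outcome function are all hypothetical names invented for this post, to show how the ethical questions above get flattened into ordinary-looking parameters.

```python
from dataclasses import dataclass

# Hypothetical sketch only: how a collision "choice" might look once
# reduced to code. All names, weights, and estimates are invented for
# illustration and do not represent any real vehicle system.

@dataclass
class Outcome:
    description: str
    expected_deaths: float      # estimated fatalities for this option
    expected_injuries: float    # estimated serious injuries for this option
    occupants_harmed: bool      # does this option harm the car's own occupants?

def harm_score(o: Outcome,
               death_weight: float = 10.0,
               injury_weight: float = 1.0,
               occupant_penalty: float = 0.0) -> float:
    """Lower is 'better'. The weights ARE the ethical policy: whoever
    sets death_weight, injury_weight and occupant_penalty is answering
    the questions posed in the paragraphs above."""
    score = (death_weight * o.expected_deaths
             + injury_weight * o.expected_injuries)
    if o.occupants_harmed:
        # Should the car protect its own occupants more, or less, than others?
        score += occupant_penalty
    return score

def choose_outcome(options: list[Outcome]) -> Outcome:
    # Pick the option with the lowest harm score; ties resolve to list order.
    return min(options, key=harm_score)

# The scenario from the text: a group of ~10 facing likely serious
# injuries, versus one rider facing near-certain death, versus
# sacrificing the car's occupant over a cliff.
options = [
    Outcome("swerve into the group of 10", expected_deaths=0.0,
            expected_injuries=10.0, occupants_harmed=False),
    Outcome("hit the oncoming bike rider", expected_deaths=1.0,
            expected_injuries=0.0, occupants_harmed=False),
    Outcome("drive over the cliff", expected_deaths=1.0,
            expected_injuries=0.0, occupants_harmed=True),
]

for o in options:
    print(f"{o.description}: score {harm_score(o)}")
print("chosen:", choose_outcome(options).description)
```

With these invented weights all three options score identically, so min() silently breaks the tie by list order; change the group to 100 people or to 5 and the "decision" flips. That is the point: every ethical question in this post reappears as a numeric parameter that some programmer, somewhere, has to set.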
Self-driving vehicles look highly likely to be in widespread use within the next five years; however, before this happens there are still a lot of unanswered questions, especially ethical ones.