Artificial intelligence and machine learning are two important and interesting topics for discussion with students. Given the increasing use of, or at least experimentation with, AI and machine learning, these technologies are likely to become a more common part of life for the students who currently occupy our classes, so it is important that we get them thinking about the implications of these new technologies.
One use for AI and machine learning is the development of self-driving vehicles. A number of companies are currently experimenting with this technology. It does, however, present a very interesting philosophical dilemma: how do we program vehicles to make ethical decisions?
How do we train an autonomous vehicle to make decisions that weigh the safety of pedestrians or other drivers against the safety of the car's own occupants? Take, for example, a situation where a car can either swerve to avoid a group of pedestrians, crashing and injuring its occupants, or hit the pedestrians, causing serious injury or death. Is the likely injury of one person acceptable if it saves another from death? When I ask students, the answer I usually get is to save the pedestrian by crashing the car despite the resulting injury to the passenger; a life is worth more than an injury. But what if there were four passengers who might all be injured? What if the passengers were children, or if there were a potential fatality among the passengers rather than an injury? Would we still crash the car to save one person at the risk of four children? How can we program cars to make these decisions when we are unable to resolve such complexities ourselves?
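To make the programming problem concrete for students, here is a minimal, purely illustrative sketch in Python of a utilitarian "least harm" decision rule. Every name and number in it is my own invention for discussion purposes; no real vehicle is programmed this way, and the point is precisely that the hard ethical judgement ends up hidden inside an arbitrary constant.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    fatalities: int
    injuries: int

def expected_harm(outcome: Outcome, fatality_weight: float = 10.0) -> float:
    """Score an outcome: lower is 'better'.

    Assumes one fatality is as bad as ten injuries -- an ethical
    judgement the programmer has quietly baked into a constant.
    """
    return outcome.fatalities * fatality_weight + outcome.injuries

# The classroom scenario: swerve and injure four child passengers,
# or carry on and kill one pedestrian.
options = [
    Outcome("swerve and crash, injuring 4 child passengers", fatalities=0, injuries=4),
    Outcome("continue and hit the pedestrian, killing 1 person", fatalities=1, injuries=0),
]

choice = min(options, key=expected_harm)
print(f"Least-harm option: {choice.description}")
```

Notice that with fatality_weight set to 10 the car swerves and injures the four children, but lowering it to 3 flips the decision and the pedestrian is hit. The dilemma has not been solved; it has merely been relocated into a number that someone had to choose.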
Another issue is the habits that will develop from continued use of autonomous vehicles. If we routinely use such vehicles without issue, it is likely that we will be unable to act in the event that something goes wrong. Do we want to become slaves to our electronic chauffeurs? Would we still know how to avert an accident after years of letting the car's autonomous systems drive us around?
The recent Uber incident seems to be a case in point: the backup driver, who is currently required by law even where the car is autonomous, appeared unfocused on the road, clearly used to letting the car do the driving. Sadly, she was not prepared for the tragic accident which occurred. From the footage, I am unsure that she would have been able to do anything even if she had been fully focused on the road and had taken control of the vehicle.
The video below shows the moments leading up to the accident.
Please note that some people may find it upsetting.
The crash also brings to light another issue: who is responsible in the event of an accident? Since the accident, the press has mentioned the backup driver, who is required by law to be in the vehicle to take control in the event of an incident. There has also been mention of Uber, the company operating the vehicle, and of the company which provides the sensor technology. Other organisations involved could include the car manufacturer and any maintenance staff, as well as software programmers, among what I expect is a long list of organisations and individuals involved in this project. The question is: who is responsible when an autonomous vehicle goes wrong?
Autonomous cars perfectly highlight the philosophical and ethical issues which surround the increasing use of technology, AI and big data. The issues are easy to see because there are lives at risk. In other areas, such as our internet searching, our consumption of online news stories and our online shopping, the issues are not quite so apparent, although they are equally present. They may not result in possible death, but they may result in the shaping of beliefs, viewpoints and even cultures. This is a deep area for discussion, but one I believe we need to be having with our students.