AI, Autonomous Vehicles and Philosophy

Artificial intelligence and machine learning are two important and interesting topics for discussion with students. Given the increasing use of, or at least experimentation with, AI and machine learning, they are likely to become a more common part of life for the students who currently occupy our classes. It is therefore important that we get them thinking about the implications of these new technologies.

One use for AI and machine learning is the development of self-driving vehicles. A number of companies are currently experimenting with this technology. It does, however, present a very interesting philosophical dilemma: how do we program vehicles to make ethical decisions?

How do we train an autonomous vehicle to make decisions where those decisions will impact the safety of pedestrians or other drivers versus the safety of the occupants of the car itself? Take, for example, a situation where a car can either avoid a group of pedestrians but crash in doing so, injuring its occupants, or hit the pedestrians, causing serious injury or death. Is the likely injury of one person worth it to save another from death? The answer I get when I ask students is to save the pedestrian by crashing the car despite the resultant injury to the passenger; a life is worth more than an injury. What if there were four passengers who might all be injured? What if the passengers were children, or if there were a potential fatality among the passengers rather than an injury? Would we still crash the car to save one person but risk four children? How can we program cars to make these decisions when we are unable to resolve such complexities ourselves?
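To make the difficulty concrete, here is a deliberately naive sketch in Python of what a harm-minimising rule might look like. To be clear, the weightings and probabilities are invented purely for discussion; no real vehicle works this way. The point is that every constant is a moral judgement smuggled in as a number.

```python
# A deliberately naive harm-minimising rule. Every constant is an
# invented moral judgement, not anything a manufacturer actually uses.

FATALITY_WEIGHT = 10.0   # how many "injuries" is one death worth?
INJURY_WEIGHT = 1.0

def expected_harm(people: int, p_injury: float, p_fatality: float) -> float:
    """Score an outcome as a weighted sum of expected injuries and deaths."""
    return people * (p_injury * INJURY_WEIGHT + p_fatality * FATALITY_WEIGHT)

def choose(outcomes: dict) -> str:
    """Pick the action with the lowest expected-harm score."""
    return min(outcomes, key=outcomes.get)

# One passenger injured versus one pedestrian possibly killed:
print(choose({
    "swerve (injure 1 passenger)": expected_harm(1, p_injury=0.9, p_fatality=0.0),
    "continue (hit 1 pedestrian)": expected_harm(1, p_injury=0.5, p_fatality=0.5),
}))  # -> swerve (injure 1 passenger)

# Four child passengers at some risk of a fatality, and the verdict flips -
# yet nothing in the code tells us the new answer is "right":
print(choose({
    "swerve (risk 4 children)":    expected_harm(4, p_injury=0.9, p_fatality=0.1),
    "continue (hit 1 pedestrian)": expected_harm(1, p_injury=0.5, p_fatality=0.5),
}))  # -> continue (hit 1 pedestrian)
```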

The other issue is the habits which will come from continued use of autonomous vehicles. If we routinely use such vehicles without issue, it is likely that we will be unable to act in the event that something goes wrong. Do we want to become slaves to our electronic chauffeurs? Would we still know what to do to avert an accident after years of allowing the car's autonomous systems to drive us around?

The recent Uber incident seems a case in point: the driver, who is currently required by law even where the car is autonomous, appeared unfocused on the road, clearly used to allowing the car to do the driving. Sadly, she was not prepared for the tragic accident which occurred. From the footage, I am unsure that she would have been able to do anything even if she had been fully focused on the road and had taken control of the vehicle.

The video below shows the moments leading up to the accident.

Please note that some people may find it upsetting.

The crash also brings to light another issue: who is responsible in the event of an accident? Since the accident there has been mention in the press of the backup driver, who is required by law to be in the vehicle to take control in the event of an incident. There has been mention of Uber, the company owning the vehicle, as well as of the company which provides the sensor technology. Additional organisations involved could include the car manufacturer and any maintenance staff, as well as the software programmers, among what I would expect is a long list of organisations and individuals involved in this project. The question is: who is responsible when an autonomous vehicle goes wrong?

Autonomous cars perfectly highlight the philosophical and ethical issues which surround the increasing use of technology, AI and big data. It is easy to see the issues as there are lives at risk. In other areas, such as our internet searching, our consumption of online news stories and our online shopping, the issues are not quite so apparent, although they are equally there. They may not result in possible death, but they may result in the shaping of beliefs, viewpoints and even cultures. This is a deep area for discussion, but one I believe we need to be having with our students.


Big data and digital literacy

The recent Cambridge Analytica scandal is a perfect discussion topic for use with students when looking at the implications of big data on our lives, or more importantly on the future lives of the students who currently occupy our classrooms.

For me, one of the first areas for discussion is trying to get an appreciation of all the data we make available to organisations such as Google and Facebook. As we use their free services, we provide them with data.

The second area for consideration is the fact that the data provided can then be used to identify further data, or to extrapolate the probability of certain characteristics. A perfect example is how Target gathered data in the hope of identifying which female shoppers were pregnant, given the tendency for expectant mothers to be profitable customers. By looking at a woman's spending habits, including changes in those habits over time, Target was able to assign a pregnancy-probability rating to its customers, thereby identifying which customers were most likely to be pregnant.
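A toy version of such a scoring model might look like the sketch below. To be clear, Target's actual model is proprietary: the signal products here are those reported in press coverage of the story, while the weights and the logistic scoring are invented for illustration.

```python
import math

# An invented illustration of purchase-based scoring. Target's real model
# is proprietary; the products below were reported as signals in press
# coverage, but the weights here are made up.
SIGNALS = {
    "unscented lotion": 1.2,
    "calcium/magnesium/zinc supplements": 1.5,
    "large pack of cotton balls": 0.8,
}

def pregnancy_probability(basket, baseline=-3.0):
    """Add a weight for each signal purchase to a low baseline score,
    then squash the total through a logistic function."""
    score = baseline + sum(SIGNALS.get(item, 0.0) for item in basket)
    return 1 / (1 + math.exp(-score))

print(round(pregnancy_probability(["milk", "bread"]), 2))  # 0.05
print(round(pregnancy_probability(list(SIGNALS)), 2))      # 0.62
```

Run over millions of loyalty-card histories, even a crude score like this lets a retailer rank customers by likelihood of pregnancy, which is exactly what makes it so commercially attractive and so uncomfortable.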

Ethics and privacy are another area for discussion. How comfortable are students with the fact that companies such as Target might be able to identify something as private as whether a woman is pregnant? Is this an invasion of our privacy?

One of the main issues surrounding Cambridge Analytica is the possible use of data to profile individuals and then to influence them and their decision making. Through marketing targeted specifically at individuals, based on the data available on them, people may have had their voting decisions shaped. Their decisions may not have actually been their own. Is such a practice of profiling and influencing individuals ethical?

We also have the issue of information sharing. If we provided the information to Facebook or Google, do they have the right to share it with others, and if so, are there limitations on what such a third party might do with the data? The Cambridge Analytica scandal highlights this: the data gathered came from a questionnaire app, which made use of sharing functionality in Facebook to hoover up far more data than it was directly given, gathering data on the friends of the app's users.
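A toy sketch shows why this friend-sharing matters: a single install can expose every profile in that user's friend list. The friend graph and numbers below are invented; this is not Facebook's actual API.

```python
# A toy model of transitive data collection. The friend graph is
# invented; this is not Facebook's actual API.
friends = {
    "alice": ["bob", "carol", "dave"],
    "bob":   ["alice", "erin"],
}

def profiles_reached(app_users):
    """An app installed by a few users can, via friend-sharing
    permissions, collect profiles of everyone those users know."""
    reached = set(app_users)
    for user in app_users:
        reached.update(friends.get(user, []))
    return reached

print(profiles_reached(["alice"]))  # {'alice', 'bob', 'carol', 'dave'}
# With a few hundred friends per user, a few hundred thousand installs
# can expose tens of millions of profiles.
```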

The fact we don't pay for Google or Facebook is another area worthy of discussion. The phrase "if you aren't paying for it, you are the product" seems appropriate here. We don't pay for using Facebook because Facebook gets its revenue from advertising. It is therefore sharing data with advertisers to allow them to target the appropriate customers and maximise the return on advertising expenditure. Are we happy that Facebook, and Google too, are in effect selling us? This also leads us to the purpose of Google and Facebook. Both appear to be companies providing services which enhance our lives. Although this is true, it is also important to remember that they are companies with shareholders and therefore companies out to make a profit. Does the safe, ethical and responsible use of all the data we provide trump their need to make a profit?

As we use more and more technology, with more and more of it being online, we are generating more and more data, and this data is being gathered by organisations. I don't believe there is any easy answer to this situation: proceeding oblivious or ignorant to the implications is ill advised, as is total disconnection and an attempt to avoid generating any data. For me, the key is for our students to be consciously aware of big data and its implications.

 

Self-Driving Vehicles: Who is responsible?

Self-driving cars and other vehicles have been in the press recently. On BBC Breakfast on the 26th, when reviewing the papers, they discussed an investigation being conducted by an insurer into the damages which might result from various car accident situations. Within the report, the key question of who is responsible in the event of an accident was raised. Separately, the BBC discussed the plan to introduce self-driving or platooning articulated lorries to UK roads (read here). The introduction of self-driving cars brings questions with it.

The BBC Breakfast report specifically stated the need to consider responsibility in the event of an accident involving an autonomous vehicle. The guest, who had a legal background, suggested that under current law the person in control of the vehicle would be deemed responsible. As such, if I were sat in the driver's seat of a self-driving car, I would be responsible. In fact, this seems to suggest that if I controlled the vehicle by setting its destination, which could be considered control, then I would be responsible even if I were sat in a rear passenger seat. So would I be responsible for the self-driving taxi I used to get home?

In the event of a hacker compromising a car's systems, responsibility seems rather clear in that this would represent a criminal act, and therefore the hacker would be responsible. The other possibility is that the manufacturer failed to exercise sufficient security precautions to protect their vehicle from cyber-attack, leading to partial responsibility on their part. This seemingly simple picture is quickly complicated if we consider that the systems in a self-driving car are likely to be like other computer systems: they will need updating. So, in the event of an accident due to a car running out-of-date software, or where the system was compromised because it had not received the latest security patches, who would be responsible?

This brings us to what I consider the biggest question in the use of self-driving or autonomous cars: how will the car decide who lives or dies in the event of a serious accident? Consider this: an accident is unavoidable, but the car has a choice between crashing into a group of around 10 people, where serious injuries are likely, or crashing into a bike rider travelling in the opposite direction, resulting in guaranteed death. Which should the car choose? Does changing the number of people in the group upwards to 100, or down to 5, make a difference? A variant of the above might be that the car can choose either to crash into the group of people or to crash itself in such a way as to kill its occupants, for example by going into the sea or over a cliff. Does the fact that the death would be of the occupant of the car, who is therefore in control of the car, make a difference to the car's decision-making process?

The above questions and scenarios are very difficult for us as humans to answer, and likely to stimulate some debate, yet it will be human computer programmers who have to put together the code that makes these decisions. Will these programmers be responsible for the acts of the cars for which they provide the software?
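As a small illustration of how much ethical weight those programmers would carry, consider this sketch (Python, with invented weightings, not any manufacturer's real logic), where changing a single constant flips the car's choice in the scenario above:

```python
# An invented illustration: a single constant encodes whose safety the
# car prioritises. This is not any manufacturer's real logic.

def pick_action(group_size: int, occupant_bias: float) -> str:
    """Compare expected harm: hitting the group causes serious injuries
    (scored here as 0.15 of a death each); self-sacrifice kills the
    occupant, whose life is scaled by occupant_bias."""
    harm_group = group_size * 0.15
    harm_occupant = 1.0 * occupant_bias
    return "hit the group" if harm_group < harm_occupant else "sacrifice the occupant"

# Weight the occupant's life equally with everyone else's:
print(pick_action(group_size=10, occupant_bias=1.0))  # sacrifice the occupant
# Weight the paying occupant three times as highly, and the verdict flips:
print(pick_action(group_size=10, occupant_bias=3.0))  # hit the group
```

Whoever sets that one number is, in effect, answering the trolley problem on behalf of everyone the car ever encounters.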

Self-driving vehicles in widespread use look to be a highly likely part of the future, possibly within the next five years. However, before this happens there are still a lot of unanswered questions, especially ethical ones.

Photos and privacy: Say cheese!!

I was sat reading my book in a rooftop bar in London. The evening was drawing in and it had been a long day: travelling down to London, walking for around an hour from the train station to the hotel in which I was to be staying, and then getting checked in and settled.

As I sat there reading my book, I saw a flash out of the corner of my eye, from the phone in the hands of the gentleman sat to my right. Had he just taken a photo? Was his phone camera directed at me? If so, why?

As we use our devices more and more, including using them in public, there is an increasing chance of accidentally invading someone else’s privacy, of taking a picture of someone without their permission.   This photo may then go on to be shared on social media.

When I used to work out in the UAE, I would often spend holiday periods sat by the beach in Abu Dhabi and, as with my incident in London, would quite often feature in the holiday snaps of other people visiting the beach. These holiday snaps would most likely then get uploaded to Facebook or other social media sites, where facial recognition might attempt to tag me in photos I was otherwise unaware I was in. There was now a public record of my holiday activities, yet I hadn't created it and might not even be aware of its existence.

Looking at the above incidents from the viewpoint of the person taking the photo, there comes a point where we need to ask permission, or to warn people, before we take a photo. This wasn't the case when our photos had to be developed from film and sharing was limited to showing friends and relatives the photo album you had gathered. Now that photos are digital and can easily be shared online, copied, and even amended and adjusted, this has become more important. The question, though, is when is it acceptable to capture people in a photo by accident, and when should we be asking permission?

From the point of view of the person ending up in a photo, we have to ask whether we are happy to end up in someone else's photo which may then be shared. As professionals, would we be happy for photos of birthday party antics to be online for people to find? This leads to the difficult situation of having to speak to people taking photos to question their motives and the intended use of the images. This does not generally come naturally to us, as it often involves addressing strangers.

The increasingly common use of photography, brought about by the ease of use of the high-definition cameras built into our mobile phones, presents a challenge: the benefits of taking more photos, more photographic records of events which can then be shared, versus the risk to personal privacy.

Do you tend towards the need for privacy or the benefits of taking lots of photos?

As facial recognition, big data and AI improve, does this become more of an issue?

Encryption, privacy and public safety

The internet provides us with the freedom to discuss ideas and thoughts, to collaborate and share. For educators this is invaluable in sharing teaching techniques and resources, and in allowing for the discussion of pedagogy and teaching ideologies. It also allows us to securely purchase goods and services, and to share images and video with our friends via social media such that only those we wish to have access to our content will have access. For organisations, it allows secure communication and the transfer of files such as confidential or other sensitive business documents, even when staff are out of the office travelling on business. It allows files to be protected through encryption so that only authorised personnel have access. On a personal level, it allows files to be protected from prying eyes where they are of a personal or private nature.

The above represents the positive side of technology; however, technology is a tool, and much as a hammer can be used to build things or as a weapon of violence, it can be used for malicious and evil purposes as much as it can be used for good.

In particular, the ability for secure communication and sharing of files can be used in planning acts of terrorism.    It can be used in coordinating acts of violence or other criminal activities.   It can be used to prevent police or intelligence services from accessing files which relate to illegal activities.

The above represents a dilemma. From the security perspective, we want the police and intelligence services to be able to access files and streams of communication for the purposes of keeping us safe. This seems logical, and an obvious step in light of recent events in the UK. The Prime Minister in her recent speech made reference to how the internet provides a safe space for extremism to grow, and how this needs to be tackled. The issue here is that to do so we need to introduce vulnerabilities into the encryption methods to give the police and intelligence agencies access. This means that secure access methods become less secure, not just for those conducting or planning illegal acts but for all users. The vulnerabilities that give the police access may be discovered or breached by criminals or other threat actors. It's like adding an extra side door to your house to which only the police have the key. If someone manages to copy the key, someone creates a skeleton key, or the police lose the key, then our house becomes accessible to those we would prefer to keep out. The new door represents an increase in the risk to the privacy of our home. A perfect technology example is the recent WannaCry ransomware, where the source of some of the exploited vulnerabilities can be traced back to the NSA. The NSA had discovered the vulnerability and developed tools to exploit it, with a view to using it to protect people's safety; however, when these leaked, the same vulnerability was put to malicious use, having a significant impact on the UK National Health Service (NHS) among others. Any weakening of encryption is going to increase the risk associated with the security of business communications, banking, social networking and any other system where data is exchanged using the now-weaker encryption methods.
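The side-door analogy can be made concrete with a minimal sketch, assuming key escrow as the access mechanism and using Python's cryptography library purely for illustration; no real escrow scheme is this simple.

```python
# A minimal illustration of the key-escrow problem, using Python's
# "cryptography" library. No real escrow scheme is this simple.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the user's encryption key
escrow_copy = key             # a copy lodged with the authorities

token = Fernet(key).encrypt(b"confidential business document")

# The escrowed copy decrypts the message just as well as the original -
# and so would anyone who steals, copies or leaks that copy.
print(Fernet(escrow_copy).decrypt(token))  # b'confidential business document'
```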

Although giving the police and intelligence agencies the tools to better identify illegal activities and terrorism online sounds like an obviously good idea, it doesn't come without downsides and risks.

Where is the correct balance between personal and corporate privacy and the ability of national agencies to view and intercept data in the interest of public safety?  

Where does personal privacy end and public safety begin?



The internet is neutral, with no one exerting overall control, and it crosses borders, so how could it be effectively monitored? And if it is monitored, this introduces vulnerabilities into protocols which have security at their heart; such vulnerabilities may become known to malicious actors, and that is a risk.