The Internet of Things (IoT)

The Internet of Things is a big concern and something students should be very aware of, as it potentially threatens our privacy and our security.

When discussing the Internet of Things I focus on two issues: first, that these devices generally ship with default usernames and passwords which are seldom changed by users; and second, the difficulty and irregularity of updating the software which runs on such devices.

When discussing passwords I focus on the 2014 reports of 70,000 webcams across the world which an internet user had gathered onto a single site.   As these devices were all still using their default passwords, or had no password set at all, any user could effectively connect to the feed and view whatever the camera could see, whether that be a car park, a football ground, the inside of a house or the path to someone’s front door.

A quick discussion with students about how they would feel having their movements monitored by persons unknown, and the risks such monitoring might expose them to, quickly gets the point across as to the need to change default passwords.

To illustrate the need to keep device software up to date I use the vulnerability which was identified in robotic vacuum cleaners.   This allowed hackers to gain access to the vacuum cleaner’s video feed as well as to control the device itself.   The vulnerability was in the software, which the vendor patched following discovery of the issue.

Students were then asked how they would know if a device they had purchased had an identified vulnerability.   Would vendors have a way to contact those who purchased their device?   It became clear that generally the answer is no, and therefore the only way to remain secure is to keep updating devices so that they are running the latest, and therefore least vulnerable, software.

The Internet of Things will continue to grow as more and more devices are connected to our home networks.   As the list of devices grows, so does the risk, and it will become ever more important that students are aware of these risks and of the basic security measures they can take, such as updating software and changing default passwords.


Have I been pwned?

There have been many high-profile data breaches over the last few years, including the Yahoo breach which hit around 3 billion user accounts and the LinkedIn breach which affected around 160 million user accounts, along with many smaller breaches of services across the internet.   I have often used the fact that these breaches have occurred as evidence that students need to take care over the details they share with services and the strength of the passwords they use, as well as ensuring they do not reuse the same passwords across different sites.


Around 6 months ago I was introduced to the Have I Been Pwned website and it is now a regular part of my lessons with students in relation to cyber security and digital citizenship.   The site contains a huge database of the details which have been leaked as part of the many publicly reported data breaches.   I ask students to volunteer to enter their email addresses into the service to see if their email account has ever been involved in a data breach.   This very much gets students engaged as they wait in anticipation to see if they have been caught up in a breach.   To date at least 1 in every 3 students who volunteer and enter their email address has been identified as having their account details “pwned”.   This to me is worrying, as those concerned were generally unaware, prior to accessing the site, that any of their details may have been leaked and may therefore now be accessible on the net.

I would recommend using the site with students and with staff, as well as personally, to check how exposed you are to past breaches.   The first time I accessed the site it flagged up the fact that my own personal details had been compromised as part of a breach I wasn’t aware of.   Having identified this I was quickly able to change my password and take other preventative measures.
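
For older students it can also be worth showing that the same idea can be explored programmatically. Checking email addresses through the Have I Been Pwned API requires an API key, but the related Pwned Passwords service can be queried freely using k-anonymity: only the first five characters of the password’s SHA-1 hash ever leave your machine. The sketch below is a minimal illustration in Python, assuming the publicly documented range endpoint; check the site’s own documentation before using it in class.

```python
import hashlib
import urllib.request


def pwned_count(password: str) -> int:
    """Return how many times a password appears in the Pwned Passwords data set.

    Only the first five characters of the SHA-1 hash are sent to the service
    (k-anonymity), so the password itself never leaves this machine.
    """
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]

    # The range endpoint returns every hash suffix starting with the prefix,
    # one "SUFFIX:COUNT" pair per line.
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "classroom-demo"},  # identify the caller politely
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")

    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0


if __name__ == "__main__":
    count = pwned_count("password123")  # deliberately weak example password
    print(f"Seen in {count} breaches" if count else "Not found in the data set")
```

This doubles as a nice talking point: students can see that it is possible to check whether a password has appeared in a breach without ever revealing the password itself.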

Two sides to sharing my data

When we share data with services we do so with a clear view of the benefits.   Making requests of Alexa or Google Home, for example, makes life more convenient.   Using Google Maps on my phone and sharing location information helps Google make their traffic reports and advice more accurate and timely.   Using a fitness tracker and sharing data on my heart rate, distance travelled and route allows the service to provide me with detailed tracking information, lets me share my progress via social media and allows me to compare my progress with that of other people.   The issue, however, is that, as with most things, for all the positives there are also risks or downsides.

To help illustrate the potential flip side, and to get students to at least consider the implications of sharing personal data, I use the video below showing Google’s perspective on what they do, followed by a more cynical Microsoft perspective on what Google does.

This perfectly illustrates that Google’s objectives may not be as altruistic as they at first appear.   They are, after all, a company with shareholders and a need to turn a profit.

For me the issue is simply that we share a massive amount of data about ourselves: about our lives, our habits, our families and the people we interact with.   We share based on the clear and obvious benefits of doing so; however, we often do not consider the risks that this might pose, the risks which are not at first evident.

As we continue to share more data we need to at least start to consider the potential implications this may have further down the line.

Internet of Things

As we bring more and more internet-enabled devices into our homes, I wonder if our students have really considered the implications.   Internet-enabled printers, children’s toys, baby monitors, temperature control systems, robo-vacuum cleaners, internet-enabled fridges and washing machines… the list goes on.

The issue is twofold in my eyes.

  1. The more internet-enabled devices we have in our homes, the more possible access routes we provide for cyber intruders to gain entry, gain valuable personal information and even gain our hard-earned cash.
  2. The makers of these devices focus first on their business and not necessarily on cyber security.   As such, the devices are often not designed with data protection and cyber security at their core.

To illustrate this for students I like the video below, relating to a vulnerability which was identified in robo-vacuum cleaners.

Following this I usually ask students to consider the range of internet-enabled devices they have at home and how each might be misused if compromised.

My closing remark for students: bringing internet-enabled devices into our homes makes life more convenient and, in some cases, more fun; however, it isn’t without its downsides and risks.

Digital Citizenship

I was asked by a student in a lesson the other week what “digital citizenship” meant.   Up until that point I had considered it to be a simple extension of citizenship into the technological or digital world, but hadn’t given it much thought beyond this.   The question made me think, or at least made me look online for an outline which I could use as a starting point.   I found this outline, which can be viewed here.

The website raises the following factors in digital citizenship:

  • Digital Access: The opportunity to access the digital world, digital resources and to get online.
  • Digital Commerce: The ability to conduct commerce, to manage money and to buy and sell items online.
  • Digital Communication: The ability to communicate online through email, SMS, video conferencing and other online media.
  • Digital Literacy: The skills required to use technology and also to learn new technologies as they arise or become required.
  • Digital Etiquette: An understanding of what is right and proper behaviour when conducting yourself in the digital world.
  • Digital Law: An understanding of the laws and regulations which surround the use of online services and resources.
  • Digital Rights and Responsibilities: An understanding of the expectations of users online and also what they can expect from others and the online world.
  • Digital Health and Wellness: An understanding of the physical and mental implications of technology use including digital addiction.
  • Digital Security: The process of how to remain safe and secure when online.

To me the above seems reasonably comprehensive; however, I wonder if there are a couple of further areas which might merit inclusion.

  • Digital Implications: An ability to appreciate the wider implications of new technology, seeing beyond its intended purpose to its unintended or secondary consequences.
  • Digital Ethics: The ability to evaluate the ethical and moral issues surrounding technology use, considering not the question of “can we?” but the question of “should we?”.
  • Digital Openness: The ability to read the comments of others, such as tweets, and understand that they represent a viewpoint, and that the medium itself may shape how you perceive the comments, their meaning and their intention.   An ability to avoid taking things personally.
  • Digital Resilience: The ability to manage technological failures, difficulties and negatives, and to move on, trying new things and seeking better solutions.   This is also linked to digital openness in the ability to move on from negative comments received online.

Have I missed anything?


Self-Driving Vehicles: Who is responsible?

Self-driving cars and other vehicles have been in the press recently.   On BBC Breakfast on the 26th, when reviewing the papers, they discussed investigations being conducted by an insurer into the damages which might result from various car accident situations.   Within the report the key question of who is responsible in the event of an accident was raised.   Separately, the BBC discussed the plan to introduce self-driving or platooning articulated lorries to UK roads (read here).   The introduction of self-driving cars brings questions with it.

The BBC Breakfast report specifically raised the need to consider responsibility in the event of an accident where an autonomous vehicle was involved.   The guest, who had a legal background, suggested that under current law the person in control of the vehicle would be deemed responsible.   As such, if I were sat in the driver’s seat of a self-driving car I would be responsible.   In fact, this seems to suggest that if I controlled the vehicle by setting its destination, which could therefore be considered control, then I would be responsible even if I were sat in a rear passenger seat.   So would I be responsible for the self-driving taxi I used to get home?

In the event of a hacker compromising a car’s systems, responsibility seems rather clear in that this would represent a criminal act and therefore the hacker would be responsible.   The other possibility is that the manufacturer failed to take sufficient security precautions to protect their vehicle from cyber-attack, leading to partial responsibility on their part.   This seemingly simple picture is quickly complicated if we consider that the systems in a self-driving car are likely to be like other computer systems: they will need updating.   So in the event of an accident caused by a car running out-of-date software, or where the system was compromised because it had not received the latest security patches, who would be responsible?

This brings us to what I consider the biggest question in the use of self-driving or autonomous cars: how will the car decide who lives or dies in the event of a serious accident?   Consider this: an accident is unavoidable, but the car has a choice between crashing into a group of around 10 people, where serious injuries are likely, or crashing into a cyclist travelling in the opposite direction, resulting in certain death.   Which should the car choose?   Does changing the number of people in the group upwards to 100, or down to 5, make a difference?   A variant of the above might be that the car can choose either to crash into the group of people or to crash itself in such a way as to kill the car’s occupants, for example by driving into the sea or over a cliff.   Does the fact that the death would be of the occupant of the car, who is therefore in control of the car, make a difference to the car’s decision-making process?

The above questions and scenarios are very difficult for us as humans to answer and are likely to stimulate some debate, yet it will be human programmers who have to put together the code that makes these decisions.   Will these programmers be responsible for the acts of the cars for which they provide the software?
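
To make this concrete for students I sometimes show how such a rule might look if written down in code. The sketch below is entirely hypothetical: the names, numbers and the “minimise expected deaths” rule are invented purely for discussion, and are not how any real vehicle works. The point is that whichever rule is chosen, a programmer has had to encode a moral judgement.

```python
from dataclasses import dataclass


@dataclass
class Outcome:
    """One possible action the vehicle could take in an unavoidable collision."""
    description: str
    people_affected: int
    probability_of_death: float  # 0.0 to 1.0


def choose_outcome(outcomes: list[Outcome]) -> Outcome:
    """A crude 'minimise expected deaths' rule.

    Even this single line embeds a moral judgement: it treats several probable
    injuries as preferable to one certain death, regardless of who is inside
    the vehicle. A different programmer might weight occupants and bystanders
    differently, and that choice is exactly the ethical question raised above.
    """
    return min(outcomes, key=lambda o: o.people_affected * o.probability_of_death)


# The scenario from the text, with made-up numbers purely for illustration.
options = [
    Outcome("swerve into the group of ten pedestrians",
            people_affected=10, probability_of_death=0.3),
    Outcome("hit the oncoming cyclist",
            people_affected=1, probability_of_death=1.0),
]
print(choose_outcome(options).description)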

Self-driving vehicles in widespread use look to be a highly likely part of the future, possibly within the next 5 years; however, before this happens there are still a lot of unanswered questions, especially ethical ones.

Basic Tech Safety

In developing a series of sessions on digital literacy I thought a good place to start would be basic computer safety, including password management.   Ahead of this is an initial discussion with students to identify the risks and implications of using technology where no consideration has been given to computer safety and security.

The areas which I consider to represent the basic elements of safety are:

  1. Password and account management
  2. Risk associated with website access
  3. Social media dangers
  4. The danger of the ubiquitous use of email
  5. Data loss from mobile devices, portable storage or storage failure.

In discussing each I use the CIA acronym as a structure for examining the risks and safety measures.   CIA refers to Confidentiality, Integrity and Availability.   In discussing password management, confidentiality may lead us to consider how we keep usernames and passwords confidential so that our files remain confidential.   It may also lead us to discuss availability, in that as users we want easy access to our data, and therefore shorter, easier-to-remember usernames and passwords seem preferable, yet this runs contrary to the need for confidentiality.   This conflict may lead us to examine how password managers might assist in achieving both confidentiality and availability.
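
One way to show students how a password manager resolves this tension is to demonstrate the generation half of the job. The sketch below, using Python’s standard secrets module, is a minimal illustration only; real password managers also store the generated passwords securely behind a single master password, which is what restores easy access for the user.

```python
import secrets
import string


def generate_password(length: int = 20) -> str:
    """Generate a long random password using a cryptographically secure RNG.

    Long, random and unique-per-site passwords support confidentiality;
    letting the password manager remember them (rather than the user)
    keeps the account easy to access.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))


print(generate_password())  # e.g. a 20-character string, different every run
```

In class this takes seconds to run and usually prompts the obvious follow-up question of how anyone could remember such a password, which leads neatly into discussing the master password and the manager itself.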

The main aim of the first session will be to get students to consider their technological safety in greater detail and depth than they may have done previously.   It is also hoped that this first session will allow for group discussion and debate, setting the tone for the more moral or ethical discussions which will be needed in later sessions.

You can access the basic PowerPoint (yes, I know, a PowerPoint!   I have just used it to create a basic framework and have no intention of death by PowerPoint) related to session one here.

I would welcome any thoughts or comments.