Digital Citizenship

A student asked me in a lesson the other week what “digital citizenship” meant. Up until that point I had considered it a simple extension of citizenship into the technological or digital world, but hadn’t given it much thought beyond this. The question made me think, or at least made me look online for an outline which I could use as a starting point. I found this outline, which can be viewed here.

The website raises the following factors in digital citizenship:

  • Digital Access: The opportunity to access the digital world, digital resources and to get online.
  • Digital Commerce: The ability to conduct commerce, to manage money and to buy and sell items online.
  • Digital Communication: The ability to communicate online through email, SMS, video conferencing and other online media.
  • Digital Literacy: The skills required to use technology and also to learn new technologies as they arise or become required.
  • Digital Etiquette: An understanding of what is right and proper behaviour when conducting yourself in the digital world.
  • Digital Law: An understanding of the laws and regulations which surround the use of online services and resources.
  • Digital Rights and Responsibilities: An understanding of the expectations of users online and also what they can expect from others and the online world.
  • Digital Health and Wellness: An understanding of the physical and mental implications of technology use including digital addiction.
  • Digital Security: The process of how to remain safe and secure when online.

To me the above seems reasonably comprehensive; however, I wonder whether a few further areas might merit inclusion.

  • Digital Implications: An ability to appreciate the wider implications of new technology, seeing beyond its intended purpose to its unintended or secondary consequences.
  • Digital Ethics: The ability to evaluate the ethical and moral issues surrounding technology use, considering not the question of “can we?” but the question of “should we?”.
  • Digital Openness: The ability to read comments from others, such as tweets, and understand that they represent a viewpoint, and that the medium itself may shape how you perceive their meaning and intention. An ability to avoid taking things personally.
  • Digital Resilience: The ability to manage technological failures, difficulties and setbacks, and to move on, trying new things and seeking better solutions. Also linked to digital openness in the ability to move on from negative comments received online.

Have I missed anything?

 


Self Driving Vehicles: Who is responsible?

Self-driving cars and other vehicles have been in the press recently. On BBC Breakfast on the 26th, when reviewing the papers, they discussed an investigation being conducted by an insurer into the damages that might result from various car accident situations. Within the report the key question of who is responsible in the event of an accident was raised. Separately, the BBC discussed the plan to introduce self-driving or platooning articulated lorries to UK roads (read here). The introduction of self-driving cars brings many questions with it.

The BBC Breakfast report specifically stated the need to consider responsibility in the event of an accident involving an autonomous vehicle. The guest, who had a legal background, suggested that under current law the person in control of the vehicle would be deemed responsible. As such, if I were sat in the driver’s seat of a self-driving car I would be responsible. In fact, this seems to suggest that if I set the vehicle’s destination, which could be considered control, then I would be responsible even if I were sat in a rear passenger seat. So would I be responsible for the self-driving taxi I used to get home?

In the event of a hacker compromising a car’s systems, responsibility seems rather clear: this would represent a criminal act and therefore the hacker would be responsible. The other possibility is that the manufacturer failed to exercise sufficient security precautions to protect their vehicle from cyber-attack, leading to partial responsibility on their part. This seemingly simple picture is quickly complicated if we consider that the systems in a self-driving car are likely to be like other computer systems: they will need updating. So in the event of an accident due to a car running out-of-date software, or where the system was compromised because it had not received the latest security patches, who would be responsible?

This brings us to what I consider the biggest question in the use of self-driving or autonomous cars: how will the car decide who lives or dies in the event of a serious accident? Consider this: an accident is unavoidable, but the car has a choice of crashing into a group of around 10 people, where serious injuries are likely, or crashing into a cyclist travelling in the opposite direction, resulting in guaranteed death. Which should the car choose? Does changing the number of people in the group upwards to 100, or down to 5, make a difference? A variant of the above might be that the car can choose either to crash into the group of people or to crash in such a way as to kill its own occupants, for example by driving into the sea or over a cliff. Does the fact that the death would be that of the car’s occupant, who is nominally in control of the car, make a difference to the car’s decision-making process?

The above questions and scenarios are very difficult for us as humans to answer, and likely to stimulate some debate, yet it will be human computer programmers who have to put together the code that makes these decisions. Will these programmers be responsible for the acts of the cars for which they provide the software?
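To make the programmer's dilemma concrete, here is a purely illustrative thought-experiment sketch, not representative of how any real autonomous-vehicle system works; the class names, options and probability figures are my own assumptions. The point is how much ethics hides inside one innocuous-looking line of code:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """A possible crash outcome the planner could steer towards."""
    description: str
    people_affected: int
    probability_of_death: float  # chance, 0.0 to 1.0, that each person dies

def expected_fatalities(outcome: Outcome) -> float:
    """Naive expected-harm score: people affected times chance each dies."""
    return outcome.people_affected * outcome.probability_of_death

def choose_outcome(options: list[Outcome]) -> Outcome:
    """Pick the option with the lowest expected fatalities.

    This one line hides every question raised above: should 10 likely
    injuries outweigh 1 certain death? Should the car's occupant count
    for more, or less, than a bystander?
    """
    return min(options, key=expected_fatalities)

# The scenario from the text, with invented probabilities:
options = [
    Outcome("swerve into group of 10", people_affected=10, probability_of_death=0.2),
    Outcome("hit oncoming cyclist", people_affected=1, probability_of_death=1.0),
]
print(choose_outcome(options).description)  # → swerve into group of 10? No: 2.0 vs 1.0
```

With these invented numbers the group scores 2.0 expected deaths against the cyclist's 1.0, so this particular metric kills the cyclist, and nudging the probabilities flips the answer entirely. That sensitivity is precisely why the responsibility question is so hard.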

Widespread use of self-driving vehicles looks to be a highly likely part of the next five years; however, before this happens there are still a lot of unanswered questions, especially ethical ones.

Basic Tech Safety

In developing a series of sessions on digital literacy, I thought a good place to start would be basic computer safety, including password management. Ahead of this comes an initial discussion with students to identify the risks and implications of using technology where no consideration has been given to computer safety and security.

The areas which I consider to represent the basic elements of safety are:

  1. Password and account management
  2. Risk associated with website access
  3. Social media dangers
  4. The danger of the ubiquitous use of email
  5. Data loss from mobile devices, portable storage or storage failure.

In discussing each I use the CIA acronym as a structure for examining the risks and safety measures. CIA refers to Confidentiality, Integrity and Availability. In discussing password management, confidentiality may lead us to consider how we keep usernames and passwords confidential so that our files remain confidential. It may also lead us to discuss availability, in that as users we want easy access to our data, and therefore shorter, easier-to-remember usernames and passwords seem preferable, yet this runs contrary to the need for confidentiality. This conflict may lead us to examine how password managers might help achieve both confidentiality and availability.
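The confidentiality-versus-availability trade-off can be demonstrated to students with a short sketch using Python's standard `secrets` module. The two generators below are my own minimal examples, and the tiny word list is purely illustrative; a real passphrase should draw from a list of thousands of words:

```python
import secrets
import string

def random_password(length: int = 16) -> str:
    """Strong but hard to remember: favours confidentiality."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

def passphrase(words: list[str], count: int = 4) -> str:
    """Easier to remember: trades some per-character entropy
    for availability (ease of recall and typing)."""
    return "-".join(secrets.choice(words) for _ in range(count))

# Illustrative only: a real word list would hold thousands of entries.
wordlist = ["correct", "horse", "battery", "staple", "orbit", "lemon"]
print(random_password())
print(passphrase(wordlist))
```

A password manager resolves the tension a third way: it stores long random passwords like the first kind, so the user only has to remember one strong passphrase of the second kind.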

The main aim of the first session will be to get students to consider their technological safety in greater detail and depth than they may have done previously. It is also hoped that this first session will allow for group discussion and debate, setting the tone for the debate which will be needed on some of the more moral or ethical topics in later sessions.

You can access the basic PowerPoint (yes, I know, a PowerPoint! I have just used it to create a basic framework and have no intention of death by PowerPoint) related to session one here.

I would welcome any thoughts or comments.

 

 

A digital literacy programme

I am currently in the process of preparing a programme of lessons for 6th form students, focusing on preparing them to live in an increasingly digital and technological world. The first part of my planning is to decide on the specific topic areas which merit discussion. Currently my thinking is to include the below:

Basic internet safety

The basics of internet safety including passwords, phishing, etc.

Cyber security and internet safety

Examination of some of the more technical aspects of cyber security including the devices we use at home and the increasing prevalence of the Internet of Things (IoT).

Privacy and public safety

Discussion of the paradox of privacy and online security versus public safety.

Digital Profiles

Why establishing an online profile might be important and things to consider in developing an online presence.

Disconnecting and the risks of addiction

Managing our technology so it doesn’t become addictive, and understanding how our technology use might shape our behaviours and habits.

Managing our data

Understanding our data and how it may be stored and used by others and the resulting implications.   Also consideration of machine learning and how it can impact on individuals.

Social Media as a collaboration tool

Discussion of how social media can be used for much more than sharing funny cat videos.

Googling It

Discussion of the benefits of Google as a source of information, along with the potential risks.

The Internet of Things

Examination of the Internet of Things, with its potential benefits and risks.

Other emerging technologies

Discussion of emerging technologies such as VR and AR.

 

Now, the above are just my initial rough ideas for topic areas. Over the coming weeks I hope to flesh them out a little further and add some meat to the bones; in the meantime I would appreciate any thoughts or comments on the areas which you think need including.

GP consultations in an app: what next?

Part of being digitally literate is the need to weigh the pros and cons of emerging online services. I was sat watching TV the other day when an advert popped up for Push Doctor, an app which apparently allows you to access a doctor online rather than visiting a GP’s practice. I smiled as the advert came on, as I have found myself complaining about the difficulty of getting access to a GP on a number of occasions since returning to the UK. You can only get an appointment by phoning up first thing in the morning as an emergency and hoping for an available slot, or by booking weeks if not months in advance. As such, the idea of an on-demand doctor via an app on my smartphone sounds like a good idea; however, is it?

An online doctor can take the same personal history and ask the same diagnostic questions as a GP, but they don’t have physical access to you. They don’t have the ability to carry out a physical examination or to take diagnostic readings such as your blood pressure and heart rate. They also don’t have the relationship which may exist with a long-standing family GP, for those lucky enough to have one. Without that physical access, I am not sure I would feel comfortable with an online doctor prescribing me medication.

I also wonder about the credibility of an online doctor. My GP was appointed to a health practice and therefore will have been vetted by the practice for suitability, experience and skill. They are also tangible, in my ability to actually meet them, see them in the local area, and so on. They have a physicality which an online doctor doesn’t have. They can’t just disappear by disabling an online account, in the way that an online doctor may be able to do.

I think the idea of an online doctor is an excellent one especially when the NHS is as stretched as it is often reported to be.     That said I still think there is some work to be done in winning people over and encouraging people, including myself, to make use of such a service.

Thinking a bit further ahead, I wonder if the solution to the diagnostic-readings side of things might be the increasing number of us wearing fitness devices. Through these, our online doctor might be able to gather rudimentary, and possibly in the future more diagnostic, data such as heart rate, exercise habits, etc. Given the constant monitoring such devices provide, they might eventually enable better diagnosis than the conventional GP can offer.

The online doctor is but one of a number of emerging services which technology is facilitating, however are we ready to accept and use such new services?

GDPR and third party sites

The new GDPR regulations coming into force in May 2018 mean that the potential fines associated with data breaches or other leaks will be greater than those that exist under the current Data Protection Act.

The new regulations also finally make third-party vendors liable where their action or inaction results in the release or leak of data which they are processing on your behalf. This seems like a good thing: if you use a third party and, through no error of your own, their actions lead to a leak of data, they will be held responsible.

The issue here, though, is that the above is only part of the story. Although the third-party vendor may be responsible for the breach, it would have been your responsibility to confirm their compliance with GDPR, and their security and other measures in relation to data, prior to commissioning them to handle your data. Even though the breach or leak may have been due to the action or inaction of a third party, you are going to have to prove that you showed due diligence in checking out the third party and its operations prior to signing them up to process, store or otherwise use your data. If you didn’t, then you too may be found liable and therefore receive what could be a significant fine.

As schools, we use a large number of third-party sites in delivering the educational experience we provide to the students under our care. These might be specific maths or science websites with sample questions or learning materials, or more generic services such as Showbie or G Suite. In each case you will be providing personal information on your students, with some sites requiring more data than others. And in each case you will need to prove that you undertook at least a basic review of the data safety and security provision offered by each site or service.

With this in mind the key questions I see the need to ask a third party are:

  • Do you share my data or allow others to access my data? If so, with whom and why?
  • What security do you have in place (physical and logical) to protect my data?
  • What disaster recovery and backup process do you have in place?
  • How long do you retain data and what happens to data should I quit your service?
  • Do I have the right to audit or request the audit of your data security provision?
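One practical way to evidence due diligence is simply to record each vendor's answers to the questions above in a consistent form. The sketch below is my own minimal illustration, not a GDPR-mandated format; the field names are shorthand for the five questions, not official terminology:

```python
from dataclasses import dataclass, fields

@dataclass
class VendorReview:
    """Record of one third-party vendor's answers to our five questions.
    Field names are informal shorthand, not GDPR terminology."""
    vendor: str
    shares_data_with: str      # who the data is shared with, or "nobody"
    security_measures: str     # physical and logical protections in place
    backup_and_recovery: str   # disaster recovery and backup process
    retention_policy: str      # how long data is kept; what happens on exit
    audit_rights: bool         # can we audit, or request an audit of, their provision?

def unanswered(review: VendorReview) -> list[str]:
    """List any questions left blank. Evidence of due diligence means
    every text field should hold a substantive answer."""
    return [f.name for f in fields(review)
            if isinstance(getattr(review, f.name), str)
            and not getattr(review, f.name).strip()]
```

A review with any unanswered fields flags a vendor you cannot yet show you vetted, which is exactly the gap that could leave the school sharing liability for a breach.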

As we approach the May implementation date for GDPR, we need to ensure we have a better handle on where school data, that of students, staff, parents, visitors and other stakeholders, is stored. Part of this will involve identifying all third-party vendors and asking them about their preparedness for GDPR.

 

My data?

A recent BBC News article highlighted a US judge’s decision to allow data gathered from a defendant’s pacemaker to be admissible in court (you can read the article here). The data in question was used by an expert witness to cast doubt on the defendant’s explanation of the events surrounding the case in hand. The issue here is the gathering of data for one purpose, measuring the defendant’s vital signs in order to aid medical treatment and diagnosis, versus its eventual use to prove what he was doing during a specific period of time in relation to a criminal prosecution. Surely data gathered from a device in my body would constitute “my data”, and its use would therefore be for me to decide or approve.

This incident seems to go against the basic rules of the Data Protection Act, and also the upcoming General Data Protection Regulation due to come into effect in May 2018, in that the eventual usage of the data did not relate to its original purpose. The permission required for storage and usage of the data would have been limited to that purpose. Now, there are exceptions for law enforcement in relation to protecting society which may have come into play; plus, the incident happened in the US and I don’t have any experience of the US equivalent of the Data Protection Act. However, I would assume the similarities likely far outweigh the differences.

This case seems to suggest that it may be possible for data gathered to be used for purposes other than that for which it is intended or for which permission was obtained.     All that is required is some justification of need.    This seems vague and particularly concerning.

So what about the Amazon Echo sitting in the front room recording every comment, discussion and noise occurring in my house? What about the camera in a smart TV equipped with gesture control, or the Kinect device attached to my son’s Xbox One? What about the engine management unit or GPS unit in my car, the data my smartwatch gathers, or info from my Fitbit or other fitness tracking device? We may be happy about these devices gathering data for their intended purposes, but what about the purposes to which the data could be put, which we cannot yet predict? I am sure the bloke with the pacemaker couldn’t have predicted he might be convicted based on data his pacemaker gathered. How might a hacker or someone else with malicious intent use the data which is available?

As we work with students to develop them into digitally and technologically literate individuals, we need to discuss the above.

Are we happy with so much data being gathered, stored and processed on us by third parties? Do we truly understand how that data is or can be used?