An ethical framework for learning analytics

Judging from the ATD conference agenda, more people are thinking about learning data. That’s a great thing. As the topics there show, we’re on the cusp of doing some really exciting things with it.

Already I’m seeing people pursue some really laterally-minded objectives. I’ve heard of one team looking at how to forecast, from analytics, whether a student will struggle before they do. I’ve spoken to another group who want to keep their trainees safe for decades by tracking how, when and what they train now – in case they learn about risks at some point down the track. These are exciting opportunities to do good for everyone.

That’s why I’m such a fan of the increased availability of data to the training profession. However, I don’t want anyone to imagine that I’m oblivious to the associated risks. When we discuss these topics, the conversation needs to address not only the opportunities but also the risks. I’ve done plenty of the former.

Let’s do some of the latter.

Inevitably, any data collection process raises ethical questions. These most often begin and end with privacy, a concern that applies to any area of data collection; learning analytics is no exception. Much has been written about that aspect, so today I’d like to turn our attention to some of the other ethical considerations that are specifically training-related.

Whenever we set out to develop a learning analytics strategy, we must consider these questions as we build (and communicate) our roadmap.

1. Training involves values

No matter how clever we’re getting with learning data, there is still one large blind spot: what’s important? That question always falls to us trainers. No machine can reliably determine what is and is not important.

Sure, we can train AI to make decisions, but who trains it? Yes, a system can analyse data, but who determines when to act on it?

In most cases, these questions don’t have clear-cut answers; they’re value calls. In general, we rely on our values whenever an answer does not follow automatically from the data – that is, whenever we are making an assumption of some kind. No matter how immutable those assumptions may seem, our values change over time. If our systems have assumptions “hard coded” within them, will we even remember what assumptions are there, and how they were reached, when we need to?

So… whose values will we apply?

So, in designing your learning analytics strategy:

  • Identify where a system is making an assumption
  • Institute a process for ensuring that assumption is accepted by others
  • Document it
  • Communicate it
  • Be consistent
  • Implement a periodic review of those assumptions – one way to keep such a register is sketched below
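
To make the documenting and reviewing concrete, here is a minimal sketch of what an assumption register might look like. Everything in it – the AssumptionRecord structure, the field names and the example threshold – is a hypothetical illustration, not something drawn from any particular product.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical sketch of an "assumption register" for a learning analytics
# system: every place the system substitutes a value call for data gets an
# entry, a named owner and a review date.

@dataclass
class AssumptionRecord:
    id: str                       # short handle, e.g. "AT-RISK-THRESHOLD"
    description: str              # the assumption, in plain language
    rationale: str                # how it was reached and who accepted it
    owner: str                    # who signed off on the value call
    agreed_on: date
    review_every_days: int = 365  # forces the periodic review

    def is_due_for_review(self, today: date) -> bool:
        return today >= self.agreed_on + timedelta(days=self.review_every_days)

# Example entry: the cut-off below which a learner is flagged as
# "struggling" is a value call, not something the data dictates.
register = [
    AssumptionRecord(
        id="AT-RISK-THRESHOLD",
        description="Learners under 40% on two consecutive quizzes are flagged as at risk.",
        rationale="Agreed at curriculum review; accepted by the training leads.",
        owner="Head of Learning and Development",
        agreed_on=date(2023, 6, 1),
    ),
]

overdue = [a.id for a in register if a.is_due_for_review(date.today())]
print("Assumptions overdue for review:", overdue)
```

The point of the review date is that the register itself forces the periodic review, rather than relying on anyone remembering what was hard coded and why.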

2. A machine cannot assume responsibility

All training involves taking responsibility: for the use of a trainee’s time, for the quality of the material, for the effectiveness of the training, for the health and well-being of participants, and so on. Even if a tool can be built that achieves continuous improvement and delivers training in a perfectly safe way, that is not the same as taking responsibility for those outcomes.

Each of us has a basic right to know who is taking responsibility and to be in a position to hold them to account.

  • Ensure you understand the points at which responsibility is being assumed
  • Determine how the data can support the responsible person and furnish him or her with it
  • Ensure that a responsible person is embedded into the process at each stage
  • Communicate where that responsibility lies and allow all participants to hold us accountable (a sketch of one way to make that chain explicit follows)
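
One way to make the chain of responsibility explicit is to refuse to run any stage of an analytics pipeline that lacks a named, accountable person. This is a hypothetical sketch; the stage names and roles are invented for illustration.

```python
# Hypothetical sketch: every stage of the analytics pipeline must name an
# accountable human before the pipeline is allowed to run.

PIPELINE_STAGES = ["collect", "analyse", "recommend", "deliver"]

responsibility = {
    "collect":   "Data Protection Officer",
    "analyse":   "Learning Analytics Lead",
    "recommend": "Course Owner",
    "deliver":   "Lead Trainer",
}

def check_responsibility(stages: list, owners: dict) -> None:
    """Fail loudly if any stage lacks a named, accountable person."""
    missing = [s for s in stages if not owners.get(s)]
    if missing:
        raise RuntimeError(f"No responsible person for stage(s): {missing}")
    for stage in stages:
        print(f"{stage:>9}: accountable person is {owners[stage]}")

check_responsibility(PIPELINE_STAGES, responsibility)
```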

3. A machine does not exercise judgement

No matter how many situations a machine is programmed to handle, there are going to be new ones. There is, as they say, a first time for everything and we should not choose to deal with the unexpected without judgement.

I recall the first time I stood in front of an all-male year ten maths class and found that my first task wasn’t to teach, but to duck the flying furniture… then teach. Fresh out of teacher training, it was not a situation for which I’d prepared! I made a judgement call that led to my emerging unscathed and returning the next day to deliver an effective maths class. I didn’t do it by the book; no one was sent to the principal. That was a judgement call – and it worked.

In theory you could program a system to act as if it had the experience of having once been a year ten maths student itself. Perhaps you could program it to take into account information that goes beyond maths into responding to a challenge to its authority. Perhaps.

That will work really well… if an identical situation ever arises. 😂

  • Use data to improve your judgement calls, then wait for the situations that demand them
  • Expect the tool to fail you from time to time
  • Have in place well-understood mechanisms to manage those failures (one such mechanism is sketched below)
  • Communicate those mechanisms
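
As one illustration of such a mechanism, the sketch below escalates to a human whenever the system meets a situation it does not recognise or is not confident about. The function names, fields and the confidence floor are assumptions for the example, not a prescribed design.

```python
# Hypothetical sketch of a failure-management mechanism: when the system
# meets a novel or low-confidence situation, it escalates to a human
# instead of acting on its own.

CONFIDENCE_FLOOR = 0.8  # itself a value call -- record it in the assumption register

def recommend_action(situation: dict) -> dict:
    confidence = situation.get("model_confidence", 0.0)
    known = situation.get("matches_known_pattern", False)
    if not known or confidence < CONFIDENCE_FLOOR:
        # The unexpected case: hand over to a person, and say so openly.
        return {"action": "escalate_to_trainer",
                "reason": "novel or low-confidence situation"}
    return {"action": situation.get("suggested_action", "proceed"),
            "reason": "within known patterns"}

print(recommend_action({"model_confidence": 0.55, "matches_known_pattern": True}))
# escalates, because confidence is below the agreed floor
```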

4. A machine does not exercise authority

Leaving aside the practicality of preparing a machine for the unexpected, we should not even want to. I don’t care how smart my tools are; I choose to have the right to review their decisions before they deliver them. To do otherwise would not be ethical when my learners expect judgement to underpin the training into which they invest their effort.
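
A minimal sketch of that review right, assuming a simple queue of pending decisions: nothing is delivered to a learner until a named human has approved it. The PendingDecision type and the approve/deliver functions are hypothetical, not an existing library.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of a review gate: a machine-made decision is held
# until a named human approves it, so nothing reaches a learner on the
# machine's authority alone.

@dataclass
class PendingDecision:
    learner: str
    proposed_action: str
    approved_by: Optional[str] = None

    def approve(self, reviewer: str) -> None:
        self.approved_by = reviewer

def deliver(decision: PendingDecision) -> None:
    if decision.approved_by is None:
        raise PermissionError("Refusing to act: no human has reviewed this decision.")
    print(f"Delivering '{decision.proposed_action}' to {decision.learner} "
          f"(approved by {decision.approved_by})")

d = PendingDecision(learner="student-042", proposed_action="assign remedial module")
d.approve("Lead Trainer")
deliver(d)
```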

There are many reasons, but at least one of them should be irrefutable: humans are not to be dictated to by machines. Substituting an algorithm for judgement undermines this principle.

Most people would rebel on principle the moment they believed their humanity was being degraded in this way. Like my year ten boys, we instinctively demonstrate that we have a form of authority the machine cannot control: refusal.

  • Remember that all training establishes power structures (any trainer knows this)
  • It is incumbent on us to ensure that those power structures are ethical
  • The more excited we become by data’s potential, the more we take our eye off the ethics ball
  • Never forget that ethical humans are compelled to resist unethical situations

About the author


Peter Hawkins

Peter is one of Australia’s foremost learning technology and data analytics experts. Originally a maths/science teacher and data analysis researcher, Peter has worked in learning technology since the early 1990s, when he was asked to assist UNESCO to implement remote learning in Africa. He established the first learning technology system for Monash University during his period as an academic in information technology. Peter’s company, Global Vision, was the first to provide LMS services in Australia and now provides a range of learning technologies to organisations nationally, both government and non-government. Peter is a regular contributor to Australia’s xAPI community and is the founder of Australia’s xAPI Trailblazers Group.