I have been asked to step into the breach, so to speak, for the “learning analytics interventions should always be mediated by a human” debate later this week at Digifest.
The structure for the debate is as follows:
The machine will argue that it can use learning analytics to provide timely and effective interventions to students, improving their chances of achieving better qualifications. Machines don’t forget or get sick; learning analytics is more accurate and not prejudiced; and there is evidence for automated interventions.
The human will argue that although machines can make predictions, they will never be 100% accurate; only a person can factor in personal circumstances; automated interventions could be demotivating; and automated interventions are not ethical.
Fortunately for me, I have been given the human side of the debate. Unfortunately for the organisers, Leanne Etheridge is no longer able to attend. Leanne, I will do my best.
Preparation for the debate has started already with this blog post from Richard Palmer, aka “the opposition”. In order to get my thoughts into some kind of order for Wednesday morning’s debate, I’m going to try and outline my reactions to the provocations in the post by my learned colleague.
Richard has outlined three key areas where he believes there is increased potential for data-driven system interventions.
1. First of all, humans have a long history of believing that when certain things have always been done in one way, they should stay that way, far beyond the point where they need to be. . . . If you look at Luddite rebellions, we thought that it should always be a human being who stretched wool over looms and now everyone agrees that’s an outdated concept. So, deciding that something needs to be done by a human because it always has been done by a human seems, at best, misguided.
2. Secondly, people object that the technology isn’t good enough. That may, possibly, be the case right now but it is unlikely to be the case in the future. . . Technologies will improve. Learning analytics will become more advanced. The data that we hold about our students will become more predictive, the predictions we make will be better and at some point institutions will decide where their cost benefit line is and whether everything does have to be human-mediated.
3. Thirdly, how good do we actually think people are? Certainly, human beings can empathise and pick up on non-verbal or even non-data-related signals from other people, but when was the last time a computer turned up to work hungover? Or stressed or worried about something – or just didn’t turn up at all?. . . . Will a computer ever be better than the perfect person? Maybe, maybe not. But, let’s face it, people aren’t perfect. . . .We worry about computers sending insensitively worded emails and inappropriate interventions but we all know human beings who are poor communicators, who are just as capable, if not more, of being insensitive.
Where to start? Well, despite us pesky humans almost falling at the first hurdle by not being able to be there in person – so unreliable! – we can pick up a challenge and a thread from where our colleagues have left off without the need for any additional programming. I don’t know what Leanne was going to say, but I really like the two quotes for the two slides she has selected. (I detect an air of confidence from only two slides!)
“It is the supreme art of the teacher to awaken joy in creative expression and knowledge.” – Albert Einstein
“Every student can learn, just not on the same day, or in the same way.” – George Evans
Going back to Richard’s post, I believe there is a truly pressing need to challenge this apparently sensible, logical narrative. The narrative being spun around data and analytics is becoming an ever more complex web for us to break out of. But break out of it we must! To paraphrase Paulo Freire, it is time for some critical analytics. It is time to seriously consider the analytics of the oppressed.
Point 1 – On humans: “deciding that something needs to be done by a human because it always has been done by a human seems, at best, misguided.” I always worry when the Luddite card gets pulled into play. The negative connotations it carries dismiss the many, many skilled craftspeople who were actually fighting for their livelihoods and their craft. Audrey Watters explained this perfectly in her 2014 ALT-C keynote, Ed-Tech’s Monsters.
“The Luddites sought to protect their livelihoods, and they demanded higher wages in the midst of economic upheaval.”
Sound familiar? It strikes me as uncannily similar to our current union campaigns for fair pay and to stamp out the casualisation of academic staff contracts. But it’s OK, because the overriding managerial narrative is that data can help us rationalise, to streamline our processes. It’s been a while since Freire wrote this, but again it rings true today.
Our advanced technological society is rapidly making objects of us and subtly programming us into conformity to the logic of its system. To the degree that this happens, we are also becoming submerged in a new “Culture of Silence”.
Point 2 – On technology not being good enough “Technologies will improve. Learning analytics will become more advanced. The data that we hold about our students will become more predictive, the predictions we make will be better and at some point institutions will decide where their cost benefit line is and whether everything does have to be human-mediated.”
Data about our students will be more predictive? Our predictions will be “better” – better at doing what? Better at showing us the things we want to see? Getting our student “customers” through their “student success journeys” without any difficult interrogations, without the right to fail? Or actually stopping someone from starting or continuing their educational journey because their data isn’t the “right fit”?
The promise of increasing personalisation fits into an overwhelming narrative from ed tech companies that is permeating governments, funding bodies and university leaders. Personalisation is the future of education. Personalised alerts are the natural progression to student success. But are they just another form of manipulation? Assuaging the seemingly endless collective need to measure, monitor, fitbit-itize the educational experience? The words of Freire again ring true.
One of the methods of manipulation is to inoculate individuals with the bourgeois appetite for personal success. This manipulation is sometimes carried out directly by the elites and sometimes indirectly, through populist leaders.
Point 3 – Just how good are people anyway? We don’t turn up, we get ill and we are biased. Well, all of those apply to most systems I’ve ever interacted with. Our own biases are intrinsically linked to the systems we develop and to the interpretations of data we choose to accept. As Freire said:
One cannot conceive of objectivity without subjectivity.
I cannot agree that the downside of machine interventions is “no worse than humans doing it badly”. Surely we need to be engaging critically to ensure that no human or machine is doing anything “badly”.
The “system” should not just be replicating current bad practice. Data should provide us with new ways to encourage a richer dialogue about education and knowledge. Learning analytics can’t just be a way to develop alerting and intervention systems that provide an illusion of understanding, and that acquiesce to not particularly well-thought-out, government-driven monitoring processes such as the TEF.
In these days of alternative facts and distrust of expert knowledge, human intervention is more crucial than ever. Human intervention is not just an ethical issue, it’s a moral imperative. We need to care, our students need to care, our society needs to care. I’ll end now with the words of the Cassandra of EdTech, Audrey Watters:
In order to automate education, must we see knowledge in a certain way, as certain: atomistic, programmable, deliverable, hierarchical, fixed, measurable, non-negotiable? In order to automate that knowledge, what happens to care?
Many make the assumption that analytics are unbiased too. Let’s also not forget that the machines are “programmed” by humans, and that the algorithms will always, in some way, be politically motivated or hold a bias from their originator.
absolutely Lawrie – was saving that for the day😀
oh, sorry, feel free to delete 😉
no, it’s fine – you’re the only person that reads this blog anyway!
Great post Sheila. Work has been shared between humans and machines for a long time; it’s just that the pace has been gathering recently 🙂 On one level, the decision is how to share this work for the benefit of humanity – e.g. we could delegate some work to machines to leave more time for the bits that humans do best – but I don’t think that is always what is happening.
I am convinced by what Richard Edwards says “the work of the knowledge infrastructures of open education results in an inherent inscrutability within its practices, which is elusive in terms of significance, processes and effects.” http://www.tandfonline.com/doi/full/10.1080/17439884.2015.1006131
My personal concern is that the qualitative research that online and open education cry out for is neglected in favour of quantitative research, for which the data is more easily found and analysed. This is an example of the streetlight effect 🙂 https://en.wikipedia.org/wiki/Streetlight_effect
Thanks Frances – will try and weave this into the debate on Wednesday too. I agree re research, we can’t just go for the “low hanging fruit”.