I wish I’d said that . . . reflections from #digifest17

You know how it is: despite how much you plan for a debate/live speaking situation, there’s always something that pops into your head on the train home that makes you think, “oh, I wish I’d said that.” Since last week’s digifest I have had several of those moments.

As I wrote about last week, I took part in the “do analytics interventions always need to be mediated by humans” debate. I was defending the motion, and I tried to explain my thoughts in this post. Richard Palmer from Tribal put up a strong case for the other view. In the end, despite me claiming a Trump-like spectacular, popular victory (many people said so), the final vote was pretty close. This was due mainly to the word “always”, and to Richard’s pretty convincing argument that some alerts and “low level” interventions can be automated and so do not “always” need human mediation.

However, of course they do. The final intervention or action arising from any alert has to be mediated by a human: in the context of our debate, that means a student actually doing something as a direct result of that intervention. I wish I’d said that. And if students just ignore the automated alerts/interventions – what then? Are we measuring and monitoring that? And what if all the power goes off? What about alerts then? What happens when a student challenges the alert system for allowing them to fail? Oh, I wish I had said that . . .

We do already alert students in a number of ways, and we need to ensure we are having a dialogue with students so that we all understand which things actually motivate them, and keep motivating them. Otherwise, any student apps/alert systems we produce will just suffer from the Fitbit syndrome, where obsession doesn’t actually lead to motivation but to disengagement.

The other thing – well, it’s actually a word – that I wish I had said was “praxis”. Part of my argument was to draw some comparisons (very quickly and, I confess, somewhat superficially, as I didn’t have a huge amount of time to prepare for the debate) between learning analytics and Freire’s seminal Pedagogy of the Oppressed. I did want to get the notion of praxis into the debate, but on the day it didn’t quite happen. However, Maha Bali picked this up over the weekend and commented on my blog:

“Great title, Sheila, and bringing in Paulo Freire inside it is an additional bonus! I love where you’re going with this but would love it if you had the opportunity to take it further into more of Freire’s ideas with regards to praxis, consciousness-raising and empowerment of the oppressed. . . . What I think is interesting is the thinking of Paul Prinsloo on how to decolonize learning analytics such that learners possibly hold more power/control over their data and how it’s used. This could be a third path…”

I couldn’t agree more. I think it really is time to discuss praxis in this context, which brings me back to the core part of my argument last week: we need to have more debate and dialogue around learning analytics, and around the theoretical approaches we are using to frame those dialogues.

I know this is a sweeping generalisation, please forgive me dear reader, but I do worry that emerging design models, partly driven by more fully online delivery, are defaulting to the now seemingly standard pattern: read/watch, quiz, a bit of “lite” discussion on the side of the page, badge/certificate, and repeat. These patterns are easy to measure, to “alert-ify”. But they are not always the best educational experience.

I missed LAK this year and only saw a few tweets, so I’m sure there is a lot of work going on at much higher levels in the learning analytics community. However, there is still a nagging feeling in the back of my brain that discussion of Bayesian regression modelling remains quite dominant. I know last year at LAK there was a concerted effort to work with the learning sciences community, to bring in more learning theory. But reflecting on last week, it seems to me that behaviourism is going to become (even more) embedded in our systems and in our KPIs, without us actually realising it or having the chance to have an informed dialogue with our practising teachers and students. A post from Doug Clow back in 2011 springs to mind: is the sinister sausage machine here?

Learning analytics, at least in digifest terms, seems to be the current “future now”. There were so many sessions with it as their main theme that it was hard to avoid. On the one hand I think this is great to see: the debate, the dialogues I have been arguing for, are being given a chance to begin. We just need to ensure that they are given enough critical space to continue. And to that end, I guess I should get my “butt into action” and take a bit more time to write something more informed about praxis. In the meantime, here’s a short interview where Richard and I try to summarise our debate.

Time for Analytics of the Oppressed? – my starter for 10 for #digifest debate


I have been asked to step into the breach, so to speak, for the “learning analytics interventions should always be mediated by a human” debate later this week at Digifest.

The structure for the debate is as follows:

The machine will argue they can use learning analytics to provide timely and effective interventions to students improving their chances of achieving better qualifications. Machines don’t forget or get sick; learning analytics is more accurate and not prejudiced; evidence for automated interventions.

The human will argue although machines can make predictions they will never be 100% accurate; only a person can factor personal circumstances; automated interventions could be demotivating; automated interventions are not ethical.

Fortunately for me I have been given the human side of the debate.  Unfortunately for the organisers,  Leanne Etheridge is no longer able to attend.  Leanne, I will do my best.

Preparation for the debate has started already with this blog post from Richard Palmer, aka “the opposition”. In order to get my thoughts into some kind of order for Wednesday morning’s debate, I’m going to try to outline my reactions to the provocations in the post by my learned colleague.

Richard has outlined three key areas where he believes there is increased potential for data-driven system interventions:

1. First of all, humans have a long history of believing that when certain things have always been done in one way, they should stay that way, far beyond the point where they need to be. . . . If you look at Luddite rebellions, we thought that it should always be a human being who stretched wool over looms and now everyone agrees that’s an outdated concept. So, deciding that something needs to be done by a human because it always has been done by a human seems, at best, misguided.

2. Secondly, people object that the technology isn’t good enough. That may, possibly, be the case right now but it is unlikely to be the case in the future. . . Technologies will improve. Learning analytics will become more advanced. The data that we hold about our students will become more predictive, the predictions we make will be better and at some point institutions will decide where their cost benefit line is and whether everything does have to be human-mediated.

3. Thirdly, how good do we actually think people are? Certainly, human beings can empathise and pick up on non-verbal or even non-data-related signals from other people, but when was the last time a computer turned up to work hungover? Or stressed or worried about something – or just didn’t turn up at all?. . . . Will a computer ever be better than the perfect person? Maybe, maybe not. But, let’s face it, people aren’t perfect. . . .We worry about computers sending insensitively worded emails and inappropriate interventions but we all know human beings who are poor communicators, who are just as capable, if not more, of being insensitive.

Where to start? Well, despite us pesky humans almost falling at the first hurdle by not being able to be there in person – so unreliable! – we can pick up a challenge and a thread from where our colleagues have left off without the need for any additional programming. I don’t know what Leanne was going to say, but I really like the two quotes on the two slides she selected. (I detect an air of confidence from only two slides!)

“It is the supreme art of the teacher to awaken joy in creative expression and knowledge” – Albert Einstein

“Every student can learn, just not on the same day, or in the same way” – George Evans

Going back to Richard’s post, I believe there is a truly pressing need to challenge this apparently sensible, logical narrative. The narrative being spun around data and analytics is becoming an ever more complex web for us to break out of. But break out of it we must! To paraphrase Paulo Freire, it is time for some critical analytics. It is time to seriously consider the analytics of the oppressed.

Point 1 – On humans: “deciding that something needs to be done by a human because it always has been done by a human seems, at best, misguided.” I always worry when the Luddite card gets pulled into play. The negative connotations it carries negate the many, many skilled craftspeople who were actually fighting for their livelihoods, their craft. Audrey Watters explained this perfectly in her 2014 ALTC keynote, Ed Tech Monsters:

“The Luddites sought to protect their livelihoods, and they demanded higher wages in the midst of economic upheaval,”

Sound familiar? It strikes me as uncannily similar to our current union campaigns for fair pay and to stamp out casualisation of academic staff contracts. But it’s OK, because the overriding managerial narrative is that data can help us rationalise, to streamline our processes. It’s been a while since Freire wrote the following, but again it rings true today:

Our advanced technological society is rapidly making objects of us and subtly programming us into conformity to the logic of its system. To the degree that this happens, we are also becoming submerged in a new “Culture of Silence”.

Point 2 – On technology not being good enough: “Technologies will improve. Learning analytics will become more advanced. The data that we hold about our students will become more predictive, the predictions we make will be better and at some point institutions will decide where their cost benefit line is and whether everything does have to be human-mediated.”

Data about our students will be more predictive? Our predictions will be “better” – better at doing what? Better at showing us the things we want to see? Getting our student “customers” through their “student success journeys” without any difficult interrogations, without the right to fail? Or actually stopping someone starting or continuing their educational journey because their data isn’t the “right fit”?

The promise of increasing personalisation fits into an overwhelming narrative from ed tech companies that is permeating through governments, funding bodies and university leaders: personalisation is the future of education; personalised alerts are the natural progression to student success. But are they just another form of manipulation, assuaging the seemingly endless collective need to measure, monitor and fitbit-itize the educational experience? The words of Freire again ring true:

One of the methods of manipulation is to inoculate individuals with the bourgeois appetite for personal success. This manipulation is sometimes carried out directly by the elites and sometimes indirectly, through populist leaders.

Point 3 – Just how good are people anyway? We don’t turn up, we get ill and we are biased. Well, all of those apply to most systems I’ve ever interacted with. Our own biases are intrinsically linked to the systems we develop and to the interpretations of data we choose to accept. As Freire said:

One cannot conceive of objectivity without subjectivity

I cannot agree that the downside of machine interventions is “no worse than humans doing it badly”. Surely we need to be engaging critically to ensure that no human or machine is doing anything “badly”.

The “system” should not just be replicating current bad practice. Data should provide us with new ways to encourage a richer dialogue about education and knowledge. Learning analytics can’t just be a way to develop alerting and intervention systems that provide an illusion of understanding, and that acquiesce to not particularly well thought out, government-driven monitoring processes such as the TEF.

In these days of alternative facts and distrust of expert knowledge, human intervention is more crucial than ever. Human intervention is not just an ethical issue; it’s a moral imperative. We need to care, our students need to care, our society needs to care. I’ll end now with the words of the Cassandra of EdTech, Audrey Watters:

In order to automate education, must we see knowledge in a certain way, as certain: atomistic, programmable, deliverable, hierarchical, fixed, measurable, non-negotiable? In order to automate that knowledge, what happens to care?

Clawing my way up through the trough of disillusionment with learning analytics

(image: Gartner hype cycle, by Jeremykemp at English Wikipedia, GFDL (http://www.gnu.org/copyleft/fdl.html) or CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0), via Wikimedia Commons)

Warning – this is a bit of a moan post.

Last week I attended the Jisc Learning Analytics Network meeting. It was a really good day: lots of people there, lots of good sharing, moaning, asking where next-ing. One of the reasons I find these events useful is that they help focus my mind and give me a sense of relief that the challenges I face are similar to, if not exactly the same as, those facing many others in the sector.

In terms of learning analytics, my experiences to date have been metaphor-tastic: (ever decreasing) circles, slopes, dead ends, stop-starts . . . so I feel it’s appropriate to reflect on my journey via the well-trodden Gartner hype cycle.

I’m the first to admit I enjoyed being swept up to the peak of inflated expectations. Exploring the potential of data and learning analytics was probably the last piece of innovation work I was involved in when I worked with Cetis. I really enjoyed trying to figure out the practical applications and meanings for mainstream learning and teaching of the swirly twirly graphs at early LAK conferences. It was great to support the emerging UK community via early SoLAR meetings, and I learnt a huge amount being involved in the Cetis Analytics Series. I always think I brought a healthy degree of scepticism to some of the hype around learning analytics, but I could (and still can) see the benefits of extracting, exploring and understanding data around learning and teaching.

From the giddy heights of the peak of inflated expectations, I knew when I moved to a “proper job” within a university I would have a bit of a slide down the slope to the trough of disillusionment. It’s getting out of the trough that I’m finding real difficulty with. Changes in senior management have meant going through a bit of a treadmill in terms of gaining institutional support and understanding – and that’s before even accessing any data.

The Jisc Effective Analytics Programme has been a bit of a ray of light and hope for me. Towards the end of last year we took part in the Discovery phase of the programme. This involved a consultancy exercise, onsite for three days with a cross section of institutional stakeholders, to assess our “readiness” for analytics. At the end of the exercise we got a report with our readiness matrix and some recommendations. You can view our report here.

At the meeting last week, a number of institutions who have gone through the Discovery phase took part in a panel discussion about the experience. One common thread was the reassurance the exercise gave everyone that they were “on the right track”. I was pleasantly surprised that we got such a good score in terms of our cultural readiness. The validation of having an external report from a nationally recognised agency such as Jisc is also incredibly useful for those of us on the ground when reminding/cajoling (hitting people over the head – oh wait, that’s only in my dreams) colleagues about what we should be doing next.

I think one of the main problems with analytics is finding a starting point, and going through the Discovery phase does give a number of starting points. My frustration just now is that my institution is going through a major rethink of our overall data architecture. So on the one hand I think “hurrah”, because that does need to be done. On the other, I feel I am almost back to square one: in terms of “business needs”, anything to do with learning and teaching seems to fall off the list of things that need to be done pretty quickly. It’s difficult to juggle priorities. What is more important: getting our admissions process working more efficiently, or developing ways to understand what happens when students are engaging (or not) with modules and the rest of the “stuff” that happens at university? Or updating our student record system, or updating our finance systems?

Amidst all this it was good to get a day out to find out what others are up to in the sector. Thanks Jisc for providing these networking events – they really are so useful for the sector, and long may they continue. UEL, who hosted the event, have been doing some great work over the past four years around learning analytics, which has emerged from their original BI work with Jisc. The work they have been doing around module attendance (via their swipe card system and VLE data) and performance is something I hope we can do here at GCU sometime soon.

In the morning we got updates from three mini projects Jisc have just funded, starting with the University of Greenwich and their investigations into module survey results and learning outcomes; the team explain more in this blog post. I was also very interested in the student workload model mini project being developed at the OU. You can read more about it here.

The other mini project, from the University of Edinburgh, was interesting too, but in a different way. It is more what I would term a pure LA research project, with lots of text data mining and regression modelling of (MOOC) discussion forums. Part of me is fascinated by all of this “clever stuff”, but equally part of me thinks that I will never be able to use any of it in my day job. We don’t have huge discussion forums; in fact we are seeing (and in many ways encouraging) less use of them (even with our limited data views I know that) and more use of wikis and blogs for reflection and discussion. Maybe these techniques will work in these areas too – I hope so – but sometimes thinking about that really does make my head hurt.

I hope that we can start moving on our pilot work around learning analytics soon. ’Til then, I will hang on in there and continue my slow climb up the slope, and maybe one day arrive at the plateau.

Looking in the mirror to discover our institutional capability for learning analytics

(image: mirror, CC Share Alike, https://commons.wikimedia.org/wiki/File:Mirror_fretwork_english_looking-glass.png)

It’s been a busy week here at GCU Blended Learning Towers. We’ve just finished the onsite part of the Jisc Effective Analytics Programme, so this week has been a flurry of workshops and interviews led by the consulting team of Andy Ramsden and Steve Bailey. Although Andy and Steve work for Blackboard, the discovery phase is “platform agnostic” and is as much about culture and people as technology; indeed, the evaluation rubric used had more about culture and people than technology. Having a team who really understand the UK HE sector was very reassuring. Sadly, it’s not often you can say that about vendors and HE.

I think GCU is the second institution to go through the discovery process, and I know there are quite a few others who will be doing the same over the next six months. The process is pretty straightforward and is outlined in the diagram below.

discovery process diagram

A core team from the institution has two online meetings with the consulting team, and relevant institutional policy/strategy documentation is reviewed before the onsite visit. At the end of the onsite visit an overall recommendation is shared with early findings, before a final report is given to the institution.

I was pleased (probably slightly relieved too) that we got a “ready with recommendations”.  That’s what we were hoping for.

Although we are still awaiting the final report, the process has already been incredibly useful. It has allowed us to bring together some of our key stakeholders and to (re)start conversations about the potential and importance of learning analytics, and about the need to develop our infrastructure, people and processes so we can use our data more effectively. The final report will also be really helpful in terms of focusing our next steps.

Andy described the process as a bit like “holding a mirror to ourselves”, which is pretty accurate. The process hasn’t brought up issues we weren’t aware of: we know our underlying IT infrastructure needs “sorting”, and we are starting to do that. What it has done is illustrate some potential areas to help us focus our next steps. In a sense it has helped us not so much to see the forest for the trees, but rather shown us some twinkling lights and pathways through the forest.

All dashboards but no (meaningful) data – more on our #learninganalytics journey

Back in March I blogged about the start of our journey here at GCU into learning analytics. We had just produced our annual blended learning report, which had some headline stats around learning and teaching activity. As I said in that post, the figures we are getting are not that accurate, and extracting and making sense of the data from numerous sources has been no mean feat for those involved. Since then we have been making good progress in moving things forward internally, so this post is really an update on where we are.

When I was working on the Cetis Analytics Series, I remember Jean Mutton of the University of Derby telling me about the power of “data driven conversations”. I now have a far greater understanding of exactly what she meant. Since instigating initial discussions about the where, what, why, when, who and how of our data, we’ve been having some really productive discussions, mainly with our IS department and, most importantly, with one of our Business Analysts, Ken Fraser, who is now my new BFF 🙂

Ken has totally risen to our data challenge, exploring our data sets and sprinkling a bit of BI magic over things. Like many institutions, we populate our VLE automagically via our student record system (SRS), which is really our primary data source for student information. However, actual student activity is recorded in other systems, primarily the VLE itself. We haven’t quite cracked the automagic feedback of assessments from the VLE back into our SRS – but again, I don’t think we’re alone there. So any meaningful analytics process needs to draw on both of these data sources (as well as a number of others, but that’s for another post).

We also take a snapshot of our VLE activity every night, which Ken has been churning into a datastore (which is quickly filling up) to see what he can extract. Using Oracle BI systems he has been able to develop a number of dashboards far quicker than I expected. But, and there’s always a but, they are next to meaningless, because the data we are extracting in our snapshot is pretty meaningless: e.g. we can get the total number of users, but it looks like that is the total number of users we’ve had on the system since it was installed. It is also not a real-time process. That’s not a huge issue just now, but we know we have the tools to allow real-time reporting, and ideally that’s what we are aiming for.
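To make the problem concrete, here’s a minimal sketch in Python/pandas – with entirely hypothetical file and column names, since our real snapshot tables look nothing like this tidy – of the difference between the cumulative count the dashboards currently show and the windowed count we actually want:

```python
import pandas as pd

# Hypothetical extract from the nightly VLE snapshot: one row per user
# with their most recent activity timestamp. Column names are illustrative.
snapshot = pd.read_csv("vle_snapshot.csv", parse_dates=["last_access"])

# What the dashboard currently shows: every account ever created on the system.
total_users_ever = snapshot["user_id"].nunique()

# What we actually want: users active within a meaningful window,
# e.g. the last 30 days of the current teaching period.
cutoff = pd.Timestamp.today() - pd.Timedelta(days=30)
recently_active = snapshot.loc[snapshot["last_access"] >= cutoff, "user_id"].nunique()

print(f"Users ever on the system: {total_users_ever}")
print(f"Users active in the last 30 days: {recently_active}")
```

Of course, even a simple windowed count like this is only possible if the snapshot extract captures activity timestamps rather than just cumulative totals – which is exactly the extraction problem we’re working through.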

So we are now exploring the tables in our snapshot from the VLE to see if we can get a more useful data extraction, and thinking about how/if we can normalise the data and make more robust connections to/from our primary data source, the student record system. This is also raising a number of wider issues about our data/information management processes. The cycle of data driven conversations is well and truly in motion.
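Again purely as a sketch (hypothetical column names, and pandas rather than anything we actually run), the kind of join we’re after looks something like this: linking windowed VLE activity back to SRS module enrolments, after normalising the shared student identifier.

```python
import pandas as pd

# Hypothetical extracts: enrolments from the student record system (SRS)
# and per-student activity counts derived from the VLE snapshot datastore.
enrolments = pd.read_csv("srs_enrolments.csv")   # columns: student_id, module_code
vle_activity = pd.read_csv("vle_activity.csv")   # columns: student_id, logins_30d

# Normalise the shared key before joining; identifier formats often
# differ between systems (stray whitespace, case, leading zeros).
for df in (enrolments, vle_activity):
    df["student_id"] = df["student_id"].astype(str).str.strip().str.upper()

# A left join keeps students with no recorded VLE activity visible,
# rather than silently dropping them from any report.
merged = enrolments.merge(vle_activity, on="student_id", how="left")
merged["logins_30d"] = merged["logins_30d"].fillna(0)

# A simple per-module engagement summary for a dashboard.
print(merged.groupby("module_code")["logins_30d"].agg(["mean", "median"]))
```

The left join is the important design choice here: the students with no activity at all are often exactly the ones an intervention process needs to see.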

In terms of learning analytics, we are really at the exploratory, meso stage just now. We are getting lots of support from Bb too, which is very encouraging. It may well be that using their system will be the most useful and cost effective solution for learning analytics in the long run. However, I don’t think we can make that decision until we really understand for ourselves what data we have access to, what we can do with it given our resources, and what we want to do with it. Then I can get back to ranting about big data and thinking about the really important stuff, like just what is learning analytics anyway?

Are we just all data end points?

I’ve had two very contrasting data experiences this week, which are both clarifying and confusing my views on data and learning analytics. Firstly there was the LACE (Learning Analytics Community Exchange) project webinar titled Big Picture of Learning Analytics Interoperability. Brian Kelly has written up the event, and his blog post contains a link to the recording.

If you think about it, interoperability is key to any kind of data and analytical work. However, as the webinar explained, learning analytics has the added complication of the numerous levels and models it can work in and across. The project is very keen to engage stakeholders around these concepts, but I think they are suffering from the classic chicken-and-egg scenario just now: they want to engage with the community, but some of the abstract terms make it difficult for the community (and I include myself here) to engage, so they need real examples. I’m not sure right now how I can engage with these large concepts, though in my next post, where I’ll update on the work we’re doing here at GCU, it might become clearer. I am very keen to be part of/track this community, so I guess I need to try harder to engage with the higher level concepts.

Anyway, as you’ll know, dear reader, I have been experimenting with visual note taking, so I used the webinar yesterday to do just that. It’s an interesting experience, as it does make you listen in a different way. Asking questions is also kind of hard when you are trying to capture the wider conversation. This is my naive view of the webinar.

Visual notes from LACE webinar

In contrast, the University of Edinburgh’s “Digital Scholarship Day of Ideas: Data” had a line-up of speakers looking at data in quite a different way. Luckily for me, and others, the event was live streamed, and the recording will be available on the website over the next few days. Nicola Osborne was also in attendance and live blogging – well worth a read whilst waiting for the videos to be uploaded.

A common theme for most of the speakers was the exploration of the assumption that data is neutral. Being a digital humanities conference, that’s hardly surprising, but there were key messages coming through that I wish every wannabe and self-proclaimed “big data guru” could be exposed to and take heed of. Data isn’t neutral, and just because you put “big” in front of it doesn’t change that. It is always filtered, and not always in a good way. I loved how Annette Markham described how advertisers can use data to flatten and equalise human experience, and her point that not all human experiences can be reduced to data end points, however much advertisers selling an increasingly homogenised, consumerist view of the world want them to be.

This resonated in particular with me as I continue to develop my thoughts around learning analytics. I don’t want to (or believe you can) reduce learning to data end points with a set of algorithms that can “fix” things, i.e. learner behaviour. But at the same time, I do believe we can make more use of the data we collect to help us understand what is going on, what works, what doesn’t, and to allow us to ask more questions around our learning environments. And by that I mean a holistic view of the learning environment: the one the individual develops themselves as much as the physical and digital environments they find themselves in. I don’t want a homogenised education system, but at the same time I want to believe that using data more effectively could allow our heterogeneity to flourish. Or am I just kidding myself? I think I need to have a nice cup of tea and think about this some more. In the meantime, I’d love to hear any views you may have.


Exploring the digital university – next steps: digital university ecosystems?

Regular readers of this (and my previous) blog will know that exploring the notion of just what a digital university is, and c/should be, is an ongoing interest of mine. Over the past couple of years my colleague Bill Johnston and I have shared our thinking around the development of a model for exploring notions of the digital university. The original series of blog posts got very high viewing figures and generated quite a bit of discussion via comments, and we’ve developed the posts into a number of conference presentations and papers. But the most exciting and rewarding development was when Keith Smyth from Edinburgh Napier University contacted us about the posts in relation to their strategic thinking and development around their digital future, which in turn will help them figure out what their vision of a digital university will look like.

For the past year Bill and I have been critical friends to Napier’s Digital Futures Working Group. This cross-institutional group was tasked with reviewing current practice and areas of activity relating to digital engagement, innovation and digital skills development, and with identifying short term initiatives to build on current practice as well as proposing possible future developments and opportunities. These will be shared by Napier over the coming months. Being part of the Napier initiative has encouraged me to try to develop a similar approach here at GCU. I’m delighted that we have senior management backing, and later this month we’ll be running a one day consultation event here.

Earlier this week Bill, Keith and I had a catch-up where we spent quite a bit of time reflecting on “our journey” so far. Partly this was because we have another couple of conference paper submissions to prepare; also, as we now have a very rich set of findings from the Napier experience, we needed to think about our next steps. What can we at GCU learn from the Napier consultation experience? What are the next steps for both institutions? What common issues will emerge? What common solutions/decision points will emerge? What are the best ways to share our findings internally and externally?

As we reflected on where we started, we (well, to be precise, Bill) began to sketch out a kind of process map from where we started (a number of lengthy conversations in the staff kitchen between Bill and me) to where we might be this time next year, when hopefully we will have a set of actions from GCU.

The diagram below is an attempt to replicate Bill’s sketch and outline the phases we have gone through so far: starting with conversations, which evolved into a series of blog posts, which evolved into conference papers/presentations; the blog posts were then spotted by Keith and used as a basis for the development of Napier’s Digital Futures Working Group, which is now being used as an exemplar for work beginning here at GCU.

Stages of the Digital University Conversation

I am more and more convinced that one of the key distinguishing features of a digital university is the ability of staff and students to have a commonly shared articulation and experience of the digitally enabled processes they engage with on a daily basis, and equally a shared understanding of what would be missing if these processes weren’t digitally enabled. You know, the digital day of a student, lecturer or admin person type of thing – but not visions written by “futurologists”; ones written by our staff and students. Alongside this we could have the daily life of the physical spaces we are using. So, for example, we could have overlays of buildings showing not only the footfall of people but also where and when they were accessing our wifi networks etc.

Now, I know we can/could do this already (for example, we already show access/availability of computers in our labs via our website) and/or make pretty good educated guesses about what is happening in general terms. However, it is becoming easier to get more data and, more importantly, to visualise it in ways that encourage questions around “actionable insights”, not only for our digital spaces and digital infrastructure but for our physical ones too. Knowing and sharing the institutional digital footprint is again central to the notion of a digital university.

Alongside this, by using learning analytics techniques can we start to see any correlations around where and why students are online? Can we understand and learn from patterns around access and engagement with learning activities? Are students using our uni-provided spaces and wifi to do the majority of their uni work, or to download “stuff” to listen to/watch/read on the bus? Are they just accessing specialist software/kit? Does it matter if they all have Facebook/YouTube/WhatsApp open all the time, if we are confident (through our enhanced data driven insights) that they are successfully engaging with our programmes, and that they have the digital literacy skills to connect and collaborate with the right people in the right spaces (both on and offline)?

As we were talking, one word kept coming up: ecosystem. It’s maybe a bit old fashioned – I know ecosystems were all the rage a few years ago, particularly in the repository sphere – but we did think that mapping the ecosystem of a digital university could be the next logical step. The ecosystem wouldn’t just be about the technology, infrastructure and data, but the people and processes too. Via the SoLAR discussion list I discovered the Critical Questions for Big Data article by Danah Boyd and Kate Crawford. As part of their conclusions they write:

“Manovich (2011) writes of three classes of people in the realm of Big Data: ‘those who create data (both consciously and by leaving digital footprints), those who have the means to collect it, and those who have expertise to analyze it’. We know that the last group is the smallest, and the most privileged: they are also the ones who get to determine the rules about how Big Data will be used, and who gets to participate.”

In terms of a digital university, I think we need to be doing our utmost to extend membership of that third group. But just now there is a need to raise awareness among everyone of how and where their data is being collected, and to give them a voice in terms of what they think is the best use of it.

What a digital university will actually look like will probably not differ much from what a university looks like today; what will distinguish it is what happens within it, and how everyone in that university interacts and shares through a myriad of digitally enabled processes.