If the product works, but what about the people?

This is probably going to be an even more incoherent ramble than normal, but I have been trying to write posts around a number of things for the last couple of weeks, so I’m going to try and merge them.

A couple of weeks ago, I read this post by David Wiley. At the time I tweeted:

I confess to more than a bit of this sentiment, and not just in relation to OER: “Much of the OER movement has a bad attitude about platforms.” I am always wary when the focus is on developing platforms and not on developing the people who will use them.

I was once in a meeting where I put forward the “people and process not platforms and products” case. I was told that what was being discussed was platform “in the Californian sense of platform”. . . I’m sure a classic WTF look must have passed over my face, but it was explained that this meant people as well as technology. Geography aside, three years later this sense of platform doesn’t seem to be that widespread or acknowledged. Maybe I need to go to California. But I digress.

Not long before the Wiley post I was reading the Pearson White Paper on learning design. It caused me a bit of unease too. Part of me was delighted to see learning design being recognised by a significant player in the education technology provider field, whatever might happen to them. Using learning design to inform product design is a bit of a no-brainer. Technology should be driven by educational need, or as Pearson put it:

“Products and systems that effectively leverage learning design can deliver superior learning outcomes.”

One example in the paper referred to work they had done in social science classes:

“we quickly recognized that students were easily distracted by conventional textbooks. This told us we needed to eliminate distractions: any extraneous cognitive load that doesn’t promote learning. Fortunately, our learning design work reveals many proven techniques for accomplishing this. REVEL segments all content into manageable pieces and presents it via a consistent structure. It provides strong signaling cues to highlight key material and places all relevant content on screen simultaneously to offer a continuous, uninterrupted experience”

Which kind of related to this point from the Wiley post:

“Our fixation on discovery and assembly also distracts us from other serious platform needs – like platforms for the collaborative development of OER and open assessments (assessments are the lifeblood of this new generation of platforms), where faculty and students can work together to create and update the core materials that support learning in our institutions. Our work in OER will never be truly sustainable until faculty and students jointly own this process, and that can’t happen until a new category of tools emerges that enables and supports this critical work. (Grant money for OER creation won’t last forever.)

And don’t even start trying to explain how the LMS is the answer. Just don’t. “

Well of course Pearson do try to explain that:

“As testing progresses, we can overcome problems that compromise outcomes and build a strong case that our design will support learning. The very same work also helps us tightly define assessments to find out if the product works in real classrooms”

Of course they don’t really touch on the OER aspect (although all their learning design stuff has been made available with CC goodness), but I’ll come back to that.

That phrase “if the product works” is the one I keep coming back to. So on the one hand I have to be pleased that Pearson are recognising learning design. I have no argument with their core principles; I agree with them all. But I am still left with a niggle around the assumption that the platform will “do” all the learning design for both staff and students. That underlying assumption that if only we had the right platform all would be well, everything could be personalised through data and analytics, and we’d have no retention issues. That niggles me.

I was part of a plenary panel at the HESPA conference last week called “the future of learner analytics”, where a number of these issues came up again. The questions asked by this group of educational planners really stimulated a lot of debate. On reflection I was maybe a bit of a broken record. I kept coming back not to platforms but to people, and more importantly time. We really need to give our staff and students (but particularly our staff) time to engage with learning analytics. Alongside the technical infrastructure for learning analytics we need to be asking: where’s the CPD planning for analytics? They need to go hand in hand. Cathy Gunn, Jenny McDonald and John Milne’s excellent paper “the missing link for learning from analytics” sums this up perfectly:

“there is a pressing need to add professional development and strategies to engage teachers to the growing range of learning analytics initiatives. If these areas are not addressed, adoption of the quality systems and tools that are currently available or under development may remain in the domain of the researchers and data analysis experts”

There seems to be an assumption that personalisation of learning is a “good thing” but is it?  Going back to learning design, designing engaging learning activities is probably more worthwhile and ultimately more useful to students and society than trying to create homogenised, personalised chunked up content and assessments.  Designing to create more effective engagement with assessment and feedback is, imho, always going to be more effective than trying to design the perfect assessment platform.

In terms of assessment, early last week I was also at a Scotbug (our regional Blackboard user group) meeting, where I was in a group where we had to design an assessment system. This is what we came up with – the flipped assessment – aka student generated assessments.


Not new, but based on pedagogy and technology that is already in use (NB there’s been a really great discussion around some of this on the ALT list this weekend). I don’t think we need any new platforms for this type of approach to assessment and feedback, but we do need to think about learning design (which encapsulates assessment design) more, and give staff more time for CPD so they can engage with the design process and the technologies they either have to use or want to use. This of course all relates to digital capability and capacity building.

So whilst we’re thinking about next gen platforms and learning environments, please let’s not forget people. Let’s keep pressing for time for staff CPD to allow the culture shifts to happen around understanding the value of OER, of sharing, of taking time to engage with learning design, and not just having to tweak modules when there’s a bit of down time.

People are the most important part of any learning environment – next gen, this gen, past gen. But people need time to evolve too; we can’t forget them, or try to design out the need for them, for successful learning and teaching to take place. Ultimately it’s people that will make the product work.

Summary of #GCUGamesOn Evaluation Findings

As promised, this post shares the summary findings from our recent online event, GCU Games On. As I’ve written about before, we developed this very quickly (a month from idea to online), so we were very aware of some of the pedagogic shortcomings of our overall design. However, given the rapid development time at the start of the summer holidays, when most of our subject experts were away, we had to make some very pragmatic design decisions.

Overall the feedback was pretty positive, and the whole experience is helping to shape our developing strategy for open, online courses. (NB the text below has been adapted from an internal report.)

Background

GCU Games On was an open online event designed to celebrate, explore and share experiences during the Glasgow 2014 Commonwealth Games. It ran between 16 July and 8 August 2014.

Instigated by the PVC Learning and Student Experience, it was developed in little over a month. Due to the time constraints (one month from idea to being available openly online) a simple design was developed, which included: background and contextual information with relevant links, making a wish on our digital wishing trees, at least one Twitter-based activity, and a medal quiz challenge each week. Sharing experiences of Glasgow 2014 via Twitter was encouraged each week, and daily email updates were sent to all registered participants.

The event was delivered via the new Blackboard Open Education platform.

Participation

  • Registrations: 211
  • Countries: 12 excluding the UK
  • Digital Badges issued: 174
  • Tweets: 424
  • Digital wishes: 107

Evaluation

Of the 211 registrations, 22 completed the survey giving a 10.4% response rate. In addition, due to the use of social media (and in particular, twitter) a number of informal responses to the event were shared.
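For anyone who wants to check the arithmetic, the response rate quoted above is straightforward to reproduce:

```python
# Survey response rate: completed responses as a percentage of registrations.
registrations = 211
responses = 22
rate = round(100 * responses / registrations, 1)
print(f"{rate}%")  # 10.4%
```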

Summary Findings

The majority of respondents to the survey were female, aged between 25 and 65, based in the UK, with no connection to GCU. The majority of participants were based in the UK, with 36% each based in Glasgow and the rest of Scotland. 18% of respondents were from the rest of the UK, and there were equal numbers (4.5%) of respondents from other Commonwealth countries and non-Commonwealth countries. From registration information we know we had registrations from Australia, India, Trinidad & Tobago, Ireland, Israel, Denmark, Canada, Italy, New Zealand, Spain and South Korea.

59% of respondents had no connection with GCU, and 45% cited wanting to experience online learning at GCU as their main reason for participating. The vast majority of respondents had some form of formal educational qualification, 45% up to Masters level. This correlates with general trends in open online courses, but may also reflect a network effect from the Blended Learning Team’s network and promotion of the event. 95% of respondents found the site easy or partially easy to use, and 54% completed all of the activities.

Open feedback was generally positive about the experience.

“I really enjoyed this as a bit of fun.  What I got out of it most was seeing new blackboard system in operation and it looks and feels very impressive.”

“I think looking at the Twitter feed this was spot on for what it was trying to achieve. Much fun was had by all it seems and the course gave a great scaffold to talk about their experiences at the games.”

“I do know it is hard to pull together a learning experience around an event like this and I guess that was weakness of this approach.  At times I think really perhaps due to lack of substance or clear learning outcomes – the learning design was a bit hit or miss – but I think you did achieve outcome of getting folks to engage with learning platform which was I think what it was about rather than the content”

 

GCU Games On Gold Medal


What Sheila’s seen this week

This week started with the 2nd Open Data Glasgow meet-up on Monday night. There was a fascinating range of presentations, which Lorna Campbell has helpfully summarised in this blog post.

Duncan Bain’s presentation on open approaches to architecture provoked a lot of discussion around the cultural barriers to adopting openness. In particular, comparisons were made between software development, with its common sharing of code, and the lack of similar sharing in architecture. Given the impact buildings have on all our lives, having more collaborative, open approaches does seem to make perfect sense – but when did that make a difference anywhere 🙂

Hearing an architect talking about design patterns and co-design approaches was also quite a change for me, as my introduction to these concepts came through research around learning design, where these ideas of design language have been “appropriated” (or should I say re-used?) and are being used fairly successfully. The overall concepts certainly cross over well.

On Monday I also came across the QAA report on Students’ Expectations and Perceptions of HE, and I’ve been having some great Twitter conversations with Peter Reed and Mark Stubbs about what Mark calls technology “hygiene factors”, which are all too often not given the recognition they need. Peter has been sharing the findings of surveys he’s conducted with staff and students around their use of TEL, and he helpfully produced this post contextualising the hygiene issues too.

I found Peter’s findings around students’ expectations of lecture capture particularly revealing:

“the most striking thing for me is that so many HEIs appear to buying into incredibly expensive, sophisticated lecture capture systems. Internal work at Liverpool costed out what it would take to rig out all our lecture rooms – the cost was around £4 million. In actual fact, the majority of students would just prefer simple audio sync’ed with the slides, which can be achieved for about £30k (I think)”

Lecture capture is something that is on our agenda here at GCU; like most institutions, we’ve had/are having mixed responses. The University of Leicester held a “great debate” on the issue this week too; Grainne Connole’s post summarises the outcome. It’s also worth checking out Alan Cann’s What’s wrong with lecture capture post, summarising his experiences and contribution to the debate.

More steps towards wysiwyg widget authoring

One of the problems with being part of an innovation centre like CETIS is that we suffer a bit from the Dory complex. For those of you unfamiliar with this concept, it is based on the character Dory in the movie Finding Nemo, who is rather easily distracted by new things. Sometimes we find that “stuff” drops off our radar as we move on to the next shiny thing. So it is always great when we get a chance to be involved in a development for a sustained period of time. An example of this for me is the WIDGaT widget authoring tool and its development team at the University of Teesside.

The WIDE project was part of the Jisc DVLE programme which I supported, and developed a number of fully accessible widgets. The team then got further funding and were able to develop their methodology and practice into an authoring tool for widgets.

Earlier this week I joined the team and about 25 others for a “WIDGaT in Practice” workshop. We had a chance to see some examples of widgets from both the HE and FE sectors, and were able to get hands-on and create our own widgets. Having taken part in their design bash day about 18 months ago to help the team scope the design for the authoring tool, it was great to see, and have a play with, a usable tool which pretty much covered all the design elements the “expert” group came up with.

There are a number of pre-built templates to choose from, or you can start with a blank canvas. One of the common designs from practitioners is time/task management widgets to help students be more independent in their studies/life. We were shown a number of examples, including a really nice, simple visual reminder of key steps for each day for a student with autism, and another with key stages for final year projects. The editor also includes a number of components, such as embedding YouTube videos and images, and social network components such as Facebook likes and comments. Examples of using these features included a widget which embedded a number of videos with a Facebook comment link, so that students could share comments on content directly into their course Facebook group. There is also a simple quiz component which is also proving popular.

WIDGaT authoring stage


The interface is pretty straightforward, but I did find manipulating things a bit tricky, and the team are working on improving layout options. However, as a quick and easy way to develop and share resources online it does have a lot going for it. It also has a lot of design support functionality built in, to help users think about what they are creating and who they are creating it for.

WIDGaT Persona description function


At #cetis13 next month the team are also running a workshop at the end of day 1, where they will be actively looking for new components to add to the tool as well as any other ideas for enhancements. As the tool is open source, it is a great example for the Open Innovation and Open Development session on day 2.

Prototyping my Cloudworks profile page

Week 5 in #oldsmooc has been all about prototyping. Now I’ve not quite got to the stage of having a design to prototype so I’ve gone back to some of my earlier thoughts around the potential for Cloudworks to be more useful to learners and show alternative views of community, content and activities. I really think that Cloudworks has potential as a kind of portfolio/personal working space particularly for MOOCs.

As I’ve already said, Cloudworks doesn’t have a hierarchical structure; it’s been designed to be more social and flexible, so its navigation is somewhat tricky, particularly if you are using it over a longer time frame than, say, a one or two day workshop. It relies on you as a user to tag and favourite clouds and cloudscapes, but even then, when you’re involved in something like a MOOC, that doesn’t really help you navigate your way around the site. However, Cloudworks does have an open API, and as I’ve demonstrated you can relatively easily produce a mind map view of your clouds, which makes it a bit easier to see your “stuff”. And Tony Hirst has shown how, using the API, you can start to use visualisation techniques to show network views of various kinds.
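To give a flavour of the kind of thing I mean, here is a rough Python sketch of how a flat list of clouds (as an API might return them) could be grouped by tag into the sort of outline a mind map view needs. NB the field names (“title”, “tags”) are my own invention for illustration, not the actual Cloudworks API schema.

```python
# Sketch: grouping a flat, API-style list of clouds by tag into an
# outline a mind map view could render. Field names are assumptions.
from collections import defaultdict

def clouds_to_outline(clouds):
    """Group cloud titles by tag; untagged clouds go under 'untagged'."""
    outline = defaultdict(list)
    for cloud in clouds:
        tags = cloud.get("tags") or ["untagged"]
        for tag in tags:
            outline[tag].append(cloud["title"])
    return dict(outline)

# Made-up sample data, shaped like a JSON API response
sample = [
    {"title": "Week 1 reflections", "tags": ["oldsmooc"]},
    {"title": "Prototype ideas", "tags": ["oldsmooc", "design"]},
    {"title": "Random notes", "tags": []},
]

print(clouds_to_outline(sample))
```

Each tag becomes a branch of the map, with the cloud titles as its leaves – the same grouping a “my stuff” view on the profile page could reuse.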

In a previous post I created a very rough sketch of how some of Tony’s ideas could be incorporated in to a user’s profile page.

Potential Cloudworks Profile page


As part of the prototyping activity I decided to think a bit more about this, and used Balsamiq (one of the tools recommended to us this week) to rough out some ideas in a bit more detail.

The main ideas I had were around redesigning the profile page so it was a bit more useful. Notifications would be really useful, so you could clearly see if anything had been added to any of your clouds or clouds you follow – a bit like Facebook. Also, one thing that does annoy me is the order of the list of my clouds and cloudscapes – it’s alphabetical, but what I really want at the top of the list is either my most recently created or my most active cloud.

In the screenshot below you can see I have an extra click and scroll to get to my most recent cloud via the clouds list. What I tend to do is a bit of circumnavigation via my oldsmooc cloudscape, and hope I have added my clouds to it.

Screen shot of my cloud and cloudscape lists


I think the profile page could be redesigned to make better use of the space (perhaps lose the cloud stream, because I’m not sure it is really useful as it stands), and to have some more useful/usable views of my activity. The three main areas I thought we could start grouping are clouds and cloudscapes (which are already included), plus a community dimension so you can start to see who you are connecting with.

My first attempt:

screen shot of my first Cloudworks mock up


But on reflection – tabs were not a great idea, and to be honest they were in the tutorial, so that’s probably why I used them 🙂

But then I had another go and came up with something slightly different. Here is a video where I explain my thinking a bit more.

cloudworks profile page prototype take 2 from Sheila MacNeill on Vimeo.

Some initial comments from fellow #oldsmooc-ers included:

and you can see more comments in my cloud for the week as well as take 1 of the video.

This all needs a bit more thought – particularly around what is actually feasible in terms of performance and creating “live” visualisations, and indeed about what would actually be most useful. I’ve already been in conversation with Juliette Culver, the original developer of Cloudworks, about some of the more straightforward potential changes, like the re-ordering of cloud lists. I do think that with a bit more development along these lines Cloudworks could become a very important part of a personal learning environment/portfolio.

Ghosts in the machine? #edcmooc

Following on from last week’s post on the #edcmooc, the course itself has turned to explore the notion of MOOCs in the context of utopian/dystopian views of technology and education. The questions I raised in that post are still running through my mind; however, they were at a much more holistic level than a personal one.

This week, I’ve been really trying to think about things from my student (or learner) point of view. Are MOOCs really changing the way I engage with formal education systems? On the one hand yes, as they are allowing me (and thousands of others) to get a taste of courses from well established institutions. At a very surface level who doesn’t want to say they’ve studied at MIT/Stanford/Edinburgh? As I said last week, there’s no fee so less pressure in one sense to explore new areas and if they don’t suit you, there’s no issue in dropping out – well not for the student at this stage anyway. Perhaps in the future, through various analytical methods, serial drop outs will be recognised by “the system” and not be allowed to join courses, or have to start paying to be allowed in.

But on the other hand, is what I’m actually doing really different from what I did at school, when I was an undergraduate, or as a student on “traditional” online, distance courses? Well no, not really. I’m reading selected papers and articles, watching videos, contributing to discussion forums – nothing I’ve not done before, or presented to me in a way that I’ve not seen before. The “go to class” button on the Coursera site does make me giggle tho’, as it’s just soo American, and every time I see it I hear a disembodied American voice. But I digress.

The element of peer review for the final assignment for #edcmooc is something I’ve not done as a student, but it’s not a new concept to me. Despite more information on the site and from the team this week, I’m still not sure how this will actually work, and whether I’ll get my certificate of completion for just posting something online, or if there is a minimum number of reviews I need to get. Like many fellow students, the final assessment is something we have been concerned about from day 1, which seemed to come as a surprise to some of the course team. During the end of week 1 Google hangout the team did try to reassure people, but surely they must have expected that we were going to look at week 5 and the “final assessment” almost before anything else? Students are very pragmatic: if there’s an assessment we want to know the where, when, what, why, who and how as soon as possible. That’s how we’ve been trained (and I use that word very deliberately). Like thousands of others, my whole education career from primary school onwards centred around final grades and exams – so I want to know as much as I can, so I know what to do, so I can pass and get that certificate.

That overriding response to any kind of assessment can very easily eclipse the other softer (but just as worthy) reasons for participation, and the potential of social media to connect and share on an unprecedented level.

As I’ve been reading and watching more dystopian than utopian material, and observing the general MOOC debate taking another turn with the pulling of the Georgia Tech course, I’ve been thinking a lot about the whole experimental nature of MOOCs. We are all just part of a huge experiment just now, students and course teams alike. But we’re not putting very many new elements into the mix, and our pre-determined behaviours are driving our activity. We are, in a sense, all just ghosts in the machine. When we do try to do something different, participation can drop dramatically. I know that I, and lots of my fellow students on #oldsmooc, have struggled to actually complete project-based activities.

The community element of MOOCs can be fascinating, and the use of social network analysis can help to give some insights into activity, patterns of behaviour and connections. But with so many people on a course, is it really possible to make and sustain meaningful connections? From a selfish point of view, having my blog picked up by the #edcmooc news feed has greatly increased my readership and, more importantly, I’m getting comments, which is more meaningful to me than hits. I’ve tried to read other posts too, but in the first week it was really difficult to keep up, so I’ve fallen back to a very pragmatic, reciprocal approach. With so much going on you need strategies to cope, and there is quite a bit of activity around developing a MOOC survival kit, which has come from fellow students.

As the course develops, the initial euphoria and social web activity may well be slowing down. Looking at the Twitter activity, it does look like it is on a downwards trend.

#edcmooc Twitter activity diagram


Monitoring this level of activity is still a challenge for the course team and students alike. This morning my colleague Martin Hawksey and I were talking about this, and speculating that maybe there are valuable lessons we in the education sector can learn from the commercial sector about managing “massive” online campaigns. Martin has also done a huge amount of work aggregating data, and I’d recommend looking at his blogs; this post is a good starting point.
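As a toy illustration of the sort of aggregation behind a “downwards trend” claim, this little Python sketch counts tweets per day and fits a simple least-squares slope; a negative slope backs up the impression from the chart. The dates and counts here are made up, not real #edcmooc data.

```python
# Sketch: count tweets per day, then fit a least-squares trend line.
# A negative slope suggests activity is tailing off.
from collections import Counter

def daily_counts(dates):
    """Count items per ISO date, returned in chronological order."""
    counts = Counter(dates)
    return [counts[d] for d in sorted(counts)]

def trend_slope(ys):
    """Least-squares slope of ys against day index 0..n-1."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Invented data: one date string per tweet
tweets = (["2013-02-04"] * 40 + ["2013-02-05"] * 25
          + ["2013-02-06"] * 15 + ["2013-02-07"] * 8)
counts = daily_counts(tweets)  # [40, 25, 15, 8]
print(counts, trend_slope(counts) < 0)
```

In practice Martin’s aggregation tools do far more than this, but the basic shape of the question (are daily counts falling?) reduces to something this small.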

Listening to the Google hangout session run by the #edcmooc team, they again seemed to have underestimated the time-sink reality of having 41,000 students on a course. Despite being upfront about not being everywhere, the temptation to look must be overwhelming. This was also echoed in the first couple of weeks of #oldsmooc. Interestingly, this week there are teaching assistants and students from the MSc course actively involved in the #edcmooc.

I’ve also been having a play with the data from the Facebook group. I’ve had a bit of interaction there, but not a lot. So despite it being a huge group, I don’t get the impression that, apart from posting links to blogs for the newsfeed, there is a lot of activity or connection. This seems to be reflected in the graphs created from the data.

#edc Facebook group friends connections



This is a view based on friends connections. NB it was very difficult for a data novice like me to get any meaningful view of this group, but I hope that this gives the impression of the massive number of people and relative lack of connections.

There are a few more connections which can be drawn from the interactions data, and my colleague David Sherlock managed to create a view where some clusters are emerging – but with such a huge group it is difficult to read that much into the visualisation, apart from the fact that there are lots of nodes (people).

#edcmooc Facebook group interactions

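To illustrate why a graph like this looks so sparse, here is a tiny Python sketch of graph density: the fraction of possible connections actually present. The numbers are invented, not the real Facebook group data, but they show how a group with thousands of members and relatively few links ends up with a density close to zero.

```python
# Sketch: density of an undirected graph = edges present / edges possible.
# With many nodes and few links, the figure collapses towards zero.

def density(num_nodes, edges):
    """Fraction of possible undirected edges actually present."""
    possible = num_nodes * (num_nodes - 1) / 2
    return len(edges) / possible

# e.g. 1,000 group members with only 500 friendship links between them
edges = [(i, i + 1) for i in range(500)]
print(round(density(1000, edges), 6))
```

Even a thousand-member group with five hundred links has a density of about 0.1% – which is exactly the “lots of nodes, few connections” impression the visualisation gives.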


I don’t think any of this is unique to #edcmooc. We’re all just learning how to design/run and participate at this level. Technology is allowing us to connect and share at a scale unimaginable even 10 years ago, if we have access to it. NB there was a very interesting comment on my blog about us all being digital slaves.

Despite the potential affordances of access at scale, it seems to me we are increasingly just perpetuating an existing system if we don’t take more time to understand the context and consequences of our online connections and communities. I don’t need to connect with 40,000 people, but I do want to understand more about how and why I could/do. That would be a really new element to add to any course, not just MOOCs (and not something that’s just left to a course specifically about analytics). Unless that happens, my primary driver will be that “completion certificate”. In this instance, and many others, to get that I don’t really need to make use of the course community. So I’m just perpetuating an existing system where I know how to play the game, even if its appearance is somewhat disguised.

Learning from our MOOC-stakes and sharing learning designs

It had to happen at some time, and I’m not sure if it was karmic retribution, chaos theory, or plain old sod’s law, but this week the first high profile MOOC collapse occurred with the pulling of Georgia Tech’s Fundamentals of Online Education Coursera MOOC.

As many have already commented, the root of the problem was the actual course design and implementation. From what I have seen on the Twitter and blog-o-spheres, some very fundamental issues, such as trying to promote group work without a clear reason as to why it was necessary, coupled with technical problems with the chosen technology to facilitate the work, and a general lack of guidance and support, all ask questions of the underlying course design and quality assurance processes of (in this instance) Coursera MOOCs. But there are more fundamental questions to be asked about the actual design processes used by the staff involved.

As readers of this blog will know, I’m documenting my own “adventures in mooc-land” at the moment, and I’m in week 4 of #oldsmooc, which is all about learning design. This week is very much focused on the practicalities and planning stages of a design, be that a whole course or an individual activity. The week is led by Professor Diana Laurillard and Dr Nial Winters of the London Knowledge Lab, with Dr Steve Warburton from the University of London.

The week started with a webinar where Diana introduced the PPC (Pedagogical Patterns Collector). Designing for MOOCs was inevitably part of the discussion, and Diana raised some very pertinent points about the feasibility of MOOCs.

which led to these questions

Well, it would seem that the design used by the Georgia Tech course is one that shouldn’t be shared – or is that the case? Elements of what they were suggesting can (and have) worked, even in MOOCs. So can we actually turn this round and use it in a positive way?

I always get a slightly uneasy feeling when people talk about the quality of learning materials, as I’m not convinced there are universal quality controls. What on the surface can look like a badly designed artefact can actually be used as part of a very successful (and high quality) learning experience, even if only to show people what not to do. Perhaps what Coursera need to do now is turn this thing around and be open, so the whole community can learn from the experience. Already many, many experienced teachers have shared their views on what they would have done differently. How about using a tool like the PPC to share the original design and then let others re-design and share it? As George Siemens said so eloquently:

“the gift of our participation is as valuable as the gift of an open course.”

The community can help you Coursera if you let it.