I spent the early part of this week in Milton Keynes with the OER Research Hub team as part of the second phase of the project evaluation.
When I worked with the team last year one of the things that intrigued me about the project was the fact that they were planning to apply and adapt an agile programming approach to the project.
As I pointed out then, I felt there could be challenges with this as typically the outputs from research projects aren’t as concrete as most software development products, but I could see the attraction of this approach.
Bringing researchers who form part of a globally distributed team together for set periods to focus on certain aspects of a research project does make sense, as does having some kind of structure, particularly for focusing "group minds" on potential outputs (products); an adaptation of pair programming could also be useful for peer review. However, implementing a "proper" agile programming methodology in research is problematic.
But if we stick with the programming analogy, stop thinking in terms of products, and start thinking of research as a service (akin to software as a service, SaaS), then maybe there is more mileage. A key part of SaaS approaches is the API, allowing hooks into all sorts of sites/services so that they can in effect talk to each other.
The key thing therefore is for the researcher to think of themselves more as the interface between their work, the data, the findings, the “what actually happened in the classroom” bits and focus on ways to allow as wide a range of stakeholders to easily “hook” into them so they can use the outputs meaningfully in their own context.
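To make the analogy a little more concrete, here is a minimal, purely illustrative sketch of what "research as an API" might look like if taken literally. None of this corresponds to any real OER Research Hub system; the data, function names, and structure are all hypothetical, chosen only to show the idea of different stakeholders "hooking into" findings in their own context.

```python
# Purely illustrative sketch of the "research as API" analogy.
# All names and data below are hypothetical, not from the project.

FINDINGS = {
    "educators": {"summary": "adapting OER for local contexts", "responses": 120},
    "learners": {"summary": "using OER to supplement formal study", "responses": 300},
}

def hook(stakeholder, detail="summary"):
    """Let a stakeholder 'hook into' the findings most relevant to them.

    Returns None if there is nothing collected for that stakeholder yet,
    mirroring how an API degrades gracefully for unknown queries.
    """
    record = FINDINGS.get(stakeholder)
    if record is None:
        return None
    return record.get(detail)

# Different stakeholders pull out what they need, in the form they need it.
print(hook("educators"))              # a headline summary for educators
print(hook("learners", "responses"))  # the underlying response count
```

The point of the sketch is only that the researcher, like the `hook` function, sits between the raw data and the many audiences who want different slices of it.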
In many ways this is actually the basis of effective digital scholarship in any discipline and of course what many researchers already do.
A year on, and after experiencing one of the early project sprints, how has it worked out?
Well, everyone knew that the project wouldn't be following a strict agile methodology; however, key aspects, such as the research sprints, have proved to be very effective, particularly in focusing the team on outputs.
The sprints have allowed the overall project management to be more agile and flexible. They have brought focus and helped the team as a whole stay on track, but also refocus activity in light of the challenges (staff changes, delays to getting surveys started, etc.) that any research project has to deal with. As this is very much a global research project, the team have spent large chunks of time on research visits, going to conferences and so on, so when they are "back at the ranch" it has been crucial that they have a mechanism not only to report back and update their own activities but also to ensure that everyone is on track in terms of the project as a whole.
The sprints themselves haven't been easy, and have required a lot of planning and management. The researchers themselves admit to often feeling resentment at having to take a week out of "doing work" to participate in sprints. However, there is now an acknowledgement that they have been central to ensuring that the project as a whole stays on track and that deliverables are delivered.
I was struck this week by how naturally the team talked about the focus of their next sprint and how comfortable and, perhaps more importantly, confident they were about what was achievable. It's not been easy, but I think the development and sustaining of the research sprint approach over the project lifespan has paid dividends.
Returning to the wider API issue, last year I wrote:
I wonder if the research as API analogy could help focus development of sharing research outputs and developing really effective interactions with research data and findings?
Again, one year on, can I answer my own question? Well, I think I can. From discussions with the team it is clear that human relationships have been key in developing both the planned and unexpected collaborations the project has been undertaking. At the outset of the project a number of key communities/agencies were identified as potential collaborators. Some of these collaborators had a clear idea of the research they needed, others not so much. In every case the research team have indeed been acting as "hooks" into the project and overall data collection strategy.
These human relationships have been crucial in focusing data collection and forging very positive and trusted relationships between the Hub and its collaborators. Having these strong relationships is vital for any future research and indeed, a number of the collaborations have extended their own research focus and are looking to work with the individual team members on new projects. As findings are coming through, the Hub are helping to stimulate more research into the impact of OER and support an emerging research community.
One of the initial premises for the project was the lack of high-quality research into the impact of OER; the team are not only filling that gap but are now also working with the community to extend the research. Their current Open Research course is another example of the project providing more hooks into their research, tools and data for the wider community.
The project is now entering a new phase, where it is in many ways transitioning from a focus on collecting the data, to now sharing the data and their findings. They are now actually becoming a research hub, as opposed to being a project talking about how they are going to be a hub. In this phase the open API analogy (imho) can only get stronger. If it doesn’t then everyone loses, not just the project, but the wider open education community.
The project does have some compelling evidence of the impact of using OER on both educators and learners (data spoiler alert: some of the differences between these groups may surprise you), potential viable business models for OER, and some of the challenges, particularly around encouraging people to create and share back their own OERs. For me this is particularly exciting, as the project has some "proper" evidence, as opposed to anecdotes, showing the cultural impact OER is having on educational practice.
In terms of data, the OER Impact Map is a key hook for visualizing and exploring the data the project has been collecting and curating. Another phase of development is about to get under way to provide even more ways to explore the data. The team are also now planning the how/where/when of releasing their dataset.
The team are the human face of the data, and their explanations of the data will be key to the overall success of the project over the coming months.
More thoughts to come from me on the project as a whole, my role and agile evaluation in my next post.