Paper notes: Beheshitha et al. (2015)

An interesting paper from Dragan Gašević & colleagues presented at LAK15. This quantitative study looked at SRL behaviours – complementary to our qualitative approach.

Beheshitha, S.S., Gašević, D., & Hatala, M. (2015). A process mining approach to linking the study of aptitude and event facets of self-regulated learning. In Proceedings of the Fifth International Conference on Learning Analytics and Knowledge (LAK '15), 16-20 March 2015, Poughkeepsie, NY, USA.


SRL can be thought of in terms of aptitudes (stable differences between individuals, commonly measured by self-report) or events (the actions learners execute). Trace data can be analysed to associate SRL events with sequences/combinations of actions in online environments, and these can then be related to SRL aptitudes (self-report). RQ1: Can we identify groups of learners with different aptitudes (the study looked specifically at deep/surface approaches to learning, and goal orientation)? RQ2: Do these groups behave differently? Design: participants filled in questionnaires, then used nStudy (monitored tasks included: bookmarking and organising resources, highlighting and quoting key points, taking notes, defining terms, and writing a report; these were then classified as rehearsal, elaboration, or organisation). N=20. Cluster analysis allowed classification of learners as deep or surface, but was not able to identify different types of goal orientation. Comparing surface and deep learners, different patterns were seen: deep learners chose a strategy and stuck to it, and their strategies were more focused on elaboration; surface learners used more organisation, and were more likely to adopt different strategies.
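The clustering step can be sketched roughly in Python: a k-means clustering over self-report sub-scale scores, analogous to grouping learners as deep or surface. Everything here is illustrative (synthetic scores for 20 hypothetical learners), not the paper's actual instrument, data, or clustering method.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic self-report scores for 20 hypothetical learners (1-5 Likert means);
# columns: [deep_approach, surface_approach] -- illustrative sub-scales only
deep = rng.normal(loc=[4.2, 2.0], scale=0.3, size=(10, 2))
surface = rng.normal(loc=[2.2, 4.0], scale=0.3, size=(10, 2))
scores = np.vstack([deep, surface])

# Two-cluster solution, analogous to the paper's deep/surface split
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scores)
labels = km.labels_
print(labels)
```

In the paper the aptitude clusters were then compared against trace-data event patterns; here the two-cluster solution simply recovers the two simulated aptitude profiles.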


An interesting complement to our PL-MOOC work, which also seeks to measure (self-reported) aptitude and then link it to SRL behaviours, though in our case the behaviours are also self-reported, collected via interview (as opposed to mined from trace data). Very preliminary results, but the method shows promise, and the lit. review provides pointers to some interesting recent SRL research.

Follow up

Hadwin, A.F., Nesbit, J.C., Jamieson-Noel, D., Code, J., & Winne, P.H. (2007). Examining trace data to explore self-regulated learning. Metacognition & Learning, 2(2-3), 107-124.

Bannert, M., Reimann, P., & Sonnenberg, C. (2014). Process mining techniques for analysing patterns and strategies in students' self-regulated learning. Metacognition & Learning, 9, 161-185.

MOOC Research Conference


Back a few days from MRI13, and now that the experience has settled in somewhat I thought I would try to reflect on the conference and what I learned about the current MOOC landscape and its impact on our research here at the Caledonian Academy.

First off, thanks to George Siemens and Co. for bringing together a diverse group of people with key knowledge and perspectives, and no shortage of ideas and opinions. I thought the conference was just about the perfect size, though the attendees were sometimes a little thinly spread due to the many parallel sessions. This also meant I only saw a minority of presentations from other MRI grantees (even though I prioritised these over presentations which might have had more general appeal to me), so I need to go back to the abstracts and hope I can find some online presentations for the ones I missed (ours – Professional Learning through Massive Open Online Courses – is here).

The conference started off on a high with a charismatic performance from Jim Groom describing his work supporting learners as creators at UMW.
In the CETIS PLE project (Milligan et al., 2006; Wilson et al., 2007) we described a utopia of learner-centred learning in a mature online landscape, but my own experience at institutional level has taught me how difficult this is to achieve at scale (is it a coincidence that my work has moved into more informal contexts – professional and workplace learning?). Jim's work has shown that learner-centred learning can be supported within formal settings. However, opening the conference with a presentation showing what learning could be like backfired a little for me because over the next few days I saw too much evidence of learning taking second place to other considerations – content, delivery, administration, data. At points I was left wondering whether the legacy of the MOOC bubble of the last few years will be a million hours of talking-heads video lectures.

In some presentations I was left wondering whether the whole canon of literature on online and distance learning had been cast aside. A key element of our project has been to try to understand the MOOC we are studying as an online learning event (before then going on to explore the learning occurring within it). Having a robust theoretical base is essential for us as we go forward to design tools and recommendations intended to inform the design of future MOOCs. Another disappointment was that some of the presentations seemed a little dumbed-down. In one presentation we got the 'MOOC: every letter is negotiable' line, while one presenter asked us to think about when a cMOOC would be best and when an xMOOC was appropriate (as if I'm not allowed to consider a million other ways of delivering a course online). Surely a MOOC research conference can support a base level of dialogue beyond this.

Having complained (I did enjoy the conference), I should say that I saw some really thought-provoking presentations that I will follow up on (see abstracts for the MRI grantees here – sorry, I couldn't find presentation links).

  • Bruno Poellhuber described an MRI study based across a number of Canadian Universities using SRL to explore motivation and engagement in MOOCs. Over the years, our work has been focused increasingly on motivation and I think it is a key issue in MOOC research, and one which has to be addressed by any MOOC provider or designer.
  • Rebecca Eynon and Nabeel Gillani from the University of Oxford presented the most interesting SNA focused talk I attended. Part of their study focused on the vulnerability of networks, removing nodes to show how those networks dissolved or degraded. This is a mixed methods study, combining big data analysis with qualitative approaches (interviews) to really understand what is going on in the course they are studying.
  • Martin Weller presented data from two complementary studies. In the first, his colleague Katy Jordan had collected and analysed a large amount of data on MOOC participation. The emerging visualisations (some with large error bars) begin to highlight some key patterns: for example, it looks like you can now start to predict participation rates in MOOCs. [update: see Martin's blog post on Completion Data] In the second, Martin showed how he had adapted course analysis approaches used at the OU in an attempt to describe the form of a range of MOOCs. [update: see Martin's blog post on Learning Design of MOOCs] Although this work is still in its early stages (I liked the fact that the findings presented by all the MRI grantees at the conference were 'interim'), I think it shows great promise.
  • Although not a MRI grantee, I liked Shirley Alexander’s presentation describing the signature pedagogy (Learning 2014) being implemented at UTS in Australia. I suppose that brings us back to Jim Groom and learner centred learning. And away from MOOCs.
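The network-vulnerability idea Eynon and Gillani described (remove nodes, watch how the network dissolves or degrades) is easy to sketch. The graph model, sizes, and removal strategy below are my own illustration using networkx, not their study's data or method details.

```python
import networkx as nx

# Toy stand-in for a MOOC interaction network: a Barabási-Albert graph,
# which has a few highly connected 'hub' participants (illustrative only)
G = nx.barabasi_albert_graph(n=200, m=2, seed=42)

def largest_component_fraction(graph: nx.Graph) -> float:
    """Fraction of nodes sitting in the largest connected component."""
    if graph.number_of_nodes() == 0:
        return 0.0
    largest = max(nx.connected_components(graph), key=len)
    return len(largest) / graph.number_of_nodes()

# Remove the ten highest-degree nodes (the most connected participants)
# and measure how the network degrades
hubs = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:10]
H = G.copy()
H.remove_nodes_from(node for node, _ in hubs)

before = largest_component_fraction(G)
after = largest_component_fraction(H)
print(f"largest component fraction: {before:.2f} -> {after:.2f}")
```

Repeating the removal with random nodes instead of hubs would show the classic scale-free result: such networks tend to be robust to random loss but vulnerable to targeted removal of hubs.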

I’m looking forward to the MRI projects coming to fruition. I hope they make their research results easy to find online.

Personally, it was great to think and talk about research for an extended period of time (both at the conference and at the airport/on the flights at either end). It was also great to meet other researchers in the community and make some personal connections. The main lesson I learned: while we don't do 'big data' analysis at the Caledonian Academy, that shouldn't stop us thinking about how we can complement our studies with data-driven ones. I think that in the long run, combined approaches (cf. the Oxford study) are going to be the most informative.


From consumption to curation – but where are we headed?


We have previously described four learning behaviours which we feel describe the ways in which an individual engages with their personal and work learning environment/personal learning network (see Littlejohn, Milligan and Margaryan, 2011, and Milligan, Littlejohn and Margaryan, 2012, for academic papers). We term these behaviours the 4c's: consume, connect, create and contribute. We tend to argue that learning networks and personal learning environments facilitate connection, creation and contribution, whereas traditional learning environments (and their associated pedagogies) were more focused on consumption.

Although we've settled on 4c's, a fifth behaviour, curation, describes an important and emergent activity in social networks which has clear overlap with these. This blog post (long and hastily written) captures some of my current thinking about tools in this area.

In the good old days we saved bookmarks. To our own computers. Then along came delicious, allowing public sharing of bookmarks and, through tags, providing a great mechanism for discovering trusted knowledge sources and gaining an insight into how other people in our networks conceptualised and structured knowledge. I was an avid user of delicious from its early days – and I think it is an incredibly powerful social tool for learners and for learning. But I hardly use delicious anymore …

I used to have a simple workflow – find things via Google Reader, and save everything useful to delicious, organising with different tags. I shared stuff with individuals in my close personal network using the special for:username tag, and I used to make notes about the resources as I saved them in delicious. This is incredibly powerful in learning terms … Going back to the 4c's: I consumed content; I connected it to my existing knowledge structures (by tagging) and to people in my network (using the for:username tag); I created new knowledge associated with the new resources I'd identified (by appending notes); and making all this public through an open social tool represented my contribution back to my learning network, and beyond.

Sometime in the last two years or so, my delicious tagging started to slow considerably and my sharing fell to zero. Sorry network. The reason: the other tools in my personal learning environment evolved, and new ones arrived, while delicious (and Google Reader) stagnated. These new tools have different affordances, but bring some limitations.

In the last 7 or 8 years, the web (and how we interact with it) has moved on, not least with the advent of mobile devices and apps. My simple discover-and-share workflow has evolved – but not for the better. Now I tend to put things in silos – I save recipes to Pocket because they are then easily accessible when I am cooking and have my tablet with me. I store research papers in Mendeley or Zotero where I can search them, music videos go on tumblr, and tips-and-tricks articles to Evernote. I've gradually migrated many of my newsfeeds from Google Reader to Flipboard, where the reading (consumption) interface is much nicer. Managing/curating and consuming my content streams is infinitely easier than it was before. But there's been a silent cost. In some of these new silos I have retained networks, but not in all. Specifically, Flipboard and Pocket feel very personal to me and not social at all (they support consume, but not so much connect).

And what about create and contribute … I really like Pocket, and if I am honest I don’t really mind that it is not ‘social’ (in the short run, I don’t suffer) but one thing that does frustrate me is that I can’t annotate the things I keep there. I want to be able to write ‘halve the quantities’ or ‘use butter not margarine’ somewhere to remind me when I make a recipe again, but this online version of a recipe folder doesn’t give that option (it doesn’t support create). And with it not being social, there is little option for me to contribute. That’s a real shame.

Likewise with Flipboard. Up until now there has been no coherent sharing within Flipboard (you can 'tweet out' single articles from the app), so it was interesting to read today that Flipboard is adding a social layer to its app. This immediately made me think – hey, this is a great way to share stuff I discover – I can curate a bundle of content I've found which is interesting to me and share it with anyone else who is interested (connect). But a quick look at lunchtime confirmed that their view of social still doesn't extend to giving me a 'voice' to annotate my curated selection – there is no facility for creation.

So, where are we? It seems that our social web spaces are increasingly driven by content consumption and curation … and while tools supporting these actions have evolved, this has been at the expense of functionality for creation and contribution. If we are to make use of social web tools for learning, we have to recognise that the functions of the tools we find may not match the balance of functions we would like our learning tools to have.

If anyone knows of a good tool that gets the balance right then I'd love to know. Scoop.it seems to get creation right (allowing annotation/commenting on resources), but I'm not sure about the others – though in truth I haven't yet used it myself, only seen others' scoops.

Littlejohn, A., Milligan, C., & Margaryan, A. (2012). Collective knowledge: Supporting self-regulated learning in the workplace. Journal of Workplace Learning, 24(3), 226-238.

Milligan, C., Margaryan, A., & Littlejohn, A. (2012). Supporting goal formation, sharing and learning of knowledge workers. In Ravenscroft, A. et al. (Eds.), Proceedings of the European Conference on Technology-Enhanced Learning (EC-TEL), LNCS 7563 (pp. 519-524). Heidelberg: Springer. Postprint available under green archiving arrangements.

Designing MOOCs


My previous post describing initial findings from our SRL-MOOC study got a spike of views yesterday, suggesting that someone pointed at it (thank you, whoever you are). One comment, from Felicia Sullivan, asked me to expand on the following statement from the conclusion of that post:

“While I don’t advocate creating rigid structures, I do think there are some simple things that could be done to make sure MOOCs such as Change11 are accessible by the full range of prospective participants.”

I’d been reflecting on what our study has told us about the design of MOOCs and an observation that (in Change11) while some participants found and joined networks without problems, others didn’t seem to find their place in the MOOC community so easily.

There's an implicit assumption in this: that you need to find a network to succeed in a cMOOC. In fact I don't necessarily believe this – we found lurkers, who chose not to attempt to interact with others. These lurkers used the MOOC as a source of knowledge, contributing back at their own level, but not expecting any particular level of engagement with other learners. The group of participants I am most worried about are those who wanted to find a community, but didn't. They'd write blog posts and get frustrated that no one responded, or attempt to engage with fellow MOOCers through commenting on blog posts, but get no reply. After a few failed attempts to interact, these participants gave up trying: the weak ties of the MOOC were too weak. At the extreme were those who expected the organisers to facilitate far more actively*, and those who didn't contribute themselves (through blog posts, comments, or even tweets) but who still expected to benefit from the contributions of others.

What was the difference between those who found and didn’t find networks?

That’s complex. As the name might suggest, our study hypothesised that an individual’s ability to self-regulate their learning might impact their participation in a MOOC. We found that while ability to self-regulate is a factor, a number of other factors are also at play:

  • previous experience of MOOCs: people learn how to learn in MOOCs. We certainly saw a 'type' of participant who knew what to expect from the course, knew what they wanted from the course, and knew how to make their participation a success. Given its size, Change11 probably wasn't a good 'first MOOC' for people to experience.
  • pre-existing networks: one key element that people who had taken previous MOOCs brought to Change 11 was their pre-existing networks. When these people blogged, they already had an audience for their views, because they were part of a network that had been developed through previous courses. These networks, though weak (you might never have met the people who read your blog, and don't communicate with them at all regularly), are resilient: you'll read blog posts from people in your Twitter network, because that content has already been 'filtered' for you – your network is trusted. There's another interesting observation here: your network doesn't necessarily have to consist of people who are studying on your course. We saw different patterns of engagement from respondents who didn't actively try to set up new networks, but let them grow organically through the course, as well as from those who were focused on creating an internal network of fellow participants. Both approaches were successful.
  • expectation and motivation: even among those who had never studied in a MOOC before, there were those who knew what to expect, and who could self-motivate and engineer learning networks. This is partly to do with technology, and partly with learner disposition. I suppose the great unanswered question of our study (we still hope to answer it) is the nature of the inter-relationship between a learner's ability to self-regulate (planning, self-motivating, managing and reflecting on learning) and digital literacies (being able to leverage digital tools and networks to support one's learning).

Going back to MOOC design: how can MOOC designers create environments to accommodate the diversity (in background, motivation, skills, expectations) of learners who participate in these massive open courses?

So how could you achieve this?

I think the cMOOC concept and philosophy is great, but my observation from a number of MOOCs is that by definition, Massive courses bring in learners with a range of backgrounds, previous experience and skill levels, and it is therefore incumbent on the organisers to design a learning experience that accommodates these diverse learner profiles. I think this is particularly critical at the start of the course (in fact I would say that if you get the start right, then the cMOOC model should work once initial networks have established). The start of a MOOC is a big scary place, and providing some hooks for participants to hold onto might be all that is necessary. Here are a few suggestions.

  • cater for different interest groups: even in our relatively small sample (it was primarily a qualitative study, and we interviewed 29 participants) we saw strong evidence of people looking for people like themselves. This was particularly the case with different types of educators: on the whole, the HE participants tended to have the loudest voices (more used to blogging etc.), and we saw evidence of k-12 educators becoming disillusioned with what they perceived as 'noise' on the network – they couldn't find people to identify with, because their voices were drowned out by 'confident' participants from different domains. Creating spaces where people can find others they can identify with would be a really simple step – it might only be a set of hashtags: #change11-k12, #change11-workplace etc. – but it might make all the difference in helping people find others who speak their language, reducing the initial complexity of the MOOC space. Of course you need to guard against homophily – where the great benefit of learning in a Massive course is not realised because everyone is talking to people who are just like them. It is important to have cross-fertilisation of ideas from k-12 to higher education and vice versa, but this can come later in the MOOC, once people have found their footing.
  • goals: going one step further, finding others who have the same expectations of the course as yourself is key to continued motivation. Some of our other work has explored the possibility of using shared goals as a mechanism for bringing learners together and fostering peer-learning and peer-support. Although some of our study participants expressed some resistance to defining goals, there were clear goal types and patterns evident, and these could be used to seed self-organising communities. Getting people to articulate their goals is key to allowing them to find each other.
  • orientation: clearly, many people participating in MOOCs still need to learn how to 'participate in a MOOC'. The Change11 MOOC did provide some orientation, and there are good resources out there (I'm thinking for instance of this excellent YouTube video from Dave Cormier), but this wasn't enough for some MOOC participants, and I think a different approach might be useful. Encouraging participants to seek out others with similar backgrounds or goals (as above) would be one way of doing this; another would be to engineer interaction by setting tasks which demand that participants contribute to the course and interact with others. While these tasks might feel a little artificial (akin to icebreakers at dinner parties) they are essential in helping participants realise the importance of connecting, creating and contributing, in addition to consuming, in a cMOOC.

Returning to Felicia Sullivan’s comment, she asks:

“How do well designed structure, processes and resources aid in self-organization and connectivity?”

Last week we had a colleague Hans de Zwart (Senior Innovation Advisor for Global HR Technologies at Shell) visiting the Caledonian Academy. Hans is interested in DIY Learning (including MOOCs) and one key principle he espouses is that we should put as much effort into designing ‘experiences’ as we do to designing content. Learning is so much more than the content, and it is vital that in MOOCs, organisers create an environment where learning can occur for all those who want to learn, not just for those who already have the skills and literacies at the outset.

I hope this answers the question (or at least takes the debate forward). I’m currently writing a paper on this aspect of our study, so writing up some of the themes and implications here was useful (cathartic!) for me.

* I think for Change 11 (though not necessarily for MOOCs in general) that there is something in this. By having different presenters each week, the course lacked coherence. A greater degree of facilitation by the organisers wouldn’t have gone amiss. 

nb: this was quickly written … and as a blog post rather than a journal article, so please forgive any looseness – particularly in my near-interchangeable use of the terms 'community' and 'network'.

Change 11 SRL-MOOC study: initial findings


As you will remember the Caledonian Academy conducted some research during the recent Change11 MOOC run by George Siemens, Stephen Downes and Dave Cormier.  The study generated a lot of data, which has been sitting on my desk for some months now. The hypothesis for the study was that we would observe different learning behaviours and different approaches to learning in MOOCs among those with different SRL profiles.

What did we do?

SRL Profiles: The first component of the study was to ask participants to complete an SRL profile instrument* we had developed for the study. The instrument was adapted from a number of pre-existing SRL self-report instruments (full details, and a copy of the instrument, are here), most notably the Motivated Strategies for Learning Questionnaire (Pintrich et al., 1991) and a more recent self-directed learning orientation scale developed by Raemdonck (Gijbels et al., 2010). Within the instrument there are a number of sub-scales measuring specific SRL components (task analysis, goal setting, self-reflection etc., from the list of sub-processes described in Zimmerman, 2005). 35 respondents completed an SRL profile.
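Scoring an instrument like this is straightforward: each sub-scale score is conventionally the mean of its item responses. The item names, scale structure, and numbers below are invented for illustration; they are not the actual study instrument.

```python
import pandas as pd

# Hypothetical 1-7 Likert responses from three participants to a toy
# instrument with two sub-scales (item and scale names are illustrative)
responses = pd.DataFrame({
    "goal_setting_1": [6, 4, 2],
    "goal_setting_2": [7, 5, 3],
    "self_reflection_1": [5, 6, 2],
    "self_reflection_2": [4, 6, 1],
})

subscales = {
    "goal_setting": ["goal_setting_1", "goal_setting_2"],
    "self_reflection": ["self_reflection_1", "self_reflection_2"],
}

# A participant's sub-scale score is the mean of its items,
# as is conventional for MSLQ-style self-report instruments
profile = pd.DataFrame({
    name: responses[items].mean(axis=1) for name, items in subscales.items()
})
print(profile)
```

Profiles like these (one row per participant, one column per sub-scale) are what could then be grouped into low/medium/high score bands or fed into a cluster analysis.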

Interview: Everyone who completed an SRL profile was invited to participate in a semi-structured interview. The interviews explored a number of factors, including motivation, previous experience of MOOCs, goal-setting, knowledge sharing, and participation patterns. We conducted 29 interviews, generating 26 hours of interview data.

Emerging themes from the study

I plan to go into detail about many of the findings of the study through a series of posts,  but thought an initial post outlining some of the key things we turned up would be of interest. We identified a number of themes from the interviews, and I shall discuss four briefly here:

  • patterns of engagement: we saw different patterns of engagement. In addition to an expected cluster of lurkers who purposefully did not engage with other course participants, we identified two further groups: one group of passive participants, who expected ‘to be taught’, and viewed the course as a source of information, attempting to capture all the ideas being exchanged within the Change 11 community; and a final group, more active participants, who set their own goals, established connections with other learners and linked these connections with their existing personal learning network. These overall patterns of engagement are mediated by a number of factors including: previous participation in other MOOCs, motivation for participating in the MOOC, makeup of existing personal learning network, SRL profile, and confidence. Lurkers will be present in any online course, and the final group are archetypal cMOOC participants, but those more traditional learners in the middle group really wanted more guidance than they were given.
  • patterns of tool use: we have written before about what we call "the 4c's": consume, connect, create and contribute, as different learning behaviours that individuals use as they learn in networked environments. You can consume content, organise it by connecting content together and to people, create new content (knowledge) of your own, and contribute that new knowledge back into the community. We have collected data about the different tools in use and the types of behaviour they support. To give one example, resource management: we collected data about people using a range of tools to collect resources (bookmarks). While some respondents in our study were using their (single-user) browser bookmark menus (consume), most were using some social service, such as delicious or Diigo (connect). But we also saw people who were interpreting/providing commentary on the content they were collecting (create) and using such tools to 'find their own voice' in the community (contribute).
  • goal-setting behaviour: goal setting is an important task in MOOC participation as, without a closely defined curriculum, it is incumbent on the learner to set and monitor their own goals. Most participants from the medium and high SR score groups set clearly defined goals, though we found no difference in the nature of the goals set, with each group setting both participation- and performance-based goals. Participants in the low SR score group were less likely to have set goals and were more likely to articulate goals categorised as 'vague' when prompted. Several participants in the study described how their goals had changed as they progressed through the course, showing evidence of reflection. We also uncovered evidence of other factors that might contribute to goal-setting behaviour. Four participants in particular argued that the nature of this MOOC (with a broad and only loosely defined curriculum) was incompatible with the idea of 'goal-setting' and had therefore explicitly chosen not to set goals.
  • motivation to participate: one problem with our study which we hadn't anticipated (but perhaps should have) was that individual participants might have quite different (conflicting?) reasons for signing up. While some participants signed up for the content of the course, others (the majority) were primarily or exclusively interested in experiencing the Change 11 MOOC as a learning environment, often because they wanted to implement some of the features of a MOOC in their own practice. Several of the participants had previously participated in other MOOCs and, when discussing their goals, remarked that their goal-setting behaviour had been different in courses with a more defined topic (e.g. mobiMOOC and an ePortfolio MOOC, but others too).

What these results tell us about MOOCs

For me, the recurring theme from this research was that massive courses do need management (of learners, and their expectations), or at least a recognition of the diversity of learner backgrounds, preferences, expectations and motivations that come together in a MOOC, which is then reflected in the design of the learning space that is constructed. I suppose the prevalent cMOOC philosophy is that learners should be left to their own devices and they will find their place in the emerging learning networks (anywhere on the spectrum from lurking to leading). We certainly saw interesting evidence of self-organisation, especially among those who engaged with the course through the Facebook group and the Twitter chats. But our findings indicate that some users either didn't find these emerging networks (or at least didn't identify a network that suited them), or didn't recognise the central role that these networks play in leveraging the value of the course. While I don't advocate creating rigid structures, I do think there are some simple things that could be done to make sure MOOCs such as Change11 are accessible by the full range of prospective participants.

Reflections on this study

I am still analysing the data from this study and we hope to publish the significant findings. The specific nature of the Change 11 MOOC (attracting many participants who wanted to 'experience a MOOC delivered by Downes, Siemens and Cormier') complicates the analysis we can make about SRL behaviours, as the different overall goals people held (primarily interested in 'content' or 'process') influenced their learning behaviour to such a great extent. Stay tuned for further blog posts after the holiday.


Gijbels, D., Raemdonck, I., & Vervecken, D. (2010). Influencing work-related learning: The role of job characteristics and self-directed learning orientation in part-time vocational education. Vocations and Learning, 3, 239-255.

Pintrich, P. R., Smith, D. A. F., Garcia, T., & McKeachie, W. J. (1991). A manual for the use of the Motivated Strategies for Learning Questionnaire (MSLQ). Ann Arbor: University of Michigan, National Center for Research to Improve Postsecondary Teaching and Learning.

Zimmerman, B. (2005). Attaining self-regulation: A social cognitive perspective. In Boekaerts, M., Pintrich, P., & Zeidner, M. (Eds.), Handbook of Self-Regulation. San Diego: Academic Press.


*While we didn’t validate this instrument, we are reasonably confident of its validity as it is based so closely on existing validated instruments and was adequate for our purposes (in a related ongoing project, we are actually going to validate an SRL instrument with around 400 people).