Academics: bring your own identity

You’re probably familiar with LinkedIn: a profile service for many sorts of people. I’ve noticed that outside the UK it is used for academic networking too, more so than inside the UK, at least in the circles I move in. It has 225 million members. You might not know about (nearly 3 million members) and ResearchGate (2.8 million), which are examples of social networks aimed specifically at academics. Google Scholar allows academics to manage their publications profile. is one of several personal profile tools that allow you to pull together an identity spread over many platforms.

Now comes ORCID, a researcher identifier scheme increasingly being adopted by big publishers and third-party web services alike. In its own words:

“ORCID provides a persistent digital identifier that distinguishes you from every other researcher and, through integration in key research workflows such as manuscript and grant submission, supports automated linkages between you and your professional activities ensuring that your work is recognized”.
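As an aside on what that "persistent digital identifier" actually is: an ORCID iD is a 16-character identifier whose final character is a checksum, computed with the ISO/IEC 7064 MOD 11-2 algorithm, so a mistyped iD can be caught before it ever reaches a workflow. A minimal sketch in Python (the function names are mine, not any official library):

```python
def orcid_check_digit(base_digits: str) -> str:
    """Compute the ISO/IEC 7064 MOD 11-2 check character for the
    first 15 digits of an ORCID iD (hyphens already removed)."""
    total = 0
    for d in base_digits:
        total = (total + int(d)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

def is_valid_orcid(orcid: str) -> bool:
    """Validate a hyphenated ORCID iD such as 0000-0002-1825-0097."""
    digits = orcid.replace("-", "")
    if len(digits) != 16:
        return False
    return orcid_check_digit(digits[:15]) == digits[15]

# ORCID's own documentation uses this as a sample iD.
print(is_valid_orcid("0000-0002-1825-0097"))  # True
print(is_valid_orcid("0000-0002-1825-0098"))  # False
```

The check digit is what lets a manuscript-submission system reject a typo locally rather than silently linking the wrong researcher.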

The signs are good that ORCID will take off. I hope so, particularly so that innovative third-party services can come in and offer new approaches. I am a big fan of the idea of Impactstory, a beta service that uses ORCID to drive a whole-digital-footprint approach to tracing the web metrics and social shares of academic online outputs, alongside citations. This broadened attention is fundamental to the altmetrics manifesto.

And at the same time as the growth of a global infrastructure for researcher identifiers, universities are laying more claim to the published outputs of academics, through the “green” open access self-archiving route (which I strongly support). And we have an increased attention to other academic outputs: the data produced within research projects, and the extent to which other digital outputs create impact, or evidence of public engagement.

And yet, as others have pointed out, as the academic workforce becomes more transient, with more part-time contracts, more semi-retirements, and more people holding multiple contracts, the claims of the institutional employer grow weaker and weaker.

I feel that the solution to this paradox is in the way the institution relates to an academic’s whole digital footprint, the way it accommodates the academic’s identity. Perhaps we, IT Services departments, should just embrace the reality that academics’ identities live outside the university. Their profiles live outside the university. Their outputs live outside the university. Their impact happens outside the university. It’s long been the case that universities buy and sell “academic reputation”. Maybe it’s time we fully accept that, and embrace a “bring your own identity” concept in IT Services.

I’m not sure I understand the full implications of this possible paradigm shift, but I want to.

The academic politics of data visualisation

This is just a short observation, but one I think is worth sharing.

(forgive current lack of links and references, but I got the blogfire in ma belly)

With my digital humanities hat on, this week I participated in some training laid on by Warwick’s Centre for Interdisciplinary Methodologies. We covered Gephi and Mondrian, two data visualisation tools. It was run by Bernhard Rieder and it was fantastic to get my hands on some training data sets and start to use those tools for real.

Bernhard contextualised the growth of interest in data in the humanities and social sciences. At one point he asked himself, “is it reductionist to work with data about complex multidimensional social issues? Well, yes”. But, as he explained, you would use these tools as just one of many methodologies.

By the magic of the twitters, he has shared his slides with me, and here they are:

My introduction to data visualisation really came a few years ago, from Tony Hirst and then Martin Hawksey. It was in two other contexts: social network analysis, and learning analytics. The context that attracts the most controversy is learning analytics. Believe me, I’ve been in workshops where people have got upset and angry looking at barcharts and network diagrams about student progression and correlations between grades and online activities.

My observation is this: it feels that the e-learning field is first encountering data visualisation in the frame of learning analytics. Learning analytics is a highly political field in this age of funding cuts and emergent MOOC business models. It is not surprising that the e-learning community views data visualisation with skepticism, given that loaded framing.

But from a broader academic technologist perspective, I see the drivers for data visualisation from within research methods, from research altmetrics, from public engagement with scholarship. I see it as a positive move, an area needing rapid skills development. I don’t see it as reductionist. But if I didn’t have that broader angle, I’m sure I would.


Massive Open Online Learning Designs

In recent months I’ve been in discussions, large and small, about adoption of Moodle here at Warwick (IT Services has recently started offering a Moodle service), and about, you guessed it, MOOCs. It is a historical accident that these discussions coincide at my university, but the similarities in those discussions are striking, and they are leading me to change my mind about something.

There are, I think, two main levels to these discussions of moodles and moocs.

  • Pedagogy: what do you want to teach, how do you want to teach it, what affordances do the platforms offer, what is different between face to face and online, and what approaches work at which scale?
  • Production: who agrees the course structure, who takes responsibility for materials development, who produces what, how long does it take, what tools do we use?

I have been keen to emphasise that thinking about technology doesn’t make me technology-led, that I believe the teaching should come first. Obviously, this is the approach of all good e-learning experts, especially my colleagues. But likewise, I have been acknowledging that there are practical production issues that need tackling, and that if academics want technology/content support, they need to articulate their plans for a course. Particularly when we’re talking about the effort and timescale required to produce MOOCs.

So is there a common language that can bridge the challenges of pedagogy and production? I’m slowly coming round to the idea of learning designs as a way of creating MOOCs.

I remember, many years ago, a CETIS meeting that James Dalziel videoconferenced into to showcase the Learning Activity Management System, LAMS. This was my first introduction to learning designs, probably 2002/3. In the question session I asked “isn’t this a lesson planner?”, which provoked some sharp intakes of breath around me at my naivety, but he answered: yes, pretty much, but in an executable form that the VLE can understand.
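To make "executable" concrete: a learning design in the LAMS spirit is essentially a machine-readable sequence of activities that a VLE runtime can step through, rather than a prose lesson plan. A toy sketch of the idea (the structure and field names are illustrative only, not any real LAMS or IMS-LD schema):

```python
# A toy "executable lesson plan": an ordered list of activities a VLE
# could walk through. Field names are made up for illustration.
lesson = [
    {"tool": "forum", "task": "Post your initial view on the topic"},
    {"tool": "quiz",  "task": "Check understanding of the reading"},
    {"tool": "chat",  "task": "Discuss the quiz results in small groups"},
]

def run_design(design):
    """Walk the design in order, as a VLE runtime might,
    yielding one launch instruction per activity."""
    for step, activity in enumerate(design, start=1):
        yield f"Step {step}: launch {activity['tool']} - {activity['task']}"

for line in run_design(lesson):
    print(line)
```

The point of the executable form is exactly this separation: the sequence is data, so the same runtime can orchestrate any course built from the same activity vocabulary.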

I have always been skeptical about the extent to which “learning design” approaches are used within real academic workflows. It has always seemed more of a reflective activity confined to staff development, action research, and research papers. Every mention I saw at conferences further entrenched my perception that it is a form of critical practice that barely exists in the wild. In fact, I have become quite phobic about the term learning design. I am particularly skeptical about executable learning designs becoming an everyday reality.

Yet I’ve found myself increasingly talking about learning design. Because that level of explicitness comes into its own when you’re trying to operate on both the pedagogy and production levels. So maybe I’m coming round to the idea of Massive Open Online Learning Designs. A model born out of necessity rather than aspiration, not to improve learning, but to orchestrate the many skills required to run a MOOC. It’s not about the transformational agenda that often accompanies e-learning thinking, it’s just … practical.