How do you move the discussion about learning analytics forwards? Sometimes I feel like we are stuck in a rut that leads to the same conversation over and over again:
[Kirsty:] What would you like to know about your students?
[Academic A:] I want to know how many times they looked at my video.
[Kirsty:] Sure, I can do that, but you could have anything! What interesting things would you like?
[Academic B:] I want to know who is in danger of failing and needs help.
[Kirsty:] Ok… there are some tools for that. But Learning Analytics can help all students, not just the ones who are at risk. Let’s think about this from another perspective. What would you like your students to see about their own learning?
[Both Academics:] Blank stare.
People can’t ask for interesting reports when they don’t know what they could get.
Recently I have begun to think that the problem is one of analytics literacy. Counts are easy things to think of, and some people have heard of early-intervention systems, so they are happy to latch onto those; but it is hard to imagine what you could do with data if you have never seen what it looks like, or learned how you could manipulate it. Indeed, the Completing the Loop OLT project has been finding that people ask for remarkably boring things when we just go out and ask them: “what do you want?”
So how can we move the conversation forwards?
Last week we tried something different at a workshop that we ran at UTS with the help of the Connected Intelligence Centre. This was the first event we have hosted as a part of our OLT grant, which is discussed on other pages of this site. It was a bit of an experiment in shifting the dialogue forwards, and I am pretty happy with the results. So what did we do?
In the one day, we followed a format that went from data, to reporting, to “what I want”. Here is a bit more explanation about what we did:
After that, I gave an introduction to the CLA toolkit. I talked about why we are developing it and the kind of data it generates. We had a live demo running, which meant that anybody who signed up to our demo and then tried tweeting to #clatest or posting to a test FB group found their data showing up in our demo Learning Record Store (hosted on the LL cloud as a trial account). This really gave people a feel for the type of data that we can capture using the CLA toolkit.
Next, we moved into a session that focussed on the possibilities that such data might be able to open up for analytics, both pedagogically and technically. These presentations were really aiming to push people into a more exploratory space.
First, Mandy Lupton spoke from her connectivist pedagogical background. As a long-time academic teaching in the wild (i.e. beyond the LMS), she is exactly the type of person that we would like to cater for with our project (luckily, she is a member of the project team!). Mandy introduced a wide range of learning theories and characteristics that we might want to measure and report upon, demonstrating how they relate and where they differ.
Then, Simon Buckingham Shum really pushed the envelope of what is possible. Drawing upon his years of working in Learning Analytics, he responded to Mandy’s ideas about what types of things were interesting, showing that many of them could perhaps be measured and reported. One of the things I really appreciated about this talk was how much Simon had already measured in his own life… it really helped push people into a new way of thinking about what Learning Analytics might help them to understand.
What do you want?:
Only towards the end of the day did we get into asking people what they want.
This meant that everyone had been well and truly primed towards thinking about Learning Analytics in a more interesting way, and I think that the results paid off. We also made sure that everyone thought about what their students might like to see. In the end, Learning Analytics is for learners, and if we don’t give our students interesting perspectives on their own learning then I fear we are all wasting our time. Everyone really rose to the challenge, and we all learned a lot.
So, what have I learned? I think that the conversation can get more nuanced and interesting, but it takes serious investment from both sides of the computer science/education divide. Both Learning Analytics people and pedagogical people need to be at the table, and they all need to engage with the process. No-one should be telling people what they want, but neither should we be accepting that people know what they want at the outset. A lot of people in the workshop kept saying things like “but I doubt you could measure that”, only to have someone else pipe up with an example of how they already had, or how you might be able to. I think that this is a core insight that I took from our workshop. Maybe when people are telling me about the boring things, it’s because they are censoring their responses by assuming that they know what a computer can’t do, when often they don’t know enough to know what might be possible… it’s that same old unknown unknowns problem, which, while it still gives me a giggle to watch Donald go on about it, is actually a deep epistemological problem.
I also think I have learned enough about what is important that we could now run the priming bits in the space of about an hour. We would do a quick overview of data (making sure there was a demo), follow it with a pedagogical possibilities type of talk that considered what you might be able to do with that data, and finally come back to the types of analytics that have already been done to measure things that were discussed in the pedagogical talk.
Next time I am definitely trying that…