Thursday, October 13, 2016

Reflecting on Learning Analytics | EDUCAUSE Review

Photo: Andrea Nixon
Andrea Lisa Nixon, Director of Educational Research at Carleton College, summarizes: "Technological leaders must draw on the strengths of both the proponents and the skeptics in our communities to ensure that institutional mechanisms are in place to examine the overall efficacy of learning analytics systems."

During his remarks at a ceremony commemorating the atomic bombings of Hiroshima and Nagasaki seventy-one years ago, U.S. President Obama contextualized the moment by saying: "Technological progress without an equivalent progress in human institutions can doom us."1 Those of us working in higher education may not deal with the enormous global consequences of atomic weapons, but we do play powerful roles in helping our institutions appreciate the transformative effects, either positive or negative, of the technologies that we lead those institutions in adopting.
Many of us in the EDUCAUSE community have careers centered on examining—and helping colleagues consider ways of adapting—technologies so that they can be used to the greatest effect in teaching, learning, and research. Resistance to technological adoption can be rooted in fears of change, of technologies, or both. To some extent, our work entails engaging with colleagues to address concerns, both real and imagined, and to champion the adoption of new tools where justified. As institutional scouts in technological marketplaces, we have roles that also entail a deep understanding of our institutions' needs and a critical eye for separating the hype surrounding technological developments from the realistic uses. This is a cycle that frequently repeats itself in our field.
Learning analytics tools represent a complicated iteration of this cycle. We will need to be at the top of our game both in imagining future uses and in engaging in informed critique.
At present, colleges and universities struggle mightily to improve learning environments and, more importantly, student success. A common, high-level measure of success is a graduation rate within 150 percent of the "normal" time for completion—that is, within six years for four-year institutions and within three years for two-year institutions. In the United States, that rate currently stands at a mere 60 percent for four-year and 31 percent for two-year institutions.2 Nationally, a series of characteristics—such as institutional selectivity in admissions, race and ethnicity, socioeconomic status, and gender—are used to describe variations in these rates.
College and university leaders are increasingly trying to understand variations in graduation rates and the underlying causes. This is where learning analytics comes into the conversation. In what ways might the data relating to student characteristics and behaviors better inform the learning environments we design for our students? How might this data inform our students' choices? It is critical that we engage in systematic studies wherever possible. This is true not only for our students' sakes but also for the viability of our institutions. The stakes are high.
At best, analytic systems offer us the prospect of capturing data from student information systems, learning management systems, and other sources. With this data in hand, we have tools that promise to provide a means of identifying individual students who are struggling or institutional structures that do not serve their intended purposes. This idea motivates dedicated institutional leaders to adopt and invest in tools that fall under the category of learning analytics. So if we have data in hand and the means to analyze it, what is the problem?
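As a minimal sketch of what such a pipeline might look like in practice, the snippet below joins records from a student information system (SIS) with activity data from a learning management system (LMS) to flag students with low recent engagement. All identifiers, field names, and the login threshold are invented for this illustration; real systems would draw on far richer data.

```python
# Hypothetical sketch: flag enrolled students whose recent LMS activity
# falls below a threshold, by combining SIS and LMS records.
# All field names and values here are invented for illustration.

sis_records = {
    "s001": {"name": "Student A", "credits_attempted": 16},
    "s002": {"name": "Student B", "credits_attempted": 12},
}

# Number of LMS logins in the past week, keyed by student ID.
lms_logins_last_week = {"s001": 14, "s002": 1}

def flag_low_engagement(sis, logins, threshold=3):
    """Return IDs of enrolled students with fewer logins than threshold."""
    return sorted(
        student_id for student_id in sis
        if logins.get(student_id, 0) < threshold
    )

print(flag_low_engagement(sis_records, lms_logins_last_week))  # ['s002']
```

Even in this toy form, the design question the article raises is visible: the flag is only as good as the assumption behind it (here, that login counts proxy for engagement).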

Learning analytics discussions can be both fascinating and fraught, particularly to the extent that these tools do not clarify the algorithms or statistical models employed. This lack of transparency may be attributed to business models or to machine learning techniques. These techniques employ computational power not only in analyzing data but also in establishing the means by which the analysis takes place. In other words, one might end up with a list of students "at risk" but with no clear understanding of how exactly those students were identified. It is hard to overstate the degree to which this is a departure from established research methods, particularly in educational research. In contrast, it is much more common for those in the social sciences to specify statistical models based on findings in the literature or on hypotheses developed locally and then identify models with the greatest predictive power. This is a means of addressing issues of correlation or confounding variables that might otherwise lead to flawed analyses.  
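To make the contrast concrete, here is a sketch of what a *specified* (transparent) model looks like: a hand-written logistic regression whose predictors and coefficients are stated up front, so anyone can audit exactly why a student was flagged. The features and weights are invented for illustration; a real model would be fit to institutional data and grounded in the literature.

```python
import math

# Hypothetical sketch of a specified, auditable risk model.
# Unlike an opaque machine-learning pipeline, every predictor and
# coefficient is visible. These particular weights are invented.

COEFFICIENTS = {
    "intercept": -1.0,
    "missed_assignments": 0.8,   # each missed assignment raises risk
    "gpa": -0.6,                 # a higher GPA lowers risk
}

def risk_probability(missed_assignments, gpa):
    """Logistic regression: probability the student is 'at risk'."""
    z = (COEFFICIENTS["intercept"]
         + COEFFICIENTS["missed_assignments"] * missed_assignments
         + COEFFICIENTS["gpa"] * gpa)
    return 1 / (1 + math.exp(-z))

# Because the model is specified, the flag can be explained and contested:
print(round(risk_probability(missed_assignments=5, gpa=2.0), 2))  # 0.86
```

The point is not that black-box techniques are useless, but that with a specified model one can trace a flag back to its inputs and question the assumptions, which is precisely what opaque systems make difficult.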

If you enjoyed this post, make sure you subscribe to my Email Updates!


Nikos Andriotis said...

Analytics is big now, I think mostly due to the big business interest in big data computing. The question "what do these data tell us, and how can we use them?" is asked more frequently now.

And I think that creates a good opportunity for education as well - taking advantage of this trend seems like a good chance.

Helge Scherlund said...

Hi Nikos Andriotis,

Thank you for dropping by.
I appreciate your comment.