The signal and the noise in blended learning research

“The Signal and the Noise” is the title of a book by renowned data geek Nate Silver, whose understanding of probability and statistics has allowed him to make a living playing poker, predict numerous election results with far more accuracy than other pundits, and parlay his knowledge into a successful blogging career. In the book, Silver discusses topics as varied as baseball and climate change, exploring the ways data should be used. In presidential elections, for example, he might point out that a poll on the website of Fox News is “noise” (because it draws on a self-selecting sample of a biased audience), but that within the dozen legitimate polls coming out of Ohio there is a pretty good signal that one candidate has the lead, even though some polls will show the other candidate winning.

The signal and the noise is also a pretty good description of the state of research into digital learning. A recent article from Education Week would have us believe that “Blended Learning Research Yields Limited Results.” On the surface, the article’s observations are accurate. But the article focuses on the many issues that obscure what is happening in digital learning (the noise), while shortchanging the issues that are becoming clearer (the signal).

Among the legitimate sources of noise are:

  • Digital learning is implemented in countless different ways, and many elements differ between programs: the type of content, the technology platform, the role of the teacher, the extent of teacher professional development, the degree of planning prior to implementation, the control that students have over their learning, the amount and success of communication with stakeholders, and many more. Any one of these, if handled poorly, can undermine a program’s success.
  • Experimental research in education often takes five years or more from study design to reported results. In a field changing as rapidly as digital learning, that lag makes research difficult and less valuable than in other, slower-moving fields. By the time a study is published, the school and the field in general have changed multiple times.
  • The majority of studies into online and blended learning have been of college students or adult learners. Some of these findings may apply to students in high schools or younger grades, but many do not.

And yet, a discernible signal about outcomes of digital learning programs exists. Distinguishing the signal requires a basic understanding of several points:

  1. Meta-analyses of student outcomes related to educational technology repeatedly find no significant difference from the application of technology. This finding has held over decades of study.
  2. The meta-analyses show no significant difference overall because they pool studies of programs that have implemented technology well and have had time to mature (often with positive results) with studies of programs that have implemented poorly or are in very early stages (with flat or negative outcomes). This is currently true of studies of blended learning.
  3. Within these studies that collectively show no significant difference are the examples of success: the schools and districts that are using online and blended learning to improve student outcomes. They exist, as do the schools and districts that have implemented poorly and have not produced positive outcomes.
  4. Schools are producing data that can be studied without setting up an experiment to test blended learning, because of the presence of state assessments, AP exam scores, NWEA MAP, and other tests. None of these is perfect. But as these data sets collectively become sufficiently large, they produce useful findings even if no single data set is as strong as a randomized controlled trial would be.
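The pooling argument in the second point can be illustrated with a toy calculation. The effect sizes below are invented purely for illustration, not drawn from any actual study: they show how averaging well-implemented programs (positive effects) with poorly implemented or early-stage programs (flat or negative effects) can wash out to roughly zero overall.

```python
# Hypothetical effect sizes, chosen only to illustrate the pooling problem.
well_implemented = [0.30, 0.25, 0.40, 0.35]        # mature, well-run programs
poorly_implemented = [-0.30, -0.25, -0.35, -0.40]  # weak or early-stage programs

pooled = well_implemented + poorly_implemented
overall = sum(pooled) / len(pooled)

print(f"Well-implemented mean effect:   {sum(well_implemented) / len(well_implemented):+.2f}")
print(f"Poorly implemented mean effect: {sum(poorly_implemented) / len(poorly_implemented):+.2f}")
print(f"Pooled mean effect:             {overall:+.2f}")
```

The pooled average lands near zero ("no significant difference") even though each subgroup shows a clear effect, which is exactly why the overall null finding says little about what well-implemented programs achieve.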

The noise suggests that we know very little about blended learning outcomes. But the signal tells a different story: the use of blended learning does not automatically produce positive outcomes, but when a program is well planned and implemented with fidelity, both to the plan and to the ways the technology was intended to be used, it can be successful.

How do we know? In part, because we looked at results from some blended learning programs, primarily in charter schools, for some of the research in Keeping Pace 2014. More recently, working in conjunction with the Christensen Institute, we have been seeking examples of success in traditional school districts. We have found them, and in just a few days we will be releasing the first profiles documenting these successful blended learning implementations.