How good is education research? (Post 1 of 2)
An article in the November 2010 Atlantic (yes, I’m a bit behind on my reading), titled Lies, Damn Lies, and Medical Science, got my attention and made me wonder whether there are parallels in education research. First, some background: the article details work by a meta-researcher, John Ioannidis, who has shown that a distressingly high proportion of medical studies published in journals are wrong. In one example, he looked at 49 of the most-cited articles in journals that are themselves most cited. He found that the research in 35 of the studies had been retested, and 14 of them (41%) were subsequently shown to be either incorrect or highly exaggerated. This is just one example, and the article makes a strong case that medical science is often not on the solid ground it appears to be when your doctor prescribes a certain drug for your condition. Research suffers from the following issues, among others:
- Studies showing an important new finding are much more likely to be published than studies that don’t find something new. This creates an unconscious bias in researchers to find positive results.
- Studies refuting earlier findings are much less likely to be prominently reported than the original studies that have been widely disseminated. The result is that doctors continue to cite the original, refuted article and findings.
- So many factors impact patients’ health that teasing out the effect of one factor is extremely difficult, and even if an impact is found with some patients it may not apply to others.
The article is well worth a read on its own merits, and it made me think of the many ways in which “education research” could be substituted for “medical research.” I’ve come to believe three things about education research:
- Doing randomized controlled experiments is so time-consuming and expensive that most research, even widely cited research, does not reach this standard.
- Even the gold standard has drawbacks (Ioannidis believes that 25% of randomized controlled experiments in medicine are wrong).
- The subtleties and caveats explained in the studies themselves are not well represented in media accounts of the research.
What does this mean for online learning? I haven’t yet settled on an ultimate answer to that question. My interim sense is that the implications include:
1. We should be careful about citing studies, particularly ones that we haven’t read ourselves.
2. Foundations and policymakers should temper their expectations about how quickly results from research into online learning will be available, and how quickly a body of evidence will be created.
3. A lack of empirical studies is not a reason to hold back on creating online learning options.
More on item #3 in the next blog post.
I’m sure there are some education researchers with a more robust perspective than mine, and I look forward to hearing from them.