Can we really trust educational research?

I recently came across an article by Elizabeth Gilbert of the University of Virginia and Nina Strohminger of Yale University presenting their findings that only a third of published psychology research is reliable.

Another article reports that in the field of biomedicine (the basis of so much news coverage of medical advances) less than 50% of research proves reliable when the “reproducibility factor” is applied.

And astonishingly, we read elsewhere that “just 11% of preclinical cancer research studies could be confirmed”.

We might well speculate as to why such a body of inaccurate “research” is being published; certainly there are important questions here. And let’s be clear that it is academics themselves who are drawing attention to the problem, and expressing frustration.

If psychological and medical research are this unreliable, shouldn’t we also be concerned about the “research” that underpins educational theories and methods?


What is research?

According to the dictionary, research is:

“the systematic investigation into and study of materials and sources in order to establish facts and reach new conclusions.”

Hearing that information is based on “research” gives us a sense of assurance that it is reliable. We assume that others have systematically investigated the facts presented, meaning that we don’t really need to think about them too hard, and can simply accept what we are hearing. This is natural, a sign of our brains working efficiently.

But this is also why it is so shocking when we are confronted with scientific evidence that only a third of psychological studies are reliable.

We mistakenly assume that research involves ‘double-blind’ clinical studies, and that it has been independently peer-reviewed, compared with alternative findings, and subjected to further meta-analysis: “in order to establish facts…”. In reality, it would seem this is rarely the case.

When we hear or read the words, “studies suggest that…”, we have to bear in mind that other studies may have reached quite different conclusions.

With that in mind, let’s consider a couple of the education theories which in recent years have been popularly seized upon and regularly cited in discussions about piano teaching…

Theory 1: The “Growth Mindset”

We’ll start with perhaps one of the biggest educational crazes of recent years: the “growth mindset” theory. Based on the work of Carol Dweck, this predicts that educational attainment is positively linked to a student’s belief that their ability is malleable rather than fixed.

The well-known education writer David Didau explains:

“Dweck’s theory is more complex than simply ‘try, and you will succeed’. She is actually making three claims. First, that having a growth mindset leads to better academic achievement. Secondly, that having a fixed mindset leads to worse academic achievement – and lastly, that providing students with a growth mindset intervention changes students’ mindset and thereby improves their academic performance.”

Sounds great – and certainly plausible. But increasingly Dweck’s theories and research are being challenged.

Didau cites the Education Endowment Foundation trial Changing Mindsets, which he points out was unable to replicate Dweck’s results, and which concluded that any improvements observed may have been down to chance.

Meanwhile, the 2016 report Mindset in the Classroom surveyed a group of teachers who believe in growth mindset theory, finding that just one in five of them confidently reported success implementing it in their classrooms.

And in an article on the eteach website, Katie Newell hints at the corrosive impact that may accompany the sense of failure:

“Having tried the methods for some years, teachers and parents alike are finding many children trying their best again and again, but still not succeeding.”

Like others, Newell points to the fact that empirical studies have been unable to reproduce Dweck’s findings.

The most significant study is perhaps that of Yue Li and Timothy Bates of the Department of Psychology, University of Edinburgh. Li and Bates sought to confirm Dweck’s theory, but came to this staggeringly different conclusion:

“Children’s own mindsets showed no relationship to IQ, school grades, or change in grades across the school year, with the only significant result being better performance in children holding a fixed mindset.”

Oops. And there’s more:

“Fixed beliefs about basic ability appear to be unrelated to ability, and we found no support for mindset-effects on cognitive ability, response to challenge, or educational progress.”

Meanwhile, Dr Hugh Morrison of Queen’s University Belfast has written an in-depth academic analysis of the flaws in Dweck’s mindset research, which is well worth a read.

But I’ll leave the last word to Didau, who points toward the ethical and human consequences of the growth mindset craze:

“I’m not saying growth mindset is completely wrong or useless. But it does contradict a lot of research in other fields, and it also flies in the face of many people’s lived experience… In fairness, Dweck isn’t the problem; it’s the legions of mindset fans who’ve popularised the belief that all people who fail just weren’t trying hard enough. And this message is both harmful, and wrong.”

Theory 2: “10,000 Hours”

In unpacking the Growth Mindset theory, Didau tantalisingly points out:

“There is good research supporting the idea that inborn ability (including but not limited to intelligence) matters, a lot.”

As with “growth mindset” theory, the 10,000 Hours Rule is often seized upon by those who would prefer to reject the possibility that we have “inborn ability”.

The so-called ‘10,000-hour rule’ was popularised by Malcolm Gladwell in his bestselling book Outliers: The Story of Success. The book claimed that if you wanted to become one of the best in the world at something, you simply had to practise it for 10,000 hours.

The theory drew on research by Anders Ericsson, who quickly went on record to temper and distance himself from Gladwell’s conclusions, writing:

“Unfortunately, this rule — which is the only thing that many people today know about the effects of practice — is wrong in several ways.”

The 10,000 Hours Rule has subsequently been repeatedly and far more firmly debunked, perhaps most clearly in a 2014 study from Princeton University, which concluded:

“More than 20 years ago, researchers proposed that individual differences in performance in such domains as music, sports, and games largely reflect individual differences in amount of deliberate practice. This view is a frequent topic of popular-science writing – but is it supported by empirical evidence?

“To answer this question, we conducted a meta-analysis covering all major domains in which deliberate practice has been investigated. We found that deliberate practice explained 26% of the variance in performance for games, 21% for music, 18% for sports, 4% for education, and less than 1% for professions.

“We conclude that deliberate practice is important, but not as important as has been argued.”

This of course raises the intriguing question:

If deliberate practice actually accounts for just 21% of the variance in performance among musicians, what factors account for the other 79%?

It would seem that researchers still have many important questions to answer.

Searching for the truth

Our first responsibility as educators is to be educated. But what does this mean in practice?

A teacher commenting on David Didau’s site summed up what some readers might by now also be thinking:

“Like falling dominoes, all the things that we were told were true at conferences and in CPD turn out to be bollocks, pretty much as expected.”

I’m neither surprised by the sentiment, nor by the language and strength of feeling expressed here. Growing numbers of teachers, like the one quoted above, are becoming weary of a culture in which conference speakers allude to obfuscated “research” before swiftly promoting their latest book or method.

Over three decades of teaching, I have seen plenty of fads, trends and bandwagons come and go. I have seen good, experienced teachers season their CPD with a large pinch of salt; others have been left confused and deflated when promoted ideas didn’t work in practice.

One thing I’ve often observed is that there can be a significant gap between professional advice rooted in authentic real-world teaching, and advice based primarily on academic and marketing research.

Those who regularly start their advice with “studies suggest…” have at least one foot in the latter camp. That’s fine, necessary, and often interesting, but like many, I’m especially keen to take advice from those who have themselves successfully delivered the quality of piano teaching that I aspire to.

As a mentor advising other teachers, I try to apply the same rule, digging into my musical knowledge and teaching experience before heading to the library for second-hand answers.

Here, then, are some suggestions that will hopefully help us to balance practical experience with fresh ideas in our own field of music education.

Evaluating Research

First of all, it’s a good idea to find out what the latest research really says. Happily, a general internet search is usually sufficient to begin an investigation into any subject, and, in a bid to counter misinformation, most academic papers are now freely available online, at least as an abstract (summary).

There are two important issues to consider:

  1. the source of the research;
  2. the quality of the research.

When considering the source of the research, we must bear in mind that a lot of research is symbiotically linked to selling a product. Consider who funded the research, who undertook it, and why. To what extent might confirmation bias have influenced the outcome?

When considering the quality of research, we might question the scale of the study, how it was conducted, and the independence and rigour of the analysis. Importantly, what are alternative studies and researchers saying? We should always consider the counter evidence and arguments before blindly accepting the outcome of a single study.

Lest this seem unduly cynical or “anti-academic”, let’s remind ourselves that researchers themselves are lamenting the fact that only a third of published psychology research is reliable. If they are concerned, so should we all be.

When considering implementing new ideas and approaches:

  • Never reject tried and tested methods that you know work, just because you hear somebody claim they don’t.
  • Some of the approaches your own teachers used evidently worked for you, and may work equally well for your students.
  • Be willing to refresh (without discarding) your approach by integrating new ideas alongside existing ones in a balanced way.
  • Be authentic; if a new theory is alien to your real-world experience, you should almost certainly bin it.
  • Be wary of any one-size-fits-all approach. Every student is unique.
  • Start with the big picture, not the details; look for the seed of truth that you can take away and use immediately.

Concluding thoughts

It would of course be foolish to disregard research, despite the concerns academics are raising. Indeed we must hope that its cumulative impact is a positive one, driving the music education agenda forward.

But it would perhaps be equally foolish to unquestioningly accept everything we read and hear without an intelligent and inquisitive attitude. In a sense, each of us needs to be a researcher.

Some like to be seen as “leaders of the pack”, being the first to adopt the latest theory. It is often wiser to take a careful, balanced and reflective approach, allowing our teaching to evolve naturally and, above all, well.




