What makes great teaching? It’s a question most people in education think they can answer – but many of us would be wrong. Prof Rob Coe, lead author of the 2014 report What makes great teaching? Review of the underpinning research, and head of Durham University’s Centre for Evaluation and Monitoring, explains…
His answer to that key question is: “we don’t know as much about it as we’d like to. We get glimpses at the moon through a cloudy sky – snippets that give insight. We try and bind them together, but we have a precarious grasp of something that is very complicated.”
Maybe it always will be that way: certainty is inherently elusive in education research, he notes. “But that’s not a reason to not act. There are some implications for practice and policy.
“It’s important we recognise that knowledge is quite uncertain. We might change our minds about what is important, how influence works. For example, how directive should teachers be in the classroom? What is the evidence? Explicit, direct instruction is characterised as traditional: the teacher talks, students listen. Against that are set educational approaches seen as more progressive – offering students choice in what they do, discovery learning, ‘child centred’ learning. Who leads, who takes the initiative? These are actually separate issues, but they tend to stick together. And people polarise to one side or the other.
“The reality is there is a place for direct instruction, especially if people are learning something that is not in a familiar context for them. As the learner becomes more expert, there is more scope for investigation and learner-led approaches.”
General advice helps little
“But what does that mean for me, teaching simultaneous equations to year 9 on a Friday afternoon? The answer is, it’s hard to give general advice that is useful. It depends on the context – including students’ levels of interest, knowledge and motivation.
“Increasingly I think this is not about prescriptive approaches – ‘phonics is the best way to learn to read’, or ‘we shouldn’t be doing group work’. Actually, when you simplify in this way you lose as much as you gain. Or the overall gain is only marginal.”
What we really need, Prof Coe maintains, is teachers with “greater wisdom. Teachers who know the research evidence and debates about explicit v implicit presentation, and can integrate this into their own experience, skill base and practical repertoire, so their teaching is more creative, more individualised.
“The problem is this could be taken as a licence to say, whatever feels good, is good. The dilemma is too much freedom vs too much constraint. Neither quite works. Somewhere in between is right.”
So how do you identify what works?
He is intrigued by the tendency for people in education, even researchers, to take different perspectives about what counts as evidence. But in the research for the Sutton Trust report, “we took a clear line that what counted would be tight, not inclusive. We would adopt a high threshold. That changes the story you end up telling. Prevalent practice is often supported by some research, but not the best research.”
The high threshold is very important, he insists. “That was explained in the report – but it’s a point not often picked up in the subsequent discussion.”
He points to three kinds of evidence:
1. The best is when specific changes are made to teaching practice or pedagogy and tested in a randomised controlled trial; or a well-controlled comparison, where one group experienced an event or programme, versus another that did not. Then you look at the differences in learning outcomes. The EEF/Sutton Trust toolkit specifically seeks out those examples.
2. The next kind of evidence is correlational. Most school effectiveness research fits this type, says Prof Coe. You look at which schools, or which students, achieve well; you ask what the differences are between them and those not doing well. You try to control for factors such as prior attainment, but you are still seeing different levels of feedback, and attainment assessed in different ways. However, he concedes, “if you combine a number of these studies, and if the approach is close to that in the first kind, then you can get good reliability.”
3. The third kind is theoretical: studies built up from cognitive psychology, for instance, on how working memory operates or how long-term memory retrieval works. Often these are examined in laboratory studies. “Then you seek implications for classroom practice, to discover which teaching approaches are likely to lead to enhanced outcomes.”
The most powerful studies, and the most reliable, are those where the three approaches converge: when you get the same story from the different approaches.
In fact, there is substantial evidence that triangulating measures that are individually unreliable can lead to a combination that is collectively ‘adequate’, Prof Coe states: “Each measure on its own is not good enough. But all have different strengths and weaknesses, so collectively they can provide something quite reliable. If we do it well, it has value.”
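The arithmetic behind that claim can be sketched with a small simulation – not from the report itself, but an illustration of the general principle: three measures that each track an underlying “teaching quality” only noisily will, when averaged, track it noticeably better, because their independent errors partly cancel. All names and noise levels here are made up for the illustration.

```python
import random
import statistics

def correlation(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys) * len(xs))

random.seed(42)
n = 10_000

# Hypothetical underlying quality we are trying to measure
truth = [random.gauss(0, 1) for _ in range(n)]

# Three independent, individually noisy measures of the same quality
measures = [[t + random.gauss(0, 1.5) for t in truth] for _ in range(3)]

# Triangulation: average the three measures per case
combined = [statistics.mean(vals) for vals in zip(*measures)]

single_r = correlation(truth, measures[0])      # one measure alone
combined_r = correlation(truth, combined)       # the three combined
print(f"single: {single_r:.2f}, combined: {combined_r:.2f}")
```

With these (assumed) noise levels, a single measure correlates with the truth at roughly 0.55, while the average of three independent measures rises towards 0.75 – each inadequate alone, collectively much closer to ‘adequate’.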
However, research that meets even this modest standard is hard, and rare, in schools, he maintains. It’s more easily done in research practice “where there is more emphasis on validity – and more human resources available, if we’re honest. It’s hard for schools, and often we outsiders don’t realise how difficult it is.”
For example, if you observe a teacher and students in class, you get a strong sense of whether the teaching is good or not. Someone else comes along and says “you shouldn’t trust that judgement”. You won’t accept their opinion, because of the strength of your own impressions.
“So if someone else seeing the same lesson often disagrees, how do I know my opinion is right and theirs isn’t? It’s such a compelling, strong feeling the observer gets… ”
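One standard way to put a number on that disagreement – again, an illustration rather than anything from the report – is Cohen’s kappa, which corrects raw agreement between two observers for the agreement they would reach by chance. The lesson judgements below are invented; the point is that two observers can agree on 70% of lessons yet, once chance agreement is stripped out, be doing no better than guessing.

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical judgements of ten lessons by two observers: 1 = "good", 0 = "not good"
obs_a = [1, 1, 1, 1, 0, 1, 1, 0, 1, 1]
obs_b = [1, 1, 0, 1, 1, 1, 1, 1, 1, 1]

print(round(cohens_kappa(obs_a, obs_b), 2))  # raw agreement is 0.7, yet kappa ≈ -0.15
```

Because both observers rate most lessons “good”, most of their 70% agreement is what chance alone would produce, so the chance-corrected kappa is actually slightly negative – a compelling shared impression is not the same as a reliable one.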