Is student progress the same as teacher effectiveness?

There’s plenty to cry ‘unfair’ about in teaching right now.

Your classes are getting bigger; there was no money for decent CPD again this year; and you were given bottom set Year 11 for the fourth year running.

And now the value-added results are in. Negative value-added for those Year 11s again? It’s bad enough that they have not met expected progress, but how can they have actually gone backwards? All that extra work you’ve done. All the lunchtimes you’ve missed for those catch-up sessions. You are exhausted and now, it seems, you are a terrible teacher.

Or are you?

It’s now 15 years since CEM first published findings on the dangers of comparing teachers’ performance using value-added data for subjects set by ability.

And this phenomenon is still as important, still as relevant, and quite possibly still as misunderstood today as it was back then. This is a dangerous thing in a world where:

  • Value-added progress measures are still very much in favour
  • Misinterpretation of the figures can affect the judgements made about individual teacher effectiveness.

Research conducted by CEM in conjunction with schools using CEM’s Yellis assessment found that value-added scores attributed to any particular teacher's performance were dependent upon which set they were given to teach.

These findings are known as the ‘Critchlow-Rogers Effect’. John Critchlow and Steve Rogers were, at the time, Headteacher and Deputy Headteacher at different schools in Yorkshire. Over a number of years they had independently spotted simple but intriguing patterns in their schools’ average GCSE value-added results.

Their key question was ‘Why do students in top sets consistently achieve higher value-added scores than students in lower sets?’

Let’s suppose

Suppose your school uses an on-entry baseline assessment (e.g. MidYIS or Yellis) to get a picture of students’ general ability.

Now, suppose that your school calculates its value-added measures based on student progress from the baseline assessment of general ability to an examination in a specific subject, let’s say French GCSE.
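As a rough illustration of what such a calculation involves, here is a minimal sketch in Python. It assumes a simple straight-line model fitted to the whole cohort; the function name and the details are illustrative assumptions only, not CEM’s actual value-added methodology.

```python
import numpy as np

def value_added(baseline, exam):
    """Each student's exam result minus the result predicted from the
    baseline, using a straight-line fit across the whole cohort."""
    baseline = np.asarray(baseline, dtype=float)
    exam = np.asarray(exam, dtype=float)
    slope, intercept = np.polyfit(baseline, exam, 1)  # cohort-wide trend
    predicted = slope * baseline + intercept
    return exam - predicted  # positive = above expectation, negative = below
```

A score around zero simply means a student did about as well as students with the same baseline score typically did; positive means better, negative means worse.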

And suppose that French is taught in sets based on student ability in French.

In the higher sets there may be:

  • Some students who are there because of their general ability.
  • Some students who have demonstrated their ability in French, despite their lower general ability scores.

CEM’s research found that, on average, the higher sets will achieve higher value-added results, since these students will tend to outperform the potential suggested by their general-ability baseline scores.

In the lower sets, by contrast, there may be:

  • Some students who are there because of their lower ability in French, even though they may have high general ability scores.

CEM’s research found that those students will therefore tend to achieve lower value-added results in that subject, so these lower sets will, on average, have lower value-added results.
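To see why this happens, here is a small simulated cohort in Python. Everything in it is an assumption made purely for illustration (the 0.7 correlation between general and French ability, the noise levels, the thirds used for setting); it uses no real CEM data or methodology. Crucially, there is no teacher effect in the simulation at all, yet the top set still shows positive average value-added and the bottom set negative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 900                                   # size of the simulated cohort

# General ability: what a baseline test such as MidYIS or Yellis estimates.
general = rng.normal(0, 1, n)

# Ability in French: correlated with general ability but not identical to it
# (a correlation of roughly 0.7 is assumed here purely for illustration).
r = 0.7
french = r * general + np.sqrt(1 - r**2) * rng.normal(0, 1, n)

# GCSE French result: driven by French ability plus a little exam noise.
# Deliberately, there is NO teacher effect anywhere in this model.
exam = french + rng.normal(0, 0.3, n)

# Value-added: actual result minus the result predicted from the
# general-ability baseline (straight-line fit over the whole cohort).
slope, intercept = np.polyfit(general, exam, 1)
va_scores = exam - (slope * general + intercept)

# Set the classes by ability in French, as in the example above.
order = np.argsort(-french)               # best French first
sets = {
    "Top set":    order[: n // 3],
    "Middle set": order[n // 3 : 2 * n // 3],
    "Bottom set": order[2 * n // 3 :],
}
for name, idx in sets.items():
    print(f"{name}: average value-added = {va_scores[idx].mean():+.2f}")
```

The gap appears purely because setting by French ability selects, into the top set, students whose French outstrips what their general-ability baseline would predict, and the reverse for the bottom set.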

What does all this mean?

So, essentially, if you have repeatedly been given higher-ability sets then, regardless of your performance in delivering the curriculum, the chances are that your value-added scores will be better than those of colleagues who are repeatedly given lower-ability sets.

This will be apparent in any subject where classes are set by ability or competence in that subject.

And this is the case regardless of the age of the students, the exam being sat, or the baseline assessment that is being used as a basis for calculating value-added results.

Three ways to remove the Critchlow-Rogers Effect

Value-added measures are, in themselves, perfectly valid as long as they are interpreted with the usual caution, and they can help schools to ask important questions.

There are three possible approaches that schools can use to remove the Critchlow-Rogers Effect in order to make more accurate class comparisons:

  • Set classes by the assessment scores that serve as the baseline for the value-added calculations (e.g. according to the baseline measure of general ability gained from CEM’s MidYIS or Yellis assessment), as illustrated in the sketch after this list.
  • Have the students in mixed-ability classes.
  • Do not compare average value-added measures for ability-based sets with those for classes that are not set by ability.
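The first of these approaches can be illustrated by continuing the simulated cohort from the sketch above (the same caveats apply, and this snippet re-uses the variables defined there): if the same students are regrouped by their baseline general-ability scores rather than by their French ability, the set-level gaps in average value-added largely disappear.

```python
# Re-group the same simulated students by the baseline measure itself.
order_by_baseline = np.argsort(-general)
sets_by_baseline = {
    "Top set":    order_by_baseline[: n // 3],
    "Middle set": order_by_baseline[n // 3 : 2 * n // 3],
    "Bottom set": order_by_baseline[2 * n // 3 :],
}
for name, idx in sets_by_baseline.items():
    print(f"{name}: average value-added = {va_scores[idx].mean():+.2f}")
```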

While it seems to make sense to judge the effectiveness of teaching from its impact on assessed learning, we know that a number of factors beyond teaching will influence students’ achievements.

We know that value-added is not always the same as effectiveness, and it is those who work in the schools who can best interpret it.

Perhaps the valuable thing is not so much the result as the questions it helps you to ask.

 

For further reading about value-added, read:

The CEM Blog ‘Five things you need to know about value-added’

Information about exam results and value-added

Centre for Education Research and Practice ‘Where’s the value in value-added’