In their pursuit of a number - a neat proxy for the distance travelled between two points - schools can do some crazy stuff.
Levels are reinvented: one questionable number is subtracted from another and the result is divided by the number of 'data drops' to create increments that are supposed to indicate that expected or better progress has been made. The difference between two standardised scores is viewed as hard evidence of progress, and teacher assessments are converted to scaled scores to give an illusion of accuracy.
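The 'reinvented levels' arithmetic is simple enough to sketch in a few lines. This is a hypothetical illustration of the pattern described above, not any real school's system; the scores and the number of data drops are invented:

```python
# Sketch of the 'reinvented levels' arithmetic: the gap between two
# assessment scores is divided by the number of data drops to yield the
# increment each drop is 'expected' to show. All figures are hypothetical.

def expected_increment(start_score: float, end_score: float, data_drops: int) -> float:
    """Equal step implied by drawing a straight line between two scores."""
    return (end_score - start_score) / data_drops

# A pupil 'expected' to move from 100 to 106 across six data drops
# must apparently gain exactly one point per drop.
print(expected_increment(100, 106, 6))  # 1.0
```

The tidy output is precisely the problem: dividing a gap into equal steps assumes learning is linear, an assumption the underlying assessments cannot justify.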
Teachers tick and RAG-rate interminable lists of learning objectives to ‘prove’ micro-steps of progress. Colour-coded flight paths that link key stage 2 results to GCSE grades are conjured via the magic of a straight line, counting back to create annual 'working at' grades. Each pupil's position relative to this gradient will determine whether they are above or below expected; or on- or off-target; or emerging or at greater depth.
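The 'magic of a straight line' is ordinary linear interpolation. A minimal sketch, in which the entry grade, GCSE target, five-year span and RAG thresholds are all hypothetical:

```python
# Sketch of a flight path: a straight line from an entry grade to a GCSE
# target, counted back to give annual 'working at' grades, with a RAG
# rating against the line. Grades, span and thresholds are hypothetical.

def working_at(start_grade: float, target_grade: float,
               years_in: int, total_years: int = 5) -> float:
    """Expected grade after `years_in` years on the straight line."""
    return start_grade + (target_grade - start_grade) * years_in / total_years

def rag_rate(actual: float, expected: float, tolerance: float = 0.5) -> str:
    """Colour-code a pupil's position relative to the flight path."""
    if actual >= expected:
        return "green"   # at or above expected
    if actual >= expected - tolerance:
        return "amber"   # just below expected
    return "red"         # off-target

# A pupil entering at a notional grade 2, targeted at grade 7 by Year 11:
print(working_at(2, 7, years_in=3))        # 5.0
print(rag_rate(actual=4.2, expected=5.0))  # red
```

Every judgement about being above or below expected then rests on a gradient that was drawn, not measured.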
In the pursuit of progress measures, data is not just bent beyond its elastic limit, it is quite literally made up.
But why do schools persist with such measures even when they admit they are of little or no value and a massive drain on staff time? It’s either inertia - it’s what they’ve always done - or because it’s expected of them.
Subjective measures are still seen as essential 'evidence' of progress, and so schools develop increasingly complicated systems to demonstrate it. Efforts to simplify processes are thwarted by the advice of an army of advisors - SIPs, consultants, even governors - who wince at the perceived risk and push hard for the safety of the numbers.
Accountability is, of course, at the heart of the problem. National measures hold such sway that schools feel the need to emulate them, or at least invent something that looks vaguely similar. In schools where attainment is low there is, quite understandably, an even greater desire for such data ‘to keep the wolf from the door’; and ‘measure more, more often’ becomes the mantra.
Regardless of how robust the data is, most schools would feel naked without their progress measures and any suggestion to scrap them is met with a look of utter incredulity. Data can be a comfort blanket and sometimes it feels as if bad data is preferable to no data at all.
But how robust are the national measures that push schools into this murky world of pseudoscience?
At least Progress 8 involves standardised data at each end and is, therefore, reasonably reliable. Primary progress measures, on the other hand, are far fuzzier, and major question marks hang over their reliability.
First, they are measured from a broad teacher assessment at key stage 1 to a test at key stage 2. Second, key stage 1 is not the start of statutory education; it is three years in. And third, unlike secondary schools, primary schools are in control of their baselines. Well, nearly all primary schools. Junior and middle schools must accept whatever KS1 results pupils arrive with. Next year, as the first cohort with 'new' KS1 assessments reaches the end of KS2 - there are fewer KS1 outcomes and therefore fewer prior attainment groups - primary measures will become even clunkier. And then the highly controversial reception baseline kicks in, ultimately resulting in junior and middle schools’ exemption from progress measures in 2027, handing them “responsibility for evidencing progress based on their own assessment information.”
Despite all this - and the myriad issues relating to mobility, inclusion of pupils who don't sit the tests by assigning nominal scores, and the impact of outliers - a lot of stock is placed in these measures. Once calculated, a confidence interval is applied to the progress score, resulting in a colour-coded performance descriptor displayed in the performance tables for all to see. The certainty with which people form judgements about school effectiveness - the quality of teaching! - from the colour of a box on a website is deeply worrying. It’s little wonder that schools feel the need to second-guess these measures by concocting their own proxies.
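It is worth seeing how wide those intervals can be for a typical cohort. A simplified sketch using the standard score ± 1.96·sd/√n form of a 95% confidence interval (the DfE's published method differs in detail, and the score, standard deviation and cohort size here are hypothetical):

```python
import math

# Simplified 95% confidence interval around a school's mean progress
# score: score +/- 1.96 * sd / sqrt(n). The figures are hypothetical and
# this is not the DfE's exact published calculation.

def progress_ci(score: float, pupil_sd: float, n_pupils: int,
                z: float = 1.96) -> tuple:
    half_width = z * pupil_sd / math.sqrt(n_pupils)
    return (score - half_width, score + half_width)

low, high = progress_ci(score=-1.2, pupil_sd=5.0, n_pupils=30)
print(round(low, 2), round(high, 2))  # -2.99 0.59
```

For a one-form-entry cohort the interval comfortably spans 'below average' and 'average', yet the performance table displays a single colour.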
But can we live without them?
In terms of the statutory measures, Progress 8, whilst not without issues, is at least based on reasonably reliable data. But we have to consider whether primary school progress measures are worth the bother. Despite how much we want progress data to make up for low attainment, if the only reliable data are the key stage 2 tests, then perhaps that’s all we can realistically report. And as for the flight paths and quasi-levels that schools still cling to, it really is time for those to die out.
Progress measures are the root of all evil. It’s time to exorcise our data demons and find a better way.
Find out more:
Hear more about progress from James Pembroke, Richard Selfridge, Becky Allen and Stuart Kime at the Festival of Education. Friday 21st June, The Chapel at 2.15 pm
Read more from James Pembroke on the CEM blog: The Level Illusion