The Level Illusion


By James Pembroke

Back when I worked in a local authority, I was asked to build an Excel-based tracking system, something primary schools could use to track pupil progress through levels. “Pupils make three points per year,” I was told. I nodded earnestly. “Three points is expected progress.”

This, I would soon discover, was an irrefutable truth; the truth on which all tracking systems were built.

“So, pupils need to make more than that if schools are to do well.” I stared solemnly at the blank Excel window on my computer screen – a huge responsibility seemed to be settling on my shoulders.

“Can you make it do that?”

“And can you make it change colour?”

Expected standards?

When national curriculum levels were first devised, as broad indicators of attainment at each key stage, it was decided that Level 2 was the expected standard at key stage 1 and Level 4 was the expected standard at key stage 2.

Levels were then split into a system of fuzzy sublevels, which supposedly afforded a greater degree of granularity and allowed for progress measures over shorter periods.

Levels and sublevels were then assigned point scores – two points per sublevel, six per whole level – so pupils making the ‘expected progress’ of ‘two whole levels’ had made twelve points of progress in total. Twelve divided by the four years of key stage 2 equalled three points per year, or a point per term; and if three points was our annual expected rate of progress, then above expected progress had to be higher than that. What’s higher than three?

Four! And this became our common currency.

But wait! A point is half a sublevel. How can we track in half sublevels? Before we knew it, we had convinced ourselves there really was such a thing as 3b+, and we could even see it in pupils’ books.
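For anyone who never had to internalise this arithmetic, here is a minimal sketch in Python. The point values follow the scale commonly attached to sublevels at the time (two points per sublevel, with 2b at 15 and 4b at 27); the variable names are mine, and the code simply replays the sums above.

```python
# Point scores commonly attached to national curriculum sublevels:
# each sublevel is two points apart, so a 'point' is half a sublevel.
POINTS = {
    "2c": 13, "2b": 15, "2a": 17,
    "3c": 19, "3b": 21, "3a": 23,
    "4c": 25, "4b": 27, "4a": 29,
}

KS2_YEARS = 4  # years 3 to 6

# 'Expected progress': two whole levels, e.g. 2b at key stage 1
# to 4b at key stage 2.
expected_progress = POINTS["4b"] - POINTS["2b"]  # 12 points
points_per_year = expected_progress / KS2_YEARS  # 3.0 points
points_per_term = points_per_year / 3            # 1.0 point

print(points_per_year, points_per_term)  # 3.0 1.0
```

Everything else – a point per term, half-sublevels in pupils’ books – follows mechanically from that one division.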

It was, of course, a mass illusion. And we built an entire accountability system around it.

The removal of levels

But in 2014 the DfE announced that levels would be removed from the national curriculum.

They were too broad and relied too heavily on ‘best fit’ judgements. They emphasised pace through the curriculum over the consolidation and deepening of learning. They implied that progression was linear. And they suggested that pupils on opposite sides of a level boundary were at quite distinct points in their learning when, in reality, they could be closer to each other than two pupils within the same level.

The reasons for the removal of levels were compelling. It was time to radically rethink how we tracked pupil progress.

Levels by another name

Yet in the four years since their supposed removal, levels have proliferated and spread, not as blatant copies of the original system, but covertly under numerous guises, and often underpinned by a remarkably similar series of point scores. Levels by another name.

In fact, very shortly after the DfE’s announcement in 2014, various solutions were rushed out by competing software companies seeking to capitalise on the chaos, or by well-intentioned LAs wanting to offer schools a common approach.

These were usually variations on the ‘emerging-developing-secure’ theme where all pupils began the year categorised as ‘emerging’ and moved through the bands as they covered more content, becoming ‘secure’ after Easter.

Regardless of how the curriculum was designed, the system assumed content to be delivered in neat 33% blocks for the convenience of the school term calendar, and each term’s content was worth a point.
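A deliberately literal sketch makes the absurdity plain. The band names and one-point-per-term scoring come straight from the description above; the function names are mine. Note what is missing: no information about the pupil appears anywhere in the calculation.

```python
# The 'emerging-developing-secure' model, taken at face value:
# the band is a function of the school calendar, nothing else.
BANDS = ["emerging", "developing", "secure"]  # one band per term

def band_for(term: int) -> str:
    """Band for a given term of the year (1, 2 or 3)."""
    return BANDS[term - 1]

def points_by(term: int) -> int:
    """Cumulative points by the end of a term: one per term,
    so three per year - the familiar 'expected progress'."""
    return term

for term in (1, 2, 3):
    print(f"Term {term}: {band_for(term)}, {points_by(term)} point(s)")
```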

The system may not have been aligned with the school’s curriculum, but no matter: everyone breathed easier as pupils continued to make the familiar expected progress of three points per year.

But wait! If three points is still ‘expected progress’, how could anyone make more than expected progress in a curriculum where pupils are pretty much constrained by the parameters of their year’s curriculum content?

Cue the invention of the ‘mastery’ band: a complete misinterpretation in which mastery is viewed not as a process but as an outcome that happens at some point towards the end of the year. Or, to put it another way: a bonus point for the bright kids.

Progress measures driven by requirements

And what about those schools that are under pressure to ‘prove’ progress over even shorter periods? Those schools that must report data every half term?

Invariably, the solution was to subdivide the bands described above into emerging, emerging+, developing, developing+ and the like, labouring under the illusion that the more bands we invent, the more progress pupils make.

Often driven by the requirements of MATs and LAs, six-, seven-, and even nine-point systems are common; and most worryingly, such systems still appear to satisfy Ofsted inspectors despite all the reassuring guidance to the contrary.

In many schools up and down the country, these points-based progress measures – derived from subjective, bias-prone teacher assessment – are treated as SI units and used for multiple purposes: tracking pupil progress, reporting to governors, external oversight, teacher performance management, even comparing the performance of schools.

Impact on teacher workload

All of this is an illusion. Such systems clearly replicate the issues of levels; in fact, they magnify those issues by creating ever more subdivisions to give a false impression of precision. And it is deeply troubling that so many have bought into it.

Even more troubling is how much teacher time is taken up by these procedures; procedures that have little or no impact on learning and are generally devised solely to meet the needs of external agencies.

All teachers need to record is whether pupils are where they expect them to be at a given point in time, based on what has been taught so far. Working below, working at, or working above expectations should suffice for most reporting purposes.

If schools want more granularity then they should consider using a high-quality standardised test that provides insight into strengths and weaknesses, and places pupils’ attainment onto a national bell curve.

And if flawed, neo-level tracking practices are driven purely by a desperate desire to measure progress, then perhaps it’s time we considered whether we can really measure progress at all.

About the author

James Pembroke (@jpembroke) established Sig+ in 2014 after working as a data analyst in LA school improvement and post-16 sectors for 10 years. He now works with Insight (insighttracking.com) and blogs at sigplus.co.uk/blog.
