We all know that academic progress is an individual thing. Making progress relies on a whole range of influencing factors and students make progress at different times and at different rates.
All too often the focus on exam results fails to take account of the mammoth steps students and teachers have sometimes taken on the way to attaining their personal summits.
So, in the interests of fairness, value-added measures are intended to offer a fairer picture of the progress students have actually made.
The concept of value-added was pioneered by CEM founder Carol Taylor Fitz-Gibbon, whose vision of value-added systems was not led by governments as part of a top-down accountability process; rather, the demand for value-added was school-led, fed by teachers’ desire to have trustworthy, confidential information about how well they were doing.
Value-added feedback is a fair measure of the progress that students have made. Rather than relying solely on exam results, it takes account of where each student started from and the progress they made relative to other, similar students. And we know that part of what is captured by value-added estimates reflects the genuine impact of a teacher on students’ learning.
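The core idea can be sketched in a few lines of code. This is a deliberately minimal illustration of the general approach, not CEM's actual model: predict each student's outcome from a baseline (prior attainment) score with a simple regression, then treat the residual, actual result minus predicted result, as that student's value-added. The scores below are made-up example data.

```python
import numpy as np

# Illustrative data: prior attainment and later exam scores (percent)
baseline = np.array([45.0, 60.0, 55.0, 70.0, 50.0])
outcome = np.array([50.0, 58.0, 62.0, 75.0, 48.0])

# Least-squares line: expected outcome given where each student started
slope, intercept = np.polyfit(baseline, outcome, 1)
predicted = slope * baseline + intercept

# Value-added: how far each student finished above or below the
# outcome typical of similar students
value_added = outcome - predicted

for b, o, va in zip(baseline, outcome, value_added):
    print(f"baseline {b:5.1f}  outcome {o:5.1f}  value-added {va:+6.2f}")
```

By construction, the value-added scores average to zero across the cohort: a positive score means a student progressed more than similar students did, a negative score means less. Real value-added systems refine this with multiple baseline measures and statistical adjustments, but the residual-from-expectation logic is the same.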
We also know that value-added measures can help drive improvement by revealing progress across the whole institution, identifying performance above or below expectation in every curriculum area, and enabling comparison with other schools and other school types.
Value-added measures can help schools understand what is working for them and what isn’t.
But, when it comes to success, it’s clear that manifold influencing factors might mean that what works for one school may not work in another, and what worked in your school one year may not work the next.
Value-added reports, therefore, allow you to target improved outcomes by helping you to ask the right questions about individual subject strengths, share best practice between departments, support judgements about assessment and support, and tailor aspirational target-setting.
While it seems to make sense to judge the effectiveness of teaching from its impact on assessed learning, we know that, in practice, many factors beyond the teaching itself influence students' achievements.
For example, one of the simplest ways for a teacher to achieve high value-added scores is to follow a teacher whose value-added is low. Another is to teach top sets, selected for their likelihood of attaining high grades.
To attribute an 'effect' to an individual teacher or school, therefore, is partly to judge factors that lie outside that individual's control.
CEM has worked with schools and teachers for more than 30 years, providing value-added measures and helping them to evaluate how well they are doing and to drive improvement.
The intention has always been that while CEM sends performance information into schools, it is those who work in the schools who can best interpret it. Perhaps the valuable thing is not so much the result as the questions it helps you to ask.
Ultimately, for a judgement about whether teaching is effective to be seen as trustworthy, it must be checked against the progress being made by students.
If you use CEM assessments, you can get a head-start on asking those important questions about impact on the day you receive and upload students' exam results. In most cases, processing the results takes less than a day, which means you can get an understanding of performance, start asking the right questions and get ahead with planning for the new term.
For more information on improving educational outcomes, read What Makes Great Teaching? and Improving Education: A Triumph of Hope over Experience.