The results are in… so what next?


By Sue Holt

The GCSE and IGCSE results have been published so no doubt there have been celebrations, commiserations and, in some cases, recriminations.

So what factors should schools consider when looking at the results?

In most English-curriculum international schools, tracking overall value-added at GCSE/IGCSE is the best measure of school performance. If you can show that year on year the students in your school perform significantly better at age 16 than students of similar starting ability elsewhere, then you certainly have cause to celebrate - but what is it exactly that you are doing right?
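
As a rough sketch of the underlying idea (not CEM's actual statistical model - the names and numbers here are invented for illustration), value-added can be thought of as the average gap between the grades students achieve and the grades predicted from their baseline:

```python
# Hypothetical sketch of a simple value-added calculation (illustrative only,
# not CEM's model): value-added = mean(achieved - predicted grade points),
# where predictions come from a baseline test such as MidYIS or Yellis.

def value_added(students):
    """Average residual between achieved and predicted grade points."""
    residuals = [s["achieved"] - s["predicted"] for s in students]
    return sum(residuals) / len(residuals)

# Invented example cohort: predicted vs achieved grade points on the 9-1 scale.
cohort = [
    {"name": "A", "predicted": 5.0, "achieved": 6.0},
    {"name": "B", "predicted": 6.5, "achieved": 6.0},
    {"name": "C", "predicted": 4.0, "achieved": 5.5},
]

print(round(value_added(cohort), 2))  # positive => cohort beat predictions
```

A consistently positive figure year on year is the kind of signal described above; a single year's number says little on its own.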


Are factors other than your brilliant teaching influencing these results?

If you had an influx of bright EAL students in year 9 or at the beginning of year 10, you probably noticed that their overall baseline scores, as measured for example by MidYIS or Yellis, were depressed by their vocabulary scores.

School interventions and immersion in an English speaking environment can have dramatic positive effects on outcomes in all subjects. I have often seen EAL students achieve on average two or three grades above predictions, thus really increasing the cohort’s overall value-added. Does this apply in your school?


What does your baseline tell you?

Of course, you cannot track value-added unless you have a reliable baseline. When interviewing students for admission, I have found that school reports are, in general, a poor indication of what students know and can do.

In the past, students arriving from schools following the English National Curriculum have had reports indicating National Curriculum Levels which were reasonably reliable, at least in the core subjects of English, maths and science. The removal of ‘levels’ has exacerbated the problem of placing students, as a report which states ‘making good progress’ gives no indication of a starting point or final goal.

Most international schools are fairly non-selective, but all schools need to know something of a student's starting point so that they can address their individual needs. A good baseline test is a useful tool for speeding up this process.


How does staff turnover affect your value-added?

The rapport between teachers and students is an important factor in student motivation, and students, in general, do not like change.

Teachers taking over year 11 classes often feel stressed - I am sure many headteachers and heads of department have heard anxious comments along the lines of 'They have huge gaps, I don't know how I'm going to get all this done before the exam'. Often these comments are groundless, but teachers fear that they will be judged on the results even if they have not taught the class for the whole course.

Therefore, when looking at individual subject results it is a good idea to consider ‘continuity’ as a factor which may have influenced the results either positively or negatively. If your staff turnover is high, then this effect may be more pronounced.

This makes it all the more essential that good data is passed on to new staff. Of course, the effectiveness of the teaching staff is only one of the factors that influence value-added: resourcing, school culture and quality of leadership, to name but a few, all have significant effects.

Whilst teachers are obviously accountable for their performance, value-added results should be treated with extreme caution if used as part of the performance management process.


How do you measure value-added with a mix of new GCSEs and old GCSEs?

This year the exam results have hit the headlines because of the phased-in move from A*-G grades to 9-1 grades, with almost everyone describing the new grades in terms of their equivalents in 'old money'. But as this is not a direct conversion, it has caused confusion.

What are employers and universities going to require in terms of 'standard' passes (grade 4) and 'strong/good' passes (grade 5)? Might some schools recommend that students who have only achieved a standard pass try again to improve this? How will these grades affect entry into post-16 courses?
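
Since grade 4 is the 'standard pass' and grade 5 the 'strong/good pass' on the new scale, the thresholds schools and employers will be reasoning about can be sketched in a few lines (a minimal illustration, not any official tool):

```python
# Grade 4 is the "standard pass" and grade 5 the "strong pass" on the new
# 9-1 scale; this helper simply classifies a numeric grade against those
# two thresholds.

def classify(grade):
    """Classify a 9-1 grade against the standard/strong pass thresholds."""
    if grade >= 5:
        return "strong pass"
    if grade == 4:
        return "standard pass"
    return "below standard pass"

print(classify(5))  # strong pass
print(classify(4))  # standard pass
print(classify(3))  # below standard pass
```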

At this point, it is worth looking at the baseline again. If a student has disappointing GCSE/IGCSE results we may tend to lower our expectations of them for post-16 courses; conversely, if they have ‘over-performed’ we may have unrealistically high expectations.

I have found the CEM IBE/Alis assessment useful in re-evaluating the baseline (and, of course, for new students who have not taken GCSE/IGCSE). The chances graphs help prevent ambition being limited by a stark, single predicted grade and can be used to promote positive conversations with both staff and students about the strategies to be used.
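
To illustrate the idea behind a chances graph (the probabilities below are invented for illustration, not CEM's actual figures): instead of one predicted grade, each student sees a distribution across possible grades, which keeps the realistic upside visible:

```python
# Invented illustration of a "chances graph": a probability distribution
# over possible grades rather than a single point prediction.
chances = {"7": 0.10, "6": 0.25, "5": 0.40, "4": 0.20, "3": 0.05}

# A single "predicted grade" would just be the most likely outcome...
most_likely = max(chances, key=chances.get)

# ...but the distribution also shows the upside: here there is a 35% chance
# of a grade 6 or better, which a single predicted grade of 5 would hide.
upside = chances["7"] + chances["6"]

print(most_likely, round(upside, 2))
```

This is the sense in which a distribution supports a more ambitious conversation than a single number.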


Just how good are your results?

In the 20 years that I have been using CEM data in schools in seven different countries, I have found it a useful tool for asking questions at all stages of development planning: What is the cohort like? Does the curriculum meet their needs? If not, how can we design it to meet their needs? Does the teaching and learning in the classroom show that students' potential is being maximised? Which of our strategies have worked?

Of course, the results are not an end point, simply a staging post as we continue on this journey of maximising student potential and making the most of school performance data.

Find out more:

CEM’s value-added feedback is available for schools on results day. Upload your GCSE and A Level results to get your value-added feedback on the same day.