Cambridge CEM Blog

Data Quality at CEM

Written by Andrew Lyth | Sep 16, 2021 9:31:04 AM

Each year CEM processes the results for hundreds of thousands of students participating in our assessments. The results from those assessments are used in multiple ways:

  • To help teachers understand their students’ strengths and weaknesses.
  • To help teachers and school leaders to set appropriate targets for their students and for the school as a whole.
  • To help school leaders assess how well the school is performing in comparison to other schools.

Given the uses of the results, it is clearly important that the information we provide is accurate, fit for purpose, complete, timely and easily understood. Achieving this requires a range of quality-assurance processes to be in place, and it draws on expertise from different teams across CEM.

Quality assurance on CEM assessments

When we develop new assessment questions and new assessments, these must meet several quality control criteria:

  • Validity
    • Does each assessment section measure the construct that it is intended to measure?
    • Are the inferences that may be made from the assessment’s outcome appropriate?
    • Does an assessment cover a sufficient range of the construct and not include construct-irrelevant items?
    • Are questions statistically robust, and do they perform well across a range of abilities and at the appropriate difficulty for the assessment year group? (Typical item statistics for checks like these are sketched after this list.)
  • Reliability
    • Do the questions in a section measure the same construct?
    • Is the marking of questions consistent and accurate?
  • Fairness
    • Are the questions free from bias towards particular sub-groups within the assessment year group, e.g. by gender, culture or ethnicity?
    • Are questions accessible to the assessment year group regardless of their context and background?
  • Accessibility
    • Is the assessment accessible to all students, or are adaptations needed, such as extra time provision?
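
We will not go into the exact statistics behind these checks here, but the statistical robustness and internal-consistency questions above are commonly answered with item facility, item-rest discrimination and Cronbach’s alpha. The sketch below is purely illustrative: it assumes a simple matrix of 0/1 item scores and is not the production code behind our assessments.

```python
import numpy as np

def item_statistics(responses: np.ndarray) -> dict:
    """Basic item statistics for a (students x items) matrix of 0/1 scores.

    Returns item facility (proportion correct), item-rest correlation
    (discrimination) and Cronbach's alpha for the section as a whole.
    """
    n_students, n_items = responses.shape
    totals = responses.sum(axis=1)

    # Facility: proportion of students answering each item correctly.
    facility = responses.mean(axis=0)

    # Discrimination: correlation of each item with the rest of the section.
    discrimination = np.empty(n_items)
    for i in range(n_items):
        rest = totals - responses[:, i]
        discrimination[i] = np.corrcoef(responses[:, i], rest)[0, 1]

    # Cronbach's alpha: internal consistency of the section.
    item_vars = responses.var(axis=0, ddof=1)
    alpha = (n_items / (n_items - 1)) * (1 - item_vars.sum() / totals.var(ddof=1))

    return {"facility": facility, "discrimination": discrimination, "alpha": alpha}
```

Items with very low facility, or discrimination close to zero, would typically be sent back for review before an assessment is used live.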

Checks on student progress through the assessment

There may be circumstances which affect a student’s progress through an assessment to the extent that their results may be unreliable, for example if they feel unwell, are disengaged, or do not complete the assessment in the time available. Where possible we identify these issues, and the affected results are automatically flagged.

For example, on the Vocabulary and Mathematics sections of CEM’s MidYIS, Yellis and Alis assessments, students’ results must comply with the following (a sketch of checks of this kind follows the list):

  • A minimum number of questions must be attempted.
  • The student’s ability estimate must have stabilised.
  • The student must have spent a minimum amount of time on the assessment.
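
The precise thresholds vary by assessment and are not listed here, but a check of this kind can be sketched as follows. The numbers (minimum questions, stability tolerance, minimum time) are illustrative placeholders, not our actual criteria.

```python
from dataclasses import dataclass

@dataclass
class SectionAttempt:
    questions_attempted: int
    ability_estimates: list[float]   # running ability estimate after each question
    time_taken_seconds: float

def flag_unreliable(attempt: SectionAttempt,
                    min_questions: int = 15,
                    stability_window: int = 5,
                    stability_tolerance: float = 0.2,
                    min_time_seconds: float = 300.0) -> list[str]:
    """Return the reasons, if any, why a section result should be flagged.

    All thresholds are illustrative; an empty list means no flags were raised.
    """
    flags = []

    if attempt.questions_attempted < min_questions:
        flags.append("too few questions attempted")

    # The ability estimate counts as stable if it barely moves over the
    # last few questions answered.
    recent = attempt.ability_estimates[-stability_window:]
    if len(recent) < stability_window or max(recent) - min(recent) > stability_tolerance:
        flags.append("ability estimate has not stabilised")

    if attempt.time_taken_seconds < min_time_seconds:
        flags.append("too little time spent on the section")

    return flags
```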

Meaningful and accurate scores

To make sure that our standardised scores are meaningful, we have a clearly defined target population that we standardise against. For example, for MidYIS and Yellis we provide nationally standardised scores, with the target population being all students in mainstream secondary schools.

Additionally, to check that our standardised scores are representative of those student populations, we need accurate estimates of the mean and standard deviation of students’ performances in those populations. To achieve this, we use the following techniques:

  • We weight our sample of students’ performances by school sector (state, grammar and independent), so that the weighted percentages of students in each sector match the national proportions of 89.1%, 3.9% and 7% respectively (this weighting step is sketched below the list).

  • We control for any bias in the ability profile of our school sample by using regression models that link schools’ performances on the CEM assessments to their performances at GCSE or A-Level.
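
As an illustration of the weighting step, the sketch below re-weights each student so that the weighted sector mix matches the national proportions above, then standardises raw scores against the weighted mean and standard deviation. It assumes the conventional standardised-score scale of mean 100 and standard deviation 15, and it omits the regression adjustment described in the second bullet; it is an illustration rather than our production code.

```python
import numpy as np

# National proportions of students by school sector, as quoted above.
NATIONAL_PROPORTIONS = {"state": 0.891, "grammar": 0.039, "independent": 0.070}

def sector_weights(sectors: list[str]) -> np.ndarray:
    """Weight each student so the weighted sector mix matches the national mix."""
    sectors = np.asarray(sectors)
    weights = np.ones(len(sectors))
    for sector, target in NATIONAL_PROPORTIONS.items():
        mask = sectors == sector
        if mask.any():
            # Up-weight under-represented sectors, down-weight over-represented ones.
            weights[mask] = target / mask.mean()
    return weights

def standardise(scores, sectors, target_mean=100.0, target_sd=15.0):
    """Standardise raw scores against the weighted mean and standard deviation."""
    scores = np.asarray(scores, dtype=float)
    w = sector_weights(sectors)
    mean = np.average(scores, weights=w)
    sd = np.sqrt(np.average((scores - mean) ** 2, weights=w))
    return target_mean + target_sd * (scores - mean) / sd
```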

Predicted grades and value-added data

The predictive data we provide are calculated using data from previous students who have taken a CEM assessment and later taken external examinations such as GCSE, Scottish Nationals, IB Diploma, or A-Levels.

To be included in the set of subjects we provide predictions and value-added data for, a subject’s sample must meet our quality control criteria in terms of sample size, number of schools, sampling error and correlation.

To ensure that the samples are representative in terms of school sector, we apply weighting factors so that the percentage of students from independent schools matches the national figure of 7%.
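
In outline, the quality control check on a subject can be expressed as a simple threshold test. The thresholds below are illustrative placeholders rather than our published criteria; sampling error is taken here to be the standard error of the subject’s mean outcome score, and correlation is the correlation between the CEM baseline score and the examination outcome in the sample.

```python
import math

def subject_qualifies(n_students: int,
                      n_schools: int,
                      score_sd: float,
                      correlation: float,
                      min_students: int = 200,
                      min_schools: int = 20,
                      max_sampling_error: float = 0.1,
                      min_correlation: float = 0.5) -> bool:
    """Decide whether a subject's sample supports predictions and value-added.

    All thresholds are illustrative placeholders, not published criteria.
    """
    # Standard error of the subject's mean outcome score.
    sampling_error = score_sd / math.sqrt(n_students)
    return (n_students >= min_students
            and n_schools >= min_schools
            and sampling_error <= max_sampling_error
            and correlation >= min_correlation)
```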

Rapid results

As well as making sure that students’ results are accurate and meaningful, it is also important that this information can be accessed in good time and understood and interpreted correctly. Students’ results from our assessments are available in full just a few hours after completion, presented in a variety of tables and charts alongside helpful documentation and support.

Responding to change

Many of these tasks are ongoing, because students and the education environment around them change over time. As education changes, so does the information that school leaders and teachers require.

By maintaining an ongoing dialogue with our assessment users, we keep our information up to date, relevant and useful.

 
