Reading Time: approx. 4 minutes
By Alex Quigley
Having been a teacher for nearly fifteen years, I may surprise you with this admission: I hold a persistent fear that I have been grossly undertrained in a crucial aspect of learning in the classroom, namely assessment.
I don’t think I am unusual. Teachers are a spectacularly undertrained profession on the whole, given the vast complexity of learning. We do our initial teacher training under immense pressures and tensions, and then, after a little under a year, we are chucked our teaching qualification and launched permanently into the deep end of the classroom.
Understanding assessment is too often a singular item on the agenda for a training day here or there. Usually, it is in the guise of analysing some high-stakes assessment or other. As we seek out cryptic answers from examiners and exam boards, hunting down marginal improvements in exam performance, we miss the important stuff. We miss out on high-quality diagnostic assessment in the chase for pressurised exam results, or we pay it mere lip service.
What we need, as a profession, is a much better understanding of assessment and its integral role in all learning. We need to immerse ourselves in the complexities of assessment if we are to read the runes of learning more successfully. In my recent forays into reading the evidence and seeking out training on assessment, I have learned the following five things:
- We need to separate out the value of tests from the damaging effects of accountability. Right now, like most years, high-stakes testing is under scrutiny. The validity, and sheer existence, of KS2 SATs is being challenged. Unfortunately, our experience of bad testing has clouded our sense that good testing can prove useful for learning and offer a reliable measure of it. We must separate out the withering effects of high-stakes accountability and recognise that testing is the most reliable way we have of measuring the complexities of learning. Indeed, good tests are an integral part of learning and can contribute to increasing our students’ learning.
- We need to utilise the ‘Testing Effect’. The beneficial effect of taking tests is one of the best-researched findings in all of education. The problem is that the very word ‘test’ too often provokes revulsion and fear! Cognitive scientists have helpfully rebranded testing as ‘retrieval practice’ and we probably should too. A good diagnostic test could include a quiz, self-testing using flashcards, or getting a student to put away their notes and try to reconstruct what they have learned in a graphic organiser. Rather than reading and re-reading their notes, our students benefit from tests in this fashion. It helps strengthen their memory of what they have learned.
Read more about the testing effect here: ‘The Testing Effect is Alive and Well with Complex Materials’.
- Practising mock examinations over and over is no guarantee of success. Although the ‘testing effect’ is a real and established phenomenon, we shouldn’t mistake our hulking great summative mock exams for the ideal test to leverage that desired effect. Given how complex (and long) ‘mock exams’ prove, it can be hard for students to perform well in, and remember much from, such an experience. Mock exams may enhance our students’ ‘exam craft’, but they may not learn much more in the time-pressured act. Instead, we should encourage more concise, memorable tests that lower the stakes but increase the likelihood of learning. Although cumulative quizzing – a ‘test’, remember – may not churn out data for whole-school data trawls, it may better consolidate the knowledge and understanding our students need. I have written about how this has changed my teaching and approach to diagnostic assessment in English, by way of example, here: ‘How to Train a GCSE Essay Writer – Part 1’ and here: ‘How to Train a GCSE Essay Writer: Part 2’.
- Teacher assessment is biased. Now, it is rather uncomfortable to contemplate the fact that our decisions and assessments aren’t wholly reliable, but it is the truth. Large-scale evidence, such as the meta-analysis from John M Malouff (‘Bias in Grading: A Meta-analysis of Experimental Research Findings’), has shown that we are habitually biased around characteristics like prior performance, race, SEND labels and even physical attractiveness! Now, teachers of course offer fantastic, personalised feedback that can be invaluable, but we should also recognise the value of standardised testing in – well, setting a consistent standard. It can save on workload too.
Daisy Christodoulou writes a great blog on ‘Why is teacher assessment biased?’
- Assessments need to have validity and reliability. We all want to use the best assessments available to us in the classroom. The problem is that we have too little expert knowledge of the characteristics of good assessment. Validity and reliability are crucial concepts. To put it simply, a test is valid if it measures what it is supposed to measure. A mathematics test, say, may have such ‘wordy’ questions that it becomes a test of literacy as much as of the actual mathematical concepts. An assessment is reliable if taking it multiple times produces similar results (longer tests with more items are usually better in this regard). This handy guide is useful in working out how you can design better assessments: Reliability and Validity.
There is a lot more I need to learn about assessment and testing. As a profession, if we are to better manage the workload of teachers, and respond to the good, the bad and the ugly of national tests, we need to learn much more about the intricacies of good assessment.
Alex Quigley is Director of Huntington Research School. He is the author of ‘The Confident Teacher’ @huntingEnglish