Finding the ‘Sweet Spot’ of Teaching, Learning and Assessment


By Mark Frazer, Teaching and Learning Lead, CEM

In my previous blog post, I considered some of the findings from the 2015 PISA study.

The data appear to suggest that independent or student-led activities do not support outcomes as effectively as more traditional teacher-led activities.

In reality, can either approach be solely relied upon?

Blended approaches to teaching and learning

A 2017 report by McKinsey & Company describes a ‘sweet spot’, or blend of teaching styles, which is likely to work best.

This probably doesn’t come as a surprise to many teachers who will regularly employ a combination of teaching and learning strategies in their classrooms.

For many, this is simply the craft of teaching, learning and assessment.

Performance expectations

Most classroom practitioners will be able to call to mind an occasion when a child has not performed as well as expected in an assessment.

I have a vivid memory, from my early days as a headteacher, of a very capable Year 6 child who had an unfortunate experience immediately prior to taking one of the KS2 maths SATs papers.

During a short break before the test, this girl fell over in the playground and burst her lip. Although this was undoubtedly unpleasant for her, she felt well enough to come back into the classroom and attempt the test. Consequently, a child who we were sure was (in the old parlance) ‘a secure Level 5’ came out of the experience with a Level 3 overall.

My point is that a single assessment outcome does not always reflect a student’s true ability.

If only there were a way of measuring performance over a longer period of time and gathering evidence to build a more representative picture…

Gathering the evidence

A programmatic approach to assessment draws upon a range of evidence and aggregates multiple data points over time.

The evidence should pull together assessment information from a range of sources, and reflect outcomes from a variety of learning activities; this might take the form of test scores, observations, examples of classwork, photographs, or anything else of value.
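
For colleagues who keep this evidence in a spreadsheet or a simple script, the short Python sketch below illustrates the basic idea of aggregating multiple data points per pupil over time; the pupils, sources and scores are invented purely for illustration and do not represent any CEM format.

# Illustrative sketch only: pooling several assessment data points per pupil,
# so that no single result dominates the overall picture. Field names are hypothetical.
from collections import defaultdict
from statistics import mean

evidence = [
    {"pupil": "A", "source": "test", "term": "Autumn", "score": 72},
    {"pupil": "A", "source": "classwork", "term": "Autumn", "score": 80},
    {"pupil": "A", "source": "test", "term": "Spring", "score": 45},  # one bad day
    {"pupil": "A", "source": "observation", "term": "Spring", "score": 78},
]

# Group every data point by pupil, then summarise across sources and terms.
by_pupil = defaultdict(list)
for record in evidence:
    by_pupil[record["pupil"]].append(record["score"])

for pupil, scores in by_pupil.items():
    print(f"Pupil {pupil}: {len(scores)} data points, average {mean(scores):.1f}")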

Van der Vleuten et al. (2015) offer Twelve Tips for Programmatic Assessment, which make it clear that no single instrument will reliably assess everything; indeed, it is highly likely that such a ‘holy grail’ does not exist.

I am not about to summarise all twelve, but there are three tips which, I suggest, should inform all schools’ assessment policies.

  1. Adopt a robust system for collecting information:

In programmatic assessment, large amounts of information about the learner are gathered over time. Being able to handle this information in a manageable and meaningful way is vital. Involving users in the development of a system is essential if they are to invest in it.

  2. Ensure trustworthy decision making:

To be valid, high-stakes decisions must be based on multiple data points, broadly sampled across contexts and methods, and gathered by multiple assessors over time. In this way, unexpected events will not have such a detrimental impact on the reliability of your data, and your students will be fairly represented.

  3. Monitor and evaluate the learning effect of the programme and adapt it with use:

Just as a curriculum needs evaluation in a plan-do-act cycle, so too does an assessment programme. Assessment systems can sometimes produce unexpected or unintended consequences, which must be guarded against. In addition, routine activities may become trivialised and irrelevant. Monitor, evaluate, and adapt your assessment programme as you go to ensure it remains useful and fit for purpose.

Find out how to adapt your assessment programme: Assessment without Levels: Using CEM data

References

Driver, R., Newton, P., & Osborne, J. (2000). Establishing the norms of scientific argumentation in classrooms. Science Education, 84(3), 287-312.
Duschl, R. A., & Osborne, J. (2002). Supporting and promoting argumentation discourse in science education.
Kind, P. M. (2013). Establishing assessment scales using a novel disciplinary rationale for scientific reasoning. Journal of Research in Science Teaching, 50(5), 530-560.
Lazonder, A. W., & Harmsen, R. (2016). Meta-analysis of inquiry-based learning: Effects of guidance. Review of Educational Research, 86(3), 681-718.
Lederman, J. S., Lederman, N. G., Bartos, S. A., Bartels, S. L., Meyer, A. A., & Schwartz, R. S. (2014). Meaningful assessment of learners' understandings about scientific inquiry—The views about scientific inquiry (VASI) questionnaire. Journal of Research in Science Teaching, 51(1), 65-83.
Macpherson, A. C. (2016). A comparison of scientists’ arguments and school argumentation tasks. Science Education, 100(6), 1062-1091.
Schuwirth, L. W., & van der Vleuten, C. P. (2012). Programmatic assessment and Kane’s validity perspective. Medical Education, 46(1), 38-48.
Van der Vleuten, C. P., Schuwirth, L. W. T., Driessen, E. W., Dijkstra, J., Tigelaar, D., Baartman, L. K. J., & van Tartwijk, J. (2012). A model for programmatic assessment fit for purpose. Medical Teacher, 34(3), 205-214.
Van der Vleuten, C. P., Schuwirth, L. W. T., Driessen, E. W., Govaerts, M. J. B., & Heeneman, S. (2015). Twelve tips for programmatic assessment. Medical Teacher, 37(7), 641-646.