
What is PISA telling us, and what can teachers do about it?


By Mark Frazer, Teaching and Learning Lead, CEM

Since the Organisation for Economic Co-operation and Development (OECD) published the results of the most recent Programme for International Student Assessment (PISA), many researchers and education bloggers have drawn attention to, and questioned, those results. For example, Greg Ashman has discussed the implications of the 2015 study on several occasions.

The 2015 PISA study assessed the performance of over half a million 15-year-olds in 72 countries using a series of computer-based tasks and questionnaires. The 2015 assessment focused on science, mathematics, reading, collaborative problem solving and financial literacy. Since the study's data were published in December 2016, anyone has been able to examine the findings for themselves.

My area of interest is the teaching of science and, in particular, the apparent value of students engaging in meaningful scientific discourse and reasoning with each other.

One item from the questionnaire which caught my attention was: ‘When learning science, students are asked to argue about science questions’. Participants were required to indicate whether this happened in all lessons, in most lessons, in some lessons, or never or hardly ever.
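For readers who want to examine the data themselves, here is a minimal sketch of one way to start, assuming Python with pandas and a locally downloaded copy of the OECD's public-use SPSS student file. The filename and the ST_ARGUE item code are assumptions that should be checked against the OECD download page and the PISA 2015 codebook.

```python
# A minimal sketch of loading the PISA 2015 public-use student file.
# Assumes the SPSS student-questionnaire file has been downloaded from
# the OECD PISA site; the filename below is an assumption, not a given.
import pandas as pd  # pd.read_spss requires the pyreadstat package

# CNT (country) and PV1SCIE..PV10SCIE (the ten plausible values for
# science) are standard PISA variable names. ST_ARGUE is a placeholder
# for the 'argue about science questions' item; look up its real code
# in the PISA 2015 codebook.
pv_cols = [f"PV{i}SCIE" for i in range(1, 11)]
students = pd.read_spss("CY6_MS_CMB_STU_QQQ.sav",
                        usecols=["CNT", "ST_ARGUE"] + pv_cols)

# Summarise each student's science score as the mean of the plausible
# values. This is a simplification: a full PISA analysis treats the
# plausible values separately and applies the student weights (W_FSTUWT).
students["science"] = students[pv_cols].mean(axis=1)
print(students[["CNT", "ST_ARGUE", "science"]].head())
```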

The data were not what I expected to find.

Arguing about science

There is a widely held view that encouraging students to engage in independent and collaborative work with others is a positive approach that will improve educational outcomes. Furthermore, many seem to accept that traditional, strongly didactic teaching styles do not have the same impact. Inquiry-led learning has been popularised and now sits at the heart of many science curricula.

During the past two decades, a number of researchers have placed the development of skills related to scientific practice at the centre of science education. For example, Driver, Newton and Osborne (2000), Duschl and Osborne (2002), Kind (2013) and Macpherson (2016) all support the importance of argumentation, scientific reasoning and critique by students when learning about science.

By engaging in these practices, it is thought, students will learn to communicate scientific knowledge more effectively, improve their procedural understanding and develop as authentic scientists.

However, the findings from large-scale studies such as PISA do not unequivocally support this view; indeed, they appear to contradict currently accepted theories of teaching and learning.

Student-led versus teacher-led learning

The graph below shows a puzzling relationship between the frequency with which students are involved in scientific argumentation and their overall score in the PISA assessment. The data appear to suggest that scores decrease with more frequent engagement in scientific argumentation. The graph shows only a selection of participating nations (the UK and some of its closest neighbours), but the picture across the world seems to be quite similar. (Apologies to maths specialists, who probably will not like my use of line graphs in this situation, but I think they help to illustrate the point quite well.)
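To reproduce this kind of chart, a sketch along the following lines would work, continuing the hypothetical students DataFrame from the earlier snippet. The response labels in order are assumed from the questionnaire wording rather than taken from the codebook, and the means are unweighted.

```python
import matplotlib.pyplot as plt

# The four response categories, ordered from least to most frequent.
# These labels are assumed from the questionnaire wording; the actual
# value labels in the data file may differ.
order = ["Never or hardly ever", "In some lessons",
         "In most lessons", "In all lessons"]

# Mean science score per country and response category (unweighted).
summary = (students.groupby(["CNT", "ST_ARGUE"], observed=True)["science"]
                   .mean()
                   .unstack("CNT")
                   .reindex(order))

# One line per country, echoing the post's chart of the UK and its
# closest neighbours.
summary.plot(marker="o", figsize=(8, 5))
plt.xlabel("'Students are asked to argue about science questions'")
plt.ylabel("Mean PISA 2015 science score")
plt.tight_layout()
plt.show()
```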

In addition, if we look at the reported frequency of the participating UK students’ involvement in a range of pedagogical strategies and their overall PISA scores, we again see some surprising results.

The data appear to suggest that independent or student-led activities do not support outcomes as effectively as more traditional teacher-led activities, which seem to have a more positive impact on overall PISA scores. The green line shows the effect of teachers clearly explaining scientific ideas to their students: the more often this happens, the better the outcome for students.

Be aware

Amongst these possibly counter-intuitive messages, the graphs do appear to suggest that there is likely to be an optimum combination of teaching and learning strategies. A 2017 report by McKinsey & Company describes this ‘sweet spot’: a blend of teaching styles that is likely to produce the best outcomes in PISA.

What then, should we remember when looking at these sometimes surprising findings?

  1. PISA is just one assessment and, as such, we should not base our judgements on this evidence alone.
  2. The PISA data and the associated questionnaire responses should be treated with caution, as they tell us nothing about the quality of the teaching, or of the support, that respondents received.
  3. Learning is a nebulous and often imperceptible process which is uniquely personal to individual students. Consequently, we need to take a number of measurements and use expert opinion in order to validate our assessment judgements.
  4. Like many things in life, a healthy balance is generally needed. In the educational context, that may mean that teachers should plan to teach their students using a combination of instructional and inquiry-based strategies.

Effective teachers are skilled in drawing together different strands of evidence to form an overall impression of how well their students are performing. This is the art of assessment: the most effective practitioners act upon this collated information, using it to inform their teaching, support their students and help them to make further progress.

In the next post, we will explore the evidence behind a blended-strategies model which teachers may wish to consider.

Find out more:

Read 6 Elements of Great Teaching
Download Professor Rob Coe’s presentation How can teachers learn to be better teachers?

References

Driver, R., Newton, P., & Osborne, J. (2000). Establishing the norms of scientific argumentation in classrooms. Science Education, 84(3), 287-312.
Duschl, R. A., & Osborne, J. (2002). Supporting and promoting argumentation discourse in science education. Studies in Science Education, 38(1), 39-72.
Kind, P. M. (2013). Establishing assessment scales using a novel disciplinary rationale for scientific reasoning. Journal of Research in Science Teaching, 50(5), 530-560.
Lazonder, A. W., & Harmsen, R. (2016). Meta-analysis of inquiry-based learning: Effects of guidance. Review of Educational Research, 86(3), 681-718.
Lederman, J. S., Lederman, N. G., Bartos, S. A., Bartels, S. L., Meyer, A. A., & Schwartz, R. S. (2014). Meaningful assessment of learners' understandings about scientific inquiry—The views about scientific inquiry (VASI) questionnaire. Journal of Research in Science Teaching, 51(1), 65-83.
Macpherson, A. C. (2016). A comparison of scientists’ arguments and school argumentation tasks. Science Education, 100(6), 1062-1091.
Van der Vleuten, C. P., Schuwirth, L. W. T., Driessen, E. W., Dijkstra, J., Tigelaar, D., Baartman, L. K. J., & van Tartwijk, J. (2012). A model for programmatic assessment fit for purpose. Medical Teacher, 34(3), 205-214.
Van der Vleuten, C. P., Schuwirth, L. W. T., Driessen, E. W., Govaerts, M. J. B., & Heeneman, S. (2015). Twelve tips for programmatic assessment. Medical Teacher, 37(7), 641-646.