- How effective are interventions designed to help under-aspiring pupils?
Carol Taylor Fitz-Gibbon and Neil Defty, Curriculum, Evaluation and Management Centre, Mountjoy Research Building 4, Science Park, University of Durham, DH1 3UZ
- ‘True’ (i.e. randomised) field experiment
- A request was sent to schools to allow us to provide them with only a random half of the list of ‘under-aspiring pupils’. Under-aspiring students are defined as those who reported, on the YELLIS questionnaire, intentions regarding staying on in education that were lower than was usual for similar students (defined as students with the same score on the YELLIS baseline test). In other words, considering their developed abilities, as evidenced by the YELLIS test, they were not aspiring to stay on in education as long as others of the same developed abilities.
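The definition above (aspirations lower than usual for pupils with the same baseline score) can be sketched as a residual from a simple regression. This is a hypothetical illustration only: the function name, the least-squares predictor and the `threshold` cutoff are all assumptions for the sketch, not the YELLIS procedure, which is not specified here.

```python
def flag_under_aspirers(baseline, intention, threshold=-1.0):
    """Flag pupils whose stated intention falls well below the level
    predicted from their YELLIS baseline score (hypothetical sketch)."""
    n = len(baseline)
    mx, my = sum(baseline) / n, sum(intention) / n
    # Least-squares line of intention on baseline score
    sxy = sum((x - mx) * (y - my) for x, y in zip(baseline, intention))
    sxx = sum((x - mx) ** 2 for x in baseline)
    slope = sxy / sxx
    intercept = my - slope * mx
    # Residual = stated intention minus what is usual for that baseline score
    residuals = [y - (intercept + slope * x) for x, y in zip(baseline, intention)]
    return [i for i, r in enumerate(residuals) if r <= threshold]

# Five pupils; the last aspires well below what their baseline score predicts
print(flag_under_aspirers([1, 2, 3, 4, 5], [1, 2, 3, 4, 2]))  # -> [4]
```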
- The random selection used random numbers and was undertaken in the CEM Centre where no one knew any of the students. It was therefore entirely blind. With only 2 to 12 students per school, stratification prior to randomisation was not feasible.
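The per-school blind split described above can be sketched as follows. This is an illustrative reconstruction, not the CEM Centre's actual code: the function name and the use of Python's `random` module are assumptions; the study reports only that random numbers were used and that stratification was not feasible.

```python
import random

def split_under_aspirers(pupils, seed=None):
    """Randomly 'name' half of one school's under-aspirers (hypothetical sketch).

    pupils: IDs of one school's under-aspirers (2 to 12 per school, so no
    stratification is attempted, as in the study). With an odd count the
    unnamed group receives the extra pupil.
    Returns (named, unnamed).
    """
    rng = random.Random(seed)  # seed only for reproducible demonstrations
    shuffled = pupils[:]       # copy, so the caller's list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# One school's list of six anonymised pupils:
named, unnamed = split_under_aspirers(["p1", "p2", "p3", "p4", "p5", "p6"], seed=1)
```

Only the `named` list would be returned to the school; the `unnamed` half serves as the no-treatment control.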
- Students were not aware of this experiment although those named to the school as ‘under-aspirers’ may have become aware of the list.
- Pre-test was at the beginning of year 10 (September, 1997). Post-tests were GCSE results (i.e. externally set and graded tests) at the end of year 11 (results given out in August, 1999).
The first experiment (reported here) took place during the school year ending 1999.
- 120 fourteen- and fifteen-year-olds in the fifteen English schools that agreed to let us withhold half the list of under-aspirers.
- Boys outnumbered girls on the total list of under-aspiring pupils (75 boys, 45 girls, i.e. 62.5% male).
- Some 21 schools were invited to participate (by letter) and 15 agreed to do so.
- How were the groups treated differently? In total, in the 15 participating schools, a random half of 120 students were ‘named’ to their schools as under-aspirers, i.e. were on the list that the schools were used to receiving each year. The other half were simply not mentioned and therefore not known to the schools in terms of their responses to the YELLIS questionnaire.
- How long did the intervention last? For approximately eighteen months it was possible for the students named to be monitored, counselled, supported etc. by staff at the school. The exact interventions were entirely at the discretion of the school.
- Who provided the intervention? The school.
- What did it cost? Time and effort of staff, varying from school to school but from no effort up to the provision of as many as 23 counselling sessions.
- Was there a control group ‘intervention’? No. The ‘control’ group was a no-treatment control group. The pupils had simply not been named. Nevertheless, many would receive attention even though they were not on the under-aspirers list, because they came to the attention of the school in other ways.
- What outcomes were recorded? The ‘value added’ in the GCSE examinations.
- Were the outcome measures planned before doing the experiment? Yes
- Were there any checks on how / whether the intended interventions were implemented? Yes. Many schools completed a questionnaire about the amounts of counselling and other activities provided to those on the under-aspirers list.
- How were data collected? Teacher-completed questionnaires for the process variables (e.g. amounts of counselling); YELLIS procedures for the collection of baseline scores, GCSE results and value added.
- How reliable were any measurements? YELLIS test reliability is generally 0.92. GCSE does not report reliability. The reports of the amounts of counselling might have varied in reliability, but this would be unlikely to affect the interpretation: measures of what schools did were used only to generate hypotheses and descriptions. The manipulated variable was 100 percent reliable: named or not named.
- How valid were any interpretations? Strong design with respect to being named or not.
- What data were missing and why? Examination data were available for 99% of the relevant students but some process data were not available. However, there was no sample attrition on the manipulated variable: named or not named.
- What were the main outcomes of the study? In 12 out of 15 schools the named under-aspirers made less progress (showed lower value added) than those not named to the schools.
- How big was the difference between groups? The average residual (i.e. the grade achieved, relative to the grades achieved by all students with the same initial ability) was -0.86 for the named under-aspirers and -0.55 for the unnamed. Thus, those who were not named outperformed those named by 0.31 of a grade. The effect size for this difference is -0.38 (with 95% confidence interval [-0.74, -0.02], p = 0.04), i.e. the average unnamed student did better than 65 percent of the named group.
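The reported figures can be checked against one another with a short calculation. Note that the pooled standard deviation below is not reported in the study; it is back-solved from the 0.31-grade difference and the -0.38 effect size, and the ‘65 percent’ figure is the normal-distribution overlap implied by that effect size:

```python
import math

def normal_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

named_mean, unnamed_mean = -0.86, -0.55      # average GCSE residuals, in grades
pooled_sd = 0.816                            # assumption: back-solved from d = -0.38

d = (named_mean - unnamed_mean) / pooled_sd  # Cohen's d, approx. -0.38
overlap = normal_cdf(-d)                     # approx. 0.65: share of the named group
                                             # scoring below the average unnamed pupil
print(round(d, 2), round(overlap, 2))        # -> -0.38 0.65
```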
- How important are these differences? The amount of counselling did not differ significantly between the groups. If additional efforts were made by teachers (as reported, e.g. parent meetings and letters, homework checks), these appeared to have been counter-productive. Saving teachers’ time and effort is important, and therefore the results of this experiment, if they replicate, are important.
Naming under-aspiring pupils resulted in their making less progress than pupils not named.
Further replication is needed, particularly since this result is in line with other examples of well-meaning interventions yielding damage rather than better results (McCord, 1981; Dishion et al., 1999). Teachers’ time and efforts could be saved for other activities. Under-aspirers may do better if not bothered.
McCord, J. (1978). A thirty-year follow-up of treatment effects. American Psychologist, 33, 284-289.
Dishion, T. J., McCord, J., & Poulin, F. (1999). When interventions harm. American Psychologist, 54(9), 755-764.
McCord, J. (1981). Considerations of some effects of a counseling program. In S. E. Martin, L. B. Sechrest, & R. Redner (Eds.), New Directions in the Rehabilitation of Criminal Offenders (pp. 393-405). Washington, DC: National Academy Press.