Towards Experimental Research Synthesis in Education

What is a TERSE report?

A TERSE Report is a short summary of the most important information about an educational experiment. It includes details of the design, context, intervention and results of the experiment, under structured headings. A TERSE Report is therefore similar to a structured abstract, but with just a little more information. TERSE Reports were originally devised by Carol Taylor Fitz-Gibbon for the journal Evaluation and Research in Education in 1988.

Why TERSE reports?

The aim of TERSE reports is to make short, succinct summaries of small-scale experiments in education available to a wide and non-specialist audience. The format offers several advantages:

  • The length is long enough to convey the important information about an experiment, but short enough to be easily absorbed. A TERSE Report is not intended to provide enough information for the reader to replicate the study (although many conventional reports also fail to do this), so where possible it gives references to more extended reports.
  • The structured format has been widely adopted for abstracts in journals of medicine, psychology and other fields, since it has been shown to aid comprehension.
  • The length and structure make a TERSE Report very easy to write, which means that even small-scale studies can be reported and accessed.


TERSE Library

  • Chalmers, Hamish (2002) Did Microsoft commission a fair test of the geographical knowledge of British children?
  • Coe, Robert (1999) The effects of giving performance feedback to teachers: a randomised controlled experiment
  • Dowson, Val (1999-2000) Time of day effects on children's learning
  • Fitz-Gibbon, Carol and Defty, Neil (2000) Effects of providing schools with names of under-aspiring pupils
  • Goodson, Vicky (1999) Effects of different testing environments on children's performance and attitudes
  • Taylor, Kathryn (1999) A comparison of different teaching methods in MFL with MLD students

Guidelines to Authors

The aim of TERSE REPORTS is to make short, succinct summaries of small-scale experiments in education available to a wide and non-specialist audience. Each report should be no more than 300 words in total and should be accompanied by a FULL REPORT and full references to any published reports of the experiment, or details of any unpublished reports. TERSE REPORTS should use the following headings, addressing where appropriate the questions listed under each. It is unlikely that authors will be able to answer all of these questions fully in the space allowed, so only the most important need be given, saving the others for the full report.

Title

  • What was the main research question addressed?

Author(s)

Full name of the person(s) who conducted the research. Include postal and e-mail address for correspondence.

Design

  • Was a control or comparison group used?
  • How were individuals or other units allocated to different treatments (randomised, self-selected, natural etc)?
  • If random, how was the random allocation generated (a brief sketch of one approach follows this list)? Were there any checks to ensure randomisation could not be subverted?
  • Was there any matching / stratification / minimisation before randomisation?
  • Were participants / observers aware of allocation (blinding)?
  • At what times were measurements made (pre-test, post-test, delayed, etc)?
  • Were any other standard designs used (cross-over, factorial, interrupted time-series, etc.)?
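
One of the questions above asks how the random allocation was generated. As a purely illustrative aid (not part of the TERSE format), the minimal Python sketch below shows one common approach: shuffle a list of participant identifiers using a recorded seed and deal them out to the groups in turn. The pupil identifiers and the function name randomise are hypothetical.

    import random

    def randomise(participants, groups=("intervention", "control"), seed=None):
        """Allocate participants to groups at random by shuffling the list
        and dealing it out in turn, so group sizes differ by at most one."""
        rng = random.Random(seed)  # a recorded seed makes the allocation auditable
        shuffled = list(participants)
        rng.shuffle(shuffled)
        allocation = {group: [] for group in groups}
        for i, person in enumerate(shuffled):
            allocation[groups[i % len(groups)]].append(person)
        return allocation

    # Hypothetical pupil identifiers
    pupils = ["pupil_%02d" % i for i in range(1, 21)]
    print(randomise(pupils, seed=2024))

Recording the seed, or lodging the allocation list with someone independent of the experiment, is one simple check that the randomisation was not subverted.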

Setting

  • Where and when did the study take place (eg what kind of school, social context)?

Population

  • Who was involved? What were their significant characteristics (age, aptitude, etc)?
  • How were participants chosen?
  • What criteria for inclusion / exclusion were used?
  • How many were asked / agreed to take part?
  • Of what population could they be considered representative?

Intervention(s)

Ideally the reader should be able to recreate the intervention from this description, or at least have a clear sense of what was done.

  • How were the groups treated differently?
  • How long did the intervention last?
  • Who provided it?
  • How was it developed (e.g. who contributed, use of pilot, in response to needs/views of participants)?
  • What did it cost?
  • Was there a control group ‘intervention’ (eg treat as normal, ‘placebo’ treatment, etc)?

Data collected

It is important that the reader has a clear sense of what was measured. For example, if a report claims to be about the effects of different teaching styles on ‘mathematical understanding’, it must explain what is meant by this: what kinds of tests were used (perhaps give examples of test items / criteria for judgements)?

  • What outcomes were recorded?
  • Were the outcome measures planned before doing the experiment?
  • Were there any checks on how / whether the intended interventions were implemented?
  • How were data collected? (using what instruments, by whom, from whom?)
  • How reliable (accurate, robust, stable) were any measurements? (One simple check of reliability is sketched after this list.)
  • How valid were any interpretations of them (can we be sure they mean what we think they mean)?
  • What data were missing and why (eg due to sample attrition, non-response)?
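
Where test scores are the outcome, one simple way to evidence reliability is a test-retest correlation: the same pupils sit the measure twice and the two sets of scores are correlated. The sketch below is a minimal illustration with hypothetical data; the function pearson_r and the scores are assumptions, not part of the TERSE format.

    from statistics import mean, stdev

    def pearson_r(x, y):
        """Pearson correlation between two equally long lists of scores."""
        mx, my = mean(x), mean(y)
        covariance = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
        return covariance / (stdev(x) * stdev(y))

    # Hypothetical scores from the same pupils tested twice, two weeks apart
    first_sitting = [23, 31, 28, 35, 19, 27, 30, 25]
    second_sitting = [25, 30, 27, 36, 21, 26, 31, 24]
    print(f"Test-retest reliability r = {pearson_r(first_sitting, second_sitting):.2f}")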

Results

  • What were the main outcomes of the study? (For frequencies, give actual numbers, not just percentages.)
  • How big was the difference between groups? (Quantify as an effect size with a confidence interval where possible; a worked sketch follows this list.)
  • How important are these differences? What do they mean?
  • If any subgroup or covariance analyses were done, were they planned before data collection?
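
As an illustration of quantifying the difference between groups, the sketch below computes Cohen's d (the difference in group means divided by the pooled standard deviation) together with an approximate 95% confidence interval based on the usual large-sample standard error. The post-test scores and all names in the code are hypothetical; this is one possible effect size measure, not a prescribed TERSE calculation.

    from math import sqrt
    from statistics import mean, stdev

    def cohens_d_with_ci(treatment, control, z=1.96):
        """Standardised mean difference (Cohen's d) with an approximate 95% CI."""
        n1, n2 = len(treatment), len(control)
        # Pooled standard deviation of the two groups
        pooled_sd = sqrt(((n1 - 1) * stdev(treatment) ** 2 +
                          (n2 - 1) * stdev(control) ** 2) / (n1 + n2 - 2))
        d = (mean(treatment) - mean(control)) / pooled_sd
        # Common large-sample approximation to the standard error of d
        se = sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
        return d, (d - z * se, d + z * se)

    # Hypothetical post-test scores for the two groups
    treatment_scores = [14, 17, 15, 19, 16, 18, 13, 17]
    control_scores = [12, 15, 13, 14, 16, 11, 14, 13]
    d, (low, high) = cohens_d_with_ci(treatment_scores, control_scores)
    print(f"Effect size d = {d:.2f}, 95% CI [{low:.2f}, {high:.2f}]")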

Conclusion

A single-sentence summary of the study's contribution to knowledge and its implications (conclusions must be supported by the evidence reported).
