March 29, 2024

Checklists work to improve science

Five years ago, after extended discussions with the scientific community, Nature announced that authors submitting manuscripts to Nature journals would need to complete a checklist addressing key factors underlying irreproducibility, for reviewers and editors to assess during peer review.

The original checklist focused on the life sciences. More recently we have included criteria relevant to other disciplines.

To learn authors’ thoughts about reproducibility and the role of checklists, Nature sent surveys to 5,375 researchers who had published in a Nature journal between July 2016 and March 2017 (see Supplementary information for the raw data).

Of the 480 who responded, 49% thought that the checklist had improved the quality of research published in Nature (15% disagreed); 37% thought the checklist had improved quality in their field overall (20% disagreed).


Respondents overwhelmingly thought that poor reproducibility is a problem: 86% acknowledged it as a crisis in their field, a rate similar to that found in an earlier survey (Nature 533, 452–454; 2016). Two-thirds of respondents cited selective reporting of results as a contributing factor.

Nature’s checklist was designed, in part, to make selective reporting more transparent. Authors are asked to state whether experimental findings have been replicated in the laboratory, whether and how they calculated appropriate sample sizes, whether animals or samples were excluded from studies, and whether these were randomized into experimental groups and assessed by ‘blinded’ researchers (that is, researchers who did not know which experimental group they were assessing). Of those survey respondents who thought the checklist had improved the quality of research at Nature journals, 83% put this down to better reporting of statistics as a result of the checklist.

Is the checklist addressing the core problems that can lead to poor reproducibility? Only partly. Taken as a whole, the responses indicate that we need more nuanced discussions, and more attention to the interconnected issues that result in irreproducibility: training, transparency, publishing pressures and what the report Fostering Integrity in Research by the US National Academies of Sciences, Engineering, and Medicine deems “detrimental research practices”.

Journals cannot solve this alone. Indeed, 58% of survey respondents felt that researchers have the greatest capacity to improve the reproducibility of published work, followed by laboratory heads (24%), funders (9%) and publishers (7%).

What role, then, should publishers take? Reproducibility cannot be assessed without transparency, and this is what journals must demand. Readers and reviewers must know how experiments were designed and how measurements were taken and deemed acceptable for analysis; they need to be told about all of the statistical tests and replications. As such, the checklist (or ‘reporting summary’) provides a convenient tool for revealing the key variables that underlie irreproducibility in an accessible manner for authors, reviewers, editors and readers.

Two studies have compared the quality of reporting in Nature journals before and after the checklist was implemented, and with journals that had not implemented checklists. Authors of papers in Nature journals are now several times more likely to state explicitly whether they have carried out blinding, randomization and sample-size calculations (S. Han et al. PLoS ONE 12, e0183591; 2017 and M. R. Macleod et al. Preprint at bioRxiv; 2017). Journals without checklists showed no or minimal improvement over the same time period. Even after implementation of the checklist, however, only 16% of papers reported the status of all of the crucial ‘Landis 4’ criteria (blinding, randomization, sample-size calculation and exclusion) for in vivo studies – although reporting on individual criteria was significantly higher. Preliminary data suggest that publishing the reporting summaries, as we have done since last year, has resulted in further improvements.

Fortunately, the trend indicated by the survey is positive. Most respondents had submitted more than one paper using the checklist. Nearly half of respondents said they had not considered the checklist until after they had written their first submission; that fell to 31% for subsequent papers, with authors more likely to consider the checklist while planning or performing experiments. Encouragingly, 78% said that they had continued to implement the checklist to some extent, irrespective of their plans to submit to a Nature journal in the future.

Progress is slow, but a commitment to enforcement is crucial. That is why we make the checklist and the reporting of specific items mandatory, and monitor compliance. The road to full reproducibility is long and will require perseverance, but we hope that the checklist approach will gain wider uptake in the community.
