Abstract
There is vigorous debate about the reproducibility of research findings in cancer biology. Whether scientists can accurately assess which experiments will reproduce original findings is important for determining the pace at which science self-corrects. We collected forecasts from basic and preclinical cancer researchers on the first 6 replication studies conducted by the Reproducibility Project: Cancer Biology (RP:CB) to assess the accuracy of expert judgments on specific replication outcomes. On average, researchers forecasted a 75% probability of replicating the statistical significance and a 50% probability of replicating the effect size, yet none of these studies successfully replicated on either criterion (for the 5 studies with results reported). Accuracy was related to expertise: experts with higher h-indices were more accurate, whereas experts with more topic-specific expertise were less accurate. Our findings suggest that experts, especially those with specialized knowledge, were overconfident about the RP:CB replicating individual experiments within published reports; researcher optimism likely reflects a combination of overestimating the validity of original studies and underestimating the difficulties of repeating their methodologies.
| Original language | English |
|---|---|
| Article number | e2002212 |
| Journal | PLoS Biology |
| Volume | 15 |
| Issue number | 6 |
| DOIs | |
| State | Published - Jun 29 2017 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2017 Benjamin et al.
ASJC Scopus Subject Areas
- General Neuroscience
- General Biochemistry, Genetics and Molecular Biology
- General Immunology and Microbiology
- General Agricultural and Biological Sciences
Disciplines
- Business