Systematic reproducibility assessments of published research
This repository contains detailed reproducibility analyses of influential research papers: each analysis verifies the published calculations, reproduces the statistical results, and documents any discrepancies, with the aim of promoting transparency and improving research reliability.
Many published studies contain calculation errors, methodological discrepancies, or irreproducible results that go undetected. ReproCheck aims to:
- Verify calculations and statistical analyses from published papers
- Reproduce results using reported data and methods
- Document discrepancies and potential errors
- Promote research transparency and reproducibility
- Contribute constructively to scientific accuracy
Paper: Bucher HC, Guyatt GH, Griffith LE, Walter SD. The Results of Direct and Indirect Treatment Comparisons in Meta-Analysis of Randomized Controlled Trials. J Clin Epidemiol. 1997;50(6):683-691.
Status:
Key Findings:
- 3/23 individual study ORs had >25% calculation errors
- Published indirect comparison OR (0.37) cannot be reproduced using the described Bucher method
- Correct calculation yields OR = 0.54, a 46% difference
- Discrepancy changes the paper's main conclusion about inconsistency between direct and indirect evidence
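The adjusted indirect comparison at the heart of these findings can be sketched in a few lines. This is a minimal, hedged illustration of the standard Bucher approach (comparing treatments A and B via a common comparator C by subtracting log odds ratios and summing their variances); the numeric inputs to `indirect_or` below are hypothetical and are not the paper's data. The final line does, however, check the 46% figure cited above against the published (0.37) and recomputed (0.54) ORs.

```python
import math

def indirect_or(or_ac, var_ac, or_bc, var_bc):
    """Bucher adjusted indirect comparison of A vs B via common comparator C.

    log OR_AB = log OR_AC - log OR_BC, with the variances of the
    log odds ratios summing. Returns (OR_AB, 95% CI lower, upper).
    """
    log_or = math.log(or_ac) - math.log(or_bc)
    se = math.sqrt(var_ac + var_bc)
    return (math.exp(log_or),
            math.exp(log_or - 1.96 * se),
            math.exp(log_or + 1.96 * se))

# Hypothetical inputs for illustration only (not values from Bucher 1997):
or_ab, ci_lo, ci_hi = indirect_or(0.60, 0.04, 0.80, 0.05)
# or_ab == 0.60 / 0.80 = 0.75; the CI reflects the combined variance.

# Relative difference between the published (0.37) and recomputed (0.54)
# indirect ORs reported above:
rel_diff = (0.54 - 0.37) / 0.37  # ≈ 0.46, i.e. the 46% cited in the findings
```

Because the indirect point estimate is just the ratio of the two direct ORs, a reported indirect OR can be sanity-checked against the direct estimates without access to the original patient data.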
Files:
- `bucher1997/` - Complete analysis
- `bucher1997/analysis.R` - Full reproducibility script
- `bucher1997/data/` - Extracted data
- `bucher1997/results/` - Output files and figures
- `bucher1997/FINDINGS.md` - Detailed summary
Each case study folder contains its own requirements.txt or package dependencies. Generally, you'll need:
- R (≥ 4.0.0) or Python (≥ 3.8)
- Common statistical packages (specified in each analysis)