Introduction
Reproducibility is a cornerstone of scientific advancement1; however, many published works lack the core components needed for reproducibility and transparency. These barriers have serious immediate and long-term consequences for psychiatry, including diminished credibility, reliability and accessibility.2 Fortunately, methods to improve reproducibility are practical and applicable to many research designs. For example, preregistration of studies provides public access to the protocol and analysis plan. Reproducibility promotes independent verification of results2 and successful replication,2 3 and it hedges against outcome switching.4 Illustrating this need in the field of psychology, the Open Science Collaboration’s reproducibility project attempted to replicate the findings of 100 experimental and correlational studies published in three leading psychology journals. Researchers found that 97% of the original reports had statistically significant results, whereas only 37% of the replicated studies did.5 With regard to outcome switching, a recent survey of 154 researchers investigating electrical brain stimulation found that fewer than half were able to replicate previous study findings. These researchers also admitted to selectively reporting study outcomes (41%), adjusting statistical analyses to alter results (43%) and tailoring their own statistical measurements to support certain outcomes.6 Leveraging good statistical practices and using methods that promote reproducibility, such as preregistration, are necessary to protect against similar incidents of selective reporting.7
Considerable advancements have been noted to promote and endorse reproducible and transparent research practices in the field of psychology. For example, the Centre for Open Science, the Berkeley Institute for Transparency in the Social Sciences and the Society for the Improvement of Psychological Science have all worked vigorously to establish a culture of transparency and a system of reproducible research practices. However, mental health researchers, including psychiatry researchers, have not kept pace with their psychology counterparts.8 A few editorials have circulated to promote awareness of reproducibility and transparency within psychiatric literature.7–9 For example, The Lancet Psychiatry published an editorial addressing various topics of reproducibility, such as increasing the availability of materials, protocols, analysis scripts and raw data within online repositories. The editorial author argued that few limitations exist within psychiatry that would impede depositing study materials in a public repository. The author additionally countered a commonly held position that raw patient data should not be made available, noting that the use of appropriate de-identification can make the information anonymous.8 A second editorial published in JAMA Psychiatry7 argued for more robust statistical analysis and decision making to improve the reproducibility of psychiatry studies. The author of this editorial discussed several statistical considerations, including the effects of statistical assumption violations on validity and study power, the likelihood of spurious findings based on small sample sizes, a priori covariate selection, effect size reporting and cross-validation. These efforts are good first steps to create awareness of the problem, which was deemed a ‘reproducibility crisis’ by over 1000 scientists in a recent Nature survey10; however, further measures are needed.
A top-down approach to evaluating transparency and reproducibility would provide valuable information about the current state of the psychiatry literature. In our study, we examined a random sample of publications from the psychiatric literature to evaluate specific indicators of reproducibility and transparency within the field. Our results may be used both to assess current strengths and limitations and to serve as baseline data for subsequent investigations.