Assessment of transparent and reproducible research practices in the psychiatry literature
==========================================================================================

* Caroline Elizabeth Sherry
* Jonathan Z Pollard
* Daniel Tritz
* Branden K Carr
* Aaron Pierce
* Matt Vassar

## Abstract

**Background** Reproducibility is a cornerstone of scientific advancement; however, many published works may lack the core components needed for study reproducibility.

**Aims** In this study, we evaluate the state of transparency and reproducibility in the field of psychiatry using specific indicators as proxies for these practices.

**Methods** An increasing number of publications have investigated indicators of reproducibility, including research by Hardwicke *et al*, on which we based the methodology for our observational, cross-sectional study. From a random 5-year sample of 300 publications in PubMed-indexed psychiatry journals, two researchers extracted data in a duplicate, blinded fashion using a piloted Google form. The publications were examined for indicators of reproducibility and transparency, which included availability of materials, data, protocol, analysis script, open access, conflict of interest, funding and online preregistration.

**Results** This study ultimately evaluated 296 randomly selected publications with a median impact factor of 3.20. Most primary authors originated from the USA, the UK and the Netherlands. The top three publication types were cohort studies, surveys and clinical trials. Regarding indicators of reproducibility, 17 publications gave access to necessary materials, four provided an in-depth protocol and one contained the raw data required to reproduce the outcomes. One publication offered its analysis script on request; four provided a protocol availability statement. Only 107 publications were publicly available. Thirteen were preregistered in online repositories, of which four, ten and eight included their hypothesis, methods and analysis, respectively. Conflict of interest was addressed by 177 publications and reported by 31. Of 185 publications with a funding statement, 153 were funded and 32 were unfunded.

**Conclusions** Psychiatry research currently has significant potential to improve its adherence to reproducibility and transparency practices. This study thus presents a reference point for the state of reproducibility and transparency in the psychiatry literature. Future assessments are recommended to evaluate and encourage progress.

* retrospective studies
* sample size
* sampling studies
* research design

## Introduction

Reproducibility is a cornerstone of scientific advancement1; however, many published works lack the core components needed for reproducibility and transparency. These barriers to reproducibility pose serious immediate and long-term consequences for psychiatry, including poor credibility, reliability and accessibility.2 Fortunately, methods to improve reproducibility are practical and applicable to many research designs. For example, preregistration of studies provides public access to the protocol and analysis plan. Reproducibility promotes independent verification of results2 and successful replication,2 3 and it hedges against outcome switching.4

Supporting this need in the field of psychology, the Open Science Collaboration’s reproducibility project attempted to replicate the findings of 100 experimental and correlational studies published in three leading psychology journals.
Researchers found that 97% of the original reports had statistically significant results, whereas only 37% of the replicated studies had significant results.5 With regard to outcome switching, a recent survey of 154 researchers investigating electrical brain stimulation found that fewer than half were able to replicate previous study findings. These researchers also admitted to selective reporting of study outcomes (41%), adjusting statistical analyses to alter results (43%) and adjusting their own statistical measurements to support certain outcomes.6 Leveraging good statistical practices and using methods that promote reproducibility, such as preregistration, are necessary to protect against similar incidents of selective reporting.7

Considerable advancements have been made to promote and endorse reproducible and transparent research practices in the field of psychology. For example, the Center for Open Science, the Berkeley Initiative for Transparency in the Social Sciences and the Society for the Improvement of Psychological Science have all worked vigorously to establish a culture of transparency and a system of reproducible research practices. However, mental health researchers, including psychiatry researchers, have not kept pace with their psychology counterparts.8

A few editorials have circulated to promote awareness of reproducibility and transparency within the psychiatric literature.7–9 For example, *The Lancet Psychiatry* published an editorial addressing various topics of reproducibility, such as increasing the availability of materials, protocols, analysis scripts and raw data within online repositories. The editorial author argued that few limitations exist within psychiatry that would impede depositing study materials in a public repository. The author additionally countered the commonly held position that raw patient data should not be made available, noting that appropriate de-identification can render the information anonymous.8 A second editorial, published in *JAMA Psychiatry*,7 argued for more robust statistical analysis and decision making to improve the reproducibility of psychiatry studies. The author of this editorial discussed several statistical considerations, including the effects of statistical assumption violations on validity and study power, the likelihood of spurious findings based on small sample sizes, *a priori* covariate selection, effect size reporting and cross-validation.

These efforts are good first steps toward creating awareness of the problem, which over 1000 scientists deemed a ‘reproducibility crisis’ in a recent *Nature* survey10; however, further measures are needed. A top-down approach to evaluating transparency and reproducibility would provide valuable information about the current state of the psychiatry literature. In our study, we examined a random sample of publications from the psychiatric literature to evaluate specific indicators of reproducibility and transparency within the field. Our results may be used both to assess current strengths and limitations and to serve as baseline data for subsequent investigations.

## Methods

We conducted an observational study using a cross-sectional design based on the methodology of Hardwicke *et al*.2 Our study is reported in accordance with guidelines for meta-epidemiological methodology research.11 We have made protocols, materials and other pertinent information available on the Open Science Framework ([https://osf.io/n4yh5/](https://osf.io/n4yh5/)). This study was not subject to institutional review board oversight because it did not include human participants.

### Journal and study selection

We used the National Library of Medicine (NLM) catalogue to search for all journals using the subject terms tag Psychiatry[ST]. This search was performed on May 29, 2019 by DT. The inclusion criteria required that journals be MEDLINE indexed and published in English. The list of journals in the NLM catalogue was then extracted along with each journal’s electronic ISSN (or linking ISSN if the electronic ISSN was unavailable). The final ISSN search string was used to search PubMed to identify all publications between January 1, 2014, and December 31, 2018. DT then compiled a random sample of 300 publications from the selected journals.
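For illustration, the PubMed query and random sampling described above could be scripted as in the minimal sketch below. This is not the authors’ code: the Biopython Entrez interface, the placeholder email, the `issn_query` string (which would be assembled from the extracted ISSNs) and the seed value are all assumptions introduced here.

```python
import random

from Bio import Entrez  # Biopython wrapper around the NCBI E-utilities

Entrez.email = "researcher@example.org"  # NCBI requires a contact email; placeholder

# Hypothetical search string assembled from the ISSNs extracted from the
# NLM catalogue, e.g. '"0000-0000"[ISSN] OR "1111-1111"[ISSN] OR ...'
issn_query = "..."

# Restrict to publication dates between January 1, 2014 and December 31, 2018.
handle = Entrez.esearch(
    db="pubmed",
    term=issn_query,
    datetype="pdat",
    mindate="2014/01/01",
    maxdate="2018/12/31",
    retmax=10000,  # esearch caps returned IDs; larger result sets need batching
)
record = Entrez.read(handle)
pmids = record["IdList"]

random.seed(2019)  # a fixed seed keeps the sampling step itself reproducible
sample = random.sample(pmids, 300)
```

Fixing the random seed is one simple way to make the sampling step auditable by later researchers, in the spirit of the practices the study assesses.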
### Extraction training

Prior to data extraction, two investigators (CES and JZP) underwent a full day of training to ensure inter-rater reliability. The training included a review of the study design, protocol and extraction form, and the identification of information from two publications selected by DT. The two investigators were then given three articles from which to extract data as examples. Following extraction, the pair reconciled all differences. The training session was recorded and posted online for reference ([https://osf.io/jczx5/](https://osf.io/jczx5/)). Before extracting data from all studies, these two investigators extracted data from the first 10 publications on their specialty list. Discrepancies were resolved by discussion between the investigators.

### Data extraction

Two investigators (CES, JZP) extracted data from the 300 publications in a duplicate and blinded fashion. Following extraction, the pair held a final consensus meeting to resolve disagreements. A third investigator (DT) was available for adjudication, but this process was not necessary. A pilot-tested Google form was created based on the study by Hardwicke *et al*,2 with additions. This form included the indicators of reproducibility and transparency ([https://osf.io/3nfa5/](https://osf.io/3nfa5/)) and items related to study characteristics. The assessment of reproducibility and transparency was developed according to key indicators in Hardwicke *et al* in addition to other indicators relevant to promoting transparent, collaborative, reproducible research. The indicators examined were: public accessibility, funding, conflict of interest, citation frequency and any statements for protocol, materials, data availability and preregistration (see online supplementary table A for specific quantities and percentages). Further explanation of the frequency, value and relevance of these indicators of transparency and reproducibility is organised in online supplementary table B. The extracted data varied according to the study design, and studies with no empirical data were excluded (eg, editorials, commentaries (without reanalysis), simulations, news, reviews and poems). We also expanded the study design options to include cohort, case series, secondary analysis, chart review and cross-sectional studies. Finally, we used the following funding categories: university, hospital, public, private/industry, or non-profit.

### Supplementary data

[gpsych-2019-100149supp001.pdf]

### Open access availability

We searched Open Access Button ([https://openaccessbutton.org](https://openaccessbutton.org)) to assess whether studies were available by open access. If Open Access Button was unable to access the article, two investigators (CES, JZP) used the publication title and DOI to search Google or PubMed to determine whether the full-text version was publicly available.
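As a complement to the manual workflow described above, an open-access check could in principle be scripted. The following is a hypothetical sketch only: the `find` endpoint, its `id` parameter and the `url` response field are assumptions about the Open Access Button API rather than confirmed details, and the authors describe a manual search; consult the service’s current documentation before relying on anything here.

```python
import requests

# Hypothetical endpoint and response shape; verify against the current
# Open Access Button API documentation before use.
API_URL = "https://api.openaccessbutton.org/find"

def openly_available(doi: str) -> bool:
    """Return True if a free full-text URL is reported for this DOI (assumed schema)."""
    resp = requests.get(API_URL, params={"id": doi}, timeout=30)
    resp.raise_for_status()
    data = resp.json()
    return bool(data.get("url"))  # assumed field name for the full-text link

# Example call with a made-up DOI:
# print(openly_available("10.1000/example.doi"))
```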
### Replication and evidence synthesis

We used the Web of Science ([https://www.webofknowledge.com/](https://www.webofknowledge.com/)) to determine whether the studies composing our sample were replication studies or were included in systematic reviews. Web of Science also allowed us to determine the number and type of other studies that cited each publication we examined; articles unavailable on Web of Science were located and examined via PubMed or other resources. We classified a publication as a replication study based on whether the research was conducted to replicate aspects of a prior study’s design or findings. To do so, we reviewed each publication that had cited the studies included in our sample using Web of Science’s citation listing feature. We performed this process in the same manner as the data extraction described previously.

### Statistical analysis

We report descriptive statistics for each category along with 95% CIs of proportions, calculated using Microsoft Excel.
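The paper does not state which interval formula was used in Excel. Assuming the common normal-approximation (Wald) interval, the calculation looks like the sketch below; the example counts are taken from the Results, and small discrepancies from the published intervals may reflect a different formula or rounding.

```python
import math

def wald_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float, float]:
    """Point estimate and 95% Wald confidence interval for a simple proportion."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)  # normal-approximation standard error
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Example: 22 of 185 publications included a materials availability statement.
p, lo, hi = wald_ci(22, 185)
print(f"{p:.1%} (95% CI: {lo:.1%} to {hi:.1%})")  # 11.9% (95% CI: 7.2% to 16.6%)
```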
## Results

### Sample characteristics

Our search of the NLM catalogue identified 346 journals, of which 158 met the inclusion criteria (figure 1). The median 5-year impact factor for these journals was 3.20 (IQR: 2.2–4.2). Our PubMed search initially returned 407 656 studies, reduced to 90 281 after we applied the date limiter. From these search returns, 300 psychiatry research publications were randomly selected. Four were inaccessible, yielding a final sample size of 296 publications. Primary authors most commonly came from the USA (155, 52%), the UK (86, 29%) and the Netherlands (32, 11%). The top three publication types were cohort studies (46, 16%), surveys (45, 15%) and clinical trials (36, 12%). With regard to accessibility, 107 (36%) of the 296 publications were publicly available, whereas the remaining 189 publications (64%) were available only behind a paywall. The remaining sample characteristics are displayed in table 1 and online supplementary table A.

[Figure 1](http://gpsych.bmj.com/content/33/1/e100149/F1) PRISMA diagram: selection process from PubMed-indexed psychiatry journals.

### Reproducibility factors

Factors for reproducibility include the availability of materials, data, protocol, analysis script and preregistration (see online supplementary table B detailing the relevance and value of each factor). Of the 296 publications, 185 were analysed for a materials availability statement, and 211 were analysed for data availability, protocol availability, analysis script availability and preregistration statements (figure 1). These differences resulted from excluding particular study designs from certain analyses, such as excluding case studies from preregistration. Of the 185 publications analysed for a materials availability statement, 22 (12% (95% CI: 8.2% to 16%)) had such a statement, yet only 17 provided an accessible materials document (table 2). Only 14 (6.6% (95% CI: 3.8% to 6.6%)) of the 211 publications provided a data availability statement, with just one study including all the raw data necessary to reproduce its findings. Only four of the 211 publications (1.9% (95% CI: 0.4% to 3.4%)) provided a protocol availability statement, and a single publication (0.47% (95% CI: 0% to 1.3%)) stated that its analysis script was available on request (table 2). Of the 211 publications for which preregistration was analysed, only 13 (6.2% (95% CI: 3.4% to 8.9%)) included a statement that the study was registered in a publicly accessible repository (table 2). All 13 registrations were accessible; four (31% (95% CI: 26% to 36%)) included their hypothesis, 10 (77% (95% CI: 72% to 82%)) included their methods and eight (62% (95% CI: 56% to 67%)) included their analysis plan.

[Table 1](http://gpsych.bmj.com/content/33/1/e100149/T1) Publication characteristics

[Table 2](http://gpsych.bmj.com/content/33/1/e100149/T2) Characteristics of reproducibility in psychiatry studies

### Conflict of interest and funding

All 296 publications were included in the conflict of interest and funding source analysis (figure 1). Of these studies, 177 (60% (95% CI: 41% to 69%)) included a conflict of interest statement, with 10% reporting a conflict of interest. With regard to funding, 185 (63% (95% CI: 43% to 82%)) of the 296 articles had a funding statement. Of the 296 publications, 153 (52% (95% CI: 36% to 68%)) were funded and 32 (11% (95% CI: 7.3% to 14%)) did not receive funding. The largest share of funding (72, 24%) came from public sources. Additional results are presented in table 2.

### Replication and evidence synthesis

Of the 296 publications, 211 were analysed for being a replication study, and 201 were analysed to determine how many had been cited in a meta-analysis or systematic review (figure 1). Four (1.9% (95% CI: 0.4% to 3.4%)) were identified as replication studies, and 82 (41% (95% CI: 30% to 51%)) were cited in at least one systematic review or meta-analysis (table 2).

## Discussion

### Main findings

Our results demonstrated that the majority of publications within the psychiatry literature lack the materials, raw data and detailed protocols needed to be easily reproducible. These findings are concerning, given the critical need for reproducible and transparent scientific research. In this section, we outline a few of the issues causing concern and offer suggestions to narrow the disparity between standards of research and current practices.

To begin, we found that only 13 publications included a statement about preregistration. Preregistration allows for independent evaluation of the consistency between the registered plan and what was actually performed in the study. Selective reporting bias—upgrading, downgrading, removing or adding study outcomes based on statistically significant findings—is particularly problematic. Comparisons between preregistration documents and published reports enable independent researchers to determine whether this form of bias has likely occurred. Multiple studies indicate that selective reporting bias is a pervasive problem in the medical literature,12–16 including psychotherapy trials.17 Scott *et al* evaluated selective outcome reporting of clinical trials published in *The American Journal of Psychiatry*, *Archives of General Psychiatry/JAMA Psychiatry*, *Biological Psychiatry*, *Journal of the American Academy of Child and Adolescent Psychiatry* and *The Journal of Clinical Psychiatry*.18 They found that 28% of trials in their sample showed evidence of selective outcome reporting.
As another example, the COMPare project was designed to evaluate all trials published in prestigious general medical journals. After completing evaluations for selective outcome reporting, members of the project drafted letters to the editor requesting clarification for discrepant endpoints. To date, they have identified 354 outcomes that were not reported and 357 outcomes that were silently added across 67 trials.19

To address this type of problem, stricter adherence to preregistration is needed. For example, although the Food and Drug Administration Amendments Act codified into law that all applicable clinical trials should be prospectively registered before trial commencement, penalties for non-compliant investigators have never been imposed.20 Given that this safeguard is already in place, greater enforcement is likely a viable first step toward improvement. Additionally, the International Committee of Medical Journal Editors (ICMJE) mandates that ICMJE-endorsing journals require prospective trial registration as a precondition for publication of all clinical trials.21 However, studies have found that journals do not always enforce their registration policies.22 Given that journals are gatekeepers of scientific knowledge and advancement, we advocate that journals adopt mechanisms to enforce their policies. Additional training is also warranted for junior researchers and students who may not be aware of the issues inherent in the failure to preregister studies. Responsible conduct of research courses are required for trainees participating in fellowships and training programmes funded by the National Institutes of Health. For more established faculty, universities offer modules on research ethics, human participant protections, data management, informed consent and anonymity. Such courses could readily incorporate training on preregistration, transparency and reproducibility. Academic conferences offer another avenue for training all parties in open research practices.

Transparency of the methodological process, data collection and data analyses increases the credibility of study findings.8 Thus, access to the complete protocols and materials used to perform a study is imperative for replication attempts. This need is illustrated by the Reproducibility Project: Cancer Biology, which attempted to reproduce 50 landmark studies after concerns were raised by two drug companies regarding replication of cancer study findings.23 Replication of 32 of the 50 studies was abandoned, in large part because methodological details were not available in the published papers or from the original researchers.5 In addition, a review of 441 biomedical publications from 2000 to 2014 found that only one study provided a full protocol, and none made all of their raw data available.24 Given the significant deficiency of materials availability in psychiatry, we suggest looking to other fields for ideas. For example, the *American Journal of Political Science* requires authors of manuscripts accepted for publication to provide sufficient materials to enable other researchers to verify all analytic results reported in the narrative and supporting documents.25 Furthermore, this journal requires that the materials of the final draft manuscript be verified to confirm that the analytic results are reproducible; both the quantitative and qualitative analyses undergo verification processes conducted at universities.
Following verification, the university staff release the final data for public access, and only then can final publication occur.

### Strengths

This study included a random sample of the psychiatry literature drawn from a large selection of journals. We used extensive training to ensure inter-rater reliability between investigators. Data were extracted in a duplicate and blinded fashion with joint reconciliation to minimise human error; this double data extraction methodology is the gold standard in systematic reviews and is recommended by the *Cochrane Handbook for Systematic Reviews of Interventions*.26 Furthermore, all relevant study materials have been made available to ensure transparency and reproducibility.

### Limitations

We acknowledge that although our sample size was 50% greater than that of Hardwicke *et al*,2 it still represents only a small fraction of the published literature. In addition, the current indicators of reproducibility and transparency have not been completely established; we used factors previously identified in the social sciences and applied them to psychiatry. Our findings should also be interpreted in light of our sample, which included only MEDLINE-indexed journals and studies published during a set time period. Indicators might differ in other journals or outside this timeframe.

### Implications

In conclusion, we stress the importance of adopting transparent and reproducible practices in research. If the public lacks trust in science, that distrust could evolve into a lack of trust in clinical practice.27 Lack of transparency is a recognised issue,1 and we must reform our current practices. This study presents a reference point for the state of reproducibility and transparency in the psychiatry literature, and future assessments are recommended to evaluate progress.

## Footnotes

* Presented at the OSU-CHS Research Conference 2020.
* Contributors All authors contributed substantially to the planning, conduct and reporting of the work described in this article, including, but not limited to: study design, data acquisition, data analysis, manuscript drafting and final manuscript approval. CES and JZP collaborated on the extraction, validation, organisation, analysis and interpretation of all data. CES was also responsible for team organisation and for manuscript formatting, revision and submission. DT designed the methods, compiled the publication list, led data extraction training and assisted with data interpretation and manuscript editing. BKC and AP contributed to data interpretation and to writing the introduction and discussion sections. MV provided advisement and leadership in data interpretation, scientific writing and manuscript editing.
* Funding This study was funded through the 2019 Presidential Research Fellowship Mentor–Mentee Program at Oklahoma State University Center for Health Sciences.
* Competing interests None declared.
* Patient consent for publication Not required.
* Provenance and peer review Not commissioned; externally peer reviewed.
* Data availability statement All protocols, materials and other pertinent information are available on the Open Science Framework ([https://osf.io/n4yh5/](https://osf.io/n4yh5/)). Comprehensive results are accessible online in supplementary tables A and B.
* Received September 13, 2019.
* Revision received November 21, 2019.
* Accepted December 5, 2019.
* © Author(s) (or their employer(s)) 2020. Re-use permitted under CC BY-NC. No commercial re-use.
See rights and permissions. Published by BMJ.

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made are indicated, and the use is non-commercial. See: [http://creativecommons.org/licenses/by-nc/4.0/](http://creativecommons.org/licenses/by-nc/4.0/).

## References

1. Rigor and Reproducibility [Internet]. National Institutes of Health (NIH). Available: [https://www.nih.gov/research-training/rigor-reproducibility](https://www.nih.gov/research-training/rigor-reproducibility) [Accessed 26 Jun 2019].
2. Hardwicke TE, Wallach JD, Kidwell M, et al. An empirical assessment of transparency and reproducibility-related research practices in the social sciences (2014–2017) [Internet]. 2019.
3. Turner EH, Matthews AM, Linardatos E, et al. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med 2008;358:252–60. [doi:10.1056/NEJMsa065779](http://dx.doi.org/10.1056/NEJMsa065779)
4. Falk Delgado A, Falk Delgado A. Outcome switching in randomized controlled oncology trials reporting on surrogate endpoints: a cross-sectional analysis. Sci Rep 2017;7:9206. [doi:10.1038/s41598-017-09553-y](http://dx.doi.org/10.1038/s41598-017-09553-y)
5. Open Science Collaboration. Estimating the reproducibility of psychological science. Science 2015;349:aac4716. [doi:10.1126/science.aac4716](http://dx.doi.org/10.1126/science.aac4716)
6. Héroux ME, Loo CK, Taylor JL, et al. Questionable science and reproducibility in electrical brain stimulation research. PLoS One 2017;12:e0175635. [doi:10.1371/journal.pone.0175635](http://dx.doi.org/10.1371/journal.pone.0175635)
7. Blackford JU. Leveraging statistical methods to improve validity and reproducibility of research findings. JAMA Psychiatry 2017;74:119–20. [doi:10.1001/jamapsychiatry.2016.3730](http://dx.doi.org/10.1001/jamapsychiatry.2016.3730)
8. Bell V. Open science in mental health research. Lancet Psychiatry 2017;4:525–6. [doi:10.1016/S2215-0366(17)30244-4](http://dx.doi.org/10.1016/S2215-0366(17)30244-4)
9. Jain S, Kuppili PP, Pattanayak RD, et al. Ethics in psychiatric research: issues and recommendations. Indian J Psychol Med 2017;39:558–65. [doi:10.4103/IJPSYM.IJPSYM_131_17](http://dx.doi.org/10.4103/IJPSYM.IJPSYM_131_17)
10. Baker M. 1,500 scientists lift the lid on reproducibility. Nature 2016;533:452–4. [doi:10.1038/533452a](http://dx.doi.org/10.1038/533452a)
11. Murad MH, Wang Z. Guidelines for reporting meta-epidemiological methodology research. Evid Based Med 2017;22:139–42. [doi:10.1136/ebmed-2017-110713](http://dx.doi.org/10.1136/ebmed-2017-110713)
12. Howard B, Scott JT, Blubaugh M, et al. Systematic review: outcome reporting bias is a problem in high impact factor neurology journals. PLoS One 2017;12:e0180986. [doi:10.1371/journal.pone.0180986](http://dx.doi.org/10.1371/journal.pone.0180986)
13. Wayant C, Scheckel C, Hicks C, et al. Evidence of selective reporting bias in hematology journals: a systematic review. PLoS One 2017;12:e0178379. [doi:10.1371/journal.pone.0178379](http://dx.doi.org/10.1371/journal.pone.0178379)
14. Rankin J, Ross A, Baker J, et al. Selective outcome reporting in obesity clinical trials: a cross-sectional review. Clin Obes 2017;7:245–54. [doi:10.1111/cob.12199](http://dx.doi.org/10.1111/cob.12199)
15. Aggarwal R, Oremus M. Selective outcome reporting is present in randomized controlled trials in lung cancer immunotherapies. J Clin Epidemiol 2019;106:145–6. [doi:10.1016/j.jclinepi.2018.10.010](http://dx.doi.org/10.1016/j.jclinepi.2018.10.010)
16. Mathieu S, Boutron I, Moher D, et al. Comparison of registered and published primary outcomes in randomized controlled trials. JAMA 2009;302:977–84. [doi:10.1001/jama.2009.1242](http://dx.doi.org/10.1001/jama.2009.1242)
17. Bradley HA, Rucklidge JJ, Mulder RT. A systematic review of trial registration and selective outcome reporting in psychotherapy randomized controlled trials. Acta Psychiatr Scand 2017;135:65–77. [doi:10.1111/acps.12647](http://dx.doi.org/10.1111/acps.12647)
18. Scott A, Rucklidge JJ, Mulder RT. Is mandatory prospective trial registration working to prevent publication of unregistered trials and selective outcome reporting? An observational study of five psychiatry journals that mandate prospective clinical trial registration. PLoS One 2015;10:e0133718. [doi:10.1371/journal.pone.0133718](http://dx.doi.org/10.1371/journal.pone.0133718)
19. Goldacre B, Drysdale H, Dale A, et al. COMPare: a prospective cohort study correcting and monitoring 58 misreported trials in real time. Trials 2019;20:118. [doi:10.1186/s13063-019-3173-2](http://dx.doi.org/10.1186/s13063-019-3173-2)
20. U.S. Food and Drug Administration. Food and Drug Administration Amendments Act (FDAAA) of 2007. 2007.
21. Laine C, De Angelis C, Delamothe T, et al. Clinical trial registration: looking back and moving ahead. Ann Intern Med 2007;147:275–7. [doi:10.7326/0003-4819-147-4-200708210-00166](http://dx.doi.org/10.7326/0003-4819-147-4-200708210-00166)
22. Sims MT, Sanchez ZC, Herrington JM, et al. Shoulder arthroplasty trials are infrequently registered: a systematic review of trials. PLoS One 2016;11:e0164984. [doi:10.1371/journal.pone.0164984](http://dx.doi.org/10.1371/journal.pone.0164984)
23. Errington TM, Iorns E, Gunn W, et al. An open investigation of the reproducibility of cancer biology research. eLife 2014;3. [doi:10.7554/eLife.04333](http://dx.doi.org/10.7554/eLife.04333)
24. Wallach JD, Boyack KW, Ioannidis JPA. Reproducible research practices, transparency, and open access data in the biomedical literature, 2015–2017. PLoS Biol 2018;16:e2006930. [doi:10.1371/journal.pbio.2006930](http://dx.doi.org/10.1371/journal.pbio.2006930)
25. AJPS Verification Policy [Internet]. American Journal of Political Science, 2019. Available: [https://ajps.org/ajps-verification-policy/](https://ajps.org/ajps-verification-policy/) [Accessed 27 Jun 2019].
26. Higgins JPT, Green S. Cochrane Handbook for Systematic Reviews of Interventions. John Wiley & Sons, 2011.
27. Birkhäuer J, Gaab J, Kossowsky J, et al. Trust in the health care professional and health outcome: a meta-analysis. PLoS One 2017;12:e0170988. [doi:10.1371/journal.pone.0170988](http://dx.doi.org/10.1371/journal.pone.0170988)

Caroline E. Sherry is a second-year medical student at Oklahoma State University Center for Health Sciences. She graduated from the University of Notre Dame in 2018 with a double major in Science and Theology, as well as a minor specialising in the science and practice of humanistic, compassionate medicine. Caroline joined the Vassar research team as a first-year medical student, enjoyed dedicated summer research time and looks forward to the opportunity to engage in research on rotations.