title: 'One more time': why replicating some syntheses of evidence relevant to COVID-19 makes sense
authors: Page, Matthew J.; Welch, Vivian A.; Haddaway, Neal R.; Karunananthan, Sathya; Maxwell, Lara J.; Tugwell, Peter
date: 2020-05-25
journal: J Clin Epidemiol
DOI: 10.1016/j.jclinepi.2020.05.024

• Given the urgent need for credible answers to high-priority questions about the health and social impacts of COVID-19, many systematic reviewers seek to contribute their skills and expertise;
• Rather than embarking on unnecessary, duplicate reviews, we encourage the evidence synthesis community to prioritise purposeful replication of systematic reviews of evidence relevant to COVID-19.

The coronavirus disease 2019 (COVID-19) pandemic has mobilised researchers across the world on a scale not seen before (1). As of 11 May 2020, 2787 studies presenting primary data on COVID-19 have been indexed in MEDLINE and Embase (2), and 1029 clinical trials of interventions for the disease are currently underway (3, 4). The preprint servers medRxiv and bioRxiv host more than three thousand preprints on COVID-19 (5). There is also a wealth of data from previous pandemics (e.g. SARS, MERS) which may inform efforts to combat COVID-19. To make sense of all these data, timely, relevant systematic reviews and meta-analyses have started appearing (e.g. (6-10)) and more will be necessary in the coming weeks and months. Systematic reviews are required to address not only the aetiology, diagnosis, prognosis and treatment of symptoms of COVID-19, but also the social impacts of the disease (e.g. effects of strategies to support parents with home-schooling their children and educators with online learning pedagogical strategies; consequences of police being mobilised to enforce quarantines; and international development issues such as food security during the pandemic). We believe that while original reviews are essential, decision making during the pandemic would also benefit from the purposeful replication of some systematic reviews of evidence relevant to COVID-19.

In this article, we draw a distinction between duplication and replication of systematic reviews. By 'duplication' of systematic reviews, we mean needless, frequently unwitting or unacknowledged repetition of reviews without a clearly defined purpose for the repetition. By 'replication', we mean using the same or very similar methods as a previous systematic review to determine whether comparable results are obtained, or intentionally broadening or narrowing the question addressed in a previous review to check how operationalisation of concepts in the previous review influenced the results (11).

Previous research suggests that a high proportion of systematic reviews and meta-analyses duplicate those that came before (12, 13). For example, 57 systematic reviews of the effects of direct oral anticoagulants for stroke prevention in atrial fibrillation were published between 2012 and 2017 (14). Duplicate systematic reviews waste time and resources, creating extra work for health care providers and other users who need to determine what unique information, if any, each review provides. Duplication can also create confusion when reviews addressing the same question reach conflicting findings (15).

By 11 May 2020, there were 806 systematic reviews of human studies relevant to COVID-19 registered in PROSPERO, the international prospective register of systematic reviews (16).
However, many of these registered reviews appear to address the same or a similar question (e.g. 21 registered reviews mention chloroquine or hydroxychloroquine in the title, and 30 mention traditional Chinese medicine). Given the urgent need for credible answers to high-priority questions about the health and social impacts of COVID-19, it is unsurprising that many systematic reviewers seek to contribute their skills and expertise. However, unless different teams working on the same review begin collaborating with one another, an epidemic of redundant reviews on COVID-19 is likely on the horizon.

Along with minimising the production of unnecessary, duplicate reviews, we encourage the evidence synthesis community to prioritise purposeful replication of some systematic reviews of evidence relevant to COVID-19. Initially, this could involve replicating previously published, high-priority reviews conducted to address questions of relevance to a previous pandemic (e.g. what are the effects of wearing masks in public?) or questions originally posed in an unrelated context (e.g. what are the effects of programs to support physical activity at home for house-bound older adults?). Thereafter, it may be necessary to replicate some reviews relevant to COVID-19 that are conducted during the pandemic (e.g. what are the effects on COVID-19 symptoms of drugs currently being evaluated in randomised trials?).

Replicating reviews might satisfy the curiosity of methodologists wondering what impact specific methods have on review findings, but that is far from their only purpose. Rather, replicating reviews is a mechanism for verifying or addressing uncertainties about the results of an original review that decision makers might be relying on to formulate recommendations for practice and policy.

Replication of reviews is important in general but is especially valuable for syntheses of evidence relevant to COVID-19. Results of systematic reviews are determined by many choices relating to their design, conduct and analysis (17, 18). For example, reviewers need to decide which studies to include, how to identify studies, which outcome data to collect, and how to synthesise results. There are also many opportunities for errors in reviews, for example in the selection of eligible studies or the collection of relevant data. These issues are compounded during a pandemic such as COVID-19, when stakeholders need answers to pressing questions as soon as possible. The time available to decide a review's scope and methods may be substantially less than usual, and the potential for error considerably higher. Replication of systematic reviews of evidence relevant to COVID-19 can therefore serve as a useful quality control process. Results of a replication could increase or decrease confidence in the claims made in the original review, indicate constraints on the reliability of the findings, and help refine or advance theory, subsequently providing more accurate information for decision makers during the pandemic (19).

Many systematic reviews of evidence relevant to COVID-19 are using methodological shortcuts to provide evidence in a timely manner (20). For example, in their review of quarantine alone or in combination with other public health measures to control COVID-19, Nussbaumer-Streit et al. decided to have a single author screen 70% of titles and abstracts, and one author collect data with verification by another (21).
Based on registration data in PROSPERO, there are many systematic reviewers keen to contribute to the COVID-19 research effort. Rather than proceeding with redundant reviews, these teams could band together to work on purposeful replications that evaluate the impact of abbreviated methods on review findings. Doing so could help reveal what risks, if any, the use of methodological shortcuts entails, adding to the limited comparative evidence on different methods for systematic reviews (22).

Replication of systematic reviews can be done in various ways, some requiring fewer resources than others. Systematic reviewers could perform a full replication of a review by repeating the entire set of systematic review methods, or a partial replication by repeating a particular method for which there was reason for concern. Examples of the latter include: running the same or a broader search to see if any relevant studies were missed (see the sketch below); extracting the study data necessary to recreate one of the meta-analyses reported to see if an alternative result is obtained; or conducting a more in-depth analysis of a subgroup of studies in the original review. Replication of a review could also be done to determine the impact of involving different stakeholders (e.g. patients, insurers) in the review process, or of using an alternative statistical or qualitative synthesis approach (23). Replication might likewise be done to evaluate the impact on review findings of using automation tools (e.g. for study selection, or risk of bias assessment) (24), as compared with an original review relying on human reviewers only. What all these approaches have in common is the adoption of the same or somewhat expanded methods as those used in a target systematic review. By contrast, adopting the methods of an entirely different type of evidence synthesis (e.g. scoping review, overview of systematic reviews) would not constitute a replication, given the different purpose the other type of synthesis serves.
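To illustrate the first of these partial replications, the sketch below compares the number of PubMed records retrieved by an original search string and a broadened variant, using NCBI's public E-utilities interface. Both query strings are invented for the example; a real replication would reuse the search strategy reported in the original review.

```python
# Partial replication of a review's search: re-run the reported PubMed
# search and a broadened variant, and compare how many records each returns.
import json
import urllib.parse
import urllib.request

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(query: str) -> int:
    """Return the number of PubMed records matching a query string."""
    params = urllib.parse.urlencode(
        {"db": "pubmed", "term": query, "retmode": "json", "retmax": 0}
    )
    with urllib.request.urlopen(f"{ESEARCH}?{params}") as resp:
        return int(json.load(resp)["esearchresult"]["count"])

# Hypothetical search strings, for illustration only.
original = '("COVID-19" OR "SARS-CoV-2") AND quarantine'
broadened = f'({original}) OR (coronavirus AND (isolation OR "contact tracing"))'

print(f"original search:  {pubmed_count(original)} records")
print(f"broadened search: {pubmed_count(broadened)} records")
```

A large gap between the two counts does not by itself show that eligible studies were missed, but it flags searches whose additional records are worth screening in full.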
Even with an army of experienced systematic reviewers, replicating every review of evidence relevant to COVID-19 is neither feasible nor desirable. Systematic reviewers, commissioners and other stakeholders must therefore prioritise which reviews to replicate. An international, multidisciplinary group of 36 individuals from seven countries met in Wakefield, Canada in 2019 to develop guidance on when and when not to replicate systematic reviews. The resulting guidance advises reviewers to consider various criteria, such as (i) the priority of the review question for decision makers; (ii) the potential for replication to address uncertainties, controversies or the need for additional evidence relating to the framing, conduct, potential for author influence, or discordance of findings in previous reviews; (iii) the extent to which implementation of the results of the replication could affect a sizeable population; and (iv) whether the resources required to replicate are offset by the potential value in reaffirming or addressing uncertainties related to the original results (25). A paper describing this guidance is under review, and we encourage anyone seeking further information, or keen to collaborate on research on the replicability of systematic reviews, to contact us.

The COVID-19 context provides some unique opportunities and challenges for replication of reviews. Several syntheses of evidence relevant to COVID-19 are being continually updated (i.e. "living" reviews) (4, 10, 26, 27). If one of these reviews were replicated and errors were identified, these could be corrected in the original review at a much faster pace than usually occurs. Also, by bringing together various organisations to help reduce duplication and better coordinate evidence syntheses relevant to COVID-19, the recently established COVID-19 Evidence Network to support Decision-making (COVID-END) (28) could help facilitate the process of prioritising and coordinating replications of reviews. On the other hand, the politicisation of discussions about COVID-19 means that the findings of replicated reviews would need to be communicated carefully, as failures to obtain the same result in a replication could be weaponised by some to discredit the entire systematic review process; something already observed in discourse on modelling studies for COVID-19 (29).

To enhance replication of systematic reviews relevant to COVID-19 completed during the pandemic, we urge systematic reviewers to make their workflow publicly accessible. We recommend reviewers use reporting guidelines for systematic reviews, which typically recommend authors report what question(s) the review addressed, the types of studies they considered eligible, how they identified such studies, which data they collected, and how results were synthesised (30). Following principles of "Open Synthesis" by sharing the underlying data, analytic code and other materials used in the review via one of the various public repositories available (such as the Open Science Framework, figshare, or Dryad) can supplement information provided in the review report (31). For example, the summary data required to re-run meta-analyses, and data for other outcomes for which meta-analysis was not possible, could be provided in a well-curated format ready for reuse (e.g. a Review Manager file, or a Microsoft Excel or CSV file) along with any analytic code necessary for reanalysis.
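As an illustration of what such shared data and analytic code might look like, the sketch below re-runs a random-effects meta-analysis from a summary-data file. The file name, column names and choice of effect measure (log risk ratios pooled with the DerSimonian-Laird method) are our assumptions for the example; a replicator would substitute whatever the original review actually shared.

```python
# Minimal sketch of re-running a meta-analysis from shared summary data.
# Assumes a hypothetical CSV ("review_data.csv") with one row per study:
#   study,log_rr,se
# i.e. each study's log risk ratio and its standard error, as a replicator
# might obtain them from an original review's deposited data file.
import csv
import math

def random_effects_pool(effects, ses):
    """DerSimonian-Laird random-effects pooling of two or more study effects."""
    w = [1 / se ** 2 for se in ses]                      # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)        # between-study variance
    w_re = [1 / (se ** 2 + tau2) for se in ses]          # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

with open("review_data.csv", newline="") as f:
    rows = list(csv.DictReader(f))
effects = [float(r["log_rr"]) for r in rows]
ses = [float(r["se"]) for r in rows]

pooled, lo, hi = random_effects_pool(effects, ses)
# Report on the risk-ratio scale for comparison with the original review.
print(f"Pooled RR {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```

Comparing the pooled estimate printed here against the estimate published in the original review is precisely the kind of low-cost partial replication described above, and is only possible when the summary data are shared in a reusable format.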
Also, data extraction forms that clearly indicate what data were sought, what data were obtained and where data were obtained from may reduce uncertainties for replicators (31, 32). Replicators should also register their plans to replicate a review in PROSPERO and post working protocols in publicly accessible repositories. In addition to aiding replication efforts, making the review workflow available for scrutiny should help increase the public's trust in systematic review findings.

We believe Nosek and Errington's view of replication as an "exciting, generative, vital contributor to research progress" (19) applies equally to systematic reviews and other research syntheses. However, replicated systematic reviews are currently a rarity, likely because their potential value is under-recognised by researchers, funders, journals and other stakeholders. We hope that this changes throughout the COVID-19 pandemic and beyond, with replicated systematic reviews coming to be seen as highly valued and necessary research products, and redundant reviews a relic of the past.

MJP is an editorial board member and PT is the co-editor-in-chief of the Journal of Clinical Epidemiology, but neither was involved in the peer review process or the decision to publish. There was no direct funding for this manuscript. MJP is supported by an Australian Research Council Discovery Early Career Researcher Award (DE200101618). The meeting to develop guidance on when and when not to replicate systematic reviews was supported by a Canadian Institutes of Health Research Operating Grant (REF # PJT-148870). The funders had no role in the decision to publish or preparation of the manuscript.

All authors meet the ICMJE criteria for authorship. MJP conceived the paper and wrote the first draft. All authors were involved in revising the article critically for important intellectual content. All authors approved the final version of the article. MJP is the guarantor of this work.

References
1. A novel coronavirus outbreak of global health concern.
2. COVID-19: living map of the evidence.
3. Global Coronavirus COVID-19 Clinical Trial Tracker.
4. A real-time dashboard of clinical trials for COVID-19. The Lancet Digital Health.
5. COVID-19 SARS-CoV-2 preprints from medRxiv and bioRxiv.
6. Oxford COVID-19 Evidence Service.
10. Living mapping and living network meta-analysis of Covid-19 studies.
11. When and how to replicate systematic reviews.
12. Overlapping meta-analyses on the same topic: survey of published studies.
13. Overlapping network meta-analyses on the same topic: survey of published studies.
14. Overview of Systematic Reviews of Non-Vitamin K Oral Anticoagulants in Atrial Fibrillation.
15. Discrepancies in meta-analyses answering the same clinical question were hard to explain: a meta-epidemiological study.
17. The role of judgment calls in meta-analysis.
18. Vibration of effects from diverse inclusion/exclusion criteria and analytical choices: 9216 different ways to perform an indirect comparison meta-analysis.
20. Coronavirus Disease (COVID-19) Pandemic: An Overview of Systematic Reviews.
21. Quarantine alone or in combination with other public health measures to control COVID-19: a rapid review.
22. Few studies exist examining methods for selecting studies, abstracting data, and appraising quality in a systematic review.
23. Just how plain are plain tobacco packs: re-analysis of a systematic review using multilevel meta-analysis suggests lessons about the comparative benefits of synthesis methods.
24. A full systematic review was completed in 2 weeks using automation tools: a case study.
25. When should systematic reviews be replicated, and when is it wasteful?
26. Epidemiology of and Risk Factors for Coronavirus Infection in Health Care Workers: A Living Rapid Review. Annals of Internal Medicine.
27. Interventions for treatment of COVID-19: a protocol for a living systematic review with network meta-analysis including individual patient data (The LIVING Project).
29. Coronavirus modelers factor in new public health risk: Accusations their work is a hoax.
30. Mapping of reporting guidance for systematic reviews and meta-analyses generated a comprehensive item bank for future reporting guidelines.
31. Open Synthesis: on the need for evidence synthesis to embrace Open Science.
32. Reproducible research practices are underused in systematic reviews of biomedical interventions.