Nieto, I., Navas, J.F., & Vázquez, C. (2020). The quality of research on mental health related to the COVID-19 pandemic: A note of caution after a systematic review. Brain Behav Immun Health. DOI: 10.1016/j.bbih.2020.100123

BACKGROUND AND AIMS: The SARS-CoV-2 pandemic has spurred scientific production in diverse fields of knowledge, including mental health. Yet, the quality of current research may be challenged by the urgent need to provide immediate results to understand and alleviate the consequences of the pandemic. This study aims to examine compliance with basic methodological quality criteria and open science practices in research on the mental health effects of the COVID-19 pandemic.

METHOD AND RESULTS: Twenty-eight studies were identified through a systematic search. Most of them met the requirements related to reporting key methodological and statistical information. However, the widespread use of convenience samples and the lack of a priori power analyses, coupled with low compliance with open science recommendations such as pre-registration of studies and availability of databases, raise concerns about the validity, generalisability, and reproducibility of the findings.

CONCLUSIONS: While the importance of offering rapid evidence-based responses to mitigate mental health problems stemming from the COVID-19 pandemic is undeniable, this should not come at the expense of scientific rigor. The results of this study may stimulate researchers and funding agencies to orchestrate efforts and resources and to follow standard codes of good scientific practice.

The SARS-CoV-2 pandemic has had profound consequences in many areas of daily life, including research activities. The academic world has launched, admirably fast, many initiatives to try to understand both the biological mechanisms of the virus and the associated psychological and social responses in humans (Holmes et al., 2020). However, this rapid proliferation of studies, and fast-track publication practices, can also be a source of unexpected problems related to the quality of the ongoing studies (Ioannidis, 2020) which, ultimately, can lead to enormous ethical and practical problems (Mehra et al., 2020). The SARS-CoV-2 crisis is likely challenging some of the standard codes of scientific conduct. A gigantic incentive system has been built up to offer immediate results, which might be stimulating a race for social and academic reputation as well as for potential economic advantages (Smaldino and McElreath, 2016). However, even during these difficult times, or perhaps because of them, high-quality research must still be guided by a series of principles that Zarin et al. (2019, p. 813) summarised as: "(1) the study hypothesis must address an important and unresolved scientific, medical, or policy question; (2) the study must be designed to provide meaningful evidence related to this question; (3) the study must be demonstrably feasible (e.g., it must have a realistic plan for recruiting sufficient participants); (4) the study must be conducted and analysed in a scientifically valid manner; and (5) the study must report methods and results accurately, completely, and promptly".
Lack of adherence to these principles may contribute to the replicability and reproducibility crisis in science (Munafò et al., 2017) and may favour the creation of a worthless corpus of knowledge based on significant but spurious findings. The urgency of the situation may bring problems of its own, such as running purely exploratory studies with no clear hypotheses, or studies with flawed designs or analyses, which may harm the credibility of science and, worst of all, mislead important decisions on prevention and intervention measures (London and Kimmelman, 2020). With the hope of contributing to improving the quality of future research, the aim of this meta-research study was to provide a descriptive perspective on research and publication practices related to the mental health aspects of the current COVID-19 crisis. To analyse the quality of quantitative studies, given that there are no clear universal standards of scientific practice in the field of mental health (Gruber and Joormann, 2020), we focused on general principles of ethical standards, methodological soundness, adequate reporting, and open science practices, as described in quality checklists (Downs and Black, 1998) and in indicators of reproducibility and transparency (Munafò et al., 2017).

This systematic review was based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocols (PRISMA-P) statement for systematic review and meta-analysis protocols (Shamseer et al., 2015). The study protocol was preregistered: https://osf.io/bk3gw/. The literature search was performed using the PubMed and Scopus databases on the 13th of May 2020. Studies were included if they were empirical studies published in English, between February and May 2020, in peer-reviewed journals. The dependent variable(s) had to be quantitative, self-reported behavioural, cognitive, or emotional measures related to mental health. Exclusion criteria included clinical pharmacological trials, studies using psychophysiological or biological recordings, and opinion articles, letters to the Editor, reviews, and the like. The terms used in the search were coronavirus OR COVID (limited to title/abstract) AND mental OR psych* OR depression OR anxiety OR stress OR trauma OR alcohol OR drugs OR substance use. Two researchers (IN and JFN) independently evaluated the search results for inclusion. First, duplicates were removed. Then, studies whose titles and abstracts suggested that they did not meet the inclusion criteria were eliminated. Finally, the researchers examined in detail the full text of the remaining studies. Discrepancies were resolved by consensus and, when needed, by referral to a senior researcher (CV).
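As an illustration only, the search string above can be translated into a programmatic PubMed query. The following sketch assumes Biopython's Entrez client and a placeholder e-mail address; the paper specifies the terms and date window but not the tooling, so none of this should be read as the authors' actual procedure (and the Scopus arm is omitted).

```python
# Hypothetical sketch of the PubMed arm of the search; the terms and dates
# follow the text, everything else (client, e-mail, retmax) is assumed.
from Bio import Entrez

Entrez.email = "researcher@example.org"  # placeholder; NCBI requires an address

query = (
    '(coronavirus[Title/Abstract] OR COVID[Title/Abstract]) AND '
    '(mental OR psych* OR depression OR anxiety OR stress OR trauma '
    'OR alcohol OR drugs OR "substance use")'
)

handle = Entrez.esearch(db="pubmed", term=query, datetype="pdat",
                        mindate="2020/02/01", maxdate="2020/05/13",
                        retmax=1000)
record = Entrez.read(handle)
handle.close()

print(f'{record["Count"]} records; first PMIDs: {record["IdList"][:5]}')
```

The retrieved records would still need the manual title/abstract and full-text screening described above; the query only reproduces the identification step.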
A list of quality indicators was developed by agreement between the authors, after examining some of the most widely used guidelines of quality in social and health research, such as the Checklist for Measuring Quality (Downs and Black, 1998). For the selection, we included indicators that were common to all types of quantitative research, regardless of their design (e.g., clinical trials vs. cross-sectional studies). Since there is great overlap between checklists, the criteria were mainly based on the Checklist for Measuring Quality (Downs and Black, 1998). Indicators that required a subjective judgment (e.g., "Were the statistical tests used appropriately?") were avoided. Broadly, these indicators encompassed aspects of the design (e.g., recruitment process, use of validated instruments), the analytical strategy (e.g., control of potential confounders), and the way in which key components of the study are reported (e.g., internal reliability of the instruments used, exact p-values). In addition, some further criteria that were not considered in the referred guidelines (most of them related to open science) were added: (i) approval of an ethics committee; (ii) report of effect sizes; (iii) pre-registration of the study; and (iv) open access to databases (see Table 1, next section, for the list of quality indicators).

Twenty-eight empirical studies on mental health problems associated with SARS-CoV-2 were finally included in the analyses (see Figure 1 for a detailed description of the search flow and results). In Table 1 we present the ethical, methodological, analytical, and open science quality indicators that were selected, together with the percentage of studies that met each indicator (i.e., our main outcome measure). The studies were coded by two independent judges (IN and JFN). The Cohen's kappa inter-rater reliability score was 0.96. Discrepancies were resolved by discussion between the two researchers; those on which no consensus was reached were referred to the senior researcher (CV) for a final decision. For the sake of transparency, the quality indicators met by each single study are available as a Supplementary file.

Table 1. Quality indicators and the percentage of studies meeting each one (percentages shown where available).

Ethics
  1. Approval of an ethics committee.
Reporting
  2. Report of the sample's inclusion/exclusion criteria.
  3. Report of the period of recruitment and data collection.
  4. Report of estimates of the random variability in the data for the main outcome(s).
  5. Report of effect size(s) of main outcome(s). 96.4%
  6.* Report of exact p-value(s) for the main result(s), except when the probability value is less than 0.001.
External validity
  7. Use of random sampling methods to recruit participants. 10.7%
  8.* Report of the proportion of participants that agreed to participate. 48.1%
  9. Inclusion of a priori power analyses. 7.14%
Internal validity
  10. Use of previously validated instruments to measure main outcomes.
  11. Report of the internal reliability of measurement instruments within the study.
  12.* Inclusion of specific analyses to control for potential confounders (e.g., sex, age, health status).
Open science
  13. Pre-registration of the study design, primary outcomes and analysis plan.
  14. Open access to databases (i.e., databases available in public repositories).

Note: Quality indicators marked with an asterisk were not applicable in some cases; indicator 8 applied to 27 studies, and indicators 6 and 12 to 24 studies each.

All but one study (96.4%) reported some effect size related to their main outcome measures. Two further criteria were met by more than 80% of the studies: one related to reporting key information (i.e., estimates of the random variability in the data for the main outcomes) and the other was the use of validated instruments to measure main outcomes. A majority of the studies also had the approval of an ethics committee and reported their inclusion/exclusion criteria for the sample, the period of time in which the sample was recruited and data were collected, and exact p-values for the main results. On the other hand, around sixty percent of the studies included analytic strategies to control for the main potential confounders, about half reported the proportion of participants who agreed to participate, and less than forty percent included estimates of the internal reliability of the instruments used. Finally, there was low compliance with quality standards regarding the use of random sampling methods to recruit participants, the conduct of a priori power analyses, the availability of the database used in a public repository, and pre-registration.
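To make the coding-agreement figure concrete, the sketch below shows how Cohen's kappa is computed from two coders' binary ratings using scikit-learn. The ratings are invented placeholders, not the authors' coding data, so the output will not reproduce the reported 0.96.

```python
# Illustration of the inter-rater agreement statistic used in the review.
from sklearn.metrics import cohen_kappa_score

# 1 = quality indicator met, 0 = not met, per coder (hypothetical values)
ratings_IN  = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0]
ratings_JFN = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0]

# Kappa corrects raw percentage agreement for agreement expected by chance.
print(f"Cohen's kappa: {cohen_kappa_score(ratings_IN, ratings_JFN):.2f}")
```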
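Likewise, the kind of a priori power analysis that only 7.14% of the studies reported can be illustrated with a short statsmodels sketch; the effect size, alpha, power, and two-group design are assumptions chosen for the example, not values taken from any reviewed study.

```python
# A minimal a priori power analysis for a two-group comparison.
from statsmodels.stats.power import TTestIndPower

# Participants per group needed to detect a small-to-medium effect
# (Cohen's d = 0.3) with 80% power at alpha = .05, two-sided test.
n_per_group = TTestIndPower().solve_power(effect_size=0.3, power=0.80,
                                          alpha=0.05, alternative="two-sided")
print(f"Required participants per group: {n_per_group:.0f}")  # about 175
```

Running such a calculation before data collection, and reporting it, is what indicator 9 checks for.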
The results of this study reveal that several critical components of high-quality research were often not met. In particular, the low availability of data, consistent with previous assessments in psychology (Vanpaemel et al., 2015) and psychiatry (Sherry et al., 2020) showing that this practice is poorly followed by researchers, confirms that research related to COVID-19 does not follow recommended measures to increase the replicability of findings. Additionally, warnings and recommendations have been made about the quality of internet-collected responses (Chandler et al., 2020) and about the practice of pre-analysis plans (Olken, 2015) that apparently were not heeded in the studies analysed.

Although some of the limitations detected in our systematic review may be due to the urgency of providing fast research-based answers during a global emergency, we wonder whether researchers, universities, research centres, and funding agencies are ready to respond to the unusual pressures associated with the development of scientific knowledge without affecting quality standards. It is worth noting that the COVID-19 pandemic has had a deep impact on academic organisations and researchers worldwide, subjecting them to unprecedented stress (e.g., time pressure to launch initial studies, difficulties in releasing funds for research, overwhelming increases in workload) that may have hindered the possibility of designing and performing, at least in the initial stages of the pandemic, more sophisticated and controlled research (e.g., using random sampling methods instead of convenience samples). In that respect, a limitation of the present review is that we have only included a first wave of published studies, performed at the beginning of the pandemic, which might have been particularly affected by restrictions in time, funding, and human resources.

Nonetheless, regardless of the reasons for these shortcomings, our findings raise concerns about whether the research examined has practical utility or may instead represent an obstacle to understanding the true impact of the COVID-19 pandemic on mental health. As London and Kimmelman (2020) have argued, the logistical and practical challenges caused by the pandemic should not be an excuse to loosen the quality criteria usually demanded of good scientific practice. In spite of the current limitations on conducting high-quality research, society does not just need data but, now more than ever, sound data. Poor designs, flawed recruitment procedures, and unrepresentative samples may render findings worthless and be a waste of money and energy. Furthermore, flawed science may reduce trust in science, which is key for citizens to follow prescriptions during pandemics (Balog-Way and McComas, 2020). Although, as mentioned before, the urgency to conduct studies and publish results is understandable, sound science is typically fed by reflection, hypothesis-guided research, and robust strategic plans. Behind the demand for quality research in these difficult times, we should remind ourselves that "the moral mission of research remains the same: to reduce uncertainty and enable caregivers, health systems, and policy-makers to better address individual and public health" (London and Kimmelman, 2020, p. 476).
Otherwise, these efforts may be as worthless and unproductive as they are unethical (Zarin et al., 2019).

The authors have no conflict of interest to declare. There was no direct funding source for this study. JFN was supported by a Spanish grant. We thank James O'Grady for his assistance in editing the latest version of this manuscript and Prof. Julio Sanchez-Meca for his comments on the goals and design of the study.

References

Balog-Way and McComas (2020). COVID-19: Reflections on trust, tradeoffs, and preparedness.
Chandler et al. (2020). Participant carelessness and fraud: Consequences for clinical research and potential solutions.
CASP Qualitative Checklist.
Downs and Black (1998). The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions.
Gruber and Joormann (2020). Best research practices in clinical science: Reflections on the status quo and charting a path forward.
Calibrating the scientific ecosystem through meta-research.
An empirical assessment of transparency and reproducibility-related research practices in the social sciences.
Holmes et al. (2020). Multidisciplinary research priorities for the COVID-19 pandemic: a call for action for mental health science.
Ioannidis (2020). Coronavirus disease 2019: The harms of exaggerated information and non-evidence-based measures.
Criteria for the systematic review of health promotion and public health interventions.
London and Kimmelman (2020). Against pandemic research exceptionalism. Science.
Mehra et al. (2020). Hydroxychloroquine or chloroquine with or without a macrolide for treatment of COVID-19: a multinational registry analysis.
Munafò et al. (2017). A manifesto for reproducible science.
Olken (2015). Promises and perils of pre-analysis plans.
Shamseer et al. (2015). Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation.
Sherry et al. (2020). Assessment of transparent and reproducible research practices in the psychiatry literature.
Smaldino and McElreath (2016). The natural selection of bad science.
Vanpaemel et al. (2015). Are we wasting a good crisis? The availability of psychological research data after the storm.
The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies.
The Newcastle-Ottawa Scale (NOS) for assessing the quality of nonrandomised studies in meta-analyses.
Zarin et al. (2019). Harms from uninformative clinical trials.