key: cord-328548-5kjq9xqs authors: Oliveira J. e Silva, Lucas; Vidor, Marcos V.; Zarpellon de Araújo, Vicenzo; Bellolio, Fernanda title: Flexibilization of Science, Cognitive Biases, and the COVID-19 Pandemic date: 2020-08-27 journal: Mayo Clin Proc DOI: 10.1016/j.mayocp.2020.06.037 sha: doc_id: 328548 cord_uid: 5kjq9xqs nan This occurred 10 weeks after China reported to the WHO a cluster of pneumonia cases. In the context of a pandemic, multiple pressures, such as fear of death and economic collapse, may align. This scenario creates a fertile ground for ingrained cognitive biases, thereby disturbing the systematic approach upon which science usually relies. The senior author of the first published clinical study on the use of hydroxychloroquine for COVID-19 had stated in an interview with French newspaper Le Monde that "doctors can and should think like doctors, not like methodologists." 1 This study attracted attention and exerted influence after the release of its promising results. Even though methodological limitations were evident, this work generated several claims about the efficacy of hydroxychloroquine for patients with COVID-19. In ordinary times, this manuscript would be scrutinized by an extensive peer-review process that would potentially raise substantive concerns. A subsequent high-profile paper associating the use of hydroxychloroquine with increased mortality in the treatment of COVID-19 had to be retracted 2 after scientists pointed out issues such as mismatched mortality rates when compared to Australian official reports, no release of the dataset for independent analysis, and lack of thorough ethical review. These are examples of a phenomenon we call the "flexibilization" of science, a part of a vicious cycle underpinned by cognitive biases and triggered by the COVID-19 pandemic ( Figure) . The term "flexibilization" here refers to a loosening of methodological standards and the development of low-quality studies, leading to the creation of unreliable data and, later in the cycle, of anecdotal evidence. While low-quality evidence may generate new hypotheses that ultimately result in benefits for patients, it can also have the opposite effect. There are several historical examples that show how careful we need to be before making decisions based on the available evidence. 3, 4 In fact, some of these "surprising" results in previously published literature have taught us that therapeutic approaches that were initially found to be promising were instead causing harm to patients. Within this context, science and clinical research have been creating rigorous methodological standards in order to produce high-quality studies that allow us to have greater confidence in the evidence while mitigating unnecessary damage. Contrary to what was once largely accepted as a normative model, human beings are not usually rational in their decision-making processes, often relying on many heuristics that may have afforded an evolutionary compensation and adaptation for our limited computational capacity. Take, for example, our inclination to search for evidence that confirms our prior beliefs, a tendency known as confirmation bias. We are naturally prone to this sort of intellectual ambush, and there is evidence to support that scientists are not immune to these systematic errors. 5 For a group of people, once a belief is incorporated, even strong evidence contrary to such belief is not enough for a reinterpretation, a mechanism named belief perseveration. 
In the extreme case, exposing people to evidence that is inconsistent with their beliefs might lead them to reject the opposing hypothesis even more strongly, a phenomenon known as the backfire effect. For instance, among people highly concerned about vaccine side effects, receiving information about vaccines from the Centers for Disease Control and Prevention reduced their intent to vaccinate. 6

At a time of urgent demand for cost-effective results, it is important to draw a clear distinction between the scientific method and scientists. The former is an enterprise that aims to diminish systematic error; the latter are reasonably susceptible to all sorts of biases when not following a systematic approach. When low-quality studies are created, the risk of misinterpretation increases. In a highly connected world with increasing public exposure, researchers may be tempted to report their findings in a misleading way, either highlighting the benefits or downplaying the harms of a specific treatment. This is called "spin" in the reporting of clinical research. 7 Since "spin" is highly prevalent in the medical literature, 8 one might speculate on the motivations behind it. In the context of the COVID-19 pandemic, it seems to be mostly driven by an intrinsic desire to find a treatment that works against a disease that is having an important impact on societies across the world. However, other motivations include lack of knowledge about methodological standards, opportunistic publishing, and an intent to influence readers. 9 Studies evaluating the impact of "spin" have shown that clinicians are more likely to perceive a treatment as beneficial when "spin" is present. 10 The general public may be even more susceptible to being influenced by "spin," especially if they lack the expertise to avoid misinterpreting research data. This issue can be amplified by the Dunning-Kruger effect, a cognitive bias in which unskilled individuals tend to overestimate their ability in a given task.

During the COVID-19 pandemic, the early adoption of new interventions by clinicians and policy makers based on promising but often low-quality data is creating a scenario from which anecdotal evidence may emerge. Several countries have endorsed the use of hydroxychloroquine for COVID-19 in clinical scenarios outside of ongoing research protocols. As an example, Brazil's Ministry of Health has released a new treatment guideline for COVID-19 recommending the use of either hydroxychloroquine or chloroquine for patients with mild symptoms, such as cough, fatigue, anosmia, or headache, 11 ignoring current best evidence. This off-label use allows for claims of efficacy based on informal reports of patients who recovered from the disease after taking the medication, adding an important layer of confusion and misinterpretation. Anecdotal evidence is more likely to emerge from this mild end of the spectrum, in which drug efficacy is easily confounded with the natural course of a disease that would otherwise improve with supportive care alone. 12

The belief that hydroxychloroquine might be a good intervention for COVID-19 led to hoarding of this medication by the general public and health care workers around the world. This scenario created uncertainty about drug availability for patients who need this medication, especially those with rheumatologic diseases and those in low- to middle-income countries with high rates of malaria.
Although anecdotal evidence is arguably the lowest quality of scientific proof, there are several reasons to believe that it may be accorded more credence than it truly merits. First, there is a narrative quality to anecdotal evidence that resonates with intuitive patterns of learning. Second, the context in which this sort of information is shared usually involves a known person, which might add an affective valence to the message. Finally, there is the availability heuristic, which leads people to misjudge the probability of an event being true based on how easily it can be recalled.

We emphasize the importance of interrupting the "flexibilization" of science in order to break a vicious cycle that may do more harm than good. While clinical studies need to be conducted at a fast pace during a global crisis, they also need to follow methodological standards in order to produce reliable, high-quality evidence. The administrative bureaucracy and procedures required to perform a well-designed randomized controlled trial, for example, can be accelerated, but the pandemic should not be an excuse to overlook important aspects of methodological rigor. In the era of COVID-19, we need to be even more vigilant about our own cognitive biases and limitations and avoid the "flexibilization" of science, as it may cause significant harm to our society.

References
1. Gautret P, Lagier JC, Parola P, et al. Hydroxychloroquine and azithromycin as a treatment of COVID-19: results of an open-label non-randomized clinical trial. Int J Antimicrob Agents. 2020;56(1):105949.
2. Mehra MR, Ruschitzka F, Patel AN. Retraction: Hydroxychloroquine or chloroquine with or without a macrolide for treatment of COVID-19: a multinational registry analysis.
3. Translation of highly promising basic science research into clinical applications.
4. Why most published research findings are false.
5. Testing for the presence of positive-outcome bias in peer review: a randomized controlled trial.
6. Does correcting myths about the flu vaccine work? An experimental evaluation of the effects of corrective information.
7. "Spin" in reports of clinical research.
8. Reporting and interpretation of randomized controlled trials with statistically nonsignificant results for primary outcomes.
9. A new classification of spin in systematic reviews and meta-analyses was developed and ranked according to the severity.
10. Impact of spin in the abstracts of articles reporting results of randomized controlled trials in the field of cancer: the SPIIN randomized controlled trial.
11. Orientações do Ministério da Saúde para Manuseio Medicamentoso Precoce de Pacientes com diagnóstico da COVID-19 [Brazilian Ministry of Health guidance on early pharmacological management of patients diagnosed with COVID-19].
12. Characteristics of and important lessons from the coronavirus disease 2019 (COVID-19) outbreak in China: summary of a report of 72,314 cases from the Chinese Center for Disease Control and Prevention.