title: The COVID-19 Misinfodemic: Moving Beyond Fact-Checking
authors: Chou, Wen-Ying Sylvia; Gaysynsky, Anna; Vanderpool, Robin C.
date: 2020-12-16
journal: Health Educ Behav
DOI: 10.1177/1090198120980675

Online misinformation regarding COVID-19 has undermined public health efforts to control the novel coronavirus. To date, public health organizations' efforts to counter COVID-19 misinformation have focused on identifying and correcting false information on social media platforms. Citing extant literature in health communication and psychology, we argue that these fact-checking efforts are a necessary, but insufficient, response to health misinformation. First, research suggests that fact-checking has several important limitations and is rarely successful in fully undoing the effects of misinformation exposure. Second, many factors drive misinformation sharing and acceptance in the context of the COVID-19 pandemic, such as emotions, distrust, cognitive biases, racism, and xenophobia; these factors both make individuals more vulnerable to certain types of misinformation and make them impervious to future correction attempts. We conclude by outlining several additional measures, beyond fact-checking, that may help further mitigate the effects of misinformation in the current pandemic.

The rampant spread of online misinformation surrounding COVID-19 and the virus that causes it has significantly undermined the adoption of recommended prevention and control behaviors (Bridgman et al., 2020) and decreased support for crucial, life-saving policies. Efforts to combat online misinformation undertaken by government and health organizations, like the World Health Organization (2020), have largely focused on fact-checking, correcting, or debunking myths and falsehoods. Although such reactive responses are valuable, social science research suggests that the effectiveness of information correction is likely to be limited (Lewandowsky et al., 2012). Solutions focused exclusively on providing evidence-based information and debunking false information are insufficient because they do not account for many of the critical factors that contribute to the acceptance and sharing of misinformation in the midst of a crisis. Citing recent literature on misinformation, we argue that additional actions will need to be taken to adequately address the ongoing COVID-19 "misinfodemic."

Research shows that misinformation is resistant to correction. Several studies have found that corrections cannot completely undo the effect of misinformation exposure (Lewandowsky et al., 2012). Additionally, some literature supports the possibility of a "backfire effect," whereby corrections may, in certain situations, actually cause individuals to believe the initial piece of misinformation more strongly (Nyhan & Reifler, 2010), although more recent research suggests that the backfire effect is less robust than prior studies indicated (Wood & Porter, 2019). The efficacy of corrections also appears to be topic- and context-specific. For example, an experiment found that correction through the "related stories" feature on Facebook could alter attitudes regarding genetically modified organisms but not vaccines (Bode & Vraga, 2015).
In certain situations, corrections (especially those that challenge a preexisting worldview) may lead individuals to discount the scientific process altogether (Lewandowsky et al., 2012) or to disparage the information source (Jang et al., 2019) in order to avoid having to change their beliefs. For a more comprehensive review of the limits of corrections, we refer readers to Lewandowsky et al. (2012). Finally, while the literature does point to some techniques for increasing the effectiveness of corrections (such as repetition), some of these strategies may not be workable in the context of the current pandemic. For instance, providing an "alternative" narrative (Lewandowsky et al., 2012) may not be feasible when debunking misinformation about an unproven treatment (e.g., hydroxychloroquine or "miracle drugs" like colloidal silver and oleandrin), because few approved treatments exist that could be offered as a realistic alternative to the promotion of unproven and dangerous remedies.

Fact-checking and correction efforts hinge on the assumption that people engage with information in an objective, rational way, and these efforts generally do not address other critical factors that contribute to the COVID-19 misinfodemic. Research from other health domains, such as vaccine hesitancy, suggests that psychological, social, and contextual factors play an important role in shaping health attitudes and behaviors and that the simple provision of information is not always effective (Brewer et al., 2017). Below, we describe key factors that should be considered when developing effective responses to COVID-19 misinformation.

First, fact-checking does not take into account the powerful cognitive biases at play when individuals engage with information on social media. For example, confirmation bias (the tendency to seek out evidence that supports a preferred narrative while ignoring contrary evidence) and disconfirmation bias (the tendency to uncritically accept supportive arguments and evidence while carefully scrutinizing contrary arguments and evidence) can prevent individuals from accepting corrective information (MacFarlane et al., 2020). Other cognitive biases help explain why people are receptive to misinformation in the first place. The confidence heuristic (the tendency to interpret the confidence or certainty with which information is expressed as a signal of accuracy or knowledge) can undermine people's ability to distinguish true experts from overconfident frauds (MacFarlane et al., 2020). It is easy to see how this bias disadvantages credible sources of information during the current pandemic: they must be transparent about the limited and evolving nature of the evidence, while misinformation agents, who are not constrained by ethics or professional standards, can boldly promote cures and conspiracy theories regardless of the evidence.

Second, the role of emotion in the spread of misinformation must also be addressed. In political science studies, emotions have been shown to affect the way people process misinformation, with anger, for example, encouraging partisan, motivated evaluation of information (Weeks, 2015). Emotion can also facilitate the spread of misinformation on social media, as content capable of inducing strong emotions spreads more easily (MacFarlane et al., 2020).
For example, a 2018 study showed that false information spreads farther and faster than true information, possibly due to the specific emotions (e.g., surprise, disgust) it elicits (Vosoughi et al., 2018). Crisis situations evoke strong emotions, including fear, anxiety, and sadness, and having information (even if it is incorrect) can make people feel more secure and in control. Information-seeking under crisis conditions may therefore not be entirely rational and may serve purposes other than knowledge acquisition. Communication efforts that validate people's feelings (especially their fears) and try to channel these emotions into constructive health-promoting behaviors (e.g., frequent handwashing) may be an effective complement to the provision of factual information.

Third, the current information environment around COVID-19 is characterized by an information vacuum resulting from both scientific uncertainty and a "data void" in which sufficient high-quality information is lacking. Knowledge regarding transmission, treatment, vaccines, and long-term effects is constantly evolving, and the public at times receives changing or conflicting guidance from health organizations. This environment creates ideal conditions for misinformation to thrive and makes responding to COVID-19 misinformation different from responding to other health topics that have been plagued by high levels of misinformation on social media (such as vaccines), where an extensive evidence base and an established scientific consensus exist.

The role of values and worldview must also not be overlooked, especially in a polarized environment where seemingly nonpartisan issues such as disease outbreaks can become politicized. Research shows that individuals tend to form risk perceptions that fit their values and that this "cultural cognition of risk" shapes individuals' beliefs about the existence of scientific consensus across a variety of scientific domains (Kahan et al., 2011). Information that goes against one's personal values or worldview can also create cognitive dissonance, especially when that worldview is connected to a social identity or ideological group (MacFarlane et al., 2020). However, research on climate change shows that changing the moral frame in which information is presented can moderate the influence of ideology on pro-conservation attitudes and intentions (Wolsko et al., 2016), suggesting that worldview can be leveraged to change attitudes and behaviors. In practice, tailoring COVID-19 messages to be congruent with the values held by the target group (e.g., religious beliefs) could reduce counterarguing and make communication efforts more effective. To this end, Reyna's work on mental representations (the distinction between the verbatim representation of the rote facts of a message and "gist" representations of the message's essential meaning in context) may be a useful tool to apply to health communication about COVID-19, as gist-based messages can better cue the retrieval of motivating values and demonstrate how the scientific information being presented connects to those values (Reyna, 2020).

Distrust in institutions also plays a role in the COVID-19 misinfodemic. Distrust in the medical system, particularly among communities of color, stemming from a history of abuse, everyday discrimination, and broader structural racism, has been well documented (e.g., Armstrong et al., 2008).
A recent study, for example, highlighted the impact of institutional distrust on influenza vaccine attitudes among African Americans (Jamison, Quinn, & Freimuth, 2019). Such distrust could make people more susceptible to disinformation campaigns. Given the racial health disparities observed in the COVID-19 epidemic (Garg et al., 2020), building and sustaining trust with these communities and leveraging trusted in-group messengers will be vital. Distrust likely plays an important role in the views and behaviors of other communities as well, such as groups whose members endorse conspiracy theories. Individuals in these groups place less trust in information coming from sources they deem part of the "establishment," which may render corrections from institutional sources like the government ineffective (Bode & Vraga, 2018). Alternative communication strategies will therefore likely be needed to reach these groups.

Xenophobic attitudes and racism may also fuel the spread of misinformation, as being threatened with disease can increase ethnocentrism and prejudice against those deemed "foreign" (Schaller & Neuberg, 2012). The racial undertones of COVID-19 misinformation can be seen in the use of stigmatizing terms such as the "Chinese virus" and in videos attributing the virus to people in China eating "bat soup" (Dickson, 2020). Fact-checking is not an effective tool against xenophobia, but messages that highlight common ground and humanize those affected by the disease around the world can help reduce susceptibility to misinformation that plays on fear of the "other."

Finally, characteristics unique to the social media environment could be contributing to the spread of COVID-19 misinformation. Misinformation on social media is particularly intractable due to the lack of information "gatekeepers" on these platforms, which makes accurate and false information equally accessible (Bode & Vraga, 2015). In fact, recent work highlights the extent to which these platforms are being used to purposely spread health misinformation, with bots and other malicious actors maintaining a significant presence on platforms like Twitter (Broniatowski et al., 2018; Jamison, Broniatowski, & Quinn, 2019). Furthermore, social media enables individuals to self-curate their feeds, and platform algorithms can further reinforce information silos by providing suggestions based on past behaviors and expressed interests (a feedback loop sketched below). These features make it unlikely that someone who belongs to an echo chamber where misinformation is circulating will be exposed to contradictory (and, in this case, accurate) points of view. Seeing misinformation repeated within an echo chamber can further entrench it by creating a false sense of social consensus around beliefs that are not widely endorsed outside a particular group (Lewandowsky et al., 2012). This points to the importance of accounting for the unique features of the social media environment when formulating a response to the spread of misinformation on these platforms.
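To make this feedback loop concrete, the sketch below shows a deliberately simplistic, topic-matching recommender. It is an illustrative toy model only: the types and function names (Item, EngagementProfile, recommend, engage) are hypothetical and bear no relation to any real platform's ranking system, which would be vastly more complex.

```typescript
// Toy model of interest-based feed ranking, illustrating how suggestions
// based on past engagement can reinforce an information silo.
// Purely illustrative; not any real platform's algorithm.

interface Item {
  id: string;
  topics: string[]; // e.g., ["vaccines", "conspiracy"]
}

// Count of the user's past engagements per topic.
type EngagementProfile = Map<string, number>;

// Score an item by how strongly it matches topics the user has engaged with.
function score(item: Item, profile: EngagementProfile): number {
  return item.topics.reduce((sum, t) => sum + (profile.get(t) ?? 0), 0);
}

// Rank the candidate pool: items similar to past engagement float to the top.
function recommend(pool: Item[], profile: EngagementProfile, k: number): Item[] {
  return [...pool]
    .sort((a, b) => score(b, profile) - score(a, profile))
    .slice(0, k);
}

// Each engagement shifts the profile further toward that content,
// closing the feedback loop.
function engage(item: Item, profile: EngagementProfile): void {
  for (const t of item.topics) {
    profile.set(t, (profile.get(t) ?? 0) + 1);
  }
}
```

Because every engagement with a recommended item makes similar items rank higher on the next pass, a user who interacts with misinformation is served progressively more of it, while corrective content matching none of their engaged topics rarely surfaces.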
Fact-checking and corrections have an important role to play in the public health response to the COVID-19 misinfodemic, but additional actions will be needed to mitigate the impact of misinformation. Here we suggest a number of additional strategies to explore.

For example, enhancing the public's health and science literacy could help reduce susceptibility to misinformation. Educating the public about the scientific research process could make people less likely to accept the spurious causal associations suggested by misinformation posts (MacFarlane et al., 2020), and educating people about the incremental and evolving nature of scientific knowledge could make the public less impatient with the scientific process in the face of emerging diseases.

Moreover, the concept of cognitive reflection could be integrated into health and science literacy efforts. Research shows that nudging people to think about the accuracy of the information they encounter can improve their ability to distinguish true from false information and to make better decisions about what kinds of information they share on social media (Pennycook et al., 2020). These types of "nudges" could even be incorporated into the design of social media platforms or into public health messaging on social media to make the public more mindful about the content they are reading and sharing (e.g., deploying a pop-up message that says, "We all have a part to play in the fight against misinformation. Before you share a story, consider whether it is accurate, and if you are not sure, use a respected fact-checking resource to help you verify the information.").
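As a minimal sketch of how such a share-time nudge might be implemented, the code below gates a share action behind an accuracy prompt. It is an illustration under assumed interfaces: Post, showPrompt, and share are hypothetical stand-ins, not any platform's actual API; the prompt text is the example message above.

```typescript
// Minimal sketch of an accuracy nudge gating a share action.
// All names here (Post, showPrompt, share) are hypothetical stand-ins,
// not a real platform SDK.

interface Post {
  id: string;
  text: string;
}

// The example nudge message from the text above.
const NUDGE_MESSAGE =
  "We all have a part to play in the fight against misinformation. " +
  "Before you share a story, consider whether it is accurate, and if you " +
  "are not sure, use a respected fact-checking resource to help you " +
  "verify the information.";

// Intercepts a share action: shows the accuracy prompt first and only
// forwards the post to `share` if the user confirms.
async function shareWithNudge(
  post: Post,
  showPrompt: (message: string) => Promise<boolean>, // resolves to user's choice
  share: (post: Post) => Promise<void>,              // performs the actual share
): Promise<void> {
  const confirmed = await showPrompt(NUDGE_MESSAGE);
  if (confirmed) {
    await share(post);
  }
  // If the user declines, the share is simply dropped; a real design might
  // instead offer a link to a fact-checking resource at this point.
}
```

Note that the nudge is deliberately content-neutral: it prompts reflection on accuracy in general rather than flagging the specific post as false, consistent with the accuracy-nudge intervention tested by Pennycook et al. (2020).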
Clinicians could also play a larger role in educating patients about misinformation related to the COVID-19 epidemic by leveraging their established relationships with patients. For example, when patients request an unproven treatment based on information they saw online, rather than shutting down the conversation, doctors could explain the need for sufficient clinical evidence from rigorous studies, outline the potential risks of the treatment, and recommend reliable sources of information to use in the future.

Efforts to encourage skepticism toward disinformation agents, for example by highlighting their ulterior motives, including possible financial or political gain, could also help mitigate the impact of misinformation (Lewandowsky et al., 2012). This technique worked well for the "truth" campaign, which focused on exposing tobacco industry practices (Hershey et al., 2005). In a similar vein, highlighting the techniques used by those who spread misinformation (such as false logic) might be an effective way to blunt these tactics (Schmid & Betsch, 2019). One of the more innovative interventions we have seen attempts to accomplish this through an online game that familiarizes players with the techniques commonly used to spread misinformation; the game was found to increase players' ability to identify and resist misinformation irrespective of education level or political ideology (Roozenbeek & van der Linden, 2019).

Additionally, efforts to help people assess the credibility of information sources could be helpful, since individuals often resort to source credibility as a heuristic when evaluating message content but are relatively inept at discerning source credibility from contextual cues (Lewandowsky et al., 2012). One possible approach could see social media companies verifying the accounts of credible experts and organizations and marking them with a green checkmark, similar to the way some platforms use blue checkmarks to denote "authentic" accounts. This would remove some of the onus from individual users to vet information sources and make it more difficult for fraudulent organizations to pass as legitimate institutions (Trivedi et al., 2020).

Finally, in crisis situations such as the current pandemic, attempting to educate and convince people on an individual basis is unlikely to produce behavioral change at the needed scale. Broader environmental changes (Lewandowsky et al., 2012) and shifts in social norms (Paynter et al., 2019) may be more effective strategies. For example, it matters less whether some people believe the severity of COVID-19 is being exaggerated if stores enact policies limiting the number of customers allowed inside, thereby creating conditions in which social distancing depends less on individual choice. Similarly, highlighting the fact that the vast majority of people follow social distancing guidelines could make individuals reluctant to flout those recommendations out of concern about social exclusion and judgment (regardless of their personal belief in the effectiveness of social distancing).

The aim of this perspective piece is to advance dialogue with the research community and with COVID-19 communication practitioners at the local, state, federal, and international levels to ensure that health communication practice is informed by insights from psychology and communication. There is an urgent need to develop multipronged and innovative communication approaches to combat the rampant spread of misinformation online. It is apparent that traditional health communication campaigns and earnest correction of misinformation will fall short in the age of online misinformation. While the potential approaches highlighted above have limitations and are by no means a panacea, they may help inform health communication research and practice moving forward as we collectively seek to mitigate the COVID-19 pandemic domestically and abroad.
References

Armstrong et al. (2008). Differences in the patterns of health care system distrust between blacks and whites.
Bode & Vraga (2015). In related news, that was wrong: The correction of misinformation through related stories functionality in social media.
Bode & Vraga (2018). See something, say something: Correction of global health misinformation on social media.
Brewer et al. (2017). Increasing vaccination: Putting psychological science into action.
Bridgman et al. (2020). The causes and consequences of COVID-19 misperceptions: Understanding the role of news and social media.
Broniatowski et al. (2018). Weaponized health communication: Twitter bots and Russian trolls amplify the vaccine debate.
Dickson (2020). Coronavirus is spreading-And so are the hoaxes and conspiracy theories around it.
Garg et al. (2020). Hospitalization rates and characteristics of patients hospitalized with laboratory-confirmed coronavirus disease 2019-COVID-NET, 14 states.
Hershey et al. (2005). The theory of "truth": How counterindustry campaigns affect smoking behavior among teens.
Jamison, Broniatowski, & Quinn (2019). Malicious actors on Twitter: A guide for public health researchers.
Jamison, Quinn, & Freimuth (2019). "You don't trust a government vaccine": Narratives of institutional trust and influenza vaccination among African American and white adults.
Jang et al. (2019). What debunking of misinformation does and doesn't.
Kahan et al. (2011). Cultural cognition of scientific consensus.
Lewandowsky et al. (2012). Misinformation and its correction: Continued influence and successful debiasing.
MacFarlane et al. (2020). Protecting consumers from fraudulent health claims: A taxonomy of psychological drivers, interventions, barriers, and treatments.
Nyhan & Reifler (2010). When corrections fail: The persistence of political misperceptions.
Paynter et al. (2019). Evaluation of a template for countering misinformation-Real-world autism treatment myth debunking.
Pennycook et al. (2020). Fighting COVID-19 misinformation on social media: Experimental evidence for a scalable accuracy-nudge intervention.
Reyna (2020). A scientific theory of gist communication and misinformation resistance, with implications for health.
Roozenbeek & van der Linden (2019). Fake news game confers psychological resistance against online misinformation.
Schaller & Neuberg (2012). Danger, disease, and the nature of prejudice(s).
Schmid & Betsch (2019). Effective strategies for rebutting science denialism in public discussions.
Trivedi et al. (2020). "Well, the message is from the institute of something": Exploring source trust of cancer-related messages on simulated Facebook posts.
Vosoughi et al. (2018). The spread of true and false news online.
Weeks (2015). Emotions, partisanship, and misperceptions: How anger and anxiety moderate the effect of partisan bias on susceptibility to political misinformation.
Wolsko et al. (2016). Red, white, and blue enough to be green: Effects of moral framing on climate change attitudes and conservation behaviors.
Wood & Porter (2019). The elusive backfire effect: Mass attitudes' steadfast factual adherence. Political Behavior.

The opinions expressed by the authors are their own, and this material should not be interpreted as representing the official viewpoint of the U.S. Department of Health and Human Services, the National Institutes of Health, or the National Cancer Institute.

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. The authors received no financial support for the research, authorship, and/or publication of this article.

Wen-Ying Sylvia Chou https://orcid.org/0000-0002-9140-6094
Anna Gaysynsky https://orcid.org/0000-0001-5612-5572