Social Media COVID-19 Misinformation Interventions Viewed Positively, But Have Limited Impact

Christine Geeng, Tiona Francisco, Jevin West, Franziska Roesner

2020-12-21

Amidst the spread of COVID-19 misinformation, social media platforms like Facebook and Twitter rolled out design interventions, including banners linking to authoritative resources and more specific "false information" labels. In late March 2020, shortly after these interventions began to appear, we conducted an exploratory mixed-methods survey (N = 311) to learn: what are social media users' attitudes towards these interventions, and to what extent do they self-report effectiveness? We found that most participants indicated a positive attitude towards interventions, particularly post-specific labels for misinformation. Still, the majority of participants discovered or corrected misinformation through other means, most commonly web searches, suggesting room for platforms to do more to stem the spread of COVID-19 misinformation.

In late March 2020, social media platforms had recently increased implementation of misinformation interventions (such as banners or labels) in response to the proliferation of COVID-19 health misinformation. Twitter and Facebook both added generic banners directing users to COVID-19 information, as well as misinformation warnings on specific posts. To better understand user responses to these changes, we conducted a mixed-methods online survey, recruiting through Prolific and our personal networks, to gauge attitudes (of participants who had seen them) towards these interventions on Facebook, Instagram, and Twitter. Our survey was exploratory, and no hypotheses were tested. We also collected accounts of how participants had learned that COVID-19 misinformation they had seen was false. Our research questions were:

1. What are people's attitudes towards social media platform interventions for COVID-19 misinformation, including generic banners linking to authoritative sources and specific false information labels?

2. How did people discover that COVID-19 misinformation was actually false? Specifically, what was the role of social media platform interventions in this discovery, compared to other methods?

Our results show that participants rated the helpfulness of Facebook's "False Information" label, which appears on specific posts, significantly higher than Facebook's generic COVID-19 information banner, suggesting that post-specific interventions may be more effective. Some participants reacted negatively to the interventions, e.g., expressing a distrust of the platform. Despite the general acceptance of the interventions, we find that 76.7% of participants instead discovered information to be false through web searches or trusted health sites. Our results suggest that social media platform interventions are not yet doing the heaviest lifting when it comes to correcting misinformation, but people are receptive to these attempts. Our exploratory study raises open research questions, and our results suggest there is room for platforms to augment or support existing user strategies, as well as to increase post-specific fact-check labeling.
The rise of misinformation on social media has prompted sites like Facebook and Twitter to design platform affordances addressing misinformation, such as showing links to trusted public health sites for vaccine-related search terms [20]. Facebook has experimented with various interventions, ranging from showing "disputed" flags or "false information" labels on posts to more subtly showing fact-checking "related articles" [3]. While post-specific "disputed" labels might raise concerns about triggering the backfire effect [14], i.e., entrenching existing false beliefs, recent replication [21] and review work suggest that "backfire effects are not a robust empirical phenomenon" [19].

Findings about the effectiveness of interventions have been varied. Bode et al. found Facebook's "related articles" to reduce health misperceptions [1]. Pennycook et al. found that attaching warnings to fake news headlines could lead to the incorrect belief that non-labeled headlines are not false [16]. The latter study only displayed headlines to participants; we note that other work has shown that people use multiple heuristics on and off social media to determine information credibility [5, 6, 13]. Other approaches include pre-emptive debunking, which has been shown to be effective at preventing anti-vaccine conspiracy beliefs [8]. Since the COVID-19 pandemic began, Facebook, Twitter, and others have implemented more fact-checking affordances [17, 18], given the various health misinformation that has arisen [9].

To investigate the effectiveness of these interventions in this specific, currently highly relevant context, we qualitatively and quantitatively surveyed the sentiments of users who had seen these interventions on their own feeds, as well as their other experiences with COVID-19 misinformation. At the highest level, our results suggest that while most respondents are receptive to social media platforms' attempts to curb COVID-19 misinformation, there remains room for improvement and future research to inform both platform designs and related policy discussions.

To answer our research questions, we conducted an anonymous online survey (approximately 10 minutes long) from March 20-26, 2020 to elicit quantitative and qualitative responses. Our study was reviewed and deemed exempt by the University of Washington Human Subjects Review Board (IRB). We did not collect identifying information about participants. For any quotes used in this paper, the quoted participant explicitly provided their consent (in the survey) to have their anonymized quotes used in publications.

To recruit participants, we used both Prolific, a paid crowdsourcing service, and our personal networks via social media. Prolific participants were paid $13.86/hr, with an average survey completion time of 7 minutes. We also sought volunteers via our personal networks on Facebook and Twitter. Participants were screened out if they were not at least 18 years old or had not used Facebook, Twitter, or Instagram since March 1st (around the time when the COVID-19 misinformation interventions started to roll out). We recruited 111 participants through our personal networks and 202 through Prolific, and we removed 2 disingenuous responses (based on our review of answers to free-response questions), for a total of 311 completed surveys. In this paper, we discuss and analyze the results of both populations combined. Demographic questions were optional. The majority of our participants were 18-24 years old (refer to Table 1).
37.94% of our participants live in the United States, 10.29% in Portugal, 9.97% in the United Kingdom, 7.40% in Canada, and 7.40% in Poland. The rest of our participants live in a variety of other countries. Of the 118 participants living in the United States, 48 identify as Democrat, 11 as Republican, and 34 as Independent; the rest did not answer the question.

Our survey asked whether participants had seen Facebook, Twitter, or Instagram COVID-19 or misinformation interventions (circa March 2020) before. If so, we asked both an open-ended question about their thoughts and a question about how helpful they considered the intervention, on a 5-point scale from "Not at all helpful" (1) to "Extremely helpful" (5). For interventions that labeled specific misinformation, we also asked whether the label had changed their view of the labeled post. We also asked for anecdotes of when participants had seen or believed COVID-19 misinformation, where they had seen it, how they discovered its falsity, and what they did upon realizing this. Finally, we asked participants to select, from a list of known COVID-19 misinformation, which items they had seen.

To analyze open-ended responses about perceptions of interventions, three coders independently and inductively coded a subset of these answers before discussing and agreeing on a codebook of 17 codes. Following McDonald et al.'s guidelines on when to seek coding agreement [11], we double-coded a subset (46.34% of total responses) to check for agreement and then had a single coder code the rest of the responses. For the double-coded subset, we calculated Cohen's κ for inter-coder reliability, given that we had two coders and nominal data [12]. We had a κ of "substantial" (0.61-0.80) to "almost perfect" agreement (0.81-1.00) for 87.5% of categories (see Appendix). We discussed code usage discrepancies between coders until we reached a consensus.

To analyze our helpfulness scale data, we compared helpfulness ratings between pairs of interventions, restricting each comparison to participants who had seen both. Since our scale data are ordinal, we used a Wilcoxon signed-rank test and report only tests that reached significance.
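To make the inter-coder agreement check concrete, the following is a minimal sketch (not our analysis code; the code applications shown are hypothetical, and scikit-learn is assumed to be available) of computing Cohen's κ for a single code over a double-coded subset:

```python
# Minimal sketch: Cohen's kappa for one code across a double-coded subset.
# The labels below are hypothetical (1 = code applied to a response, 0 = not).
from sklearn.metrics import cohen_kappa_score

coder_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
coder_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 0.80 ("substantial") for this toy data
```

In practice, such a computation would be repeated for each code in the codebook, yielding one κ value per category.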
Our results reveal a variety of reactions to misinformation labels and different modes of discovering misinformation.

Social media interventions are used, but are outweighed by other strategies for debunking misinformation. When asked in a multiple-response question how they learned that something they saw was false (whether or not they initially believed it), participants told us most frequently that they conducted a web search (39.6% of the 240 who answered this question), sought out trusted sources (37.1%), saw a correction in a social media comment (19.2%), or heard a correction from someone directly (12.1%). Only 4.2% learned something was false because the social media platform had labeled it as such. The majority (71.7%) of respondents indicated they "knew it wasn't true," though we cannot verify whether respondents' baseline knowledge was correct.

Post-specific social media interventions are viewed as more helpful, and seem to be more effective, than generic interventions for our participants. We find that participants tended to rate (on a 1-to-5 scale) post-specific interventions as more helpful. For example, among the 30 participants who had seen both Facebook interventions, participants found the post-specific "False Information" label significantly more helpful (median rating of 4, "very helpful") than the generic banner (median rating of 2, "slightly helpful") (Wilcoxon signed-rank test, V = 4, Z = -4.13, p = 0.018, r = 0.75).

Considering effectiveness, only 13.3% of the 105 participants who saw the Facebook banner said that they had ever clicked on it. Meanwhile, 32.3% of the 65 participants who saw the Facebook "False Information" label said they no longer believed the content of the post due to the label; 50.8% self-reported (albeit in retrospect) that they had already not believed the false-labeled post, and only 6.2% said that they continued to believe the post, or believed it more, given the label.

The median helpfulness rating of the generic Twitter banner was 3 ("somewhat helpful"), with a reported clickthrough rate of 32.8% among the 58 participants who saw the banner. The difference in helpfulness rating between the Facebook and Twitter banners (medians of 2 and 3, respectively), for the 26 participants who saw both, was not statistically significant (Wilcoxon signed-rank test, V = 4.5, Z = -2.11, p = 0.06, r = 0.41).

We collected qualitative free-response data about participants' attitudes and highlight key themes here. Participants' opinions about platform interventions ranged from positive ("I thought it was good that Facebook was trying to do something to inform people better") to neutral ("I didn't think much of it. I follow the news so I didn't click on this one because I already know the basic details") to negative ("I don't like it. I don't need Facebook to tell me this, and I don't trust their automated way of detecting it") to, rarely, hostile ("I was irritated because it is another in a long list of 'tools' to 'protect' users. In my opinion, this label assumes people are morons and unable to discern what's true, false and/or misleading"). Table 2 shows how often the themes we coded appeared in responses. Our analysis focused on negative reactions, as these provide more actionable information. A sentiment expressed by both positively- and negatively-reacting participants was that they found the interventions unnecessary because they were already sufficiently informed about COVID-19.

When participants came across misinformation and realized it was false, 54.62% did nothing, but 35.3% made a correction. Our results suggest that COVID-19 misinformation was rampant on social media and the web in late March 2020: 79.5% of participants reported having seen others share COVID-19-related misinformation, and 33.9% reported believing something false themselves. Table 3 and Table 4 show participants' self-reported reactions when they realized they or their contacts had shared COVID-19 misinformation. In both cases, a slight majority of participants did nothing, though a significant fraction also publicly or privately shared a correction. Public corrections sometimes occurred in group chats; one participant stated, "On the same group where the message was shared, with my friends, we discussed the fact that it was false after it appeared on the news." Others added comments with corrections to posts or "liked" an existing correction. Private corrections involved in-person conversations, email, or direct messages. Some "Other" responses included reporting the post or filtering unwanted content from one's social media feed. Though we did not collect data on reasons for taking no action, these reasons might include not wanting to engage in a debate, not being able to find the original post again, not considering the issue personally relevant enough, or not having re-shared the false information themselves after believing it.
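As an illustration of the paired helpfulness comparisons reported in this section, the following is a minimal sketch (with hypothetical ratings, not our data; SciPy is assumed to be available) of a Wilcoxon signed-rank test on 1-to-5 ratings, along with an approximate effect size r = |Z| / sqrt(N):

```python
# Minimal sketch: paired Wilcoxon signed-rank test on ordinal helpfulness
# ratings from participants who rated both interventions (hypothetical data).
import math

from scipy.stats import wilcoxon

banner = [2, 1, 3, 2, 2, 4, 1, 2, 3, 2]  # generic banner ratings (1-5)
label = [4, 3, 5, 3, 4, 5, 2, 4, 5, 4]   # post-specific label ratings (1-5)

stat, p = wilcoxon(banner, label)
print(f"V = {stat}, p = {p:.3f}")

# Approximate effect size via the normal approximation of the test statistic
# (ignores tie corrections; adequate for a sketch).
n = len(banner)  # no zero differences in this toy data
mu = n * (n + 1) / 4
sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
z = (stat - mu) / sigma
print(f"r = {abs(z) / math.sqrt(n):.2f}")
```

Restricting each comparison to participants who saw both interventions keeps the samples paired, which is what the signed-rank test assumes.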
From our results, we make some suggestions towards improving misinformation labeling efforts.

Social media platforms should increase specific misinformation labeling efforts. In the context of COVID-19, our participants had generally positive responses to the interventions, and found specific misinformation labels to be more helpful than generic banners pointing to authoritative sources. We also found that these specific labels worked for many participants: of the 65 people who saw the Facebook label, 21 heeded the label, while only 2 continued to believe the post and 2 believed it more. Our results thus suggest that, at least among our study population, the labels generally produce the intended effect rather than a "backfire effect" [14, 10]; this supports other work that finds no robust evidence for this phenomenon [19, 21]. However, only 4.2% of respondents who said they had seen misinformation stated that they learned it was not true through social media labeling, suggesting that they often see misinformation on social media that is not labeled by the platform. This finding suggests a strong motivation for social media platforms to significantly increase the amount and frequency of misinformation that they explicitly label as false.

Authoritative banners should be designed to not look like ads, and warning fatigue should be considered. While the banners were the result of collaborations between social media sites and the World Health Organization and other national public health agencies [17, 18], 11 responses (out of 246) mentioned that the banners looked like ads or that they did not trust the social media company enough to trust the banner. The sheer frequency with which participants see these or similar banners across different sites may also lead to warning fatigue [2]. Indeed, 27 responses noted ignoring the banner because they had already seen so much other information about COVID-19. Future research should further study these effects and how to avoid them.

Open research questions remain around intervention design and effectiveness, side effects, and interventions beyond COVID-19. As discussions around platform responsibility and potential liability in the face of misinformation intensify, policymakers will need substantive evidence to inform these discussions. Our results suggest that different interventions have different impacts, so multiple and continued studies are needed. We call on future research to help answer these open questions, considering different types of users, different types of content, and different types of intervention designs. For example, our study does not attempt to differentiate between types of people, who may react differently to interventions. A Pew Research study showed that Americans engage with online information in varying ways, ranging from eager or curious to distrustful of information sources [7]. Future research should study whether interventions like the ones we study here are most effective for certain types of information consumers, for example, people who trust the fact-checking sources used by social media platforms, or people who are not already convinced of the relevant misinformation but are attempting to become informed. Different intervention designs may be effective for different information consumers. Future work should also explore whether there are other potential side effects of the interventions, beyond debunking misinformation directly.
For example, while the generic banners may not change behaviors in the moment, perhaps they have a more subtle, sustained impact on how people evaluate information in their feeds. This impact might be positive (reducing trust in misinformation) but may also be negative, e.g., increasing trust in misinformation that does not have a fact-checking label [16].

We found that 35.3% of participants corrected others sharing misinformation, and 27.18% of participants shared a correction when they themselves had believed misinformation. These numbers are far below 100%, but non-trivial. Future research and design should explore how much room there is to increase (self-)correcting behavior from users, experimenting with ways to make sharing corrections easier.

Finally, COVID-19 is a unique situation, and future research should study how people use and react to platform-based interventions on other topics (e.g., political misinformation, climate change). People may consider certain platform-based interventions more appropriate during a global pandemic, but may prefer that social media platforms take a less active role in labeling content in other circumstances. The normalization of current platform practices may also shift user perspectives for the future.

Our exploratory study is based on a convenience sample of participants, and our results may not generalize to broader, more representative populations. Most of our participants live in the United States; individuals living in other countries may have seen other misinformation more relevant to their geographic location that we did not ask about, or different versions of the platform interventions than the screenshots we showed. Participants sampled from our personal networks may have skewed towards academics and people with an interest in computer security and privacy. As with any self-report methodology, responses are susceptible to recall bias, influence from wording, and erroneous statements [15]. We did not compare differences between our two sampling populations, as it is unclear what variations and similarities exist between these two groups. We make no strong quantitative claims about our qualitative results. Finally, the COVID-19 situation and the platform interventions themselves are changing rapidly; our results represent one snapshot in time (late March 2020). Nevertheless, this study sheds light on participants' reactions to platform interventions in a hotly debated and quickly evolving space, and raises new research questions and directions for future work.

To better understand people's responses to social media platform interventions for COVID-19 misinformation, we conducted an exploratory mixed-methods online survey in late March 2020 to gauge attitudes (of participants who had seen them) towards these interventions on Facebook, Instagram, and Twitter, as well as to collect accounts of how participants had learned that COVID-19 misinformation was false. Our results suggest that post-specific interventions may be more effective, and that social media platform interventions are not yet doing the heaviest lifting when it comes to correcting misinformation, but people are receptive to these attempts.
Appendix: Survey instrument (excerpt)

1. When did you last use each social media site? (Options for Facebook, Twitter, and Instagram: After March 1st / Before March 1st / Never)

2. Have you seen this banner on Facebook before? • Yes • No

How helpful was this banner? • Extremely helpful • Very helpful • Somewhat helpful • Slightly helpful • Not at all helpful

• Yes • No • I don't know/remember

6. Have you seen this "False Information" label on Facebook before? • Yes • No • I don't know

7. You said you've seen this "False Information" label on Facebook before. Think of a recent time when you saw this label.

Did the label change your view of the post it was referring to? • Yes, I no longer believed the post • Yes, I believed the post more • No, I already didn't believe the post • No, I still believe the post • Other

10. Have you seen this banner on Twitter before? • Yes • No • I don't know

11. You said you've seen this banner on Twitter before. What did you think or feel about it?

12. How helpful was this banner? • Extremely helpful • Very helpful • Somewhat helpful • Slightly helpful • Not at all helpful

14. Have you seen this "Manipulated media" label on Twitter before?

(Please be aware that these are all false rumors. For up-to-date information on the virus, please go to WHO.int or CDC.gov.)
• The novel coronavirus sickness is caused by 5G
• There's a plot to "exterminate" people infected with the new coronavirus
• Scientists have proven that humans got the novel coronavirus from eating bats
• Scientists predicted the virus will kill 65 million people
• China built a biological weapon that was leaked from a lab in Wuhan
• Chinese spies smuggled the virus out of Canada
• A coronavirus vaccine already exists
• There were 100,000 confirmed cases in January
• A teen on TikTok is the first case in Canada
• There will be a mass quarantine and martial law in a certain state (e.g., Washington)
• Other

34. Can we use anonymized quotes from your free-response answers in future research publications?

Table 5: Inter-rater reliability percentages. *http://dfreelon.org/2008/10/24/recal-error-log-entry-1-invariant-values/

References

[1] In related news, that was wrong: The correction of misinformation through related stories functionality in social media.
[2] Harder to ignore? Revisiting pop-up fatigue and approaches to prevent it.
[3] Replacing Disputed Flags with Related Articles.
[4] Facebook's approach to fact-checking: How it works.
[5] Falling for fake news: Investigating the consumption of news via social media.
[6] Fake news on Facebook and Twitter: Investigating how people (don't) investigate.
[7] How people approach facts and information.
[8] Prevention is better than cure: Addressing anti-vaccine conspiracy theories.
[9] No, holding your breath is not a 'simple self-check' for coronavirus.
[10] Misinformation and its correction: Continued influence and successful debiasing.
[11] Reliability and inter-rater reliability in qualitative research: Norms and guidelines for CSCW and HCI practice.
[12] Interrater reliability: The kappa statistic.
[13] Biases and constraints in communication: Argumentation, persuasion and manipulation.
[14] When corrections fail: The persistence of political misperceptions.
[15] The self-report method. Handbook of Research Methods in Personality Psychology.
[16] The implied truth effect: Attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings.
[17] Stepping up our work to protect the public conversation around COVID-19.
[18] An update on our work to keep people informed and limit misinformation about COVID-19.
[19] Searching for the backfire effect: Measurement and design considerations.
[20] Twitter. Helping you find reliable public health information on Twitter.
[21] The elusive backfire effect: Mass attitudes' steadfast factual adherence.
Acknowledgments

We thank Tadayoshi Kohno, Lucy Simko, and Miranda Wei for their feedback on our survey instrument, and Yim Register for feedback on an earlier version of this paper. This paper was supported in part by the National Science Foundation under Award CNS-1651230 and the John S. and James L. Knight Foundation.