title: Accuracy of health-related information regarding COVID-19 on Twitter during a global pandemic
authors: Swetland, Sarah B.; Rothrock, Ava N.; Andris, Halle; Davis, Bennett; Nguyen, Linh; Davis, Phil; Rothrock, Steven G.
date: 2021-07-29
journal: World Med Health Policy
DOI: 10.1002/wmh3.468

This study was performed to analyze the accuracy of health-related information on Twitter during the coronavirus disease 2019 (COVID-19) pandemic. Authors queried Twitter on three dates for information regarding COVID-19 and five terms (cure, emergency or emergency room, prevent or prevention, treat or treatments, vitamins or supplements), assessing the first 25 results with health-related information. Tweets were authoritative if written by governments, hospitals, or physicians. Two physicians assessed each tweet for accuracy. Metrics were compared between accurate and inaccurate tweets using χ² analysis and the Mann–Whitney U test. A total of 25.4% of tweets were inaccurate. Accurate tweets were more likely to be written by Twitter-authenticated authors (49.8% vs. 20.9%, 28.9% difference, 95% confidence interval [CI]: 17.7–38.2), and authors of accurate tweets had more followers than authors of inaccurate tweets (19,491 vs. 7346; 3446 difference, 95% CI: 234–14,054). Likes, retweets, tweet length, Botometer scores, writing grade level, and rank order did not differ between accurate and inaccurate tweets. We found one-quarter of health-related COVID-19 tweets to be inaccurate, indicating that the public should not rely on COVID-19 health information written on Twitter. Ideally, improved government regulatory authority, public/private industry oversight, independent fact-checking, and artificial intelligence algorithms are needed to ensure inaccurate information on Twitter is removed.

In December 2019, a novel coronavirus was identified as the cause of pneumonia in a cluster of patients in Wuhan, China (Zhu et al., 2020). Since this initial outbreak, the identified virus, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has spread across the globe, with the World Health Organization (WHO) declaring a global pandemic on March 11, 2020. The flood of accurate and inaccurate information accompanying the outbreak has been termed an "infodemic" (Department of Global Communications, 2020). According to the WHO, an infodemic allows for the "spread of misinformation, disinformation, and rumors during a health emergency," hampers an effective public health response, and has the potential to create confusion and distrust among populations (Department of Global Communications, 2020). Because Twitter is so widely used as a public source of communication and news, the accuracy, relevancy, and timeliness of its information are important. For this reason, we chose to study health-related information on Twitter during the COVID-19 global pandemic. Our primary goal was to determine the accuracy of health-related tweets related to COVID-19. A secondary goal was to determine features associated with accurate tweets.

Between March 24 and April 5, 2020, five new Twitter accounts were established with no followers, no accounts followed, a default of English language, people (anyone), location (anywhere), and highly sensitive content turned off. Before performing the study, two physician study authors created five sets of search terms to be used. Search terms were chosen to elicit tweets that addressed common issues surrounding the management of COVID-19.
These included "COVID-19" plus either "cure," "treat or treatment," "prevent or prevention," "emergency or emergency room," or "supplements or vitamins." Tweets were categorized as having an authoritative source if they were written by a government, physician/ physician group, or hospital/hospital system. Tweets were defined as medical or healthrelated if they contained medical or health information presented as a fact, a recommendation, a statement, or an opinion. Tweets were categorized as news if they self-reported that the author was a news organization or worked for a news organization. Tweets that were primarily political, memes or jokes, or religious without health or medical claims were excluded. Before initiating the study, a 2-h training session took place with all study authors (data abstractors) that emphasized definitions, uniform tweet reviews, and coding of information to be placed into an Excel spreadsheet. Two weeks before initiating this study, using "influenza" as a practice search term, all data abstractors simultaneously analyzed 10 non-COVID-19-related tweets to ensure uniform reviews and information collection. During the study, data abstraction, data entry, and coding rules were rereviewed with abstractors by the principal investigator after each 25 tweets. The principal investigator arbitrated all data collection and coding questions on an ongoing basis. On three separate dates (Friday, April 17; Wednesday, April 22; Saturday, April 25, 2020) between 4:00 and 8:00 p.m. Eastern Standard Time the searches (queries) were performed on Twitter using previously selected terms. Each term was assigned to an individual study author/abstractor during this time period such that only one individual performed each search for only their assigned terms on each given day. To perform their search, five study authors logged into their new/native Twitter account and entered "COVID-19" into the search box within the Twitter search box plus either "cure," "treat or treatment," "prevent or prevention," "emergency or emergency room," or "supplements or vitamins" such that there was no overlap of searches. This time of day was selected to coincide with the peak time for retweets and click-throughs on Twitter across the United States (Sailer, 2019) . On each date, the top 25 retained tweets for each search with any health or medical-related information were copied into a spreadsheet. Twenty-five tweets were chosen as the search limit since it was estimated that each tweet and accompanying links/figures/pictures could be read within one minute and the average half-life of a tweet is 18-24 min with engagement and retweets tapering rapidly after this period (Wilson, 2016) . For each tweet, the following information was recorded: The tweet, any tweet hyperlinks, the author, author's country listed on their public profile, author's credentials, author's Twitter verification status, the number of followers, number of retweets, number of likes, and the tweet's rank order within that day's search. Duplicate tweets were excluded. An online tool termed a Botometer developed by the Observatory on Social Media and the Network Science Institute at Indiana University was used to generate a score that estimated whether or not authors of Tweets exhibited bot-like activity (https://botometer. osome.iu.edu/; Botometer, 2021). Bots are automated programs that generate messages, follow accounts, reply to or share hashtags via automation or machine learning. 
The Botometer uses machine learning to characterize tweet authors based on user data, temporal features/patterns, content, friends/retweets, networking/links, and sentiment-related features. According to the Pew Research Center, a Botometer score > 0.43 on a 0–1 scale (>2.15 on a 5-point scale) is the optimum cutoff for classifying a tweet as more likely to be written by a bot than by a human (Wojcik et al., 2018).

Two physician authors independently reviewed and categorized the information within each tweet as accurate/generally accepted or unproven/inaccurate. Before assessment, identifying information (author, affiliations, sponsors, advertisements, videos, nonessential pictures) was removed from each tweet to allow blind assessment. Accurate or generally accepted information was defined as that which agreed with National Institutes of Health guidelines, Infectious Diseases Society of America guidelines, the WHO, the Centers for Disease Control and Prevention, the American College of Emergency Physicians, the American Academy of Pediatrics, and current major textbooks in emergency medicine, infectious disease, internal medicine, and pediatrics. In addition, reviewers were allowed to search the National Library of Medicine (PubMed) for original articles and the Cochrane Database to analyze the accuracy of tweets. For tweets with more than one health-related statement, it was predetermined that the presence of any single inaccurate statement would lead to the tweet being categorized as inaccurate. Disagreements between the two reviewers were settled by a third physician author.

Prior studies found that internet and social media-based medical information was frequently incorrect, with 12%–40% of health-related tweets described as untrustworthy or inaccurate (Albalawi et al., 2019; Gage-Bouchard et al., 2018; Kedzior et al., 2019; Love et al., 2013; Shah et al., 2019). Assuming an inaccuracy rate within this range, it was estimated that a sample size of at least 369 tweets would be needed to derive an overall accuracy with 95% confidence that was within 5% of these values. All data were treated as nonparametric. Categorical data were compared between accurate and inaccurate tweets using chi-squared analysis or Fisher's exact test. Pairwise comparisons of continuous and ordinal data were made using the Mann–Whitney U test. p values were adjusted for multiple comparisons using Benjamini and Hochberg's method (McDonald, 2014). Tweets were ranked based on their order within a search. Spearman rank-order correlation was used to assess the correlation between tweet accuracy and rank order. Interrater agreement for initial tweet accuracy was calculated using Cohen's kappa. A kappa coefficient was considered almost perfect at 0.81–1, substantial or good at 0.61–0.80, moderate at 0.41–0.60, fair at 0.21–0.40, slight at 0.01–0.20, and less than chance at <0. Data were analyzed using MedCalc (MedCalc Statistical Software, v18.2.1; MedCalc Software bvba).

There were 375 tweets collected during the study period; 17 duplicates were deleted, leaving 358 evaluable tweets. Two hundred and sixty-seven tweets (74.6%, 95% CI: 69.8–78.8) were graded as accurate. Tweets with the search terms "COVID-19" plus "cure" were more likely to be inaccurate than other tweets (Table 1). Authoritative authors wrote 69 tweets (31 government, 25 physician, and 13 hospital tweets), and nonauthoritative authors wrote 289 tweets.
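The analyses described above were run in MedCalc. As a rough illustration only, the same kinds of comparisons can be sketched in Python with SciPy, statsmodels, and scikit-learn; the counts, follower numbers, and rater labels below are placeholders rather than study data, and the simple Wald confidence interval shown here may differ from the interval method MedCalc applies.

```python
# Illustrative re-creation of the statistical toolkit described above (chi-squared /
# Fisher's exact, Mann-Whitney U, Benjamini-Hochberg adjustment, Cohen's kappa, and a
# sample-size check). All numbers are placeholders, not the study's data.
import math
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact, mannwhitneyu
from statsmodels.stats.multitest import multipletests
from sklearn.metrics import cohen_kappa_score

def sample_size_for_proportion(p, margin=0.05, z=1.96):
    """Tweets needed to estimate an inaccuracy rate p within +/- margin at 95% confidence."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

print(sample_size_for_proportion(0.40))   # worst case in the 12%-40% range -> 369

# Categorical comparison, e.g., verified author (columns) by accuracy (rows); placeholder counts.
table = np.array([[130, 137],
                  [19, 72]])
chi2, p_chi2, _, _ = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)      # fallback when expected cell counts are small

# Continuous/ordinal comparison, e.g., follower counts (placeholder samples).
followers_accurate = [19491, 1200, 87000, 430]
followers_inaccurate = [7346, 90, 15000]
u_stat, p_mwu = mannwhitneyu(followers_accurate, followers_inaccurate, alternative="two-sided")

# Benjamini-Hochberg adjustment across the family of pairwise comparisons.
reject, p_adjusted, _, _ = multipletests([p_chi2, p_mwu], method="fdr_bh")

# Interrater agreement between the two physician reviewers (placeholder labels).
rater1 = ["accurate", "inaccurate", "accurate", "accurate", "inaccurate"]
rater2 = ["accurate", "inaccurate", "accurate", "inaccurate", "inaccurate"]
kappa = cohen_kappa_score(rater1, rater2)

# Simple Wald 95% CI for a difference in proportions (e.g., 67/69 vs. 200/289 accurate).
def diff_proportion_ci(x1, n1, x2, n2, z=1.96):
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    diff = p1 - p2
    return diff, (diff - z * se, diff + z * se)

print(diff_proportion_ci(67, 69, 200, 289))
```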
A total of 67 of 69 authoritative tweets were accurate compared with 200 of 289 nonauthoritative tweets (97.1% vs. 69.2%, 27.9% difference, 95% CI: 19.2–33.8). The inaccurate tweets written by authoritative authors comprised two physician tweets claiming COVID-19 was cured with vitamin C (Table 2). Of the subset of authoritative authors listed as governments or hospitals/hospital systems, all 41 (100%, 95% CI: 91–100) tweets were accurate. A total of 64 accurate and 25 inaccurate tweets were authored by self-reported news organizations (24% vs. 27.5%, −3.5% difference, 95% CI: −14.5 to 6.3). Overall, accurate tweets were significantly more likely to be written by authoritative sources (25.1% vs. 2.2%, 22.9% difference, 95% CI: 15.6–28.6) and by authors verified by Twitter (49.8% vs. 20.9%, 28.9% difference, 95% CI: 17.7–38.2). Authors of accurate tweets had significantly more followers than authors of inaccurate tweets (19,491 vs. 7346; 3446 difference, 95% CI: 234–14,054). The number of likes, tweets authored by news organizations, Botometer scores, retweets, tweet length, and Flesch–Kincaid grade level did not differ between accurate and inaccurate tweets (Table 3). North America was the most common author location, N = 171 (47.8%), with the United States comprising the source for 153 (42.7%) of all tweets (Table 3). The median overall rank order of retained tweets was 37 (95% CI: 32–46) with a range of 1–219. The rank order of tweets was not associated with tweet accuracy (Spearman's rho = −0.0164, 95% CI: −0.12 to 0.087). The interrater reliability for the physicians assessing tweet accuracy was substantial (kappa = 0.77, 95% CI: 0.69–0.84).

Table 2 (excerpt). Inaccurate claims and the number of tweets containing them:
• Ultraviolet light or sunlight will cure people with COVID-19 (these tweets do not refer to the use of UV light or sunlight to kill viruses on surfaces): 3
• Vaccines weaken the immune system and will worsen COVID-19: 2
• States are purposefully undercounting cases to hide real mortality: 2
• One each of the following (total subset, 13 tweets): cures for COVID-19 attributed to breast milk, camel urine, cannabis, diet, hand sanitizer, homeopathy, immune globulin (nonspecific, pooled), placental cells, montelukast, vitamin A, vitamin D, or whiskey; and death panels are the cause of COVID-19 mortality in the United States
Abbreviations: COVID-19, coronavirus disease 2019; UV, ultraviolet.
(a) Tweets that stated that supplements and vitamins (or a particular diet) cured COVID-19 were labeled as inaccurate. However, if tweets stated these products supported or potentially strengthened the immune system, they were not labeled as inaccurate. (b) The total adds up to more than 91 since multiple tweets listed more than one inaccurate product or statement. (c) These were only labeled as inaccurate if the tweet stated these products cured COVID-19. Tweets that stated hydroxychloroquine/zinc/azithromycin might have antiviral properties without stating they cured COVID-19 were not labeled as inaccurate. At the time of the study, definitive studies proving the ineffectiveness of hydroxychloroquine had not yet been published.

The most common inaccurate tweets in our study comprised recommendations for using unproven prescription medicines, hydroxychloroquine or chloroquine, to treat or prevent COVID-19 (e.g., "#Hydroxychloroquine with Zn supplement cures #COVID-19").
The United States Food and Drug Administration (FDA) approves medicines after their effects have been reviewed by the Center for Drug Evaluation and Research and their benefits are found to outweigh known and potential risks for an intended population (Food and Drug Administration, 2019a). Since neither hydroxychloroquine nor chloroquine is approved for use in COVID-19, tweeted recommendations to use these drugs for this disease would be classified as unapproved or off-label by the FDA. Off-label drug use is allowed by the FDA when there is no FDA-approved drug to treat a condition or there is an insufficient supply of FDA-approved drugs for a particular condition (Food and Drug Administration, 2019b). Importantly, recent FDA attempts to limit off-label drug use have been constrained by court rulings that support a pharmaceutical company's right to "free speech" when promoting such use "as long as their statements are not false or misleading" (Kim & Kapcynski, 2017). In a similar manner, the FDA does not have the authority to regulate the free speech (i.e., tweets) of individuals, especially if they have no commercial interest in a product (Kim & Kapcynski, 2017).

Unproven herbs, vitamins, and supplements were another common recommendation within inaccurate tweets (e.g., "there are medicinal plants in Madagascar such as Artemisia, which can cure COVID-19"). These products, collectively termed dietary supplements, are characterized by containing at least one identified dietary ingredient such as a vitamin, mineral, herb, botanical, amino acid, enzyme, or metabolite. Dietary supplements are not approved for use by the FDA and can be brought to market without having been proven safe or effective (Harris, 2000). Product labeling, although overseen in the United States by the FDA (under the 1994 Dietary Supplement Health and Education Act), is much less stringently regulated than drug labeling. While specific disease claims are prohibited, the FDA allows claims regarding the structure and function of these supplements, with a required label stating that these products are "not intended to diagnose, treat, prevent, or cure any disease" (Harris, 2000; Owens et al., 2014). A 2014 study of 1300 dietary supplement retail and nonretail websites found that 20%–38% of websites made disease-related claims and only 8% of the retail websites studied included the required FDA disclaimer regarding disease claims (Owens et al., 2014). This study cited a lack of FDA manpower and resources to adequately enforce labeling requirements for dietary supplements. Based on its limited ability to monitor and regulate both drugs and dietary supplements, it is likely the FDA would need more funding and more regulatory authority before it could meaningfully impact misinformation on social media platforms like Twitter.

The Federal Communications Commission (FCC) regulates interstate and international communications with authority over communications law, regulation, and technological innovation (Federal Communications Commission, 2021). While not directly tasked with verifying the accuracy of information on social media sites, the FCC has the legal authority to interpret important aspects of communications law regarding websites and social media companies.
Section 230 of the Communications Act of 1934 and an amendment (the Telecommunications Act of 1996) state that "no provider or user of an interactive computer service shall be held liable on account of… any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected" (Johnson, 2020). In addition to protection from criminal lawsuits, experts have interpreted Section 230 as conferring protection to social media companies from civil lawsuits due to harm caused by third-party content (i.e., harmful or inaccurate information; Barriott & Wilkens, 2020). As such, social media companies are protected from liability when either illegal or inaccurate information is posted, as long as those companies make good faith efforts to remove this material. While there may be bipartisan Congressional support to amend or replace Section 230 such that companies would be required to remove or moderate inaccurate or harmful content, it is undetermined whether the proposed changes would meaningfully improve the accuracy or validity of health-related content on social media (Reardon, 2020).

As in our study, others have found that unverified Twitter accounts contain more misinformation than verified accounts (authors authenticated by Twitter; Kouzy et al., 2020). The 25% rate of inaccurate information in our study is consistent with these prior studies of epidemic-related tweets (Brennan et al., 2020; Kouzy et al., 2020). These findings indicate that misinformation during disease outbreaks continues to be a common and important problem. While most tweets listed individual authors and contained no hyperlinks, 44% of inaccurate and 36.7% of accurate tweets were authored by or had links to self-reported news sites. Knowledge regarding the accuracy of news sites is especially important since up to 67% of Americans get their news from social media (Shao et al., 2018). Inaccurate tweets found in our study included those reported by an affiliate of a major news organization (e.g., NEWS4SanAntonio/NBC promoting melatonin for COVID-19), international news organizations (e.g., abs-cbn, "virgin coconut oil could be COVID-19 cure"), entertainment news, and news sites that appeared to mix advocacy with news (e.g., vaxxter.com, "China cures coronavirus with vitamin C").

Identifying whether or not a reported news story is real and verifying the legitimacy of a news source can be difficult. While there is no single technique for verifying whether a story is real (vs. fake), experts recommend reading reputable news sources, reading original studies or sources, looking for verification of stories on multiple sites, ensuring there are author attributions for stories, and using fact-check tools (i.e., www.snopes.com, www.factcheck.org, www.politifact.com, www.punditfact.com; Spector, 2020). An international not-for-profit organization, Health on the Net (HON), exists for verifying the accuracy and legitimacy of health and medical websites (Boyer et al., 2016). However, HON requires that individual websites request a review.
The review provided by HON only analyzes eight principles of information authority (e.g., confidentiality, authorship, source attribution, supporting information, transparency, advertising, objectiveness, and financial disclosure) and not the accuracy of the information within websites (Boyer et al., 2016). It is unlikely that a similar review process could be performed on Twitter, which relies on rapid dissemination of information during news cycles with immediate commentary and feedback by users of the platform. Due to the speed of information dissemination and the high volumes of information on social media, automated algorithms (i.e., artificial intelligence) are one method for identifying real versus fake news currently being studied (Lara-Navarra et al., 2020).

Social media bots, automated programs used to engage social media, have been accused of spreading misleading information on Twitter. Botometer scores were high for both accurate and inaccurate authors of tweets in our study, indicating that a substantial amount of COVID-19-related information on Twitter might be spread by these automated programs. Tweet accuracy did not appear to be associated with bot scores. Not all bots are malicious, and many bots perform legitimate functions. Bots that appropriately follow Twitter rules include those that update real-time news and weather. Human Twitter authors can also automate a portion of their account by forwarding Facebook posts, sending Really Simple Syndication (RSS) feeds, or tweeting/retweeting in the absence of the human user. These bot functions would tend not to disseminate misinformation. Importantly, Botometer scores fluctuate over time, leading some experts to question their reliability, validity, and reproducibility (Rauchfleisch & Kaiser, 2020). A recent study using the Botometer also misclassified humans as bots in 41%–76% of political tweets (Rauchfleisch & Kaiser, 2020). Consequently, Botometer scores should be interpreted with caution.

In addition to relaying inaccurate medical information, Twitter has been identified as a source of unfounded conspiracy theories during disease outbreaks. Government conspiracies to hide information, control populations, or hide treatments were rumored during the Ebola, Zika, influenza-H1N1, and MERS outbreaks (Sell et al., 2020; Smallman, 2015; Vijaykumar et al., 2018; Yang & Lee, 2020). With COVID-19, an unfounded rumor linking the SARS-CoV-2 virus to 5G (fifth-generation mobile phone) networks began in late January 2020 and spread rapidly, leading to widespread misinformation and the burning of 5G towers in the United Kingdom (Ahmed et al., 2020). Other COVID-19 conspiracies include the deliberate release of the virus as a bioweapon and pharmaceutical companies blocking known treatments to boost their own drugs and vaccines (Neil & Campbell, 2020). Experts have noted that misinformation and conspiracy theories were amplified and retweeted more than accurate tweets during the Zika outbreak (Brennan et al., 2020; Cinelli et al., 2020; Vosoughi et al., 2018). Tweets describing conspiracies in our study included five linking pharmaceutical companies with attempts to suppress available cures, two stating governments were purposefully hiding cases, and one linking death panels to COVID-19 mortality in the United States.
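As a concrete illustration of the caution urged above regarding fluctuating Botometer scores, the sketch below summarizes repeated raw scores (0–1 scale) for a single hypothetical account before applying the Pew Research Center cutoff of 0.43 (Wojcik et al., 2018). The scores are made up; in practice they would come from the Botometer web tool or API, which was not part of our analysis pipeline.

```python
# Why repeated Botometer measurements matter: the same account can land on either side
# of the Pew cutoff on different days, illustrating the reproducibility concern raised
# by Rauchfleisch & Kaiser (2020). All scores below are hypothetical.
from statistics import mean, stdev

PEW_CUTOFF = 0.43  # raw-score cutoff; equivalent to 2.15 on the 0-5 display scale

def summarize_bot_scores(scores, cutoff=PEW_CUTOFF):
    """Report the average score, its spread, and how often the cutoff is crossed."""
    avg, spread = mean(scores), stdev(scores)
    flagged = sum(s > cutoff for s in scores)
    return {
        "mean_score": round(avg, 3),
        "stdev": round(spread, 3),
        "times_flagged_as_bot": f"{flagged}/{len(scores)}",
        "classified_as_bot_on_average": avg > cutoff,
    }

# Hypothetical repeated measurements for one account over several weeks.
samples = [0.31, 0.47, 0.52, 0.38, 0.45]
print(summarize_bot_scores(samples))
```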
Multiple organizations (i.e., the WHO, the United Nations, the Centers for Disease Control and Prevention, and the International Fact-Checking Network [IFCN]) and experts have provided recommendations for combating misinformation on social media (Centers for Disease Control and Prevention, 2020; International Fact-Checking Network, 2020; United Nations, 2020; World Health Organization, 2020b). Proposals include a sustained, coordinated effort by independent fact-checkers like the IFCN, an independent (nonbiased) news media, tracking of misinformation plus dissemination of accurate, easy-to-read information by public health and government authorities/agencies, and censorship by social media companies (Brennan et al., 2020; Garrett, 2020; Limaye et al., 2020; LLewyllen, 2020; Sell et al., 2020; Yang & Lee, 2020). Our study was conducted after Twitter implemented a policy stating it would delete tweets that run the risk of causing harm by spreading misinformation about COVID-19, yet one-quarter of the health-related tweets we sampled were still inaccurate. Thus, self-censorship alone may be an ineffective screen for information accuracy within Twitter (Gadde & Derella, 2020; Hern, 2020).

Our study evaluated the content of tweets, not whether individuals acted on this information or were harmed by misinformation. Certain inaccurate tweets (e.g., ingesting bleach can cure disease) are potentially more harmful than other inaccurate tweets (e.g., taking vitamin C can cure disease). Despite this difference, we treated all inaccurate statements equally within our study. Future studies might concentrate on the potential of inaccurate tweets to cause harm when evaluating the accuracy of health-related information on social media.

We only evaluated the first 25 health-related tweets for each query/search. Since tweet searches partially use timelines to identify relevant tweets, older tweets may not have been identified during searches. In addition to the timeliness of tweets, other factors used by Twitter to rank results are proprietary and unknown. Tweets are also dependent upon the news cycle, and evaluation of tweets on different dates or during different news cycles might yield different results. Since our goal was to analyze the accuracy of information on Twitter, we did not evaluate the accuracy or legitimacy of website hyperlinks within tweets. Within accurate and inaccurate tweets, there were links to self-reported news websites, online magazines, blogs, YouTube videos, published/unpublished studies, and health-related websites (integrative medicine, homeopathy). For websites describing themselves as news-related, some appeared to report legitimate news (e.g., a television news website wherein a nurse reported he was cured of COVID-19 by taking hydroxychloroquine, vitamin C, and ritonavir, an antiretroviral HIV medicine) while others appeared to have a more commercial news slant (e.g., homeopathyplus.com, which described homeopathic COVID-19 remedies).

While the authoritative tweet categories of hospitals and government agencies are easily identifiable, listings of author qualifications or training are often not apparent when analyzing biographies on Twitter. Thus, physician authors who did not detail their qualifications might have been erroneously labeled as nonauthoritative. It is unknown what effect miscategorization in this manner would have on the accuracy of tweets with authoritative authors. Each author's country was determined by the country listed within their public Twitter profile.
This setting is based upon the country selected by a user, can be changed by users, and cannot be independently verified (Twitter Help Center, 2021). Separately, Twitter uses Internet Protocol (IP) addresses plus global positioning system (GPS) information about wireless networks and cell towers to identify countries associated with users for internal purposes. This internal "country setting is non-public information," is used by Twitter to customize content and advertisements, and is not available for study (Twitter Help Center, 2021). Countries with the highest use of Twitter include the United States and many Western European nations (Chen et al., 2020; Singh et al., 2020). Our study only analyzed English-language tweets. Thus, our findings might not be applicable to countries outside these regions and to non-English-speaking countries. Moreover, we only analyzed tweets related to five terms chosen by the study authors. It is possible that alternate terms or combinations of terms would yield different results.

In the month before we initiated our study, Twitter instituted new measures to limit potentially abusive, manipulative, and inaccurate content (Gadde & Derella, 2020). Part of this strategy involves machine learning and automation. Machine learning requires a large data set evaluated over time to create useful algorithms. Because of this, it is possible that Twitter will be better able to identify and remove this problematic content in the future.

We found over one-quarter of health-related COVID-19 tweets to be inaccurate. Authoritative authors of tweets, especially government entities and hospitals/hospital systems, were more likely to post accurate tweets. These findings suggest the public should be wary of COVID-19 health information posted on Twitter. Ideally, Section 230 of the amended Communications Act should be updated by Congress to hold social media companies responsible for harm from inaccurate health-related information on their sites if comprehensive attempts are not made to identify and remove this information. This action would incentivize those companies to ensure information is fact-checked and removed if potentially harmful or misleading. Increased funding for the FDA would allow enhanced review of supplement companies for removal of improper disease and medical claims within advertisements and on their websites. Legislation that requires supplements to be proven safe, with oversight and approval by the FDA, would potentially decrease adverse events related to supplements. While not directly addressing misinformation, such legislation would increase surveillance of these companies and potentially improve their adherence to regulations disallowing health and medical claims for their products. Other measures to improve the accuracy of health-related information on social media include enhanced public/private industry oversight, independent fact-checking, and effective automated artificial intelligence algorithms. It is doubtful that any single approach will resolve the "infodemic" of COVID-19 misinformation on Twitter, and a multifaceted approach encompassing each of these potential solutions is needed to improve the accuracy of health-related information on social media.
References
COVID-19 and the 5G conspiracy theory: Social network analysis of Twitter data
Trustworthy health-related tweets on social media in Saudi Arabia: Tweet metadata analysis
The potential of social media and internet-based data in preventing and fighting infectious diseases: From Internet to Twitter
Justice Thomas lays blueprint for Supreme Court to limit Section 230 in a future case
Health on the Net's 20 years of transparent and reliable health information
Types, sources, and claims of COVID-19 misinformation. Fact sheet. Reuters Institute
Tracking social media discourse about the COVID-19 pandemic: Development of a public coronavirus Twitter data set
Pandemics in the age of Twitter: Content analysis of tweets during the 2009 H1N1 outbreak
The COVID-19 social media infodemic
UN tackles 'infodemic' of misinformation and cybercrime in COVID-19 crisis
Enforcement Activities. Unapproved Drugs. Department of Health and Human Services
Surge of virus misinformation stumps Facebook and Twitter. The New York Times
Social media's initial reaction to information and misinformation on Ebola
Chinese social media reaction to the MERS-CoV and avian influenza A (H7N9) outbreaks. Infectious Diseases of Poverty
An update on our continuing strategy during COVID-19
Is cancer information exchanged on social media scientifically accurate
COVID-19: The medium is the message
Regulatory and ethical issues with dietary supplements
Twitter to remove harmful fake news about coronavirus. The Guardian
The FCC's authority to interpret Section 230 of the Communications Act
The pandemic is driving media consumption way up
It takes a community to conceive: An analysis of the scope, nature, and accuracy of online sources of health information for couples trying to conceive
Promotion of drugs for off-label uses. The US Food and Drug Administration at a crossroads
Coronavirus goes viral: Quantifying the COVID-19 misinformation epidemic on Twitter
Information management in healthcare and environment: Towards an automatic system for fake news detection
Building trust while influencing online COVID-19 content in the social media world
COVID-19: How to be careful with trust and expertise on social media
Twitter as a source of vaccination information: Content drivers and what they are saying
Multiple comparisons. Handbook of Biological Statistics
Fake science: XMRV, COVID-19, and the toxic legacy of Dr
Yellow fever outbreaks and Twitter: Rumors and misinformation
Online sources of herbal product information
Twitter and misinformation: A dangerous combination?
YouTube as a source of information on the H1N1 influenza epidemic
Content and source analysis of popular tweets following a recent case of diphtheria in Spain
The false positive problem of automatic bot detection in social science research
Democrats and Republicans agree that Section 230 is flawed
Classifying and summarizing information from microblogs during epidemics
The best times to post on social media in 2019 according to 25 studies
Misinformation and the US Ebola communication crisis: Analyzing the veracity and content of social media messages related to a fear-inducing infectious disease outbreak
Automatically appraising the credibility of vaccine-related web pages shared on social media: A Twitter surveillance study
Anatomy of an online misinformation network
Social bots' sentiment engagement in health emergencies: A topic-based analysis of the COVID-19 pandemic discussions on Twitter
A first look at COVID-19 information and misinformation sharing on Twitter
Whom do you trust? Doubt and conspiracy theories in the 2009 influenza pandemic
Fake news
How fact-checkers are fighting coronavirus misinformation worldwide. Reuters Institute
How to change your country settings
UN launches new initiative to fight COVID-19 misinformation through 'digital responders'
Virtual Zika transmission after the first U.S. case: Who said what and how it spread on Twitter
The spread of true and false news online
The lifespan of a social media post. MM Consulting
Bots in the Twittersphere
WHO Director-general's opening remarks at the media briefing on COVID-19
Countering misinformation about COVID-19. A joint campaign with the government of the United Kingdom
Framing the MERS information crisis: An analysis on online news media's rumour coverage
A novel coronavirus from patients with pneumonia in China

The authors declare that there are no conflicts of interest as per the ICMJE guidelines. All authors (Sarah B. Swetland, Ava N. Rothrock, Halle Andris, Bennett Davis, Linh Nguyen, Phil Davis, Steven G. Rothrock) were involved in the conception of the study, design of the study, data collection and abstraction, and drafting and revision of the manuscript. All authors analyzed the data, and Sarah B. Swetland, Ava N. Rothrock, and Steven G. Rothrock performed the statistical analyses. All authors take responsibility for the paper as a whole. The data, models, and methodology used in this research are not proprietary.