key: cord-0639842-qotfqvvz authors: Rao, Ashwin; Morstatter, Fred; Lerman, Kristina title: Partisan Asymmetries in Exposure to Misinformation date: 2022-03-02 journal: nan DOI: nan sha: aaa898d0c769d1594bede9f45011aef076bc1bd3 doc_id: 639842 cord_uid: qotfqvvz
Health misinformation is believed to have contributed to vaccine hesitancy during the Covid-19 pandemic, highlighting concerns about the role of social media in polarization and social stability. While previous research has identified a link between political partisanship and misinformation sharing online, the interaction between partisanship and how much misinformation people see within their social networks has not been well studied. As a result, we do not know whether partisanship drives exposure to misinformation or people selectively share misinformation despite being exposed to factual content. We study Twitter discussions about the Covid-19 pandemic, classifying users ideologically along political and factual dimensions. We find partisan asymmetries in both sharing behaviors and exposure, with conservatives more likely to see and share misinformation and moderate liberals seeing the most factual content. We identify multi-dimensional echo chambers that expose users to ideologically congruent content; however, the interaction between political and factual dimensions creates conditions for the highly polarized users -- hardline conservatives and liberals -- to amplify misinformation. Despite this, misinformation receives less attention than factual content and political moderates, who represent the bulk of users in our sample, help filter out misinformation, reducing the amount of low factuality content in the information ecosystem. Identifying the extent of polarization and how political ideology can exacerbate misinformation can potentially help public health experts and policy makers improve their messaging to promote consensus.
The growing popularity of social media as a source of news for a large portion of the population (Pew 2018) has raised concerns about the quality and validity of information being shared online and its effect on polarization, which refers to division of the public into two groups with sharply contrasting opinions or beliefs (Levy 2021; Van Bavel et al. 2021). These concerns have only grown in urgency with the emerging evidence that social media enables the spread of misinformation and politically polarized content about the Covid-19 pandemic, its toll, mitigation measures, and the efficacy of interventions, therapies and vaccines (Jiang et al. 2021; Rao et al. 2021). According to a Pew Report (Pew 2020), political ideology explains a partisan divide in attitudes about Covid-19 and compliance with health guidelines (Gollwitzer et al. 2020). Since effective response to the pandemic requires collective action, e.g., mass vaccination to achieve herd immunity, social media can exacerbate public health impacts of the pandemic by deepening societal divisions and amplifying health misinformation (Roozenbeek et al. 2020; Chen et al. 2021; Memon and Carley 2020), thereby hindering consensus. Multiple studies have examined how misinformation and "fake news" are shared online (e.g., see Grinberg et al.
(2019); Vosoughi, Roy, and Aral (2018)), focusing on methods to automatically recognize misinformation (Pennycook and Rand 2019a) and characterize people who spread it (Pennycook and Rand 2019b). Online polarization has been a research topic for even longer. Studies have shown that people share information that aligns with their attitudes and political beliefs (Levy 2021). These attitudes can be measured by analyzing online activity traces, based on the text of the messages people share (Conover et al. 2012) or the links to news sources embedded in their posts. People also seek out information sources that are consistent with their beliefs (Knobloch-Westerwick and Meng 2009), following and retweeting social media partisans who have similar ideology to their own (Barberá et al. 2015; Badawy, Ferrara, and Lerman 2018). These activities facilitate the development of "echo chambers," which surround people with like-minded peers who confirm their pre-existing attitudes and beliefs. Studies have explored political echo chambers and measured their effects (Cinelli et al. 2021; Bakshy, Messing, and Adamic 2015; Nikolov, Flammini, and Menczer 2021; Jiang et al. 2021), but their role in exposure to misinformation, especially in the context of the pandemic, has not yet been explored. Previous works have identified a link between partisanship and misinformation: politically conservative social media users are more likely to spread misinformation (Grinberg et al. 2019; Nikolov, Flammini, and Menczer 2021) and anti-science content. This link partly explains the opposition by conservatives to Covid-19 mitigation measures (Gollwitzer et al. 2020). However, the interaction between partisanship and exposure to misinformation has not been as well studied. As a result, we do not know whether partisanship drives selective exposure to misinformation or people selectively share misinformation despite being exposed to diverse and high-quality information sources. In the context of a global pandemic and the public's increasing reliance on online information, it is important to understand the factors shaping the public's exposure to polarized information and misinformation. We organize our research around these questions:
RQ1: Is the polarization of the information people share (along partisan and factual dimensions) aligned with the polarization of the information they see within their social networks? Are there multidimensional echo chambers?
RQ2: How well are the dimensions of polarization correlated, i.e., how much is partisanship correlated with content factuality?
RQ3: Are exposures to misinformation asymmetrical along partisan lines?
RQ4: Are there partisan asymmetries in the selective amplification or filtering of misinformation?
RQ5: Do people pay more attention to factual content or misinformation?
Our study addresses these questions by examining online discussions about the Covid-19 pandemic. First, we classify social media users ideologically along political and factual dimensions, assigning them a multi-dimensional polarization score. Next, we quantify the multi-dimensional polarization of the information users see in their friends' posts. As a proxy for friends' posts, we take the messages posted by the accounts the user retweets. We identify echo chambers that expose users to ideologically congruent information along political and factual dimensions.
While social media users tend to surround themselves with peers who share similar views on politics and factuality, there are partisan asymmetries in exposure to factual content. Additionally, the substantial interaction between the two dimensions, also observed in earlier studies (Grinberg et al. 2019), creates conditions for politically polarized users to amplify misinformation. These polarized users, who represent hardline partisans on both sides of the political spectrum, selectively share misinformation. However, such users receive less attention than those sharing factual content, and political moderates, who represent the bulk of users in our study, help filter out misinformation, reducing the amount of low-factuality content in the information ecosystem. Identifying the extent of polarization and how political ideology can exacerbate misinformation can potentially help public health experts and policy makers improve their messaging to facilitate consensus and compliance with public health measures.
Researchers define polarization as the divergence of opinions along the political dimension and study its impact on other opinions, such as those on scientific topics (Bessi et al. 2016). However, opinions on controversial topics are often correlated (Baumann et al. 2020). For example, those who oppose lockdowns as a way to suppress the spread of the disease also resist vaccinations. To capture some of the complexity of polarization, we project opinions into a multi-dimensional space, with different axes corresponding to different semantic dimensions. Once we identify the dimensions of polarization and measure them, we can study the dynamics of polarized opinions, their interactions, and regional differences. Label propagation leverages the structure of connections in a network to infer political views from the ideology of the accounts users retweet (cf. Badawy, Ferrara, and Lerman 2018). The intuition behind the approach is that people prefer to connect to (e.g., retweet content posted by) others who share their opinions and ideology (boyd, Golder, and Lotan 2010; Metaxas et al. 2015). Others have looked at echo chambers in the context of online discourse. Cinelli et al. (2021) studied the effect of echo chambers across different platforms, finding that Facebook is more segregated than other platforms. Interestingly, they find that platforms that allow users to adjust their feed (e.g., Reddit) afford more balanced media consumption than those that don't (e.g., Twitter and Facebook). Our work builds on this by providing a multidimensional understanding of users' exposure to content. In another work, Nikolov, Flammini, and Menczer (2021) study both the network and the content and find that misinformation sharing is strongly correlated with right-leaning users; other studies echo this finding. Our work departs from this in finding that left-leaning users also produce content with low factuality. Prior work also finds that the number of moderate, factual users greatly surpasses the number of users spreading misinformation. We see this in our data as well, with more factual users, as demonstrated in Fig. 1(b).
In this study, we use the publicly available dataset (Chen, Lerman, and Ferrara 2020) comprising 260.6M tweets related to Covid-19 posted between January 21 and July 31, 2020. These tweets contain at least one of a predetermined set of Covid-19-related keywords (e.g., coronavirus, pandemic, Wuhan, etc.).
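For illustration, the keyword-filtering step could look like the minimal sketch below; the keyword set shown is a small placeholder subset, not the full list used to build the dataset.

```python
# Minimal sketch of keyword-based tweet filtering (illustrative only).
# KEYWORDS is a small placeholder subset of the Covid-19-related keyword list.
KEYWORDS = {"coronavirus", "pandemic", "wuhan"}

def mentions_covid(tweet_text):
    """Return True if the tweet contains at least one Covid-19-related keyword."""
    text = tweet_text.lower()
    return any(keyword in text for keyword in KEYWORDS)

tweets = [
    {"id": 1, "text": "Stay home and help slow the pandemic."},
    {"id": 2, "text": "Great weather for a hike today."},
]
covid_tweets = [t for t in tweets if mentions_covid(t["text"])]
print([t["id"] for t in covid_tweets])  # -> [1]
```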
However, less than 1% of the tweets have geographic coordinates associated with them. We therefore rely on the geolocation method employed in (Jiang et al. 2020) to determine whether a user is located within the US. The method first extracts mentions of a city or state that users often include in their profiles and then applies a fuzzy matching algorithm to map them to their respective US states. A manual review of this approach found it to be effective in identifying a user's home state. This leaves us with 48M tweets generated by 2.4M geolocated users in the United States.
In this study, we characterize attitudes along two dimensions: political and factual. The political dimension captures a user's political ideology or partisanship, ranging from hardline liberal to hardline conservative, while the factual dimension quantifies a user's predilection for factual content or misinformation. With Media Bias-Fact Check providing an exhaustive list of domains and their ideological polarities, previous studies have leveraged users' domain-sharing behaviors on Twitter (Cinelli et al. 2021; Le et al. 2019; Rao et al. 2021) to quantify ideological alignment. Along the political scale, Media Bias-Fact Check lists over 2K pay-level domains (PLDs) under five mutually exclusive categories: Left, Center-Left, Least-Biased/Center, Center-Right, and Right. In addition, it provides a measure of reporting quality for over 3.5K pay-level domains, each belonging to one of six content factuality classes: Very Low, Low, Mixed, Mostly Factual, High, and Very High. Sources generating pro-science content are categorized as High or Very High, while sources propagating misinformation, questionable content, or anti-science content are categorized as Low or Very Low on the factuality scale. On the other hand, highly partisan news sources such as foxnews.com, cnn.com, and huffpost.com generally have a chequered quality of reporting and are listed as Mixed. Table 1 lists the categories of information sources and their ideological polarities. We measure ideological polarization by looking at the political and factual scores of the domains people share in their posts and see their friends share.
Individual Polarization (Information Sharing)
We extract tweets containing URLs and use tldextract to extract pay-level domains from them. We filter out tweets and retweets containing pay-level domains that are not categorized under either of the two ideological polarities of interest (Table 1). Similar to previous works (Cinelli et al. 2021; Rao et al. 2021), we infer a user's partisanship by averaging the political scores of the PLDs the user shared. Likewise, we infer an individual's factual score by averaging the factual scores of the PLDs the user shared. This makes our measure of factual sharing similar to the propensity, or vulnerability, to misinformation used in previous works (Grinberg et al. 2019; Nikolov, Flammini, and Menczer 2021). It is important to note that individual scores quantify the information that users generate within the online information ecosystem; therefore, users with low factual scores produce more misinformation. We calculate user u's sharing behaviors along the political, p_l(u), and factual, f_l(u), dimensions using Eqs. 1 and 2, respectively. We denote the set of pay-level domains shared by user u as D(u); it includes only the domains appearing in u's original tweets. The functions Π(d) and Φ(d) return the political and factual polarity of each domain d.
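Equations 1 and 2 themselves are not reproduced here; a reconstruction consistent with the averaging described above is:

p_l(u) = (1 / |D(u)|) Σ_{d ∈ D(u)} Π(d)    (1)
f_l(u) = (1 / |D(u)|) Σ_{d ∈ D(u)} Φ(d)    (2)

In code, this averaging could look like the minimal sketch below; the domain names and polarity values are placeholders standing in for the Media Bias-Fact Check scores summarized in Table 1, and domains without a score are simply skipped.

```python
from statistics import mean

# Hypothetical polarity lookups on a 0-1 scale; names and values are placeholders
# for the Media Bias-Fact Check categories summarized in Table 1.
POLITICAL_SCORE = {"example-left.com": 0.0, "example-center.com": 0.5, "example-right.com": 1.0}
FACTUAL_SCORE = {"example-left.com": 0.8, "example-center.com": 1.0, "example-right.com": 0.4}

def polarity(domains, score):
    """Average polarity of the scored domains a user shared (Eqs. 1 and 2)."""
    scored = [score[d] for d in domains if d in score]
    return mean(scored) if scored else None

# PLDs appearing in one user's original tweets (placeholder data).
user_domains = ["example-left.com", "example-left.com", "example-right.com"]
p_l = polarity(user_domains, POLITICAL_SCORE)  # political leaning p_l(u)
f_l = polarity(user_domains, FACTUAL_SCORE)    # factual leaning f_l(u)
```

Applying the same function to the multiset of PLDs shared by a user's friends yields the exposure scores p_e(u) and f_e(u) described under Neighborhood Polarization below.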
Neighborhood Polarization (Exposure)
Understanding the polarization of information people see online is challenging for several reasons. On Twitter, as on other social media platforms, users follow the accounts of other users to see the content they post. However, the follower graph is usually not available, nor is it feasible to reconstruct it from the available APIs. Even when the follower graph is known, the platform's personalization algorithms may select only a subset of the messages posted by friends, i.e., the accounts the user follows, for the user's timeline (Bakshy, Messing, and Adamic 2015). This can dramatically change not just the amount but also the nature of the information people see (Bartley et al. 2021). As a proxy for the follower graph, we use the retweet graph, creating links to the accounts the user retweets. We consider the retweeted accounts as friends whose activity the user sees. We extract tweets and retweets generated by friends, extract PLDs, and filter out those that do not have a political or factual polarity. In contrast to previous works (Cinelli et al. 2021; Garimella et al. 2018; Nikolov, Flammini, and Menczer 2021), which measure the polarization of a user's neighborhood by averaging over friends' individual political leanings, we aggregate over all messages posted by friends and calculate the political and factual scores of the aggregated tweets. This approach factors in the large variation in friend activity: an active friend who posts many messages will have a bigger effect on the user's information exposure than a less active friend. Information exposure scores along the political (p_e(u)) and factual (f_e(u)) dimensions are calculated using Equations 1 and 2, but now the set of pay-level domains D(u) corresponds to all domains user u sees, which we construct by aggregating over all PLDs shared by u's friends. After filtering out users who share or see two or fewer PLDs with political and factual polarity, we are left with a little over 350K users. Fig. 2 shows the distribution of the number of pay-level domains users share in their posts, as well as the distribution of the number of PLDs users see. The difference between the two distributions suggests that some domains are seen much more than they are shared, likely because they are shared by influential accounts with many followers.
In this paper we study the relationship between the polarization of information people see on social media and the polarization of information they themselves share. We now explore the relationship between the individual polarization of users discussing Covid-19 online and the polarization of the information to which they are exposed. First, we examine this relationship separately along each dimension. Fig. 3 shows the joint distribution of individual political (Fig. 3(a)) and factual (Fig. 3(b)) leanings and the political (resp. factual) scores of the information users are exposed to by friends in their retweet neighborhood. The high density along the diagonal confirms the existence of echo chambers: many users are linked to friends who expose them to ideologically congruent information. The correlations between individual leanings and information exposure scores along the political and factual dimensions are 0.61 (p < 0.001) and 0.5 (p < 0.001), respectively.
There are no partisan asymmetries in the political echo chambers (Fig. 3(a)), as both liberal and conservative users are exposed to a similar variety of political content. There is some asymmetry in the factual information echo chambers (Fig. 3(b)), since there is a much lower density of users in the misinformation bubble. Unlike in previous works, e.g., (Cinelli et al. 2021), the echo chambers we observe are more diffuse, with users linked to friends with more variable ideologies. This is because previous works calculate the average polarization of friends, which gives equal weight to friends who share a lot or a little information, while we aggregate the information shared by all friends when measuring polarization.
Previous research has shown that the political and factual dimensions of the information people share online are correlated: conservatives share misinformation to a greater degree than liberals (Vosoughi, Roy, and Aral 2018; Grinberg et al. 2019; Nikolov, Flammini, and Menczer 2021), and they also tend to share more anti-science sources. Our results are consistent with these findings. Fig. 4 shows the distribution of user scores in the political-factual space. There is a statistically significant negative correlation (−0.198, p < 0.001) between user leanings along the two dimensions: users sharing more conservative domains are more likely to share misinformation. However, the large variance masks more nuanced positions. For example, the bright line in the upper-left quadrant reflects a phenomenon also observed by Nikolov, Flammini, and Menczer (2021): more extreme liberals have a greater propensity to share misinformation.
Fig. 5 contrasts popular topics (hashtags) discussed by people sharing factual information and misinformation. While users sharing factual information post messages on health topics, such as "pandemic", "wearamask", and "stayhome", people sharing misinformation are preoccupied with politics ("trump2020", "kag2020", "democrats", "maga") and conspiracies ("plandemic", "qanon", "wwg1wga"). Interestingly, these users also mention media to a much greater extent, using topics like "foxnews", "7news", "foxandfriends", "morningjoe", and "fakenews". This may suggest the greater role that media plays in agenda-setting for people vulnerable to misinformation. Also, unlike factual users, people spreading misinformation discuss unproven cures, like "hydroxychloroquine".
How does the interaction between partisanship and factuality affect what information users are exposed to within their echo chambers and, in turn, what information they share? Do people effectively filter out misinformation they see by selectively sharing more factual content? Or do they amplify misinformation by selectively sharing fewer factual domains than what they are exposed to? Fig. 6 visualizes user exposure to multi-dimensional information within the echo chambers. The top row shows user exposure to political and factual information as a function of user political (Fig. 6(a)) and factual (Fig. 6(b)) leanings. Note that the neighborhood exposure vs. leaning space is the same as that shown in Fig. 3(a) and (b), respectively. The color in each plot shows the median exposure score. There are several regions of interest in Fig. 6(a). Liberal users (p_l < 0.5) who are exposed to politically moderate content (p_e ≈ 0.5) see the most factual information (dark orange). Liberals (p_l < 0.5) who are exposed to liberal content (p_e < 0.5) generally see more factual (orange) information, although as their exposure becomes more partisan, the share of misinformation they see grows.
Those exposed to extreme left content (p_e ≈ 0) see more misinformation (green hue). As liberals become more exposed to conservative content (p_e → 1), they see more and more misinformation. The same is not true of conservatives: conservative users (p_l > 0.5) who are exposed to right-wing information (p_e > 0.5) tend to see more misinformation; however, as long as they are not too conservative, exposure to liberal information (p_e < 0.5) allows them to receive more factual information. Unlike for liberals, exposure to politically moderate content (p_e ≈ 0.5) does not promote factual information among conservatives. Trends within the misinformation echo chambers (Fig. 6(b)) tell a similar story. Users who generate more misinformation (f_l < 0.4) and are exposed to misinformation (f_e < 0.4) tend to see more conservative content (red), although those who are exposed to more factual content (f_e → 1) see more liberal information (blue dots). Among people sharing factual information (f_l > 0.6), those who are exposed to more factual information (f_e → 1) tend to see politically moderate content (white). The box outline is an artifact of domain polarity scores: Media Bias-Fact Check (MBFC) classifies many information sources as "mixed" (0.4), leading to an overabundance of points near that value.
The bottom row of Fig. 6 visualizes multi-dimensional polarization within the echo chambers. Again, the neighborhood exposure vs. leaning space is the same as in the row above, but the color in each plot shows user polarization, or leaning, along the alternate dimension. Fig. 6(c) shows that as partisanship becomes more extreme (p_l → 0 or p_l → 1), people are more likely to share misinformation (green). Interestingly, this trend does not strongly depend on the partisanship of their exposure (p_e). Overall, liberals (p_l < 0.5) share more factual information, although those who are more moderate (p_l ≈ 0.5) tend to share more misinformation (yellow/green) as they are exposed to more conservative content (p_e → 1). As shown in Fig. 6(d), misinformation-prone users (f_l < 0.4) tend to post more hardline conservative content (darker red) as they share more misinformation (f_l → 0), regardless of their exposure; however, those who are most exposed to misinformation (f_e < 0.2) tend to share more liberal views (blue dots). This is not true for factual users, who tend to share liberal content (blue) regardless of the factuality of their exposure (f_e).
The off-diagonal elements in the echo chamber plots in Fig. 3 suggest that a sizable fraction of social media users share information that is more polarized and less factual than what they are exposed to, and an equally large number share information that is more factual than what they are exposed to. In other words, some people filter out misinformation from the information ecosystem, while others amplify it. To better understand how the interaction between the political and factual dimensions affects how people react to exposure, we define two quantities. The first, excess factuality (amplified factuality), gives how much more factual content a user shares relative to their exposure. The second, excess partisanship (amplified partisanship), measures the relative partisanship of the content the user shares compared to their exposure. Note that we transformed the polarization scores so that, instead of partisanship, they measure the degree of political moderacy or extremism, regardless of its ideological label.
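A plausible formalization consistent with these definitions is given below; folding partisanship around the midpoint (0.5) is one natural choice for the extremism transform and is an assumption here, not the exact transform used in the analysis.

Δ_f(u) = f_l(u) − f_e(u)
Δ_p(u) = |p_l(u) − 0.5| − |p_e(u) − 0.5|

Under this reading, Δ_f(u) > 0 means the user shares more factual content than they are exposed to (filtering), Δ_f(u) < 0 means they amplify misinformation, and Δ_p(u) > 0 means the content they share is more politically extreme than their exposure.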
Fig. 7 shows the joint distribution of excess partisanship ∆_p and excess factuality ∆_f; the color shows partisanship. The negative correlation (Pearson's r = −0.38, p < 0.001) between the two dimensions suggests that not only do politically hardline social media users (regardless of whether they are liberal or conservative) have a higher propensity for misinformation, but users who amplify politically polarized content also amplify misinformation. Interestingly, both hardline conservatives and hardline liberals are active in amplifying partisanship (∆_p > 0) and misinformation (∆_f < 0), with liberals playing a more active role in amplifying misinformation (dampening factuality). On the other hand, users who are less partisan than their neighborhood (∆_p < 0) also share more factual information than what they are exposed to (∆_f > 0). By filtering out misinformation, such users play an important role in the information ecosystem. They also tend to be politically moderate.
(Figure caption: (a) In general, as users produce more conservative content while being exposed to more conservative content, they have a higher propensity for misinformation; low content factuality is also seen at the liberal extreme, where users produce far-left content. (b) Color indicates the median political leaning score in each bin; generally, as users generate more misinformation while being exposed to low-factuality content, they have a higher propensity for political conservatism.)
The presence of users generating low-factuality content raises questions about their activity and the attention they subsequently garner. Are users sharing misinformation more active than users sharing more factual content? Does aggressive content generation correlate with more attention? Answering these questions is important for assessing the impact of misinformation on social media. We define a user's overall activity as the sum of the tweets T and retweets RT they generate: A(u) = T(u) + RT(u). To quantify the attention user u receives in response to their activity, we define retweet power P(u) as the ratio of the number of times u is retweeted, R(u), to their overall activity: P(u) = R(u)/A(u).
Boxplots in Fig. 8 visualize the differences in tweet and retweet activity of factual (f_l ≥ 0.6) and misinformation (f_l ≤ 0.4) users. To assess the significance of the differences between the two groups, we use Student's t-test. This parametric test of the difference between the means of two groups requires the corresponding distributions to be normal. While our metrics (the number of tweets and retweets) have skewed distributions, taking a log transform makes them approximately normal.
R: H0: µ(log(R))_F ≤ µ(log(R))_M vs. Ha: µ(log(R))_F > µ(log(R))_M, t = 25.64***
P = R/A (F: 0.57, M: 0.17): H0: µ(log(P))_F ≤ µ(log(P))_M vs. Ha: µ(log(P))_F > µ(log(P))_M, t = 32.86***
Table 2: Results of hypothesis testing for the difference in means between the two groups of users along the factuality dimension for various metrics. Factual users (F) have high factuality scores (f_l ≥ 0.6), while misinformation users (M) have low scores (f_l ≤ 0.4). Metrics include the number of tweets (T) and retweets (RT) generated by the user, the overall activity (A), the number of times the user is retweeted (R), and the retweet power (P), the ratio of the number of times retweeted to activity. We performed t-tests to assess the statistical significance of the difference between the two distributions after log-transforming the variables. *** denotes a statistically significant difference between the means of the two distributions with p-value < 0.001.
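A minimal sketch of these activity and attention metrics, and of the log-transformed one-sided t-test, is shown below; the per-user counts are placeholder values, and the SciPy call (which requires SciPy ≥ 1.6 for the one-sided alternative) is one way to implement such a test rather than the exact configuration used in this analysis.

```python
import numpy as np
from scipy import stats

# Placeholder per-user counts: tweets (T), retweets made (RT), times retweeted (R).
factual_users = [{"T": 40, "RT": 60, "R": 80}, {"T": 25, "RT": 30, "R": 20}]
misinfo_users = [{"T": 90, "RT": 150, "R": 30}, {"T": 70, "RT": 110, "R": 10}]

def retweet_power(user):
    """P(u) = R(u) / A(u), where overall activity A(u) = T(u) + RT(u)."""
    activity = user["T"] + user["RT"]
    return user["R"] / activity

log_p_factual = np.log([retweet_power(u) for u in factual_users])
log_p_misinfo = np.log([retweet_power(u) for u in misinfo_users])

# One-sided Student's t-test of H0: mean log P of factual users <= that of misinformation users.
t_stat, p_value = stats.ttest_ind(log_p_factual, log_p_misinfo, alternative="greater")
print(t_stat, p_value)
```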
Table 2 details the null and alternative hypotheses used in our t-tests. From Fig. 8 and Table 2, we see that misinformative users tweet and retweet more often and have higher overall activity compared to factual users. Statistically significant t-statistics for T, RT, and A in Table 2 reinforce these findings. Despite their increased overall activity, users sharing misinformation are retweeted less often than factual users (µ(R_M) < µ(R_F), significant at p < 0.001) and have considerably lower retweet power (µ(P_M) < µ(P_F), p < 0.001; Fig. 8(d)). These findings hint at increased attention to factual users despite their lower overall activity.
(Fig. 8 caption: While misinformative users are more active both in terms of the number of tweets and retweets generated, they are retweeted less compared to factual users. Consequently, the ratio of retweets received to overall activity is significantly lower for misinformative users than for factual ones.)
In this work we study multi-dimensional polarization and echo chambers. We focus on two dimensions of polarization, political and factual, which are assessed by measuring the bias of the pay-level domains that users tweet. We find that there is strong polarization along both dimensions. To deepen the understanding of the mechanics behind echo chambers, we separate a user's interactions into their exposure (what their friends post) and their leaning (what they post). We find a strong correlation between what a user sees and what they post, confirming the presence of echo chambers. Next, we study the partisan asymmetries of these echo chambers. We find that conservatives are more likely to share misinformation than liberals. Nevertheless, extremely liberal partisanship increases a user's propensity to share misinformation. Moreover, we find that moderate liberals have the highest exposure to factual information. This does not hold for conservatives, for whom exposure to liberal information yields the most factual content. Furthermore, for conservatives, exposure to politically moderate content does not make them more factual. Lastly, we look at the relationship between partisan extremism and misinformation. We find that highly polarized users, who represent hardline partisans on both sides of the political spectrum, are most likely to amplify partisan content and misinformation. However, such users get less attention than the bulk of users in our study, who are political moderates that selectively share more factual content and thereby filter out misinformation.
There are several limitations to this study worth considering. First, we do not know the actual exposures and thus rely on the retweet network as a proxy. Second, there could be a factual/pro-science bias in the data due to the way it was collected. More generally, the keyword-based Twitter crawl used to produce this data could omit nuanced subtopics related to Covid-19 discussions. Lastly, our study focuses on users in the United States. This decision was made because of the United States' information environment and the dominance of English keywords used to collect the dataset in our study. This work identifies important differences in the information space of polarized and partisan users. Better understanding how information is received, and how it propagates, can help public health experts craft more effective messaging.
There are several important avenues for future work, such as designing effective interventions for misinformation, assessing the relationship between partisan asymmetries and the binding dimensions of moral thinking such as loyalty, authority, and purity, and studying the temporal dynamics of these echo chambers.
References
Analyzing the digital traces of political manipulation: The 2016 Russian interference Twitter campaign
Exposure to ideologically diverse news and opinion on Facebook
Tweeting from left to right: Is online political communication more than an echo chamber?
Auditing Algorithmic Bias on Twitter
Modeling echo chambers and polarization dynamics in social networks
Tweet, tweet, retweet: Conversational aspects of retweeting on Twitter
COVID-19 misinformation and the 2020 US presidential election
Tracking Social Media Discourse About the COVID-19 Pandemic: Development of a Public Coronavirus Twitter Data Set
Neutral bots reveal political bias on social media
The echo chamber effect on social media
Partisan asymmetries in online political activity
Political discourse on social media: Echo chambers, gatekeepers, and the price of bipartisanship
Partisan differences in physical distancing are linked to health outcomes during the COVID-19 pandemic
Political Polarization Drives Online Conversations About COVID-19 in the United States
Social Media Polarization and Echo Chambers in the Context of COVID-19: Case Study
Looking the other way: Selective exposure to attitude-consistent and counterattitudinal political information
Measuring political personalization of Google news search
Social media, news consumption, and polarization: Evidence from a field experiment
Characterizing COVID-19 misinformation communities using a novel Twitter dataset
What Do Retweets Indicate? Results from User Survey and Meta-Review of Research
Right and left, partisanship predicts (asymmetric) vulnerability to misinformation
Fighting misinformation on social media using crowdsourced judgments of news source quality
Fighting misinformation on social media using crowdsourced judgments of news source quality
Social media outpaces print newspapers in the U.S. as a news source
Political Partisanship and Antiscience Attitudes in Online Discussions About COVID-19: Twitter Content Analysis
Susceptibility to misinformation about COVID-19 around the world
How social media shapes polarization
The spread of true and false news online