key: cord-0593119-4cotglbh
authors: Unal, Mesut Erhan; Kovashka, Adriana; Chung, Wen-Ting; Lin, Yu-Ru
title: Visual Persuasion in COVID-19 Social Media Content: A Multi-Modal Characterization
date: 2021-12-05
journal: nan
DOI: nan
sha: dcbf41e310472681d3845aaf9b7404f51a3f735f
doc_id: 593119
cord_uid: 4cotglbh

Social media content routinely incorporates multi-modal design to convey information, shape meanings, and sway interpretations toward desirable implications, but the choices and outcomes of using both text and visual images have not been sufficiently studied. This work proposes a computational approach to analyzing the outcome of persuasive information in multi-modal content, focusing on two aspects, popularity and reliability, in COVID-19-related news articles shared on Twitter. The two aspects are intertwined in the spread of misinformation: for example, an unreliable article that aims to misinform has to attain some popularity. This work has several contributions. First, we propose a multi-modal (image and text) approach to effectively identify the popularity and reliability of information sources simultaneously. Second, we identify textual and visual elements that are predictive of information popularity and reliability. Third, by modeling cross-modal relations and similarity, we are able to uncover how unreliable articles construct multi-modal meaning in a distorted, biased fashion. Our work demonstrates how to use multi-modal analysis to understand influential content and has implications for social media literacy and engagement.

From campaigns to advertising, social media content routinely incorporates multi-modal design choices that combine text and images to effectively convey information, shape meanings, and sway interpretations toward desirable implications. Compared to textual and linguistic analyses, how different compositions of written words and visual elements are created and disseminated on social media has not been sufficiently studied. This work is situated in the context of prevalent online misinformation during the ongoing COVID-19 pandemic. Increased isolation and anxiety about the pandemic drastically changed our lives; in particular, the increased use of social media can result in the fast spread of false content, make users more susceptible to misinformation, and create unique challenges for detecting and debunking untruth (Su 2021). This study attempts to reveal how subtle multi-modal content elements are associated with the propagation of information from online news outlets that manipulate facts or shape misinterpretations.

In this work, we focus on two aspects of persuasive information: popularity and reliability. While inferring content reliability alone may seem enough to identify problematic content and prevent its spread, popularity is important yet often overlooked.

Figure 1: Our method performs article popularity and reliability classification using multi-modal cues. We highlight salient regions for the model's predictions using a gradient-based visualization technique (Selvaraju et al. 2017). In this example, our model associates the star in the Chinese flag, along with part of the title that has a negative tone, with the tweeted article being unreliable. On the other hand, the forehead of a WHO officer (B. Aylward) and a part of the tweet text have been associated with the article being popular.
Besides allowing us to investigate the content creation strategies used to persuade the audience and propagate misinformation, estimating content popularity can also help with the timely debunking and prevention of the spread of misinformation. For example, one can prioritize content estimated to become popular for manual fact-checking, when slow and costly expert evaluation is part of the process.

The popularity and reliability of news articles shared on social media have been studied before as separate topics. Efforts on predicting the popularity of news articles often rely on hand-crafted content features (Bandari, Asur, and Huberman 2012; Arapakis, Cambazoglu, and Lalmas 2014; Piotrkowicz et al. 2017) and early engagement statistics (Castillo et al. 2014; Wu and Shen 2015; Liao et al. 2019). Prior work focuses on textual content and does not investigate in what way accompanying visuals contribute to popularity, even though modern media is often multi-modal. However, work in media studies and communication theory suggests images play a critical role in conveying meaning and are a powerful rhetorical tool (Messaris 1997; Forceville 2002; O'Shaughnessy and O'Shaughnessy 2004). In contrast to text, images are eye-catching and concisely paint a rich context. For instance, images can imply associations between people and qualities (Joo et al. 2014; Thomas and Kovashka 2019), and use juxtaposition or contrast to suggest desirable properties or undesirable outcomes (Williamson 1978). Because images are powerful, they can both make content popular and also carry out an agenda and mislead. Since most news sources use special meta-tags to specify which image should be shown with a shared article on social media (e.g. Twitter), analyzing this target-specific imagery may help us better understand the relative contribution of visuals to COVID-19 (mis-)information on these platforms. However, to the best of our knowledge, no prior work examines the popularity of COVID-19-related imagery.

Prior work on predicting reliability, on the other hand, focuses on detecting fake news using article content (Potthast et al. 2018; Horne and Adali 2017) and social context features (Ruchansky, Seo, and Liu 2017; Wang, Bansal, and Frahm 2018; Meghawat et al. 2018; Wu and Liu 2018; Wang et al. 2018a; Shu et al. 2019; Monti et al. 2019). Nevertheless, detection methods that employ social context rely heavily on meta-data beyond the content itself. For example, network-based models (e.g. (Wu and Liu 2018; Monti et al. 2019)) utilize social network graphs, which usually require extensive data collection, pre-processing, and computation. Models that make use of user-based features (e.g. (Shu et al. 2019)) do not generalize well to spreaders who have little to no previous social interaction. Finally, efforts that utilize multi-modal content (image and text) suffer from a lack of interpretability and fail to explain the link between reliability and high-level concepts in the input.

Using data collected from social media pertaining to the COVID-19 crisis, we attempt to characterize the elements of persuasion. In this work, "persuasion" refers to the communication tactics, manifested as multi-modal (textual or visual) elements, which articles use to reach their audiences and convey a particular message. We use popularity as a proxy measure of persuasiveness, while reliability relates to the agenda, i.e. the purpose of the persuasion (an agenda to convey accurate or misleading information).
We examine both the popularity and reliability of COVID-related content, where "popularity" is captured by how frequently an article is shared on social media, and "reliability" refers to the credibility of the online news outlet, as identified in prior work (Grinberg et al. 2019). We seek to answer the following questions:
• RQ1: To what extent do textual and visual signals in a tweet predict the popularity and reliability of news articles shared on social media?
• RQ2: What textual and visual elements are predictive of the popularity and reliability of shared news? How can we identify the predictive signals?
• RQ3: How does the combination of textual and visual elements in unreliable and reliable sources differ?

To address these questions, we first develop a multi-modal approach using visual and textual cues from news-sharing tweets. We learn a shared feature space optimized jointly for both the popularity and reliability classification tasks, and use this space to visualize important parts of the input for the model's predictions, as well as to show how these important parts change across the two tasks and their classes. We finally formulate a cross-modal retrieval task to discover whether reliable and unreliable sources combine visual and textual elements differently to construct multi-modal meaning.

Our work is the first empirical study that analyzes the popularity and reliability aspects of multi-modal persuasive COVID-19-related content using a multi-task approach. Our approach achieves robust improvements over other multi-modal baselines. We find that multi-modal data better enables detection of misleading or popular content, but the relative importance of visual and textual features varies: for instance, visual features are more important for reliability classification. One important finding is that unreliable content constructs multi-modal meaning in a biased and distorted fashion, as the results show that a multi-modal representation model trained on unreliable articles does not translate well to reliable ones. Finally, articles from unreliable sources often feature visuals or mentions of national symbols, certain lab/medical equipment, charts, and comics. Our work can be used in high-school curricula to develop critical media literacy skills, to gauge bias in publicly funded news media, or to construct balanced presentations of news in search engines and social media feeds.

Multi-modal learning on general data. A plethora of recent work investigates ways of integrating information from different modalities for tasks such as image captioning (Kiros, Salakhutdinov, and Zemel 2014; Karpathy and Fei-Fei 2015; You et al. 2016), but while captioning assumes the same objects are shown and mentioned, this is rarely the case in news articles, where images and text serve complementary roles. We discuss multi-modal approaches for the tasks relevant to our problem setting below.

Reliability and bias prediction. Predicting the reliability of news articles on social media has seen interest in recent years, especially after the 2016 elections (Pogue 2017). Some work requires manual fact-checking data from experts at article-level granularity (Shu et al. 2018; Singer-Vine 2016), which is costly, slow, and not scalable. Thus, (Horne et al. 2018) shifted the attention to source reliability. Following their approach, we use the source-level reliability labels given in (Grinberg et al. 2019) for the articles in our dataset. Prior work has mostly examined cues from text and social context.
(Potthast et al. 2018) performs fake news detection using hand-crafted content features (e.g. number of paragraphs). (Shu et al. 2019) combines implicit (e.g. age, political orientation) and explicit (e.g. registration time, follower count) user features for fake news detection. Work that examines visual content is relatively recent and limited. (Joo et al. 2014; Thomas and Kovashka 2019; Joo, Steen, and Zhu 2015; Yoon et al. 2020) examine how politicians' portrayals can be used to predict personal qualities, electability, and the bias of the news source. (Xi et al. 2020; Thomas and Kovashka 2019) predict political ideology from images that politicians share on social media or that news articles choose to include. However, none of this work pertains to the COVID-19 crisis. The COVID-19 topic poses a challenge in that it is fairly narrow, so the types of imagery are limited, and the same images are often reused and thus not discriminative.

Finally, multi-modal learning has also been used to analyze social media. (Jin et al. 2017) fuse features and statistics from different modalities using an attention mechanism to perform rumor detection. (Khattar et al. 2019) learn a feature space to capture explicit correlations between image and text by employing a multi-modal variational autoencoder. (Wang et al. 2018b) learn event-agnostic multi-modal features for fake news detection by performing event discrimination as an auxiliary task. (Lippe et al. 2020; Velioglu and Rose 2020) utilize recent multi-modal transformer architectures to detect hateful memes. In contrast to these works, we use multi-modal cues in a multi-task setting to perform article popularity and reliability classification, in the unique context of COVID-19 misinformation. Importantly, these works only perform classification, but do not examine the elements of misinformation. In other words, they do not explain which parts of images/text are important, do not reveal the associations between high-level visual concepts (e.g., a star) and reliability, and, crucially, the different ways images and text are combined to convey meaning. We show our approach outperforms (Khattar et al. 2019).

Popularity prediction. Prior popularity prediction efforts, e.g. (Bielski and Trzcinski 2018), are uni-modal (visual) only, not multi-modal. Our work learns from multi-modal cues to predict article popularity, within a multi-task framework, from content only and no meta-data, using a dataset of 95 news sources. We experimentally compare against (Bielski and Trzcinski 2018) and demonstrate superior performance.

Our dataset is constructed using a list of pandemic-related tweets (Chen, Lerman, and Ferrara 2020) and reliability coding of news domains (Grinberg et al. 2019). After we retrieve tweet objects for the tweets given in (Chen, Lerman, and Ferrara 2020), we only keep tweets that include a link to one of the domains in (Grinberg et al. 2019). After data collection, we obtain a set of articles S = {A_1, A_2, ..., A_N} where each article A_i is represented as the set of tweets which shared that particular article. Lastly, we crawl article URLs to retrieve their titles and images. We specifically check for the twitter:title (og:title as fallback) and twitter:image (og:image as fallback) meta-tags, since they are utilized by the news source to denote the title and the image to appear within a news-sharing tweet. We will share the URLs of images and the split into reliable/unreliable tweets and images as an extension of (Chen, Lerman, and Ferrara 2020)'s dataset.
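To illustrate the crawling step, below is a minimal sketch of how the preview title and image could be extracted from an article page, assuming Python with the requests and BeautifulSoup libraries (the helper name and its fallback order simply mirror the meta-tag priority described above):

import requests
from bs4 import BeautifulSoup

def preview_title_and_image(url):
    """Return the (title, image URL) a tweet preview would show for an article.
    Checks the twitter:* meta-tags first, falling back to the og:* tags."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    def meta(*keys):
        for key in keys:
            # Twitter cards use the 'name' attribute; Open Graph uses 'property'.
            tag = (soup.find("meta", attrs={"name": key})
                   or soup.find("meta", attrs={"property": key}))
            if tag and tag.get("content"):
                return tag["content"]
        return None

    title = meta("twitter:title", "og:title")
    image = meta("twitter:image", "og:image")
    return title, image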
Popularity labels: The first task we want our model to perform is binary article popularity classification. Thus, we define a popularity measure which makes use of the retweet and like counts of tweets that shared the same article (raw popularity), and the follower counts of the authors who posted those tweets (audience size):

P(A_i) = [ Σ_{t ∈ A_i} (t_retweet + t_like) ] / [ λ + Σ_{t ∈ A_i} Q_t ],

where t_retweet and t_like denote the number of retweets/likes for tweet t in set A_i, Q_t the number of followers of the Twitter user who posted t, and λ is a smoothing constant to prevent the score from being inflated when the audience count is small. The top 20% of articles are taken as popular and the bottom 20% as unpopular. All popular articles have a popularity measure P greater than zero, and P for all unpopular articles is zero.

Choosing λ: Setting the right value for λ is important, as it affects the calculated P values and thus the set of popular articles (top 20%). One should expect that, with an appropriate choice of λ, the distribution of audience size in popular articles and in articles that gained some popularity (i.e. P > 0) should be similar, as the former is a subset of the latter. Otherwise, the chosen λ could be favoring articles with a small/large audience as having higher P. After experimenting with different values, we set λ to 10^4, as it makes these two article sets' audience distributions similar.

Reliability labels: Data points are also assigned binary reliability labels. To this end, the domain codings in our data collection need to be collapsed into two categories: reliable and unreliable. After a careful review of (Grinberg et al. 2019)'s domain labeling strategy, we strip out yellow and satire sources, as they cannot be perfectly associated with eliciting misinformation. We consider articles from green sources as reliable, and articles from either red or orange sources as unreliable. Lastly, we undersample reliable articles to balance our experiment dataset, and split it into fixed train/val/test sets with a 70/10/20 ratio. Even after undersampling, our dataset is still much larger than that of (Zhou et al. 2020) (2,017 vs. 12,326 articles). Table 1 shows the number of articles that fall into each category in our data collection (before reliability label assignment), and Table 2 shows descriptive statistics of the experiment dataset. We use the latter in the classification experiments to answer RQ1 & RQ2, and a subset of the initial data collection (Table 1) in the cross-modal relation experiments to answer RQ3.
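To make the popularity labeling above concrete, here is a minimal sketch, assuming Python with NumPy; the per-tweet field names are hypothetical stand-ins for the corresponding Twitter API counts:

import numpy as np

def popularity_score(tweets, lam=1e4):
    """P(A_i): raw popularity over smoothed audience size (lambda = 10^4 in the paper)."""
    raw = sum(t["retweet_count"] + t["like_count"] for t in tweets)   # raw popularity
    audience = sum(t["author_follower_count"] for t in tweets)        # audience size
    return raw / (lam + audience)

def popularity_labels(articles):
    """Top 20% of articles by P -> popular (1), bottom 20% -> unpopular (0),
    middle 60% -> None (unused in the binary classification task)."""
    scores = np.array([popularity_score(a) for a in articles])
    hi, lo = np.quantile(scores, 0.8), np.quantile(scores, 0.2)
    return [1 if s >= hi else 0 if s <= lo else None for s in scores]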
Popularity and reliability classification. We describe our multi-task architecture (see Fig. 2A) for performing the binary popularity classification (T1) and source reliability classification (T2) tasks simultaneously, given the inputs:
• A_i^title: title of the article in the generated preview,
• A_i^tweet: concatenated user-generated content of the top-5 tweets (by retweets+likes) sharing the article; we oversample if |A_i| < 5,
• A_i^image: image of the article in the generated preview.

As the language used in article titles is likely different from that in tweets (e.g. tweets are more informal), we hypothesize these two should not share a word embedding space. We train two separate Word2Vec (Mikolov et al. 2013) models offline using article titles (φ) and tweet texts (ψ). Both Word2Vec models embed a token into a 128-D space (φ, ψ : X → R^128). Finally, we represent titles and tweet texts as sequences of Word2Vec embeddings, preserving token order and padding with 0 ∈ R^128 to the length of the longest sequence.

Our model employs (Kim 2014)'s Text-CNN architecture on top of these 128-D representations. Concisely, our textual feature extractors (G, H) employ 1-D filters of sizes {3, 5, 7}, with 128 filters per size. We apply max-pooling over the filter outputs, resulting in one scalar per filter and feature extractors G : A^title → R^384 and H : A^tweet → R^384. We compare to alternative text representations in Sec. 5. For images, we employ a ResNet-50 (He et al. 2016) pretrained on ImageNet (Deng et al. 2009) as the feature extractor (F : A^image → R^2048). We fuse the textual and image modalities to perform T1 (popularity) and T2 (reliability) prediction using two classification branches (Fig. 2) and a multi-task binary cross-entropy loss:

L = −[ y_p log p̂_p + (1 − y_p) log(1 − p̂_p) ] − [ y_r log p̂_r + (1 − y_r) log(1 − p̂_r) ],

where y_p ∈ {0, 1} and y_r ∈ {0, 1} denote the ground-truth popularity and reliability labels respectively, and p̂_p = p(ŷ_p = 1 | θ) and p̂_r = p(ŷ_r = 1 | θ) denote the predictions.

The intuition for using convolutions for text is that popularity and reliability may be inferable from local patterns in the text. Thus, learning convolutional filters that match these patterns may be easier than modeling the entire text autoregressively. We show in Sec. 5 that our method outperforms both (Bielski and Trzcinski 2018) and (Khattar et al. 2019), which use bi-directional LSTMs for text encoding. Convolutions also facilitate our interpretation of pattern importance.

We train our model with an initial learning rate of 1 × 10^-4 and multiply it by 0.1 if the validation loss does not improve over the last four epochs. We use early stopping to terminate training if the validation loss does not improve over the last six epochs. We use the Adam (Kingma and Ba 2014) optimizer with default parameters β_1 = 0.9, β_2 = 0.999.

Table 3: Comparison of classification performance (mean accuracy ± standard error) between our multi-task architecture and other baselines. The best method is shown in bold, and the second-best is underlined.
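To make the architecture concrete, here is a minimal PyTorch sketch of the classification model described above. The encoder dimensions follow the text; details we could not verify from the paper, such as the single-linear-layer heads and fusion by simple concatenation, are assumptions:

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class TextCNN(nn.Module):
    """Kim (2014)-style encoder: 1-D convolutions of widths 3/5/7 (128 filters
    each) over a sequence of 128-D Word2Vec embeddings, max-pooled over time."""
    def __init__(self, emb_dim=128, n_filters=128, widths=(3, 5, 7)):
        super().__init__()
        self.convs = nn.ModuleList([nn.Conv1d(emb_dim, n_filters, w) for w in widths])

    def forward(self, x):                    # x: (B, T, 128)
        x = x.transpose(1, 2)                # Conv1d expects (B, 128, T)
        pooled = [F.relu(c(x)).max(dim=2).values for c in self.convs]
        return torch.cat(pooled, dim=1)      # (B, 384)

class MultiTaskNet(nn.Module):
    """Fuses image, title, and tweet features; two binary classification heads."""
    def __init__(self):
        super().__init__()
        self.img_enc = models.resnet50(weights="IMAGENET1K_V1")
        self.img_enc.fc = nn.Identity()            # F: image -> R^2048
        self.title_enc = TextCNN()                 # G: title -> R^384
        self.tweet_enc = TextCNN()                 # H: tweet -> R^384
        self.pop_head = nn.Linear(2048 + 384 + 384, 1)   # T1: popularity
        self.rel_head = nn.Linear(2048 + 384 + 384, 1)   # T2: reliability

    def forward(self, image, title, tweet):
        z = torch.cat([self.img_enc(image), self.title_enc(title),
                       self.tweet_enc(tweet)], dim=1)
        return self.pop_head(z).squeeze(1), self.rel_head(z).squeeze(1)

def multitask_loss(pop_logit, rel_logit, y_pop, y_rel):
    """Sum of the two binary cross-entropy terms from the equation above."""
    return (F.binary_cross_entropy_with_logits(pop_logit, y_pop.float())
            + F.binary_cross_entropy_with_logits(rel_logit, y_rel.float()))

At training time the two logits are passed through multitask_loss together with the binary labels, matching the joint objective above.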
Cross-modal relation modeling. We next describe our architecture (Fig. 2B) for learning a cross-modal embedding space wherein paired visual and textual data (belonging to the same article) reside closer together than unpaired data. This embedding enables analysis of the link between modalities in terms of the message they convey, and of the different ways in which multi-modal meaning is constructed in articles with different labels. We employ an ImageNet-pretrained ResNet-50 followed by a linear transformation as the image embedding branch (F : A^image → R^512), and two Text-CNNs followed by concatenation and a linear transformation as the text embedding branch (G : A^title × A^tweet → R^512). The outputs of these branches are then L2-normalized to place embeddings on the surface of a 512-D unit hypersphere. To optimize our model, we minimize an N-pairs loss (Sohn 2016):

L = Σ_{i=1}^{N} Σ_{j ≠ i} L_trip( F(A_i^image), G(A_i^title, A_i^tweet), G(A_j^title, A_j^tweet) ),

where L_trip denotes the triplet loss (Schroff, Kalenichenko, and Philbin 2015) commonly used for learning cross-modal representations. For each article in a minibatch, we take the article image (A_i^image) as the anchor, the paired text (A_i^title, A_i^tweet) as the positive, and all other article texts (A_j^title, A_j^tweet) from the minibatch as negatives (hence N-pairs), and accumulate the loss for each negative that violates the margin α. We use the same hyperparameters and training strategy as for popularity and reliability classification, and set the margin α to 0.5.

We now describe the experiments conducted in order to answer our research questions with empirical evidence.

RQ1: Multi-modal prediction of popularity and reliability. The first experiment aims to verify the appropriateness of the architecture we use, by comparing it with several other multi-modal, single-task baselines. We train two instances of each baseline, one per task. Table 3 summarizes the results. We observe that learning task-specific document representations (as done by (Bielski and Trzcinski 2018), (Khattar et al. 2019), and our method), instead of using task-agnostic document embeddings (Doc2Vec is trained on our data, but in an unsupervised fashion), leads to better exploitation of the textual modality and stronger performance on both tasks. Our method is the best single-task method for both tasks, outperforming prior art, in part due to the use of convolutions (discussed previously). The success of our model addresses RQ1 and indicates that popularity and reliability can indeed be estimated from content alone (textual and visual features) with reasonable accuracy, without needing to rely on meta-data (network features). We also observe that our proposed multi-task approach improves T1 accuracy by 0.4%, indicating that even though these two tasks seem unrelated, optimizing them jointly enables learning more informative feature representations.

RQ2: Predictive signals from text and images. We conduct another experiment to identify which source(s) of information are useful in predicting article popularity and source reliability. We use the single-task version of our architecture, i.e. OURS (SINGLE-TASK), to see each input's effect separately for each task.

Table 4: Importance of inputs for popularity (T1) and reliability (T2). The method with the best accuracy is bolded, the second-best is underlined, and the third-best is italicized.

Results in Table 4 show that tweet text is the most important source of information for popularity classification, while title and image are significantly weaker (see the supplementary material for the hashtag/mention effect experiment). One possible explanation could be that articles may share very similar titles and images regardless of popularity, as all of them are related to the same topic, COVID-19. For example, images that portray the US President holding a news conference can be found on both sides of the popularity split. On the other hand, while the article title is the most important input for source reliability, all inputs carry useful signals. Adding tweets to the inputs improves performance over title-only by 3.6%, and adding the image adds a further 3.4% in accuracy. These results may indicate that news sources have a unique way of conveying information through images and titles, and that this distinction persists in the user-generated content shared along with the articles. The experiments in this section answer RQ2, concluding that tweet text and article title are the most important sources of information for T1 and T2, respectively.

Visualizing important regions. One advantage of having a multi-task architecture is that one can pinpoint important parts of the inputs for each task within the same model, because the exact same input representation is used to perform the different tasks. In this work, we combine Grad-CAM (Selvaraju et al. 2017) and SmoothGrad (Smilkov et al. 2017) to visualize important regions for the model's predictions and show how these regions change across tasks and their classes (popular/not, reliable/not). Grad-CAM uses gradient information to build class-discriminative localization maps.
It calculates an importance score for each feature map by performing global average pooling on the back-propagated gradients, and then takes linear combinations of the forward feature maps using these importance scores. To prevent rapid gradient fluctuations within local structures, SmoothGrad computes a stochastic approximation to Gaussian smoothing by averaging gradients over multiple noisy versions of the input. As our feature extractors for textual inputs are also CNNs, we use the same technique to visualize important parts of the input text.

For article titles (Fig. 3), we observe that sentence fragments which can be associated with oppression (e.g. "censoring" and "suppress" in [e, g]), conspiracy (e.g. "china falsified", "secretly" and "spying" in [a, c, d]), decline in economic activity (e.g. "shares crash" and "sales crash" in [f, h]), or ridiculing and portraying COVID-19 as a hoax (e.g. "billion-jillion" and "might lower" in [b, i]) become important for classifying an article as unreliable. On the other hand, China-related tokens are linked to unpopularity (e.g. "Wuhan", "China", "Chinese" in [c, f, g]). Interestingly, our model puts very little attention on the title when classifying an article as reliable or popular, and relies on the other inputs.

Next, Fig. 4 shows smoothed Grad-CAM output for 18 article images. For each image, from top to bottom, we show important regions for classifying an article as popular, unpopular, reliable, and unreliable, respectively. In the top row, we show images with the Chinese flag [a-d], charts [e-g], and comics [h-i]. We observe that the stars in the Chinese flag are used to predict that these images come from unreliable sources [a-d]. In [e-g], charts are consistently associated with unreliability and often with popularity, signaling that unreliable sources use chart visuals when talking about the economic impact of the pandemic, and that these visuals attract the audience. Similarly, in [h-i], comics are associated with being both popular and unreliable, revealing another successful strategy used by unreliable sources to make their articles more noticeable when shared on Twitter. In the second row of Fig. 4, we show images with 3-D models of the coronavirus [j-k], pipettes and needles [l-o], and large texts [p-r], all associated with being unreliable. In [l-o], however, pipettes and needles are also tied to popularity, probably because the types of unreliable articles these images can belong to (e.g. anti-vaccine, COVID being lab-made) draw people's attention more easily.

Finally, in Table 5, we report the 10 tweet tokens with the largest average attention score in each task class. Results show that while prevention-related tokens are associated with the shared article being reliable, political tokens are mostly tied to being unreliable. It is also clear that certain emojis indicate article popularity.
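A minimal sketch of this visualization procedure, assuming PyTorch. It is shown for a single-input classifier for brevity; `layer` would be the last convolutional block of the image (or text) encoder, and for the 1-D Text-CNN features one would pool over the time dimension only. The paper does not specify exactly how Grad-CAM and SmoothGrad are combined; here we average activations and gradients over the noisy inputs before forming the map:

import torch

def _acts_and_grads(model, layer, x, out_index):
    """Run a forward/backward pass and capture `layer`'s activations and the
    gradients of the chosen output logit w.r.t. those activations."""
    acts, grads = [], []
    h1 = layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    model.zero_grad()
    model(x)[:, out_index].sum().backward()
    h1.remove(); h2.remove()
    return acts[0].detach(), grads[0].detach()

def smooth_grad_cam(model, layer, x, out_index, n=25, sigma=0.1):
    """SmoothGrad-averaged Grad-CAM: average activations/gradients over n noisy
    copies of the input, global-average-pool the gradients into per-feature-map
    weights, then take a ReLU'd weighted sum of the feature maps."""
    A = G = 0.0
    for _ in range(n):
        a, g = _acts_and_grads(model, layer, x + sigma * torch.randn_like(x), out_index)
        A, G = A + a / n, G + g / n
    w = G.mean(dim=(2, 3), keepdim=True)   # importance score per feature map
    return torch.relu((w * A).sum(dim=1))  # class-discriminative map, (B, H, W)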
RQ3: Difference in cross-modal relationships between reliable and unreliable domains. The social media posts we examine construct meaning from multiple modalities, i.e. tweet, title, and image. We next examine how the textual and visual components relate to each other, and how their relationship differs between reliable and unreliable samples. We learn two separate cross-modal embedding spaces (using Fig. 2B but different training data), one per domain: one using only reliable (green) articles and another using only unreliable articles (red, Table 1). These models allow us to compare similarity across modalities (e.g. find the text that most closely matches an image).

Rather than the absolute performance of these models for cross-modal retrieval, we are interested in how they generalize across domains. If a model trained on domain A performs poorly when the test domain is switched from A to B, this may be because domain A contains a distortion or bias the model can exploit. Table 6 shows the results. Regardless of which domain we train on, performance is inflated when the training and test domains are the same, and drops when testing on a different domain (drop shown in the last column). However, this performance drop is much larger when training on red (unreliable) articles: performance drops drastically when the test domain switches from red to green, i.e. the model does not generalize to the green (reliable) domain. On the other hand, the cross-domain performance decrease is much smaller for the model trained on green articles. Thus, the image-text association in the unreliable domain is much less general than that in the reliable domain; in other words, it is more biased. This finding relates to RQ3. We complement it with another measurement and discussion in the next section.

We chose K-way retrieval to test generalization performance, as in (Thomas and Kovashka 2020), for the following reason. The semantic discrepancy between the image and text of an article is generally large (e.g. an article image with people wearing masks can be paired with several different texts), so a retrieval quality metric designed for semantically well-aligned modalities (e.g. an image and its caption), namely Recall@K, is not suitable to assess performance. In K-way retrieval, for a query image, we choose the paired text as the positive and randomly sample K − 1 article texts as negatives, then check whether the positive is the closest to the query image among the K candidates. All models get the same negative set for the same query. The green training set is undersampled to match the size of the red training set.

Figure 5: MMD within green, within red, and between green and red articles w.r.t. sample size, for image inputs (left), titles (middle), and tweets (right). Discrepancy within green article images is significantly smaller than within red article images. On the other hand, we find no significant difference in within-domain discrepancy for the other input types.

Homogeneity of reliable/unreliable content. In the previous section, we found that unreliable content is more biased and generalizes worse than reliable content. One hypothesis is that this bias is due to the homogeneity of the unreliable content (i.e. the same ideas being propagated, so embeddings trained on them do not generalize to other data). We test this hypothesis by measuring within-domain homogeneity. We measure how coherent the distributions of tweets, titles, and images are in reliable and unreliable sources using maximum mean discrepancy (MMD) (Gretton et al. 2012). Given two sets of observations X = {x_1, x_2, ..., x_N} and Y = {y_1, y_2, ..., y_M} drawn i.i.d. from two distributions p and q respectively, the empirical estimate of MMD² is computed as:

MMD²(X, Y) = (1/N²) Σ_{i,j} k(x_i, x_j) + (1/M²) Σ_{i,j} k(y_i, y_j) − (2/NM) Σ_{i,j} k(x_i, y_j),

where we use the Laplace kernel, k(x, x') = e^{−α‖x−x'‖}, in our experiments. We randomly sample 2N articles from each domain (reliable or unreliable), divide them into two N-sized sets, and calculate the MMD between these two sets, both of which are from the same domain. We represent article images with their 2048-D features extracted from a ResNet-50 pre-trained on ImageNet, and text inputs with 128-D Doc2Vec embeddings.
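A minimal NumPy/SciPy sketch of this estimator, in its biased V-statistic form, matching the equation above; the kernel bandwidth alpha is a placeholder, as its value is not reported:

import numpy as np
from scipy.spatial.distance import cdist

def mmd2(X, Y, alpha=1.0):
    """Biased empirical estimate of MMD^2 between samples X (N, d) and Y (M, d),
    using the Laplace kernel k(x, x') = exp(-alpha * ||x - x'||)."""
    k = lambda A, B: np.exp(-alpha * cdist(A, B))   # cdist gives pairwise L2 distances
    n, m = len(X), len(Y)
    return k(X, X).sum() / n**2 + k(Y, Y).sum() / m**2 - 2 * k(X, Y).sum() / (n * m)

# Within-domain homogeneity: split 2N same-domain feature vectors into two halves.
# feats: (2N, 2048) ResNet-50 image features, or (2N, 128) Doc2Vec embeddings.
# idx = np.random.permutation(len(feats))
# score = mmd2(feats[idx[:N]], feats[idx[N:]])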
We repeat the sampling process 250 times for each N value and report the average MMD. Figure 5 shows how within-domain MMD changes for different values of N; small MMD indicates high homogeneity. For N = 1,000, t-test results show that the image pool of reliable articles is more homogeneous than that of unreliable articles (t(498) = −7.46, p < 0.01). We found no significant difference in homogeneity between the tweet pools of reliable and unreliable articles (t(498) = 1.17, p = 0.24), or between their title pools (t(498) = −0.34, p = 0.74). Thus, unreliable sources are not more homogeneous than reliable ones, indicating their bias has another cause.

The findings in this and the previous section answer our RQ3 and show that unreliable and reliable articles construct meaning in different ways. In particular, the generalization performance of a metric learning (embedding) model trained on unreliable articles is much worse than that of one trained on reliable articles, indicating that unreliable articles are distorted and biased. This bias is not because unreliable articles are more homogeneous (less diverse and broad) than reliable ones.

We examined the elements of multi-modal information and misinformation on social media. We showed that the popularity and reliability of an article can be inferred with good accuracy from visual and textual content alone, without relying on expensive network or user features. We measured the impact of the visual and textual channels, as well as which segments within them (regions in images, words in tweets and titles) contribute most to the persuasive power of the articles. For instance, national symbols and conspiracy-related words become important for classifying an article as unreliable. We showed that unreliable articles use image-text associations very differently to construct multi-modal rhetoric. This has an important implication for relevant downstream tasks: general-purpose image datasets and models cannot readily be used for combating misinformation in multi-modal content unless this bias is accounted for. Our work is a step towards understanding misinformative COVID-19-related content and demonstrates that there are differential patterns of textual and visual elements in online misinformation, which suggests that media literacy educators and online platforms should look at the multiple modalities that shape user experience and meaning in shared media content.

One major drawback of our approach is that it is not able to associate important regions with high-level semantic concepts. This requires a vocabulary of such concepts, which is very hard to construct given our diverse dataset. It is currently not feasible to compute a table like Table 5 for visual tokens, i.e. some frequency-based statistic over common patterns appearing in images. Unfortunately, the state-of-the-art computer vision methods are insufficient for this task in the space of COVID-related persuasion. One strategy for extracting visual tokens could be to run an off-the-shelf object detection model on article images, then count how frequently each object category is attended to by each of our four task classes. However, we found that even large-vocabulary detection models perform poorly on our data and miss important categories (e.g. medical equipment, flags, banners).
Alternatively, to avoid the need for semantic labels, we have experimented with clustering of visual inputs, but semantic/topical similarity and visual similarity are quite distinct, and visual similarity models (and clustering) do not capture the theme of each image. For example, images of a couple performing a partner stunt in a park, a store front, and a government building are grouped together. Because computing semantically-aware representations for the specific domain of COVID imagery is a full-fledged ML task, we leave it as future work.

References
On the Feasibility of Predicting News Popularity at Cold Start
The pulse of news in social media: Forecasting popularity
Understanding multimodal popularity prediction of social media videos with self-attention
Characterizing the life cycle of online news stories using social media reactions
Tracking Social Media Discourse About the COVID-19 Pandemic: Development of a Public Coronavirus Twitter Data Set
Learning phrase representations using RNN encoder-decoder for statistical machine translation
ImageNet: A large-scale hierarchical image database
Pictorial metaphor in advertising
Image popularity prediction in social media using sentiment and context features
A kernel two-sample test
Fake news on Twitter during the 2016 US presidential election
Deep residual learning for image recognition
This just in: Fake news packs a lot in title, uses simpler, repetitive content in text body, more similar to satire than real news
Assessing the news landscape: A multi-module toolkit for evaluating the credibility of news
Multimodal Fusion with Recurrent Neural Networks for Rumor Detection on Microblogs
Novel visual and statistical image features for microblogs news verification
Low-rank multi-view embedding learning for micro-video popularity prediction
Visual persuasion: Inferring communicative intents of images
Automated facial trait judgment and election outcome prediction: Social dimensions of face
Deep visual-semantic alignments for generating image descriptions
MVAE: Multimodal Variational Autoencoder for Fake News Detection
Convolutional neural networks for sentence classification
Adam: A method for stochastic optimization
Unifying visual-semantic embeddings with multimodal neural language models
Distributed representations of sentences and documents
Popularity prediction on online articles with deep fusion of temporal process and content features
A multimodal approach to predict social media popularity
Visual persuasion: The role of images in advertising
Efficient estimation of word representations in vector space
Fake news detection on social media using geometric deep learning
Persuasion in advertising
Headlines Matter: Using Headlines to Predict the Popularity of News Articles on Twitter and Facebook
How to Stamp Out Fake News
A Stylometric Inquiry into Hyperpartisan and Fake News
Csi: A hybrid deep model for fake news detection
Facenet: A unified embedding for face recognition and clustering
Grad-CAM: Visual explanations from deep networks via gradient-based localization
Fakenewsnet: A data repository with news content, social context and dynamic information for studying fake news on social media
The Role of User Profile for Fake News Detection
Smoothgrad: removing noise by adding noise
Improved deep metric learning with multiclass n-pair loss objective
It doesn't take a village to fall for misinformation: Social media use, discussion heterogeneity preference, worry of the virus, faith in scientists, and COVID-19-related misinformation beliefs
Predicting the Politics of an Image Using Webly Supervised Data
Preserving Semantic Neighborhoods for Robust Cross-modal Retrieval
Recurrent neural networks for online video popularity prediction
Detecting Hate Speech in Memes Using Multimodal Deep Learning Approaches: Prize-winning solution to Hateful Memes Challenge
Retweet wars: Tweet popularity prediction via dynamic multimodal regression
EANN: Event Adversarial Neural Networks for Multi-Modal Fake News Detection
Decoding advertisements
Analyzing and predicting news popularity on Twitter
Tracing fake-news footprints: Characterizing social media messages by how they propagate
Understanding the Political Ideology of Legislators from Social Media Images
Cross-Domain Classification of Facial Appearance of Leaders
Image captioning with semantic attention
User-Guided Hierarchical Attention Network for Multi-Modal Social Image Popularity Prediction
ReCOVery: A Multimodal Repository for COVID-19 News Credibility Research
Popularity prediction of images and videos on Instagram