title: SAFE: Similarity-Aware Multi-modal Fake News Detection
authors: Zhou, Xinyi; Wu, Jindi; Zafarani, Reza
date: 2020-04-17
journal: Advances in Knowledge Discovery and Data Mining
DOI: 10.1007/978-3-030-47436-2_27

Abstract. Effective detection of fake news has recently attracted significant attention. Current studies have made significant contributions to predicting fake news, with less focus on exploiting the relationship (similarity) between the textual and visual information in news articles. Attaching importance to such similarity helps identify fake news stories that, for example, attempt to use irrelevant images to attract readers' attention. In this work, we propose a Similarity-Aware FakE news detection method (SAFE), which investigates the multi-modal (textual and visual) information of news articles. First, neural networks are adopted to separately extract textual and visual features for news representation. We further investigate the relationship between the extracted features across modalities. Such representations of news textual and visual information, along with their relationship, are jointly learned and used to predict fake news. The proposed method facilitates recognizing the falsity of news articles based on their text, their images, or the "mismatch" between the two. We conduct extensive experiments on large-scale real-world data, which demonstrate the effectiveness of the proposed method.

1 Introduction

Following the 2016 U.S. presidential election, the impact of "fake news" has become a major concern. Based on a broad investigation of ∼126,000 verified true and fake news stories on Twitter from 2006 to 2017, Vosoughi et al. revealed that fake news stories spread more frequently and faster than true news stories [20]. As indicated by the fundamental theories on fake news in psychology and the social sciences (see a comprehensive survey in Ref. [27]), the more a fake news article spreads, the higher the possibility of social media users spreading and trusting it due to repeated exposure and/or peer pressure. Such trust and belief can easily be amplified and reinforced within social media due to its echo chamber effect [3]. Hence, extensive research has been conducted on effectively detecting fake news to block its dissemination on social media.

Fake news detection methods can be generally grouped into (1) content-based and (2) social-context-based methods. The main difference between the two types of methods is whether or not they rely on social context information: the information on how the news has propagated on social media, where abundant auxiliary information on the social media users involved and their connections/networks can be utilized. Many innovative and significant solutions (e.g., [1,13,15]) have been proposed to exploit social context information. With more social context information available, one can often better detect fake news; however, detection becomes more challenging depending on the stage the news is currently at. In particular, it is difficult to detect fake news using social-context-based methods when the news has just been published and has not yet propagated, i.e., when no social context information exists. This motivates us to further explore the role that news content can play in fake news detection.
As "a news article that is intentionally and verifiably false" [25] , fake news content often contains textual and visual information. Existing content-based fake news detection methods either solely consider textual information [26] , or combine both types of data ignoring the relationship (similarity) between them [4, 5, 23, 24] . The values in understanding such relationship (similarity) for predicting fake news are two-fold. To attract public attention, some fake news stories (or news stories with low-credibility) prefer to use dramatic, humorous (facetious), and tempting images whose content is far from the actual content within the news text. Furthermore, when a fake news article tells a story with fictional scenarios or statements, it is difficult to find both pertinent and nonmanipulated images to match these fictions; hence a "gap" exists between the textual and visual information of fake news when creators use non-manipulated images to support non-factual scenarios or statements. 1 With such considerations, we propose a Similarity-Aware FakE news detection method (SAFE). The method consists of three modules, performing (1) multi-modal (textual and visual) feature extraction; (2) within-modal (or say, modal-independent) fake news prediction; (3) cross-modal similarity extraction, respectively. For each news article, we first adopt neural networks to automatically obtain the latent representation of both its textual and visual information, based on which a similarity measure is defined between them. Then, such representations of news textual and visual information with their similarity are jointly learned and used to predict fake news. The proposed method aims to recognize the falsity of a news article on either its text or images, or the "mismatch" between the text and images. The main contributions of our work are summarized as below. 1. To our best knowledge, we present the first approach that investigates the role of the relationship (similarity) between news textual and visual information in predicting fake news; 2. We propose a new method to jointly exploit multi-modal (textual and visual) and relational information to learn the representation of news articles and predict fake news; and 3. We conduct extensive experiments on large-scale real-world data to demonstrate the effectiveness of the proposed method. Next, we will first review the related work in Sect. 2. The proposed method will be detailed in Sect. 3, along with its iterative learning process in Sect. 4. We will detail the experiments and the results in Sect. 5. We will conclude in Sect. 6. There has been extensive research on fake news detection. Fake news detection methods can be generally grouped into (I) content-based and (II) social-contextbased methods. I. Content-Based Fake News Detection. Content-based methods detect fake news by utilizing news content, i.e., the textual information and/or visual information within news content. Most content-based methods have comprehensively investigated news textual information. Within a traditional statistical natural language processing framework, such investigation has crossed multiple levels of language. By assuming that fake news differs from true news in linguistic/writing styles in the content, various hand-crafted features have been extracted from news content for representation and used for classification by, e.g., SVM and random forest. For example, Pérez-Rosas et al. 
For example, Pérez-Rosas et al. [11] employed lexical features using bag-of-words and n-gram models, semantic features relying on LIWC [10], syntactic features such as context-free grammars, and news readability. Instead of extracting features based on experience, Zhou et al. [26] validated the role of fundamental theories in psychology and social science in guiding fake news feature engineering. Rhetorical structures among sentences or phrases within news content have also been investigated, with either a vector space model [14] or a Bi-LSTM [6]. Researchers have also explored the political bias [12] and homogeneity [2] of news publishers by mining the news content they have published, and have demonstrated how such information can help detect fake news.

In addition to textual information, greater (though still limited) attention has recently been paid to visual information within news content. Jin et al. analyzed the differences between images in true news and fake news in terms of, e.g., their clarity [5]. Along with the recent advances in deep learning, various RNNs and CNNs have been developed for multi-modal fake news detection and related tasks [4,7,18,21,23,24]. To learn the multi-modal (textual and visual) representation of news content, Jin et al. combined VGG-19 and LSTM with an attention mechanism [4], and Khattar et al. designed an encoder-decoder mechanism [7]. Yang et al. proposed TI-CNN, which detects fake news by extracting both explicit and latent multi-modal features within news content [24]. Wang et al. proposed the Event Adversarial Neural Network (EANN) to learn event-invariant features representative of news content across various topics and domains [23]. While these techniques have facilitated the development of multi-modal fake news detection, the relationship across modalities has barely been explored or exploited. Our work bridges this gap by directly capturing the relationship (similarity) between the textual and visual information within news content, and is, to our knowledge, the first to learn the representation of news articles by jointly mining their multi-modal information and the relationship across modalities.

II. Social-Context-Based Fake News Detection. Social-context-based methods detect fake news by investigating social-context information related to news articles, i.e., how news articles spread on social media. Significant contributions have been made on identifying the differences in propagation patterns between fake news and the truth [20]. Such contributions have also focused on how user profiles [1] and opinions [13,15] can help news verification, using feature engineering [1] and neural networks [13,15]. Nevertheless, verifying a news article that has been published online, e.g., on a news outlet such as BuzzFeed (https://www.buzzfeed.com/), before it has been disseminated on social media demands content-based methods, as social-context information does not exist at that stage. For this purpose, we focus on mining news content in this work; the proposed method is detailed next.

3 Methodology

In this section, the proposed method (SAFE) is detailed in terms of its three modules, which respectively perform (I) multi-modal feature extraction (Sect. 3.1), (II) modal-independent fake news prediction (Sect. 3.2), and (III) cross-modal similarity extraction (Sect. 3.3). We then detail in Sect. 3.4 how the various modules work collectively to predict fake news. An overview of the SAFE framework is presented in Fig. 1.

Before further specification, we formally define the problem and introduce key notation. Let A = {T, V} denote a news article consisting of textual information T and visual information V. Let t ∈ R^d and v ∈ R^d denote the representations of the article's textual and visual information, respectively, and let s = M_s(t, v) denote the similarity between t and v, where s ∈ [0, 1].
Our goal is to predict whether A is a fake news article (ŷ = 1) or a true one (ŷ = 0) by investigating its textual information, visual information, and their relationship, i.e., to determine M_p such that ŷ = M_p(T, V; θ*), where θ* are the parameters to be learned.

3.1 Multi-modal Feature Extraction

The multi-modal feature extraction module of SAFE aims to represent the (I) textual information and (II) visual information of a given news article in d-dimensional space.

Text. We extend Text-CNN [8] by introducing an additional fully connected layer to automatically extract textual features for each news article. The architecture of Text-CNN is provided in Fig. 2; it contains a convolutional layer and max pooling. Given a piece of content with n words, each word is first embedded as x_t^l ∈ R^k, l = 1, 2, ..., n [9]. The convolutional layer is used to produce a feature map, denoted as c_t = [c_t^1, c_t^2, ..., c_t^{n-h+1}], from a sequence of local inputs {x_t^{1:h}, x_t^{2:(h+1)}, ...} via a filter w_t. As shown in Fig. 2, each local input is a group of h continuous words. Mathematically,

c_t^l = σ(w_t · x_t^{l:(l+h-1)} + b_t),  where x_t^{l:(l+h-1)} = x_t^l ⊕ x_t^{l+1} ⊕ ... ⊕ x_t^{l+h-1},

b_t is a bias, ⊕ is the concatenation operator, and σ is the ReLU function. Note that w_t and b_t are all parameters within Text-CNN to be learned. Then, a max-over-time pooling operation is applied to the obtained feature map for dimension reduction, i.e., ĉ_t = max{c_t}. Finally, the representation of the news text is obtained by t = W_t ĉ_t + b_t, where ĉ_t ∈ R^g, g is the number of different window sizes chosen, and W_t ∈ R^{d×g} and b_t ∈ R^d are parameters to be learned.

Image. For representing news images, we also use Text-CNN with an additional fully connected layer, after first processing the visual information within news content using a pre-trained image2sentence model [19]. Compared to existing multi-modal fake news detection studies, which often directly apply a pre-trained CNN (e.g., VGG) model to obtain the representation of news images [4,23], we adopt this processing strategy for consistency across modalities and to increase insight when computing the similarity across modalities. As we demonstrate later in our experiments, it also leads to performance improvements. Let c_v denote the feature map output by the neural network with parameters w_v (filter) and b_v (bias), and ĉ_v its max-pooled counterpart. Similarly, the final representation of the news visual information is computed by v = W_v ĉ_v + b_v, where W_v and b_v are parameters to be learned.

3.2 Modal-Independent Fake News Prediction

To properly represent news textual and visual information in predicting fake news, we aim to correctly map the extracted textual and visual features of news content to their possibilities of being fake, and further to their actual labels. Mathematically, such possibilities can be computed by

M_p(t, v) = 1 · softmax(W_p (t ⊕ v) + b_p),

where 1 = [1, 0]^T, ⊕ is the concatenation operator, and W_p ∈ R^{2×2d} and b_p ∈ R^2 are parameters. To let the computed possibilities of news articles being fake approach their actual labels, a cross-entropy-based loss function is defined:

L_p(θ_t, θ_v, θ_p) = −E[ y log M_p(t, v) + (1 − y) log(1 − M_p(t, v)) ],

where y ∈ {0, 1} denotes the actual news label and θ_p = {W_p, b_p}.
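To make the two within-modal modules concrete, the following is a minimal sketch of Sects. 3.1–3.2, not the authors' released code. PyTorch is an assumption, and the embedding size k, output dimension d, number of filters, and window sizes are illustrative placeholders; with one filter per window size, the pooled vector corresponds to ĉ_t ∈ R^g as in the text.

```python
# A minimal sketch of Sects. 3.1-3.2 (illustrative, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    """Text-CNN [8] extended with a fully connected layer (Sect. 3.1).

    Maps a sequence of n word embeddings (each in R^k) to a d-dimensional
    representation. The visual branch reuses this architecture on the
    sentence produced from each image by the pre-trained image2sentence
    model [19] (not shown here).
    """
    def __init__(self, k=300, d=128, num_filters=128, window_sizes=(3, 4)):
        super().__init__()
        # One convolution per window size h; each filter spans h word vectors.
        self.convs = nn.ModuleList(
            [nn.Conv1d(k, num_filters, h) for h in window_sizes])
        # Fully connected layer: t = W_t c_hat_t + b_t.
        self.fc = nn.Linear(num_filters * len(window_sizes), d)

    def forward(self, x):                      # x: (batch, n, k)
        x = x.transpose(1, 2)                  # (batch, k, n) for Conv1d
        # ReLU feature maps, then max-over-time pooling (c_hat = max{c}).
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))   # (batch, d)

class ModalIndependentPredictor(nn.Module):
    """Sect. 3.2: map the concatenated features (t, v) to P(fake)."""
    def __init__(self, d=128):
        super().__init__()
        self.fc = nn.Linear(2 * d, 2)          # W_p in R^{2x2d}, b_p in R^2

    def forward(self, t, v):
        logits = self.fc(torch.cat([t, v], dim=1))
        # M_p(t, v) = 1 . softmax(...): keep the first softmax component.
        return F.softmax(logits, dim=1)[:, 0]

def loss_p(p_fake, y, eps=1e-8):
    """Cross-entropy loss L_p; y in {0, 1} with 1 = fake."""
    return -(y * torch.log(p_fake + eps)
             + (1 - y) * torch.log(1 - p_fake + eps)).mean()
```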
3.3 Cross-modal Similarity Extraction

When attempting to correctly map the multi-modal features of news articles to their labels, the features belonging to the two modalities are considered separately: they are concatenated with no relation between them explored (see Sect. 3.2). However, the falsity of a news article can also be detected by assessing how (ir)relevant its textual information is to its visual information; fake news creators sometimes actively use irrelevant images for false statements to attract readers' attention, or passively use them due to the difficulty of finding a supportive non-manipulated image (see the case studies in Sect. 5 for examples). Compared to news articles delivering relevant textual and visual information, those with disparate statements and images are more likely to be fake. We define the relevance between news textual and visual information by slightly modifying cosine similarity:

M_s(t, v) = (t · v + ||t|| ||v||) / (2 ||t|| ||v||).

In this way, M_s(t, v) is guaranteed to be positive and within [0, 1] (to be utilized in Eq. (7)); 0 indicates that t and v are far from similar, while 1 indicates that t and v are exactly the same. We then define a loss function based on cross-entropy, which assumes that, from a pure similarity perspective, news articles formed with mismatched textual and visual information are more likely to be fake than those with matching textual statements and images:

L_s(θ_t, θ_v) = −E[ y log(1 − M_s(t, v)) + (1 − y) log M_s(t, v) ].   (7)

3.4 Model Integration

When detecting fake news, we aim to correctly recognize fake news stories whose falsity lies in (1) their textual and/or visual information, or (2) the relationship between the two, as specified in Sect. 3.2 and Sect. 3.3, respectively. To cover both cases, we specify our final loss function as

L(θ_t, θ_v, θ_p) = α L_p(θ_t, θ_v, θ_p) + β L_s(θ_t, θ_v),   (9)

whose parameters can be jointly learned by

θ* = arg min_{θ_t, θ_v, θ_p} L(θ_t, θ_v, θ_p).   (10)

4 Iterative Learning

We outline the optimization process used to learn the model parameters, i.e., to iteratively solve Eq. (10); the process is summarized in Algorithm 1. The updating rule for each group of parameters is as follows.

Update θ_p. Let γ be the learning rate. Since L_s does not involve θ_p, the partial derivative of L w.r.t. θ_p is α ∂L_p/∂θ_p. As θ_p = {W_p, b_p}, updating θ_p is equivalent to updating both W_p and b_p in each iteration, which respectively follow

W_p ← W_p − γ ∂L/∂W_p  and  b_p ← b_p − γ ∂L/∂b_p.

Update θ_t. The partial derivative of L w.r.t. θ_t is computed via the chain rule through t, i.e., ∂L/∂θ_t = α ∂L_p/∂θ_t + β ∂L_s/∂θ_t. Let ∇L_*(t) denote the partial derivative of L_* w.r.t. t (for * ∈ {p, s}), t_0 = t/||t||, v_0 = v/||v||, and let W_{p,L} denote the first d columns of W_p. Expressing ∇L_p(t) in terms of W_{p,L} and ∇L_s(t) in terms of t_0 and v_0, the parameters in θ_t are updated by gradient descent with rate γ. In the resulting update rules, D_t ∈ R^{d×d} is a diagonal matrix whose entries are determined by ĉ_t, and B_t = α∇L_p(t) + β∇L_s(t) aggregates the gradients from the two loss terms.

Update θ_v. This is similar to updating θ_t; we omit the details due to space constraints.
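Continuing the earlier sketch (and reusing its TextCNN, ModalIndependentPredictor, and loss_p), the cross-modal similarity of Sect. 3.3 and the joint objective of Sects. 3.4 and 4 can be written as follows. This is a simplified sketch: autograd stands in for the closed-form updates above, and the optimizer choice is an assumption.

```python
# A minimal sketch of Sects. 3.3-3.4 and one iteration of Sect. 4
# (illustrative, not the authors' code).
import torch

def similarity(t, v, eps=1e-8):
    """Modified cosine similarity M_s(t, v), guaranteed to lie in [0, 1]:
    (t . v + ||t|| ||v||) / (2 ||t|| ||v||)."""
    dot = (t * v).sum(dim=1)
    norms = t.norm(dim=1) * v.norm(dim=1) + eps
    return (dot + norms) / (2 * norms)

def loss_s(s, y, eps=1e-8):
    """Cross-entropy loss L_s of Eq. (7): mismatched (low-similarity)
    articles are treated as more likely to be fake (y = 1)."""
    return -(y * torch.log(1 - s + eps)
             + (1 - y) * torch.log(s + eps)).mean()

def train_step(text_cnn, img_cnn, predictor, optimizer, x_text, x_img, y,
               alpha=0.5, beta=0.5):
    """One gradient step toward Eq. (10): minimize L = alpha*L_p + beta*L_s."""
    t = text_cnn(x_text)          # textual features (Sect. 3.1)
    v = img_cnn(x_img)            # visual features via image2sentence output
    s = similarity(t, v)          # M_s(t, v) (Sect. 3.3)
    loss = alpha * loss_p(predictor(t, v), y) + beta * loss_s(s, y)  # Eq. (9)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A single optimizer over the parameters of all three modules (e.g., torch.optim.SGD with the learning rate reported in Sect. 5.1) would then implement the joint learning of Eq. (10); the per-parameter derivations above describe what autograd computes here.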
5 Experiments

We detail the experimental setup in Sect. 5.1, followed by an evaluation of SAFE in Sect. 5.2.

5.1 Experimental Setup

We detail (I) the data used in our experiments, (II) the baselines SAFE is compared to, and (III) implementation details such as how the data was pre-processed and how SAFE hyper-parameters were set.

Datasets. Our experiments are conducted on two well-established public benchmark datasets for fake news detection [16], with news articles collected from PolitiFact and GossipCop, respectively. PolitiFact (https://www.politifact.com/) is a well-known non-profit fact-checking website for political statements and reports in the U.S. [22]. GossipCop (https://www.gossipcop.com/) is a website that fact-checks celebrity reports and entertainment stories published in magazines and newspapers. News articles in the PolitiFact dataset were published from May 2002 to July 2018, and those in the GossipCop dataset from July 2000 to December 2018. Ground-truth labels (fake or true) of news articles in both datasets were provided by domain experts, which guarantees the quality of the labels. Statistics of the two datasets are provided in Table 1.

Baselines. We compare SAFE to the following baselines, which detect fake news using (i) textual information (LIWC [10]), (ii) visual information (VGG-19 [17]), or (iii) multi-modal information (att-RNN [4]):
- LIWC [10]: LIWC is a widely accepted psycho-linguistics lexicon. Given a news story, LIWC counts the words in the text falling into one or more of over 80 linguistic, psychological, and topical categories. These counts act as hand-crafted features used by, e.g., a random forest, to predict fake news;
- VGG-19 [17]: VGG-19 is a widely used CNN with 19 layers for image classification. We use a fine-tuned VGG-19 as one of the baselines; and
- att-RNN [4]: att-RNN is a deep neural network model applicable to multi-modal fake news detection. It employs LSTM and VGG-19 with an attention mechanism to fuse the textual, visual, and social-context features of news articles. We set the hyper-parameters the same as those in [4] and exclude the social-context features for a fair comparison.

We also include the following variants of the proposed SAFE method:
- SAFE\T: the proposed SAFE method without using textual information;
- SAFE\V: the proposed SAFE method without using visual information;
- SAFE\S: SAFE without capturing the relationship (similarity) between news textual and visual information; in this case, the extracted multi-modal features of each news article are fused by concatenation; and
- SAFE\W: the proposed method when only the relationship between textual and visual information is assessed; in this case, the classifier is directly connected to the output of the cross-modal similarity extraction module, with W and b as its parameters.

Implementation Details. In our experiments, each dataset was separated into 80% for training and 20% for testing based on the publication dates of the news articles, where the most recently published articles were treated as test data. Five-fold cross-validation was used for model training. We set the learning rate to 10^{-4}, the number of iterations to 100, and the strides (i.e., the window sizes h) to {3, 4}.

5.2 Performance Analysis

We evaluate the general performance of SAFE by comparing it with (I) state-of-the-art fake news detection methods and (II) its variants. Next, (III) the parameters within SAFE are analyzed, and (IV) case studies are presented to validate its effectiveness. We use accuracy, precision, recall, and F1 score to evaluate how well the representation and prediction perform.

General Performance Analysis. The general performance of SAFE and the baselines is provided in Table 2. The results indicate that, when predicting fake news, SAFE outperforms all baselines in terms of accuracy and F1 score on both datasets. On PolitiFact data, the overall ranking of the methods is SAFE > att-RNN ≈ LIWC > VGG-19, while on GossipCop data it is SAFE > VGG-19 > att-RNN > LIWC. Note that multiple supervised learners (such as SVM, decision tree, logistic regression, and k-NN) were paired with LIWC in our experiments; we report the best performance (obtained with random forest) in Table 2.

Module Analysis. The performance of the SAFE variants is provided in Table 2 and Fig. 3. The results indicate that, when predicting fake news, (1) integrating news textual information, visual information, and their relationship (SAFE) performs best among all variants; (2) using multi-modal information (SAFE\S or SAFE\W) performs better than using single-modal information (SAFE\T or SAFE\V); (3) independently using multi-modal information (SAFE\S) and mining the cross-modal relationship (SAFE\W) perform comparably; and (4) textual information (SAFE\V) is more important than visual information (SAFE\T).

Parameter Analysis. In Eq. (9), α and β are used to allocate the relative importance between the extracted multi-modal features (α) and the similarity across modalities (β). To assess their influence on method performance, we evaluate SAFE with varying values of α and β.
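A minimal sketch of such a sensitivity sweep is below; the grid, the step size, and the train_eval_fn callback (e.g., wrapping the train_step shown earlier plus evaluation) are hypothetical assumptions, not the authors' protocol.

```python
# A hypothetical sketch of the alpha/beta sensitivity analysis above.
import itertools

def sweep_alpha_beta(train_eval_fn, grid=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0)):
    """Train and evaluate SAFE for each (alpha, beta) pair, collecting
    metrics (e.g., accuracy and F1) to gauge the relative importance of
    the two loss terms in Eq. (9)."""
    results = {}
    for alpha, beta in itertools.product(grid, repeat=2):
        # train_eval_fn is expected to run training and return a metric dict.
        results[(alpha, beta)] = train_eval_fn(alpha=alpha, beta=beta)
    return results
```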
Case Study. In our case studies, we aim to answer the following questions: are there real-world fake news stories whose textual and visual information are not closely related to each other? If so, can SAFE correctly recognize such irrelevance and, in turn, the stories' falsity? For this purpose, we went through the news articles in the two datasets and compared their ground-truth labels with the similarity scores computed by SAFE. Several examples are presented in Figs. 5 and 6. It can be observed that (I) a gap between textual and visual information exists for some fictitious stories, for (but not limited to) two reasons. First, such stories are difficult to support with non-manipulated images. An example is shown in Fig. 5a, where no voting- or bill-related image is actually available. Likewise, compared to couples in a real intimate relationship, fabricated ones often have rare group photos or use collages (see Fig. 5c). Second, using "attractive" though not closely relevant images can help increase news traffic. For example, the fake news story in Fig. 5b includes an image of a smiling individual, which conflicts with the death it reports. (II) SAFE correctly assesses the relationship (similarity) between news textual and visual information: the fake news stories in Fig. 5 all receive low similarity scores and are correctly labeled as fake, while the true news stories in Fig. 6 all receive high similarity scores and are predicted as true.

6 Conclusion

In this work, a similarity-aware multi-modal method, named SAFE, is proposed to predict fake news. The method extracts both textual and visual features of news content and investigates their relationship. Experimental results indicate that multi-modal features and the cross-modal relationship (similarity) are valuable, with comparable importance, in fake news detection. The case studies further validate the effectiveness of the proposed method in assessing such similarity and predicting fake news. Nevertheless, we should point out that the proposed method investigates textual and visual information without considering, e.g., network or video information. Additionally, relationships within modalities, such as the textual (or visual) similarity between pairs of news articles, are valuable as well. Both directions will be part of our future work.
References

1. Information credibility on Twitter.
2. Different spirals of sameness: a study of content sharing in mainstream and alternative media.
3. Echo Chamber: Rush Limbaugh and the Conservative Media Establishment.
4. Multimodal fusion with recurrent neural networks for rumor detection on microblogs.
5. Novel visual and statistical image features for microblogs news verification.
6. Learning hierarchical discourse-level structure for fake news detection.
7. MVAE: multimodal variational autoencoder for fake news detection.
8. Convolutional neural networks for sentence classification.
9. Efficient estimation of word representations in vector space.
10. The development and psychometric properties of LIWC2015.
11. Automatic detection of fake news.
12. A stylometric inquiry into hyperpartisan and fake news.
13. Neural user response generator: fake news detection with collective user intelligence.
14. Truth and deception at the rhetorical structure level.
15. CSI: a hybrid deep model for fake news detection.
16. FakeNewsNet: a data repository with news content, social context and dynamic information for studying fake news on social media.
17. Very deep convolutional networks for large-scale image recognition.
18. Multimodal review generation for recommender systems.
19. Show and tell: lessons learned from the 2015 MSCOCO image captioning challenge.
20. The spread of true and false news online.
21. Learning cross-modal embeddings with adversarial networks for cooking recipes and food images.
22. "Liar, liar pants on fire": a new benchmark dataset for fake news detection.
23. EANN: event adversarial neural networks for multi-modal fake news detection.
24. TI-CNN: convolutional neural networks for fake news detection.
25. Fake news research: theories, detection strategies, and open problems.
26. Fake news early detection: a theory-driven model.
27. Fake news: a survey of research, detection methods, and opportunities.