key: cord-0145902-hxrw441r authors: Cheema, Gullal S.; Hakimov, Sherzod; Ewerth, Ralph title: Check_square at CheckThat! 2020: Claim Detection in Social Media via Fusion of Transformer and Syntactic Features date: 2020-07-21 journal: nan DOI: nan sha: 5985ada6b9d1fe0ffac218065734e9357d124f78 doc_id: 145902 cord_uid: hxrw441r

In this digital age of news consumption, a news reader has the ability to react, express, and share opinions with others in a highly interactive and fast manner. As a consequence, fake news has made its way into our daily lives because of the very limited capacity of large companies as well as individuals to verify news on the Internet. In this paper, we focus on solving two problems which are part of the fact-checking ecosystem and which can help to automate fact-checking of claims in an ever-increasing stream of content on social media. For the first problem, claim check-worthiness prediction, we explore the fusion of syntactic features and deep transformer Bidirectional Encoder Representations from Transformers (BERT) embeddings to classify the check-worthiness of a tweet, i.e., whether it includes a claim or not. We conduct a detailed feature analysis and present our best performing models for English and Arabic tweets. For the second problem, claim retrieval, we explore the pre-trained embeddings from a Siamese network transformer model (sentence-transformers) specifically trained for semantic textual similarity, and perform KD-tree search to retrieve verified claims with respect to a query tweet.

Social media is increasingly becoming the main source of news for so many people. Of the roughly 2.5 billion Internet users, 12% receive breaking news from Twitter rather than traditional media, according to a 2018 survey report [38]. Fake news in general can be defined [46] as the fabrication and manipulation of information and facts with the main intention of deceiving the reader. As a result, fake news can have several undesired and negative consequences. For example, recent news around the COVID-19 pandemic with unverified claims that masks lead to a rise in carbon dioxide levels caused an online movement against wearing masks. With the ease of accessing and sharing news on Twitter, news spreads much faster from the moment an event occurs in any part of the world. Although the survey report [38] found that almost 60% of users expect news on social media to be inaccurate, that still leaves millions of people who will spread fake news expecting it to be true. Considering the vast amount of news that spreads every day, there has been a rise in independent fact-checking projects like Snopes, Alt News, and Our.News, which investigate news that spreads online and publish the results for public use. Most of these independent projects rely on manual efforts that are time-consuming, which makes it hard to keep up with the rate of news production. Therefore, it has become very important to develop tools that can process news at a rapid rate and provide news consumers with some kind of authenticity measure that reflects the correctness of claims in the news. In this paper, we focus on two sub-problems in CheckThat! 2020 [37] that are part of a larger fact-checking ecosystem. In the first task, we focus on learning a model that can recognize check-worthy claims on Twitter. We present a solution that works for both English [37] and Arabic [16] tweets. Some examples of tweets, classified according to whether they contain a check-worthy claim or not, are shown in Table 1.
One can see that the claims which are not check-worthy look like personal opinions and do not pose any threat to a larger audience. We explore the fusion of syntactic features and deep transformer Bidirectional Encoder Representations from Transformers (BERT) embeddings to classify the check-worthiness of a tweet. We use Part-of-Speech (POS) tags, named entities, and dependency relations as syntactic features, and a combination of hidden layers in BERT to compute a tweet embedding. Before learning the model with a Support Vector Machine (SVM) [45], we use Principal Component Analysis (PCA) [49] for dimensionality reduction. In the second task, we focus on learning a model that can accurately retrieve verified claims with respect to a query claim, where the query claim is a tweet and the verified claims are snippets from actual documents. The verified claim is true and thus acts as the evidence/support for the query tweet. Some example pairs of tweets and claims can be seen in Table 2; the pairs share a lot of contextual information, which makes this task a semantic textual similarity problem. For this reason, we explore the pre-trained embeddings from a Siamese network transformer model (sentence-transformers) specifically trained for semantic textual similarity and perform KD-tree search to retrieve claims. The remainder of the paper is organized as follows. Section 2 briefly discusses previous work on fake news detection and the CheckThat! tasks in particular. Section 3 presents details of our approach for Task-1 and Task-2. Section 4 describes the experimental details and results. Section 5 summarizes our conclusions and future research directions.

Table 2 examples (tweet and matching verified claim):
Verified claim: "The U.S. Army is sending text messages informing people they've been selected for the military draft."
Tweet: "El Paso was NEVER one of the MOST dangerous cities in the US. We've had a fence for 10 years and it has impacted illegal immigration and curbed criminal activity. It is NOT the sole deterrent. Law enforcement in our community continues to keep us safe #SOTU"
Verified claim: "El Paso was one of the U.S. most dangerous cities before a border fence was built there."
Tweet: "Hey @Always since today is #TransVisibilityDay it's probably important to point out the fact that this new packaging isn't trans* friendly. Just a reminder that Menstruation does not equal Female. Maybe rethink this new look."
Verified claim: "In 2019, trans activists or 'the transgender lobby' forced Procter & Gamble to remove the Venus symbol from menstruation products."

Fake news has been studied from different perspectives in the last five years, like factuality or credibility detection [32, 13, 36, 18, 17], rumour detection [53, 52, 41, 51], propagation in networks [24, 28, 39, 30], use of multiple modalities [23, 47, 42], and also as an ecosystem of smaller sub-problems as in CheckThat! [29, 9, 1]. For social media in particular, Shu et al. [40] studied and provided a comprehensive review of fake news detection with characterizations from psychology and social science, and existing computational algorithms from a data mining perspective. Since Twitter has become a source of news for so many people, researchers have extensively used the platform to formulate problems, extract data, and test their algorithms. For instance, Zubiaga et al. [53] extracted tweets around breaking news and used Conditional Random Fields to exploit context during the sequential learning process for rumour detection. Buntain et al.
[4] studied three large Twitter datasets and developed models to predict accuracy assessments of fake news by crowd-sourced workers and journalists. While many approaches rely on tweet content for detecting fake news, there has been a rise in methods that exploit user characteristics and metadata to model the problem as fake news propagation. For example, Liu et al. [24] modeled the propagation path of each news story as a multivariate time series over users who engaged in spreading the news via tweets. They further classified fake news using Gated Recurrent Units (GRU) and Convolutional Neural Networks (CNN) to capture the global and local variations of user characteristics, respectively. Monti et al. [28] went a step further and used a hybrid feature set including user characteristics, social network structure, and tweet content. They modeled the problem as binary prediction using a Graph CNN, resulting in a highly accurate fake news detector. Besides fake news detection, the sub-task of predicting the check-worthiness of claims has also been explored recently, mostly in a political context. For example, Hassan et al. [18, 19] proposed a system that predicts the check-worthiness of statements made by presidential candidates using an SVM [45] classifier and a combination of lexical and syntactic features. They also compared their results with fact-checking organizations like CNN and PolitiFact. Later, in CheckThat! 2018 [29], several methods were proposed to improve check-worthiness prediction for claims in political debates. The best methods used a combination of lexical and syntactic features like Bag of Words (BoW), Parts-of-Speech (POS) tags, named entities, sentiment, topic modeling, dependency parse trees, and word embeddings [27]. Various classifiers were built using either Recurrent Neural Networks (RNN) [14, 54], gradient boosting [50], k-nearest neighbor [12], or SVM [54]. In the 2019 edition of CheckThat! [9], in addition to using lexical and syntactic features [11], top approaches relied on learning richer content embeddings and utilized external data for better performance. For example, Hansen et al. [15] used word embeddings and syntactic dependency features as input to an LSTM network, enriched the dataset with additional samples from the ClaimBuster system [20], and trained the network with a contrastive ranking loss. Favano et al. [10] trained a neural network with Standard Universal Sentence Encoder (SUSE) [6] embeddings of the current sentence and the previous two sentences as context. Another approach by Su et al. [44] used co-reference resolution to replace pronouns with named entities to get a feature representation based on bag of words, named entity similarity, and relatedness. Other than political debates, Jaradat et al. [22] proposed an online multilingual check-worthiness system that works for different sources (debates, news articles, interviews) in English and Arabic. They use annotated data from reputable fact-checking organizations and the best-performing feature representations from previous approaches. For tweets in particular, Majithia et al. [25] proposed a system to monitor, search, and analyze factual claims in political tweets with ClaimBuster [20] at the backend for check-worthiness. Lastly, Dogan et al. [8] also conducted a detailed study on detecting check-worthy tweets in U.S. politics and proposed a real-time system to filter them.

Check-worthiness prediction is the task of predicting whether a tweet includes a claim that is of interest to a large audience.
Our approach is motivated by the successful use of lexical, syntactic, and contextual features in the previous editions of the CheckThat! check-worthiness task for political debates. Given that this task provides only a small amount of training data, we approach the problem by creating a rich feature representation, reducing the dimensionality of the large feature set with PCA [49], and then learning the model with an SVM. In doing so, our goal is also to understand which features are the most important for check-worthiness prediction from tweet content. As context is very important for downstream NLP tasks, we experiment with word embeddings (word2vec [27], GloVe [31]) and BERT [7] embeddings to create a sentence representation of each tweet. Our pre-processing and feature extraction are agnostic to the topic of the tweet, so they can be applied to any domain. Next, we provide details about all the features used, their extraction, and the encoding process. Our overall approach can be seen in Figure 2.

Pre-processing: We use two publicly available pre-processing tools for English and Arabic tweets. We use Baziotis et al.'s [2] tool for English to apply the following normalization steps: tokenization, lower-casing, removal of punctuation, spell correction, and normalization of hashtags, all-caps, censored, elongated, and repeated words, as well as of terms like URLs, emails, phone numbers, and user mentions. We use the Stanford Stanza [33] toolkit to pre-process Arabic tweets by applying the following normalization steps: tokenization, multi-word token expansion, and lemmatization. In the case of extracting word embeddings from a transformer network, we use the raw text, as these networks have their own tokenization process.

We use the following syntactic features for the English and Arabic tasks: Parts-of-Speech (POS) tags, named entities (NE), and dependency parse tree relations. We use the pre-processed text and run off-the-shelf tools to extract syntactic information from tweets and then convert each group of information to feature sets. For English, we use spaCy [21], and for Arabic tweets, Stanford Stanza [33], to extract the following syntactic features. For all the features, we experiment with keeping and removing stop-words to evaluate their effect.

Part-of-Speech: For both English and Arabic, we extract 16 POS tags in total, and through our empirical evaluation we find the following eight tags to be the most useful when used as features: NOUN, VERB, PROPN, ADJ, ADV, NUM, ADP, PRON. For Arabic, four additional tags are useful as features: DET, INTJ, AUX, PART. We use the chosen set of POS tags for the respective language to encode the syntactic information of tweets.

Named Entities: Through our evaluation, we identified the following named entity types to be the most important features: (GPE, PERSON, ORG, NORP, LOC, DATE, CARDINAL, TIME, ORDINAL, FAC, MONEY) for English and (LOC, PER, ORG, MISC) for Arabic. We also found, while developing feature combinations, that named entities do not add much value to the overall accuracy; hence, our primary and contrastive submissions do not include them.

Syntactic Dependencies: These features are constructed using the dependency relations between tokens in a given tweet. We use the dependency relation between two nodes in the parsed tree if both the child and parent nodes' POS tags are one of the following: ADJ, ADV, NOUN, PROPN, VERB, or NUM.
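For illustration, a minimal sketch of this extraction step for English with spaCy is given below. The specific spaCy model, the function name, and the toy tweet are our own assumptions and not from the paper; the tag and entity sets follow the lists above, and the Arabic pipeline uses Stanza instead.

```python
# Minimal sketch (not the paper's exact code) of extracting the syntactic
# information described above for an English tweet with spaCy: the selected
# POS tags, named entity types, and dependency relations whose child and
# parent (head) tokens both carry one of the allowed POS tags.
import spacy

POS_TAGS = {"NOUN", "VERB", "PROPN", "ADJ", "ADV", "NUM", "ADP", "PRON"}
NE_TYPES = {"GPE", "PERSON", "ORG", "NORP", "LOC", "DATE",
            "CARDINAL", "TIME", "ORDINAL", "FAC", "MONEY"}
DEP_POS = {"ADJ", "ADV", "NOUN", "PROPN", "VERB", "NUM"}

nlp = spacy.load("en_core_web_sm")  # assumed model; any English spaCy pipeline works


def extract_syntax(text: str):
    doc = nlp(text)
    pos_tags = [t.pos_ for t in doc if t.pos_ in POS_TAGS]
    entities = [ent.label_ for ent in doc.ents if ent.label_ in NE_TYPES]
    # Keep a dependency relation only if both the child token and its head
    # carry one of the allowed POS tags.
    dep_relations = [(t.pos_, t.dep_, t.head.pos_)
                     for t in doc
                     if t.pos_ in DEP_POS and t.head.pos_ in DEP_POS]
    return pos_tags, entities, dep_relations


# Toy example (not from the dataset):
pos_tags, entities, deps = extract_syntax(
    "Masks cause a rise in carbon dioxide levels, says a viral post.")
```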
All dependency relations that match the defined constraint are converted into triplets of the form (child node POS, dependency relation, parent POS) and pairs of the form (child node POS, dependency relation), where the parent node's POS is not part of the feature representation. This process is shown in Figure 1. We found that the pair-based features perform better than the triplet features. The dimension of the feature vector for English and Arabic is 133 and 134, respectively. To encode a feature group, we build a histogram vector that contains the count of each type of tag, named entity, or syntactic relation pair. The process of feature encoding is shown in Figure 1. Finally, we normalize each type of feature by the maximum value in its vector.

Average Word Embeddings: One simple way to get a contextual representation of a sentence is to average the word embeddings of each token in the sentence. For this purpose, we experiment with three types of word embeddings pre-trained on three different sources for English: GloVe embeddings [31] trained on Twitter and Wikipedia, word2vec embeddings [27] trained on Google News, and FastText embeddings [26] trained on multiple sources. In addition, we also experiment with removing stop-words from the averaged word representation, as stop-words can dominate the average and result in a less meaningful sentence representation. For Arabic, we use word2vec embeddings that are trained on Arabic tweets and Arabic Wikipedia [43].

Transformer Features: Another way to extract contextual features is to use BERT [7] embeddings that are trained using the context of each word in a sentence. BERT is usually trained on a very large text corpus, which makes it very useful for off-the-shelf feature extraction and fine-tuning for downstream NLP tasks. To get one embedding per tweet, we follow the observations made in [7] that different layers of BERT capture different kinds of information, so an appropriate pooling strategy should be applied depending on the task. The paper also suggests that the last four hidden layers of the network are good for transfer learning tasks, and thus we experiment with four different combinations: concatenation of the last 4 hidden layers, average of the last 4 hidden layers, the last hidden layer, and the 2nd-to-last hidden layer. We normalize the final embedding so that the L2 norm of the vector is 1. We also experimented with BERT's pooled sentence embedding encoded in the CLS (class) token, which performed significantly worse than the pooling strategies we employed. For Arabic, we only experimented with a sentence-transformer [34] that is trained on a multilingual corpus and outputs a sentence embedding for each tweet/sentence.

Sentence Representation: To get the overall representation of the tweet, we concatenate all the syntactic features together with either the average word embedding or the BERT-based transformer features, and then apply PCA for dimensionality reduction. An SVM classifier is trained on the feature vectors of tweets to output a binary decision (check-worthy or not). The overall process is shown in Figure 2.

Claim Retrieval is the task of retrieving the most similar already verified claim for a query claim. For this task, it is important that the feature representation captures the meaning and context of words and phrases so that the query matches the correct verified claim.
To meet this requirement, we rely on a triplet-network setting, where the network is trained with triplets consisting of an anchor sample a, a positive sample p, and a negative sample n. We use a triplet loss to fine-tune a pre-trained sentence embedding network, such that the distance between a and p is smaller than the distance between a and n, using the following loss function:

L = (1/N) Σ_{i=1}^{N} max( ‖S_i^a − S_i^p‖ − ‖S_i^a − S_i^n‖ + m , 0 )

where S_i^a, S_i^p, and S_i^n are the sentence embeddings of the i-th triplet, ‖·‖ denotes the Euclidean distance, m is the margin (set to 1), and N is the number of samples in the batch. As each verified claim is a tuple consisting of text and title, we create two triplets for every true tweet-claim pair, i.e., (anchor tweet, true claim text, negative claim text) and (anchor tweet, true claim title, negative claim title). This increases the number of positive samples for training, as there are only 800 samples and one true claim for every tweet. To get negative claims, we select the 3 claims with the highest cosine similarity that are not the true claim for the anchor tweet, using the pre-trained sentence-transformer embeddings. For pre-processing, we use Baziotis et al.'s [2] tool to remove URLs, emails, phone numbers, and user mentions from the tweets, as the claim text and title do not contain any such information.

As retrieval is a search task, we use KD-tree search to find the already verified claim with the minimum Euclidean distance to the query claim. The sentence embeddings extracted from the network are used to build a KD-tree, and for each query claim, the top 1000 verified claims are retrieved from the tree for evaluation. For building the KD-tree, we average the sentence embeddings of the claim text and claim title, as this performs better than using either one alone. In our ablation study, we directly compute the cosine similarity between each query tweet and all the verified claims, and pick the top 1000 verified claims (highest cosine similarity) for evaluation. We conduct this second evaluation because building a KD-tree can affect the retrieval accuracy.

Sentence Transformers for Textual Similarity: As a backbone network for extracting sentence embeddings and fine-tuning with the triplet loss, we use the recently proposed Sentence-BERT [34], which learns embeddings in Siamese (pair) and triplet network settings. We experiment with the pre-trained Siamese network models trained on the SNLI (Stanford Natural Language Inference) [3] and STSb (Semantic Textual Similarity benchmark) [5] datasets, which have been shown to perform very well for semantic textual similarity.

Dataset and Training Details: The English dataset consists of training, development (dev), and test splits with 672, 150, and 140 tweets, respectively, on the topic of COVID-19. We perform a grid search using the development set to find the best parameters. The Arabic dataset consists of training and test splits with 1500 tweets on 3 topics and 6000 tweets on 12 topics, respectively, with 500 tweets per topic. For validation purposes, we keep 10% (150 samples) of the training data as a development set. The official ranking of submitted systems for this task is based on Mean Average Precision (MAP) and Precision@30 (P@30) for the English and Arabic datasets, respectively. To train the SVM models for both English and Arabic, we perform a grid search over the PCA energy conservation (%), the regularization parameter C, and the RBF kernel's gamma. The PCA parameter ranges from 100% (original features) to 95% in decrements of 1, and both C and gamma vary from 10^-3 to 10^3 on a log scale with 30 steps.
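To make the check-worthiness pipeline and this hyper-parameter grid concrete, the following is a minimal sketch under stated assumptions: it uses HuggingFace transformers, spaCy, and scikit-learn's GridSearchCV for illustration (the paper itself trains with ThunderSVM, as noted next), the model name bert-large-uncased and the average-precision scoring are placeholders, and the syntactic part is abbreviated to the POS-tag histogram.

```python
# Condensed sketch of the check-worthiness pipeline described above: a tweet is
# represented by a max-normalized POS-tag histogram (NE and dependency-pair
# histograms are built analogously) concatenated with an L2-normalized BERT
# embedding (average of the last 4 hidden layers); PCA + RBF-SVM are then tuned
# over the grid just described. Library and model choices are illustrative only.
import numpy as np
import spacy
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

POS_TAGS = ["NOUN", "VERB", "PROPN", "ADJ", "ADV", "NUM", "ADP", "PRON"]
nlp = spacy.load("en_core_web_sm")
tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")
bert = AutoModel.from_pretrained("bert-large-uncased", output_hidden_states=True).eval()


def pos_histogram(text: str) -> np.ndarray:
    doc = nlp(text)
    counts = np.array([sum(t.pos_ == tag for t in doc) for tag in POS_TAGS], float)
    return counts / counts.max() if counts.max() > 0 else counts


def bert_embedding(text: str) -> np.ndarray:
    enc = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = bert(**enc).hidden_states              # embedding layer + 24 layers
    vec = torch.stack(hidden[-4:]).mean(0).mean(1).squeeze(0).numpy()
    return vec / np.linalg.norm(vec)                    # unit L2 norm


def tweet_vector(text: str) -> np.ndarray:
    return np.concatenate([pos_histogram(text), bert_embedding(text)])


param_grid = {
    "pca__n_components": [None, 0.99, 0.98, 0.97, 0.96, 0.95],  # 100% .. 95% energy
    "svm__C": np.logspace(-3, 3, 30),                           # 10^-3 .. 10^3, 30 steps
    "svm__gamma": np.logspace(-3, 3, 30),
}
search = GridSearchCV(Pipeline([("pca", PCA()), ("svm", SVC(kernel="rbf"))]),
                      param_grid, scoring="average_precision", n_jobs=-1)
# X = np.vstack([tweet_vector(t) for t in tweets]); search.fit(X, labels)
```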
For faster training on the large grid search, we use ThunderSVM [48], which takes advantage of a GPU or a multi-core system to speed up SVM training. Our submissions used the best models that we obtained from the grid search and are briefly discussed below.

English: We made 3 submissions in total. Our primary (Run-1) and 2nd contrastive (Run-3) submissions use sentence embeddings computed from BERT-large word embeddings, as discussed in the proposed work section. In addition, both submissions use POS tag and dependency relation features. Interestingly, we found that the best-performing sentence embeddings did not include stop-words. The primary submission (Run-1) uses an ensemble of predictions from three models trained on the concatenation of the last 4 hidden layers, the average of the last 4 hidden layers, and the 2nd-to-last hidden layer. The 2nd contrastive submission (Run-3) uses predictions from the model trained on the best-performing sentence embedding, computed by concatenating the last 4 hidden layers. Our 1st contrastive submission (Run-2) uses an ensemble of predictions from three models trained with 25-, 50-, and 100-dimensional GloVe [31] Twitter embeddings, combined with the same POS tag and dependency relation features. We use majority voting to get the final prediction and the mean of decision values to get the final decision value. We found that removing stop-words when computing the average of word embeddings actually degraded the performance, and hence we included them in the average. We also report additional results in Table 3 to show the effect of stop-words, POS tags, named entities, dependency relations, and ensemble predictions. The effect of stop-words can be clearly seen in the alternative runs of Run-1 and Run-3, where the MAP drops by 1-2 points. Similarly, the negative effect of removing POS tag and dependency relation features can be seen in the rest of the alternative runs. Lastly, adding named entity features to the original submissions also decreases the precision by 1-2 points. This might be because the tweets contain very few named entities, which are therefore not useful for distinguishing between check-worthy and not check-worthy claims.

Arabic: We made a total of four submissions for this task. Our best-performing submission (Run-1) uses 100-dimensional Arabic word2vec embeddings trained on a Twitter corpus [43] in combination with POS tag features. Our second and third submissions are redundant in terms of feature use, so we only mention the second one (Run-2) here. In addition to the features used in the first submission, it uses dependency relation features and 300-dimensional Twitter embeddings instead of 100-dimensional ones. Our last submission (Run-3) uses only the pre-trained multilingual sentence-transformer [35] that is trained on 10 languages including Arabic. In the first three submissions, we removed stop-words from all the features, as keeping them resulted in poorer performance. Precision@K and Average Precision (AP) results on the test set are shown in the same order in Table 4. The official metric for ranking is P@30.

Dataset and Training Details: The dataset in this task has 1,003 tweets for training and 200 tweets for testing. These tweets are to be matched against a set of already verified claims. To fine-tune the sentence-transformer network with the triplet loss, we use a batch size of eight and train the network for two epochs. The official ranking for this task is based on Mean Average Precision@5 (MAP@5). All tweets and verified claims are in English.
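As an illustration of the fine-tuning and retrieval steps just described, here is a minimal sketch using the sentence-transformers library and scikit-learn's KDTree. The model name is a placeholder for the SNLI-trained BERT variants used in the submissions, the toy strings stand in for the provided tweets and the verified-claim collection, and the hard-negative mining is assumed to have been done beforehand; batch size 8, two epochs, and margin m = 1 follow the paper.

```python
# Sketch of the Task-2 fine-tuning and retrieval steps: fine-tune a pre-trained
# sentence-transformer with a triplet loss, embed verified claims (average of
# text and title embeddings), index them in a KD-tree, and query by Euclidean
# distance for each tweet.
from sentence_transformers import InputExample, SentenceTransformer, losses
from sklearn.neighbors import KDTree
from torch.utils.data import DataLoader

model = SentenceTransformer("bert-base-nli-mean-tokens")  # placeholder model name

# Triplets: (anchor tweet, true claim text or title, hard-negative claim), with
# negatives chosen among the most cosine-similar non-matching claims.
triplets = [("tweet text", "matching verified claim", "non-matching claim")]
train_examples = [InputExample(texts=[a, p, n]) for a, p, n in triplets]
train_loader = DataLoader(train_examples, shuffle=True, batch_size=8)
train_loss = losses.TripletLoss(model=model, triplet_margin=1.0)
model.fit(train_objectives=[(train_loader, train_loss)], epochs=2)

# Index the verified claims: embed text and title separately and average them.
claim_texts = ["matching verified claim", "non-matching claim"]
claim_titles = ["matching claim title", "other claim title"]
claim_vecs = (model.encode(claim_texts) + model.encode(claim_titles)) / 2.0
tree = KDTree(claim_vecs)

# Retrieve the nearest verified claims (Euclidean distance) for each query tweet.
query_vecs = model.encode(["tweet text"])
dist, idx = tree.query(query_vecs, k=min(1000, len(claim_texts)))
```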
Our primary (Run-1) and 2nd contrastive (Run-3) submissions use BERT-base and BERT-large pre-trained on the SNLI dataset, with sentence embeddings obtained via CLS-token and MAX pooling, respectively. We fine-tune these two networks with the triplet loss. In contrast, our 1st contrastive submission (Run-2) uses the multilingual DistilBERT model [35] trained on 10 languages including English. This model is used directly to test the pre-trained embeddings.

Results: Interestingly, the pre-trained embeddings extracted from DistilBERT without any fine-tuning turn out to be better for semantic similarity than the fine-tuned monolingual BERT models. Having said that, the fine-tuned BERT models do perform better than their pre-trained embeddings without fine-tuning, and the difference can be seen in the bottom 2 rows of Table 5. We also tried to fine-tune the multilingual model, which drastically decreased the retrieval performance. The decrease can be attributed to the pre-training process [35], in which the model was trained in a teacher-student knowledge distillation framework and on multiple languages. As stated in the proposed work section, we conduct a second evaluation that retrieves the claims with the highest cosine similarity without KD-tree search, and the results are significantly better, as shown in Table 5.

In this paper, we have presented our solutions for two tasks in CLEF CheckThat! 2020. In the first task, we used syntactic and contextual features with SVMs for predicting the check-worthiness of tweets in Arabic and English. For syntactic features, we evaluated Parts-of-Speech tags, named entities, and syntactic dependency relations, and used the best feature sets for both languages. In the case of contextual features, we evaluated different word embeddings, BERT models, and sentence-transformers to capture the semantics of each tweet or sentence. For future work, we would like to evaluate the possibility of using relevant metadata and other modalities like images and videos present in tweets for claim check-worthiness. In the second task, we evaluated monolingual and multilingual sentence-transformers to retrieve verified claims for the query tweet. We found that an off-the-shelf multilingual sentence-transformer is better suited for the semantic textual similarity task than monolingual BERT models.

References:
[1] Overview of CheckThat! 2020: Automatic identification and verification of claims in social media
[2] DataStories at SemEval-2017 Task 4: Deep LSTM with attention for message-level and topic-based sentiment analysis
[3] A large annotated corpus for learning natural language inference
[4] Automatically identifying fake news in popular Twitter threads
[5] SemEval-2017 Task 1: Semantic textual similarity - multilingual and cross-lingual focused evaluation
[6] Universal sentence encoder
[7] BERT: Pre-training of deep bidirectional transformers for language understanding
[8] Detecting real-time check-worthy factual claims in tweets related to US politics
[9] Overview of the CLEF-2019 CheckThat! lab: Automatic identification and verification of claims
[10] TheEarthIsFlat's submission to CLEF'19 CheckThat! challenge
[11] The IPIPAN team participation in the check-worthiness task of the CLEF 2019 CheckThat! lab
[12] UPV-INAOE-Autoritas - Check That: Preliminary approach for checking worthiness of claims
[13] Leveraging emotional signals for credibility detection
[14] The Copenhagen team participation in the check-worthiness task of the competition of automatic identification and verification of claims in political debates of the CLEF-2018 CheckThat! lab
[15] Neural weakly supervised fact check-worthiness detection with contrastive sampling-based ranking loss
[16] Overview of CheckThat! 2020 Arabic: Automatic identification and verification of claims in social media
[17] Toward automated fact-checking: Detecting check-worthy factual claims by ClaimBuster
[18] Detecting check-worthy factual claims in presidential debates
[19] Comparing automated factual claim detection against judgments of journalism organizations
[20] ClaimBuster: The first-ever end-to-end fact-checking system
[21] spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing
[22] ClaimRank: Detecting check-worthy claims in Arabic and English
[23] MVAE: Multimodal variational autoencoder for fake news detection
[24] Early detection of fake news on social media through propagation path classification with recurrent and convolutional networks
[25] ClaimPortal: Integrated monitoring, searching, checking, and analytics of factual claims on Twitter
[26] Advances in pre-training distributed word representations
[27] Distributed representations of words and phrases and their compositionality
[28] Fake news detection on social media using geometric deep learning
[29] CheckThat! lab on automatic identification and verification of political claims
[30] Fake news propagation and detection: A sequential model
[31] GloVe: Global vectors for word representation
[32] Credibility assessment of textual claims on the web
[33] Stanza: A Python natural language processing toolkit for many human languages
[34] Sentence-BERT: Sentence embeddings using Siamese BERT-networks
[35] Making monolingual sentence embeddings multilingual using knowledge distillation
[36] ClaimEval: Integrated and flexible framework for claim evaluation using credibility of sources
[37] Overview of CheckThat! 2020 English: Automatic identification and verification of claims in social media
[38] How Social Media Has Changed How We Consume News
[39] Studying fake news via network analysis: Detection and mitigation
[40] Fake news detection on social media: A data mining perspective
[41] Twitter rumour detection in the health domain
[42] SpotFake: A multi-modal framework for fake news detection
[43] AraVec: A set of Arabic word embedding models for use in Arabic NLP
[44] Entity detection for check-worthiness prediction: Glasgow Terrier at CLEF CheckThat!
[45] Least squares support vector machine classifiers
[46] Defining "fake news": A typology of scholarly definitions
[47] EANN: Event adversarial neural networks for multi-modal fake news detection
[48] ThunderSVM: A fast SVM library on GPUs and CPUs
[49] Principal component analysis. Chemometrics and Intelligent Laboratory Systems
[50] bigIR at CLEF 2018: Detection and verification of check-worthy political claims
[51] Enquiring minds: Early detection of rumors in social media from enquiry posts
[52] Detection and resolution of rumours in social media: A survey
[53] Exploiting context for rumour detection in social media
[54] A hybrid recognition system for check-worthy claims using heuristics and supervised learning

Acknowledgment: This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement no. 812997.