title: Irony Detection in a Multilingual Context
authors: Ghanem, Bilal; Karoui, Jihen; Benamara, Farah; Rosso, Paolo; Moriceau, Véronique
date: 2020-03-24
journal: Advances in Information Retrieval
DOI: 10.1007/978-3-030-45442-5_18

This paper proposes the first multilingual (French, English and Arabic) and multicultural (Indo-European languages vs. less culturally close languages) irony detection system. We employ both feature-based models and neural architectures using monolingual word representations, and we compare the performance of these systems with state-of-the-art systems to identify their capabilities. We show that monolingual models trained separately on different languages, combined with multilingual word representations or text-based features, can open the door to irony detection in languages that lack annotated irony data.

Figurative language makes use of figures of speech to convey non-literal meaning [2, 16]. It encompasses a variety of phenomena, including metaphor, humor, and irony. We focus here on irony and use it as an umbrella term covering satire, parody and sarcasm. Irony detection (ID) has gained relevance recently, owing to its importance for extracting information from texts. For example, to go beyond the literal matches of user queries, Veale enriched information retrieval with new operators enabling the non-literal retrieval of creative expressions [40]. Also, the performance of sentiment analysis systems decreases drastically when they are applied to ironic texts [5, 19]. Most related work concerns English [17, 21], with some efforts in French [23], Portuguese [7], Italian [14], Dutch [26], Hindi [37], Spanish variants [31] and Arabic [11, 22]. Bilingual ID with one model per language has also been explored, e.g., English-Czech [32] and English-Chinese [38], but not from a cross-lingual perspective.

In social media such as Twitter, specific hashtags (#irony, #sarcasm) are often used as gold labels to detect irony in a supervised learning setting. Although recent studies have pointed out the issue of false-alarm hashtags in self-labeled data [20], ID via hashtag filtering provides researchers with high-precision positive examples (a minimal sketch of this self-labeling setup is given below). On the other hand, systems are unable to detect irony in languages where such filtering is not always possible. Multilingual prediction (relying either on machine translation or on multilingual embedding methods) is a common solution for tackling under-resourced languages [6, 33]. While multilinguality has been widely investigated in information retrieval [27, 34] and in several NLP tasks (e.g., sentiment analysis [3, 4] and named entity recognition [30]), it has not yet been explored for irony.

We aim here to bridge this gap by tackling ID in tweets from both a multilingual (French, English and Arabic) and a multicultural perspective (Indo-European languages whose speakers share roughly the same cultural background vs. less culturally close languages). Our approach relies neither on machine translation nor on parallel corpora (which are not always available), but rather builds on previous corpus-based studies showing that irony is a universal phenomenon and that many languages share similar irony devices. For example, Karoui et al. [24] concluded that their multi-layer annotation schema, initially used to annotate French tweets, is portable to English and Italian, observing roughly the same tendencies in terms of irony categories and markers.
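As an aside, the hashtag-based self-labeling described above can be sketched in a few lines of Python. This is our own illustration, not code from the paper; the tag list and function name are hypothetical.

```python
# Hypothetical distant-labeling helper: tweets carrying an irony hashtag
# become positive examples; the hashtag itself is stripped so a classifier
# cannot simply memorize the label token.
IRONY_TAGS = {"#irony", "#ironie", "#sarcasm", "#sarcasme"}

def distant_label(tweet: str):
    """Return (cleaned_text, label) with label = 1 if an irony hashtag occurs."""
    tokens = tweet.split()
    is_ironic = any(t.lower() in IRONY_TAGS for t in tokens)
    cleaned = " ".join(t for t in tokens if t.lower() not in IRONY_TAGS)
    return cleaned, int(is_ironic)

print(distant_label("Great, another Monday #irony"))
# -> ('Great, another Monday', 1)
```

Such filtering yields positive examples with high precision, but it is of no help for languages where irony hashtags are rarely used, which is precisely the gap the cross-lingual experiments below address.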
Similarly to Karoui et al. [24], Chakhachiro [8] studies irony in English and Arabic and shows that both languages share several similarities in the rhetorical (e.g., overstatement), grammatical (e.g., redundancy) and lexical (e.g., synonymy) usage of irony devices. The next step is to show to what extent these observations remain valid from a computational point of view. Our contributions are: I. A new, freely available corpus of Arabic tweets manually annotated for irony detection. II. Monolingual ID: we propose both feature-based models (relying on language-dependent and language-independent features) and neural models to measure to what extent ID is language-dependent. III. Cross-lingual ID: we experiment with cross-lingual word representations, training on one language and testing on another, to measure how culture-dependent the proposed models are. Our results are encouraging and open the door to ID in languages that lack annotated irony data.

Arabic dataset (Ar = 11,225 tweets). Our starting point was the corpus built by [22], which we extended to different political issues and events related to the Middle East and Maghreb that took place between 2011 and 2018. Tweets were collected using a set of predefined keywords (targeting specific political figures or events), whether or not they contained Arabic ironic hashtags. The collection process resulted in a set of 6,809 ironic tweets (I) vs. 15,509 non-ironic ones (NI), written in standard (formal) Arabic and in different Arabic varieties: Egyptian, Gulf, Levantine, and Maghrebi dialects. To investigate the validity of the original tweet labels, a sample of 3,000 I and 3,000 NI tweets was manually annotated by two native Arabic speakers, which resulted in 2,636 I vs. 2,876 NI tweets. The inter-annotator agreement, measured with Cohen's Kappa, was 0.76, while the agreement between the annotators' labels and the original labels was 0.6. As these agreements are relatively good given the difficulty of the task, we added to our manually labeled part 5,713 instances sampled from the original dataset (i.e., tweets labeled only via hashtags). The added tweets were manually checked to remove duplicates, very short tweets, and tweets that depend on external links, images or videos to be understood.

French dataset (Fr = 7,307 tweets). We rely on the corpus used for the DEFT 2017 French shared task on irony [5], which consists of tweets related to a set of topics discussed in the media between 2014 and 2016 and containing topic keywords and/or French irony hashtags (#ironie, #sarcasme). Tweets were annotated by three annotators (after removing the original labels), with a reported Cohen's Kappa of 0.69.

English dataset (En = 11,225 tweets). We use the corpus built by [32], which consists of 100,000 tweets collected using the hashtag #sarcasm and has been used as a benchmark in several works [13, 18]. We sampled a subset of approximately 11,200 tweets to match the sizes of the other languages' datasets.

Table 1 shows the tweet distribution in all corpora. Across the three languages, we keep a similar number of instances in the train and test sets so that the cross-lingual experiments are fair as well (see Sect. 4). For French, we use the original dataset without any modification, keeping the same number of records for train and test, to allow a direct comparison with state-of-the-art results. For the class distribution (ironic vs. non-ironic), we do not enforce a specific ratio but keep the distribution resulting from the random shuffling process.
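Both the Arabic and French datasets report inter-annotator agreement in terms of Cohen's Kappa (0.76 and 0.69, respectively). For readers unfamiliar with the measure, here is a small self-contained sketch of the computation; the toy label lists are invented for illustration.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """kappa = (p_o - p_e) / (1 - p_e): observed agreement p_o corrected
    by the chance agreement p_e implied by each annotator's label marginals."""
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[k] * freq_b[k] for k in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy binary irony labels from two annotators (I = 1, NI = 0):
a = [1, 1, 0, 0, 1, 0, 1, 0]
b = [1, 0, 0, 0, 1, 0, 1, 1]
print(cohens_kappa(a, b))  # -> 0.5 (0.75 observed vs. 0.5 chance agreement)
```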
It is important to note that our aim is not to outperform state-of-the-art models in monolingual ID, but to investigate which monolingual architectures (neural or feature-based) achieve results comparable to existing systems. The results show which kinds of features work better in the monolingual setting and can therefore be employed to detect irony in a multilingual setting. In addition, comparing monolingual results to multilingual results shows to what extent ID is language-dependent. Two models have been built, as explained below. Prior to learning, basic preprocessing steps were performed for each language (e.g., removing foreign characters, ironic hashtags, mentions, and URLs).

Feature-Based Models. We used state-of-the-art features that have been shown to be useful in ID: some are language-independent (e.g., punctuation marks, positive and negative emoticons, quotations, personal pronouns, tweet length, named entities), while others are language-dependent, relying on dedicated lexicons (e.g., negation, opinion lexicons, opposition words). Several classical machine learning classifiers were tested with several feature combinations; among them, Random Forest (RF) achieved the best results with all features.

Neural Model with Monolingual Embeddings. We used a Convolutional Neural Network (CNN) whose structure is similar to the one proposed by [25]. For the embeddings, we relied on AraVec [36] for Arabic, FastText [15] for French, and word2vec Google News [29] for English. For the three languages, the embedding size is 300, and the embeddings were fine-tuned during training. The CNN was tuned on 20% of the training corpus using the Hyperopt library.

Table 2 shows the results obtained with the train-test configuration of each language. For English, our results in terms of macro F-score (F) are not comparable to those of [32, 39], as we used only 11% of the original dataset. For French, our scores are in line with the state of the art (cf. the best system in the irony shared task achieved F = 78.3 [5]). For Arabic, our results outperform those of [22] (A = 71.7) and are comparable to those recently reported in the shared task on irony detection in Arabic tweets [11, 12] (F = 84.4). Overall, the results show that the semantic information captured by the embedding space is more productive than standard surface and lexicon-based features.

We use the previous CNN architecture with bilingual embeddings, and the RF model with surface features (e.g., use of personal pronouns, presence of interjections, emoticons or specific punctuation), to verify which pairs of the three languages (a) share similar ironic pragmatic devices and (b) use similar text-based patterns in the narrative of ironic tweets. As continuous word embedding spaces exhibit similar structures across (even distant) languages [28], we use a multilingual word representation that learns a linear mapping from a source to a target embedding space. Many methods have been proposed to learn this mapping, such as supervision from parallel data and bilingual dictionaries [28] or unsupervised methods relying on monolingual corpora [1, 10, 41]. For our experiments, we use the approach of Conneau et al., as it showed superior results with respect to the literature [10]; both the mapping step and the classifier that consumes the mapped embeddings are sketched below.
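To make the alignment step concrete, here is a minimal sketch of the supervised (Procrustes) variant of such a linear mapping, one instance of the family used by [10, 28]; the seed-dictionary arrays are random placeholders, and [10] additionally offers an unsupervised way of obtaining the seed pairs.

```python
import numpy as np

def procrustes_mapping(X_src, Y_tgt):
    """Orthogonal W minimizing ||X_src @ W.T - Y_tgt||_F, via SVD of Y^T X.
    Rows of X_src and Y_tgt are embeddings of seed translation pairs."""
    u, _, vt = np.linalg.svd(Y_tgt.T @ X_src)
    return u @ vt

dim, n_pairs = 300, 5000
X = np.random.randn(n_pairs, dim)   # source-language seed embeddings
Y = np.random.randn(n_pairs, dim)   # target-language seed embeddings
W = procrustes_mapping(X, Y)

# At test time, embed source-language tweets and map them into the
# target space before feeding them to the classifier trained there.
src_batch = np.random.randn(32, dim)
mapped = src_batch @ W.T
print(mapped.shape)                 # (32, 300)
```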
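The classifier consuming these (monolingual or mapped) embeddings is the CNN mentioned above, similar to [25]. Below is a minimal Keras sketch of such an architecture; the filter counts and kernel sizes are illustrative assumptions, not the values tuned with Hyperopt in the paper.

```python
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.initializers import Constant

VOCAB_SIZE, EMB_DIM, MAX_LEN = 50000, 300, 50
embedding_matrix = np.random.rand(VOCAB_SIZE, EMB_DIM)  # stands in for AraVec/FastText/word2vec

inp = layers.Input(shape=(MAX_LEN,))
x = layers.Embedding(VOCAB_SIZE, EMB_DIM,
                     embeddings_initializer=Constant(embedding_matrix),
                     trainable=True)(inp)            # fine-tuned, as in the paper
branches = []
for k in (3, 4, 5):                                  # parallel convolution widths, as in [25]
    b = layers.Conv1D(100, k, activation="relu")(x)
    branches.append(layers.GlobalMaxPooling1D()(b))
x = layers.Dropout(0.5)(layers.Concatenate()(branches))
out = layers.Dense(1, activation="sigmoid")(x)       # ironic vs. non-ironic

model = models.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```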
We perform several experiments by training on one language (lang1) and testing on another (lang2) (henceforth lang1 → lang2). This yields six configurations, plus two others that evaluate how irony devices are expressed cross-culturally, i.e., in European vs. non-European languages. In each experiment, we held out 20% of the training data to validate the model before testing. Table 3 presents the results.

From a semantic perspective, despite the linguistic and cultural differences between Arabic and French, the CNN achieves high performance compared to the other language pairs when we train on either of these two languages and test on the other. The French-English pair behaves similarly, although the results are somewhat lower when training on French; we observe the same when training on Arabic and testing on English. A possible explanation is that Arabic and French tweets are written quite informally and contain many dialectal words that may be missing from the pretrained embeddings we used, yielding a lower embedding coverage ratio than for English and making it harder for the CNN to learn a clear semantic pattern. The presence of Arabic dialects compounds this: some dialectal words simply do not exist in the multilingual pretrained embedding model we used. From the text-based perspective, on the other hand, the results show that text-based features can help when the semantic models detect irony only weakly; this is the case for the Ar → En configuration. It is worth mentioning that the highest result in this experiment comes from the En → Fr pair, as both languages use Latin characters. Finally, when investigating the relatedness between European and non-European languages (cf. (En/Fr) → Ar), we obtain results similar to those of the monolingual experiment (macro F-score 62.4 vs. 68.0), and the best results are achieved by Ar → (En/Fr). This shows that the languages share pragmatic devices and, in a similar way, text-based patterns in the narrative of ironic tweets.

This paper proposes the first multilingual ID system for tweets. We show that simple monolingual architectures (either neural or feature-based) trained separately on each language can be successfully used in a multilingual setting, provided that cross-lingual word representations or basic surface features are available. Our monolingual results are comparable to the state of the art for the three languages. The CNN architecture trained on cross-lingual word representations shows that irony exhibits a certain similarity across the languages we targeted despite their cultural differences, confirming that irony is a universal phenomenon, as already shown in previous linguistic studies [9, 24, 35].

A manual analysis of the tweets commonly misclassified across languages in the multilingual setup shows that classification errors are due to three main factors. (1) First, the absence of context, where writers did not provide sufficient information to capture the ironic sense even in the monolingual setting, as in (Let's start again, get off get off Mubarak!!), where the writer mocks the Egyptian revolution, since the current president, Sisi, is viewed as one of Mubarak's fellows. (2) Second, the presence of out-of-vocabulary (OOV) terms, due to the weak coverage of the multilingual embeddings, which makes the system fail to generalize when the set of unseen words is large. We found tweets in all three languages written in a very informal way, with characters deleted, duplicated, or spelled phonetically (e.g., phat instead of fat).
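Before turning to the third factor, the coverage issue behind factor (2) can be quantified with an embedding coverage ratio: the fraction of distinct corpus words present in the pretrained vocabulary. A toy sketch (ours, not the paper's code):

```python
def coverage_ratio(corpus_tokens, embedding_vocab):
    """Share of distinct corpus words that have a pretrained vector."""
    types = set(corpus_tokens)
    return len(types & embedding_vocab) / len(types)

# Toy example echoing the Mubarak tweet discussed in factor (3) below,
# where only a handful of words were found in the Arabic embeddings:
tweet = "since many days mubarak did not die is he sick or what".split()
vocab = {"many", "days", "mubarak", "he", "is", "not"}  # toy vocabulary
print(coverage_ratio(tweet, vocab))  # -> 0.5
```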
(3) Third, the inherent difficulty of the Arabic language. Arabic tweets are often characterized by non-diacritized text; large variation in non-standardized dialectal Arabic (recall that our dataset covers four main varieties, namely Egyptian, Gulf, Levantine, and Maghrebi); the presence of transliterated words (e.g., the word table becomes (tabla)); and, finally, linguistic code-switching between Modern Standard Arabic and several dialects, and between Arabic and other languages such as English and French. We found that some tweets contain only words from one of these varieties, and most of these words do not exist in the Arabic embedding model. For example, in (Since many days Mubarak didn't die .. is he sick or what? #Egypt), only the words (day), (Mubarak), and (he) exist in the embeddings. Clearly, with only these three words available, the system cannot capture the context or the ironic meaning of the tweet.

To conclude, our multilingual experiments confirmed that the door is open to multilingual approaches for ID. Furthermore, our results showed that ID can be applied to languages that lack annotated data. Our next step is to experiment with other languages such as Hindi and Italian.

References

[1] Unsupervised neural machine translation
[2] Irony as relevant inappropriateness
[3] Comparative experiments using supervised learning and machine translation for multilingual sentiment analysis
[4] Bilingual sentiment embeddings: joint projection of sentiment across languages
[5] Analyse d'opinion et langage figuratif dans des tweets: présentation et résultats du Défi Fouille de Textes DEFT2017 [Opinion analysis and figurative language in tweets: presentation and results of the DEFT2017 shared task]
[6] Multilingual Natural Language Processing Applications: From Theory to Practice
[7] Clues for detecting irony in user-generated contents: oh...!! it's "so easy" ;-)
[8] Translating irony in political commentary texts from English into Arabic
[9] Irony as indirectness cross-linguistically: on the scope of generic mechanisms
[10] Word translation without parallel data
[11] IDAT@FIRE2019: overview of the track on irony detection in Arabic tweets
[12] IDAT@FIRE2019: overview of the track on irony detection in Arabic tweets
[13] LDR at SemEval-2018 task 3: a low dimensional text representation for irony detection
[14] Annotating irony in a novel Italian corpus for sentiment analysis
[15] Learning word vectors for 157 languages
[16] Logic and conversation
[17] SemEval-2018 task 3: irony detection in English tweets
[18] Sentiment polarity classification of figurative language: exploring the role of irony-aware and multifaceted affect features
[19] Irony detection in Twitter: the role of affective content
[20] Disambiguating false-alarm hashtag usages in tweets for irony detection
[21] Irony detection with attentive recurrent neural networks
[22] SOUKHRIA: towards an irony detection system for Arabic in social media
[23] Towards a contextual pragmatic model to detect irony in tweets
[24] Exploring the impact of pragmatic phenomena on irony detection in tweets: a multilingual corpus study
[25] Convolutional neural networks for sentence classification
[26] The perfect solution for detecting sarcasm in tweets #not
[27] Unsupervised cross-lingual information retrieval using monolingual data only
[28] Efficient estimation of word representations in vector space
[29] Linguistic regularities in continuous space word representations
[30] Improving multilingual named entity recognition with Wikipedia entity type mapping
[31] Overview of the task on irony detection in Spanish variants
[32] Sarcasm detection on Czech and English Twitter
[33] A survey of cross-lingual embedding models
[34] Cross-lingual learning-to-rank with shared representations
[35] A contrastive study of ironic expressions in English and Arabic
[36] AraVec: a set of Arabic word embedding models for use in Arabic NLP
[37] A corpus of English-Hindi code-mixed tweets for sarcasm detection
[38] Chinese irony corpus construction and ironic structure analysis
[39] Reasoning with sarcasm by reading in-between
[40] Creative language retrieval: a robust hybrid of information retrieval and linguistic creativity
[41] Unsupervised cross-lingual word embedding by multilingual neural language models