key: cord-0612160-de621vtd
authors: Li, Irene; Li, Yixin; Li, Tianxiao; Alvarez-Napagao, Sergio; Garcia, Dario
title: What are We Depressed about When We Talk about COVID19: Mental Health Analysis on Tweets Using Natural Language Processing
date: 2020-04-22
journal: nan
DOI: nan
sha: 5c2117f244d38f6b6c6b04aaa2e4f122f7a883e7
doc_id: 612160
cord_uid: de621vtd

The outbreak of coronavirus disease 2019 (COVID-19) has recently affected human life to a great extent. Besides direct physical and economic threats, the pandemic also indirectly impacts people's mental health, which can be overwhelming but difficult to measure. The problem may stem from various causes such as unemployment, stay-at-home policies, fear of the virus, and so forth. In this work, we focus on applying natural language processing (NLP) techniques to analyze tweets in terms of mental health. We train deep models that classify each tweet into the following emotions: anger, anticipation, disgust, fear, joy, sadness, surprise and trust. We build the EmoCT (Emotion-Covid19-Tweet) dataset for training by manually labeling 1,000 English tweets. Furthermore, we propose and compare two methods to find out what is causing sadness and fear.

Mental health is becoming a common concern. According to the World Health Organization (WHO), one in four people in the world will be affected by mental or neurological disorders at some point in their lives 1 . A large emergency, such as the coronavirus disease 2019 (COVID-19), can sharply increase people's mental health problems, not only through the emergency itself, but also through subsequent social outcomes such as unemployment, shortage of resources, and financial crisis. Almost all people affected by emergencies experience psychological distress, which for most people improves over time 2 .
In order to help society prepare for surging mental health problems during and after the COVID-19 emergency, we first need to understand people's general mental status. Language, as a direct tool for people to convey their feelings and emotions, can be very useful in estimating mental health conditions. Nowadays, people post their thoughts and experiences on social media including Facebook, Instagram, and Twitter. In particular, due to the recent impact of COVID-19, a large number of people have moved their work online, making some users even more active than usual. Previous works have applied natural language processing (NLP) methods to study mental health problems in internet-based text data such as posts, tweets, and text messages (Althoff et al., 2016; Calvo et al., 2017; Larsen et al., 2015; Dini and Bittar, 2016). There are three main challenges in working with tweets using NLP methods. The first is the large number of new posts online combined with the restricted availability of APIs. There may be up to 90 or even 100 million tweets per day (Calvo et al., 2017), so most research is conducted on random samples (Ritter et al., 2011; Mohammad et al., 2017; Pandey et al., 2017). We are interested in millions of tweets and in a larger time span. The second challenge is that much existing research focuses only on English tweets (Farruque et al., 2019; Dini and Bittar, 2016). The We Feel platform by Larsen et al. (2015) handles real-time tweets at large scale but can only process English ones. To understand the global influence of the coronavirus and estimate how emotions vary across cultures and regions, we want to utilize texts in multiple languages. The third challenge is the lack of labeled datasets for COVID-19.
Though labeled Twitter datasets for sentiment and emotions exist (Go et al.; Mohammad et al., 2017; Hasan et al., 2014), due to the domain discrepancy we still wish to have a manually-labeled dataset for training, in order to obtain a better-performing model. The work by Larsen et al. (2015) applies principal component analysis (PCA) to predict emotions. Abidin et al. (2017) proposed using k-Nearest Neighbors and Naive Bayes classifiers to classify tweets. A recent work by Farruque et al. (2019) applied deep models to multi-label classification of tweets. Very recently, many types of contextualized word embeddings have been proposed and have substantially improved performance on many NLP tasks. A new language representation model, BERT (Devlin et al., 2018), obtains competitive results on up to 11 NLP tasks including classification, natural language inference, and question answering. In this work, we apply a pre-trained BERT model and fine-tune it on our labeled data, providing an in-depth analysis of mental health. Our contributions are three-fold: we build the EmoCT (Emotion-COVID19-Tweet) dataset for classifying COVID-19-related tweets into eight emotions; we propose two models for single-label and multi-label classification respectively, based on a multilingual BERT model, which can handle up to 104 languages and achieve promising results on English tweets; and further analysis of case studies provides clues to understand why and how the public may feel fear and sadness about COVID-19. We used the Twitter API 3 to build a crawler with a list of keywords: coronavirus, covid19, covid, COVID-19, covid 19, confinamiento, flu, virus, hantavirus, fever, cough, social distance, lockdown, pandemic, epidemic, contagious, infection, stayhome, corona, épidémie, epidemie, epidemia, 新冠肺炎, 新型冠状病毒, 疫情, 新冠病毒, 感染, 新型コロナウイルス, コロナ. Each day, we are able to crawl about 3 million tweets in free-text format across different languages.
Given this large volume, we look at the tweets from March 24 to 26, 2020 for language and geolocation statistics. Among these tweets, 8,148,202 have language information (the lang field of the Tweet Object in the Twitter API), and 76,460 have geographic information (the country code value from the place field when it is not none). We show the distributions in Figures 1 and 2. To train the classification models, we built the EmoCT (Emotion-Covid19-Tweet) dataset. We manually annotated 1,000 English tweets randomly selected from our crawled data. Following the work of EmoLex (Mohammad and Turney, 2013; Hasan et al., 2014), we classify each tweet into the following emotions: anger, anticipation, disgust, fear, joy, sadness, surprise and trust. Each tweet is assigned one, two, or three emotion labels. For each emotion, we made sure the primary label appears in 125 tweets; the counts of secondary and tertiary labels are not controlled. We then split each emotion into 100/25 tweets as the training/testing set. We release two versions of the dataset: a single-labeled version where only the primary label is kept for each example, and a multi-labeled version where all labels are kept. In this way, both single-label classification and multi-label classification can be conducted. We release the EmoCT dataset to the public 4 , providing only tweet IDs and labels and omitting the actual texts due to the corresponding restrictions. Single-label Classification We first conduct a single-label classification task based on the single-labeled version of EmoCT. We apply a pre-trained multilingual BERT model 5 . We take the output of the [CLS] token and add a fully-connected layer, which is fine-tuned using the labeled training examples (BERT). We set the learning rate to 10^-5 and the number of epochs to 20.
In addition, we fine-tune with the MLM (masked language model) objective on 1,181,342 unlabeled tweets randomly selected from our crawled data, and then train on EmoCT (BERT(ft)). Table 1 shows the performance of the two models. Both models achieve competitive accuracy and F1, and BERT(ft) performs slightly better than BERT, so we take it as our main model for the analysis in later sections. Multi-label Classification We also perform multi-label classification on the multi-labeled version of EmoCT. In this setting, each tweet has up to three labels out of eight, and we assume the labels are independent. We build a single-layer classifier with a Sigmoid activation function, which receives the BERT output and predicts the probability of each of the eight labels (BERT). The model uses binary cross-entropy loss and is trained for 10 epochs with a learning rate of 10^-5. Similarly, we also compare with a fine-tuned version, as in the single-label setting (BERT(ft)). For evaluation, we use the example-based evaluation metrics described in the work of Zhang and Zhou (2014), reported in Table 2. The two models achieve relatively low scores, probably due to the small-scale training data. In Table 3, we show the area under the receiver operating characteristic (ROC) curve (AUC) for each class and their micro average. Judging by the average score, neither model performs particularly well, and both are not very confident on certain classes such as anticipation; we leave improvements as future work. Due to the coronavirus emergency, the two emotions sadness and fear are most related to severe negative sentiment such as depression. To understand why the public may feel fear and sadness, we analyze words and phrases that have a high correlation with both emotions. We apply our BERT(ft) model from the single-label classification task to predict emotion labels on 1 million tweets randomly picked from April 7, 2020.
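One common formulation of these example-based metrics, following Zhang and Zhou (2014), can be sketched as below: each tweet's true and predicted label sets are compared, and the per-example scores are averaged. This is an illustrative implementation, not the authors' evaluation script.

```python
def example_based_metrics(y_true, y_pred):
    """Example-based accuracy/precision/recall/F1 (Zhang & Zhou, 2014).

    y_true, y_pred: lists of label sets, one per tweet, e.g.
    [{"fear", "sadness"}, {"joy"}].
    """
    n = len(y_true)
    acc = prec = rec = f1 = 0.0
    for true, pred in zip(y_true, y_pred):
        inter = len(true & pred)          # labels both sets agree on
        union = len(true | pred)
        acc += inter / union if union else 1.0
        prec += inter / len(pred) if pred else 0.0
        rec += inter / len(true) if true else 0.0
        denom = len(true) + len(pred)
        f1 += 2 * inter / denom if denom else 0.0
    return {"accuracy": acc / n, "precision": prec / n,
            "recall": rec / n, "f1": f1 / n}
```

For instance, if a tweet is labeled {fear, sadness} but only fear is predicted, the example contributes 0.5 to accuracy, 1.0 to precision, and 0.5 to recall.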
Then we compare two methods for further analysis. Note that we keep only the tweets labeled as fear or sadness. Attention Weight When predicting the emotion label for each tweet, we take the last attention layer of the model and collect the top 3 tokens with the maximum attention weights. Finally, we rank the tokens by frequency and plot a wordcloud 6 of the top 500 tokens after filtering out stopwords, shown in Figure 3. A drawback of this method is that the tokens are split, so some keywords may not be meaningful without context, for example: like, know and 2020. However, we also get some reasonable keywords: fever, corona, spread, virus and so on. Such words appear with high frequency in the tweets labeled as fear and sadness, which may explain what people are feeling fearful or sad about, and why. Note that this method can handle input in multiple languages, as the pre-trained BERT model supports 104 languages, even though training was conducted on an English corpus. POS Tagging Intuitively, we assume that nouns are more meaningful in a tweet, making it easier to understand why it is labeled as fear or sadness. As a comparison, we look at the part-of-speech (POS) tag of each token in the tweets and keep only the nouns and noun phrases. We apply the Stanza Python library for POS tagging (Qi et al., 2020), supporting six languages: English, Spanish, Portuguese, Japanese, German and Chinese. Similarly, we plot the top 500 keywords and phrases by frequency in Figure 4. Some informative keywords and phrases are captured: pandemic, China, economy, 開始 (Japanese for "start"), President Trump, White House and so on. While working on the analysis, we also saw other meaningful phrases such as gun stores, school closings, and health conditions, which have lower frequencies and may not be visible.
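The attention-weight keyword ranking can be sketched as follows, assuming the per-tweet tokens and last-layer attention scores have already been extracted from the model. The inputs and the `attention_keywords` helper are hypothetical, and the stopword list is purely illustrative.

```python
from collections import Counter

STOPWORDS = {"the", "a", "to", "and", "of", "is", "like", "know"}  # illustrative

def attention_keywords(tweets, top_tokens=3, top_k=500):
    """Rank tokens by how often they receive the highest attention.

    tweets: list of (tokens, weights) pairs, where `weights` holds the
    last-layer attention score of each corresponding token.
    """
    counts = Counter()
    for tokens, weights in tweets:
        # Take the top-N tokens of this tweet by attention weight.
        ranked = sorted(zip(tokens, weights), key=lambda tw: tw[1], reverse=True)
        for token, _ in ranked[:top_tokens]:
            if token.lower() not in STOPWORDS:
                counts[token.lower()] += 1
    return counts.most_common(top_k)
```

The resulting (token, frequency) pairs are what would feed a wordcloud of the top 500 tokens.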
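The noun-filtering step can be sketched as below, assuming the Stanza tagger output has already been flattened into (token, Universal POS tag) pairs. The `noun_keywords` helper and input format are hypothetical, not part of the paper's code.

```python
from collections import Counter

# With Stanza, the (token, UPOS) pairs could come from something like:
#   import stanza
#   nlp = stanza.Pipeline("en", processors="tokenize,pos")
#   tagged = [(w.text, w.upos) for s in nlp(text).sentences for w in s.words]

def noun_keywords(tagged_tweets, top_k=500):
    """Keep only nouns and proper nouns, then rank by frequency.

    tagged_tweets: list of tweets, each a list of (token, upos) pairs.
    """
    counts = Counter()
    for tagged in tagged_tweets:
        for token, upos in tagged:
            if upos in ("NOUN", "PROPN"):
                counts[token.lower()] += 1
    return counts.most_common(top_k)
```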
The emotion trend across different hashtags or topics is also important, as it may show how public attitudes change over time. We again use the single-label classification BERT(ft) model for prediction, and provide a case study on two words: mask and lockdown. We first pick 1 million tweets randomly from March 29, 2020. By filtering on the keywords, we found 8,071 tweets containing the word mask, and 31,146 tweets containing the word lockdown. Figure 5 shows the comparison of emotion distributions among the 1 million samples (1M), the tweets with mask, and the tweets with lockdown. In the 1 million data, most tweets are classified into negative classes like fear, anger and sadness. But when people talk about masks, more tweets are classified into anticipation and trust, which are more neutral or positive. For the tweets about lockdown, there is no significant difference from the 1M distribution. To further analyze the trends, we select two weeks of data (March 25 to April 7, 2020) and apply the same model to predict emotion labels on all the tweets we crawled (around 3 million each day) that contain each of the two keywords. There is no significant change in the overall emotion distribution. However, we found that the dominating emotions and the variation of the changes are closely related to the topic. In Figures 6 and 7, we illustrate the emotion trend for each day for the selected keywords. High variation (plotted as solid lines in the figures) showed up in sadness, anger and anticipation for the tweets containing the word mask in Figure 6, and in disgust and sadness for the tweets containing the word lockdown in Figure 7. In particular, for the lockdown tweets, the percentage of the disgust emotion increased significantly on March 27 and dropped over the next two days, as marked with the black asterisks.
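The keyword-conditioned emotion distributions compared above can be sketched as follows, assuming each tweet already carries its predicted emotion label. The `emotion_distribution` helper is a hypothetical illustration, not the authors' analysis code.

```python
from collections import Counter

def emotion_distribution(tweets, keyword=None):
    """Normalized emotion distribution, optionally filtered by keyword.

    tweets: list of (text, predicted_emotion) pairs.
    Returns a dict mapping each emotion to its share of the (filtered) tweets.
    """
    if keyword is not None:
        tweets = [(t, e) for t, e in tweets if keyword in t.lower()]
    counts = Counter(e for _, e in tweets)
    total = sum(counts.values())
    return {e: c / total for e, c in counts.items()} if total else {}
```

Comparing `emotion_distribution(tweets)` against `emotion_distribution(tweets, "mask")` and `emotion_distribution(tweets, "lockdown")` reproduces the kind of side-by-side comparison shown in Figure 5.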
To investigate further, we looked at the news on March 27: the U.S. became the first country to report 100,000 confirmed coronavirus cases, 9 in 10 Americans were staying home, and India and South Africa joined the countries imposing lockdowns. Given that the United States, India and Brazil have large groups of Twitter users, we assume this dramatic change may have been triggered by that news. In this work, we built the EmoCT dataset for classifying COVID-19-related tweets into different emotions. Based on this dataset, we conducted both single-label and multi-label classification tasks and achieved promising results. In addition, to understand why the public may feel sadness or fear, we applied two methods to correlate keywords with these emotions. In future work, we will conduct more in-depth analysis to better understand how COVID-19 affects mental health. It is possible to produce detailed statistics and analyses grouped by language and location. We also plan to collect a multilingual version of the existing EmoCT dataset to promote related research. With the capability of tracking Twitter data over a longer term, we want to investigate how people recover from the sadness and fear of this global COVID-19 crisis, and rebuild trust and joy in society. We are interested in the relationship between the mental health curve and the COVID-19 case/mortality rate curves, and in how emotion changes vary across regions and cultures. This will help us obtain accurate estimates of the effects of COVID-19 on people's long-term mental health, and be prepared for the next crisis. It is also possible to crawl tweets from before the outbreak of COVID-19 and study how mental-health-related issues changed before, during, and after COVID-19. We present more details on our website https://www.covid19analytics.org/.
References
- Abidin et al. (2017). N-grams based features for Indonesian tweets classification problems.
- Althoff et al. (2016). Large-scale analysis of counseling conversations: An application of natural language processing to mental health.
- Calvo et al. (2017). Natural language processing in mental health applications using non-clinical texts.
- Devlin et al. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding.
- Dini and Bittar (2016). Emotion analysis on Twitter: The hidden challenge.
- Farruque et al. (2019). Basic and depression specific emotion identification in tweets: multi-label classification experiments.
- Go et al. Twitter sentiment classification using distant supervision.
- Hasan et al. (2014). EmoTex: Detecting emotions in Twitter messages.
- Larsen et al. (2015). We Feel: mapping emotion on Twitter.
- Mohammad and Turney (2013). Crowdsourcing a word-emotion association lexicon.
- Mohammad et al. (2017). Stance and sentiment in tweets.
- Pandey et al. (2017). Twitter sentiment analysis using hybrid cuckoo search method.
- Qi et al. (2020). Stanza: A Python natural language processing toolkit for many human languages.
- Ritter et al. (2011). Named entity recognition in tweets: an experimental study.
- Zhang and Zhou (2014). A review on multi-label learning algorithms.