key: cord-0561996-pe39ov47 authors: Shahi, Gautam Kishore; Dirkson, Anne; Majchrzak, Tim A. title: An Exploratory Study of COVID-19 Misinformation on Twitter date: 2020-05-12 journal: nan DOI: nan sha: a378c132ccd4c39186eb7edbabf30687ba9763fc doc_id: 561996 cord_uid: pe39ov47

Although a lot of correct and useful information is shared through channels such as Twitter, it has also become a home ground for misinformation on COVID-19. To tackle this still ongoing infodemic, scientific oversight as well as a better understanding by practitioners in crisis management is needed. We have conducted an exploratory study into the propagation, authors and content of misinformation on Twitter around the topic of COVID-19 in order to gain early insights into the COVID-19 infodemic. Our results enable us not only to give first indications but also to point out gaps in the current scientific coverage of the topic. Moreover, we propose actions for authorities to counter misinformation and hints for social media users on how to help stop its spread.

The COVID-19 pandemic is currently spreading across the world at an alarming rate [79]. It is considered by many to be the defining global health crisis of our time [66]. As WHO Director-General Tedros Adhanom Ghebreyesus proclaimed at the Munich Security Conference on 15 February 2020, "We're not just fighting an epidemic; we're fighting an infodemic" [82]. It has even been claimed that the spread of COVID-19 is supported by misinformation [22]. The actions of individual citizens, guided by the quality of the information they have at hand, are crucial to the success of the global response to this health crisis. By 23 April 2020, the International Fact-Checking Network (IFCN) [54], uniting over 100 fact-checking organisations, had unearthed over 4 000 false claims regarding the pandemic. However, misinformation does not only contribute to the spread: misinformation might bolster fear, drive societal discord, or even lead to direct damage, for example through ineffective (or even directly harmful) medical advice or through over- (e.g. hoarding) or underreaction (e.g. deliberately engaging in risky behaviour) [50]. Misinformation is spreading rapidly on social media [82]. Similar trends were seen during other epidemics, such as the Ebola [48], yellow fever [47] and Zika [45] outbreaks. This is a worrying development, as even a single exposure to a piece of misinformation increases its perceived accuracy [49]. In response to this infodemic, the WHO has set up its own platform MythBusters that refutes misinformation [78] and is urging tech companies to battle fake news on their platforms [7]. (At the same time, the WHO itself faces criticism regarding how it handles the crisis, among others regarding the dissemination of information from member countries [71].) Fact-checking organisations have united under the IFCN to battle misinformation collaboratively, as individual fact-checkers like Snopes are being overwhelmed [11]. There are many pressing questions in this uphill battle. So far, four studies have investigated the magnitude or spread of misinformation on Twitter regarding the COVID-19 pandemic [16, 33, 21, 62].

⋆ The work presented in this document results from the Horizon 2020 Marie Skłodowska-Curie project RISE_SMA funded by the European Commission. Corresponding author: gautamshahi16@gmail.com (G.K. Shahi); a.r.dirkson@liacs.leidenuniv.nl (A. Dirkson); timam@uia.no (T.A. Majchrzak). ORCID(s): 0000-0001-6168-0132 (G.K. Shahi); 0000-0002-4332-0296 (A. Dirkson); 0000-0003-2581-9285 (T.A. Majchrzak)
However, they either investigated a very small subset of claims [62], manually annotated a small subset of Twitter data [33], or used the reliability of cited sources to identify misinformation [21, 16]. In line with Vosoughi et al. [77], we believe reliance on 'reliable' sources is problematic, as the reliability of news sources is a subject of considerable disagreement. In contrast, we use the verdicts of professional fact-checking organisations which manually check each claim. Furthermore, none of the previous studies have investigated how the language use of COVID-19 misinformation differs from other COVID-19 tweets, or which Twitter accounts are associated with the spreading of COVID-19 misinformation, although there have already been some indications that bots might be involved [21, 19]. We thus conduct an exploratory analysis into (1) the Twitter accounts behind COVID-19 misinformation, (2) the propagation of COVID-19 misinformation on Twitter, and (3) the content of incorrect claims on COVID-19 that circulate on Twitter. We decided on an exploratory approach because too little is known about the topic at hand to tailor either a purely quantitative or a purely qualitative study. The exploration of the phenomena with the aim of rapid dissemination of results, combined with the demand for academic rigour, makes our article somewhat uncommon in nature. Nevertheless, our contributions are threefold: First, we present a synthesis of social media analytics techniques suitable for the analysis of the COVID-19 infodemic. We believe this to be a starting point for a more structured, goal-oriented approach to mitigate the crisis on the go, and to learn how to decrease the negative effects of misinformation in future crises as they unfold. Second, we contribute to scientific theory with first insights into how COVID-19 misinformation differs from other COVID-19 related tweets, who it originates from, and how it spreads. This should lay the foundation for drawing up a research agenda. Third, we provide a first set of recommendations for practice. They ought to directly help social media managers of authorities, crisis managers, and social media listeners in their work. In Section 2 we provide the academic context of our work in the field of misinformation detection and propagation. In Sections 3 and 4, we elaborate on our data collection process and methodology, respectively. We then present experimental results in Section 5, followed by a discussion of these results and recommendations for organisations targeting misinformation in Section 6. Finally, we draw a conclusion in Section 7.

In this section, we describe the background of misinformation, the propagation of misinformation, rumour detection, and the impact of fact-checking. We define misinformation broadly as circulating information that is false [84]. Commonly, the term refers specifically to information that is accidentally false or false as a consequence of an honest mistake, whereas disinformation refers to deliberately false information [26]. In this study, we do not make claims about the intent of the purveyors of information, whether accidental or malicious. In reality, claims are not always completely false or true but can be mostly false with elements of truth. Such claims are coined partially false. Two examples in this category are images that are miscaptioned and claims omitting necessary background information.
In this article, we compare such claims with completely false claims in order to attain better insight into differences in their spread. We believe it may be more challenging for users to recognise claims as false when they contain elements of truth, as this has been found to be the case even for professional fact-checkers [37]. As such, it is crucial for fact-checking organisations and governments battling misinformation to better understand how to sustain information sovereignty [41]. In an ideal setting, people would always check facts and employ scientific methods. In a realistic setting, they would at least be mainly drawn to information coming from fact-based sources who work ethically and without a hidden agenda. Authorities such as cities ought to be such sources [42]. Rumours are "circulating pieces of information whose veracity is yet to be determined at time of posting" [84]. Misinformation is essentially a false rumour that has been debunked. Research on rumours is consequently closely related, and the terms are often used interchangeably. Rumours on social media can be identified through top-down or bottom-up sampling [84]. A top-down strategy uses rumours which have already been identified and fact-checked to find social media posts related to these rumours. This has the disadvantage that rumours that have not been included in the database are missed. Bottom-up sampling strategies have emerged more recently and are aimed at collecting a wider range of rumours, often prior to fact-checking. This method was first employed by [85]. However, manual annotation is necessary when using a bottom-up strategy. Often journalists with expertise in verification are enlisted, since crowd-sourcing would lead to credibility perceptions rather than ground truth values, and exhaustive verification may be beyond the expertise of lay annotators [84]. In this study we employ a top-down sampling strategy relying on Snopes.com and over 100 different fact-checking organisations organised under the CoronaVirusFacts/DatosCoronaVirus alliance run by the Poynter Institute. We included all misinformation (see Section 3.2) around the topic of COVID-19 which includes a Tweet ID. A similar approach was used by Jiang et al. [29] with Snopes.com and Politifact, and by [77] using six independent fact-checking organisations. To what extent information goes viral is often modelled using epidemiological models originally designed for biological viruses [23, 60]. The information is represented as an 'infectious agent' that is spread from 'infectives' to 'susceptibles' with some probability. This method was also employed by [16] to study how infectious information on COVID-19 is on Twitter. They found that the basic reproductive number R_0, i.e. the number of infections due to one infected individual for a given period, is between 4.0 and 5.1 on Twitter, indicating a high level of 'virality' of COVID-19 information in general. Additionally, they found the overall magnitude of COVID-19 misinformation on Twitter to be around 11%. They also investigated the relative amplification of reliable and unreliable information on Twitter and found it to be roughly equal. Other researchers have modelled information propagation on Twitter using retweet (RT) trees, i.e. asking who retweets whom?
Various network metrics can then be applied to quantify the spread of information, such as the depth (number of retweets by unique users over time), size (number of total users involved) or breadth (number of users involved at a certain depth) [77]. These measures can also be considered over time to understand how propagation fluctuates. An advantage of this approach is that, unlike epidemiological modelling, it does not rely on the implicit assumption that propagation is driven largely if not exclusively by peer-to-peer spreading [23]. However, viral spreading is not the only mechanism by which information can spread: information can also be spread by broadcasting, i.e. a large number of individuals receive information directly from one source. Goel et al. [23] introduced the measure of structural virality to quantify to what extent propagation relies on both mechanisms.

Previous research on the efficacy of fact-checking reveals that corrections often do not have the desired effect and that misinformation resists debunking. Although the likelihood of sharing does appear to drop after a fact-checker adds a comment revealing the information to be false, this effect does not seem to persist in the long run [20]. In fact, 51.9% of the re-shares of false rumours occur after this debunking comment. This may in part be due to readers not reading all the comments before re-sharing. Complete retractions of misinformation are also generally ineffective, despite people believing, understanding and remembering the retraction [35]. Social reactance [8] may also play a role here: people do not like being told what to think and may reject authoritative retractions. Three factors that do increase their effectiveness are (a) repetition, (b) warnings at the initial exposure and (c) corrections that tell an alternate story that does not leave behind an unexplained gap [35]. Twitter users also engage in debunking rumours. Overall, research supports the idea that the Twitter community debunks inaccurate information through self-correction [84, 43]. However, self-correction can be slow to take effect [55], and it appears that in the earlier stages of a rumour circulating, Twitter users have problems differentiating between true and false rumours [85]. This includes users of high reputation such as news organisations, who may issue corrective statements at a later date if necessary. This underscores the necessity of dealing with newly emerging rumours around crises like the outbreak of COVID-19. Yet, these corrections also do not always have the desired effect. Fact-checking corrections are most likely to be tweeted by strangers, but are more likely to draw user attention and responses when they come from friends [24]. Although such corrections do elicit more responses from users containing words referring to facts, deceit (e.g. fake) and doubt, there is an increase in the number of swear words [29], too. Thus, on the one hand, users appear to understand and possibly believe the rumour is false. On the other hand, swearing likely indicates backfire [29]: an increase in negative emotion is symptomatic of individuals clinging to their own worldview and false beliefs. Thus, corrections have mixed effects that may depend in part on who is issuing the correction.
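To make the cascade measures from [77] discussed above concrete, the following minimal sketch (our own illustration, not code from the cited studies) computes size, depth and breadth for a retweet cascade given as (retweeter, source) pairs:

```python
from collections import defaultdict

def cascade_metrics(edges, root):
    """Size, depth and breadth of a retweet cascade.

    edges: list of (retweeter, source) pairs, i.e. who retweeted whom.
    root:  the author of the original tweet.
    """
    children = defaultdict(list)
    for child, parent in edges:
        children[parent].append(child)

    depth_of = {root: 0}
    frontier = [root]
    while frontier:  # breadth-first traversal of the cascade tree
        nxt = []
        for node in frontier:
            for child in children[node]:
                depth_of[child] = depth_of[node] + 1
                nxt.append(child)
        frontier = nxt

    per_level = defaultdict(int)
    for d in depth_of.values():
        per_level[d] += 1

    return {
        "size": len(depth_of),               # total users involved
        "depth": max(depth_of.values()),     # longest retweet chain
        "breadth": max(per_level.values()),  # most users at any one depth
    }

# Toy cascade: A posts; B and C retweet A; D retweets B.
print(cascade_metrics([("B", "A"), ("C", "A"), ("D", "B")], "A"))
# -> {'size': 4, 'depth': 2, 'breadth': 2}
```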
In this section, we describe the steps involved in the data collection and the filtering of tweets for analysis. We have used two datasets for our study: the first consists of tweets that have been mentioned by a fact-checker and classified as false or partially false; the second is a dataset of COVID-19 tweets collected from Kaggle.

First, we collected the list of fact-checked news articles related to COVID-19 from Snopes [64] and Poynter [53] from 14-01-2020 to 23-04-2020. We collected 4 468 fact-checked articles from Snopes and Poynter. We used Beautifulsoup [57] to crawl the content of the news articles and prepared a list that collected information such as the title and content of the news article, the fact-checking organisation, the location, and the category (e.g. False, Partially False) of the fact-checked claim. An overall workflow for fetching tweets mentioned in the articles on fact-checked claims is shown in Figure 1.

Figure 1: Illustration of data collection using the interaction between social media and fact-checking websites (screenshots from [28, 18]).

To find the misleading posts on COVID-19 on Twitter, we crawled the content of each news article using Beautifulsoup and checked whether the article refers to Twitter. In the HTML DOM (Document Object Model), we looked for all anchor tags, which define hyperlinks. We filtered the anchor tags containing the keywords 'twitter' and 'status', because each Tweet message is linked with a URL (Uniform Resource Locator) of the form https://twitter.com/statuses/ID. From the collected URLs, we fetched the ID, where the ID is the unique identifier of each Tweet. We fetched 473 Tweet IDs from the 4 468 news articles.

Tweet: From the Tweet IDs generated in the above step, we used tweepy [58], a Python library for accessing the Twitter API, to fetch each Tweet and its metadata, such as created_at, likes, screen name, description, and followers. To analyse the propagation of misinformation on Twitter, we fetched all retweets using the Python library Twarc [68]. Twarc is a command-line tool for collecting Twitter data in JSON format. We gathered the retweets using the Tweet IDs collected in the above step. Due to the limitations of the Twitter developer account, we could only collect retweets from the last seven days.

User account details: From the Twitter API, we also gathered account information such as favourites count (number of likes gained), friends count (number of accounts followed by the user), follower count (number of followers the account currently has), account age (number of days from the account creation date to 31-12-2019, the time when discussion about COVID-19 started around the world), profile description, and user location. We used this information for classifying popular accounts and for bot detection.

Discounting differences in capitalisation, our data originally contained 21 different verdict classes provided by the fact-checking organisations, i.e. Snopes and over 100 different organisations in the International Fact-Checking Network. Table 1 provides an overview of the verdict classes that were included or not included in our study, along with our categorisation and the original, more granular categorisation by the fact-checkers.

Table 1 (excerpt; included? / our category / fact-checker verdict / definition):
- y / Partially false / Miscaptioned: This rating is used with photographs and videos that are "real" (i.e., not the product, partially or wholly, of digital manipulation) but are nonetheless misleading because they are accompanied by explanatory material that falsely describes their origin, context, and/or meaning.
- y / Mostly false / Misleading: Offers an incorrect impression on some aspect(s) of the science, leaves the reader with false understanding of how things work, for instance by omitting necessary background context.
- n / – / Unproven: This rating indicates that insufficient evidence exists to establish the given claim as true, but the claim cannot be definitively proved false. This rating typically involves claims for which there is little or no affirmative evidence, but for which declaring them to be false would require the difficult (if not impossible) task of being able to prove a negative or accurately discern someone else's thoughts and motivations.
- n / – / Satire: This rating indicates that a claim is derived from content described by its creator and/or the wider audience as satire. Not all content described by its creator or audience as 'satire' necessarily constitutes satire, and this rating does not make a distinction between 'real' satire and content that may not be effectively recognized or understood as satire despite being labeled as such.
- n / – / Explanatory: "Explanatory" is not a rating for a checked article, but an explanation of a fact on its own.
- n / – / Mixture: This rating indicates that a claim has significant elements of both truth and falsity to it such that it could not fairly be described by any other rating.

Since each fact-checking organisation has its own set of verdicts (e.g. False) that it gives to claims, and these have not been normalised by Poynter, manual normalisation is necessary. Following the practice of [77], we normalised verdicts by manually mapping them to a score of 1 to 5 (1 = 'False', 2 = 'Partially False', 3 = 'Mixture', 4 = 'Mostly True', 5 = 'True') based on the definitions provided by the fact-checking organisations.
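For illustration, the Tweet-ID extraction described in this section can be sketched as follows; the URL pattern and function name are our own simplifications, not the exact code used for the study:

```python
import re
import requests
from bs4 import BeautifulSoup

# Matches Tweet URLs such as https://twitter.com/statuses/1234567890
TWEET_URL = re.compile(r"twitter\.com/.*status(?:es)?/(\d+)")

def tweet_ids_from_factcheck(article_url):
    """Return the Tweet IDs referenced by a fact-checking article."""
    html = requests.get(article_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    ids = set()
    for anchor in soup.find_all("a", href=True):    # all hyperlinks in the DOM
        href = anchor["href"]
        if "twitter" in href and "status" in href:  # keyword filter from Sec. 3.1
            match = TWEET_URL.search(href)
            if match:
                ids.add(match.group(1))
    return sorted(ids)
```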
As we are specifically interested in misinformation, we excluded claims with a score of 3 or higher, i.e. we considered only the false and partially false categories. We also excluded claims with verdicts that did not conform to this scale, e.g. sarcasm, unproven claims and disputed claims. Of the 473 tweets collected, 443 are used for our study: 372 false and 71 partially false. The data used in our work is available through GitHub (https://github.com/Gautamshahi/Misinormation_COVID-19).

We randomly selected two examples of misinformation from the false and partially false categories, which are shown in Figure 2 and Figure 3, respectively. There was a rumour that Costco had issued a recall of their bath tissues over fears that the tissues might contain COVID-19. In the first Tweet [46], a user posted a video related to this fake news about toilet paper. The author states that people were running to the store to buy and then return the toilet paper because of the news. Later, the claim was fact-checked by Snopes, which found that it was false and that Costco had not announced any such recall [65]. Several other pieces of fake news made similar claims. The second Tweet [6] was posted by the news company ANI, claiming that people quarantined after the Tablighi Jamaat event [4] misbehaved with the health workers and police staff and were not following the rules of the quarantine centre. AFP fact-checked the claim [1] and found it to be partially false: the video used in the claim stemmed from an incident in Mumbai in February 2020. Different Twitter handles circulated this misinformation. Both claims were retweeted and liked by several users on Twitter.

In order to understand how the misinformation around COVID-19 is distinct from other tweets on this topic, we made use of a background corpus of all English COVID-19 tweets posted on 15 April 2020 [63]. It includes tweets with the following hashtags on that day: #coronavirus, #coronavirusoutbreak, #coronavirusPandemic, #covid19, #covid_19, #epitwitter and #ihavecorona. The total size of the dataset is 264 893 tweets. Originally, the data contained 26 known languages (according to Twitter; see Figure 4). We used the Google Translate API to automatically detect the correct language and translate to English. Hereafter, tweets were lowercased and tokenized using NLTK [40]. Emojis were identified using the emoji package [31] and were removed for subsequent analyses. Mentions and URLs were also removed using regular expressions. Hashtags were not removed, as they are often used by Twitter users to convey essential information. Additionally, they are sometimes used to replace regular words in a sentence (e.g. 'I was tested for #corona'), and thus omitting them would remove essential words from the sentence. Therefore, we only removed the # symbol from the hashtags.
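A minimal sketch of this preprocessing pipeline, assuming a recent version of the emoji package (>= 2.0 for replace_emoji) and NLTK's punkt tokenizer data; it approximates rather than reproduces the exact cleaning used in the study:

```python
import re
import emoji                              # pip install emoji (>= 2.0)
from nltk.tokenize import word_tokenize   # needs: nltk.download('punkt')

MENTION = re.compile(r"@\w+")
URL = re.compile(r"https?://\S+")

def preprocess(tweet):
    """Lowercase, strip emojis/mentions/URLs, keep hashtag words, tokenize."""
    text = tweet.lower()
    text = emoji.replace_emoji(text, replace="")  # drop emojis
    text = MENTION.sub("", text)                  # drop @mentions
    text = URL.sub("", text)                      # drop URLs
    text = text.replace("#", "")                  # keep the hashtag word itself
    return word_tokenize(text)

print(preprocess("I was tested for #corona 😷 @user https://t.co/x"))
# -> ['i', 'was', 'tested', 'for', 'corona']
```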
In this section, we present our method for the analysis and illustration of the extracted data. We follow a two-way approach. In the first, we analyse the details of the user accounts involved in the spread and propagation of misinformation (false or partially false data). In the second, we analyse the content. With both, we investigate the propagation of misinformation on social media.

In order to gain a better understanding of who is spreading misinformation on Twitter, we investigated the Twitter accounts behind the tweets. First, we analyse the role of bots in spreading misinformation by using a bot detection API to automatically classify the accounts of authors. Second, we analyse whether accounts are brands using an available classifier. Third, we investigate some characteristics of the accounts that reflect their popularity (e.g. follower count).

A Twitter bot is a type of bot program which operates a Twitter account via the Twitter API. The pre-programmed bot autonomously performs tasks such as tweeting, unfollowing, retweeting, liking, following or direct messaging other accounts. Shao et al. [61] discussed the role of social bots in spreading misinformation. Previous studies show that several types of bots are involved in social media, such as "newsbots", "spambots" and "malicious bots". Sometimes, newsbots or malicious bots are trained to spread misinformation. Caldarelli et al. [12] discuss the role of bots in Twitter propaganda. To analyse the role of bots, we examined each account with a bot detection API [17].

Social media, such as microblogging websites, are used for sharing information and gathering opinions on trending topics. Social media have different types of users: organisations, celebrities or ordinary users. We consider organisations and celebrities to be brands, which have a large number of followers and catch more public attention. A brand uses a more professional way of communication, gets more user attention [67] and has high reachability due to its bigger follower network and retweet count. With a large network, a piece of false or partially false information spreads faster compared to a normal account. We classify each account as a brand or a normal user using a modified version of TwiRole [36], a Python library. We use the profile name, picture, latest Tweet and account description to classify the account.
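Combining the two classifiers, the account categorisation can be sketched as below; fetch_cap_score and fetch_brand_score are hypothetical stand-ins for the bot detection API [17] and the modified TwiRole classifier [36], and the thresholds anticipate the values reported in Section 5.1:

```python
CAP_THRESHOLD = 0.67    # CAP above this -> bot (threshold from Sec. 5.1)
BRAND_THRESHOLD = 0.62  # TwiRole score above this -> brand (Sec. 5.1)

def categorise_account(user_id, fetch_cap_score, fetch_brand_score):
    """Label a Twitter account as 'bot', 'brand' or 'normal'.

    fetch_cap_score / fetch_brand_score are caller-supplied functions wrapping
    the respective APIs; they are placeholders, not real library calls.
    """
    if fetch_cap_score(user_id) > CAP_THRESHOLD:
        return "bot"
    if fetch_brand_score(user_id) > BRAND_THRESHOLD:
        return "brand"
    return "normal"
```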
Popular accounts get more attention from users, so we analyse the popularity of the accounts. We investigate several characteristics that are associated with popular accounts, namely favourites count, follower count, account age and verified status. Twitter gives users the option of "following": a user can follow another user by clicking the follow button and thereby becomes a follower. When a Tweet is posted on Twitter, it is visible to all of the author's followers. Twitter also verifies accounts; after verification, the user receives a blue checkmark badge next to their name. Since 2017 this service has been paused by Twitter and is limited to a few accounts chosen by Twitter. Hence, a verified account can be considered a kind of authentic account. If a popular user spreads false or partially false news, it is more likely to attract attention from other users compared to a non-popular Twitter handle.

To investigate the diffusion of misinformation on Twitter, we explore the timeline of retweets and calculate the speed of retweets as a proxy for the speed of propagation. A retweet is a re-posting of a Tweet, which a Twitter user can do with or without an additional comment; Twitter provides a retweet feature to share a Tweet with one's follower network quickly. For our analysis, we only considered retweets of the original Tweet. We define the propagation speed as

P_s = Σ rc / N_d (1)

where P_s is the propagation speed, rc is the retweet count per day and N_d is the total number of days. We calculated the speed of propagation over three different periods. The first metric, P_s_a, is the average overall propagation speed: the speed of retweets from the first retweet to the last retweet of a Tweet. The second metric is the propagation speed during the peak time of the Tweet, denoted by P_s_pt. After some time, a Tweet stops receiving retweets, but it may start getting user attention and retweets again some days later. We therefore define the peak time of a Tweet as the time (in days) from the first retweet until the retweet count drops to zero for the first time. The third metric, P_s_pcv, is the propagation speed calculated during the peak time of the crisis, i.e. from 15-03-2020 to 15-04-2020. We determined this peak time according to the timeline of retweet propagation, as shown in Figure 5, which is at its maximum between mid-March and mid-April.
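Under our reading of these definitions, the speed metrics can be sketched as follows (an illustration only; the exact windowing used in the study may differ):

```python
from datetime import date

def propagation_speed(retweet_dates):
    """Average retweets per day over a tweet's lifetime, cf. Eq. (1)."""
    days = sorted(retweet_dates)
    n_days = (days[-1] - days[0]).days + 1   # N_d: total number of days
    return len(retweet_dates) / n_days       # sum of rc divided by N_d

def peak_time_speed(daily_counts):
    """Speed during the peak: from the first retweet until the daily
    retweet count drops to zero for the first time."""
    peak = []
    for count in daily_counts:
        if count == 0 and peak:
            break
        peak.append(count)
    return sum(peak) / len(peak)

print(propagation_speed([date(2020, 3, 16), date(2020, 3, 16), date(2020, 3, 18)]))
# -> 1.0 (3 retweets over 3 days)
print(peak_time_speed([5, 3, 1, 0, 2]))
# -> 3.0 (peak window covers the days with 5, 3 and 1 retweets)
```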
In order to attain a better understanding of what misinformation around the topic of COVID-19 is circulating on Twitter, we investigate the content of the tweets. Due to the relatively small number of partially false claims, we combined the data for these analyses. First, we analyse the most common hashtags and emojis. Second, we investigate the most distinctive terms in our data to gain a better understanding of how COVID-19 misinformation differs from other COVID-19 related content on Twitter. To this end, we compare our data to a background corpus of all English COVID-19 tweets of 15 April 2020 (see Section 3.4). This enables us to find the most distinctive phrases in our corpus: which topics are discussed in misinformation that are not discussed in other COVID-19 related tweets? These topics may be of special interest, as there may be little correct information to balance the misinformation circulating on them. Third, we make use of the language used in the circulating misinformation to gauge the emotions and underlying psychological factors authors display in their tweets. The latter may give us a first insight into why they are spreading this information. Again, the prevalence of emotional and psychological factors is compared to their prevalence in the background corpus in order to uncover how false tweets differ from the general chatter on COVID-19.

Hashtags are brief keywords or abbreviations prefixed by a # that are used on social media platforms to make tweets more easily searchable [10]. Hashtags can be considered self-reported topics that the author believes his or her tweet links to. Emojis are standardised pictographs originally designed to convey emotion between participants in text-based conversation [30]. Emojis can thus be considered a proxy for self-reported emotions by the author of the tweet. We analyse the top 10 hashtags by combining all terms prefixed by a #. For # symbols that are stand-alone, we take the next unigram to be the hashtag. We identify emojis using the package emoji [31].

To investigate the most distinctive terms in our data, we used the pointwise Kullback-Leibler divergence for Informativeness and Phraseness (KLIP) [72] as presented in [76] for unigrams, bigrams and trigrams. Kullback-Leibler divergence is a measure from information theory that estimates the difference between two probability distributions. The informativeness component (KLI) of KLIP compares the probability distribution of the background corpus to that of the candidate corpus to estimate the expected loss of information for each term. The terms with the largest loss are the most informative. The phraseness component (KLP) compares the probability distribution of a candidate multi-word term to the distributions of the single words it contains. The terms for which the expected loss of information is largest are those that are the strongest phrases. We set the parameter γ, which determines the relative weight of the informativeness component KLI versus the phraseness component KLP, to 0.8 as recommended for English text.

The emotional and psychological processes of authors can be studied by investigating their language use. A well-known method to do so is the Linguistic Inquiry and Word Count (LIWC) method [69]. We made use of the LIWC 2015 version and focused on the categories Emotions, Social Processes, Cognitive Processes, Drives, Time, Personal Concerns and Informal Language. In short, the LIWC counts the relative frequency of words relating to these categories based on manually curated word lists. All statistical comparisons were done with Mann-Whitney U tests.
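For illustration, a minimal sketch of KLIP scoring as we read [72, 76]; the add-one smoothing is our own choice and not prescribed by either paper:

```python
import math
from collections import Counter

GAMMA = 0.8  # relative weight of informativeness (KLI) vs. phraseness (KLP)

def ngrams(tokens, k):
    return [" ".join(tokens[i:i + k]) for i in range(len(tokens) - k + 1)]

def smoothed_prob(counts, key):
    # add-one smoothing so unseen background terms do not divide by zero
    return (counts[key] + 1) / (sum(counts.values()) + len(counts) + 1)

def klip_scores(fg_tokens, bg_tokens, n=2):
    """Score n-grams of a candidate corpus against a background corpus."""
    fg_uni = Counter(fg_tokens)
    fg_n, bg_n = Counter(ngrams(fg_tokens, n)), Counter(ngrams(bg_tokens, n))
    scores = {}
    for term in fg_n:
        p_fg = smoothed_prob(fg_n, term)
        # informativeness: expected information loss vs. the background corpus
        kli = p_fg * math.log(p_fg / smoothed_prob(bg_n, term))
        # phraseness: how much stronger the phrase is than its unigrams combined
        p_indep = math.prod(smoothed_prob(fg_uni, w) for w in term.split())
        klp = p_fg * math.log(p_fg / p_indep)
        scores[term] = GAMMA * kli + (1 - GAMMA) * klp
    return scores
```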
This section describes the results obtained from our analysis of both the 443 tweets classified as misinformation and the 264 893 COVID-19 background tweets. From the 443 tweets, we filtered 375 unique accounts and performed a categorisation of the accounts using the methods described in Section 4.1. A summary of the results is shown in Table 2.

Bot detection: We used the Complete Automation Probability (CAP) score to classify bots. CAP is the probability of the account being a bot according to the model used in the API. We chose a CAP score of more than 0.67. We discovered 6 bot accounts among the 372 user accounts; user IDs 1025102081265360896 and 1180933034423529473, for instance, are classified as bots.

Brand detection: For brand detection, we used the TwiRole API and categorised as a brand each account with a classification score of more than 62%; we performed a random check, and all accounts above this prediction rate were classified correctly. We obtained 154 brand accounts. For instance, user ID 18815507 is an organisation account, while user ID 621533 is a representative of UNICEF.

Popularity: We gathered information about the favourite counts gained by the accounts, the follower counts, the friend counts, and the age of the accounts using the Twitter API. We report the medians of favourite count, account age and follower count in Table 2.

In this section, we describe the propagation of misinformation with a timeline analysis and the speed of propagation. In Figure 5, we present our results from January to April 2020, with one plot for each month. The blue colour indicates the propagation of the false category, and orange indicates the partially false category. We calculated the number of retweets in 3-hour windows for both the false and partially false categories and plotted the number of retweets for each day from 20-01-2020 to 25-04-2020. We chose a 3-hour duration to adjust the counts for plotting the false and partially false categories. The timeline analysis of retweets shows that the propagation of misinformation (false category) is faster than that of the partially false category, and that the spread of misinformation was strongest from mid-March to mid-April 2020. The spread of misinformation was at its peak from 16 to 23 March 2020, the time when COVID-19 spread across the globe. The median total retweet counts of the false and partially false category tweets are 128 and 48, respectively. Tweets of the false category get more retweets and likes than partially false ones, which suggests that the reach of completely fabricated news is greater.

We calculated the three variants of the propagation speed of tweets as discussed in Section 4.2. Results for P_s_a, P_s_pt and P_s_pcv are described in Table 3. We observed that the speed of propagation is higher for the false category, and that it was highest during the peak time of the Tweet (the duration from the beginning until the Tweet stops getting new retweets). We performed a chi-square test on the propagation speeds shown in Table 3. The analysis showed that there is a difference in the speed of propagation between false and partially false tweets (χ²(3, N = 443) = 10.23, p < .001). In particular, the propagation speed was at its maximum during the peak time of the Tweet.
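Such a test can be reproduced with standard tooling; below is a minimal sketch using scipy with a made-up contingency table of retweet counts per period, purely to illustrate the shape of the analysis, not the study's data:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts of retweets in four periods; rows: false, partially false.
table = [
    [520, 310, 280, 90],
    [140, 60, 75, 40],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")  # dof = 3 for a 2x4 table
```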
This section discusses the results obtained from the content analysis of tweets containing false and partially false claims.

Hashtag analysis: As can be seen in Figure 6, many of the most commonly used hashtags in COVID-19 misinformation concern the coronavirus itself (i.e. #coronavirus, #corona, #covid19, #cov19, and #ncov2019). Since we did not use any hashtags in the data collection of our corpus of COVID-19 misinformation (see Section 3.1), this confirms that our method managed to capture misinformation related to the corona crisis. Additionally, the hashtag #fakenews stands out: the term fake news is widely used to refer to inaccurate information [84] or, more specifically, to "fabricated information that mimics news media content in form but not in organisational process or intent" [34]. Thus, it appears that some authors are discrediting information spread by others. Yet, we are unable to determine based on this analysis whom they are discrediting. Furthermore, two locations can be discerned from the hashtags: both Spain and Qom, the first city of Iran to have corona infections, seem to be connected to COVID-19 misinformation. Another topic that is reflected in the hashtags is Event 201 (#event201), a pandemic exercise co-organised by the Bill and Melinda Gates Foundation [25]. This event is known to be used as evidence for the claim that Bill and Melinda Gates predicted or profited from the coronavirus [52]. Lastly, it appears that some claims concern the public applauding of Spanish health care workers (#aplausosolidario). These tweets are partially false and are accompanied by videos of the police of Madrid applauding health care workers.

Emoji analysis: Emojis are used on Twitter to convey emotions. We analysed the most prevalent emojis used by authors of COVID-19 misinformation on Twitter (see Figure 7). It appears authors make use of emojis to attract attention to their claim (loudspeaker) and to convey distrust or dislike (downwards arrow) or danger (warning sign, police light). Emojis relating to certain countries (i.e. the United States and India) are also popular. Moreover, authors use emojis to direct attention towards URLs (pointing finger). The party emojis do not appear to be used sarcastically but in reference to actual parties (e.g. 'Carnival in Bahia -WHO WILL ? -URL-', which refers to a video showing Carnival in Bahia but of which some claim it shows a gay party in Italy shortly before the COVID-19 outbreak [2]).

Analysing the most distinctive terms in our corpus compared to a corpus of general COVID-19 tweets can reveal which topics are most unique. The more unique a topic is to the misinformation corpus, the more likely it is that for this topic there is a larger amount of misinformation than correct information circulating on Twitter. Table 4 shows which phrases have the highest KLIP score and are thus most distinct when we compare our corpus to the background corpus of COVID-19 tweets. First, we find that tweets about the number of corona cases are distinct to this corpus ('infected person', 'new confirmed cases', 'cases confirm coronavirus', 'tested positive', 'new cases'). Second, compared to general COVID-19 tweets, misinformation more often concerns discrediting information circulating on social media ('fake news', 'circulating on social media', 'social media'). An example is the following tweet: 'Messages being circulated on social media as WHO protocol for lockdown are baseless and FAKE. WHO does NOT have any… -URL-'. Third, information about corona in (certain cities of) Colombia ('colombia patients', 'bogota medellin') is distinct to our corpus of misinformation, e.g. '6 new cases confirm coronavirus in colombia . patients are in bogota, medellin and rionegro'. Lastly, other phrases that are more common in our corpus than in general COVID-19 tweets are: 'interferon alpha', which refers to the claim that Cuba is using interferon alpha as an effective medication against the coronavirus; 'aerosol infection', which concerns whether corona is airborne; 'kibundani kwale', which refers to an alleged incident in Kenya of youth beating up a corona patient; and 'bicentennial bank', which refers to the rumours surrounding images of a robbery of this bank in Venezuela by bandits [13].

To investigate the emotions and psychological processes displayed by authors in their tweets, we use the LIWC to estimate the relative frequency of words relating to each category [69]. The LIWC is a good proxy for measuring emotions in tweets: in a recent study of emotional responses to COVID-19 on Twitter, Kleinberg et al. [32] found that the measures of the LIWC correlate well with self-reported emotional responses to COVID-19.
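Since LIWC itself is proprietary, the following minimal sketch only illustrates the general approach, relative word-category frequencies compared with a Mann-Whitney U test; the category word lists are toy stand-ins for LIWC's curated lists:

```python
from scipy.stats import mannwhitneyu

# Toy stand-ins for LIWC categories; the real lists are proprietary.
CATEGORIES = {
    "anxiety": {"worried", "fearful", "nervous"},
    "certainty": {"always", "never", "definitely"},
}

def category_rate(tokens, category):
    """Relative frequency of category words in one tokenized tweet."""
    words = CATEGORIES[category]
    return sum(t in words for t in tokens) / max(len(tokens), 1)

def compare(category, corpus_a, corpus_b):
    """Mann-Whitney U test between two corpora for one category."""
    a = [category_rate(t, category) for t in corpus_a]
    b = [category_rate(t, category) for t in corpus_b]
    return mannwhitneyu(a, b, alternative="two-sided")

misinfo = [["never", "trust", "them"], ["definitely", "a", "hoax"]]
background = [["worried", "about", "new", "cases"], ["stay", "home"]]
print(compare("certainty", misinfo, background))
```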
We first compared the misinformation on COVID-19 with the background corpus of tweets on COVID-19. At first glance, emotions such as anger, happiness and sadness did not appear to differ (Figure 8). However, both positive and negative emotions are significantly less prevalent (p < 0.001 and p = 0.002, respectively) in tweets with COVID-19 misinformation than in COVID-19 related tweets in general. This is also the case for specific negative emotions, namely anger (p = 0.002), anxiety (p = 0.01) and sadness (p < 0.001). Tweets containing misinformation are also significantly less likely to discuss family (p = 0.007), but not less likely to discuss friends. When we consider cognitive processes that can be discerned from language use, we see that tweets containing misinformation are significantly less tentative in what they say (p < 0.001). They use significantly more language reflecting certainty (p = 0.001). They also give fewer explanatory reasons or causes (e.g. words like because and hence) (p < 0.001), and contain fewer words relating to the discrepancy between the present (i.e. what is now) and what could be (i.e. what would, should or could be) (p < 0.001). Yet, tweets with misinformation use more language that reflects differentiation (words like but or else) (p = 0.03). In this context, this might reflect a differentiation from what others (e.g. traditional media) say. Overall, tweets containing COVID-19 misinformation are less likely to refer to what drives the authors than COVID-19 tweets in general (p < 0.001). All drives except risk, i.e. affiliations to others, reward, achievements, and power, are significantly less likely to occur (p < 0.001). Yet, words relating to risk (e.g. danger) are more frequent (p = 0.003). Thus, authors posting misinformation appear to be significantly more driven by concern about risks and by preventing others from coming to harm. Misinformation is also less likely to discuss personal concerns such as work, leisure and money (all p < 0.001), but also death (p = 0.015). The only personal concern that was not significantly less prevalent was religion. Further, COVID-19 tweets appear to have a particular focus on the present. COVID-19 misinformation also seems to focus on the present, but to a significantly lesser degree (p < 0.001), whereas the focus on past or future does not differ. Lastly, although both corpora are from Twitter, the COVID-19 tweets containing misinformation use relatively more informal language, with more so-called netspeak (e.g. lol and thx) (p < 0.001) and assent (e.g. OK) (p < 0.001), although significantly fewer swear words were used (p = 0.03).

Based on our analysis, we discuss our findings. We first look at lessons learned from using Twitter in an ongoing crisis before deriving recommendations for practice. We then scrutinise the limitations of our work, which form the basis for our summary of open questions. While conducting this research, we encountered a number of issues concerning the use of Twitter data to monitor misinformation in an ongoing crisis. We want to point these out in order to stimulate a discussion of these topics within the scientific community. The first issue is that the Twitter API severely limits the extent to which the reaction to and propagation of misinformation can be researched after the fact. One of the major challenges with collecting Twitter data is the fact that the Twitter API does not allow for the retrieval of tweet replies over 7 days old and limits the retrieval of retweets.
As it typically takes far longer for a fact-checking organisation to verify or discount a claim, this means early replies cannot be retrieved in order to gauge the public reaction before fact-checking. Recently, Twitter has created an endpoint specifically for retrieving COVID-19 related tweets in real time for researchers [73]. Although we welcome this development, it does not solve the issue at hand. Although large data sets of COVID-19 Twitter data are increasingly being made publicly available [14], as far as we are aware, these do not include replies or retweets either.

The second issue is that there is an inherent tension between the speed at which data analysis can be done to aid practitioners combating misinformation and the magnitude of Twitter data that can be included. In a crisis where speed is of the essence, this is not trivial. Our data was limited by the number of claims that included a tweet (for more on data limitations see Section 6.3), causing a loss of around 90% of the claims we collected from fact-checking websites. This problem could be mitigated to some extent by employing similarity matching to map misinformation verified by fact-checking organisations to tweets in COVID-19 Twitter data [14]. However, this would be computationally intensive and require the creation of a reliable matching algorithm, making this approach far slower. Moreover, automatic methods for creating larger data sets will also lead to noisier data. Thus, such an approach should rather be seen as complementary to our own. Probably, social media analytics support can draw from lessons learned on crisis management decision making under deep uncertainty [56]. Eventually, more work and a scientific debate on this topic are necessary. Additionally, as an academic community it is important to explicitly convey what can and what cannot be learned from the data, so as to prevent practitioners from drawing unfounded conclusions. The other way around, we deem it necessary to "look over the shoulder" of practitioners to learn about their way of handling the dynamics of social media, eventually leading to better theory.

A third point that must be considered by the academic community researching this subject is the risk of profiling Twitter users. There have been indications that certain user characteristics such as gender [15] and affiliation with the alt-right community [70] may be related to the likelihood of spreading misinformation. Systematic analyses of these characteristics could prove valuable to practitioners battling this infodemic but simultaneously raise serious concerns related to discrimination. In this article, we did not analyse such characteristics, but we urge the scientific community to consider how this could be done in an ethical manner. Better understanding which kinds of people create, share and succumb to misinformation would greatly help in mitigating their negative influence.

Fourth, relying on automatic detection on fact-checked articles can lead to false results. The method used by fact-checkers is often confusing and messy: it is a muddle of claims, news articles and social media posts. Additionally, each fact-checker appears to have its own process of debunking and its own set of verdicts. We even encountered cases where fact-checkers discuss multiple claims in one go, resulting in additional confusion. Moreover, fact-checkers do not always explicitly specify the final verdict or class (false or not) of the claim. For example, in a fact check performed by Pesa Check [51], the claim "Chinese woman was killed in Mombasa over COVID-19 fears" is described, and the article links various news sources. Then, abruptly at the bottom, a tweet about a mob lynching of a man is embedded, and no specification of the class (false or not) of the article is given.
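As an illustration of the similarity-matching idea raised above, a minimal sketch using TF-IDF cosine similarity; the threshold and function name are our own choices, and any real deployment would need tuning and manual validation:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def match_claims_to_tweets(claims, tweets, threshold=0.5):
    """Map fact-checked claims to candidate tweets via TF-IDF cosine similarity."""
    vec = TfidfVectorizer(stop_words="english")
    matrix = vec.fit_transform(claims + tweets)
    sims = cosine_similarity(matrix[:len(claims)], matrix[len(claims):])
    return {
        claim: [tweets[j] for j in range(len(tweets)) if sims[i, j] >= threshold]
        for i, claim in enumerate(claims)
    }

claims = ["Costco recalled toilet paper over COVID-19 fears"]
tweets = ["costco is recalling toilet paper because of covid-19!",
          "new confirmed cases in bogota and medellin"]
print(match_claims_to_tweets(claims, tweets))
```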
Research on topics that relate to crisis management offers the chance not only to contribute to the scientific body of knowledge but also to give directly back to the field. Our findings allow us to draw a first set of recommendations for public authorities and others with an official role in crisis communication. Ultimately, these could also be helpful for all critical users of social media, and especially those who seek to debunk misinformation.

First, and rather unsurprisingly, closely watching social media is recommended (cf. e.g. [3, 80, 75]). COVID-19 has sparked much misinformation, and it propagates quickly. Our work indicates that this is not an ephemeral phenomenon. For the ordinary user, our findings suggest always being critical, even if alleged sources are given, and even if Tweets are rather old or make reference to old Tweets.

Second, our results serve as evidence that brands (organisations or celebrities) are involved in approximately 35% of the false and partially false categories of misinformation. They either create or circulate misinformation by performing activities such as liking or retweeting. This is in line with work by researchers from the Queensland University of Technology, who also found that celebrities are so-called "super-spreaders" of misinformation in the current crisis [9]. Thus, we recommend close monitoring of celebrities and organisations that have been found to spread misinformation in order to catch misinformation at an early stage. For users, this means that they should be cautious, even if a Tweet comes from their favourite celebrity.

Third, we recommend close monitoring of tags such as #fakenews that are routinely associated with misinformation. For Twitter users this also means that they have a chance to check whether a Tweet might be misinformation by checking the replies to it; replies tagged with, for instance, #fakenews would be an indicator of suspicion.

Fourth, we advise particular study of news that is partially false: despite an observed slower propagation, it might be more dangerous to those not routinely resorting to information that provides a desired reality rather than facts. As mentioned before, it may be more challenging for users to recognise claims as false when they contain elements of truth, as this has been found to be the case even for professional fact-checkers [37]. It is still an open question whether there is less partially false than completely false information circulating on Twitter, or whether fact-checkers are more likely to debunk completely false claims.

Fifth, we recommend that authorities carefully tailor their online responses. We found that emojis are used to appeal to the emotions when spreading fake news. On the one hand, one would expect a trusted source to have a neutral, non-colloquial tone. On the other hand, it seems advisable to adopt the typical tone of social media, e.g. by also using emojis to some degree. We also advise authorities to employ tools of social media analytics.
This will help them to keep updated on developing misinformation, as we found that, for example, psycho-linguistic analysis can reveal particularities that distinguish misinformation from the "usual talk" on social media. Debunking fake news and keeping information sovereignty is an arms race; using social media analytics to keep pace is therefore advisable. In fact, we would recommend that authorities employ methods such as the ones discussed in this paper, as they work not only ex post but also during an ongoing infodemic. However, owing to the limitations of data analysis and of API usage (cf. the prior and the following sections), we recommend making social media monitoring part of the communication strategy, potentially also monitoring it manually. This advice in general applies to all Twitter users: commenting on something is like shouting out loudly on a crowded street, except that the street is potentially crowded by everyone on the planet with Internet access. Whatever is tweeted might have unsought-for consequences.

Lastly, we recommend working timely, yet calmly, and with an eye for the latest developments. During our analysis, we encountered much bias, not only on Twitter, but also in the media and even in science. Topics such as the justification of lockdown measures already spark heated scientific debate and offer much controversy. Traditional media, which supposedly have well-trained science journalists, will cite vague and cautiously phrased ideas from scientific preprints as seeming facts, ignoring that the ongoing crisis mandates preprints to be accessible before peer review. Acting incautiously on misinformation will not only likely create more misinformation, but may also erode trust. Our final recommendation for officials is, thus, to be the trusted source in an ocean of potential misinformation. These recommendations must not be mistaken for definitive guidelines, let alone a handbook. They should offer some initial aid, though. Moreover, formulating them supports the identification of research gaps, as will be discussed along with the limitations in the following two subsections.

Due to its character as complete yet early research, our work is bound to several limitations. Firstly, we are aware that there may be a selection bias in the collection of our data set, as we only consider rumours that were eventually investigated by a fact-checking organisation. Thus, our data probably excludes less viral rumours. Additionally, we limited our analysis to Twitter, based on prior research by [16] that found that, of the mainstream media, it was most susceptible to misinformation. Nonetheless, this does limit our coverage of online COVID-19 misinformation. We are also aware that we introduce another selection bias through our data collection method, as we only include rumours for which the fact-checking organisation refers to a specific tweet ID in its analysis of the claim. Furthermore, we cannot be certain that this tweet ID refers to the tweet spreading misinformation, as it could also refer to a later tweet refuting this information or an earlier tweet spreading correct information that was later re-purposed for spreading misinformation. Two examples of this are: (1) "Carnival in Bahia -WHO WILL ? -URL-", which refers to a video showing Carnival in Bahia but of which some claim it shows a gay party in Italy shortly before the COVID-19 outbreak [2], and (2) "i leave a video of what happened yesterday 11/03 on a bicentennial bank in merida .
yes, these are notes.", which is the correct information for a video from 2011 that was re-purposed to wrongly claim that Italians were throwing cash away during the corona crisis [13]. Second, our interpretation of both hashtag and emoji usage by authors of misinformation is limited by our lack of knowledge of how the authors intended them. Both are culturally and contextually bound, as well as influenced by age and gender [27], and open to changes in their interpretation over time [44]. However, none of these limitations impairs the originality and novelty of our work; in fact, we gave first recommendations for practitioners and are now able to propose directions for future research.

On the one hand, work on misinformation in social media is not a new phenomenon. On the other hand, the current crisis has made it clear how harmful misinformation is. Obviously, strategies to mitigate the spread of misinformation are needed. This leads to open research questions, particularly in light of the limitations of our work. Open questions can be divided into four categories.

First, techniques, tools and theory from social media analytics must be enhanced. It should become possible, ideally in a half- or fully-automated fashion, to assess the propagation of misinformation. Understanding where misinformation originates, in which networks it circulates, how it is spread, when it is debunked, and what the effects of debunking are ought to be researched in detail. As we already set out in this paper, it would be ideal to provide as much discriminatory power as possible, for example by distinguishing misinformation that is completely and partly false; misinformation that is spread intentionally and by accident (maybe even with good intentions, but not knowing better); and misinformation that is shared only in silos versus misinformation that leaves such silos and propagates further. Not only would such a typology (maybe even a taxonomy) make a valuable contribution to theory, but so would in-depth studies of the propagation by type. Such insights would also aid fact-checkers, who would for example learn when it makes sense to debunk facts, and whether there is a "break even" point after which it is justified to invest the effort for debunking.

Second, since a holistic approach is necessary to effectively tackle misinformation, it is important to investigate how our results, and future results on the propagation of misinformation on Twitter, relate to other social media. While Twitter is attractive for study and important for misinformation due to its brevity and speed, other social media should also be researched. COVID-19 misinformation is not necessarily restricted to a single platform and may thus be spread from one platform to another. Consequently, fact-checking organisations may not mention any tweets despite a claim also being present on Twitter. Especially if the origin of the claim was another platform, there may be several seeds on Twitter as people forward links from other platforms. As part of this, the spread of fake news through closed groups and messages would make an interesting object of study.

Third, the societal consequences of fake news ought to be investigated. There is no doubt society is negatively impacted, but to what extent these impacts occur, whom they affect, and how the offline spread of misinformation can be mitigated remain open research questions. Again, achieving high discriminatory power would be very helpful to counter misinformation.
For example, it would be worthwhile to investigate how the diffusion of misinformation about COVID-19 differs per country. In this regard, specifically the relation between trust and misinformation is a topic that requires closer investigation. In order for authorities to maintain information sovereignty, users (in this case typically citizens) need to trust the authorities. Such trust may vary widely from country to country. In general, a high level of trust, as achieved in the Nordic countries [38, 5], should help mitigate misinformation. Thus, a better understanding of how authorities can gain and maintain a high level of trust could greatly benefit effective crisis management.

Fourth, synergetic research between the fields of social media analytics and crisis management could benefit both fields. On the one hand, social media analytics could benefit from the expertise of crisis managers and researchers in the field of crisis management in order to better interpret its findings and to guide its research into worthwhile directions. On the other hand, researchers in crisis management could make use of novel findings on the propagation of misinformation during crises to improve their existing theoretical models in order to provide holistic approaches to information dissemination throughout the crisis. Crisis management in practice needs a set of guidelines. What we provided here is just a starting point; an extension requires additional quantitative and especially qualitative research as well as validation by practitioners. Further collaboration between these fields is necessary.

In this article we have presented work on COVID-19 misinformation on Twitter. We have analysed Tweets that have been fact-checked, using techniques common to social media analytics. However, we decided on an exploratory approach to cater for the unfolding crisis. While this brings severe limitations with it, it also allowed us to gain insights otherwise hardly possible. Therefore, we have presented rich results, discussed our lessons learned, given first recommendations for practitioners, and raised many open questions. That there are so many questions, and thereby research gaps, is not surprising, as the COVID-19 crisis is among the few stress-like disasters where misinformation is studied in detail, and we are just at the beginning. Therefore, it was our aspiration to contribute, to a small degree, to mitigating this crisis. We hope that our work can stimulate the discussion and lead to discoveries by other researchers that make social media a more reliable data source. Some of the questions raised will also be on our future agendas. We intend to continue the very work of this paper, though in a less exploratory fashion. Rather, we will seek to verify our early findings quantitatively with much larger data sets. We will seek collaboration with other partners to gain access to historical Twitter data in order to investigate all replies and retweets to the tweets in our corpus. This extension should not only cover additional misinformation but also full sets of replies and retweets. Moreover, it would be valuable to study longitudinally how misinformation propagates as the crisis develops. Regarding COVID-19, medical researchers warn of a second wave [81], and maybe of further consecutive ones. Will misinformation also come in waves, possibly in conjunction with societal discussion, political measures, or other influencing factors?
Besides an extension of the data set, our work will be extended methodologically. For example, we seek to apply stance detection methods to determine the position of replies towards the claim. At the same time, we would like to qualitatively explore the rationale behind our observations.
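As a first approximation of such stance detection, an off-the-shelf natural language inference model could classify replies against claims in a zero-shot fashion. The following is a minimal sketch; the model choice, label set and example texts are assumptions for illustration, not the configuration of our study:

from transformers import pipeline

# An off-the-shelf NLI model used as a zero-shot stance classifier.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

claim = "5G networks spread the coronavirus"
reply = "This has been debunked repeatedly; there is no evidence for it."

# The pipeline fills each candidate label into the template and
# scores the resulting hypothesis against the reply.
result = classifier(
    reply,
    candidate_labels=["supports", "denies", "is neutral towards"],
    hypothesis_template="This reply {} the claim that " + claim + ".",
)
print(result["labels"][0], round(result["scores"][0], 3))

Such a coarse zero-shot signal could later be replaced by a stance model fine-tuned on rumour data.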
Right as we concluded our work on this article, "doctors, nurses and health experts [...] sound the alarm" over a "global infodemic, with viral misinformation on social media threatening lives around the world" [74]. They target tech companies, specifically those that run social media platforms. We take their letter as encouragement. The companies might be able to filter much more misinformation than they do now, but to battle this infodemic much more is needed. We hope we can help arm those who seek the truth!

References

AFP Fact Check, 2020a. This video has circulated in re[...] weeks before the incident at a coronavirus quarantine facility in India.
This video shows a Brazil carnival.
Social media in disaster risk reduction and crisis management.
Islamic revivalism: The case of the Tablighi Jamaat.
Trust - The Nordic Gold. Nordic Council of Ministers.
Occupants were unruly since morning.
WHO says fake coronavirus claims causing 'infodemic'.
Psychological reactance: A theory of freedom and control.
Celebrities 'super-spreaders' of fake news, Queensland researchers say.
Towards more systematic Twitter analysis: metrics for tweeting activities.
One of the internet's oldest fact-checking organizations is overwhelmed by coronavirus misinformation - and it could have deadly consequences.
The role of bot squads in the political propaganda on Twitter.
Italians throwing away cash in coronavirus crisis? No, photos of old Venezuelan currency dumped by robbers.
Covid-19.
Why students share misinformation on social media: Motivation, gender, and study-level differences.
The COVID-19 Social Media Infodemic.
BotOrNot: A system to evaluate social bots.
Was Coronavirus Predicted in a 1981 Dean Koontz Novel?
#COVID-19 on Twitter: Bots, Conspiracies and Social Media Activism.
Rumor cascades.
Assessing the risks of "infodemics" in response to COVID-19 epidemics.
COVID-19: the medium is the message.
The structural virality of online diffusion.
Get Back! You Don't Know Me Like That: The Social Mediation of Fact Checking Interventions in Twitter Conversations.
Disinformation and misinformation through the internet: Findings of an exploratory study.
Gender and Age Influences on Interpretation of Emoji Functions.
A Dean Koontz novel.
Linguistic signals under misinformation and fact-checking: Evidence from user comments on social media.
Characterising the inventive appropriation of emoji as relationally meaningful in mediated close personal relationships. In: Experiences of Technology Appropriation: Unanticipated Users, Usage, Circumstances, and Design.
Measuring Emotions in the COVID-19 Real World Worry Dataset.
Coronavirus Goes Viral: Quantifying the COVID-19 Misinformation Epidemic on Twitter.
The science of fake news.
Misinformation and Its Correction: Continued Influence and Successful Debiasing.
A hybrid model for role-related user classification on Twitter.
Checking how fact-checkers check.
Trust in political institutions. Nordic social attitudes in a European perspective.
The reproductive number of COVID-19 is higher compared to SARS coronavirus.
NLTK: the natural language toolkit.
Conceptualizing and designing a resilience information portal. In: Hawaii International Conference on Systems Science (HICSS-51).
Towards a resilience management guideline - cities as a starting point for societal resilience (2019).
Twitter under crisis: Can we trust what we RT?
"blissfully happy" or "ready to fight": Varying interpretations of emoji.
What are people tweeting about Zika? An exploratory study concerning its symptoms, treatment, transmission, and prevention. JMIR Public Health and Surveillance 3, e38.
Officer Bandit, 2020. I feel like this is the video that [...].
Yellow fever outbreaks and Twitter: Rumors and misinformation.
Ebola, Twitter, and misinformation: A dangerous combination?
Prior exposure increases perceived accuracy of fake news.
Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking.
HOAX: Reports that a Chinese woman was killed [...].
Fact-checking hoaxes and conspiracies about the coronavirus.
Here's what to expect from fact-checking in 2019.
Poynter Institute, 2020. The International Fact-Checking Network.
Reading the riots on Twitter: methodological innovation for the analysis of big data.
Deep uncertainty in humanitarian logistics operations: Decision-making challenges in responding to large-scale natural disasters.
Beautiful Soup documentation.
The fake news game: actively inoculating against the risk of misinformation.
A survey of Twitter rumor spreading simulations.
The spread of misinformation by social bots.
A first look at COVID-19 information and misinformation sharing on Twitter.
Coronavirus (COVID-19) tweets - early April.
Collections archive.
Did Costco Issue a Recall Notice for Toilet Paper?
World Health Organization declares global emergency: A review of the 2019 novel coronavirus (COVID-19).
Brand followers: Consumer motivation and attitude towards brand communications on Twitter.
The psychological meaning of words: LIWC and computerized text analysis methods.
Disinformation and blame: how America's far right is capitalizing on coronavirus.
The Guardian, 2020b. The WHO v coronavirus: why it can't handle the pandemic.
A language model approach to keyphrase extraction.
COVID-19 stream.
Doctors sound alarm over social media infodemic.
A work-in-process literature review: Incorporating social media in risk and crisis communication.
Evaluation and analysis of term scoring methods for term extraction.
The spread of true and false news online.
Coronavirus disease (COVID-19) advice for the public: Myth busters.
Coronavirus disease 2019 (COVID-19): situation report, 72.
Social media use in emergency management.
Beware of the second wave of COVID-19.
World Report: How to fight an infodemic.
Estimation of the reproductive number of novel coronavirus (COVID-19) and the probable outbreak size on the Diamond Princess cruise ship: A data-driven analysis.
Detection and Resolution of Rumours in Social Media: A Survey.
Analysing How People Orient to and Spread Rumours in Social Media by Looking at Conversational Threads.

Gautam Kishore Shahi is a PhD student in the Research Training Group User-Centred Social Media at the research group for Professional Communication in Electronic Media/Social Media (PROCO) at the University of Duisburg-Essen, Germany. His research interests are Web Science, Data Science, and Social Media Analytics.
He has a background in computer science, through which he has gained valuable experience in India, New Zealand, Italy and now Germany. Gautam received a Master's degree from the University of Trento, Italy, and a Bachelor's degree from BIT Sindri, India. Outside of academia, he worked as an Assistant System Engineer for Tata Consultancy Services in India.

Anne Dirkson is a PhD student at the Leiden Institute of Advanced Computer Science (LIACS) of Leiden University, the Netherlands. Her PhD focuses on knowledge discovery from health-related social media and aims to empower patients by automatically extracting, from their online conversations, information about their quality of life and the knowledge they have gained from experience. Her research interests include natural language processing, text mining and social media analytics. Anne received a BA in Liberal Arts and Sciences from University College Maastricht of Maastricht University and an MSc degree in Neuroscience from the Vrije Universiteit Amsterdam.

Tim A. Majchrzak is a professor of Information Systems at the University of Agder (UiA) in Kristiansand, Norway. He is also a member of the Centre for Integrated Emergency Management (CIEM) at UiA. Tim received BSc and MSc degrees in Information Systems and a PhD in economics (Dr. rer. pol.) from the University of Münster, Germany. His research comprises both technical and organizational aspects of Software Engineering, typically in the context of Mobile Computing. He has also published work on diverse interdisciplinary Information Systems topics, most notably targeting Crisis Prevention and Management. Tim's research projects typically have an interface to industry and society. He is a senior member of the IEEE and the IEEE Computer Society, and a member of the Gesellschaft für Informatik e.V.