title: Introducing an Abusive Language Classification Framework for Telegram to Investigate the German Hater Community
authors: Wich, Maximilian; Gorniak, Adrian; Eder, Tobias; Bartmann, Daniel; Çakici, Burak Enes; Groh, Georg
date: 2021-09-15

Since traditional social media platforms continue to ban actors spreading hate speech or other forms of abusive language (a process known as deplatforming), these actors migrate to alternative platforms that do not moderate their users' content. One popular platform relevant for the German hater community is Telegram, for which only limited research efforts have been made so far. This study aims to develop a broad framework comprising (i) an abusive language classification model for German Telegram messages and (ii) a classification model for the hatefulness of Telegram channels. For the first part, we use existing abusive language datasets containing posts from other platforms to develop our classification models. For the channel classification model, we develop a method that combines channel-specific content information collected from a topic model with a social graph to predict the hatefulness of channels. Furthermore, we complement these two approaches for hate speech detection with insightful results on the evolution of the hater community on Telegram in Germany. We also propose to the hate speech research community methods for conducting scalable network analyses of social media platforms. As an additional output of this study, we provide an annotated abusive language dataset containing 1,149 annotated Telegram messages.

Hate speech and other forms of abusive language are a severe challenge that social media platforms, such as Facebook, Twitter, and YouTube, are facing nowadays (Duggan 2017). Moreover, this problem is not limited to the online world; studies have shown that online hate correlates with physical crimes in the real world (Müller and Schwarz 2021; Williams et al. 2020), making the phenomenon a societal challenge for everybody. To enforce a fast reaction to harmful content on social media platforms, Germany has passed a set of laws (Network Enforcement Act) to force social media companies to take action against hate speech on their platforms (Rafael 2019; Echikson and Knodt 2018). These actions range from deleting single posts that contain hateful content to banning actors from the platform, which is called deplatforming (Fielitz and Schwarz 2020). While deplatforming helps limit the reach of these hate actors (Fielitz and Schwarz 2020), it often makes them migrate to less regulated or unregulated platforms, where they continue their hateful communication (Rogers 2020; Fielitz and Schwarz 2020; Urman and Katz 2020); one such alternative social media platform is Telegram (Rogers 2020; Fielitz and Schwarz 2020; Urman and Katz 2020). In Germany, Telegram has become the focal point for right-wing extremists, conspiracy theorists, and COVID-19 deniers (Fielitz and Schwarz 2020; Urman and Katz 2020; Eckert, Leipertz, and Schmidt 2021). Along with this rapid increase in popularity and usage by various user types, two important challenges regarding abusive language detection arise: first, the automatic detection of abusive content in such texts and, second, an aggregated view on the account level to identify hateful accounts. For both challenges, we propose a machine learning-based approach.
Previously, most research efforts on detecting hate speech, especially in German texts, focused on posts and comments from Twitter and Facebook (Ross et al. 2016; Bretschneider and Peters 2017; Struß et al. 2019; Wiegand, Siegel, and Ruppenhofer 2018; Mandl et al. 2019, 2020; Wich, Räther, and Groh 2021) but not on Telegram. We want to bridge this gap and build abusive language classification models for Telegram messages. Because there is no abusive language dataset available that contains labeled Telegram messages in German, our approach is to use existing German abusive language datasets collected from other platforms and construct a classification model for Telegram. This leads to the first research question for this study:

RQ1 Can existing abusive language datasets from other platforms be used to develop an abusive language classification model for Telegram messages?

Because the development of an abusive language classification model requires significant amounts of data, we collected such data from the platform (Telegram) over a longer period. By collecting these data, we could also formulate additional questions about the type of content and its spread on the social media platform. Because there is little research on these types of communication channels and their content, we were also interested in how this content has changed over a longer period, while deplatforming was occurring on other social media. Thus, we formulate an additional research question in terms of message contents:

RQ2 How did the prevalence of abusive content evolve on Telegram in recent years?

Moving away from the message-level approach and toward a user-based approach for abusive language detection, no methodology has been introduced so far to address this problem for Telegram. As a solution, we propose developing a graph model leveraging topical information for channels in a German hater community on Telegram to find suitable representations, leading to the third research question:

RQ3 Can a classification model be used to predict whether a Telegram channel is hateful or not?

Lastly, maintaining the channel perspective, we were interested in investigating whether our approach would allow for the derivation of channel clusters and communities, which is another important aspect of online hate. For this, we analyzed the topical distribution and the graph embeddings for each channel, resulting in research question four:

RQ4 Can we leverage the topical distribution and graph embeddings to derive meaningful clusters of channels?

As an additional contribution, we release an abusive language dataset containing 1,149 Telegram messages labeled as abusive or neutral.

Studies on Telegram are limited, but their number has begun to grow in recent years. Baumgartner et al. (2020) released an unlabeled dataset containing 317,224,715 Telegram messages from 27,801 channels, which were posted between 2015 and 2019. They used a snowball sampling strategy to discover channels and collect messages, starting with approximately 250 seed channels (mainly right-wing channels or channels about cryptocurrency). Rogers (2020) conducted an empirical study on actors who were deplatformed on traditional social media and migrated to Telegram. As part of their study, they used a classification model based on hatebase.org to detect messages with hateful language (Rogers 2020). Urman and Katz (2020) conducted an in-depth network analysis of a far-right community on Telegram.
They used a snowball sampling strategy to uncover this community, starting with a German-speaking far-right actor. Fielitz and Schwarz (2020) analyzed German hate actors across various social media platforms and investigated the impact of deplatforming activities on these actors. According to them, "Telegram has become the most important online platform for hate actors in Germany" (Fielitz and Schwarz 2020, p. 5). With a focus on COVID-19, Hohlfeld et al. (2021) and Holzer (2021) investigated public German-speaking channels on Telegram. The only labeled abusive language dataset with Telegram messages that we found is provided by Solopova, Scheffler, and Popa-Wyatt (2021). They released a dataset containing 26,431 messages in English from a channel supporting Donald Trump. To the best of our knowledge, no study has developed an abusive language classification model for German Telegram messages or channels.

Because there is no annotated German Telegram dataset available, we decided to train our classification models on existing German abusive language datasets. In total, we found eight such datasets (Ross et al. 2016; Bretschneider and Peters 2017; Wiegand, Siegel, and Ruppenhofer 2018; Struß et al. 2019; Mandl et al. 2019, 2020; Wich, Räther, and Groh 2021). We decided to use five of them, which constitute the most recent ones. These five datasets have comparable label schemata, and a large portion of the data is from the same period as our collected Telegram data. One dataset was excluded because its data were only pseudo-labeled. More details on the selected datasets can be found in the following section.

In the first part, we describe how we collected data from Telegram. After that, we provide details on how we developed the abusive language classification model for Telegram messages based on datasets from other platforms. In the third part, we describe how we developed a classification model to predict whether a channel is a hater, based on the results from the message classifier and the social graph.

We used a snowball sampling strategy to collect data from Telegram. We only collected messages from public channels that were accessible via the website t.me. A channel is comparable to a news feed: the channel operator can broadcast messages to subscribers of the channel, but subscribers cannot directly post messages to the channel. Groups and private chats were excluded from the data collection process. As seeds for the snowball sampling strategy, we used a list of German hate actors proposed by Fielitz and Schwarz (2020). At the time of data collection, 51 channels from Fielitz and Schwarz (2020)'s list were still accessible. The list comprises, among others, far-right actors, supporters of QAnon, and alternative media. In the first round of snowball sampling, we collected messages from all seed channels. In the next round, we collected all channels that were mentioned in messages collected from the first round or whose messages were forwarded by the channels of the first round. We repeated this procedure in the third round but, due to the large number of newly discovered channels, applied a threshold: a channel must be mentioned or forwarded by at least five channels for us to collect its messages. From all channels, we collected messages that were posted between 01/01/2019 and 03/15/2021.
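To make the sampling procedure more concrete, the following is a minimal sketch of the three rounds described above. The helper functions fetch_channel_messages() and extract_referenced_channels() are hypothetical placeholders for the actual Telegram crawling code, and the exact bookkeeping may differ from the authors' implementation.

```python
from collections import Counter

# Sketch of the three-round snowball sampling described above.
# fetch_channel_messages(channel) and extract_referenced_channels(messages)
# are hypothetical stand-ins for the crawling code; the latter is assumed to
# return the channels that a set of messages mentions or forwards from.

def snowball_sample(seed_channels, min_references=5):
    collected = {}  # channel -> list of messages

    # Round 1: collect messages from the seed channels.
    for channel in seed_channels:
        collected[channel] = fetch_channel_messages(channel)

    # Round 2: collect every channel mentioned or forwarded by the seeds.
    round2 = set()
    for messages in collected.values():
        round2 |= extract_referenced_channels(messages)
    for channel in round2 - collected.keys():
        collected[channel] = fetch_channel_messages(channel)

    # Round 3: only collect newly discovered channels that are referenced
    # (mentioned or forwarded) by at least `min_references` known channels.
    reference_counts = Counter()
    for messages in collected.values():
        for ref in set(extract_referenced_channels(messages)):
            reference_counts[ref] += 1
    for channel, count in reference_counts.items():
        if channel not in collected and count >= min_references:
            collected[channel] = fetch_channel_messages(channel)
    return collected
```

The threshold in the last round keeps the crawl tractable while still capturing channels that are repeatedly referenced within the already discovered network.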
After data collection, we conducted language detection on the messages because the crawling process also collects channels in other languages, such as Russian and English, and we wanted to keep the focus on German. We used multilingual word vectors from fastText to classify languages (Grave et al. 2018). The language detection is based on the message text and a link preview if one exists. In a second step, the language labels of the messages are aggregated on the channel level. The language of a channel is German if German is the most or second most common language in the channel. The reason for the latter is that some German channels primarily share content from foreign-language sources.

Models To classify Telegram messages, we trained several binary classification models on different German datasets. The goal is to combine multiple classifiers to improve classification performance because each dataset covers different aspects and topics of abusive language. The reason for focusing on binary classification was that it makes combining classifiers easier. All classification models are based on pretrained BERT base models (Devlin et al. 2019). We used deepset/gbert-base (Chan, Schweter, and Möller 2020) and dbmdz/bert-base-german-cased (https://huggingface.co/dbmdz/bert-base-german-cased), depending on the model's performance on the individual dataset. Our hyperparameters for training the models comprise a maximum number of eight epochs, a learning rate of 5 × 10^-5, and a batch size of eight. In addition, we implemented an early stopping callback that stops the training after four consecutive epochs without any improvement. We selected the model with the highest macro F1 score on the validation set. Before training the models, texts are preprocessed. The preprocessing steps comprise, among others, masking URLs and user names and replacing emojis.

Data We used the following German abusive language datasets collected from different platforms (mainly Twitter) to train our models:

• GermEval 2018: Wiegand, Siegel, and Ruppenhofer (2018) released an offensive language dataset as part of the shared task GermEval Task 2018. It contains 8,541 tweets with a binary label (offense, other) and a fine-grained label (profanity, insult, abuse, other). We used the train/test split proposed by the authors and used a 90/10 split for the training/validation set.

• HASOC 2019: Mandl et al. (2019) released a dataset whose German part contains 4,669 records with a binary label (non hate-offensive, hate and offensive) and a fine-grained label (hate, offensive, profanity). We used the train/test split proposed by the authors and used a 90/10 split for the training/validation set.

• HASOC 2020: Mandl et al. (2020) published another dataset, which is comparable to the previous one. It consists of posts from YouTube and Twitter in German, English, and Hindi. The German part has a size of 3,425 records using the same labeling schema as the previous dataset. We used the proposed train/validation/test split of 70%/15%/15%.

• COVID-19: Wich, Räther, and Groh (2021) released an abusive language dataset containing 4,960 German tweets that primarily focus on COVID-19. These tweets have a binary label (neutral, abusive). We used a train/validation/test split of 70%/15%/15%.

We trained individual classification models for all datasets, except for HASOC 2019, because we could not train a model that provides acceptable classification performance on it. Furthermore, we combined the GermEval and HASOC datasets and trained two additional classifiers on the two combined datasets. Combining these datasets was possible because the respective datasets use the same labeling schema.
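For illustration, the fine-tuning setup described above could look roughly like the following sketch using the Hugging Face transformers library; the dataset objects, output directory, and metric wiring are assumptions for illustration rather than the authors' code.

```python
import numpy as np
from sklearn.metrics import f1_score
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer, EarlyStoppingCallback)

model_name = "deepset/gbert-base"  # or "dbmdz/bert-base-german-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"macro_f1": f1_score(labels, preds, average="macro")}

args = TrainingArguments(
    output_dir="abusive-language-bert",   # illustrative path
    num_train_epochs=8,                   # maximum of eight epochs
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,          # keep the checkpoint with the best macro F1
    metric_for_best_model="macro_f1",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,          # assumed: already tokenized Dataset objects
    eval_dataset=val_dataset,
    compute_metrics=compute_metrics,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=4)],
)
trainer.train()
```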
Classifying Telegram Messages Because a Telegram message can have up to 40,986 characters, the tokenized message may exceed the maximum sequence length of the BERT model, which is 512. To tackle this problem, we split all messages that had more than 412 words into parts with a maximum length of 412 words. When splitting a message, we made sure not to split sentences. For this purpose, we used the sentence detection method of the library spaCy (Honnibal et al. 2020). There were two reasons for setting the threshold to 412 words. First, using words instead of tokens was easier during preprocessing. Second, a word can be tokenized into multiple tokens. Therefore, we set the threshold to 412 instead of 512. Every part of a split message was classified individually. The final label of the complete message results from the highest probability for the abusive class. The reason for this approach is that an abusive text can contain non-abusive sentences, but not the other way around. In addition to the six classification models, we used Google's Perspective API to classify Telegram messages. The API returns a toxicity score between 0 and 1, representing how toxic the content of a text is. We used these classification results as a baseline to benchmark our models.

Evaluating Classification Models To evaluate the classification performance of our trained models on Telegram messages, five annotators manually annotated 1,150 of the classified Telegram messages. More information about the annotators follows below. The 1,150 messages originated from two different sampling strategies. The first strategy uses the classification results of the six trained models and the Perspective API. For each classifier, we sampled 50 messages classified as abusive and 50 classified as neutral, resulting in a total of 700. The second strategy used a topic model trained on Telegram messages (more details on the topic model can be found in the subsection Topic Model). We randomly sampled 30 messages from each of the 15 most prominent topics. Finally, we ensured that the annotation candidates do not contain any duplicates. Through this sampling, we ensured that the dataset has a certain degree of abusive content and that it represents the most relevant topics.

We use the labeling schema of the COVID-19 dataset proposed by Räther (2021) and Wich, Räther, and Groh (2021) because it is compatible with the binary schema of the HASOC and GermEval datasets:

• ABUSIVE: The tweet comprised "any form of insult, harassment, hate, degradation, identity attack, or the threat of violence targeting an individual or a group." (Räther 2021, p. 36)

• NEUTRAL: The tweet did "not fall into the ABUSIVE class." (Räther 2021, p. 36)

Data were annotated by four nonexperts and one expert, all of whom are male and in their twenties or early thirties. The annotation process consisted of three phases. In phase 1, the expert presented and explained the annotation guidelines to the four nonexperts. Subsequently, all five annotators annotated the same 50 messages. In 18 cases, the annotators did not agree on the final label. These cases were discussed in a meeting to align the five annotators. In phase 2, the annotators annotated the remainder of the 1,150 messages. Each message was annotated by two different annotators. The annotators were allowed to skip a message if they could not decide on a label. In phase 3, messages without a consensus were annotated by three additional annotators so that a majority vote was possible. We used Krippendorff's alpha (Krippendorff 2004) to measure inter-rater reliability. To assist with the annotation, we used the text annotation tool of Kili Technology (Kili Technology 2021).
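For completeness, inter-rater reliability of this kind can be computed with, for example, the krippendorff Python package; the toy ratings below are purely illustrative and are not the study's annotations.

```python
import numpy as np
import krippendorff  # pip install krippendorff

# Toy example: rows are annotators, columns are messages,
# 0 = NEUTRAL, 1 = ABUSIVE, np.nan = the annotator skipped the message.
ratings = np.array([
    [0, 1, 1, 0, np.nan, 1],
    [0, 1, 0, 0, 1,      1],
])

alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha: {alpha:.2f}")
```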
Combining Classification Models Because the datasets, and consequently the classification models, cover different aspects of abusive language, we combined the six classifiers to improve classification performance (the Perspective API was not part of the combination). The labels produced by this combination were used for the subsequent experiments.

Analyzing Evolution of Abusive Content We performed two analyses to evaluate the evolution of abusive content in the German hater community on Telegram to answer RQ2. First, we compared the number of abusive messages with all messages from the collected German channels between 01/01/2019 and 02/28/2021 on a monthly level. We excluded the messages posted in March 2021 because we did not have data for the entire month. Then, we examined the relative share (prevalence) of abusive content in the messages from all German channels for the same period and granularity. In addition, we reported the prevalence of abusive content in the seed channels and in the first-degree network of the seed channels.

Channel Labels We had to determine a label for each channel based on the abusive messages in the channel. We defined a hater as a channel that posted or forwarded at least one abusive message. However, setting the threshold to one proved problematic due to the possibility of misclassification, meaning that false positives would cause neutral channels to be classified as haters. Instead, we calculated a threshold based on the conditional probability that a message is actually neutral given that it was classified as abusive. This conditional probability is derived from a confusion matrix (Figure 1h). Because we intentionally oversampled the abusive class in the evaluation set, the ratio of abusive texts was no longer representative of the entire dataset, so we had to reweight the confusion matrix's rows. We assume that the relative share of abusive content is 3.1% for 2020, based on the results from the analysis of the abusive content's evolution. The resulting conditional probability is 82.9%. Requiring the probability that all abusive-classified messages of a channel are false positives (p^n, with p = 82.9% and n the number of such messages) to stay below 5.0%, we need at least 17 messages classified as abusive to be confident that at least one message is truly abusive.

Second, we created a directed graph representing the network of channels. Each channel is a node; a directed edge from node A to node B exists if A either mentions B or forwards a message from B. We assigned a topic distribution vector as a feature to each node, representing the topical distribution within the messages of the channel. The topical distribution was calculated on the basis of a topic model generated with Top2Vec (Angelov 2020). We relied on the hyperparameter selection of the author, used the distiluse-base-multilingual-cased pretrained sentence transformer as the embedding model, and sampled 250,000 messages (500 messages from each of the 500 channels containing the largest number of messages in our dataset) as training samples. From the 100 most relevant topics, we manually chose nine topics to serve as proxies for hateful content. They are listed in Table 1: the topic name in the first column was derived on the basis of the most descriptive terms of the respective topic vectors, of which we provide the first three terms in the second column (in German). Because we are working with many channels that can be associated with German hater communities, we relied only on these topics to cluster different topical emphases with respect to potentially harmful content. We aggregated, for each channel, the counts of all documents in our dataset with a cosine similarity greater than 0.5 to any of the selected topics and normalized these counts to create a topic distribution for each node.
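A minimal sketch of this feature construction follows, assuming doc_vectors (document embeddings from the topic model), topic_vectors (the nine selected topic vectors), and doc_channels (the channel each document belongs to) are already available as arrays; it illustrates the aggregation rule rather than reproducing the authors' code.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def channel_topic_features(doc_vectors, topic_vectors, doc_channels, threshold=0.5):
    """Build a normalized per-channel distribution over the selected topics.

    doc_vectors:   (n_docs, dim) document embeddings from the topic model
    topic_vectors: (n_topics, dim) vectors of the manually selected topics
    doc_channels:  sequence of length n_docs with the channel id of each document
    """
    similarities = cosine_similarity(doc_vectors, topic_vectors)  # (n_docs, n_topics)
    features = {}
    for channel, sims in zip(doc_channels, similarities):
        counts = features.setdefault(channel, np.zeros(topic_vectors.shape[0]))
        counts += (sims > threshold)          # count documents per matching topic

    # Normalize the counts so every node carries a topic *distribution*.
    for channel, counts in features.items():
        total = counts.sum()
        if total > 0:
            features[channel] = counts / total
    return features
```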
Graph Model We used GraphSAGE to generate embeddings for the graph (Hamilton, Ying, and Leskovec 2017). Specifically, we used the Directed GraphSAGE method from the StellarGraph library (CSIRO's Data61 2018). As we were learning unsupervised embeddings, i.e., we did not provide the learning model with the channels' labels, we used StellarGraph's corrupted generator for sampling additional training data. During training, the model learned to differentiate between true graph instances and corrupted ones. The model was trained for 500 epochs with two layers of size 32.

Channel Classification We developed a neural network (NN) classification model using the graph embeddings to predict the classes. The model consists of two densely connected NN layers. The input to the first layer is a 32-dimensional graph embedding. The second layer (output) has two units due to the binary task. The first layer uses a rectified linear unit activation function, whereas softmax is applied to the output layer. To train the model, cross-entropy was used as the loss function with accuracy as the metric, using an Adam optimizer. We trained the model for a maximum of 150 epochs with a batch size of eight and an early stopping strategy with a patience of 100 epochs and a minimum delta of 0.05 for accuracy on the validation set. The dataset was split into training/validation/test sets (70%/15%/15%). The dataset for RQ3 only used messages from 2020 because the social network on Telegram is rapidly evolving and changing. That means that older edges might no longer be relevant and the network structure would generally be less meaningful. Another aspect of this decision is that the emergence of COVID-19 strongly influenced and accelerated the evolution of the network, much of which did not exist before the COVID-19 pandemic.
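The channel classification head described above can be sketched as follows; the width of the hidden layer is an assumption, as the text only fixes the 32-dimensional input, the two softmax output units, and the training hyperparameters.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Two-layer classifier on top of the 32-dimensional GraphSAGE channel embeddings.
model = tf.keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(32,)),  # hidden width is an assumption
    layers.Dense(2, activation="softmax"),                   # hater vs. neutral
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor="val_accuracy", patience=100, min_delta=0.05,
    restore_best_weights=True,
)

# x_train/x_val: channel embeddings; y_train/y_val: 0 = neutral, 1 = hater (assumed arrays).
model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          epochs=150, batch_size=8,
          callbacks=[early_stopping])
```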
In total, we collected 13,822,605 messages from 4,962 channels that were posted after 01/01/2019 and before 03/15/2021. 28.4% of all messages (3,931,136) are forwarded messages, showing the popularity and relevance of this feature on Telegram. In addition to the 4,962 channels, we collected the metadata of 43,142 additional channels that were either the source of forwarded messages or were mentioned in a message. 39.2% of all collected messages (5,421,845) are in German, which is the most frequent language, followed by English and Russian. 2,748 of the 4,962 crawled channels (55.4%) are classified as German-speaking according to our approach.

Models Table 2 presents the classification metrics of the six trained classification models. It comprises the precision, recall, and F1 score of the abusive class as well as the macro F1 score and the pretrained model that performed best on each dataset.

Evaluating Classification Models To test the trained classification models, we annotated 1,150 Telegram messages. One message was removed during the annotation process because it did not contain any text, resulting in 1,149 annotated messages. 968 (84.2%) were labeled as neutral and 181 (15.8%) as abusive. Krippendorff's alpha was 73.87%, which is a good inter-rater reliability score in the context of hate speech and abusive language (Kurrek, Saleem, and Ruths 2020). Figure 1 visualizes the classification performance of the various classifiers on the evaluation set. It presents the confusion matrix, the F1 score of the abusive class, and the macro F1 score of the six trained classification models (a-f), the Perspective API (g), and the best combination of the six classifiers (h).

Let us first compare the six classification models that we trained on the different datasets. The best-performing model is COVID-19; it outperformed the other models in terms of F1 score (54.95%) and macro F1 score (71.91%). Compared to its results on the COVID-19 test set, however, the performance decreased drastically. This should not be surprising because Telegram messages differ from tweets in terms of structure and content. To benchmark the performance of our classification models, we used Google's Perspective API to classify the messages. The API returns a toxicity score between 0 and 1 for a text. We converted this score into a label by setting a threshold: if the score is above or equal to the threshold, the label is abusive; otherwise, the label is neutral. We set the threshold to 0.5 because it produced the best macro F1 score. Figure 2 shows the macro F1 scores for various thresholds; the highest macro F1 score is achieved with a threshold of 0.5. Comparing the performance of the Perspective API with our best-performing model, our model has a higher F1 score (54.95% vs. 53.50%) and macro F1 score (71.91% vs. 70.51%). This is surprising because the Perspective API is built for comments, which are more similar to Telegram messages than tweets are.

Because the datasets cover different aspects of abusive language, we also examined whether a combination of all six classifiers can improve performance. Our finding is that a majority vote (at least four classifiers voting for abusive) of all six models is the best-performing combination in terms of the macro F1 score, as shown in Figure 1h. It outperforms the Perspective API and the classifier trained on the COVID-19 dataset in terms of macro F1 score. To validate the result, we applied McNemar's test (Dietterich 1998) to show that the best combination performs significantly differently (p < 0.05) from the Perspective API (p = 2.69 × 10^-5) and the COVID-19 classifier (p = 1.02 × 10^-3). Therefore, the best combination is the majority vote with at least four classifiers voting for abusive, which we used for the following two case studies.

Figure 3b shows how the number of messages in the German Telegram channels increased between the beginning of 2019 and 2021. We can trace the growth of these channels back to the phenomenon of deplatforming. Deplatforming means that actors are permanently banned from traditional social media platforms (e.g., Facebook, Twitter, and YouTube), resulting in them moving to less regulated or unregulated platforms (e.g., Telegram and Gab) (Rogers 2020; Fielitz and Schwarz 2020; Urman and Katz 2020). Notably, the increase in messages accelerated with the rise of COVID-19 (February 2020). The explanation is the same.
Traditional social media platforms (e.g., Twitter and YouTube) blocked accounts of hate actors spreading conspiracy theories regarding COVID-19, causing migration to Telegram and other alternative platforms (Fielitz and Schwarz 2020; Holzer 2021). Simultaneously with the growing number of messages every month (black curve), abusive content also increased (red curve). To answer the question of whether the abusive content has grown in the same manner, we plotted the relative share of abusive content in Figure 3b. The black line represents the relative share for all messages. We observe that the share of abusive content increased from 2.4% to 3.4% during the 26 months. The red line shows the portion of abusive messages in the seed channels. It is not surprising that this share is significantly higher because these channels were classified as hater channels by Fielitz and Schwarz (2020). The line follows the trend: the abusive content of the selected channels is growing. The green line, visualizing the percentage of abusive messages in the channels in the first-degree network of the seed channels, does not show this trend. A potential explanation is that the number of channels in the first-degree network has increased over time, causing an alignment of the relative share with the overall average. Overall, the prevalence of abusive content for the entire period is 3.1% for all channels, 5.3% for the seed channels, and 3.5% for the first-degree network of the seed channels. In summary, we observe the trend that messages classified as abusive by our combined model increase in absolute and relative terms in the German hater community on Telegram.

In this section, we report the results of our classification model for identifying hateful users, along with additional findings from the process of setting up our model. The dataset for developing a channel classification model contains 2,420 German channels that were active in 2020 and posted 3,232,721 messages. 809 of the 2,420 channels (33.4%) are labeled as hater, the rest as neutral. Each channel is represented by a node in the directed graph. In total, we identified 146,865 edges between channels. This leads to a density of 0.0251 and an average in- and out-degree of 60.73.

Topical Distribution As the first result, we examined clusters based on the topical distribution of the seed channels. To do this, the similarity between the topical distributions of each pair of users was computed using the Jensen-Shannon divergence. For the resulting similarity matrix, a hierarchical clustering approach was used to group similar users into clusters, as shown in Figure 4. While we only disclose an anonymized version of our results, we report that the upper left cluster consists only of sources for alternative news and that the large cluster in the center mainly contains actors who belong to the far-right network.

Graph Embeddings Before using the graph embeddings from the directed GraphSAGE model for the classification model, we investigated the expressiveness of the embeddings for community detection. For this, we applied the dimensionality reduction method UMAP to our embeddings to find denser representations. In a second step, we used DBSCAN to cluster these reduced embeddings. In Figure 5, we report the results of the community detection, along with a visualization indicating the label of each node. Seed profiles are marked with a large square instead of a dot. The clustering algorithm recognizes four distinct communities along with one outlier class. The large community in the center not only contains most of the seed channels in our dataset but also the largest proportion of channels labeled as hater (38%). In the other communities, we find a significantly lower proportion of channels classified as hateful (5%-24%). In the outlier class, 33% are haters. From this, we deduce that hateful users appear more often in communities with other hateful users.
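The community detection step can be sketched as follows, assuming channel_embeddings is the array of GraphSAGE channel embeddings; the UMAP and DBSCAN parameters shown are illustrative defaults, not the ones used in the study.

```python
import umap                      # pip install umap-learn
from sklearn.cluster import DBSCAN

# channel_embeddings: (n_channels, 32) array of GraphSAGE embeddings (assumed available).
reducer = umap.UMAP(n_components=2, random_state=42)
reduced = reducer.fit_transform(channel_embeddings)

# DBSCAN on the reduced embeddings; points labeled -1 form the outlier class.
clusterer = DBSCAN(eps=0.5, min_samples=10)
community_labels = clusterer.fit_predict(reduced)

for community in sorted(set(community_labels)):
    size = (community_labels == community).sum()
    print(f"community {community}: {size} channels")
```

Channels that DBSCAN cannot assign to any dense region receive the label -1, which corresponds to the outlier class mentioned above.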
Channel Classification The classification model trained to distinguish between hater and neutral channels achieves a macro F1 score of 69.5% (neutral: 74.2%; hater: 64.9%). Figure 6 visualizes the confusion matrix of the classification model on the test set. We observe that the model performs well in predicting the labels of the German Telegram channels.

Regarding RQ1, we can state that existing abusive language datasets from Twitter can be used to develop an abusive language classification model for Telegram messages. However, we have to accept a decline in classification performance. Comparing the macro F1 scores of the classifiers on the original test sets and on our evaluation set, we observe an average decline of approximately 12.5 percentage points (pp). To better assess this value, it is helpful to look at the study on the generalizability of abusive language datasets by Swamy, Jamatia, and Gambäck (2019). They trained models on different abusive language datasets and evaluated them on each other. The average performance decline is 18.1 pp if a classifier is evaluated on another test set. Considering this aspect, we can claim that our models perform decently, especially the combination of all six classification models with a threshold of four. This claim is supported by the fact that the combined models outperform the Perspective API in terms of F1 score. We integrated this external model provided by Google as a benchmark because it is developed to handle different types of texts (e.g., comments, posts, and emails) and it is in production (Jigsaw 2021). Consequently, we can state that our approach is successful, but it still provides room for improvement.

Regarding RQ2, we observe an increasing prevalence of abusive messages in the collected Telegram subnetwork, especially in the group of the seed channels. Notably, the rise of COVID-19 caused a significant increase. One may argue that the relative share of abusive content is unreliable because our combined classification model is imperfect. However, the change in the relative share provides a reliable indication of an increasing amount of abusive content. We trace this trend back to the deplatforming activities of large social media platforms and Telegram's lack of moderation. We have to point out that the prevalence of abusive content is not representative of the entire German Telegram network. Due to our snowball sampling approach, we have an obvious selection bias because we started with channels that were classified as hate actors by Fielitz and Schwarz (2020). Nevertheless, we assume that the prevalence of abusive content is larger on Telegram than on traditional social media platforms, such as Twitter, Facebook, and YouTube, that have implemented reporting and monitoring processes. In the case of Telegram, such processes are missing.

Regarding RQ3, we developed a classification model to predict whether a channel is a hate actor. It uses the network structure and the topic distribution of the messages in each channel for prediction. Our model achieves a macro F1 score of 69.5%.
To the best of our knowledge, we are the first to develop such a classification model for Telegram channels. Therefore, we do not have a baseline to compare our results with. However, Ribeiro et al. (2018) and Li et al. (2021) developed comparable models classifying Twitter accounts as hateful or normal. For the same dataset, Ribeiro et al. (2018) and Li et al. (2021) achieved F1 scores of 67.0% and 79.9%, respectively, for hateful users. Our F1 score of 64.9% is not directly comparable with these results, but it is of a similar order of magnitude, supporting our approach.

Addressing RQ4, we presented two approaches that allow clustering channels. The first approach leverages the topical distribution of channels to group actors based on the topical similarity of the content they spread. Applying this to the seed channels used for collecting the dataset indicates promising results for future research attempts at clustering actors on social media based on the content of their postings in a time-saving manner. The second method we propose in this context leverages embeddings learned from the social graph that we constructed from the dataset. It also uses the results from the topic model, i.e., it merges relational data between the channels with the content they shared. Our results indicate different communities that vary in the number of hateful users. Large communities appear to be spanned by seed users; however, we also detected smaller communities that do not contain any seed users, indicating that our sampling approach could find new user clusters. For a more precise evaluation of these results, more general information about the German hater community would have been helpful.

To the best of our knowledge, we are the first to develop abusive language classification models for German messages on Telegram. Our results look promising. The text model outperforms Google's Perspective API in terms of F1 score (macro F1: 73.2%). Similarly, the channel classification model provides good performance in detecting hater channels (macro F1: 69.5%). In addition, we have outlined methods for facilitating and scaling abusive language analysis on the message level as well as on the channel level. In the latter case, we fully relied on unsupervised learning methods, which makes these approaches particularly appealing. Furthermore, we published the first abusive language dataset consisting of German Telegram messages.

However, we see room for improvement and potential for future work. The research community would benefit from larger annotated corpora, including media files shared in Telegram channels (e.g., photos with messages, memes, and videos). Because such media files (e.g., memes) are used to transport hate (Kiela et al. 2021), they are relevant to the problem of detecting abusive content but were not part of this study. Regarding the classification model for hater channels, integrating additional data (e.g., metadata of the channels) and enhancing the NN architecture could improve classification performance. An explorative network analysis of the subnetwork could help identify additional features. In addition, a larger portion of Telegram should be collected with other seed users to mitigate the selection bias introduced by our hateful seed users. We also encourage researchers from various core disciplines, such as machine learning and the social sciences, to join forces in validating the performance achieved by sophisticated learning frameworks applied to large amounts of data.
Due to the unstoppable increase in content produced on social platforms such as Telegram, automatic methods for generating insights will become indispensable. Finally, the hate speech detection community should look into applying approaches such as ours to alternative social media platforms, as hate actors will continue to congregate there while deplatforming efforts continue.

References

Angelov 2020. Top2vec: Distributed representations of topics.
Baumgartner et al. 2020. The Pushshift Telegram Dataset.
Bretschneider and Peters 2017. Detecting offensive statements towards foreigners in social media.
Chan, Schweter, and Möller 2020. German's Next Language Model. International Committee on Computational Linguistics.
CSIRO's Data61 2018. StellarGraph Machine Learning Library.
Devlin et al. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.
Dietterich 1998. Approximate Statistical Tests for Comparing Supervised Classification Learning Algorithms.
Duggan 2017. Online harassment 2017.
Echikson and Knodt 2018. Germany's NetzDG: A key test for combatting online hate.
Eckert, Leipertz, and Schmidt 2021. Querdenker: Wie die Corona-Krise zu Radikalisierung führte.
Fielitz and Schwarz 2020. Hate not Found?! Deplatforming the Far-Right and its Consequences.
Grave et al. 2018. Learning Word Vectors for 157 Languages.
Hamilton, Ying, and Leskovec 2017. Inductive representation learning on large graphs.
Hohlfeld et al. 2021. Communicating COVID-19 against the backdrop of conspiracy ideologies: How public figures discuss the matter on Facebook and Telegram.
Holzer 2021. Die Misstrauensgemeinschaft der Querdenker: Die Corona-Proteste aus kultur- und sozialwissenschaftlicher Perspektive.
Honnibal et al. 2020. spaCy: Industrial-strength Natural Language Processing in Python.
Kiela et al. 2021. The Hateful Memes Challenge: Competition Report.
Kili Technology 2021. Text annotation tool. URL https://kili-technology
Krippendorff 2004. Content Analysis: An Introduction to Its Methodology.
Kurrek, Saleem, and Ruths 2020. Towards a Comprehensive Taxonomy and Large-Scale Annotated Corpus for Online Slur Usage.
Li et al. 2021. Neighbours and Kinsmen: Hateful Users Detection with Graph Neural Network.
Mandl et al. 2020. Overview of the HASOC Track at FIRE 2020: Hate Speech and Offensive Language Identification in Tamil, Malayalam, Hindi, English and German. In Forum for Information Retrieval Evaluation.
Mandl et al. 2019. Overview of the HASOC Track at FIRE 2019: Hate Speech and Offensive Content Identification in Indo-European Languages.
Müller and Schwarz 2021. Fanning the flames of hate: Social media and hate crime.
Rafael 2019. Hate Speech and Radicalisation Online - The OCCI Research Report, chapter: Background: the ABC of hate speech, extremism and the NetzDG.
Räther 2021. Investigating Techniques for Learning with Limited Labeled Data for Hate Speech Classification. Master's thesis, Technical University of Munich.
Ribeiro et al. 2018. Characterizing and Detecting Hateful Users on Twitter.
Rogers 2020. Deplatforming: Following extreme Internet celebrities to Telegram and alternative social media.
Ross et al. 2016. Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis.
Solopova, Scheffler, and Popa-Wyatt 2021. A Telegram corpus for hate speech, offensive language, and online harm.
Struß et al. 2019. Overview of GermEval Task 2, 2019 shared task on the identification of offensive language.
Swamy, Jamatia, and Gambäck 2019. Studying Generalisability across Abusive Language Detection Datasets. Association for Computational Linguistics.
Urman and Katz 2020. What they do in the shadows: examining the far-right networks on Telegram.
Are your Friends also Haters? Identification of Hater Networks on Social Media: Data Paper.
Wich, Räther, and Groh 2021. German Abusive Language Dataset with Focus on COVID-19.
Wiegand, Siegel, and Ruppenhofer 2018. Overview of the GermEval 2018 shared task on the identification of offensive language.
Williams et al. 2020. Hate in the Machine: Anti-Black and Anti-Muslim Social Media Posts as Predictors of Offline Racially and Religiously Aggravated Crime.