Title: CCGL: Contrastive Cascade Graph Learning
Authors: Xovee Xu, Fan Zhou, Kunpeng Zhang, Siyuan Liu
Date: 2021-07-27
DOI: 10.1109/tkde.2022.3151829

Supervised learning, while prevalent for information cascade modeling, often requires abundant labeled data in training, and the trained model is not easily generalized across tasks and datasets. It often learns task-specific representations, which can easily result in overfitting for downstream tasks. Recently, self-supervised learning has been designed to alleviate these two fundamental issues in linguistic and visual tasks. However, its direct applicability to information cascade modeling, especially cascade graph-related tasks, remains underexplored. In this work, we present Contrastive Cascade Graph Learning (CCGL), a novel framework for information cascade graph learning in a contrastive, self-supervised, and task-agnostic way. In particular, CCGL first designs an effective data augmentation strategy to capture variation and uncertainty by simulating the information diffusion in graphs. Second, it learns a generic model for graph cascade tasks via self-supervised contrastive pre-training using both unlabeled and labeled data. Third, CCGL learns a task-specific cascade model via fine-tuning using labeled data. Finally, to make the model transferable across datasets and cascade applications, CCGL further enhances the model via distillation using a teacher-student architecture. We demonstrate that CCGL significantly outperforms its supervised and semi-supervised counterparts for several downstream tasks.
In recent years, information cascades have received considerable attention in various research areas, such as decision support systems, social network analysis, and graph learning [1]. Understanding cascades has become important and can lead to significant economic and societal impacts. For example, predicting the number of affected cases and deaths, as well as the "superspreaders" in a region during the COVID-19 pandemic, is critical for policy makers to plan subsequent actions [2]. Such a predictive task typically involves several key modeling components. Specifically, the number of affected cases in one region (e.g., a county) might be closely related to that of its neighbors in a region-level geo-network, since mobility is an important factor of COVID-19 transmission. This in turn indicates that good network modeling and representation might affect the prediction performance. This forecasting task also often involves modeling dynamics, e.g., the number of affected cases evolves and depends on the past. In addition, the size of (labeled) data could be limited, which requires extra effort to model variation and uncertainty in order to learn generic knowledge from the data and thus alleviate overfitting and enhance knowledge transfer. With the advances of deep neural networks, many supervised learning approaches for modeling information cascades have been proposed. For example, recurrent neural networks (RNNs) were applied to model users in cascades and their temporal dependencies [3]; deep language models and convolutional neural networks have been used to learn textual and visual representations of the content in cascades; and graph embedding techniques and graph neural networks (GNNs) are usually used to learn diffusion structures [4], [5].
However, being supervised, the model training of these approaches requires abundant labeled data, which is expensive to obtain [6]. Moreover, these models are not easily generalized across datasets and different cascade-related applications. Furthermore, a large amount of unlabeled data exists and is often not utilized by these models. To leverage unlabeled data for learning more generic representations, researchers have developed various unsupervised or semi-supervised methods, primarily focusing on generative models, such as the unsupervised domain adaptive graph convolutional network (GCN) [7], auto-encoding variational Bayes [8], and strategies for pre-training GNNs [9]. These methods intend to learn how to reconstruct instances via embedding low-level features in a fine-grained manner, which can easily lead to overfitting [6]. In many situations (e.g., information cascade prediction), we need to learn general knowledge from both labeled and unlabeled data without relying solely on a specific downstream task and/or label supervision. That being said, learning such representations in an abstract way to capture high-level semantics might be more useful: (i) the model generalizes better to different tasks and data; (ii) the characteristics of unlabeled data can be utilized; (iii) the model's performance can be improved via fine-tuning and distillation; and (iv) the learned knowledge can be better transferred to other prediction tasks and datasets. These benefits, in turn, motivate us to consider contrastive self-supervised learning. Such learning paradigms have made significant advances in both natural language processing (NLP) and computer vision (CV), especially in improving the learning ability of models without human supervision and in model generalization via knowledge transfer. However, their applicability to understanding information cascades on graphs still remains underexplored in the community. Directly applying contrastive self-supervised learning to graph-based information cascade tasks is desirable but faces several big challenges, such as: (i) how to learn generic knowledge of graph cascades in a contrastive, self-supervised, and task-agnostic way, and in particular how to leverage a large amount of unlabeled cascade data; (ii) how to construct positive and negative sample pairs in a contrastive learning framework while capturing variations of the data and the dynamic diffusion characteristics of cascades; and (iii) how to fine-tune the pre-trained model for downstream cascade prediction tasks in a semi-supervised and task-specific way. In addition, how to distill the model for knowledge transfer across applications and datasets, while mitigating the effect of "negative transfer", is another hurdle to overcome. To tackle the above obstacles, we introduce a general semi-supervised learning framework, CCGL (Contrastive Cascade Graph Learning), in which information cascade graphs are augmented by simulating the information diffusion in graphs: we manually perform perturbations (adding and removing both nodes and edges) and manipulate node features. CCGL does not require label information for pre-training, and the trained model is further fine-tuned and distilled for downstream cascade prediction tasks and different datasets.
The pre-trained model focuses on learning the inner differences and characteristics between cascade graphs rather than solely optimizing for accurate prediction, providing a better starting point for supervised training. A comparison between traditional supervised models and CCGL is shown in Fig. 1. To summarize, the main contributions of our work are as follows: • To the best of our knowledge, this is the first work to utilize unlabeled cascades, devise an effective data augmentation strategy, and design contrastive self-supervised learning for general cascade modeling and prediction. We further employ semi-supervised fine-tuning and model distillation to improve cascade prediction. • We propose a novel framework CCGL (five datasets, pre-trained and fine-tuned models, as well as source code, are publicly available at https://github.com/Xovee/ccgl) which learns cascade graph representations in two steps: (i) with labeled/unlabeled cascades and graph data augmentation, we pre-train the model and learn cascade representations in a self-supervised and task-agnostic way; in particular, we create different cascade graph views and use a contrastive loss to discriminate between similar and dissimilar cascade graphs; and (ii) we fine-tune and distill the pre-trained model in a semi-supervised and task-specific manner. • Extensive experiments on five real-world datasets are conducted to show CCGL's effectiveness, robustness, and generalizability compared to supervised counterparts. With all the new designs together, CCGL achieves state-of-the-art performance in information cascade graph prediction, and we have several interesting findings: (i) instead of potentially overfitting, CCGL uniformly improves prediction performance on all five cascade datasets with fewer labeled data, e.g., with only 1% of labeled cascades, fine-tuned and distilled CCGL decreases the cascade popularity prediction errors by up to 9.2% and improves the cascade outbreak prediction accuracy by up to 19.9%; (ii) larger models and deeper MLP-based projection heads are essential for cascade self-supervised learning, while larger batch sizes and longer pre-training do not bring additional improvements; (iii) a teacher-student distillation framework is essential to the usage of unlabeled data and mitigates the negative effects of knowledge transfer; and (iv) for knowledge transfer across datasets and between two cascade prediction tasks, CCGL outperforms supervised counterparts by non-trivial margins. The remainder of this paper is organized as follows. In Sections 2 and 3, we give a detailed literature review of related work and the necessary preliminaries. Section 4 presents the fundamentals of our proposed framework CCGL. In Section 5, we evaluate CCGL on five large-scale information cascade datasets (Weibo, Twitter, ACM, APS and DBLP) and two downstream tasks (popularity prediction and outbreak prediction). Furthermore, we conduct extensive ablation studies and sensitivity analyses. Section 6 concludes this work and points out potential future directions. We overview four main streams of relevant literature and position our contribution in that context. Modeling and predicting information cascades has been a longstanding and critical problem in the field of information diffusion and social network analysis [1].
Most existing works on information cascades can be categorized into three distinct groups: • Feature-based models characterize information cascades in different aspects by hand-crafted features, e.g., temporal and structural characteristics, textual and visual content, metadata and historical behaviors [10], [11]. • Temporal models mainly study time-series data, often incorporating additional social information. They adopt various stochastic processes to model and simulate the information diffusion in networks, in a statistical and generative manner [12], [13]. • Deep learning-based models utilize state-of-the-art techniques from neural networks to learn expressive representations of cascades [14], [15]. Many of them adopt techniques from recurrent and graph neural networks. Although they have achieved promising results compared to more traditional approaches, these supervised models still need extensive annotated labels and lack generalization capability. Self-supervised learning (SSL) leverages the data itself as supervision and therefore can exploit massive unlabeled data for improving representation learning and downstream tasks. Existing SSL methods usually follow a paradigm called contrastive learning, which learns representations by contrasting positive and negative instances. For example, discriminative approaches have been proposed, mainly studying the design of pretext tasks and contrastive losses: • Pretext tasks are generally designed for recovering noisy data, predicting neighboring words, or transforming the original data. Examples include instance discrimination between positive and negative samples [16], [17], and global-local contrast for relative position prediction [18] or mutual information maximization [19], [20], [21]. These pretext tasks are used to learn and extract useful data representations, but they are not themselves the tasks of actual interest. • Contrastive losses focus on the similarity between positive and negative samples and are critical for learning good representations, along with negative sampling strategies. Previous works have proposed many learning mechanisms, e.g., end-to-end training [19], memory mechanisms [16], and projection heads, augmentation, and fine-tuning [22], [23]. Data augmentation on graphs. One of the critical components in SSL is data augmentation, used to improve model generalization and performance. Augmentation procedures are widely used in the NLP and CV domains [24], and previous works have investigated various strategies for data augmentation [18], [23]. Unlike texts and images, graph data is usually non-Euclidean, sparse, and complex. Graphs can be directed, dynamic, attributed, and heterogeneous, which further increases the complexity of their modeling. Since graph data have no direct analogues of language and image augmentations (due to their irregular structure), very few works tackle the graph data augmentation problem. Defining an effective graph augmentation strategy that retrieves general knowledge from graph structures under an unsupervised learning paradigm is a challenging and non-trivial task. Existing augmentation procedures for images, such as rotation, random crop and resize, cutout, color distortion, Gaussian blur, etc. [22], cannot be directly applied to graph data for pre-training and fine-tuning [9]. The most straightforward way to augment graph data is adding or removing nodes and/or edges.
However, such operations may confront multiple obstacles [25], including: how to choose the target nodes/edges to add or remove, how to label the newly added nodes/edges, how to process the features associated with these nodes/edges, etc. There are a few attempts in the existing literature, such as randomly removing edges and masking node features [9], [26] to prevent over-fitting and over-smoothing; relieving over-smoothing in GNNs by adding or removing edges between nodes based on model predictions [27]; and utilizing a graph auto-encoder as an edge predictor module to add "missing" edges and remove "noisy" edges [25]. GCC designs a pre-training task based on sub-graph instance discrimination: an r-ego network is augmented by random walk with restart, sub-graph induction, and anonymization, to capture general and transferable patterns in the graph structure [28]. Graphs can also be augmented by manipulating (i) node features, e.g., masking or adding Gaussian noise; and (ii) graph structures, e.g., adding/removing connectivity or sub-sampling [29]. In CCGL, we design a new data augmentation strategy specifically for information cascade graphs. Different from existing graph data augmentations, which cannot be directly used for cascade graphs (as we will explain in Section 4.2), we simulate the information diffusion process to create new graph views as re-diffusions, and then contrast them for cascade representation learning. The unique aspects of our augmentation strategy are: (i) it is suitable for cascade graphs, while most previous approaches are not; (ii) we add and remove both edges and nodes, as well as features in the graph; and (iii) we do not study node/graph classification but information cascade prediction. Pre-training and transfer learning on graphs. One important application of unsupervised graph pre-training is to learn a transferable graph encoder for downstream graph-based tasks. While pre-training is commonly seen in the CV and NLP domains, relatively few works aim to tackle the graph pre-training and transfer problems [9], [30], given the fact that this faces several challenges such as designing graph pre-training strategies and mitigating negative transfer. A GNN-based strategy proposed in [9] combines both node- and graph-level pre-training to handle out-of-distribution samples when transferring. UDA-GCN, developed by [7], jointly integrates local and global consistency and facilitates knowledge transfer across graphs. GPT-GNN [31] is another generative GNN framework that uses attribute/edge generation for pre-training on large-scale graphs. In this work, CCGL largely expands the training data to include labeled, unlabeled, and augmented cascade graphs from different data domains, enabling the model to learn general and robust graph representations from abundant data and then transfer the learned knowledge to downstream prediction tasks where task-specific labeled data are scarce. CCGL does not rely on special pretext tasks or domain expertise and can be easily extended to other cascade graph applications. We now introduce the basic settings and describe the necessary preliminaries. An information cascade is the process of information adoption by people [32], arising in various applications such as the social networks of Twitter and Weibo and academic networks of authors and papers. The sequence of adoptions over time forms an information cascade. If we have the diffusion path of each adoption, then the cascade graph [10], [33] can be constructed, formally defined as: Definition 1.
Information Cascade Graph. Given an information item I, e.g., a tweet or a paper, published at time t_0, over a period of time item I receives several adoptions, e.g., M retweets or citations. The sequence of adoptions composes an information cascade C_I(t), where user u_j adopts item I at time t_j. A cascade graph can then be defined as a diffusion tree G(t) = {V, E}, where V denotes the set of users in C_I(t) and E denotes the adoption relationships between users, e.g., retweeting or citing. In this paper, we focus on the temporal-structural modeling of information cascades, i.e., given a sequence of cascade graphs, we aim to learn effective representations of cascade graphs for downstream cascade applications, e.g., outbreak prediction, recommendation, rumor detection, and user activation prediction. Most of this study focuses on information cascade popularity prediction [1]: Definition 2. Popularity Prediction. Given an observed cascade graph G_i(t_o) at observation time t_o, the popularity prediction problem aims to predict the future popularity (or size) P_i(t_p) of this cascade (graph) at a prediction time t_p >> t_o. Additional experiments conducted on another downstream task, outbreak prediction, can be found in Section 5.5. Other modalities of information cascades, e.g., global structures, individual user/item features, and the content of texts/images, are not studied and are left for future work. In order to explore and understand which parts of an unsupervised learning framework can benefit the learning of cascade representations, we concentrate on answering the following three questions. Q1: Will unlabeled data improve the learning of cascade graph representation and prediction? Most supervised models cannot benefit from unlabeled data. In the context of graph-based cascade understanding, unlabeled graphs can be obtained at their early evolving stage, when meaningful predictions cannot yet be made. In traditional prediction models, these graphs are simply filtered out from training and evaluation [6], [12], [14]. Self-supervised approaches emphasize the importance of learning effective representations from a large amount of unlabeled data in an unsupervised and task-agnostic way. Unlabeled data can be easily obtained from one [23] or multiple datasets [28] for learning generic representations. Therefore, we believe that we can improve cascade graph learning by incorporating unlabeled cascades in CCGL. Q2: Will data augmentation improve cascade prediction and, if so, how can we design augmentation strategies for cascade graphs? Since graph augmentation procedures have no analogues for texts or images due to the non-Euclidean, complex structure of graphs, how to design new strategies for graph data that improve model generalization ability and benefit downstream prediction tasks becomes critical for both graph and contrastive learning. As we discussed earlier, previous graph data augmentation techniques mainly focus on graph neural networks and node/graph classification [26], [27], [34], and most of them only consider edge and feature manipulations while ignoring node handling. This calls for research devising cascade graph-specific augmentation strategies [25].
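Before describing our augmentation strategy, the following minimal Python sketch makes the objects in Definitions 1 and 2 concrete: a cascade graph stored as a diffusion tree with per-user adoption times, together with the observed sub-cascade and the popularity label. The class and method names (CascadeGraph, observed, popularity) are illustrative and are not taken from the released CCGL code; the sketch is reused by the AugSIM illustration in Section 4.2.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class CascadeGraph:
    """Diffusion tree G(t) = {V, E} of one information item (cf. Definition 1)."""
    root: str                                                       # user who published the item at t_0
    adoption_time: Dict[str, float] = field(default_factory=dict)   # adopter u_j -> adoption time t_j
    edges: List[Tuple[str, str]] = field(default_factory=list)      # (adopted-from user, adopter)

    def observed(self, t_o: float) -> "CascadeGraph":
        """Sub-cascade G_i(t_o) observed up to observation time t_o."""
        keep = {u for u, t in self.adoption_time.items() if t <= t_o} | {self.root}
        return CascadeGraph(
            root=self.root,
            adoption_time={u: t for u, t in self.adoption_time.items() if u in keep},
            edges=[(p, c) for p, c in self.edges if p in keep and c in keep],
        )

    def popularity(self, t_p: float) -> int:
        """Cascade size P_i(t_p) at prediction time t_p, used as the regression label."""
        return 1 + sum(t <= t_p for t in self.adoption_time.values())


# Toy usage: observe a cascade for 1 hour and predict its size at 24 hours.
g = CascadeGraph(root="u0",
                 adoption_time={"u1": 0.5, "u2": 3.0},
                 edges=[("u0", "u1"), ("u1", "u2")])
observed_input, label = g.observed(t_o=1.0), g.popularity(t_p=24.0)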
In CCGL, we propose a novel strategy for cascade graph data augmentation: we simulate the mechanism of information diffusion in social networks (or other similar networks, e.g., news and academic networks), where we first traverse every node in the graph in adoption-time order, and then for each node we compute its attractiveness probability. Depending on their degrees in the cascade graph, current nodes can attract new adopters or lose existing followers. Q3: Will a contrastive self-supervised learning framework improve cascade learning and prediction? Model pre-training has recently received a great deal of attention and achieved good performance in both linguistic and visual tasks. However, its applicability to information cascades is still underexplored by the research community, to the best of our knowledge. In this study, based on the idea of pre-training [20], [22], [34], we develop CCGL, in which the choice of the cascade graph encoder network is generic. Any graph representation learning model or graph neural network, or other specifically designed cascade learning models (e.g., DeepCas [4], VaCas [14], Coupled-GNNs [15]), can be used as the cascade graph encoder. We implement and compare these techniques in CCGL, aiming at shedding some light on the ability of unsupervised cascade representation learning and seeking additional performance improvements for downstream cascade prediction tasks. We finish this section with a summary of the symbols that will be used in the rest of the paper, presented in Table 1. We now discuss the main aspects of CCGL, sketched in Fig. 3, which consists of three major components: (i) Cascade graph data augmentation. To learn more generic and transferable knowledge of graph cascade representations, we design an augmentation strategy using both labeled and unlabeled data to capture some variation and uncertainty via an information diffusion process simulation. See details in Section 4.2. (ii) Self-supervised pre-training. We leverage the contrastive pre-training framework to learn abstract-level representations when encoding cascade graphs, which can help alleviate the overfitting that typically occurs in many generative representation learning paradigms where fine-grained feature-level representations are learned (linking data to a specific task). See details in Section 4.3. (iii) Model fine-tuning and distillation. For certain downstream tasks, we fine-tune the pre-trained model using labeled data. We also distill the model to make it more generalized and robust. Without distillation, we usually end up with a task-specific model, and negative transfer happens when we apply the learned model to different datasets or other tasks. See details in Section 4.4. In typical information cascade prediction tasks, unlabeled data are simply excluded from training and evaluation. For example, when predicting the popularity of an information item, say a tweet or a scientific paper, we first need to observe its early growth trend and try to predict its future popularity at a given time. In [6], given a scientific paper and its observed citations in the first few years, the authors predict its citations in 20 years. However, the papers with a life span of less than 20 years, i.e., without appropriate labels specified, are simply filtered out of the dataset. As shown in Fig. 2, for papers in the APS dataset, only about 43.3% of papers have been used during training and evaluation, leaving the others as unlabeled cascades.
That is to say, this prediction system is only applicable to papers published at least 20 years ago, since only older papers (those with labels) are trained and evaluated. If we use this system to predict citations of newly published papers, we are biased in favor of past publications and ignore recent diffusion characteristics during paper dissemination. To address such issues, we need to take unlabeled cascades into consideration and learn representations of both labeled and unlabeled cascades in an unsupervised manner, which we detail next. Existing data augmentation strategies designed in self-supervised learning frameworks are specific to learning language or visual representations, or are built upon GNNs [16], [23], [28]. They are not directly applicable to cascade learning, for the following reasons: (i) there are no straightforward ways to project text/image augmentation strategies onto graphs; (ii) a tree-structured cascade graph starts from a root node (e.g., a newly published post or paper), and then diffuses and dynamically evolves to a larger audience (e.g., retweets or citations); adding/deleting nodes or edges in an arbitrary way may substantially change the structure of a cascade graph (e.g., deleting any edge would result in a disconnected G_i); and (iii) nodes in a cascade graph are temporally characterized, i.e., the adoption time of nodes is of paramount importance for modeling cascade graphs, since cascade graphs with a similar number of nodes, or even with the same structures, may have very different temporal behaviors [1], [10]. To address the above three challenges, we propose a novel and effective cascade graph data augmentation procedure: AugSIM. AugSIM: Augmenting cascade graphs by SIMulating an information diffusion process. In order to create different graph views of a target cascade graph for the subsequent contrastive prediction tasks, while also capturing a certain degree of similarity of graph topology and node/edge features, we propose a simple but effective augmentation procedure based on user influence and adoption time. For each user u_j^i in a cascade graph G_i, we compute an attractiveness probability a_j^i, based on its degree, to manage the node-adding process; the augmentation strength η_i is a cascade-level hyper-parameter that controls the number of nodes added to G_i. The added node u_new^i connects to u_j^i and is assigned an adoption time t_new^i in [t_j^i, t_o] as a node feature. The adoption time t_j^i can be viewed as an instance of human reaction time [35]. We compute both a local (cascade-level) adoption time t_local^i and a global (dataset-level) adoption time t_global, balanced by a weight parameter θ_t, to specify the adoption time of the newly added node, where t_local^i is the average adoption time (1/|V_i|) Σ_j t_j^i of cascade C_i, and t_global is the global adoption time drawn from an exponential distribution.
Fig. 3. The overview of our proposed CCGL framework for learning cascade graph representations. It consists of three components: (i) the data augmentation strategy AugSIM designed for information cascade graphs; (ii) unsupervised pre-training of the CCGL framework by minimizing contrastive losses on cascade graph representations in a task-agnostic way; and (iii) fine-tuning and further distilling the CCGL framework in a task-specific way. Pre-training and distillation stages utilize both labeled and unlabeled data. Predictor and Teacher are the same network.
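As an illustration of the node-adding half of AugSIM (the leaf-removal step is described next), the sketch below reuses the CascadeGraph class from Section 3. The exact forms of the attractiveness probability and of the adoption-time combination are not reproduced here: the degree-proportional probability, the θ_t-weighted average of local and global times, and the unit-rate exponential are stand-ins assumed for this sketch only.

import collections
import copy
import random


def augsim_add_nodes(graph: CascadeGraph, t_o: float,
                     eta: float = 0.1, theta_t: float = 0.5) -> CascadeGraph:
    """Node-adding pass of an AugSIM-style augmentation (illustrative, not the paper's exact equations)."""
    g = copy.deepcopy(graph)
    degree = collections.Counter()
    for parent, child in g.edges:
        degree[parent] += 1
        degree[child] += 1
    total_degree = sum(degree.values()) or 1
    n_nodes = max(len(g.adoption_time), 1)
    t_local = sum(g.adoption_time.values()) / n_nodes         # cascade-level average adoption time

    # Traverse existing adopters in adoption-time order; each may attract one new adopter.
    for u in sorted(g.adoption_time, key=g.adoption_time.get):
        a_u = eta * n_nodes * degree[u] / total_degree         # attractiveness, assumed degree-proportional
        if random.random() < min(a_u, 1.0):
            t_global = random.expovariate(1.0)                 # dataset-level reaction time (assumed unit rate)
            t_new = theta_t * t_local + (1.0 - theta_t) * t_global
            t_new = min(max(t_new, g.adoption_time[u]), t_o)   # keep t_new within [t_j, t_o]
            new_user = f"aug::{u}::{len(g.adoption_time)}"
            g.adoption_time[new_user] = t_new
            g.edges.append((u, new_user))
    return g


# Two stochastic passes over the same observed cascade yield two related views,
# later used as a positive pair in contrastive pre-training, e.g.:
#   view_a, view_b = augsim_add_nodes(g, t_o=1.0), augsim_add_nodes(g, t_o=1.0)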
Analogously, to remove nodes, for each leaf node u_j^i in V_leaf^i, with parent node v_j^i, we compute a removal probability r_j^i that governs the node-removal process; the expected number of removed nodes/edges is likewise controlled by the augmentation strength. For simplicity, the node removal process is only conducted on leaf nodes, so that the main cascade graph structure is maintained. However, other more sophisticated strategies can be used to simulate the information diffusion, such as: (i) allowing added nodes to attract more followers; (ii) removing not only leaf nodes but also their parents; (iii) considering more features, such as the number of followers/followees, or the number of citations or the h-index of authors, as a surrogate of user influence to choose (added/removed) nodes or to specify appropriate adoption times; and (iv) adopting stochastic point processes such as the Poisson process and the Hawkes self-exciting process, which are frequently used to describe cascading behaviors in information diffusion [12], [35], as generative models to augment cascade graphs or expand training data [31]. Potential improvements of cascade graph data augmentation are left for future work. Here we use node degrees and adoption times to augment cascade graphs, creating different but similar views of graphs for the later contrastive modeling. The AugSIM strategy can be viewed as an instance of re-diffusing an information item in a network, which preserves the basic patterns of diffusion while introducing some variation and uncertainty. We also designed another two augmentation strategies for comparison: AugRWR and AugAttr, cf. Section 5.3.3. With data augmentation for cascade graphs in place, we now introduce the CCGL framework for learning generalized cascade representations without label supervision. Data augmentation. We first use one of the augmentation strategies to create related views of the same cascade graph. Given G_i, we augment this graph twice to create two different but similar views, denoted as Ĝ_i^1 and Ĝ_i^2. These two augmented graphs are considered a positive pair (Ĝ_i^1, Ĝ_i^2) in the subsequent contrastive learning. Cascade graph encoding. We then encode a cascade graph into a vector while capturing the temporal and structural information in the graph. The choice of cascade graph encoder is not confined to a particular approach: any encoder that can map the sparse cascade graph into a dense representation vector is qualified. We employ a graph encoder from a state-of-the-art cascade prediction model, VaCas [14], which has two main components: (i) a graph embedding based on spectral graph wavelets; and (ii) a bidirectional GRU-based network to learn contextualized user behaviors in cascade data. Note that this model is equivalent to Cas-RNN as described in [14]. These two components map the cascade graph G_i to a fixed-length representation h_i in R^{d_h}. To further understand the relations among latent factors and avoid possible noise in the representation, we follow a prior study [23] and add an MLP-based projection head to project h_i to a new representation z_i in R^{d_z}. This has been demonstrated to bring significant improvements for some applications [22], [36], as well as ours. We experiment with different designs of the projection head (cf. Section 5.4). Note that the projection head only takes part in the unsupervised learning stage, i.e., we still use h_i for subsequent downstream-task fine-tuning, and use z_i for computing the contrastive loss and optimizing the CCGL framework. Contrastive loss.
In order to train our CCGL framework, following [23], the contrastive learning loss is defined to maximize the latent similarity between two augmented views of the same cascade graph and to discriminate between a positive pair (Ĝ_i^1, Ĝ_i^2) and all other negative pairs in a mini-batch. Specifically, CCGL first randomly samples B cascade graphs, and then augments each graph twice to obtain 2B augmented cascade graphs. For a pair of positive graphs in a mini-batch, we treat the remaining 2B - 2 cascade graphs as negative samples. Given a similarity function sim(·, ·) over two vectors (in our case the cosine similarity), the contrastive loss function for a positive pair (Ĝ_i^1, Ĝ_i^2) is
L_i^contrastive = -log [ exp(sim(z_i^1, z_i^2)/τ) / Σ_{k=1}^{2B} 1[z_k ≠ z_i^1] exp(sim(z_i^1, z_k)/τ) ],    (7)
where the sum runs over all 2B augmented samples in the mini-batch, 1(·) is an indicator function excluding the anchor itself, and τ is a temperature parameter. This contrastive loss function for unsupervised learning is known as InfoNCE [21] or NT-Xent [23] and has been widely used in previous SSL models [16], [17], [28]. During pre-training, positive samples (two new views drawn from the original cascade graph) are attracted together in the representation space, while negative samples are repelled away from positive samples. In this way, we learn cascade graph representations in an abstract way, without linking them to any specific downstream task labels/signals. Discussion of loss mechanisms. Since contrastive learning often requires a large number of negative samples to contrast, maintaining larger mini-batches/dictionaries and building larger network architectures are uneconomical and limited by computational resources, although they improve model performance. For example, SimCLR [23] uses a batch size as large as 8,192 (with 16,382 negative samples per positive pair) on a platform of 128 TPU v3 cores. The learning ability of end-to-end models is constrained by batch sizes. Some of them rely on special pretext tasks, which may change the network architecture, such as constraining the receptive field size [19], [37] or patching the graph into sub-structures [34], limiting their generalization performance. Memory banks [17] and momentum updates [16] are another line of contrastive mechanisms that decouple the number of negative samples from the mini-batch size, while providing smooth and consistent encoder updates. Previous works have discussed the impact of end-to-end and memory mechanisms. For example, [16] found that end-to-end models have competitive performance when the batch size is small. [28] and [22] concluded that the memory mechanism only provides marginal improvements when a large batch size is used. However, larger mini-batches require more GPU/TPU memory and longer training time. So far, a general model has been obtained via contrastive learning, but it requires further fine-tuning using labeled data for specific downstream tasks. In this paper, we focus on information cascade popularity prediction (cf. Definition 2) as our primary downstream task. Fine-tuning. After unsupervised training of CCGL with unlabeled and augmented cascade graphs in a task-agnostic way, we use labeled cascade graphs to fine-tune the CCGL framework in a task-specific way. Following [22], the MLP-based projection head used for contrastive learning can be fully discarded (i.e., only the cascade graph encoder is used for fine-tuning), partially discarded (i.e., the encoder followed by some fully connected layers is used for fine-tuning), or fully included (i.e., we use z_i for both the downstream fine-tuning task and contrastive learning). A detailed discussion of the projection head is provided in Section 5.4.
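The NT-Xent objective of Eq. (7) can be computed compactly when the 2B projected views are stacked row-wise. The NumPy sketch below assumes an interleaved layout (rows 2k and 2k+1 are the two views of cascade k); this layout, and the implementation itself, are conventions of this sketch rather than of the released CCGL code.

import numpy as np


def nt_xent_loss(z: np.ndarray, temperature: float = 0.05) -> float:
    """NT-Xent / InfoNCE loss over 2B projected views z of shape (2B, d_z), cf. Eq. (7)."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize so dot products are cosine similarities
    sim = z @ z.T / temperature                        # (2B, 2B) temperature-scaled similarity matrix
    np.fill_diagonal(sim, -np.inf)                     # the indicator 1(.): never contrast a view with itself
    two_b = z.shape[0]
    sibling = np.arange(two_b) ^ 1                     # index of each view's positive: 0<->1, 2<->3, ...
    log_softmax = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-log_softmax[np.arange(two_b), sibling].mean())


# e.g., B = 4 cascades, each augmented twice and projected to d_z = 64 dimensions:
loss = nt_xent_loss(np.random.randn(8, 64), temperature=0.05)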
For N observed training cascade graphs, the training loss is defined as the mean squared logarithmic error (MSLE):
L^supervised = (1/N) Σ_{i=1}^{N} (log P̂_i(t_p) - log P_i(t_p))^2,    (8)
where P_i(t_p) and P̂_i(t_p) are the true and the predicted popularity, respectively. Semi-supervised learning and model distillation. Following previous studies [23], [34], we use the unsupervised contrastive loss, Eq. (7), for pre-training and the supervised loss, Eq. (8), for fine-tuning. The unsupervised contrastive loss L^contrastive is computed over all positive pairs, forcing the model to discriminate augmented views of labeled and unlabeled cascade graphs. The supervised loss L^supervised is computed on labeled data for learning to predict future popularity. However, this loss combination may suffer from "negative transfer" [9]. Inspired by [22], [34], we adopt two separate networks to mitigate this issue: a teacher network copied from the fine-tuned predictor and a student network trained from scratch. We enforce the predictions of the student network to be as similar as possible to those of the teacher network by minimizing a revised loss function, Eq. (9), which combines the supervised MSLE over the N labeled samples with an MSLE between the student predictions P̂_i^S(t_p) and the teacher predictions P̂_i^T(t_p) over the U unlabeled samples. The weights of the teacher network are fixed and the weights of the student network are updated under Eq. (9). In this way, CCGL benefits from both labeled and unlabeled data: the teacher network produces pseudo labels for the student network. The architecture of the student network can be identical to that of the teacher network (self-distillation), or it can be a smaller network to distill into. In theory, learning an information cascade graph representation amounts to maximizing the mutual information [19], [21] I_MI(Φ(Ĝ_i^1); Φ(Ĝ_i^2)) between two augmented cascade graph views Ĝ_i^1 and Ĝ_i^2 (which should be classified into the same category or located close together in the embedding space) by a neural network Φ(·). The network consists of a graph encoder that maps the cascade graph G_i to h_i for downstream tasks and an MLP-based projection head that maps h_i to z_i for contrastive learning. From the perspective of probability, given two random variables z_i^1 and z_i^2, the model is forced to discriminate between positive pairs from the joint distribution p(z_i^1, z_i^2) and negative pairs from the product of marginals p(z_i^1)p(z_i^2). Minimizing the loss in Eq. (7) is equivalent to maximizing a lower bound on the mutual information I_MI(z_i^1; z_i^2). The key factor of contrastive frameworks is the design of different data views. In CCGL, we propose AugSIM to augment cascade graphs by simulating a new information diffusion process based on the existing observation. AugSIM is able to preserve the high-level shared information between graph views while also capturing the variation and uncertainty during diffusion, i.e., the learned cascade graph representations should be resistant to random perturbations. On one hand, the model intends to discriminate augmented positive/negative graph views as much as possible (maximizing the mutual information). On the other hand, the model is optimized to ignore trivial differences in graph views and retain only the necessary information (minimizing the prediction error of downstream tasks). Compared to supervised baselines, the CCGL framework has three additional components that bring extra computational overhead: (i) graph data augmentation; (ii) contrastive self-supervised pre-training; and (iii) model distillation (cf. Table 2).
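A minimal sketch in the spirit of the teacher-student objective of Eq. (9): the student is fit to the ground-truth popularity on the N labeled cascades and to the frozen teacher's predictions (pseudo labels) on the U unlabeled cascades. The base-2 logarithm with a +1 offset and the equal weighting of the two terms are assumptions of this sketch, not taken from the paper.

import numpy as np


def msle(pred: np.ndarray, true: np.ndarray) -> float:
    """Mean squared logarithmic error (cf. Eq. (8)); the +1 offset guards log(0) and is assumed here."""
    return float(np.mean((np.log2(pred + 1.0) - np.log2(true + 1.0)) ** 2))


def distillation_loss(student_pred_labeled: np.ndarray, true_popularity: np.ndarray,
                      student_pred_unlabeled: np.ndarray, teacher_pred_unlabeled: np.ndarray) -> float:
    """Teacher-student distillation objective in the spirit of Eq. (9)."""
    labeled_term = msle(student_pred_labeled, true_popularity)           # supervised MSLE on N labeled cascades
    pseudo_term = msle(student_pred_unlabeled, teacher_pred_unlabeled)   # frozen teacher's predictions as pseudo labels on U unlabeled cascades
    return labeled_term + pseudo_term                                    # equal weighting assumed in this sketch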
We use default hyper-parameter settings (cf. Table 4). Following common protocols of unsupervised and semi-supervised learning [16], [22], [28], we summarize the experimental settings and several social/scientific cascade datasets in Sections 5.1 and 5.2, respectively. Baselines and their configurations are provided in Section 5.3. We discuss the experimental results, several observations, and ablation studies in Section 5.4. We conduct knowledge transfer experiments (across different tasks and datasets) in Section 5.5. For all experiments of CCGL and the baselines, unless otherwise specified, we uniformly adopt the following settings for a fair comparison. We use the Adam optimizer, and each dataset is divided into training (50%), validation (10%), and test (40%) sets, plus unlabeled data. The pre-training (fine-tuning) is early-stopped when the training (validation) loss has not declined for 20 epochs (patience). We report MSLE with logarithmic base 2 following [4], [6], [14]. Cascade graphs with |V(t_o)| < 10 nodes are filtered out, and for graphs with |V(t_o)| > 100, we select the first 100 nodes (sorted by adoption time). We manually tune the model by searching the hyper-parameter space. Table 4 lists the hyper-parameters, their search spaces, and their default values used throughout this paper. We used five large-scale publicly available cascade datasets, which can be categorized into two types: social and scientific. Detailed statistics of the datasets are presented in Table 3. • Weibo retweet cascade dataset is introduced by [6]. For cascades we set the observation time t_o to 1 hour and the prediction time t_p to 24 hours. • Twitter hashtag cascade dataset is collected by [38]. The cascade graphs are built from adopting, retweeting, and mentioning relationships. We set the observation time t_o to 2 days and the prediction time t_p to 32 days. • ACM citation cascade dataset is rearranged from the Association for Computing Machinery [39] (released on Jan 20, 2017), and contains 2,385,057 scientific papers in the field of computer science. We set the observation time t_o to 3 years and the prediction time t_p to 10 years. • APS citation cascade dataset is rearranged from the American Physical Society at https://journals.aps.org/datasets (accessed on Jan 17, 2019), containing 616,316 papers published by 17 APS journals. We set the observation time t_o to 3 years and the prediction time t_p to 20 years. • DBLP citation cascade dataset is from the DBLP citation network V9 [39] (released on Jul 3, 2017), containing 3,680,006 scientific papers. We filter out papers with fewer than 5 citations within the first 5 years, i.e., |V(t_o)| < 5, and set the observation time t_o to 5 years and the prediction time t_p to 20 years. To evaluate the performance of CCGL and show the benefits of its three major components (a self-supervised, task-agnostic contrastive model for generalizability; cascade graph augmentation to capture diffusion variation and uncertainty; and task-specific model fine-tuning and model distillation for knowledge transfer), we include several strong supervised and semi-supervised models as follows.
• Feature-based models extract hand-crafted features from information cascades to make predictions. Following [10], [14], we select several structural and temporal features: the cumulative popularity series, the time between the root user and the first adopter, the mean time between the first and second half of adoptions, the number of leaf nodes, the mean of node degrees, and the mean and max length of diffusion paths; the extracted features are then fed into fully connected layers for training and evaluation. • node2vec [40] is a node embedding technique based on random walks. We leverage it to obtain embeddings of nodes in the cascade graph, and then feed them into Bi-GRUs followed by MLPs to make predictions. We set the embedding dimension to 128, the window size to 10, the walk length to 30, the number of walks to 200, and both neighborhood sampling parameters p and q to 1. The implementation used is: https://github.com/eliorc/node2vec. • DeepHawkes [6] predicts the popularity of information cascades by combining the Hawkes process with deep learning. The embedding dimension is 64, the learning rate is 5e-3, the embedding learning rate is 5e-4, the dropout probability is 0.8, and the time interval is 5 minutes. The implementation used is: https://github.com/CaoQi92/DeepHawkes. • Base model [14], a standard supervised benchmark, has the same network architecture (i.e., cascade graph encoder + MLPs) as our fine-tuned CCGL but cannot learn from unlabeled data. It lacks contrastive pre-training and model distillation. For fairness, the hyper-parameters in the Base model are exactly the same as in CCGL. • Auto-encoder (AE) and variational auto-encoder (VAE) [8] are deep generative models for unsupervised representation learning. The encoder and decoder of the (V)AEs are both composed of GRUs and MLPs. The sequence of node embeddings in a cascade graph is first fed into the encoder to generate a dense latent representation z, which is used by the decoder to reconstruct the input (i.e., the node embeddings). The AE is optimized by minimizing the reconstruction loss; the objective for the VAE includes an evidence lower bound (ELBO) term. We set the number of pre-training epochs to 100. To demonstrate the superiority of our AugSIM, we implement two additional augmentation strategies. • AugRWR: Augmenting cascade graphs by random walk with restart (RWR) and sub-graph induction, partly inspired by GCC [28]. By repeating the RWR process, we collect a subset of the nodes in V_i, denoted as V_i^rwr. The augmented graph can then be induced by removing the nodes in V_i \ V_i^rwr. We restrict the walk to at most γ|V_i| steps, to avoid identical views for small graphs and insufficient walks for large graphs. The transition probability of the RWR is specified by the degrees of the neighboring nodes of u_j^i in the graph. • AugAttr: Augmenting cascade graphs by replacing node Attributes, partly inspired by NodeAug [41]. We alter the node attribute (in our case the adoption time t_j^i) by iteratively and randomly sampling a new time t_{j,new}^i in the time span [t_{j-1,new}^i, t_{j+1}^i] for each node u_j^i in cascade graph G_i (except the first and last nodes). Comparison results are shown in Table 5, including three kinds of evaluation protocols: supervised, linear evaluation on frozen features, and semi-supervised with fine-tuning. The experimental results show that our proposed CCGL framework outperforms all the baselines by a large margin. In the following, we give detailed discussions w.r.t.: (i) answering the three research questions; (ii) two notable observations; and (iii) four ablation studies.
Usage of unlabeled data improves prediction when model distillation is involved. In Table 5, the results suggest that introducing a large amount of unlabeled data into CCGL's pre-training stage without distillation decreases the prediction performance, especially when labeled data are few and the model is linearly evaluated (i.e., the parameters from the pre-training phase are frozen). We speculate that the degradation of performance is owing to the interference of unlabeled data in the latent space. With model distillation, this deficiency can be largely abated and the model improves over the fine-tuned model. Another benefit is that the model becomes more generalizable for knowledge transfer, as the unlabeled data are again well incorporated during this distillation process (see Section 5.5).
TABLE 5. Prediction performance comparison of both supervised and semi-supervised baselines and our proposed CCGL framework, under different experimental settings, measured by 10 runs of mean MSLEs (lower is better) on the Weibo dataset with varied label fractions. Results on another four datasets (Twitter, ACM, APS, DBLP) are in Table 11. We use a self-distilled student. The bottom-right number of each MSLE is the standard deviation; the upper-right number is the improvement compared to the Base when it is at least 0.05 points. Numbers in parentheses refer to equations.
The cascade graph data augmentation is effective for contrastive learning. All three augmentation strategies improve the prediction performance. The improvement becomes larger when fine-tuning on only a small fraction of labeled cascades. The AugSIM strategy performs better than AugRWR and AugAttr, which suggests that simulating the information diffusion in networks is a promising direction for graph contrastive learning on cascade modeling and prediction, compared to random walk-based or heuristic models. Specifically, when fine-tuned on 1%, 10%, and 100% of the labeled data, CCGL improves over the Base model by approximately 5.3%, 9.3%, and 1.4%, respectively, in terms of MSLE. This further supports that data augmentation capturing some variation and uncertainty of cascading is useful and can potentially alleviate overfitting. The CCGL model performs on par with or even beats strong supervised counterparts. With all the components described in Section 4 equipped and combined, i.e., unlabeled cascade data, the graph data augmentation strategy AugSIM specifically designed for cascades, contrastive graph self-supervised pre-training, and task-specific fine-tuning and model distillation, our proposed CCGL framework achieves a new state of the art for cascade popularity prediction, outperforming the strong supervised Base model by up to 9.2%, 11.7% and 2.9% when fine-tuned on 1%, 10%, and 100% of labeled data, respectively, in terms of MSLE on the Weibo dataset. Furthermore, we have several notable observations. Observation 1: CCGL is label-efficient compared to baselines. When fine-tuned on different fractions of labeled cascades, CCGL is more label-efficient than the supervised model. With only 1% of labels available, the performance of CCGL is on par with the Base model trained on 10% of labels (3.25 vs. 3.24). This can be explained by the contrastive learning and data augmentation in CCGL. Observation 2: Data augmentation does not benefit supervised learning for cascade prediction. When original cascade graphs are augmented by AugSIM, we observe that there is no benefit of graph data augmentation for supervised cascade learning.
Actually, introducing AugSIM into the Base model substantially lowers the prediction performance, and the gap becomes larger when more labeled cascades are involved. One plausible reason is that supervised models learn feature-level representations rather than abstract-level semantics, and are therefore not able to capture the variations and uncertainties brought in by augmentation. To demonstrate the robustness and sensitivity of CCGL, we perform several ablation studies. Ablation 1: Cascade self-supervised learning benefits more from a deeper projection head, especially when labeled data are few. The introduction of a non-linear and learnable MLP-based projection head (cf. [23]) showed that this simple mechanism can provide significant improvements for visual representation learning, as verified in [36]. Subsequently, it was shown in [22] that a deeper projection head and fine-tuning from a middle layer are a more powerful approach. However, it is not clear whether this mechanism has a significant advantage for cascade learning and prediction. We experimented with 15 different projection heads in CCGL with varied label fractions, and the results are shown in Fig. 4. We tag each projection head design as i-j, where i is the depth of the projection head from 0 to 4, and j denotes that the CCGL framework is fine-tuned from the j-th layer of the head. For different fractions of labeled cascades used for fine-tuning, the behavior of the projection heads varies. When fine-tuning on 1% and 10% of labeled cascades, the projection head significantly improves the prediction performance for 13 out of the 14 designs other than 0-0 (i.e., without a projection head), leading to up to ~6.3% improvement with 1% of labels and ~6.2% improvement with 10% of labels. When labeled data expand (e.g., with 100% labels), only 2 out of 14 designs improve the prediction, which provides a negative case for using the projection head in cascade contrastive learning. As for projection head depth, we found that deeper heads are more effective than shallow heads. There is no evidence that fine-tuning from a middle layer is significantly better (contrary to previous work [22]), as shown in Table 6. Compared to visual representation learning, where the projection head commonly brings performance improvements, whether to use projection heads (and of what kind) still lacks universal guidance, at least for cascade graph learning. Ablation 2: Model size and pre-training epochs. Fig. 5 shows the relationship between different combinations of model sizes and pre-training epochs. Here we denote 1x as the model width (embedding dimension, units of RNNs and MLPs), set to 32. The settings of the other hyper-parameters are: batch size 64, augmentation strategy AugSIM, augmentation strength η of 0.1, temperature τ of 0.05, and projection head 2-0. From the results, we can see that a large model is essential to guarantee satisfactory performance. When the model size is already large, pre-training longer does not provide additional improvement and can even decrease the performance, which might be because negative pre-training happens. The best performance is achieved by pre-training CCGL (16x) for 30 epochs. We also investigated the impact of model size on supervised and semi-supervised models in Table 7. From the results, we can see that for all model sizes, the semi-supervised model outperforms its supervised counterpart.
As model size grows, the supervised model is prone to overfitting, whereas the semi-supervised model largely alleviates this, for two possible reasons: (i) the incorporation of unlabeled data and data augmentation makes the model more generalizable; and (ii) unlike supervised models, where fine-grained feature-level representations are learned, the contrastive learning paradigm learns high-level abstract semantic representations. The results of model distillation are shown in Table 8. The teacher network is pre-trained with labeled cascades, or with both labeled and unlabeled cascades. The student network is distilled with (i) labeled; (ii) unlabeled; or (iii) both labeled and unlabeled cascades. From the results we can see that model distillation indeed provides non-trivial additional performance improvement compared to contrastive learning alone, with as much as 73.7% relative improvement in MSLE when using 1% of labels (26.7% for 10%, and 50.0% for all labels). One plausible explanation might be that the distillation makes the model more task-agnostic and alleviates the issue of negative transfer. Here we study several hyper-parameters and their impacts on cascade prediction performance on the Weibo dataset with varied label fractions. The results are shown in Fig. 6. We use the following parameter settings by default: the batch size is 64, the augmentation strategy is AugSIM, the augmentation strength η is 0.1, the temperature τ is 0.05, the number of pre-training epochs is 100, the embedding dimension is 64, the model size is 64 (2x), and the projection head has two fully connected layers with the model fine-tuned from before the projection head (i.e., 2-0). • Impact of augmentation strength η: while the augmentation strength η controls the number of added/removed nodes in cascade graphs, we observe that strong augmentations are better when pre-training on 1% of the labels. • Impact of contrastive loss temperature τ: the results show that a small temperature (around 0.2) is preferred. Large temperatures (e.g., 1 or 2) sometimes make the model unable to converge (8 of 30 trials do not converge). • Impact of batch size B: contrary to previous conclusions [23] for contrastive learning of visual representations, cascade representation learning prefers a smaller batch size. When the full set of labeled cascades is used, the results with larger batch sizes (128 and 256) become worse and unstable. When 100x fewer labeled data are used, the model performs significantly worse. This suggests that a large batch size is not always necessary in contrastive learning. We conjecture that this deficiency is because the information and variance in cascades are significantly less than those in images (e.g., ImageNet), thus a smaller batch is sufficient for the model to distinguish different cascades. Such trends are also reported in [28], where a large batch (a.k.a. dictionary size) is not always helpful and sometimes decreases the performance. However, a much larger batch (e.g., 4096/8192), or a momentum mechanism [16], may improve the prediction when more (and more complicated) cascade features are available, e.g., the global user/item graph or the texts and images of cascades. We leave this hypothesis for future work. • Impact of embedding dimension: we vary the embedding dimension from 16 to 256. The results indicate that a larger dimension is beneficial for prediction. However, it also brings more time/space consumption and consequently makes the model prone to overfitting when labeled data are very few.
To investigate the generalization capability of our model, we explore the transfer ability of CCGL on five information cascade datasets (Weibo, Twitter, ACM, APS, and DBLP) and two cascade prediction tasks (popularity prediction and outbreak prediction). To demonstrate that the representations learned by CCGL carry general transferable knowledge across cascade datasets, we conduct the following experiments on the Weibo and Twitter datasets: (i) pre-train on one dataset and fine-tune on the other; (ii) pre-train on both datasets and fine-tune on one of them. The results are shown in Table 9. We have two notable findings: (i) CCGL pre-trained on Weibo and then fine-tuned on Twitter significantly outperforms the randomly initialized Base model by large margins. When labeled data are few, its performance also surpasses the model that is both pre-trained and fine-tuned on Twitter. This suggests that the Weibo dataset not only helps Twitter cascade prediction but also provides a better starting point for fine-tuning than the Twitter dataset itself; and (ii) when pre-trained on both Weibo & Twitter, the fine-tuned CCGL model achieves even better prediction performance than the other combinations, with up to 72.3% relative improvement (1.43 vs. 0.83) using only 1% of the Twitter labels. This might be because the model learns generic knowledge during pre-training and task-specific knowledge during fine-tuning and distillation. Transferring knowledge to another prediction task. In addition to the task of popularity prediction, we investigate the capability of knowledge transfer to different tasks using another downstream prediction task, cascade outbreak prediction. For each of the five datasets, we label the top 10% of cascades as outbreaks and the others as non-outbreaks (i.e., negative instances). Since the distribution of cascade popularity is highly skewed, we undersample the non-outbreak cascades to create a balanced dataset. The process of knowledge transfer is as follows. CCGL is pre-trained on the Weibo dataset and all its hyper-parameters are fixed, which allows us to transfer knowledge without additional hyper-parameter tuning. However, this may lower the prediction accuracy. Overall, CCGL achieves comparable, if not better, performance than the randomly initialized Base model across all five datasets. The results in Table 10 give us the following two observations: (i) when only 1% of labeled data is used, CCGL substantially outperforms the supervised Base model, by 19.9% in accuracy on the Twitter dataset; (ii) although CCGL is pre-trained only on the Weibo cascades, it performs well on the other datasets, in both social and scientific scenarios. This suggests that the knowledge pre-trained from Weibo cascades is successfully transferred across different datasets. In summary, CCGL outperforms the Base model in 12 out of 15 experiments. We believe this transfer capability can be attributed to the general knowledge learned during pre-training with unlabeled data in the contrastive learning framework, as well as to the model distillation under the teacher-student framework. Visualization of latent representations. In Fig. 7 we visualize the learned representations of Weibo cascades using t-SNE. The first two plots, (a) and (b), are the representations h and z pre-trained by CCGL on 19,538 Weibo cascades (without label information). Recall that we use h for downstream prediction tasks and z for the contrastive loss. The projection head is 4-1, and the embedding dimension of the latent vectors is 256.
Visualization of latent representations. In Fig. 7 we visualize the representations learned on Weibo cascades using t-SNE. The first two plots, (a) and (b), show the representations h and z pre-trained by CCGL on 19,538 Weibo cascades (without label information); recall that we use h for downstream prediction tasks and z for the contrastive loss. The projection head is 4-1 and the embedding dimension of the latent vectors is 256. The last four plots show the representations h obtained from (c) the supervised Base model; (d) linear-evaluated CCGL; (e) fine-tuned CCGL; and (f) distilled CCGL, where the models are trained on 10% of the labels. The color of each point (i.e., cascade) indicates its future popularity; darker points denote larger cascades. Compared to the pre-trained or linear-evaluated h, the task-specific fine-tuned/distilled h is more separable between small and large cascades, indicating that fine-tuning and distillation are effective for cascade popularity prediction. The representations in (a), (b), and (d) are smooth and hardly distinguishable, suggesting that the pre-trained representations carry over largely unchanged and that "negative" transfer might occur. The representation from the supervised Base model in (c) does not cluster larger cascades well, which is consistent with its inferior performance and the fact that it lacks contrastive pre-training and model distillation.

Impact of model size. In Table 12, we report an additional ablation study on model size with 1% and 100% of the labels. For supervised models, the prediction performance degrades severely as the model becomes larger. In contrast, the semi-supervised model, which involves pre-training and fine-tuning, remains stable and even improves when the model size is large.

Results on other datasets. In Table 11 we show the prediction performance of the distilled CCGL on the Twitter, ACM, APS, and DBLP datasets. (Table 11: performance comparison of CCGL and baselines on Twitter (T), ACM (A), APS (AP), and DBLP (D), measured by the mean and standard deviation of MSLE over 5 runs with varied label fractions of 1%, 10%, and 100%; a self-distilled CCGL trained on labeled and unlabeled cascades for 30 epochs is used.) Our proposed CCGL model performs on par with or better than the supervised Base model as well as the feature-based model on all four datasets, and the achieved improvements are generally larger when labeled data are fewer. A minimal sketch of such a teacher-student distillation step is given below.
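The following NumPy sketch illustrates a teacher-student distillation objective of the kind discussed above: on labeled cascades the student fits the ground-truth popularity, while on unlabeled cascades it imitates the teacher's predictions. The squared-error form, the weighting parameter alpha, and the synthetic inputs are our own simplifying assumptions; this is not the exact CCGL distillation procedure.

```python
# Sketch of a combined distillation objective: supervised loss on labeled
# cascades plus an imitation loss toward teacher predictions on unlabeled
# cascades (illustrative only; alpha is an assumed balancing hyper-parameter).
import numpy as np

def distillation_loss(student_pred_l, y_l, student_pred_u, teacher_pred_u, alpha=0.5):
    supervised = np.mean((student_pred_l - y_l) ** 2)            # fit labeled targets
    imitation = np.mean((student_pred_u - teacher_pred_u) ** 2)  # match the teacher
    return alpha * supervised + (1.0 - alpha) * imitation

rng = np.random.default_rng(0)
y_l = rng.normal(3.0, 1.0, size=32)                  # labeled targets (log popularity)
student_pred_l = y_l + rng.normal(0, 0.3, size=32)   # student predictions (labeled)
teacher_pred_u = rng.normal(3.0, 1.0, size=256)      # teacher predictions (unlabeled)
student_pred_u = teacher_pred_u + rng.normal(0, 0.5, size=256)
print(distillation_loss(student_pred_l, y_l, student_pred_u, teacher_pred_u))
```

In practice the student would be optimized against such a combined objective over the distillation epochs; the exact loss form used in our experiments may differ.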
We presented CCGL, a contrastive cascade graph learning framework that provides a new perspective on modeling cascade graphs, bridges the gap between supervised and unsupervised information cascade modeling and prediction, and enables cascade data augmentation strategies. Our experiments on five information diffusion datasets and two cascade prediction tasks demonstrate the effectiveness of the devised cascade graph data augmentation strategy and of CCGL's contrastive self-supervised pre-training, fine-tuning, and distillation paradigm. Beyond the performance improvements, our method exploits unlabeled data and extracts supervision signals in a self-supervised manner. We also showed that the proposed model is label-efficient and generalizes across different cascade datasets and applications. Several aspects of the proposed model warrant further investigation as future work: (i) learning cascade graph representations dynamically via temporal embedding techniques [42], [43]; (ii) combining multiple datasets in unsupervised pre-training for better knowledge transfer; (iii) exploring other cascade graph augmentation strategies, such as introducing more cascade features into the selection of nodes/edges, as well as alternative contrastive mechanisms, e.g., momentum updates [16], autoregressive modeling [21], mutual information maximization [20], or multi-view contrasting [29]; and (iv) learning cascade representations in a multi-modal setting [1].

[1] A survey of information cascade analysis: Models, predictions and recent advances
[2] Mobility network models of COVID-19 explain inequities and inform reopening
[3] Full-scale information diffusion prediction with reinforced recurrent networks
[4] DeepCas: An end-to-end predictor of information cascades
[5] Fully exploiting cascade graphs for real-time forwarding prediction
[6] DeepHawkes: Bridging the gap between prediction and understanding of information cascades
[7] Unsupervised domain adaptive graph convolutional networks
[8] Auto-encoding variational Bayes
[9] Strategies for pre-training graph neural networks
[10] Can cascades be predicted?
[11] A comparative study of transactional and semantic approaches for predicting cascades on Twitter
[12] SEISMIC: A self-exciting point process model for predicting tweet popularity
[13] Using survival theory in early pattern detection for viral cascades
[14] Variational information diffusion for probabilistic cascades prediction
[15] Popularity prediction on social platforms with coupled graph neural networks
[16] Momentum contrast for unsupervised visual representation learning
[17] Unsupervised feature learning via non-parametric instance discrimination
[18] Self-supervised learning of pretext-invariant representations
[19] Learning representations by maximizing mutual information across views
[20] Deep graph infomax
[21] Representation learning with contrastive predictive coding
[22] Big self-supervised models are strong semi-supervised learners
[23] A simple framework for contrastive learning of visual representations
[24] AutoAugment: Learning augmentation strategies from data
[25] Data augmentation for graph neural networks
[26] DropEdge: Towards deep graph convolutional networks on node classification
[27] Measuring and relieving the over-smoothing problem for graph neural networks from the topological view
[28] GCC: Graph contrastive coding for graph neural network pre-training
[29] Contrastive multi-view representation learning on graphs
[30] Transfer learning via learning to transfer
[31] GPT-GNN: Generative pre-training of graph neural networks
[32] Information diffusion in online social networks: A survey
[33] Information diffusion prediction via recurrent cascades convolution
[34] InfoGraph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization
[35] Robust dynamic classes revealed by measuring the response function of a social system
[36] Improved baselines with momentum contrastive learning
[37] Learning deep representations by mutual information estimation and maximization
[38] Virality prediction and community structure in social networks
[39] ArnetMiner: Extraction and mining of academic social networks
[40] node2vec: Scalable feature learning for networks
[41] NodeAug: Semi-supervised node classification with data augmentation
[42] Inductive representation learning on temporal graphs
[43] Node embedding over temporal graphs

Xovee Xu (GS'20) was born in Yulin, Shaanxi, China, in 1996. He received the B.S. and M.S. degrees in software engineering from the University of Electronic Science and Technology of China (UESTC), Chengdu, Sichuan, China, in 2018 and 2021, respectively. He is currently pursuing the Ph.D. degree in computer science at UESTC. He is the author of several research articles published in INFOCOM, SIGIR, TKDE, AAAI, and CSUR.
His recent research interests include social network data mining and knowledge discovery, primarily focusing on information diffusion in full-scale graphs, human-centered data mining, representation learning, and their novel applications in various social and scientific scenarios such as information cascade popularity prediction, urban flow inference, and scientific impact prediction.

Fan Zhou's research interests include machine learning, neural networks, spatio-temporal data management, graph learning, recommender systems, and social network mining and knowledge discovery.

Kunpeng Zhang is an Assistant Professor at the Robert H. Smith School of Business, University of Maryland, College Park. He received the Ph.D. degree in computer science from Northwestern University, USA. He is interested in large-scale data analysis, with a particular focus on social data mining, image understanding via machine learning, social network analysis, and causal inference. He has published papers on social media, artificial intelligence, network analysis, and information systems in various conferences and journals.

Siyuan Liu is an Assistant Professor at the Smeal College of Business, Pennsylvania State University. He received his first Ph.D. degree from the Department of Computer Science and Engineering at the Hong Kong University of Science and Technology, and his second Ph.D. degree from the University of Chinese Academy of Sciences. His research interests include spatial and temporal data mining, social network analytics, and data-driven behavior analytics.