Inferring Networks of Diffusion and Influence
Manuel Gomez-Rodriguez, Jure Leskovec, Andreas Krause
June 1, 2010

Abstract. Information diffusion and virus propagation are fundamental processes taking place in networks. While it is often possible to directly observe when nodes become infected with a virus or adopt the information, observing individual transmissions (i.e., who infects whom, or who influences whom) is typically very difficult. Furthermore, in many applications, the underlying network over which the diffusions and propagations spread is actually unobserved. We tackle these challenges by developing a method for tracing paths of diffusion and influence through networks and inferring the networks over which contagions propagate. Given the times when nodes adopt pieces of information or become infected, we identify the optimal network that best explains the observed infection times. Since the optimization problem is NP-hard to solve exactly, we develop an efficient approximation algorithm that scales to large datasets and finds provably near-optimal networks. We demonstrate the effectiveness of our approach by tracing information diffusion in a set of 170 million blogs and news articles over a one year period to infer how information flows through the online media space. We find that the diffusion network of news for the top 1,000 media sites and blogs tends to have a core-periphery structure with a small set of core media sites that diffuse information to the rest of the Web. These sites tend to have stable circles of influence with more general news media sites acting as connectors between them.

The dissemination of information, cascading behavior, diffusion and spreading of ideas, innovation, information, influence, viruses and diseases are ubiquitous in social and information networks. Such processes play a fundamental role in settings that include the spread of technological innovations [Rogers 1995; Strang and Soule 1998], word of mouth effects in marketing [Domingos and Richardson 2001; Kempe et al. 2003], the spread of news and opinions [Adar et al. 2004; Gruhl et al. 2004; Leskovec et al. 2009; Liben-Nowell and Kleinberg 2008], collective problem-solving [Kearns et al. 2006], the spread of infectious diseases [Anderson and May 2002; Bailey 1975; Hethcote 2000] and sampling methods for hidden populations [Goodman 1961; Heckathorn 1997].

In order to study network diffusion there are two fundamental challenges one has to address. First, to be able to track cascading processes taking place in a network, one needs to identify the contagion (i.e., the idea, information, virus, disease) that is actually spreading and propagating over the edges of the network. Moreover, one then has to identify a way to successfully trace the contagion as it diffuses through the network. For example, when tracing information diffusion, it is a non-trivial task to automatically and on a large scale identify the phrases or "memes" that are spreading through the Web [Leskovec et al. 2009]. Second, we usually think of diffusion as a process that takes place on a network, where the contagion propagates over the edges of the underlying network from node to node like an epidemic. However, the network over which propagations take place is usually unknown and unobserved.
Commonly, we only observe the times when particular nodes get "infected" but we do not observe who infected them. In the case of information propagation, as bloggers discover new information, they write about it without explicitly citing the source. Thus, we only observe the time when a blog gets "infected" with information but not where it got infected from. Similarly, in virus propagation, we observe people getting sick without usually knowing who infected them. And, in a viral marketing setting, we observe people purchasing products or adopting particular behaviors without explicitly knowing who was the influencer that caused the adoption or the purchase. These challenges are especially pronounced in information diffusion on the Web, where there have been relatively few large scale studies of information propagation in large networks [Adar and Adamic 2005; Liben-Nowell and Kleinberg 2008].

In order to study paths of diffusion over networks, one essentially requires complete information about who influences whom, as a single missing link in a sequence of propagations can lead to wrong inferences [Sadikov et al. 2011]. Even if one collects near complete large scale diffusion data, it is a non-trivial task to identify textual fragments that propagate relatively intact through the Web without human supervision. And even then the question of how information diffuses through the network still remains. Thus, the questions are: What is the network over which the information propagates on the Web? What is the global structure of such a network? How do news media sites and blogs interact? Which roles do different sites play in the diffusion process and how influential are they?

Our approach to inferring networks of diffusion and influence. We address the above questions by positing that there is some underlying unknown network over which information, viruses or influence propagate. We assume that the underlying network is static and does not change over time. We then observe the times when nodes get infected by or decide to adopt a particular contagion (a particular piece of information, product or virus) but we do not observe where they got infected from. Thus, for each contagion, we only observe times when nodes got infected, and we are then interested in determining the paths the diffusion took through the unobserved network. Our goal is to reconstruct the network over which contagions propagate. Figure 1 gives an example.

[Figure 1: Diffusion network inference problem. (a) True network G*. (b) Inferred network Ĝ using the heuristic baseline method. (c) Inferred network Ĝ using the NETINF algorithm. There is an unknown network (a) over which contagions propagate. We are given a collection of node infection times and aim to recover the network in (a). Using a baseline heuristic (see Section 4) we recover network (b), and using the proposed NETINF algorithm we recover network (c). Red edges denote mistakes. The baseline makes many mistakes but NETINF almost perfectly recovers the network.]

Edges in such networks of influence and diffusion have various interpretations. In virus or disease propagation, edges can be interpreted as who-infects-whom. In information propagation, edges are who-adopts-information-from-whom or who-listens-to-whom. In a viral marketing setting, edges can be understood as who-influences-whom. The main premise of our work is that by observing many different contagions spreading among the nodes, we can infer the edges of the underlying propagation network. If node v tends to get infected soon after node u for many different contagions, then we can expect
an edge (u, v) to be present in the network. By exploring correlations in node infection times, we aim to recover the unobserved diffusion network.

The concept of a set of contagions spreading over a network is illustrated in Figure 2. As a contagion spreads over the underlying network it creates a trace, called a cascade. Nodes of the cascade are the nodes of the network that got infected by the contagion, and edges of the cascade represent edges of the network over which the contagion actually spread. At the top of Figure 2, the underlying true network over which contagions spread is illustrated. Each subsequent layer depicts a cascade created by a particular contagion. A priori, we do not know the connectivity of the underlying true network, and our aim is to infer this connectivity using the infection times of nodes in many cascades.

[Figure 2: For each cascade, nodes in gray are the "infected" nodes and the edges denote the direction in which the contagion propagated. Given only the node infection times in each cascade, we aim to infer the connectivity of the underlying network G*.]

We develop NETINF, a scalable algorithm for inferring networks of diffusion and influence. We first formulate a generative probabilistic model of how, on a fixed hypothetical network, contagions spread as directed trees (i.e., a node infects many other nodes) through the network. Since we only observe the times when nodes get infected, there are many possible ways in which the contagion could have propagated through the network that are consistent with the observed data. In order to infer the network, we have to consider all possible ways the contagion could have spread through the network. Thus, naive computation of the model takes super-exponential time, since there is a combinatorially large number of propagation trees. We show that, perhaps surprisingly, computations over this super-exponential set of trees can be performed in polynomial (cubic) time. However, under such a model, the network inference problem is still intractable. Thus, we introduce a tractable approximation, and show that the objective function can be both efficiently computed and efficiently optimized. By exploiting a diminishing returns property of the problem, we prove that NETINF infers near-optimal networks. We also speed up NETINF by exploiting the local structure of the objective function and by using lazy evaluations [Leskovec et al. 2007].

In a broader context, our work here is related to network structure learning of probabilistic directed graphical models [Friedman et al. 1999; Getoor et al. 2003], where heuristic greedy hill-climbing or stochastic search, which offer no performance guarantees, are usually used in practice. In contrast, our work here provides a novel formulation and a tractable polynomial time algorithm for inferring directed networks, together with an approximation guarantee that ensures the inferred networks will be of near-optimal quality.

Our results on synthetic datasets show that we can reliably infer an underlying propagation and influence network, regardless of the overall network structure. Validation on real and synthetic datasets shows that NETINF outperforms a baseline heuristic by an order of magnitude and correctly discovers more than 90% of the edges. We apply our algorithm to a real Web information propagation dataset of 170 million blog and news articles over a one year period.
Our results show that online news propagation networks tend to have a core-periphery structure with a small set of core blog and news media websites that diffuse information to the rest of the Web. News media websites tend to diffuse news faster than blogs, and blogs keep discussing news for a longer time than media websites do.

Inferring how information or viruses propagate over networks is crucial for a better understanding of diffusion in networks. By modeling the structure of the propagation network, we can gain insight into the positions and roles various nodes play in the diffusion process and assess the range of influence of nodes in the network.

The rest of the paper is organized as follows. Section 2 is devoted to the statement of the problem, the formulation of the model and the optimization problem. In Section 3, an efficient reformulation of the optimization problem is proposed and its solution is presented. Experimental evaluation using synthetic and MemeTracker data is presented in Section 4. We conclude with related work in Section 5 and a discussion of our results in Section 6.

We next formally describe the problem where contagions propagate over an unknown static directed network and create cascades. For each cascade we observe times when nodes got infected, but not who infected them. The goal then is to infer the unknown network over which contagions originally propagated. In an information diffusion setting, each contagion corresponds to a different piece of information that spreads over the network, and all we observe are the times when particular nodes adopted or mentioned particular information. The task then is to infer the network where a directed edge (u, v) carries the semantics that node v tends to get influenced by node u (i.e., mentions or adopts the information after node u does so as well).

Given a hidden directed network G*, we observe multiple contagions spreading over it. As the contagion c propagates over the network, it leaves a trace, a cascade, in the form of a set of triples (u, v, t_v)_c, which means that contagion c reached node v at time t_v by spreading from node u (i.e., by propagating over the edge (u, v)). We denote the fact that the cascade initially starts from some active node v at time t_v as (∅, v, t_v)_c. Now, we only get to observe the time t_v when contagion c reached node v, but not how it reached the node v, i.e., we only know that v got infected by one of its neighbors in the network but do not know who v's neighbors are and which of them infected v. Thus, instead of observing the triples (u, v, t_v)_c that fully specify the trace of the contagion c through the network, we only get to observe pairs (v, t_v)_c that describe the time t_v when node v got infected by the contagion c. Now, given such data about node infection times for many different contagions, we aim to recover the unobserved directed network G*, i.e., the network over which the contagions originally spread.

We use the term hit time t_u to refer to the time when a cascade created by a contagion hits (infects, causes the adoption by) a particular node u. In practice, many contagions do not hit all the nodes of the network (a contagion that hits every node would have infected the entire network); in real life, most cascades created by contagions are relatively small. Thus, if a node u is not hit by a cascade, then we set t_u = ∞. Then, the observed data about a cascade c is specified by the vector
t_c = [t_1, . . . , t_n] of hit times, where n is the number of nodes in G*, and t_i is the time when node i got infected by the contagion c (t_i = ∞ if i did not get infected by c). Our goal now is to infer the network G*. In order to solve this problem, we first define the probabilistic model of how contagions spread over the edges of the network. We first specify the contagion transmission model P_c(u, v) that describes how likely it is that node u spreads the contagion c to node v. Based on the model, we then describe the probability P(c|T) that the contagion c propagated in a particular cascade tree pattern T = (V_T, E_T), where tree T simply specifies which nodes infected which other nodes (e.g., see Figure 2). Last, we define P(c|G), which is the probability that cascade c occurs in a network G. And then, under this model, we show how to estimate a (near-)maximum likelihood network Ĝ, i.e., the network Ĝ that (approximately) maximizes the probability of the cascades c occurring in it.

We start by formulating the probabilistic model of how contagions diffuse over the network. We build on the Independent Cascade Model [Kempe et al. 2003], which posits that an infected node infects each of its neighbors in the network G independently at random with some small chosen probability. This model implicitly assumes that every node v in the cascade c is infected by at most one node u. That is, it only matters when the first neighbor of v infects it; all infections that come afterwards have no impact. Note that v can have multiple infected neighbors, but only one neighbor actually activates v. Thus, the structure of a cascade created by the diffusion of contagion c is fully described by a directed tree T that is contained in the directed graph G, i.e., since the contagion can only spread over the edges of G and each node can only be infected by at most one other node, the pattern in which the contagion propagated is a tree and a subgraph of G. Refer to Figure 2 for an illustration of a network and a set of cascades created by contagions diffusing over it.

Probability of an individual transmission. The Independent Cascade Model only implicitly models time through the epochs of the propagation. We thus formulate a variant of the model that preserves the tree structure of cascades and also incorporates the notion of time. We think of our model of how a contagion transmits from u to v in two steps. When a new node u gets infected, it gets a chance to transmit the contagion to each of its currently uninfected neighbors w independently with some small probability β. If the contagion is transmitted, we then sample the incubation time, i.e., how long after w got infected, w will get a chance to infect its (at that time uninfected) neighbors. Note that cascades in this model are necessarily trees, since node u only gets to infect neighbors w that have not yet been infected.

Table I. Notation:
- G(V, E): directed graph with nodes V and edges E over which contagions spread
- β: probability that a contagion propagates over an edge of G
- α: incubation time model parameter (refer to Eq. 1)
- c: contagion that spreads over G
- t_u: time when node u got hit (infected) by a particular cascade
- t_c: set of node hit times in cascade c
- ∆_{u,v}: time difference between the node hit times t_v − t_u in a particular cascade
- C = {(c, t_c)}: set of contagions c and corresponding hit times, i.e., the observed data
- T_c(G): set of all possible propagation trees of cascade c on graph G
First, we define the probability P_c(u, v) that a node u spreads the cascade to a node v, i.e., that node u influences/infects/transmits contagion c to node v. Formally, P_c(u, v) specifies the conditional probability of observing cascade c spreading from u to v. Consider a pair of nodes u and v, connected by a directed edge (u, v), and the corresponding hit times (u, t_u)_c and (v, t_v)_c. Since the contagion can only propagate forward in time, if node u got infected after node v (t_u > t_v), then P_c(u, v) = 0, i.e., nodes cannot influence nodes from the past. On the other hand (if t_u < t_v), we make no assumptions about the properties and shape of P_c(u, v). To build some intuition, we can think of the probability of propagation P_c(u, v) between a pair of nodes u and v as decreasing in the difference of their infection times, i.e., the farther apart in time the two nodes get infected, the less likely they are to infect one another. However, we note that our approach allows the contagion transmission model P_c(u, v) to be arbitrarily complicated, as it can also depend on the properties of the contagion c as well as the properties of the nodes and edges. For example, in a disease propagation scenario, node attributes could include information about the individual's socio-economic status, commute patterns, disease history and so on, and the contagion properties would include the strength and the type of the virus. This allows for great flexibility in the cascade transmission models, as the probability of infection depends on the parameters of the disease and properties of the nodes.

Purely for simplicity, in the rest of the paper we assume the simplest and most intuitive model, where the probability of transmission depends only on the time difference between the node hit times ∆_{u,v} = t_v − t_u. We consider two different models for the incubation time distribution ∆_{u,v}, an exponential and a power-law model, each with parameter α:

P_c(u, v) ∝ e^{−∆_{u,v}/α} (exponential model) and P_c(u, v) ∝ ∆_{u,v}^{−α} (power-law model), for t_u < t_v.   (1)

Both the power-law and exponential waiting time models have been argued for in the literature [Barabási 2005; Malmgren et al. 2008]. In the end, our algorithm does not depend on the particular choice of the incubation time distribution, and more complicated non-monotonic and multimodal functions can easily be chosen [Crane and Sornette 2008; Wallinga and Teunis 2004; Gomez-Rodriguez et al. 2011]. Also, we interpret ∞ + ∆_{u,v} = ∞, i.e., if t_u = ∞, then t_v = ∞ with probability 1. Note that the parameter α can potentially be different for each edge in the network.

Considering the above model in a generative sense, we can think that the cascade c reaches node u at time t_u, and we now need to generate the time t_v when u spreads the cascade to node v. As cascades generally do not infect all the nodes of the network, we need to explicitly model the probability that the cascade stops. With probability (1 − β), the cascade stops and never reaches v, thus t_v = ∞. With probability β, the cascade transmits over the edge (u, v), and the hit time t_v is set to t_u + ∆_{u,v}, where ∆_{u,v} is the incubation time that passed between the hit times t_u and t_v.

Likelihood of a cascade spreading in a given tree pattern T.
Next we calculate the likelihood P(c|T) that contagion c in a graph G propagated in a particular tree pattern T(V_T, E_T), under some assumptions. This means we want to assess the probability that a cascade c with hit times t_c propagated in a particular tree pattern T. Due to our modeling assumption that cascades are trees, the likelihood is simply:

P(c|T) = ∏_{(u,v)∈E_T} β P_c(u, v) · ∏_{u∈V_T, (u,x)∈E\E_T} (1 − β),   (2)

where E_T is the edge set and V_T is the vertex set of tree T. Note that V_T is the set of nodes that got infected by c, i.e., V_T ⊂ V and contains the elements i of t_c where t_c(i) < ∞. The above expression has an intuitive explanation. Since the cascade spread in tree pattern T, the contagion successfully propagated along those edges. And, along the edges where the contagion did not spread, the cascade had to stop. Here, we assume independence between edges to simplify the problem. Despite this simplification, we later show empirically that NETINF works well in practice. Moreover, P(c|T) can be rewritten as:

P(c|T) = β^q (1 − β)^r ∏_{(u,v)∈E_T} P_c(u, v),   (3)

where q = |E_T| = |V_T| − 1 is the number of edges in T and counts the edges over which the contagion successfully propagated. Similarly, r counts the number of edges that did not activate and failed to transmit the contagion: r = Σ_{u∈V_T} d_out(u) − q, where d_out(u) is the out-degree of node u in graph G.

We conclude with an observation that will come in very handy later. Examining Eq. 3, we notice that the part of the equation before the product sign does not depend on the edge set E_T but only on the vertex set V_T of the tree T. This means that this part is constant for all trees T with the same vertex set V_T but possibly different edge sets E_T. Consequently, for a fixed G and c, maximizing P(c|T) with respect to T (i.e., finding the most probable tree) does not depend on the second product of Eq. 2. This means that when optimizing, one only needs to focus on the first product, where the edges of the tree T simply specify how the cascade spreads, i.e., every node in the cascade gets influenced by exactly one node, namely its parent.

Cascade likelihood. We just defined the likelihood P(c|T) that a single contagion c propagates in a particular tree pattern T. Now, our aim is to compute P(c|G), the probability that a cascade c occurs in a graph G. Note that we observe only the node infection times, while the exact propagation tree T (who-infected-whom) is unknown. In general, over a given graph G there may be multiple different propagation trees T that are consistent with the observed data. For example, Figure 3 shows three different cascade propagation paths (trees T) that are all consistent with the observed data t_c = (t_a = 1, t_c = 2, t_b = 3, t_e = 4). So, we need to combine the probabilities of individual propagation trees into a probability of a cascade c. We achieve this by considering all possible propagation trees T that are supported by network G, i.e., all possible ways in which cascade c could have spread over G:

P(c|G) = Σ_{T∈T_c(G)} P(c|T) P(T|G),   (4)

where c is a cascade and T_c(G) is the set of all the directed connected spanning trees on the subgraph of G induced by the nodes that got hit by cascade c. Note that even though the sum ranges over all possible spanning trees T ∈ T_c(G), if T is inconsistent with the observed data, then P(c|T) = 0. Assuming that all trees are a priori equally likely (i.e., P(T|G) = 1/|T_c(G)|) and using the observation from Equation 3, we obtain:

P(c|G) ∝ β^q (1 − β)^r Σ_{T∈T_c(G)} ∏_{(u,v)∈E_T} P_c(u, v).   (5)

Basically, the graph G defines the skeleton over which the cascades can propagate and T defines a particular possible propagation tree.
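To make Eqs. 1 and 3 concrete, here is a minimal Python sketch (our own illustration, not code from the paper) that evaluates the log of the per-tree likelihood P(c|T) under the exponential incubation time model; the graph representation, function names and default parameter values are assumptions made for the example.

    import math

    def p_c(t_u, t_v, alpha=1.0):
        """Exponential incubation time model (Eq. 1): P_c(u,v) ~ exp(-(t_v - t_u)/alpha).
        Zero if v was hit before u, since contagions only move forward in time."""
        if t_v <= t_u:
            return 0.0
        return math.exp(-(t_v - t_u) / alpha)

    def tree_log_likelihood(tree_edges, hit_times, out_degree, beta=0.5, alpha=1.0):
        """log P(c|T) of Eq. 3: q*log(beta) + r*log(1-beta) + sum of log P_c(u,v).

        tree_edges: (parent, child) pairs of the propagation tree T
        hit_times:  dict node -> hit time, for the infected nodes V_T only
        out_degree: dict node -> out-degree d_out of the node in G
        """
        q = len(tree_edges)                            # edges that transmitted
        r = sum(out_degree[u] for u in hit_times) - q  # edges that failed (r in Eq. 3)
        ll = q * math.log(beta) + r * math.log(1.0 - beta)
        for u, v in tree_edges:
            ll += math.log(p_c(hit_times[u], hit_times[v], alpha))
        return ll

    # Tiny example: the chain a -> b -> c with unit incubation times.
    hits = {"a": 0.0, "b": 1.0, "c": 2.0}
    deg = {"a": 2, "b": 1, "c": 1}
    print(tree_log_likelihood([("a", "b"), ("b", "c")], hits, deg))

The sums in Eqs. 4 and 5 then range over exactly such per-tree terms, one for every spanning tree consistent with the hit times.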
There may be many possible trees that explain a single cascade (see Fig. 3), and since we do not know in which particular tree pattern the cascade really propagated, we need to consider all possible propagation trees T in T_c(G). Thus, the sum over T is a sum over all directed spanning trees of the graph induced by the vertices that got hit by the cascade c.

We just computed the probability of a single cascade c occurring in G, and we now define the probability of a set of cascades C occurring in G simply as:

P(C|G) = ∏_{c∈C} P(c|G),   (6)

where we again assume conditional independence between cascades given the graph G. Now that we have formulated the cascade transmission model, we state the diffusion network inference problem, where the goal is to find Ĝ that solves the following optimization problem:

PROBLEM 1. Given a set of node infection times t_c for a set of cascades c ∈ C, a propagation probability parameter β and an incubation time distribution P_c(u, v), find the network Ĝ such that:

Ĝ = argmax_{|G|≤k} P(C|G),   (7)

where the maximization is over all directed graphs G of at most k edges, and P(C|G) is defined by equations 6, 4 and 2.

We include the constraint on the number of edges in Ĝ simply because we seek a sparse solution, since real graphs are sparse. We discuss how to choose k in further sections of the paper.

The above optimization problem seems wildly intractable. To evaluate Eq. (6), we need to compute Eq. (4) for each cascade c, i.e., the sum over all possible spanning trees T. The number of trees can be super-exponential in the size of G, but perhaps surprisingly, this super-exponential sum can be performed in time polynomial in the number n of nodes in the graph G, by applying Kirchhoff's matrix tree theorem [Knuth 1968]:

THEOREM 1 (Matrix Tree Theorem). If A = [a_{x,y}] is a matrix with a_{x,y} = Σ_k w_{k,y} for x = y and a_{x,y} = −w_{x,y} for x ≠ y, and A_{x,y} is the matrix created by removing any row x and column y from A, then det(A_{x,y}) = Σ_T ∏_{(i,j)∈E_T} w_{i,j}, where T is each directed spanning tree in A.

In our case, we set w_{i,j} to be simply P_c(i, j), and we compute the product of the determinants of |C| matrices, one for each cascade, which is exactly Eq. 4. Note that since edges (i, j) where t_i ≥ t_j have weight 0 (i.e., they are not present), given a fixed cascade c, the collection of edges with positive weight forms a directed acyclic graph (DAG). A DAG with a time-ordered labeling of its nodes has an upper triangular connectivity matrix. Thus, the matrix A_{x,y} of Theorem 1 is, by construction, upper triangular. Fortunately, the determinant of an upper triangular matrix is simply the product of the elements of its diagonal. This means that instead of using super-exponential time, we are now able to evaluate Eq. 6 in time O(|C| · |V|^2) (the time required to build A_{x,y} and compute the determinant for each of the |C| cascades).

However, this does not completely solve our problem, for two reasons. First, while quadratic time is a drastic improvement over a super-exponential computation, it is still too expensive for the large graphs that we want to consider. Second, we can use the above result only to evaluate the quality of a particular graph G, while our goal is to find the best graph Ĝ. To do this, we would need to search over all graphs G to find the best one. Again, as there is a super-exponential number of graphs, this is not practical. To circumvent this, one could propose some ad hoc search heuristics, like hill-climbing. However, due to the combinatorial nature of the likelihood function, such a procedure would likely be prone to local maxima. We leave the question of efficient maximization of Eq.
4, where P(c|G) considers all possible propagation trees, as an interesting open problem.

The diffusion network inference problem defined in the previous section does not seem to allow for an efficient solution. We now propose an alternative formulation of the problem that is tractable both to compute and also to optimize. We use the same tree cascade formation model as in the previous section. However, we compute an approximation of the likelihood of a single cascade by considering only the most likely tree instead of all possible propagation trees. We show that this approximate likelihood is tractable to compute. Moreover, we also devise an algorithm that provably finds networks with near optimal approximate likelihood. In the remainder of this section, we informally write likelihood and log-likelihood even though they are approximations. However, all approximations are clearly indicated.

First we introduce the concept of ε-edges to account for the fact that nodes may get infected for reasons other than the network influence. For example, in online media, not all the information propagates via the network, as some is also pushed onto the network by the mass media [Katz and Lazarsfeld 1955; Watts and Dodds 2007], and thus a disconnected cascade can be created. Similarly, in viral marketing, a person may purchase a product due to the influence of peers (i.e., a network effect) or for some other reason (e.g., seeing a commercial on TV). To account for such phenomena, when a cascade "jumps" across the network we can think of creating an additional node x that represents an external influence and can infect any other node u with small probability. We then connect the external influence node x to every other node u with an ε-edge. And then every node u can get infected by the external source x with a very small probability ε. For example, in the case of information diffusion in the blogosphere, such a node x could model the effect of blogs getting infected by the mainstream media. However, if we were to adopt this approach and insert an additional external influence node x into our data, we would also need to infer the edges pointing out of x, which would make our problem even harder. Thus, in order to capture the effect of external influence, we introduce the concept of an ε-edge: if there is no network edge between a node i and a node j in the network, we add an ε-edge, and then node i can infect node j with a small probability ε. Even though adding ε-edges makes our graph G a clique (i.e., the union of network edges and ε-edges creates a clique), the ε-edges play the role of the external influence node. Thus, we now think of graph G as a fully connected graph with two disjoint sets of edges, the network edge set E and the ε-edge set.

Now, any cascade propagation tree T is a combination of network and ε-edges. As we model the external influence via the ε-edges, the probability of a cascade c occurring in tree T (i.e., the analog of Eq. 2) can now be computed as:

P(c|T) = ∏_{(u,v)∈E_T} P′_c(u, v) · ∏_{u∈V_T, (u,x)∉E_T} (1 − β′_{u,x}),   (9)

where β′_{u,x} = β if (u, x) is a network edge and β′_{u,x} = ε if (u, x) is an ε-edge, and where we compute the transmission probability P′_c(u, v) as follows:

P′_c(u, v) = β P_c(u, v) if (u, v) is a network edge, and P′_c(u, v) = ε P_c(u, v) if (u, v) is an ε-edge.   (10)

[Figure 4(a): Graph G on five vertices and four network edges (solid edges); ε-edges are shown as dashed lines. Four types of edges are labeled: (i) network edges that transmitted the contagion (solid bold), (ii) ε-edges that transmitted the contagion (dashed bold), (iii) network edges that failed to transmit the contagion (solid), and (iv) ε-edges that failed to transmit the contagion (dashed).]
Note that above we distinguish four types of edges: network and ε-edges that participated in the diffusion of the contagion, and network and ε-edges that did not participate in the diffusion. Figure 4 further illustrates this concept. First, Figure 4(a) shows an example of a graph G on five nodes and four network edges E (solid lines); any other possible edge is an ε-edge (dashed lines). Then, Figure 4(b) shows an example of a propagation tree T. We only show the edges that play a role in Eq. 9 and label them with four different types: (a) network edges that transmitted the contagion, (b) ε-edges that transmitted the contagion, (c) network edges that failed to transmit the contagion, and (d) ε-edges that failed to transmit the contagion.

We can now rewrite the cascade likelihood P(c|T) as a combination of products over edge types and the product over the edge incubation times:

P(c|T) = β^q ε^{q′} (1 − β)^s (1 − ε)^{s′} ∏_{(u,v)∈E_T} P_c(u, v) ≈ β^q ε^{q′} (1 − ε)^{s+s′} ∏_{(u,v)∈E_T} P_c(u, v),   (11)

where q is the number of network edges in T (type (a) edges in Fig. 4(b)), q′ is the number of ε-edges in T, s is the number of network edges that did not transmit and s′ is the number of ε-edges that did not transmit. Note that the above approximation is valid since real networks are sparse and cascades are generally small, and hence s′ ≫ s. Thus, even though β ≫ ε, we expect (1 − β)^s to be of about the same order of magnitude as (1 − ε)^{s′}.

The formulation in Equation 11 has several benefits. Due to the introduction of ε-edges, the likelihood P(c|T) is always positive. For example, even if we consider a graph G with no edges, P(c|T) is still well defined, as we can explain the tree T via diffusion over the ε-edges. A second benefit, which will become very useful later, is that the likelihood now becomes monotonic in the network edges of G. This means that adding an edge to G (i.e., converting an ε-edge into a network edge) only increases the likelihood.

Considering only the most likely propagation tree. So far we introduced the concept of ε-edges to model external influence, i.e., diffusion that is exogenous to the network, and introduced an approximation that treats all edges that did not participate in the diffusion as ε-edges. Now we consider the last approximation, where instead of considering all possible cascade propagation trees T, we only consider the most likely cascade propagation tree T:

P(c|G) ≈ max_{T∈T_c(G)} P(c|T).   (12)

Thus, we now aim to solve the network inference problem by finding a graph G that maximizes Equation 12, where P(c|T) is defined in Equation 11. This formulation simplifies the original network inference problem by considering the most likely (best) propagation tree T per cascade c instead of considering all possible propagation trees T for each cascade c. Although in some cases we expect the likelihood of c with respect to the true tree T′ to be much higher than that with respect to any competing tree T′′ (so that the probability mass concentrates at T′), there might be some cases in which the probability mass does not concentrate on one particular T. However, we ran extensive experiments on small networks with different structures in which both the original network inference problem and the alternative formulation can be solved using exhaustive search. The results of the two formulations were indistinguishable. Therefore, we consider our approximation to work well in practice.

For convenience, we work with the log-likelihood log P(c|T) rather than the likelihood P(c|T).
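As a rough sketch of the approximation in Eq. 11 (our own illustrative code, with p_c as in the earlier sketch), assuming, as the text does, that the union of network edges and ε-edges turns G into a clique on n_total nodes:

    import math

    def approx_log_likelihood(tree_edges, network_edges, hit_times, n_total, p_c,
                              beta=0.5, eps=1e-3):
        """Log of Eq. 11. q network edges and q' eps-edges transmitted. Since
        network plus eps-edges make G a clique, the |V_T| infected nodes have
        |V_T|*(n_total - 1) outgoing edges, of which all but the tree edges
        failed to transmit; per the approximation, every failed edge is charged
        the eps-edge stopping factor (1 - eps)."""
        q = sum(1 for e in tree_edges if e in network_edges)
        q_prime = len(tree_edges) - q
        failed = len(hit_times) * (n_total - 1) - len(tree_edges)  # s + s'
        ll = q * math.log(beta) + q_prime * math.log(eps) + failed * math.log(1.0 - eps)
        for u, v in tree_edges:
            ll += math.log(p_c(hit_times[u], hit_times[v]))
        return ll

Note how this likelihood is always finite and well defined even when network_edges is empty, which is exactly the first benefit of Eq. 11 discussed above.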
Moreover, instead of directly maximizing the log-likelihood, we equivalently maximize the following objective function that defines the improvement of log-likelihood for cascade c occurring in graph G over c occurring in the empty graph K̄ (i.e., the graph with only ε-edges and no network edges):

F_c(G) = log P(c|G) − log P(c|K̄).   (13)

Maximizing Equation (12) is then equivalent to maximizing the following log-likelihood function:

F_C(G) = Σ_{c∈C} F_c(G).   (14)

We now expand Eq. (13) and obtain an instance of a simplified diffusion network inference problem:

F_c(G) = max_{T∈T_c(G)} Σ_{(i,j)∈E_T} w_c(i, j),   (15)

where w_c(i, j) = log P′_c(i, j) − log ε is a non-negative weight which can be interpreted as the improvement in log-likelihood of edge (i, j) under the most likely propagation tree T in G. Note that by the approximation in Equation 11, one can ignore the contribution of edges that did not participate in a particular cascade c. The contribution of these edges is constant, i.e., independent of the particular shape that propagation tree T takes. This is due to the fact that each spanning tree T of G with node set V_T has |V_T| − 1 (network and ε-) edges that participated in the cascade, and all remaining edges stopped the cascade from spreading. The number of non-spreading edges depends only on the node set V_T but not on the edge set E_T. Thus, the tree T that maximizes P(c|T) also maximizes Σ_{(i,j)∈E_T} w_c(i, j).

Since T is a tree that maximizes the sum of the edge weights, the most likely propagation tree T is simply the maximum weight directed spanning tree of the nodes V_T, where each edge (i, j) has weight w_c(i, j), and F_c(G) is simply the sum of the weights of the edges in T. We also observe that since edges (i, j) where t_i ≥ t_j have weight 0 (i.e., such edges are not present), the outgoing edges of any node u only point forward in time, i.e., a node cannot infect already infected nodes. Thus, for a fixed cascade c, the collection of edges with positive weight forms a directed acyclic graph (DAG). Now we use this fact by observing that the maximum weight directed spanning tree of a DAG can be computed efficiently:

PROPOSITION 1. In a DAG D(V, E, w) with vertex set V and nonnegative edge weights w, the maximum weight directed spanning tree can be found by choosing, for each node v, an incoming edge (u, v) with maximum weight w(u, v).

PROOF. The weight of a tree T is the sum of the incoming edge weights w(Par_T(i), i) for each node i, where Par_T(i) is the parent of node i in T (and the root is handled appropriately). Now,

max_T Σ_i w(Par_T(i), i) = Σ_i max_u w(u, i).

The latter equality follows from the fact that, since G is a DAG, the maximization can be done independently for each node without creating any cycles.

This proposition is a special case of the more general maximum spanning tree (MST) problem in directed graphs [Edmonds 1967]. The important fact now is that we can find the best propagation tree T in time O(|V_T| · D_in), where D_in = max_{u∈V_T} d_in(u) is the maximum in-degree, by simply selecting an incoming edge of highest weight for each node u ∈ V_T. Algorithm 1 provides the pseudocode to efficiently compute the maximum weight directed spanning tree of a DAG.

Putting it all together, we have shown how to efficiently evaluate the log-likelihood F_C(G) of a graph G. Finding the most likely tree T for a single cascade takes O(|V_T| · D_in), and this has to be done for a total of |C| cascades. Interestingly, this is independent of the size of graph G and only depends on the amount of observed data (i.e., the size and the number of cascades).
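A minimal sketch of the best-parent rule of Proposition 1 (the idea behind the paper's Algorithm 1); the data layout and function names are our own assumptions.

    def most_likely_tree(infected, hit_times, w_c):
        """Most likely propagation tree of one cascade (Proposition 1):
        keep, for every infected node, the incoming edge of maximum weight.

        infected:  nodes hit by the cascade (V_T)
        hit_times: dict node -> hit time
        w_c(u, v): weight log P'_c(u,v) - log(eps); only u with t_u < t_v compete
        Returns (tree_edges, F_c): the tree and its total weight.
        """
        tree, total = [], 0.0
        for v in infected:
            parents = [u for u in infected if hit_times[u] < hit_times[v]]
            if not parents:
                continue  # the cascade's root has no incoming edge
            best = max(parents, key=lambda u: w_c(u, v))
            tree.append((best, v))
            total += w_c(best, v)
        return tree, total

Because each node's parent is chosen independently, the running time is linear in the number of candidate parent edges, exactly the O(|V_T| · D_in) bound stated above.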
Now we aim to find the graph G that maximizes the log-likelihood F_C(G). First we notice that by construction F_C(K̄) = 0, i.e., the empty graph has score 0. Moreover, we observe that the objective function F_C is non-negative and monotonic: F_C(G) ≤ F_C(G′) whenever G ⊆ G′. Hence adding more edges to G does not decrease the solution quality, and thus the complete graph maximizes F_C. Monotonicity can be shown by observing that, as edges are added to G, ε-edges are converted to network edges, and therefore the weight of any tree (and therefore the value of the maximum spanning tree) can only increase. However, since real-world social and information networks are usually sparse, we are interested in inferring a sparse graph G that only contains some small number k of edges. Thus we aim to solve:

PROBLEM 2. Given the infection times of a set of cascades C, the probability of propagation β and the incubation time distribution P_c(i, j), find

Ĝ = argmax_{|G|≤k} F_C(G),   (16)

where the maximization is over all graphs G of at most k edges, and F_C(G) is defined by Eqs. 14 and 15.

Naively searching over all k-edge graphs would take time exponential in k, which is intractable. Moreover, finding the optimal solution to Eq. (16) is NP-hard, so we cannot expect to find the optimal solution:

THEOREM 2. The network inference problem defined by equation (16) is NP-hard.

PROOF. By reduction from the MAX-k-COVER problem [Khuller et al. 1999]. In MAX-k-COVER, we are given a finite set W, |W| = n, and a collection of subsets S_1, . . . , S_m ⊆ W. The function F_MC(A) = |∪_{i∈A} S_i| counts the number of elements of W covered by the sets indexed by A. Our goal is to pick a collection of k subsets A maximizing F_MC. We will produce a collection of n cascades C over a graph G such that max_{|G|≤k} F_C(G) = max_{|A|≤k} F_MC(A). Graph G will be defined over the set of vertices V = {1, . . . , m} ∪ {r}, i.e., there is one vertex for each set S_i and one extra vertex r. For each element s ∈ W we define a cascade which has time stamp 0 associated with all nodes i ∈ V such that s ∈ S_i, time stamp 1 for node r, and ∞ for all remaining nodes. Furthermore, we can choose the transmission model such that w_c(i, r) = 1 whenever s ∈ S_i and w_c(i′, j′) = 0 for all remaining edges (i′, j′), by choosing the parameters ε, α and β appropriately. Since a directed spanning tree over a graph G can contain at most one edge incoming to node r, its weight will be 1 if G contains any edge from a node i to r where s ∈ S_i, and 0 otherwise. Thus, a graph G of at most k edges corresponds to a feasible solution A_G to MAX-k-COVER, where we pick the sets S_i whenever edge (i, r) ∈ G, and each solution A to MAX-k-COVER corresponds to a feasible solution G_A of (16). Furthermore, by construction, F_MC(A_G) = F_C(G). Thus, if we had an efficient algorithm for deciding whether there exists a graph G, |G| ≤ k, such that F_C(G) > c, we could use the algorithm to decide whether there exists a solution A to MAX-k-COVER with value at least c.

While finding the optimal solution is hard, we now show that F_C satisfies submodularity, a natural diminishing returns property. The submodularity property allows us to efficiently find a provably near-optimal solution to this otherwise NP-hard optimization problem. A set function F : 2^W → R that maps subsets of a finite set W to the real numbers is submodular if for A ⊆ B ⊆ W and s ∈ W \ B, it holds that

F(A ∪ {s}) − F(A) ≥ F(B ∪ {s}) − F(B).

This simply says that adding s to the set A increases the score by at least as much as adding s to the superset B.
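To illustrate the diminishing returns property on the coverage function F_MC used in the proof, here is a toy example of our own (not from the paper):

    def f_mc(sets, A):
        """Coverage objective F_MC(A) = |union of S_i for i in A|."""
        covered = set()
        for i in A:
            covered |= sets[i]
        return len(covered)

    # Toy instance: adding set 2 helps the smaller collection at least as much.
    S = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c", "d"}}
    A, B = {0}, {0, 1}                      # A is a subset of B
    gain_A = f_mc(S, A | {2}) - f_mc(S, A)  # covers c and d -> gain 2
    gain_B = f_mc(S, B | {2}) - f_mc(S, B)  # c already covered -> gain 1
    assert gain_A >= gain_B                 # submodularity (diminishing returns)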
Now we are ready to show the following result that enables us to find a near optimal network G:

THEOREM 3. Let V be a set of nodes, and C be a collection of cascades hitting the nodes V. Then F_C(G) is a submodular function F_C : 2^W → R defined over subsets of the set W = V × V of directed edges.

PROOF. Fix a cascade c, graphs G ⊆ G′ and an edge e = (r, s) not contained in G′. We will show that F_c(G ∪ {e}) − F_c(G) ≥ F_c(G′ ∪ {e}) − F_c(G′). Let w_{i,j} be the weight of edge (i, j) in G ∪ {e}, and w′_{i,j} be the weight in G′ ∪ {e}. As argued before, the maximum weight directed spanning tree for DAGs is obtained by assigning to each node the incoming edge with maximum weight. Let (i, s) be the edge incoming at s of maximum weight in G, and (i′, s) the maximum weight incoming edge in G′. Since G ⊆ G′, it holds that w_{i,s} ≤ w′_{i′,s}. Adding e = (r, s) can only change the tree at node s, hence F_c(G ∪ {e}) − F_c(G) = max(0, w_{r,s} − w_{i,s}) ≥ max(0, w′_{r,s} − w′_{i′,s}) = F_c(G′ ∪ {e}) − F_c(G′), proving submodularity of F_c. Since nonnegative linear combinations of submodular functions are submodular, the function F_C(G) = Σ_{c∈C} F_c(G) is submodular as well.

Maximizing submodular functions in general is NP-hard [Khuller et al. 1999]. A commonly used heuristic is the greedy algorithm, which starts with an empty graph K̄ and iteratively, in step i, adds the edge e_i which maximizes the marginal gain:

e_i = argmax_{e∈(V×V)\G_{i−1}} F_C(G_{i−1} ∪ {e}) − F_C(G_{i−1}).   (17)

The algorithm stops once it has selected k edges, and returns the solution Ĝ = {e_1, . . . , e_k}. The stopping criterion, i.e., the value of k, can be based on some threshold of the marginal gain, on the number of estimated edges, or on another more sophisticated heuristic. In our context, we can think of the greedy algorithm as starting on an empty graph K̄ with no network edges. In each iteration i, the algorithm adds to G the edge e_i that currently most improves the value of the log-likelihood. Another way to view the greedy algorithm is that it starts on a fully connected graph K̄ where all the edges are ε-edges. Adding an edge to graph G then corresponds to that edge changing its type from ε-edge to network edge. Thus, our algorithm iteratively swaps ε-edges to network edges until k network edges have been swapped (i.e., inserted into the network G).

Considering the NP-hardness of the problem, we might expect the greedy algorithm to perform arbitrarily badly. However, we will see that this is not the case. A fundamental result of Nemhauser et al. [1978] proves that for monotonic submodular functions, the set Ĝ returned by the greedy algorithm obtains at least a constant fraction (1 − 1/e) ≈ 63% of the optimal value achievable using k edges. Moreover, we can acquire a tight online data-dependent bound on the solution quality:

THEOREM 4. For a graph Ĝ, and for each edge e ∉ Ĝ, let δ_e = F_C(Ĝ ∪ {e}) − F_C(Ĝ). Let e_1, . . . , e_k be the k edges with largest marginal gains δ_e. Then F_C(G*) ≤ F_C(Ĝ) + Σ_{i=1}^{k} δ_{e_i}, where G* is the optimal graph on at most k edges.

Theorem 4 computes how far a given Ĝ (obtained by any algorithm) is from the unknown, NP-hard-to-find optimum.

To make the algorithm scale to networks with thousands of nodes, we speed up the algorithm by several orders of magnitude by considering the following two improvements:

Localized update: Let C_i be the subset of cascades that go through node i (i.e., cascades in which node i is infected). Consider that in some step the greedy algorithm selects the network edge (j, i) with marginal gain δ_{j,i}, and now we have to update the optimal tree of each cascade. We make the simple observation that adding the network edge (j, i) may only change the optimal trees of the cascades in the set C_i, and thus we only need to revisit (and potentially update) the trees of the cascades in C_i.
Since cascades are local (i.e., each cascade hits only a relatively small subset of the network), this localized updating procedure speeds up the algorithm considerably.

Lazy evaluation: It can be used to drastically reduce the number of evaluations of marginal gains F_C(G ∪ {e}) − F_C(G) [Leskovec et al. 2007]. This procedure relies on the submodularity of F_C. The key idea behind lazy evaluations is the following. Suppose G_1, . . . , G_k is the sequence of graphs produced during the iterations of the greedy algorithm. Now let us consider the marginal gain ∆_e(G_i) of adding some edge e to any of these graphs. Due to the submodularity of F_C, it holds that ∆_e(G_i) ≥ ∆_e(G_j) whenever i ≤ j. Thus, the marginal gains of e can only monotonically decrease over the course of the greedy algorithm. This means that elements which achieve very little marginal gain at iteration i cannot suddenly produce large marginal gain at subsequent iterations. This insight can be exploited by maintaining a priority queue data structure over the edges and their respective marginal gains. At each iteration, the greedy algorithm retrieves the highest weight (priority) edge. Since its value may have decreased from previous iterations, it recomputes its marginal benefit. If the marginal gain remains the same after recomputation, it has to be the edge with highest marginal gain, and the greedy algorithm will pick it. If it decreases, one reinserts the edge with its new weight into the priority queue and continues. A sketch of this procedure is shown below; formal details and pseudo-code can be found in [Leskovec et al. 2007].

As we will show later, these two improvements decrease the run time by several orders of magnitude with no loss in the solution quality. We call the algorithm that implements the greedy algorithm on this alternative formulation with the above speedups the NETINF algorithm (Algorithm 2). In addition, NETINF nicely lends itself to parallelization, as the likelihoods of individual cascades and the likelihood improvements of individual new edges can simply be computed independently. This allows us to tackle even bigger networks in shorter amounts of time. A space and runtime complexity analysis of NETINF depends heavily on the structure of the network, and would therefore require strong assumptions on that structure; a formal complexity analysis is thus beyond the scope of this paper. Instead, we include an empirical runtime analysis in the following section.
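To illustrate the lazy evaluation speedup described above, here is a minimal CELF-style sketch of our own (it is not the paper's exact pseudocode; the caller-supplied marginal_gain function is an assumption of the example):

    import heapq
    from itertools import count

    def lazy_greedy(candidate_edges, marginal_gain, k):
        """Lazy greedy selection. Submodularity guarantees an edge's marginal
        gain only decreases as the solution grows, so a stale priority is a
        valid upper bound and most gains never need recomputing.

        marginal_gain(edge, selected): current gain of `edge` given the edges
        selected so far (the expensive call we are trying to avoid).
        """
        selected = []
        tie = count()  # tie-breaker so the heap never compares edges directly
        heap = [(-marginal_gain(e, selected), next(tie), e, 0) for e in candidate_edges]
        heapq.heapify(heap)
        rnd = 0
        while heap and len(selected) < k:
            neg_gain, _, e, computed_at = heapq.heappop(heap)
            if computed_at == rnd:
                selected.append(e)  # gain is fresh: e truly maximizes Eq. 17
                rnd += 1
            else:
                # stale: recompute against the current solution and reinsert
                heapq.heappush(heap, (-marginal_gain(e, selected), next(tie), e, rnd))
        return selected

Because stale gains are upper bounds, the top of the queue usually survives recomputation, which is where the large empirical savings reported below come from.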
In this section we proceed with the experimental evaluation of our proposed NETINF algorithm for inferring networks of diffusion. We analyze the performance of NETINF on synthetic and real networks. We show that our algorithm performs surprisingly well, outperforms a heuristic baseline and correctly discovers more than 90% of the edges of a typical diffusion network.

The goal of the experiments on synthetic data is to understand how the underlying network structure and the propagation model (exponential and power-law) affect the performance of our algorithm. A second goal is to evaluate the effect of the simplifications we had to make in order to arrive at an efficient network inference algorithm. Namely, we assume the contagion propagates in a tree pattern T (i.e., exactly the edges E_T caused the propagation), consider only the most likely tree T (Eq. 12), and treat non-propagating network edges as ε-edges (Eq. 11).

In general, in all our experiments we proceed as follows: We are given a true diffusion network G*, and we then simulate the propagation of a set of contagions c over the network G*. Diffusion of each contagion creates a cascade, and for each cascade we record the node hit times t_u. Then, given these node hit times, we aim to recover the network G* using the NETINF algorithm. For example, Figure 1(a) shows a graph G* of 20 nodes and 23 directed edges. Using the exponential incubation time model and β = 0.2, we generated 24 cascades. Now, given the node infection times, we aim to recover G*. A baseline method (b) (described below) performed poorly, while NETINF (c) recovered G* almost perfectly by making only two errors (red edges).

Algorithm 2 (NETINF):
Require: C = {(c, t_c)}, number of edges k
  G ← K̄
  for all c ∈ C do
    T_c ← dag_tree(c)   {find the most likely tree (Algorithm 1)}
  while |G| < k do
    for all (j, i) ∉ G do
      δ_{j,i} ← 0   {marginal improvement of adding edge (j, i) to G}
      M_{j,i} ← ∅
      for all c : t_j < t_i in c do
        let w_c(m, n) be the weight of (m, n) in G ∪ {(j, i)}
        if w_c(j, i) ≥ w_c(Par_{T_c}(i), i) then
          δ_{j,i} ← δ_{j,i} + w_c(j, i) − w_c(Par_{T_c}(i), i)
          M_{j,i} ← M_{j,i} ∪ {c}
    (j*, i*) ← argmax_{(j,i)∉G} δ_{j,i}
    G ← G ∪ {(j*, i*)}
    for all c ∈ M_{j*,i*} do
      Par_{T_c}(i*) ← j*
  return G

Our experimental methodology is composed of the following steps: (1) ground truth graph G*, (2) cascade generation (probability of propagation β, and the incubation time model with parameter α), and (3) number of cascades.

(1) Ground truth graph G*: We consider two models of directed real-world networks to generate G*, namely, the Forest Fire model [Leskovec et al. 2005] and the Kronecker Graphs model. For Kronecker graphs, we consider three sets of parameters that produce networks with very different global network structure: a random graph [Erdős and Rényi 1960] (Kronecker parameter matrix [0.5, 0.5; 0.5, 0.5]), a core-periphery network [Leskovec et al. 2008] ([0.962, 0.535; 0.535, 0.107]) and a network with hierarchical community structure [Clauset et al. 2008] ([0.962, 0.107; 0.107, 0.962]). The Forest Fire model generates networks with power-law degree distributions that follow the densification power law [Barabási and Albert 1999].

(2) Cascade propagation: We then simulate cascades on G* using the generative model defined in Section 2.1 (see the simulation sketch after this list). For the simulation we need to choose the incubation time model (i.e., power-law or exponential, with parameter α). We also need to fix the parameter β, which controls the probability of a cascade propagating over an edge. Intuitively, α controls how fast the cascade spreads (i.e., how long the incubation times are), while β controls the size of the cascades. Large β means cascades will likely be large, while small β makes most of the edges fail to transmit the contagion, which results in small infections.

(3) Number of cascades: Intuitively, the more data our algorithm gets, the more accurately it should infer G*. To quantify the amount of data (number of different cascades), we define E_l to be the set of edges that participate in at least l cascades. This means E_l is the set of edges that transmitted at least l contagions. It is important to note that if an edge of G* did not participate in any cascade (i.e., it never transmitted a contagion), then there is no trace of it in our data and thus we have no chance to infer it. In our experiments we choose the minimal amount of data (i.e., l = 1) so that we could at least in principle infer the true network G*. Thus, we generate as many cascades as needed to have a set E_l that contains a fraction f of all the edges of the true network G*.
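The following is a minimal sketch of the cascade generation of step (2) under the exponential incubation time model; the toy graph, function names and defaults (β = 0.5, α = 1) are illustrative assumptions of ours.

    import heapq
    import random

    def simulate_cascade(edges, root, beta=0.5, alpha=1.0):
        """Simulate one cascade per the generative model of Section 2.1: each
        infected node u gets one chance to transmit to each neighbor v with
        probability beta; on success the incubation time is drawn from an
        exponential distribution with parameter alpha (Eq. 1). A node keeps
        only its earliest infection, so who-infected-whom forms a tree."""
        hit = {}              # node -> earliest hit time
        heap = [(0.0, root)]  # infection events ordered by time
        while heap:
            t_u, u = heapq.heappop(heap)
            if u in hit:      # already infected earlier; later attempts have no impact
                continue
            hit[u] = t_u
            for v in edges.get(u, []):
                if v not in hit and random.random() < beta:
                    heapq.heappush(heap, (t_u + random.expovariate(1.0 / alpha), v))
        return hit

    # Toy ground-truth graph G* and a handful of simulated cascades.
    G_star = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
    cascades = [simulate_cascade(G_star, random.choice(list(G_star))) for _ in range(5)]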
In all our experiments we pick cascade starting nodes uniformly at random and generate enough cascades so that 99% of the edges in G* participate in at least one cascade, i.e., 99% of the edges are included in E_1. Table II shows experimental values of the number of cascades needed for E_1 to cover different percentages of the edges. To take a closer look at the cascade size distribution, for a Forest Fire network on 1,024 nodes and 1,477 edges, we generated 4,038 cascades. The majority of edges took part in 4 to 12 cascades, and the cascade size distribution follows a power law (Figure 5(b)). The average and median number of cascades per edge are 9.1 and 8, respectively (Figure 5(a)).

To infer a diffusion network Ĝ, we consider a simple baseline heuristic where we compute the score of each edge and then pick the k edges with the highest score. More precisely, for each possible edge (u, v) of G, we compute w(u, v) = Σ_{c∈C} P_c(u, v), i.e., overall how likely the cascades c ∈ C were to propagate over the edge (u, v). Then we simply pick the k edges (u, v) with the highest score w(u, v) to obtain Ĝ. For example, Figure 1(b) shows the results of the baseline method on a small graph.

Solution quality. We evaluate the performance of the NETINF algorithm in two different ways. First, we are interested in how successful NETINF is at optimizing the objective function F_C(G), which is NP-hard to optimize exactly. Using the online bound in Theorem 4, we can assess at most how far from the unknown optimum the NETINF solution is in terms of the log-likelihood score. Second, we also evaluate NETINF based on accuracy, i.e., what fraction of the edges of G* NETINF managed to infer correctly.

[Table II: Performance on synthetic data. Break-even point (BEP) and area under the ROC curve (AUC) when we generated the minimum number |C| of cascades such that an f-fraction of edges participated in at least one cascade, i.e., |E_l| ≥ f|E|. These |C| cascades generated a total of r edge transmissions, i.e., the average cascade size is r/|C|. All networks have 1,024 nodes and 1,446 edges. We use the exponential incubation time model with parameter α = 1, and in each case we set the probability β such that r/|C| is neither too small nor too large (i.e., β ∈ (0.1, 0.6)).]

Figure 6(a) plots the value of the log-likelihood improvement F_C(G) as a function of the number of edges in G. In red we plot the value achieved by NETINF and in green the upper bound using Theorem 4. The plot shows that the value of the unknown optimal solution (which is NP-hard to compute exactly) is somewhere between the red and the green curve. Notice that the band between the two curves, the optimal and the NETINF curve, is narrow. For example, at 2,000 edges in Ĝ, NETINF finds a solution that achieves at least 97% of the optimal value. Moreover, also notice a strong diminishing returns effect: the value of the objective function flattens out after about 1,000 edges. This means that, in practice, very sparse solutions (almost tree-like diffusion graphs) already achieve values of the objective function very close to the optimum.

We also evaluate our approach by studying how many edges inferred by NETINF are actually present in the true network G*. We measure the precision and recall of our method (computed as in the sketch below). For every value of k (1 ≤ k ≤ n(n − 1)) we generate Ĝ_k on k edges by using NETINF or the baseline method. We then compute precision (the fraction of edges in Ĝ_k that are also present in G*) and recall (the fraction of edges of G* that appear in Ĝ_k).
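For concreteness, here is a small helper sketch of our own for computing precision, recall and the break-even point from an inferred edge set (we take the break-even point at k = |E*|, where precision equals recall):

    def precision_recall(inferred_edges, true_edges):
        """Precision and recall of an inferred edge set against the true G*."""
        inferred, true = set(inferred_edges), set(true_edges)
        tp = len(inferred & true)  # correctly inferred edges
        precision = tp / len(inferred) if inferred else 1.0
        recall = tp / len(true) if true else 1.0
        return precision, recall

    def break_even_point(ranked_edges, true_edges):
        """Precision (= recall) after taking as many top-ranked edges as
        there are true edges, i.e., k = |E*|."""
        k = len(true_edges)
        p, r = precision_recall(ranked_edges[:k], true_edges)
        return p  # equals r when k = |E*|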
For small k, we expect low recall and high precision, as we select the few edges that we are most confident in. As k increases, precision will generally start to drop, but recall will increase.

Figure 7 shows the precision-recall curves of NETINF and the baseline method on three different Kronecker graphs (random, hierarchical community structure and core-periphery structure) with 1,024 nodes and two incubation time models. The cascades were generated with an exponential incubation time model with α = 1, or a power law incubation time model with α = 2, and a value of β low enough to avoid generating too-large cascades (in all cases, we pick a value of β ∈ (0.1, 0.6)). For each network we generated between 2,000 and 4,000 cascades, so that 99% of the edges of G* participated in at least one cascade. We chose cascade starting points uniformly at random.

We view this as a particularly strong result, as we were especially careful not to generate too many cascades, since more cascades mean more evidence that makes the problem easier. Thus, using a very small number of cascades, where every edge of G* participates in only a few cascades, we can almost perfectly recover the underlying diffusion network G*. A second important point to notice is that the performance of NETINF seems to be strong regardless of the structure of the network G*. This means that NETINF works reliably regardless of the particular structure of the network over which the contagions propagated (refer to Table II).

Similarly, Figures 7(d), 7(e) and 7(f) show the performance on the same three networks but using the power law incubation time model. The performance of the baseline now drops dramatically. This is likely due to the fact that the variance of the power-law distribution (and heavy tailed distributions in general) is much larger than the variance of an exponential distribution, so the diffusion network inference problem is much harder in this case. While the baseline pays a high price for the increase in variance, with the break-even point dropping below 0.1, the performance of NETINF remains stable, with the break-even point in the high 90s.

We also examine the results on the Forest Fire network (Figures 7(g) and 7(h)). Again, the performance of the baseline is very low, while NETINF achieves a break-even point of around 0.90. Generally, the performance on the Forest Fire network is a bit lower than on the Kronecker graphs. However, it is important to note that while these networks have very different global network structure (from hierarchical, random and scale free to core-periphery), the performance of NETINF is remarkably stable and does not seem to depend on the structure of the network we are trying to infer, nor on the particular type of cascade incubation time model.

Finally, in all the experiments, we observe a sharp drop in precision for high values of recall (near 1). This happens because the greedy algorithm starts to choose edges with low marginal gains that may be false edges, increasing the probability of making mistakes.

Performance vs. cascade coverage. Intuitively, the larger the number of cascades that spread over a particular edge, the easier it is to identify it. On the one hand, if the edge never transmitted, we cannot identify it; on the other hand, the more times it participated in the transmission of a contagion, the easier the edge should be to identify. In our experiments so far, we generated a relatively small number of cascades. Next, we examine how the performance of NETINF depends on the amount of available cascade data.
Performance vs. cascade coverage. Intuitively, the more cascades spread over a particular edge, the easier it is to identify that edge. If an edge never transmitted a contagion, we cannot identify it at all, and the more times it participated in transmissions, the easier it should be to identify. In our experiments so far, we generated a relatively small number of cascades. Next, we examine how the performance of NETINF depends on the amount of available cascade data. This is important because in many real-world situations data on only a few different cascades is available. Figure 8 plots the break-even point of NETINF as a function of the available cascade data, measured as the number of contagion transmission events over all cascades. The total number of contagion transmission events is simply the sum of the cascade sizes; thus, x = 1 means that the total number of transmission events used for the experiment was equal to the number of edges in G*. Notice that as the amount of cascade data increases, the performance of NETINF also increases. Overall, NETINF requires the total number of transmission events to be about 2 times the number of edges in G* in order to successfully recover most of the edges of G*. Moreover, the plot shows the performance for different values of the edge transmission probability β. As noted before, large values of β produce larger cascades. Interestingly, when cascades are small (small β), NETINF needs less data to infer the network than when cascades are large. This occurs because the larger a cascade, the harder it is to infer the parent of each node, since there are more potential parents for each node to choose from. For example, when β = 0.1 NETINF needs about 2|E| transmission events, while when β = 0.5 it needs twice as much data (about 4|E| transmissions) to reach a break-even point of 0.9.

Stopping criterion. In practice one does not know how long to run the algorithm, i.e., how many edges to insert into the network Ĝ. Given the results from Figure 6, we found the following heuristic to give good results. We run the NETINF algorithm for k steps, where k is chosen such that the objective function is "close" to the upper bound, i.e., F_C(Ĝ) > x · OPT, where OPT is obtained using the online bound. In practice we use values of x in the range 0.8-0.9. That is, in each iteration k, OPT is computed by evaluating the right-hand side of the equation in Theorem 4, where k is simply the iteration number. Therefore, OPT is computed online, and the stopping condition is thus also updated online.

Figure 9 shows the average computation time per added edge for the NETINF algorithm implemented with lazy evaluation and localized updates. We use a hierarchical Kronecker network and an exponential incubation time model with α = 1 and β = 0.5. The localized update speeds up the algorithm by an order of magnitude (45×), and lazy evaluation gives a further factor-of-6 improvement. Thus, overall, we achieve a two-orders-of-magnitude speedup (280×) without any loss in solution quality. In practice, the NETINF algorithm can easily infer networks of 10,000 nodes in a matter of hours.
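The greedy selection loop with lazy evaluation and the online stopping rule can be sketched as follows. This is a schematic reconstruction: `marginal_gain` and `online_bound` are placeholders for the paper's computation of improvements in F_C and of the Theorem 4 bound, and the localized-update optimization is omitted for brevity.

```python
import heapq

def netinf_greedy(candidate_edges, marginal_gain, online_bound, x=0.85):
    """Lazy-greedy edge selection with the online stopping criterion.

    marginal_gain(e, G) -- improvement in F_C from adding edge e to graph G
    online_bound(G)     -- upper bound OPT on the optimum (Theorem 4)
    x                   -- stop once F_C(G) exceeds x * OPT
    """
    G, f_val, it = set(), 0.0, 0
    # max-heap via negated gains; 'stamp' records when a gain was computed
    heap = [(-marginal_gain(e, G), e, 0) for e in candidate_edges]
    heapq.heapify(heap)
    while heap:
        it += 1
        while True:
            neg_gain, e, stamp = heapq.heappop(heap)
            if stamp == it:        # fresh gain: submodularity guarantees
                break              # this edge is still the best choice
            # stale gain: recompute for the current G and push back
            heapq.heappush(heap, (-marginal_gain(e, G), e, it))
        G.add(e)
        f_val -= neg_gain
        if f_val > x * online_bound(G):   # stopping criterion, updated online
            break
    return G
```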
In our experiments so far, we have assumed that the incubation times between infections are not noisy and that we have access to the true distribution from which the incubation times are drawn. However, real data may violate either of these assumptions. We therefore study the performance of NETINF (break-even point) as a function of the noise in the waiting times between infections: we add Gaussian noise to the waiting times between infections in the cascade generation process. Figure 10 plots the performance of NETINF (break-even point) as a function of the amount of Gaussian noise added to the incubation times between infections, for both an exponential incubation time model with α = 1 and a power law incubation time model with α = 2. The break-even point degrades with noise, but once a high level of noise is reached, further increases do not degrade the performance of NETINF any further. Interestingly, the break-even point for high values of noise is very similar to the break-even point achieved later on a real dataset (Figures 13(a) and 13(b)).

In all our experiments so far, we have also assumed access to complete cascade data, i.e., that we observe all the nodes taking part in each cascade. Thus, except for the first node of a cascade, there are no "jumps" or missing nodes in the cascade as it spreads across the network. Even though techniques for coping with missing data in information cascades have recently been investigated [Sadikov et al. 2011], we evaluate NETINF under two such adverse scenarios. First, we consider the case where a random fraction of each cascade is missing. This means that we first generate a set of cascades, but then only record the node infection times of an f-fraction of the nodes. We generate enough cascades so that, even without counting the missing nodes, 99% of the edges in G* participate in at least one cascade. We then randomly delete (i.e., set the infection times to infinity for) an f-fraction of the nodes in each cascade. Figure 11(a) plots the performance of NETINF (break-even point) as a function of the fraction of missing nodes per cascade. Naturally, the performance drops as the amount of missing data grows. However, we also note that the effect of missing nodes can be mitigated by an appropriate choice of the parameter ε: a higher ε makes propagation via ε-edges more likely, and by giving a cascade a greater chance to propagate over the ε-edges, NETINF can implicitly account for the missing data. Second, we consider the case where the contagion does not spread through the network via diffusion but rather through the influence of an external source, so that the contagion does not really spread over the edges of the network but instead appears almost at random at various nodes. Figure 11(b) plots the performance of NETINF (break-even point) as a function of the percentage of nodes infected by an external source, for different values of ε. In our framework, we model the influence of the external source with the ε-edges. Note that an appropriate choice of ε can account for exogenous infections that are not the result of network diffusion: the higher the value of ε, the stronger the assumed influence of the external source, i.e., we allow for a greater number of missing nodes or of nodes infected from outside the network. Thus, the break-even point is more robust for higher values of ε.
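A sketch of the cascade perturbations used in these robustness experiments is shown below. We fold the two scenarios (noisy incubation times and missing nodes) into one hypothetical helper; the Gaussian noise is applied to the waiting times between consecutive infections, as described above.

```python
import random

def perturb_cascade(cascade, noise_std=0.0, missing_frac=0.0, rng=random):
    """Perturb a synthetic cascade: Gaussian noise on waiting times between
    infections, then hide a random fraction of the infected nodes.

    cascade -- dict mapping node -> infection time
    """
    order = sorted(cascade, key=cascade.get)   # nodes by infection time
    noisy, t, prev = {}, 0.0, None
    for u in order:
        if prev is None:
            t = cascade[u]                     # the cascade root keeps its time
        else:
            gap = cascade[u] - prev            # waiting time between infections
            t += max(gap + rng.gauss(0.0, noise_std), 0.0)  # keep times ordered
        prev = cascade[u]
        noisy[u] = t
    # "missing data" scenario: erase an f-fraction of the observations
    for u in rng.sample(order, int(missing_frac * len(order))):
        noisy[u] = float("inf")                # infinity = never observed infected
    return noisy
```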
Dataset description. We use more than 172 million news articles and blog posts from 1 million online sources over a period of one year, from September 1, 2008 until August 31, 2009. Based on this raw data, we use two different methodologies to trace information on the Web, and thus create two different datasets. (1) Blog hyperlink cascades dataset: We use hyperlinks between blog posts to trace the flow of information. When a blog publishes a piece of information and uses hyperlinks to refer to posts published by other blogs, we consider this as events of information transmission. A cascade c starts when a blog publishes a post P, and the information propagates recursively to other blogs as they link to the original post or to one of the other posts from which we can trace a chain of hyperlinks all the way back to the original post P. By following the chains of hyperlinks in the reverse direction, we identify hyperlink cascades. A cascade is thus composed of the time-stamps of the hyperlink/post creation times. (2) MemeTracker dataset: We use the MemeTracker [Leskovec et al. 2009] methodology to extract more than 343 million short textual phrases (such as "Joe the plumber" or "lipstick on a pig"). Out of these, 8 million distinct phrases appeared more than 10 times, with a cumulative number of mentions of over 150 million. We cluster the phrases to aggregate different textual variants of the same phrase [Leskovec et al. 2009] and then consider each phrase cluster as a separate cascade c. Since all documents are time-stamped, a cascade c is simply the set of time-stamps at which blogs first mentioned phrase c. So, we observe the times when sites mention particular phrases, but not where they copied or obtained the phrases from. We consider the largest 5,000 cascades (phrase clusters), and for each website we record the time when it first mentions a phrase in the particular phrase cluster. Note that cascades in general do not spread over all the sites, which our methodology successfully handles. Figure 12 further illustrates the concepts of hyperlink and MemeTracker cascades.

Accuracy on real data. As there is no ground truth network for either dataset, we use the following procedure to create a ground truth network G* (see the sketch below). We create a network with a directed edge (u, v) between a pair of sites u and v if a post on site u linked to a post on site v. To construct the network we take the top 500 sites in terms of the number of hyperlinks they create/receive. We represent each site as a node in G* and connect a pair of nodes if a post on the first site linked to a post on the second site. This process produces a ground truth network G* with 500 nodes and 4,000 edges. First, we use the blog hyperlink cascades dataset to infer the network Ĝ and evaluate how many edges NETINF gets right. Figure 13(a) shows the performance of NETINF and the baseline. Notice that the baseline method achieves a break-even point of 0.34, while our method performs better, with a break-even point of 0.44, almost a 30% improvement. NETINF is essentially performing a link-prediction task based only on temporal linking information. The underlying assumption in this experiment is that sites prefer to create links to sites that recently mentioned the information, while completely ignoring the authority of the site. Given that this assumption is not fully satisfied in real life, we consider the break-even point of 0.44 a good result. Next, we consider an even harder problem, where we use the MemeTracker dataset to infer G*. In this experiment, we only observe the times when sites mention particular textual phrases, and the task is to infer the hyperlink structure of the underlying web graph. Figure 13(b) shows the performance of NETINF and the baseline. The baseline method has a break-even point of 0.17, and NETINF achieves a break-even point of 0.28, more than a 50% improvement. For a fair comparison with the synthetic cases, note that the exponential incubation time model is a simplistic assumption for our real dataset, and NETINF could potentially gain additional accuracy with a more realistic incubation time model.
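The ground-truth construction just described reduces to a few lines of code; here `hyperlinks` is a hypothetical list of (source site, target site) pairs extracted from post-to-post links.

```python
from collections import Counter

def hyperlink_ground_truth(hyperlinks, top_n=500):
    """Build the ground-truth network G*: keep the top_n sites by hyperlink
    activity and add a directed edge u -> v if a post on u linked a post on v.

    hyperlinks -- iterable of (source_site, target_site) pairs
    """
    activity = Counter()
    for src, dst in hyperlinks:
        activity[src] += 1          # links created
        activity[dst] += 1          # links received
    top = {site for site, _ in activity.most_common(top_n)}
    edges = {(src, dst) for src, dst in hyperlinks
             if src in top and dst in top and src != dst}
    return top, edges
```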
As with the synthetic data, in Figure 6(b) we investigate the value of the objective function and compare it to the online bound. Notice that the bound is almost as tight as in the case of synthetic networks, showing that the solution achieves at least 84% of the optimal score, and both curves are similar in shape to those in the synthetic case. Again, as in the synthetic case, the value of the objective function quickly flattens out, which means that relatively few edges suffice to capture most of the information flow on the Web. In the remainder of the section, we use the top 1,000 media sites and blogs with the largest number of documents.

Visualization of diffusion networks. We examine the structure of the inferred diffusion networks using both datasets: the blog hyperlink cascades dataset and the MemeTracker dataset. Figure 14 shows the largest connected component of the diffusion network after 100 edges have been chosen using the first dataset, i.e., using hyperlinks to track the flow of information. The size of a node is proportional to the number of articles on the site, and the width of an edge is proportional to the probability of influence, i.e., stronger edges are drawn wider. The strength of an edge across all cascades is simply defined as the marginal gain obtained when adding the edge in the greedy algorithm (which is proportional to the probability of influence). Since news media articles rarely use hyperlinks to refer to one another, the network is somewhat biased towards blogs (blue nodes). There are several interesting patterns to observe. First, notice that three main clusters emerge: on the left side of the network we see blogs and news media sites related to politics; at the top right, blogs devoted to gossip, celebrity news and entertainment; and at the bottom right, blogs and news media sites that cover technology news. While Huffington Post and Political Carnival play central roles on the political side of the network, mainstream media sites like Washington Post and Guardian, together with the professional blog Salon.com, act as connectors between the different parts of the network. The celebrity gossip part of the network is dominated by the blog Gawker, and technology news gathers around the blogs Gizmodo and Engadget, with CNet and TechCrunch establishing the connection to the rest of the network. Figure 15 shows the largest connected component of the diffusion network after 300 edges have been chosen using the second methodology, i.e., using short textual phrases to track the flow of information. In this case, the network is biased towards news media sites due to their higher volume of information.

Insights into the diffusion on the web. The inferred diffusion networks also allow for analysis of the global structure of information propagation on the Web. For this analysis, we use the MemeTracker dataset and analyze the structure of the inferred information diffusion network. First, Figure 16(a) shows the distribution of the influence index. The influence index of a site w is defined as the number of nodes reachable from w by traversing edges of the inferred diffusion network (while respecting edge directions). However, we are also interested in the distance from w to its reachable nodes, since nodes at shorter distances are more likely to be infected by w. Thus, we slightly modify the influence index of w to be Σ_u 1/d_{wu}, where the sum runs over all nodes u reachable from w and d_{wu} is the distance between w and u.
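Both variants of the influence index reduce to a breadth-first traversal of the inferred network. A minimal sketch, assuming the network is given as an adjacency-list dict:

```python
from collections import deque

def influence_index(graph, w):
    """Influence index of site w: the number of nodes reachable from w
    (respecting edge directions) and its distance-weighted variant,
    the sum of 1/d(w, u) over all reachable nodes u.

    graph -- dict mapping node -> iterable of out-neighbors
    """
    dist, queue = {w: 0}, deque([w])
    while queue:
        u = queue.popleft()
        for v in graph.get(u, ()):
            if v not in dist:              # first visit gives the shortest d
                dist[v] = dist[u] + 1
                queue.append(v)
    reachable = len(dist) - 1              # exclude w itself
    weighted = sum(1.0 / d for d in dist.values() if d > 0)
    return reachable, weighted
```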
Notice that there are two types of nodes: a small set of nodes that can reach many other nodes, which means they directly or indirectly propagate information to them, and a large number of sites that only get influenced and do not themselves influence many other sites. This hints at a core-periphery structure of the diffusion network, with a small set of sites directly or indirectly spreading information to the rest of the network. Figure 16(b) investigates the number of links in the inferred network that point between different types of sites. Here we split the sites into mainstream media and blogs. Notice that most of the links point from news media to blogs, which indicates that most of the time information propagates from the mainstream media to blogs. Then notice how at first many media-to-media links are chosen, but in later iterations the rate at which such links are added slows down. This means that media-to-media links tend to be the strongest, so NETINF picks them early. The opposite occurs for blog-to-blog links: relatively few are chosen at first, but the algorithm picks more of them later on. Lastly, links capturing the influence of blogs on mainstream media are the rarest and the weakest. This suggests that most information travels from mass media to blogs. Last, Figure 16(c) shows the median time difference between mentions for different types of sites. For every edge of the inferred diffusion network, we compute the median time needed for the information to spread from the source to the destination node. Again, we distinguish between mainstream media sites and blogs. Notice that media sites are quick to infect one another or even to get infected by blogs. However, blogs tend to be much slower in propagating information: it takes a relatively long time for them to get "infected" with information, regardless of whether the information comes from the mainstream media or from the blogosphere. Finally, we have observed that the insights into diffusion on the Web obtained from the inferred network are very similar to the insights obtained by simply taking the hyperlink network. However, our aim here is to show that (i) although the quantitative results are modest in terms of precision and recall, the qualitative insights make sense, and (ii) it is surprising that, using only the timestamps of links, we are able to draw the same qualitative insights as when using the full hyperlink network.

There are several lines of work we build upon. Although information diffusion in online settings has received considerable attention [Gruhl et al. 2004; Kumar et al. 2004; Adar and Adamic 2005; Liben-Nowell and Kleinberg 2008], only a few studies were able to study the actual shapes of cascades [Liben-Nowell and Kleinberg 2008; Ghosh and Lerman 2011; Romero et al. 2011; Ver Steeg et al. 2011]. The problem of inferring links of diffusion was first studied by Adar and Adamic [Adar and Adamic 2005], who formulated it as a supervised classification problem and used Support Vector Machines combined with rich textual features to predict the occurrence of individual links. Although rich textual features are used, links are predicted independently, and thus their approach is similar to our baseline method in the sense that it picks a threshold (i.e., a hyperplane in the case of SVMs) and individually predicts the most probable links.
The works most closely related to our approach, CONNIE [Myers and Leskovec 2010] and NETRATE [Gomez-Rodriguez et al. 2011], also use a generative probabilistic model for the problem of inferring a latent social network from diffusion (cascade) data. However, CONNIE and NETRATE use convex programming to solve the network inference problem. CONNIE includes an ℓ1-like penalty term that controls sparsity, while NETRATE obtains a unique sparse solution by allowing different transmission rates across edges. For each edge (i, j), CONNIE infers a prior probability β_{i,j} and NETRATE infers a transmission rate α_{i,j}. Both algorithms are computationally more expensive than NETINF. In our work, we assume that all edges of the network have the same prior probability (β) and transmission rate (α). From this point of view, a direct comparison between the algorithms would be unfair, since NETRATE and CONNIE have more degrees of freedom.

Network structure learning has been considered for estimating the dependency structure of probabilistic graphical models [Friedman et al. 1999]. However, there are fundamental differences between our approach and structure learning of graphical models. First, our work makes no assumptions about the network structure (we allow cycles and reciprocal edges) and is thus able to learn general directed networks. In directed graphical models, reciprocal edges and cycles are not allowed, and the inferred network is a directed acyclic graph (DAG). In undirected graphical models, there are typically no assumptions about the network structure, but the inferred network is undirected. Second, Bayesian network structure inference methods are generally heuristic approaches without approximation guarantees. Network structure learning has also been used for estimating epidemiological networks [Wallinga and Teunis 2004] and for estimating probabilistic relational models [Getoor et al. 2003]. In both cases, the problem is formulated in a probabilistic framework; however, since the problem is intractable, heuristic greedy hill-climbing or stochastic search, which offer no performance guarantees, were usually used in practice. In contrast, our work provides a novel formulation and a tractable solution together with an approximation guarantee. Our work also relates to static sparse graph estimation using graphical Lasso methods [Wainwright et al. 2006; Schmidt et al. 2007; Friedman et al. 2008; Meinshausen and Buehlmann 2006], unsupervised network structure inference using kernel methods [Lippert et al. 2009], mutual information relevance network inference [Butte and Kohane 2000], inference of influence probabilities [Goyal et al. 2010], and extensions to time-evolving graphical models [Ahmed and Xing 2009; Ghahramani 1998; Song et al. 2009]. Our work is also related to the link prediction problem [Jansen et al. 2003; Taskar et al. 2003; Liben-Nowell and Kleinberg 2003; Backstrom and Leskovec 2011; Vert and Yamanishi 2005], but differs in the sense that that line of work assumes part of the network is already visible. Last, although submodular function maximization has previously been considered for sensor placement and for finding influencers in viral marketing [Kempe et al. 2003], to the best of our knowledge, the present work is the first to consider submodular function maximization in the context of network structure learning.
We have investigated the problem of tracing paths of diffusion and influence. We formalized the problem and developed a scalable algorithm, NETINF, to infer networks of influence and diffusion. First, we defined a generative model of cascades and showed that choosing the best set of k edges maximizing the likelihood of the data is NP-hard. By exploiting the submodularity of our objective function, we developed NETINF, an efficient algorithm for inferring a near-optimal set of k directed edges. By exploiting localized updates and lazy evaluation, our algorithm is able to scale to very large real datasets. We evaluated our algorithm on synthetic cascades sampled from our generative model, and showed that NETINF is able to accurately recover the underlying network from a relatively small number of samples. In our experiments, NETINF drastically outperformed a naive maximum-weight baseline heuristic. Most importantly, our algorithm allows us to study properties of real networks. We evaluated NETINF on a large real dataset of memes propagating across news websites and blogs. We found that the inferred network exhibits a core-periphery structure, with the mass media influencing most of the blogosphere. Clusters of sites related to similar topics emerge (politics, gossip, technology, etc.), and a few sites with social capital interconnect these clusters, enabling potential diffusion of information between sites in different clusters. There are several interesting directions for future work. Here we only used time differences to infer edges, so it would be interesting to utilize more informative features (e.g., the textual content of postings) to estimate the influence probabilities more accurately. Moreover, our work considers static propagation networks; real influence networks are dynamic, however, and it would be interesting to relax this assumption. Last, there are many other domains where our methodology could be useful: inferring interaction networks in systems biology (protein-protein and gene interaction networks), neuroscience (inferring physical connections between neurons) and epidemiology. We believe that our results provide a promising step towards understanding complex processes on networks based on partial observations.

References.
Tracking information epidemics in blogspace
Implicit structure and the dynamics of blogspace
Recovering time-varying networks of dependencies in social and biological studies
Infectious diseases of humans: Dynamics and control
Supervised random walks: Predicting and recommending links in social networks
The Mathematical Theory of Infectious Diseases and its Applications
The origin of bursts and heavy tails in human dynamics
Mutual information relevance networks: functional genomic clustering using pairwise entropy measurements
Hierarchical structure and the prediction of missing links in networks
Robust dynamic classes revealed by measuring the response function of a social system
Mining the network value of customers
Optimum branchings
On the evolution of random graphs
Sparse inverse covariance estimation with the graphical lasso
Being Bayesian about network structure. A Bayesian approach to structure discovery in Bayesian networks
Learning Bayesian network structure from massive datasets: The "Sparse Candidate" algorithm
Learning probabilistic models of link structure
Learning dynamic Bayesian networks
A framework for quantitative analysis of cascades on networks
Uncovering the Temporal Dynamics of Diffusion Networks
Snowball sampling
Learning influence probabilities in social networks
Information diffusion through blogspace
Respondent-driven sampling: A new approach to the study of hidden populations
The mathematics of infectious diseases
A Bayesian networks approach for predicting protein-protein interactions from genomic data
Personal influence: The part played by people in the flow of mass communications
An experimental study of the coloring problem on human subject networks
Maximizing the spread of influence through a social network
The budgeted maximum coverage problem
The art of computer programming
Structure and evolution of blogspace
The dynamics of viral marketing
Meme-tracking and the dynamics of the news cycle
Scalable modeling of real graphs using Kronecker multiplication
Graphs over time: densification laws, shrinking diameters and possible explanations
Graph evolution: Densification and shrinking diameters
Cost-effective outbreak detection in networks
Statistical properties of community structure in large social and information networks
Cascading behavior in large blog graphs
Patterns of influence in a recommendation network
The link prediction problem for social networks
Tracing the flow of information on a global scale using Internet chain-letter data
A kernel method for unsupervised structured network inference
A Poissonian explanation for heavy tails in e-mail communication
High-dimensional graphs and variable selection with the lasso
On the Convexity of Latent Social Network Inference
An analysis of approximations for maximizing submodular set functions
Diffusion of Innovations
Differences in the mechanics of information diffusion across topics: Idioms, political hashtags, and complex contagion on Twitter
Correcting for missing data in information cascades
Learning graphical model structure using ℓ1-regularization paths
Time-varying dynamic Bayesian networks
Diffusion in organizations and social movements: From hybrid corn to poison pills
Link prediction in relational data
The dissection of equilateral triangles into equilateral triangles
What stops social epidemics?
Supervised graph inference
High-dimensional graphical model selection using ℓ1-regularized logistic regression
Different epidemic curves for severe acute respiratory syndrome reveal similar impacts of control measures
Influentials, networks, and public opinion formation

Acknowledgments. We thank Spinn3r for resources that facilitated the research. The research was supported in part by the Albert Yu & Mary Bechmann Foundation, IBM, Lightspeed, Microsoft, Yahoo, grants ONR N00014-09-1-1044, NSF CNS0932392, NSF CNS1010921, NSF IIS1016909, NSF IIS0953413, AFRL FA8650-10-C-7058 and an Okawa Foundation Research Grant. Manuel Gomez Rodriguez has been supported in part by a Fundacion Caja Madrid Graduate Fellowship, a Fundacion Barrie de la Maza Graduate Fellowship and by the Max Planck Society.