title: Diffusion Earth Mover's Distance and Distribution Embeddings
authors: Tong, Alexander; Huguet, Guillaume; Natik, Amine; MacDonald, Kincaid; Kuchroo, Manik; Coifman, Ronald; Wolf, Guy; Krishnaswamy, Smita
date: 2021-02-25

We propose a new fast method of measuring distances between large numbers of related high dimensional datasets called the Diffusion Earth Mover's Distance (EMD). We model the datasets as distributions supported on a common data graph that is derived from the affinity matrix computed on the combined data. In such cases where the graph is a discretization of an underlying Riemannian closed manifold, we prove that Diffusion EMD is topologically equivalent to the standard EMD with a geodesic ground distance. Diffusion EMD can be computed in $\tilde{O}(n)$ time and is more accurate than similarly fast algorithms such as tree-based EMDs. We also show Diffusion EMD is fully differentiable, making it amenable to future uses in gradient-descent frameworks such as deep neural networks. Finally, we demonstrate an application of Diffusion EMD to single cell data collected from 210 COVID-19 patient samples at Yale New Haven Hospital. Here, Diffusion EMD can derive distances between patients on the manifold of cells at least two orders of magnitude faster than equally accurate methods. This distance matrix between patients can be embedded into a higher level patient manifold which uncovers structure and heterogeneity in patients. More generally, Diffusion EMD is applicable to all datasets that are massively collected in parallel in many medical and biological systems.

With the profusion of modern high dimensional, high throughput data, the next challenge is the integration and analysis of collections of related datasets. Examples of this are particularly prevalent in single cell measurement modalities, where data (such as mass cytometry or single cell RNA sequencing data) can be collected in a multitude of patients, or in thousands of perturbation conditions (Shifrut et al., 2018). These situations motivate the organization and embedding of datasets, similar to how we now organize data points into low dimensional embeddings, e.g., with PHATE (Moon et al., 2019), tSNE (van der Maaten & Hinton, 2008), or diffusion maps (Coifman & Lafon, 2006). The advantage of such organization is that we can use the datasets as rich high dimensional features to characterize and group the patients or perturbations themselves. In order to extend embedding techniques to entire datasets, we have to define a distance between datasets, which for our purposes are essentially high dimensional point clouds. For this we propose a new form of Earth Mover's Distance (EMD), which we call Diffusion EMD, where we model the datasets as distributions supported on a common data affinity graph. We provide two extremely fast methods for computing Diffusion EMD based on an approximate multiscale kernel density estimation on a graph. Optimal transport is uniquely suited to the formulation of distances between entire datasets (each of which is a collection of data points) as it generalizes the notion of the shortest path between two points to the shortest set of paths between distributions.
Recent works have applied optimal transport in the single-cell domain to interpolate lineages (Schiebinger et al., 2019; Yang & Uhler, 2019), interpolate patient states (Tong & Krishnaswamy, 2020), integrate multiple domains (Demetci et al., 2020), or, similar to this work, build a manifold of perturbations (Chen et al., 2020). All of these approaches use the standard primal formulation of the Wasserstein distance, solved either with an entropic regularization approximation and the Sinkhorn algorithm (Cuturi, 2013) in the discrete distribution case, or with a neural network based approach in the continuous formulation (Arjovsky et al., 2017). We will instead use the dual formulation through the well-known Kantorovich-Rubinstein dual to efficiently compute optimal transport between many distributions lying on a common low-dimensional manifold in a high-dimensional measurement space. This presents both theoretical and computational challenges, which are the focus of this work. Specifically, we will first describe a new Diffusion EMD that is an $L^1$ distance between density estimates computed using multiple scales of diffusion kernels over a graph. Using theory on the Hölder-Lipschitz dual norm on continuous manifolds (Leeb & Coifman, 2016), we show that as the number of samples increases, Diffusion EMD is equivalent to the Wasserstein distance on the manifold. This formulation reduces the computational complexity of computing K-nearest Wasserstein-neighbors between $m$ distributions over $n$ points from $O(m^2 n^3)$ for the exact computation to $\tilde{O}(mn)$ with reasonable assumptions on the data. Finally, we will show how this can be applied to embed large sets of distributions that arise from a common graph, for instance single cell datasets collected on large patient cohorts. Our contributions include: 1. A new method for computing EMD for distributions over graphs called Diffusion EMD. 2. Theoretical analysis of the relationship between Diffusion EMD and the snowflake of a standard EMD. 3. Fast algorithms for approximating Diffusion EMD. 4. Demonstration of the differentiability of this framework. 5. Application of Diffusion EMD to embedding massive multi-sample biomedical datasets. We now briefly review optimal transport definitions and classic results from diffusion geometry. Notation. We say that two elements A and B are equivalent if there exist $c, C > 0$ such that $cA \leq B \leq CA$, and we denote $A \asymp B$. The definition of A and B will be clear depending on the context. Optimal Transport. Let $\mu, \nu$ be two probability distributions on a measurable space $\Omega$ with metric $d(\cdot,\cdot)$, and let $\Pi(\mu,\nu)$ be the set of joint probability distributions $\pi$ on the space $\Omega \times \Omega$, where for any subset $\omega \subset \Omega$, $\pi(\omega \times \Omega) = \mu(\omega)$ and $\pi(\Omega \times \omega) = \nu(\omega)$. The 1-Wasserstein distance $W_d$, also known as the earth mover's distance (EMD), is defined as

$$W_d(\mu, \nu) := \inf_{\pi \in \Pi(\mu,\nu)} \int_{\Omega \times \Omega} d(x, y) \, d\pi(x, y). \quad (1)$$

When $\mu, \nu$ are discrete distributions with $n$ points, Eq. 1 is computable in $O(n^3)$ with a network-flow based algorithm (Peyré & Cuturi, 2019). Let $\|\cdot\|_{L_d}$ denote the Lipschitz norm w.r.t. $d$; then the dual of Eq. 1 is

$$W_d(\mu, \nu) = \sup_{\|f\|_{L_d} \leq 1} \int_\Omega f \, d(\mu - \nu). \quad (2)$$

This formulation is known as the Kantorovich dual, with $f$ as the witness function. Since it is in general difficult to optimize over the entire space of 1-Lipschitz functions, many works optimize the cost over a modified family of functions, such as functions parameterized by clipped neural networks (Arjovsky et al., 2017), functions defined over trees (Le et al., 2019), or functions defined over Haar wavelet bases (Gavish et al., 2010).
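As a concrete point of reference for the primal problem in Eq. 1, the following minimal sketch computes the exact 1-Wasserstein distance between two small point clouds with the Python Optimal Transport (POT) package, which is also the exact solver used later as ground truth in the experiments. The variable names and the toy data are ours, not the paper's.

```python
# Illustrative only: exact 1-Wasserstein distance (Eq. 1) between two small
# discrete distributions, solved with POT's network-flow solver.
import numpy as np
import ot  # Python Optimal Transport

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=(50, 3))   # support of mu
y = rng.normal(loc=1.0, scale=1.0, size=(60, 3))   # support of nu
a = np.full(50, 1.0 / 50)                          # uniform weights for mu
b = np.full(60, 1.0 / 60)                          # uniform weights for nu

M = ot.dist(x, y, metric="euclidean")              # ground distance d(x, y)
emd = ot.emd2(a, b, M)                             # exact EMD, O(n^3) in general
print(f"exact EMD: {emd:.4f}")
```

This exact computation is what the multiscale embeddings described below are designed to avoid when many pairwise distances are needed.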
Data Diffusion Geometry. Let $(\mathcal{M}, d_\mathcal{M})$ be a connected Riemannian manifold; we can assign sigma-algebras to $\mathcal{M}$ and view it as a "metric measure space". We denote by $\Delta$ the Laplace-Beltrami operator on $\mathcal{M}$. For all $x, y \in \mathcal{M}$ let $h_t(x, y)$ be the heat kernel, which is the minimal solution of the heat equation

$$\frac{\partial}{\partial t} h_t(x, y) = -\Delta_x h_t(x, y), \quad (3)$$

with initial condition $\lim_{t \to 0} h_t(x, y) = \delta_y(x)$, where $x \mapsto \delta_y(x)$ is the Dirac function centered at $y$, and $\Delta_x$ is taken with respect to the $x$ argument of $h_t$. Note that as shown in Grigor'yan et al. (2014), the heat kernel captures the local intrinsic geometry of $\mathcal{M}$ in the sense that as $t \to 0$, $\log h_t(x, y) \asymp -d_\mathcal{M}^2(x, y)/4t$. Here, in Sec. 4.2 (Theorem 1), we discuss another topological equivalent of the geodesic distance: a diffusion distance derived from the heat operator $H_t := e^{-t\Delta}$, which characterizes the solutions of the heat equation (Eq. 3) and is related to the heat kernel via $H_t f = \int h_t(\cdot, y) f(y)\, dy$ (see Lafferty & Lebanon, 2005; Coifman & Lafon, 2006; Grigor'yan et al., 2014, for further details). It is often useful (particularly in high dimensional data) to consider data as sampled from a lower dimensional manifold embedded in the ambient space. This manifold can be characterized by its local structure and, in particular, by how heat propagates along it. Coifman & Lafon (2006) showed how to build such a propagation structure over discrete data by first building a graph with affinities

$$(\mathbf{K}_\epsilon)_{ij} := e^{-\|x_i - x_j\|_2^2 / \epsilon}, \quad (4)$$

then considering the density normalized operator $\mathbf{M}_\epsilon := \mathbf{Q}^{-1} \mathbf{K}_\epsilon \mathbf{Q}^{-1}$, where $\mathbf{Q}_{ii} := \sum_j (\mathbf{K}_\epsilon)_{ij}$. Lastly, a Markov diffusion operator is defined by

$$\mathbf{P}_\epsilon := \mathbf{D}^{-1} \mathbf{M}_\epsilon, \quad \text{where } \mathbf{D}_{ii} := \sum_j (\mathbf{M}_\epsilon)_{ij}. \quad (5)$$

Both $\mathbf{D}$ and $\mathbf{Q}$ are diagonal matrices. By the law of large numbers, the operator $\mathbf{P}_\epsilon$ admits a natural continuous equivalent $\tilde{P}_\epsilon$, i.e., for $n$ i.i.d. points, the sums normalized by $n$ converge to the corresponding integrals. Moreover, in Coifman & Lafon (2006, Prop. 3) it is shown that $\lim_{\epsilon \to 0} \tilde{P}_\epsilon^{t/\epsilon} = e^{-t\Delta} = H_t$. In conclusion, the operator $\mathbf{P}_\epsilon$ converges to $\tilde{P}_\epsilon$ as the sample size increases, and $\tilde{P}_\epsilon$ provides an approximation of the heat kernel on the manifold. Henceforth, we drop the subscript $\epsilon$ of $\mathbf{P}_\epsilon$ to lighten the notation; further, we will use the notation $P$ for the operator as well as for the matrix (which one is meant will be clear from context).

3. EMD through the $L^1$ metric between multiscale density estimates. The method of efficiently approximating EMD that we will consider here is the approximation of EMD through density estimates at multiple scales. Previous work has considered densities using multiscale histograms over images (Indyk & Thaper, 2003), wavelets over images and trees (Shirdhonkar & Jacobs, 2008; Gavish et al., 2010), and densities over hierarchical clusters (Le et al., 2019). Diffusion EMD also uses a hierarchical set of bins at multiple scales, but with smooth bins determined by the heat kernel, which allows us to show equivalence to EMD with a ground distance of the manifold geodesic. These methods are part of a family that we call multiscale earth mover's distances: they first compute a set of multiscale density estimates or histograms, such that $L^1$ differences between density estimates realize an effective witness function, and they have (varying) equivalence to the Wasserstein distance between the distributions. This class of multiscale EMDs is particularly useful in computing embeddings of distributions where the Wasserstein distance is the ground distance, as they are amenable to fast nearest neighbor queries. We explore this application further in Sec. 4.6 and these related methods in Sec. B.1 of the Appendix.
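Before moving to the construction of Diffusion EMD itself, here is a minimal, dense sketch of the diffusion operator from Eqs. 4-5 (Gaussian affinities, density normalization, Markov normalization). In practice the paper thresholds the kernel or uses a kNN graph so that $\mathbf{P}$ is sparse; this toy version keeps everything dense for clarity, and the function name is ours.

```python
# A minimal sketch of the graph diffusion operator P_eps of Eqs. 4-5,
# assuming small dense data; the real pipeline uses sparse kernels.
import numpy as np
from scipy.spatial.distance import cdist

def diffusion_operator(X, eps):
    """Return the row-stochastic diffusion operator P_eps for points X (n x d)."""
    K = np.exp(-cdist(X, X, "sqeuclidean") / eps)   # (K_eps)_ij, Eq. 4
    q = K.sum(axis=1)
    M = K / np.outer(q, q)                          # M_eps = Q^-1 K_eps Q^-1
    d = M.sum(axis=1)
    P = M / d[:, None]                              # P_eps = D^-1 M_eps, Eq. 5
    return P

X = np.random.default_rng(1).uniform(size=(200, 2))
P = diffusion_operator(X, eps=0.05)
assert np.allclose(P.sum(axis=1), 1.0)              # each row is a distribution
```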
We now present the Diffusion EMD, a new Earth Mover's Distance based on multiscale diffusion kernels, as depicted in Fig. 1. We first show how to model multiple datasets as distributions on a common data graph and how to perform multiscale density estimates on this graph. Let $X = \{X_1, X_2, \ldots, X_m\}$, $\cup_{j=1}^m X_j \subseteq \mathcal{M} \subset \mathbb{R}^d$, be a collection of datasets with $n_i = |X_i|$ and $n = \sum_i n_i$. Assume that the $X_i$'s are independently sampled from a common underlying manifold $(\mathcal{M}, d_\mathcal{M})$, which is a Riemannian closed manifold (compact and without boundary) immersed in a (high dimensional) ambient space $\mathbb{R}^d$, with geodesic distance $d_\mathcal{M}$. Further, assume that while the underlying manifold is common, each dataset is sampled from a different distribution over it, as discussed below. Such collections of datasets arise from several related samples of data, for instance single cell data collected on a cohort of patients with a similar condition. Here, we consider the datasets in $X$ as representing distributions over the common data manifold, which we represent in the finite setting as a common data graph $G_X = (V, E, w)$ with $V = \cup_{j=1}^m X_j$ and edge weights determined by the Gaussian kernel (see Eq. 4), where we identify edge existence with nonzero weights. Then, we associate each $X_i$ with a density measure $\mu_i^{(t)} : V \to [0, 1]$ over the entire data graph. To compute such measures, we first create indicator vectors for the individual datasets: let $\mathbf{1}_{X_i} \in \{0, 1\}^n$ be a vector where, for each $v \in V$, $\mathbf{1}_{X_i}(v) = 1$ if and only if $v \in X_i$. We then derive a kernel density estimate by applying the diffusion operator constructed via Eq. 5 over the graph $G$ to these indicator functions to get scale-dependent estimators

$$\mu_i^{(t)} := \mathbf{P}^t \, \mathbf{1}_{X_i} / |X_i|, \quad (6)$$

where the scale $t$ is the diffusion time, which can be considered as a meta-parameter (e.g., as used in Burkhardt et al., 2021) but can also be leveraged in multiscale estimation of distances between distributions, as discussed here. Indeed, as shown in Burkhardt et al. (2021), at an appropriately tuned single scale, this density construction yields a discrete version of kernel density estimation. We define the Diffusion Earth Mover's Distance between two datasets $X_i, X_j \in X$ as

$$W_{\alpha, K}(X_i, X_j) := \sum_{k=0}^{K} \left\| b_{i,k} - b_{j,k} \right\|_1, \quad (7)$$

where $0 < \alpha < 1/2$ is a meta-parameter used to balance long- and short-range distances, which in practice is set close to $1/2$, $K$ is the maximum scale considered here, and

$$b_{i,k} := 2^{-(K-k)\alpha} \left( \mu_i^{(2^{k+1})} - \mu_i^{(2^k)} \right) \ \text{for } k < K, \qquad b_{i,K} := \mu_i^{(2^K)}. \quad (8)$$

Further, to set $K$, we note that if the Markov process governed by $\mathbf{P}$ converges (i.e., to its stationary steady state) in polynomial time w.r.t. $|V|$, then one can ensure that beyond $K = O(\log |V|)$ all density estimates are essentially indistinguishable, as shown by Lemma 1 (whose proof appears in the Appendix): for every $k \geq K$ and every dataset $X_i$, $\mathbf{P}^{2^k} \mathbf{1}_{X_i} \approx \phi_0$, where $\phi_0$ is the trivial eigenvector of $\mathbf{P}$ associated with the eigenvalue $\lambda_0 = 1$. Computing the Diffusion EMD $W_{\alpha,K}$ in Eq. 7 involves first calculating diffusions of the datasets $\mu$, then calculating differences between consecutive scales and weighting them in relation to their scale, which results in a vector per distribution of length $O(nK)$, and finally computing the $L^1$ norm between these vectors. The most computationally intensive step, computing the diffusions, is discussed in further detail in Sec. 4.4.

Figure 1. Diffusion EMD first embeds datasets into a common data graph $G$, then takes multiscale diffusion KDEs for each of the datasets. These multiscale KDEs are then used to compute the Diffusion Earth Mover's Distance between the datasets, which can be used in turn to create graphs and embeddings (PHATE (Moon et al., 2018) shown here) of the datasets.
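The following is a naive reference sketch of Eqs. 6-8 as we read them above: dyadic diffusion scales obtained by repeated squaring, weighted differences between consecutive scales, and an $L^1$ distance between the stacked embeddings. It is illustrative only; the DiffusionEMD package uses the faster approximations of Sec. 4.4 rather than dense matrix powers, and the helper names below are ours.

```python
# Naive dense sketch of the multiscale embedding b_{i,k} (Eq. 8) and the
# Diffusion EMD (Eq. 7) for two toy "datasets" living on a common graph.
import numpy as np
from scipy.spatial.distance import cdist

def diffusion_operator(X, eps):
    K = np.exp(-cdist(X, X, "sqeuclidean") / eps)
    q = K.sum(1)
    M = K / np.outer(q, q)
    return M / M.sum(1, keepdims=True)

def diffusion_emd_embedding(P, indicator, K=6, alpha=0.5):
    """Stack b_{i,0..K} for one dataset given its indicator vector on the graph."""
    mu = indicator / indicator.sum()          # uniform distribution on X_i
    scales, Pk = [], P.copy()
    for _ in range(K + 1):                    # mu^(2^0), mu^(2^1), ..., mu^(2^K)
        scales.append(Pk @ mu)
        Pk = Pk @ Pk                          # P^(2^k) -> P^(2^(k+1))
    bins = [2 ** (-(K - k) * alpha) * (scales[k + 1] - scales[k]) for k in range(K)]
    bins.append(scales[K])
    return np.concatenate(bins)

rng = np.random.default_rng(2)
X = rng.uniform(size=(300, 2))
P = diffusion_operator(X, eps=0.05)
ind_a = (X[:, 0] < 0.3).astype(float)         # two toy datasets on the common graph
ind_b = (X[:, 0] > 0.7).astype(float)
emb_a = diffusion_emd_embedding(P, ind_a)
emb_b = diffusion_emd_embedding(P, ind_b)
print("Diffusion EMD (Eq. 7):", np.abs(emb_a - emb_b).sum())
```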
We now provide a theoretical justification of the Diffusion EMD defined via Eq. 7 by following the relation established in Leeb & Coifman (2016) between heat kernels and the EMD on manifolds. Leeb & Coifman (2016) define the following ground distance for EMD over $\mathcal{M}$ by leveraging the geometric information gathered from the $L^1$ distances between kernels at different scales. Definition 1. The diffusion ground distance between $x, y \in \mathcal{M}$ is defined as

$$D_\alpha(x, y) := \sum_{k \geq K} 2^{-k\alpha} \left\| h_{2^{-k}}(x, \cdot) - h_{2^{-k}}(y, \cdot) \right\|_1,$$

for $\alpha \in (0, 1/2)$, the scale parameter $K \geq 0$, and $h_t(\cdot, \cdot)$ the heat kernel on $\mathcal{M}$. Note that $D_\alpha(\cdot, \cdot)$ is similar to the diffusion distance defined in Coifman & Lafon (2006), which was based on $L^2$ notions rather than the $L^1$ notions used here. Further, the following result from Leeb & Coifman (2016, see Sec. 3.3) shows that the distance $D_\alpha(\cdot, \cdot)$ is closely related to the intrinsic geodesic one $d_\mathcal{M}(\cdot, \cdot)$. Theorem 1. Let $(\mathcal{M}, d_\mathcal{M})$ be a closed manifold with geodesic distance $d_\mathcal{M}$, and let $\alpha \in (0, 1/2)$. The metric $D_\alpha(\cdot, \cdot)$ defined via Def. 1 is equivalent to $d_\mathcal{M}(\cdot, \cdot)^{2\alpha}$. The previous theorem justifies why in practice we set $\alpha$ close to $1/2$: we want the snowflake distance $d_\mathcal{M}^{2\alpha}(\cdot, \cdot)$ to approximate the geodesic distance of $\mathcal{M}$. The notion of equivalence established by this result is such that $D_\alpha(x, \cdot) \asymp d_\mathcal{M}(x, \cdot)^{2\alpha}$. It is easy to verify that two equivalent metrics induce the same topology. We note that while here we only consider the heat kernel, a similar result holds (see Theorem 4 in the Appendix) for a more general family of kernels, as long as they satisfy certain regularity conditions. For a family of operators $(A_t)_{t \in \mathbb{R}^+}$ and two distributions $\mu$ and $\nu$, we define the following metric on distributions:

$$W_{A_t}(\mu, \nu) := \left\| A_1(\mu - \nu) \right\|_1 + \sum_{k \geq 1} 2^{-k\alpha} \left\| \left( A_{2^{-k}} - A_{2^{-(k-1)}} \right)(\mu - \nu) \right\|_1. \quad (9)$$

The following result shows that applying this metric to the family of operators $H_t$ to get $W_{H_t}$ yields an equivalent of the EMD with respect to the diffusion ground distance $D_\alpha(\cdot, \cdot)$. Theorem 2. The EMD between two distributions $\mu, \nu$ on a closed Riemannian manifold $(\mathcal{M}, d_\mathcal{M})$ w.r.t. the diffusion ground distance $D_\alpha(\cdot, \cdot)$ defined via Def. 1, given by

$$W_{D_\alpha}(\mu, \nu) := \inf_{\pi \in \Pi(\mu, \nu)} \int_{\mathcal{M} \times \mathcal{M}} D_\alpha(x, y) \, d\pi(x, y), \quad (10)$$

is equivalent to $W_{H_t}$. That is, $W_{D_\alpha} \asymp W_{H_t}$, where $H_t$ is the heat operator on $\mathcal{M}$. Proof. In Proposition 15 of Leeb & Coifman (2016) it is shown that $\mathcal{M}$ is separable w.r.t. $D_\alpha(\cdot, \cdot)$, hence we can use the Kantorovich-Rubinstein theorem. We let $\Lambda_\alpha$ be the space of functions that are Lipschitz w.r.t. $D_\alpha(\cdot, \cdot)$ and $\|\cdot\|_{\Lambda_\alpha^*}$ the norm of its dual space $\Lambda_\alpha^*$. The norm is defined by

$$\|T\|_{\Lambda_\alpha^*} := \sup \left\{ \langle f, T \rangle : \|f\|_{\Lambda_\alpha} \leq 1 \right\}.$$

In Theorem 4 of Leeb & Coifman (2016) it is shown that both $W_{D_\alpha}$ and $W_{H_t}$ are equivalent to the norm $\|\cdot\|_{\Lambda_\alpha^*}$. We consider the family of operators $(\tilde{P}_\epsilon^{t/\epsilon})_{t \in \mathbb{R}^+}$, which is built from the continuous equivalent of the stochastic matrix defined in Eq. 5. In practice, we use this family of operators to approximate the heat operator $H_t$. Indeed, when we take a small value of $\epsilon$, as discussed in Sec. 2, we have from Coifman & Lafon (2006) that this is a valid approximation. Corollary 2.1. Let $\tilde{P}_\epsilon$ be the continuous equivalent of the stochastic matrix in Eq. 5. For $\epsilon$ small enough, we have

$$W_{D_\alpha}(\mu, \nu) \asymp W_{\tilde{P}_\epsilon^{t/\epsilon}}(\mu, \nu). \quad (11)$$

Eq. 11 motivates our use of Eq. 7 to compute the Diffusion EMD: taking only the first $K$ terms in the infinite sum $W_{\tilde{P}_\epsilon^{t/\epsilon}}$ and then choosing $\epsilon := 2^{-K}$ gives exactly Eq. 7. We remark that the summation order of Eq. 7 is inverted compared to Eq. 11, but in both cases the largest scale has the largest weight. Finally, we state one last theorem that brings our distance closer to the Wasserstein distance w.r.t. $d_\mathcal{M}(\cdot, \cdot)$; we refer the reader to the Appendix for its proof. Theorem 3.
Let $\alpha \in (0, 1/2)$ and $(\mathcal{M}, d_\mathcal{M})$ be a closed manifold with geodesic $d_\mathcal{M}$. The Wasserstein distance w.r.t. the diffusion ground distance $D_\alpha(\cdot, \cdot)$ is equivalent to the Wasserstein distance w.r.t. the snowflake distance $d_\mathcal{M}(\cdot, \cdot)^{2\alpha}$ on $\mathcal{M}$, that is, $W_{D_\alpha} \asymp W_{d_\mathcal{M}^{2\alpha}}$. Corollary 3.1. For each $1 \leq i, j \leq m$, let $X_i, X_j \in X$ be two datasets with sizes $n_i$ and $n_j$ respectively, let $\mu_i$ and $\mu_j$ be the continuous distributions corresponding to those of $X_i$ and $X_j$, let $K$ be the largest scale, and put $N = \min(K, n_i, n_j)$. Then, for sufficiently big $N \to \infty$ (implying sufficiently small $\epsilon = 2^{-K} \to 0$),

$$W_{\alpha,K}(X_i, X_j) \asymp W_{d_\mathcal{M}^{2\alpha}}(\mu_i, \mu_j), \quad (12)$$

for all $\alpha \in (0, 1/2)$. In fact, we can summarize our chain of reasoning as

$$W_{\alpha,K}(X_i, X_j) \overset{(a)}{\approx} W_{\tilde{P}_\epsilon^{t/\epsilon}}(\mu_i, \mu_j) \overset{(b)}{\asymp} W_{H_t}(\mu_i, \mu_j) \overset{(c)}{\asymp} W_{D_\alpha}(\mu_i, \mu_j) \overset{(d)}{\asymp} W_{d_\mathcal{M}^{2\alpha}}(\mu_i, \mu_j),$$

where the approximation (a) is due to the fact that the discrete distributions on $X_i$ and $X_j$ converge respectively to $\mu_i$ and $\mu_j$ when $\min(n_i, n_j) \to \infty$; further, $W_{\alpha,K}(X_i, X_j)$ approximates the infinite series $W_{\tilde{P}_\epsilon^{t/\epsilon}}(\mu_i, \mu_j)$ as in Eq. 9 when $K \to \infty$ (note also that we take $\epsilon = 2^{-K}$ so that the largest scale in Eq. 7 is exactly $2^K$). The approximation in (b) comes from the approximation of the heat operator as in Coifman & Lafon (2006), (c) comes from Theorem 2, and (d) comes from Theorem 1.

Algorithm 1. Input: $n \times n$ graph kernel $\mathbf{K}_\epsilon$, $n \times m$ distributions $\mu$, maximum scale $K$, and snowflake constant $\alpha$.

The most computationally intensive step of Diffusion EMD is computing dyadic scales of the diffusion operator $\mathbf{P}$ times $\mu$ to estimate the density of $\mu^{(t)}$ at multiple scales, which we will call $b$. After this embedding, the Diffusion EMD between two embeddings $b_i, b_j$ is computed as $\|b_i - b_j\|_1$, i.e., the $L^1$ norm of their difference. Computing the embedding $b$ naively, by first powering $\mathbf{P}$ and then right multiplying by $\mu$, may take up to $2^K$ matrix multiplications, which is infeasible for even moderately sized graphs. We assume two properties of $\mathbf{P}$ that make this computation efficient in practice. First, that $\mathbf{P}$ is sparse, with order $\tilde{O}(n)$ non-zero entries. This applies when thresholding $\mathbf{K}_\epsilon$ or when using a K-nearest neighbors graph to approximate the manifold (van der Maaten & Hinton, 2008; Moon et al., 2019). Second, that $\mathbf{P}^t$ is low rank for large powers of $t$. While there are many ways to approximate dyadic scales of $\mathbf{P}$, we choose from two methods depending on the number of distributions $m$ compared to the number of points in the graph $n$. When $m \ll n$, we use a method based on Chebyshev approximation of polynomials of the eigenspectrum of $\mathbf{P}$, as shown in Alg. 1 (a small illustrative sketch is given below). This method is efficient for sparse $\mathbf{P}$ and a small number of distributions (Shuman et al., 2011). For more detail on the error incurred by using Chebyshev polynomials we refer the reader to Trefethen (2013, Chap. 3). In practice, for an approximating polynomial of $J$ terms, this requires computation of $mJ$ (sparse) matrix-vector multiplications, for a worst case time complexity of $O(Jmn^3)$, but in practice it is $\tilde{O}(Jmn)$, where $J$ is a small constant (see Fig. 5(e)). However, while asymptotically efficient, when $m \gg n$ this can be inefficient in practice, as it requires many multiplications of the form $\mathbf{P}\mu$. In the case where $m \gg n$, approximating powers of $\mathbf{P}$ and applying these to the $n \times m$ collection of distributions $\mu$ once is faster. This method is also useful when the full set of distributions is not known in advance, as it can be applied to new distributions one at a time in a data streaming model. A naive approach for computing $\mathbf{P}^{2^K}$ would require $K$ dense $n \times n$ matrix multiplications. However, as noted in Coifman & Maggioni (2006), for higher powers of $\mathbf{P}$ we can use a much smaller basis.
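As a small illustration of the Chebyshev filtering idea in Alg. 1 (this is our own sketch, not the library code): approximate $h(\lambda) = \lambda^{2^k}$ by a degree-$J$ Chebyshev expansion on $[-1, 1]$ and apply it to a distribution with $J$ matrix-vector products, never forming $\mathbf{P}^{2^k}$ explicitly. The paper's implementation applies the filter through the symmetric conjugate of a sparse $\mathbf{P}$; here a small dense, symmetrically built toy operator keeps the sketch short.

```python
# Apply an approximation of P^(2^k) to a vector via Chebyshev filtering.
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_apply(P, v, coeffs):
    """Evaluate sum_j coeffs[j] * T_j(P) @ v with the three-term recurrence."""
    t_prev, t_cur = v, P @ v                      # T_0(P) v, T_1(P) v
    out = coeffs[0] * t_prev + (coeffs[1] * t_cur if len(coeffs) > 1 else 0.0)
    for c in coeffs[2:]:
        t_prev, t_cur = t_cur, 2 * (P @ t_cur) - t_prev
        out = out + c * t_cur
    return out

def diffuse_dyadic(P, mu, k, J=32):
    """Approximate P^(2^k) @ mu; eigenvalues of P are assumed to lie in [-1, 1]."""
    coeffs = C.chebinterpolate(lambda x: x ** (2 ** k), J)
    return cheb_apply(P, mu, coeffs)

# toy check: symmetric affinities give a diffusion operator with real spectrum
rng = np.random.default_rng(3)
A = rng.random((100, 100)); A = (A + A.T) / 2
P = A / A.sum(1, keepdims=True)
mu = np.zeros(100); mu[:10] = 0.1
approx = diffuse_dyadic(P, mu, k=4)               # ~ P^16 @ mu
exact = np.linalg.matrix_power(P, 16) @ mu
print("max abs error:", np.abs(approx - exact).max())
```

For large collections of distributions, however, it can pay to approximate $\mathbf{P}^{2^k}$ itself on a reduced basis, which is the approach described next.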
We use an algorithm based on interpolative decomposition (Liberty et al., 2007; Bermanis et al., 2013) to reduce the size of the basis, and subsequently the computation time, at higher scales. In this algorithm we first determine the approximate rank of P 2 k using an estimate of the density of the eigenspectrum of P as in Dong et al. (2019) . We then alternate steps of downsampling the basis to the specified rank of P 2 k with (randomized) interpolative decomposition with steps of powering P on these bases. Informally, the interpolative decomposition selects a representative set of points that approximate the basis well. In the worst case this algorithm can takeÕ(mn 3 ) time to compute the diffusion density estimates, nevertheless with sparse P with a rapidly decaying spectrum, this algorithm isÕ(mn) in practice. For more details see Alg. 4 and Sec. C of the Appendix. The density estimates created for each distribution are both large and redundant with each distribution represented by a vector of (K + 1) × n densities. However, as noted in the previous section, P 2 k can be represented on a smaller basis, especially for larger scales. Intuitively, the long time diffusions of nodes that are close to each other are extremely similar. Interpolative decomposition (Liberty et al., 2007; Bermanis et al., 2013) allows us to pick a set of points to center our diffusions kernels such that they approximately cover the graph up to some threshold on the rank. In contrast, in other multiscale EMD methods the bin centers or clusters are determined randomly, making it difficult to select the number of centers necessary. Furthermore, the relative number of centers at every scale is fixed, for example, Quadtree or Haar wavelet based methods (Indyk & Thaper, 2003; Gavish et al., 2010) use 2 d k centers at every scale, and a clustering based method (Le et al., 2019) selects C k clusters at every scale for some constant C. Conversely, in Diffusion EMD, by analyzing the behavior of P 2 k , we intelligently select the number of centers needed at each scale based on the approximate rank of P 2 k up to some tolerance at each scale. This does away with the necessity of a fixed ratio of bins at every scale, allowing adaptation depending on the structure of the manifold and can drastically reduce the size representations (see Fig. 5 (c)). For the Chebyshev polynomials method, this subsampling is done post computation of diffusion scales, and for the method based on approximating P 2 k directly the subsampling happens during computation. To this point, we have described a method to embed distributions on a graph into a set of density estimates whose size depends on the data, and the spectrum decay of P . We will now explore how to use these estimates for exploring the Diffusion EMD metric between distributions. Our main motivation for a fast EMD computed on related datasets is to examine the space of the samples or datasets themselves, i.e., the higher level manifold of distributions. In terms of the clinical data, on which we show this method, this would be the relationship between patients themselves, as determined by the EMD between their respective singlecell peripheral blood datasets. Essentially, we create a kernel matrix K X and diffusion operator P X between datasets where the samples are nodes on the associated graph. 
This diffusion operator P_X can be embedded using diffusion maps (Coifman & Lafon, 2006) or visualized with a method like PHATE (Moon et al., 2018) that collects the information into two dimensions, as shown in Sec. 5. We note that this higher level graph can be a sparse KNN graph, particularly given that a diffusion operator on the graph can allow global connections to be reformed via t-step path probabilities. Multiscale formulations of EMD as in Eq. 8 are especially effective when searching for nearest neighbor distributions under the Wasserstein metric (Indyk & Thaper, 2003; Backurs et al., 2020), as this distance forms a normed space, i.e., a space where the metric is induced by the L^1 norm of the distribution vectors and their differences. Data structures such as kd-trees, ball trees, and locality sensitive hashing are able to take advantage of such normed spaces for sub-linear neighbor queries. This is in contrast to network-flow or Sinkhorn type approximations, which require a scan through all datapoints for each nearest neighbor query, as their metric is not derived from a norm. One of the hindrances in the use of optimal transport-based distances has been the fact that they cannot be easily incorporated into deep learning frameworks. Gradients with respect to the EMD are usually found using a trained Lipschitz discriminator network as in Wasserstein-GANs (Arjovsky et al., 2017), which requires unstable adversarial training, or by taking derivatives through a small number of iterations of the Sinkhorn algorithm (Frogner et al., 2015; Bonneel et al., 2016; Genevay et al., 2018; Liu et al., 2020), which scales with O(n^2) in the number of points. Tree based methods that are linear in the number of points do not admit useful gradients due to their hard binning over space, giving a gradient of zero norm almost everywhere. We note that, given the data diffusion operator, the computation of Diffusion EMD is differentiable and, unlike tree-based EMD, has smooth bins and therefore a non-zero gradient norm near the data (as visible in Fig. 2). Further, computation of the gradient only requires powers of the diffusion operator multiplied by the indicator vector describing the distribution on the graph. In fact, as mentioned in the supplementary material (Sec. C.1), for each v ∈ V the gradient of the Diffusion EMD ∂W_{α,K}(X_i, X_j)/∂v depends mainly on the gradients ∂P^{2^k}/∂v for 0 ≤ k ≤ K, which can be expressed in terms of the gradient of the Gaussian kernel ∂K_ε/∂v. This last quantity is easy to compute. In Sec. C.1 of the Appendix, we give an exact process for computing the gradient of the Diffusion EMD. In this section, we first evaluate the Diffusion EMD on two manifolds where the ground truth EMD with a geodesic ground distance is known: a swiss roll dataset and spherical MNIST (Cohen et al., 2017). On these datasets, where we have access to the ground truth geodesic distance, we show that Diffusion EMD is both faster and closer to the ground truth than comparable methods. Then, we show an application to a large single cell dataset of COVID-19 patients where the underlying metric between cells is thought to be a manifold (Moon et al., 2018; Kuchroo et al., 2020). We show that the manifold of patients based on Diffusion EMD, by capturing the graph structure of the cells, better captures the disease state of the patients. Experimental Setup.
We consider four baseline methods for approximating EMD: QuadTree(D) (Backurs et al., 2020), which partitions the dataspace in half in each dimension up to some specified depth D, ClusterTree(C, D) (Le et al., 2019), which recursively clusters the data with C clusters up to depth D, using the distance between clusters to weight the tree edges, and the convolutional Sinkhorn distance (Solomon et al., 2015) with the same graph as used in Diffusion EMD. QuadTree and ClusterTree are fast to compute. However, because they operate in the ambient space, they do not represent the geodesic distances on the manifold in an accurate way. The convolutional Sinkhorn method represents the manifold well but is significantly slower, even when using a single iteration. For more details on related work and the experimental setup see Sections B and D of the Appendix, respectively. 1D data. We first illustrate the advantages of using Diffusion EMD over tree-based methods (ClusterTree and QuadTree) on a line graph with 500 points spaced in the interval [0, 1]. In Fig. 2 we depict the Wasserstein distance between an indicator function at each of these 500 points (1_x) and an indicator function at x = 0.5. In the case of indicator functions the Wasserstein distance is exactly the ground distance. We show three approximations for each method, varying the number of trees and the Chebyshev polynomial order respectively. It is clear that Diffusion EMD achieves a much better approximation of the ground truth, primarily due to its use of smooth bins. Swiss roll data. The next application we explore is to a dataset where we have distributions on a manifold for which the geodesic distance is easily computable. In this way we can compare to the ground truth EMD between distributions. We generate m = 100 Gaussians on the swiss roll with 100 points each, for a total of n = 10,000 points on the graph. We compare each method on two metrics. The first is the 10-nearest neighbor precision (P@10), where a P@10 of 1 means that the 10 nearest neighbors are the same as the ground truth. Second, we compare the rankings of nearest neighbors over the entire dataset using the Spearman-ρ correlation coefficient, which measures the similarity of nearest neighbor rankings. This coefficient ranges between -1 for inversely ranked lists and 1 for identical rankings. This measures rankings over the entire dataset equally, rather than only considering the nearest neighbors. Visually, we show embeddings of the swiss roll in Fig. 3, where the 2D manifold between distributions is best captured by Diffusion EMD given a similar amount of time. In Fig. 4, we investigate the time vs. accuracy tradeoff of a number of fast EMD methods on the swiss roll. We compare against the ground truth EMD, which is calculated with the exact EMD on the "unrolled" swiss roll in 2D. We find that Diffusion EMD is more accurate than tree methods for a given amount of time, and is much faster than the convolutional Sinkhorn method while only slightly less accurate.

Figure 4. Spearman coefficient (right), against their (log scaled) computation time in seconds on the swiss roll dataset. Variations of methods are over Chebyshev approximation order for Diffusion EMD, # of trees for tree methods, and number of iterations for conv. Sinkhorn. Diffusion EMD is more accurate than tree methods and orders of magnitude faster than conv. Sinkhorn even with a single iteration.
To generate multiple models for each dataset we vary the number of trees for tree methods, the Chebyshev order for Diffusion EMD, and the number of iterations for convolutional Sinkhorn. We search over and fix other parameters using a grid search as detailed in Sec. D of the Appendix. In Fig. 5 , we vary parameters of the Chebyshev polynomial algorithm of Diffusion EMD. Regarding performance, we find Diffusion EMD is stable to the number of scales chosen after a certain minimum maximum scale K, Chebyshev polynomial order, and the number of scales used. By performing interpolative decomposition with a specified rank threshold on P 2 k we can substantially reduce the embedding size at a small cost to performance Fig. 5(b,c) . Spherical MNIST. Next, we use the Spherical MNIST dataset to demonstrate the efficacy of the interpolative decomposition based approximation to Diffusion EMD, as here m n. Each image is treated as a distribution (of pixel intensities) over the sphere. To evaluate the fidelity of each embedding, we evaluate the 1-NN classification on the embedding vectors in addition to P@10 and Spearman coefficient in Tab. 1. Diffusion EMD creates embeddings that better approximate true EMD over the sphere than tree methods in a similar amount of time, which in this case also gives better classification accuracy. Single cell COVID-19 patient data. The COVID-19 pandemic has driven biologists to generate vast amounts of cellular data on hospitalized patients suffering from severe disease. A major question in clinicians minds is determining a priori which patients may be at risk for worse outcomes, including requiring increased ventilatory support and increased risk of mortality. Certain cell types found in the blood, such as CD16 + Neutrophils, T cells and non-classical monocytes, have been associated with and predictive of mortality outcome. Ideally, a manifold of patients would find these cellular populations to occupy a region of high mortality for CD16 + Neutrophils and non-classical monocytes and low mortality for T cells. In order to construct a manifold of patients suffering from COVID-19, we analyzed 210 blood samples from 168 patients infected with SARS-CoV-2 measured on a myeloid-specific flow cytometry panel, an expanded iteration of a previously published dataset . We embedded 22 million cells from these patients into a common combined cell-cell graph with 27,000 nodes as defined in Sec. 4.1. We then computed Diffusion EMD and other methods on these datasets. Diffusion EMD is computed by using indicator vectors for each patient converted to density estimates as in Eq. 6. On an informative embedding of patients, similar patients, with similar features (such as mortality) would localize on the manifold and thus the important features should be smooth over the manifold. Furthermore, cell types which are correlated with outcome either positively or negatively should also be smooth and either correlated or anticorrelated with outcome. To quantify this, we compute the smoothness with respect to the patient manifold by using a Laplacian quadratic form with respect to the 10-NN graph between patients. Convolutional Sinkhorn does not scale to this data, so we compare a patient manifold created with Diffusion EMD to ones created with QuadTree and ClusterTree. Diffusion EMD is able to use the manifold of cells where QuadTree and ClusterTree are built in the ambient space. In Fig. 
6 we visualize relevant signals over the patients using PHATE (Moon et al., 2019), overlaid with the quadratic smoothness of each signal over the graph. While the mortality signal appeared enriched in the right branches of both the Diffusion EMD and QuadTree manifolds, it did not localize as well on the ClusterTree manifold. Both CD16+ Neutrophils and non-classical monocytes appeared smoother over the Diffusion EMD manifold than over the comparison manifolds. Since both cell types are associated with mortality, it was interesting to see them both enriched in the high mortality region of the Diffusion EMD manifold but not the others. Finally, T cells, which are negatively correlated with mortality, appeared smoothly enriched in the Diffusion EMD manifold in a region with no mortality. In the QuadTree and ClusterTree constructions, T cells appeared enriched throughout the manifold, not localizing smoothly to a region with low mortality. These experiments show that the patient manifold constructed with Diffusion EMD is more informative, smoothly localizing key signals such as patient outcome and predictive cell types. For more details see Sec. D of the Appendix. In this work we have introduced Diffusion EMD, a multiscale distance that uses heat kernel diffusions to approximate the earth mover's distance over a data manifold. We showed how Diffusion EMD can efficiently embed many samples on a graph into a manifold of samples in $\tilde{O}(mn)$ time, more accurately than similarly efficient methods. This is useful in the biomedical domain, as we show how to embed COVID-19 patient samples into a higher level patient manifold that more accurately represents the disease structure between patients. Finally, we also show how to compute gradients with respect to Diffusion EMD, which opens the possibility of gradients of the earth mover's distance that scale linearly with the dataset size. We first analyze the theoretical framework of Diffusion EMD in Appendix A. Next we discuss related work in Appendix B, with a particular focus on multiscale methods for EMD. In Appendix C we provide further detail on the two algorithms for computing Diffusion EMD, the first based on Chebyshev approximation, and the second based on directly powering the diffusion operator while reducing the basis. We discuss gradients of Diffusion EMD in Section C.1. Finally, we provide further experimental details in Appendix D. A.1. General framework. We now recall some useful results from Leeb & Coifman (2016). We will only present an overview; for a rigorous exposition, we suggest Leeb (2015), Leeb & Coifman (2016) and Grigor'yan & Liu (2015). We let Z be a sigma-finite measure space of dimension n, d(·, ·) its intrinsic distance and ξ its associated sigma-finite measure. In order to evaluate the EMD between two measures on Z, one would need to know d(·, ·), either directly, by using Eq. 1, or implicitly, to define the space of Lipschitz functions with respect to d(·, ·) in Eq. 2. One of the contributions of that work is to define a kernel based metric D_α(·, ·), where the kernel depends on α ∈ (0, 1), and to show that the metrics D_α(·, ·) and d(·, ·) are closely related. Next, the objective is to use D_α(·, ·) as the ground distance to compute the EMD in its dual form.
That is a norm between two distributions acting on the space of Lipschitz functions with respect to D α (·, ·). To do so, the authors define Λ α the space of functions that are Lipschitz with respect to D α (·, ·), and its dual Λ * α ; the space of measures acting on f ∈ Λ α . Further, in order to use the Kantorovich-Rubinstein theorem, they show that the space Z is separable with respect to the metric D α (·, ·). Thus, the norm of the dual space · Λ * α can be used to compute the EMD with D α (·, ·) as the ground distance. Lastly, they define two norms · (1) Λ * α and · (2) Λ * α that are equivalent to · Λ * α on Λ * . In practice, these norms are much faster to compute. The metric D α (·, ·) is defined using a family of kernels {a t (·, ·)} t∈R + on Z. For each kernel, we define an operator A t as (A t f )(x) = Z a t (x, y)f (y)dξ(y). These kernels must respect some properties: • The semigroup property: for all s, t > 0, A t A s = A t+s ; • The conservation property: Z a t (x, y)dξ(y) = 1; • The integrability property: there exists C > 0 such that Z |a t (x, y)|dξ(y) < C, for all t > 0 and x ∈ Z. Considering only the dyadic times, that is t = 2 −k , we define the kernel p k (·, ·) := a 2 −k (·, ·) and the operator P k = A 2 −k , for all k ∈ N. By leveraging the local geometric information gathered from the L 1 distance between two measures D k (x, y) := p k (x, ·) − p k (y, ·) 1 , the authors define the following multiscale metric To interpret D k (·, ·) in an intuitive way, consider the case where a t (·, ·) defines a random walk on Z. As a consequence, a t (x, B(y, r)) is the probability to move from x to a point in B(y, r) in t steps. Moreover, for any x ∈ Z, a t (x, ·) defines a distribution, therefore D k (x, y) is the L 1 distance between two distributions induced by the points x and y. Since these distributions depend on the number of steps t, it is clever to consider a distance that includes many scales, just like D α (·, ·). Another property needs to be verified by the kernel a t (·, ·), this property depends on the distance D α (·, ·). Namely, the geometric property: there exist C > 0 and α ∈ (0, 1) such that for all k ∈ N and x ∈ Z Z |p k (x, y)|D α (x, y)dξ(y) ≤ C2 −kα . We need to add three stronger regularity conditions on the kernel a t (·, ·), for D α (·, ·) to be closely related to the intrinsic distance d(·, ·): 1. An upper bound on the kernel: there exist a non-negative, monotonic decreasing function φ : R + → R and β > 0 such that for any γ < β, the function verifies 2. Hölder continuity estimate: there exist Θ > 0 sufficiently small, such that, for all t ∈ (0, 1] and all x, y ∈ Z, the distance verifies d(x, y) ≤ t 1/β and for all u ∈ Z the difference between kernels is bounded 3. A local lower bound: there exist a monotonic decreasing function ψ : R + → R and R > 0, such that, for all t ∈ (0, 1] and all x, y where d(x, y) < R, we have It is shown that the heat kernel on a closed Riemannian manifold respects all these conditions (Leeb & Coifman, 2016, see Sec. 3 .3). Definition 2. A distance d(·, ·) b is a snowflake of a distance d(·, ·) if b ∈ (0, 1). Moreover, the Hölder space is the space of functions that are Lipschitz continuous w.r.t. a snowflake distance, hence the terminology used in Leeb & Coifman (2016) . We now state an important theorem from Leeb & Coifman (2016, Thm. 2) . Theorem 4. Consider a sigma-finite measure space Z of dimension n with metric d(·, ·) and a measure ξ such that ξ(B(x, r)) r n . 
If the family of kernels {a t (·, ·)} t∈R + respect the condition 1,2 and 3, then, for 0 < α < min(1, Θ/β), the distance D α (·, ·) is equivalent to the thresholded snowflake distance min[1, d(·, ·) αβ ]. Remark. In our case, because we used the heat kernel on a closed Riemannian manifold M, we had D α (·, ·) d M (·, ·) 2α (Thm. 1). This can be justified from Leeb & Coifman (2016, Cor. 2). Using the same notation as in the corollary, we define C := max x,y d M (x, y) 2α (which is finite due to the assumptions on M), thus we can bound the constant B The previous theorem closely links the two considered distances. It also motivated the goal to compute the EMD w.r.t. D α (·, ·). In Leeb & Coifman (2016)[Prop. 15] , it is shown that Z is a separable space with respect to the metric D α (·, ·). Hence, we can use the Kantorovich-Rubinstein theorem to express the EMD in its dual form. First, we need to define the space of functions that are Lipschitz with respect to D α (·, ·). For a fix α ∈ (0, 1), we note this space by Λ α . It corresponds to the set of functions f on Z such that the norm is finite. Next, we need to define the space dual to Λ α , which is noted by Λ * α . For a function f ∈ Λ α and a L 1 measure T , we define f, T := Z f dT . The space Λ * α is the space of L 1 measure with the norm In practice, this norm would still be computationally expensive. However, the authors show that the norms (Leeb & Coifman, 2016, Thm. 4) . Where P * k is the adjoint of P k . Finally, using the Kantorovich-Rubinstein theorem, we get where D α (·, ·) min[1, d(·, ·) βα ] and d(·, ·) is the ground distance of Z. In conclusion, using · (1) Λ * α or · (2) Λ * α yields a norm equivalent to · Λ * α . The norm µ − ν Λ * α is equal to the Wasserstein distance between the distributions µ and ν with respect to the ground distance D α (·, ·). A.2. Proofs of section 4.2 Lemma 1. Assuming that P converges (i.e. to its stationary distribution) in polynomial time w.r.t. |V |, then there exists . . , n, where φ 0 is the trivial eigenvector of P associated with the eigenvalue λ 0 = 1. Proof. First we notice that P is reversible with respect to π i = D ii / i j D ij . Since P is ergodic it converges to its unique stationary distribution π. Moreover, we assumed this convergence to be in polynomial time w.r.t. |V |, i.e. we define Then, by our assumption, there exist Intuitively, for all k ≥ K, each row of the matrix P 2 k is approximately equal to π (w.r.t. ), as a consequence P 2 k 1 Xi ≈ φ 0 . Theorem 3. Let α ∈ (0, 1/2) and (M, d M ) be a closed manifold with geodesic d M . The Wasserstein distance w.r.t. the diffusion ground distance D α (·, ·) is equivalent to the Wasserstein distance w.r.t the snowflake distance d M (·, ·) 2α on M, Proof. We will prove this for a more general framework, we let two metrics d 1 (·, ·) and d 2 (·, ·) on a sigma-finite measure space Z, and let µ and ν be two measures. We assume d 1 d 2 , that is there exist two constants c, C > 0 such that for all x, y ∈ Z we have c d 1 (x, y) ≤ d 2 (x, y) ≤ C d 1 (x, y). Using the same notation as in Eq. 1, for all π ∈ Π(µ, ν) we have which is the same as cW d1 (µ, ν) ≤ W d2 (µ, ν) ≤ CW d1 (µ, ν), this proves that whenever d 1 and d 2 are equivalent then W d1 and W d2 are equivalent as well. Corollary 2.1. Let P be the continuous equivalent of the stochastic matrix in Eq. 5. For small enough, we have: → 0 as → 0. Generally speaking, convergence in L 2 implies convergence in L 1 when the manifold is closed. 
Now put D ,t = P t/ − H t , then we have D ,t L 1 (M) → 0 as → 0. Let µ and ν be two measures and let δ > 0, choose > 0 small enough so that D t, γ 1 < δ γ 1 for all t > 0, where γ = µ − ν, This proves that for all t > 0, for all δ > 0 there exists an > 0 sufficiently small such that Note also that using the reverse triangle inequality we can easily show that according to (Wang et al., 1997) we can get lower bounds of the heat kernel (which implies lower bounds for the heat operator), we have W Ht (µ, ν) ≥ H 1 (µ − ν) 1 ≥ C µ − ν 1 for some C > 0. If δ < C/2 then for sufficiently small > 0, which completes the proof. Corollary 3.1. For each 1 ≤ i, j ≤ m let X i , X j ∈ X be two datasets with size n i and n j respectively, and let µ i and µ j be the continuous distributions corresponding to the ones of X i and X j , let K be the largest scale and put N = min(K, n i , n j ). Then, for sufficiently big N → ∞ (implying sufficiently small = 2 −K → 0): for all α ∈ (0, 1/2). Proof. First define note that for = 2 −K we have W α,K, (X i , X j ) = W α,K (X i , X j ). Since n i → ∞ and n j → ∞, then using Monte-Carlo integration we get lim where W α,K, (µ i , µ j ) has the same expression as in Eq. 15 (replacing the discrete measure µ i 's with the continuous measures µ i 's), note that by definition of W P t/ we have lim K→∞ W α,K, (µ i , µ j ) = W P t/ (µ i , µ j ) and thus combining this result with Corollary 2.1 yields W α,K, (X i , X j ) W Dα (µ i , µ j ) for N large enough and small enough. Now if we take = 2 −K and apply Theorem 3 we get W α,K (X i , X j ) W d 2α M (µ i , µ j ) as required. Wavelet-based linear time approximations of the earth mover's distance have been investigated by Indyk & Thaper (2003) who used randomly shifted multi-scale grids to compute EMD between images. Shirdhonkar & Jacobs (2008) expanded this to wavelets over R n showing the benefit of using smooth bins over the data, and the relationship to the dual problem. Leeb & Coifman (2016) investigated using wavelets over more general metric spaces. We build off of this theory and efficiently approximate this distance on discrete manifolds represented by sparse kernels. Another line of work in approximations to the Wasserstein distance instead works with tree metrics over the data (Leeb, 2018; Le et al., 2019; Backurs et al., 2020; Sato et al., 2020) . These tree methods also linear time however are built in the ambient dimension and do not represent graph distances as well as graph based methods as shown in Sec. 5. Averaging over certain tree families can linked to approximating the EMD with a Euclidean ground distance in R n (Indyk & Thaper, 2003) . Many trees may be necessary to smooth out the effects of binning over a continuous space as is evident in Figure 2 . Diffusion EMD uses multiple scales of smooth bins to reduce this effect, thus giving a smoother distance which is approximately equivalent to a snowflake of the L 2 distance on the manifold. A third line of work considers entropy regularized Wasserstein distances (Cuturi, 2013; Benamou et al., 2015; Solomon et al., 2015) . These methods show that the transportation plan of the regularized 2-Wasserstein distance is a rescaling of the heat kernel accomplished by the iterative matrix rescaling algorithm known as the Sinkhorn algorithm. In particular, Solomon et al. (2015) links the time parameter t in the heat kernel H t to the entropy regularization parameter and performs Sinkhorn scaling with H t . 
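For reference, the following compact sketch shows entropy-regularized OT via Sinkhorn scaling (Cuturi, 2013). Replacing the Gibbs kernel $e^{-M/\lambda}$ by a heat kernel $H_t$, as in Solomon et al. (2015), yields the convolutional Sinkhorn baseline used in our experiments; the toy below uses a generic dense positive kernel purely for illustration, and the function name is ours.

```python
# Sinkhorn scaling: alternate rescaling so that diag(v) K diag(w) has the
# prescribed marginals mu and nu; with K = H_t this is convolutional Sinkhorn.
import numpy as np

def sinkhorn(mu, nu, Kmat, n_iter=100, tol=1e-9):
    """Return scaling vectors (v, w) with diag(v) K diag(w) ~ transport plan."""
    v = np.ones_like(mu)
    w = np.ones_like(nu)
    for _ in range(n_iter):
        w = nu / (Kmat.T @ v + 1e-300)
        v_new = mu / (Kmat @ w + 1e-300)
        if np.max(np.abs(v_new - v)) < tol:
            v = v_new
            break
        v = v_new
    return v, w

# toy example on a 1D grid; the kernel bandwidth plays the role of t (or lambda)
x = np.linspace(0, 1, 200)
M = (x[:, None] - x[None, :]) ** 2
K_heat = np.exp(-M / 1e-3)                    # stand-in for H_t
mu = np.exp(-((x - 0.2) ** 2) / 1e-3); mu /= mu.sum()
nu = np.exp(-((x - 0.7) ** 2) / 1e-3); nu /= nu.sum()
v, w = sinkhorn(mu, nu, K_heat)
plan = v[:, None] * K_heat * w[None, :]       # approximate transport plan
print("regularized cost:", (plan * M).sum())
```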
While applying the heat kernel is efficient, to embed m distributions with the entropy regularized Wasserstein distance is O(m 2 ) as all pairwise distances must be considered. Next we explore the linear time algorithms based on multiscale smoothing for compute the earth mover's distance. Let a (possibly randomized) transformation T map distributions on Ω to a set of multiscale bins b, T : µ(Ω) → b, these methods define a T such that Where the approximation, the randomness, and the transform depend on the exact implementation. All of these methods have two things in common, they smooth over multiple scales Indyk & Thaper (2003) presented one of the early methods of this type. They showed that by computing averages over a set of randomly placed grids at dyadic scales. These grids work well for images where the pixels form a discrete set of coordinates. Shirdhonkar & Jacobs (2008) showed how to generalize this work over images to more general bin types, more specifically they showed how wavelets placed centered on a grid could replace the averages in Indyk & Thaper (2003) . This work linked the earth mover's distance in the continuous domain to that of the discrete through wavelets in R d , showing that for some wavelets the EMD approximation was better than the previous grid method. In Diffusion EMD we generalize these wavelets to the graph domain using diffusion wavelets on the graph allowing Wasserstein distances with a geodesic ground distance. Kolouri et al. (2016) in Sliced Wasserstein distances showed how the Wasserstein distance could be quickly approximated by taking the Wasserstein distance along many one dimensional slices of the data. This can be thought of binning along one dimension where the cumulative distribution function is represented along n bins (one for each point) with each bin encompassing one more point than the last. Le et al. (2019) generalized the Sliced Wasserstein distance back to trees, giving a new method based on multi-level clustering where the data is partitioned into smaller and smaller clusters where the bins are the clusters. They demonstrated this method on high dimensional spaces where previous methods that had d 2 k bins at scale k were inefficiently using bins. By clustering based on the data, their ClusterTree performs well in high dimensions. However, by clustering they lose the convergence to the Wasserstein distance with Euclidean ground distance of previous methods for efficiency in high dimensions. Diffusion EMD uses smooth diffusion wavelets over the graph which can give geodesic distances unlike previous multiscale methods. Furthermore, Diffusion EMD selects a number of bins that depends on the rank of the data at each dyadic scale, which can lead to smaller representations depending on the data. This is summarized in Table 2 , which details these differences. In this section we present two algorithms for computing the Diffusion EMD, the Chebyshev approximation method and the interpolative decomposition method. The Chebyshev method is more effective when the number of distributions is relatively small. The interpolative decomposition method is more effective when the number of distributions is large relative to the size of the manifold. We also detail how to subsample the normed space based on the spectrum of P which can be approximated quickly. First we define the approximate rank up to precision δ of a matrix A ∈ R n×n as: Where σ i (A) is the ith largest singular value of the matrix A. 
The approximate ranks of dyadic powers of P 2 k are useful for determining the amount of subsampling to do at each scale either after an application of the Chebyshev method or during the computation of (approximate) dyadic powers of P . We note that based on the density of singular values of P , which is quickly computable using the algorithm presented in Dong et al. (2019) , the approximate rank of all dyadic powers of P can be calculated without computing powers of P . Chebyshev Approximation of Diffusion EMD We first note that P 2 k can be computed spectrally using a filter on its eigenvalues. Let M = D 1/2 P D −1/2 be the symmetric conjugate of P . Then M is symmetric and has eigenvalues lying in the range −1 ≤ λ 0 ≤ λ 1 ≤ . . . ≤ λ n ≤ 1. M can be decomposed into U ΣU T for orthonormal U and Σ a diagonal matrix of [λ 0 , λ 1 , . . . , λ n ]. We then express P 2 k as a filter on the eigenvalues of the M : We compute the first J Chebyshev polynomials of P j µ then use the polynomial filter on the eigenvalues h(σ) = σ 2 k to compute diffusions of the distributions, and reweight these diffusion bins as specified in Algorithm 1. This algorithm requires J sparse matrix multiplications of P and µ. For a total time complexity ofÕ(Jmn) when P is sparse. Interpolative Decomposition Approximation of Diffusion EMD Our second method proceeds by directly approximating multiplication by the matrix P 2 k . The naive solution of computing dyadic powers of P on the original basis quickly leads to a dense n × n matrix. Coifman & Maggioni (2006) observed that P 2 k is of low rank, and therefore multiplication by P 2 k can be approximated on a smaller basis. In that work they introduced a method of iteratively reducing the size of the basis using rank-revealing pivoted sparse QR decomposition, then computing P 2 k+1 = P 2 k P 2 k . By reducing the size of Where Q and R are decomposed as the following with Q 1 is a m × k matrix, Q 2 is m × (n − k), etc. the basis and orthogonalizing the basis is kept small and P 2 k is kept sparse on its corresponding basis. For sparse P this gives an algorithm that is O(n log 2 n) for computing dyadic powers of a sparse matrix. Our algorithm follows the same lines as theirs however we use interpolative decomposition to reduce the size of the basis, and do not do so at early stages where the size of the basis may still be a significant fraction of n. Interpolative decomposition allows for an easy interpretation: we are selecting a well spaced set of points that acts as a basis for higher levels of P 2 k . For more details on interpolative decomposition we refer the reader to Liberty et al. (2007) ; Bermanis et al. (2013) . The algorithm in Coifman & Maggioni (2006) also computes powers of P 2 k in the original basis by projecting the multiplication in the reduced basis back to the original. We do not bother computing P 2 k µ in the original basis, but instead use the reduced basis directly to compare distributions against. Denote the basis at level k as φ k , which is really a subset of φ k−1 , then φ k can be thought of as the centers of the bins at level k of our multiscale embedding. We only compare distributions at these representative centers rather than at all datapoints. This is summarized in Algorithm 4, creating an embedding for each distribution whose length is dependent on the rank of P . Sampling the Diffusions at Larger Scales Interpolative decomposition is an integral part of Algorithm 4. 
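The following is a small sketch of how interpolative decomposition can pick "bin centers" at a given scale: it selects a subset of columns of $P^{2^k}$ that spans the matrix up to a rank tolerance $\delta$. This mirrors the idea behind Algorithm 4 but is not that algorithm; it forms dense matrix powers (which Algorithm 4 avoids), and the toy operator and names are ours.

```python
# Select representative points per dyadic scale with interpolative decomposition.
import numpy as np
import scipy.linalg.interpolative as sli
from scipy.spatial.distance import cdist

rng = np.random.default_rng(4)
X = rng.uniform(size=(400, 2))
K = np.exp(-cdist(X, X, "sqeuclidean") / 0.02)
P = K / K.sum(1, keepdims=True)                  # simple row-stochastic operator

delta = 1e-6                                     # rank threshold per scale
for k in range(0, 9, 2):
    Pk = np.linalg.matrix_power(P, 2 ** k)       # dense toy; Alg. 4 avoids this
    rank, idx, proj = sli.interp_decomp(Pk.astype(np.float64), delta, rand=True)
    centers = idx[:rank]                         # indices of representative points
    print(f"scale 2^{k}: {rank} centers kept out of {P.shape[0]}")
```

As the scale grows, the approximate rank of $P^{2^k}$ drops, so fewer centers are needed, which is exactly the behavior exploited to shrink the embeddings.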
However, it can also be useful in reducing the size of the distribution representations of Algorithm 1 which uses the Chebyshev approximation. Without sampling the Chebyshev algorithm centers a bin at every datapoint at every scale. However, for larger scales this is unnecessary, and we can take advantage of the low rank of P 2 k with interpolative decomposition. In practice, we use the top 6 largest scales, in fact the distance shows very little sensitivity to small scales (see Fig. 5(d) ). In fact, this representation can be compressed without a significant loss in performance as shown in Fig. 5(b,c) . We apply the rank threshold to the spectrum of P 2 k , which can be computed from the density of states algorithm described above, to determine the number and location of centers to keep. With δ = 10 −6 this reduces the number of centers from 60, 000 to < 4, 000 with a small loss in performance. Rather than selecting a fixed number of centers per scale as in previous methods, this allows us to vary the number of centers per scale as necessary to preserve the rank at each scale. Time Complexity Here, we assume that P has at mostÕ(n) = O(n log c n) nonzero entries. In this case, interpolative decomposition in Algorithm 2 has complexity O(k · m · n log n), and the randomized version in Algorithm 3 has complexity O(km + k 2 n) (Liberty et al., 2007) . Algorithm 1 has complexity O(Jmn log c n) which is dominated by calculation of the J powers of P j µ. The computation time of Algorithm 4 is dominated by the first randomized interpolative decomposition step. When we set γ = O(log c n), which controls when the application of RandID occurs, this gives a total time complexity for Algorithm 4 ofÕ(mn) = O(n log 2c n + mn log b n), where b and c are constants controlled by γ. As γ increases, b increases and c decreases. The first term is dominated by the first interpolative decomposition setup, and the second is the calculation of µ (2 k ) . It follows that when m n, it is helpful to set γ larger for a better tradeoff of b and c. In practice we set γ = n/10 which seems to offer a reasonable tradeoff between the two steps. For small m in practice Algorithm 1 is significantly faster than Algorithm 4. In this section we will develop the computation of the gradient of the Diffusion EMD W α,K (X i , X j ), we will show that its gradient depend only on the gradient of the Gaussian kernel matrix K which is easy to compute. We start by recalling Algorithm 3 Randomized Interpolative Decomposition: RandID(A, j, k, l) Input: m × n matrix A, random sample sizes j, l, and number of subsamples k s.t. min{m, n} > j > k > l. Find the k indices ofB 2 that are approximately equal to columns in W 2 , [i 1 , i 2 , . . . , some basic facts about the gradient of matrices, 1. For a matrix P and a vector µ we have ∇ P µ 1 = sign(P µ)µ T ∇P . 2. For an invertible matrix P , the gradient of the inverse of P is ∇(P −1 ) = −P −1 ∇(P )P −1 . 3. Let n ∈ N * the gradient of P n is Using the definition of the Diffusion EMD, we have The last formula tells us that the gradient of W α,K (X i , X j ) depends only on the gradient of the powers of P which in turn can be expressed in terms of the gradients P . 
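Because every step above (the Gaussian kernel, the two normalizations, dyadic powers of $P$, and the final $L^1$ norm) is an (almost everywhere) differentiable tensor operation, the distance can also be differentiated by automatic differentiation rather than by hand. The following minimal sketch assumes a PyTorch re-implementation on a small dense graph; it is not part of the DiffusionEMD package, and the bin construction follows Eqs. 6-8 as stated in the main text.

```python
# Autodiff sketch: gradient of the Diffusion EMD w.r.t. the point coordinates.
import torch

def diffusion_emd(X, ind_i, ind_j, eps=0.05, K=4, alpha=0.5):
    Kmat = torch.exp(-torch.cdist(X, X) ** 2 / eps)       # Gaussian kernel, Eq. 4
    q = Kmat.sum(1)
    M = Kmat / (q[:, None] * q[None, :])                  # density normalization
    P = M / M.sum(1, keepdim=True)                        # Markov operator, Eq. 5
    mu = torch.stack([ind_i / ind_i.sum(), ind_j / ind_j.sum()], dim=1)
    scales, Pk = [], P
    for _ in range(K + 1):                                # mu^(2^k), k = 0..K
        scales.append(Pk @ mu)
        Pk = Pk @ Pk
    bins = [2 ** (-(K - k) * alpha) * (scales[k + 1] - scales[k]) for k in range(K)]
    bins.append(scales[K])
    emb = torch.cat(bins, dim=0)                          # one column per dataset
    return (emb[:, 0] - emb[:, 1]).abs().sum()            # Eq. 7

X = torch.rand(150, 2, requires_grad=True)
ind_i = (X[:, 0] < 0.4).float().detach()
ind_j = (X[:, 0] > 0.6).float().detach()
w = diffusion_emd(X, ind_i, ind_j)
w.backward()
print(X.grad.shape)   # gradient of the distance w.r.t. every point coordinate
```

The rest of this subsection carries out the same gradient explicitly via the chain rule.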
In this section we develop the computation of the gradient of the Diffusion EMD W_{α,K}(X_i, X_j) and show that its gradient depends only on the gradient of the Gaussian kernel matrix K, which is easy to compute. We start by recalling some basic facts about gradients of matrices:

1. For a matrix P and a vector µ, ∇||Pµ||_1 = sign(Pµ) µ^T ∇P.
2. For an invertible matrix P, the gradient of the inverse is ∇(P^{-1}) = −P^{-1} ∇(P) P^{-1}.
3. For n ∈ N*, the gradient of P^n is ∇(P^n) = Σ_{i=0}^{n−1} P^i ∇(P) P^{n−1−i}.

Using the definition of the Diffusion EMD as a weighted sum over scales of L¹ norms of differences of diffused densities, the gradient of W_{α,K}(X_i, X_j) depends only on the gradients of the powers of P, which in turn can be expressed in terms of ∇P by fact 3. Since M = D^{1/2} P D^{-1/2}, we have P = D^{-1/2} M D^{1/2}; therefore, in order to compute ∇P we need to compute ∇M, and since M can be written directly in terms of the kernel K and its degree matrix D, the gradient of M can be written only in terms of the gradient of K. This shows that the gradient of the Diffusion EMD can be expressed in terms of the gradient of the Gaussian kernel alone, which may make it useful in future applications where fast gradients of the earth mover's distance are necessary. For example, in distribution matching, previous methods take gradients through the Sinkhorn algorithm (Frogner et al., 2015), which scales as O(n²); in this setting, computing gradients of Diffusion EMD would be O(n). What makes this possible is a smooth kernel and smooth density bins over the graph. In contrast to Diffusion EMD, multiscale methods that use non-smooth bins have non-smooth gradients with respect to K, and so are not useful for gradient-descent-type algorithms. We leave further investigation of the gradients of Diffusion EMD to future work.

We ran all experiments on a 36-core Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz with 512GB of RAM. We note that our method is extremely parallelizable, consisting only of matrix operations, so a GPU implementation could potentially speed up computation further. For fair comparison we stick to CPU implementations and leave GPU acceleration to future work. We provide custom implementations of all methods in the DiffusionEMD package, which is available on GitHub. We base our Sinkhorn implementation on the one in the Python Optimal Transport package (POT) (Flamary et al., 2021), and our tree methods on the scikit-learn nearest-neighbor trees (Pedregosa et al., 2011).

P@10. P@10 measures the overlap between the 10 nearest neighbors returned by an EMD method and the ground-truth 10 nearest neighbors, which are calculated using exact OT with the geodesic ground distance. P@10 is useful for assessing the quality of distance calculations locally, whereas the next metric measures quality more globally. This may be particularly important in applications where only the quality of nearest neighbors matters, as in Sect. 4.6, or in applications of Wasserstein nearest-neighbor classification (Indyk & Thaper, 2003; Backurs et al., 2020).

Spearman-ρ. Spearman's rank correlation coefficient (Spearman-ρ) measures the similarity between the orders of two rankings. This is useful in our setting because, while the 1-Wasserstein and 2-Wasserstein distances may have large distortion as metrics, they will still yield a high Spearman-ρ. For two rankings of n items, where d_i is the difference in the ranks assigned to item i, it is defined as

    ρ = 1 − 6 Σ_i d_i² / (n(n² − 1)).

This is useful for quantifying the global ranking of distributions in the graph, which reflects the more global behavior of EMD methods. Since low-curvature manifolds locally look very similar to R^d, where d is the intrinsic dimension of the manifold, a Euclidean ground distance may be acceptable for distributions that are close together on the manifold; however, for distributions that are far apart according to a geodesic ground distance, a Euclidean approximation may perform poorly.
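For concreteness, a small sketch of the two evaluation metrics as computed from precomputed distance matrices; the function names and the assumption of square distance matrices with zero diagonals are illustrative.

```python
import numpy as np
from scipy.stats import spearmanr

def precision_at_10(D_approx, D_exact):
    """Mean overlap of the 10 nearest neighbors under an approximate EMD
    versus the exact EMD (both given as square distance matrices with a
    zero diagonal, so index 0 of the argsort is the point itself)."""
    nn_a = np.argsort(D_approx, axis=1)[:, 1:11]
    nn_e = np.argsort(D_exact, axis=1)[:, 1:11]
    overlaps = [len(set(a) & set(e)) / 10.0 for a, e in zip(nn_a, nn_e)]
    return float(np.mean(overlaps))

def mean_spearman_rho(D_approx, D_exact):
    """Average Spearman rank correlation between each row of the approximate
    and exact distance matrices (per-distribution ranking agreement)."""
    rhos = [spearmanr(row_a, row_e).correlation
            for row_a, row_e in zip(D_approx, D_exact)]
    return float(np.mean(rhos))
```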
In Fig. 2 we used a cluster tree with 4 levels of 4 clusters and a quad tree of depth 4. These are mostly for illustrative purposes, so no parameter search was performed for ClusterTree or QuadTree. As the number of trees grows, ClusterTree and QuadTree converge toward a smooth solution. ClusterTree retains a number of bumps caused by the number-of-clusters parameter that do not disappear even with many trees, while QuadTree converges to a smooth solution as the number of trees gets large in this low-dimensional example. On the Swiss roll example we compared Diffusion EMD to ClusterTree, QuadTree, and the convolutional Sinkhorn method; here we describe the parameter setup for each of these methods in Figures 3, 4, and 5. For Fig. 3 we chose parameters that approximately matched the runtime of each method on this dataset to that of Diffusion EMD with default settings using the Chebyshev algorithm. This amounted to 150 cluster trees of depth 8 with 3 clusters, and 20 quad trees of depth 4. We noticed that the speed of QuadTree degrades exponentially with dimension. For Fig. 4 we performed a grid search over parameters for QuadTree and ClusterTree to find the best-performing settings of depth and number of clusters. We searched over depth d ∈ [2 . . . 10] and, for ClusterTree, number of clusters C ∈ [2 . . . 5]. We found that a depth of 4 for QuadTree and 3 clusters with depth 5 for ClusterTree gave the best tradeoff of performance versus time on this task. These parameters were selected with the number of trees fixed at 10. We then fixed this depth and number of clusters for subsequent experiments, varying the number of trees in the range [1 . . . 100] for QuadTree and [1 . . . 15] for ClusterTree; the maximum of each range exceeded the time allotment of Diffusion EMD in both cases. For the convolutional Sinkhorn method we fixed the parameter t = 50 for H_t and varied the maximum number of iterations in the range [1 . . . 100]. This took orders of magnitude longer than the multiscale methods in all cases, with performance comparable to Diffusion EMD on the P@10 metric and better performance on the Spearman-ρ metric. We note that a multi-step refinement scheme similar to the one implemented in Backurs et al. (2020) could be used here.

To construct the Spherical MNIST dataset (Cohen et al., 2017), we projected the 70,000 images of classical MNIST onto a spherical graph containing 25,088 nodes. Every pixel of each MNIST image was projected onto four nodes of the northern hemisphere, and these projections were treated as signals over the spherical graph. We ran Diffusion EMD, ClusterTree, and QuadTree on this dataset to generate embeddings in L¹ space. Diffusion EMD was run with the diffusion wavelet approximations described in the paper, using an epsilon value of 0.001. To best compare accuracy, we chose parameters for QuadTree and ClusterTree that approximately matched the runtime of Diffusion EMD: for QuadTree, a depth of 20 and 10 clusters; for ClusterTree, a depth of 5. However, we note that both ClusterTree and QuadTree exhibited a variation of 10-30 minutes between runs with the same parameters, likely due to differences in random initialization. We did not run convolutional Sinkhorn on this dataset, as the projected runtime exceeded 24 hours. We calculated a ground-truth EMD between 100 of the projected MNIST digits using the exact EMD implementation of Python Optimal Transport.
More accurate comparisons might be obtained with more points, but computing even this 100 × 100 distance matrix took over 24 hours, necessitating severe subsampling and once again highlighting the need for fast and accurate approximations of EMD. For each algorithm, we obtained an accuracy score by running a kNN classifier on the embeddings with an even train/test split, considering only the nearest neighbors. We then computed the Spearman-ρ and P@10 scores between each method and the exact EMD ground truth.

One hundred sixty-eight patients with moderate to severe COVID-19 (Marshall et al., 2020) were admitted to YNHH and recruited to the Yale Implementing Medical and Public Health Action Against Coronavirus CT (IMPACT) study. From each patient, blood samples were collected across timepoints to characterize cellular responses and capture the spectrum of disease. In total, the composition of peripheral blood mononuclear cells (PBMCs) was measured by myeloid-centric flow cytometry on 210 samples. Finally, clinical data corresponding to each biosample timepoint was extracted from the electronic health record to allow for clinical correlation of findings, as shown previously. In the main analysis, poor or adverse outcomes were defined by patient death, while good outcomes were defined by patient survival. In order to analyze the 22 million cells generated in this study, we performed k-means clustering with k set to 27,000 and took the resulting cluster centroids as a summarization of the cellular state space, as done previously in Kuchroo et al. (2020).

To test the quality of the organization of patients produced by each EMD method, we compute the Laplacian smoothness on the 10-nearest-neighbor graph created for each method. The Laplacian smoothness of a function f on the nodes is f^T L f, where L = D − A is the (combinatorial) Laplacian of the adjacency matrix A. This is equivalent to computing the sum

    f^T L f = Σ_{(i,j) ∈ E} (f_i − f_j)²,

where E is the edge set of the kNN graph. This measures the sum of squared differences of f along the edges of the kNN graph, i.e., the smoothness of f over the graph.

Various cell types have been shown to be associated with mortality from COVID-19 infection, including CD16+ neutrophils, T cells, and non-classical monocytes. Previously, T cell-to-CD16+ neutrophil ratios have been reported to be predictive of outcome, with low ratios predicting mortality and high ratios predicting survival (Li et al., 2020). Beyond ratios, the absolute counts of T cells (Chen & Wherry, 2020) and CD16+ neutrophils (Meizlish et al.) have been shown to be associated with mortality. Finally, non-classical monocytes have also been shown to be enriched in patients with more severe outcomes (Pence, 2020), further supporting our findings.
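Returning to the Laplacian smoothness score defined above, the sketch below is a minimal computation of f^T L f on a kNN graph of patient embeddings, assuming an unweighted symmetrized graph; the function name and the use of scikit-learn's kneighbors_graph are illustrative choices rather than the exact evaluation code.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def laplacian_smoothness(embedding, f, k=10):
    """Compute f^T L f on the kNN graph of a patient embedding, i.e. the sum
    of squared differences of the label f across edges of the graph."""
    A = kneighbors_graph(embedding, n_neighbors=k, mode='connectivity')
    A = ((A + A.T) > 0).astype(float)                                # symmetrize the kNN graph
    L = np.diag(np.asarray(A.sum(axis=1)).ravel()) - A.toarray()     # L = D - A
    return float(f @ L @ f)
```

Lower values indicate that the outcome label f varies slowly across neighboring patients, i.e., a better organization of the patient manifold.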
Wasserstein generative adversarial networks
Scalable nearest neighbor search for optimal transport
Iterative Bregman projections for regularized transportation problems
Multiscale data sampling and function extension
Wasserstein barycentric coordinates: Histogram regression using optimal transport
Quantifying the effect of experimental perturbations at single-cell resolution
Uncovering axes of
Multiscale PHATE exploration of SARS-CoV-2 data reveals multimodal signatures of disease
Diffusion kernels on statistical manifolds
Tree-sliced variants of Wasserstein distances
Topics in Metric Approximation
The mixed Lipschitz space and its dual for tree metrics
Hölder-Lipschitz norms and their duals on spaces with semigroups, with applications to Earth mover's distance
Randomized algorithms for the low-rank approximation of matrices
Learning transport cost from subset correspondence
Longitudinal analyses reveal immunological misfiring in severe COVID-19
T cell responses in patients with COVID-19
POT: Python optimal transport
Sliced Wasserstein kernels for probability distributions
Predictive values of neutrophil-to-lymphocyte ratio on disease severity and mortality in COVID-19 patients: a systematic review and meta-analysis
A neutrophil activation signature predicts critical illness and mortality in COVID-19
Severe COVID-19 and aging: are monocytes the key?
Sharp explicit lower bounds of heat kernels