key: cord-0231678-d5fehuv2
title: The Interplay of Demographic Variables and Social Distancing Scores in Deep Prediction of U.S. COVID-19 Cases
authors: Tang, Francesca; Feng, Yang; Chiheb, Hamza; Fan, Jianqing
date: 2021-01-06
journal: nan
DOI: nan
sha: 46b053c7126c1603101f46e4bb6e411f790a45fc
doc_id: 231678
cord_uid: d5fehuv2

With the severity of the COVID-19 outbreak, we characterize the nature of the growth trajectories of counties in the United States using a novel combination of spectral clustering and the correlation matrix. As the U.S. and the rest of the world experience a severe second wave of infections, the importance of assigning growth membership to counties and understanding the determinants of the growth is increasingly evident. Subsequently, we select the demographic features that are most statistically significant in distinguishing the communities. Lastly, we effectively predict the future growth of a given county with an LSTM using three social distancing scores. This comprehensive study captures the nature of counties' growth in cases at a very micro level using growth communities, demographic factors, and social distancing performance, helping government agencies utilize known information to decide which counties to target resources and funding to.

The recent infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has overtaken the world as the largest pandemic in decades. The World Health Organization (WHO) declared it a pandemic on 03/11/2020; as of 01/04/2021, there have been more than 85 million cases and more than 1.84 million deaths worldwide. Forecasting the growth of confirmed cases and the locations of future outbreaks has been a persistent challenge in the public health and statistical fields. With the gravity and urgency of the global health crisis, many recent works including Kucharski et al.
(2020) have attempted to model the growth in cases in various countries. Most of the literature on statistical modeling of the data focuses on the reproduction number. However, this value is constantly evolving and is not always a valuable measurement to build prediction models with. Hong and Li (2020) proposed a Poisson model with time-dependent transmission and removal rates to estimate a time-dependent disease reproduction number. Betensky and Feng (2020) studied the impact of incomplete testing on the estimation of dynamic doubling time. Ultimately, we need to examine the underlying features contained in the time series data in order to extract valuable insights into the unique nature of the spread of COVID-19. As the number of deaths is at least a two-week lagging indicator compared to the number of confirmed cases, we only look at the latter. More importantly, the matrix of the number of deaths per county would be very sparse at the initial stage, making any analysis more difficult. Our goal is to characterize and project the disease progression, given the limitations of public data, from a theoretical yet practical perspective.

Stochastic block models (SBMs), first developed by Holland et al. (1983), have long been studied as a powerful statistical tool in community detection, where the nodes or members are partitioned into latent groups. SBMs have been employed to study social networks (Wasserman and Anderson, 1987), brain connectivity (Rajapakse et al., 2017), protein signaling networks (Chen and Yuan, 2006), and many others. Under an SBM, the nodes within the same group usually have a higher probability of being connected than those from different groups. The difficult task is to recover these connectivities and the communities based on one observation, which in our case is a snapshot of the changes in the number of cases up to the most recent time point.
In more recent years, spectral clustering (Balakrishnan et al., 2011; Rohe et al., 2011; Jin, 2015a; Lei and Rinaldo, 2015) has arisen as one of the most popular and widely studied approaches to recovering these communities. Conventional spectral clustering algorithms mostly involve two steps: eigen-decompose the adjacency or Laplacian matrix of the data, and then apply a clustering algorithm, such as k-means, to the eigenvectors that correspond to the largest or smallest eigenvalues. There is extensive literature on such procedures, for instance von Luxburg (2007), Ng et al. (2001), and Abbe (2017). In this study, we introduce the unique procedure of conducting spectral clustering on the sample Pearson correlation coefficient matrix directly and compare its clusters to those of the standard Laplacian embedding. This complements the approach of Brownlees et al. (2020+), which is based on a latent covariance model for financial return data. Gilbert et al. (2020) used agglomerative clustering, an unsupervised learning method, on preparedness and vulnerability data in African countries using self-reported reports of capacity and indicators. While a comprehensive study, it only considers the possible exposures to travelers from China. Using a different dataset, Hu et al. (2020) clustered the data from China by implementing a simple k-means clustering directly on various features of the provinces/cities and not on the eigenvectors of the correlation matrix. It also does not take into account possible explanatory features that are not directly related to the number of cases, and it fails to predict for provinces that have yet to have cases. The data processing of some existing approaches also does not standardize and shift the data in a way that aligns with the nature of COVID-19. Once the communities are found, the subsequent part uncovers the statistically significant demographic features, pre-existing in the counties, that could largely explain a county's community membership.
Most of the existing research on salient demographic information focuses on age-related features and the presence of co-morbidities or underlying health conditions (e.g., Dowd et al., 2020; Lippi et al., 2020). In reality, what influences how the disease progresses in a county is most likely a confluence of variables, not one or two prevailing ones. Some studies also examine how various demographic determinants affect how well a county carries out social distancing (Im et al., 2020), but offer little or no connection to the nature of the growth curve. The variables extracted in the feature analysis part are then used in conjunction with time series of social distancing scores from Unacast (2020) to fit a recurrent neural network, ultimately predicting the progression of confirmed cases in a given county. It is important to note that for this prediction section, we use the period from the start of the pandemic until 07/20/2020, as this traces the first large spike in cases in the U.S. and the subsequent plateau. This gives a long enough time-series sample; including much more recent data would capture the second large wave of the pandemic, which runs counter to the objective of capturing the growth trajectory of a county's peak and fall. Unacast has created a scoreboard of social distancing measures from mobile device tracking data, where a device is assigned to a specific county based on the location where the device spent the most time. The neural network prediction takes these static, inherent county variables, the community membership (the clustering results), and the social distancing data to predict the future growth of confirmed cases. There have been several studies that predict, estimate, or model the growth curve of the disease, including Fanelli and Piazza (2020) on the cases in Italy, Roda et al. (2020) on the cases in Wuhan, and other works on the cases in China.
Figure 1: Pipeline of this study's three-part analysis of COVID-19. COVID-19 time series data is first used to perform community detection, clustering counties into several communities. Then, demographic features are incorporated to extract the most significant features that distinguish the growth communities. Finally, social distancing metric time series are added to the results of the previous two parts to carry out the prediction of COVID-19 cases for new counties.

In addition, deep learning has been applied to COVID-19 research, such as Wang and Wong (2020), which detects positive cases through chest scans. Other studies, such as Zheng et al. (2020b), investigate when patients are most infectious by using a deep learning hybrid model that combines the epidemiological SIR model with an LSTM network. However, the literature on COVID-19 still lacks a comprehensive county-level approach that creates a throughline of the pandemic: the historical growth curve of confirmed cases, a characterization of this growth via clustering, the significant explanatory demographic features, and finally, social distancing measures that give insight into the nature of the future growth trajectory, as displayed in Figure 1. Table 1 also contains the specific time period, the number of counties N, and the data source used for each part of the paper (Part I: community detection; Part II: extraction of significant features; Part III: prediction) as outlined in Figure 1.

Table 1: Time period, number of counties, and data source used for each part of the paper.

The first part of this paper finds potential communities among the counties, in which clusters share similar growth patterns. To accomplish this, two fundamental concepts are necessary: the stochastic block model and spectral clustering. The former is a generative model through which community memberships are formed, and the latter is a methodology often utilized to recover these memberships.
Compared to traditional clustering methods, spectral clustering has been shown to be effective in both statistical and computational efficiency (Abbe, 2017; Abbe et al., 2020). Our approach applies spectral clustering to the correlation matrix, instead of the commonly used adjacency matrix or Laplacian matrix. The goal is to recover the county membership matrix embedded in the correlations of each county's logarithmic daily cumulative number of cases. For each county, consider a daily time series of the cumulative number of confirmed cases, where we use curve registration (the time origin is set as the day on which the number of cases reaches 12 for a particular county). This curve registration is important as it takes into account the fact that counties may have different COVID-19 outbreak starting times. We denote w_{i,t} = log(x_{i,t}) as the logarithmic cumulative number of cases of county i on the t-th day since the county hit 12 or more cases. Then, we use the Pearson correlation as a similarity measure, defined as

R_{ij} = Σ_{t=1}^{T_{ij}} (w_{i,t} − w̄_i)(w_{j,t} − w̄_j) / [Σ_{t=1}^{T_{ij}} (w_{i,t} − w̄_i)² · Σ_{t=1}^{T_{ij}} (w_{j,t} − w̄_j)²]^{1/2},

where T_{ij} = min(T_i, T_j), with T_i and T_j being the number of days counties i and j have had 12 or more cases, respectively, and w̄_i denotes the mean of w_{i,1}, ..., w_{i,T_{ij}}. The sample correlation matrix R ∈ R^{n×n} then contains the pairwise correlations among all n counties. The logarithmic cumulative case counts are used to align with the exponential growth pattern implied by popular epidemic models. For example, we can distinguish between a faster exponential growth function such as exp(2t) and a slower one such as exp(t/2). Another commonly used network representation is the adjacency matrix A, which records whether two counties are connected and is often constructed from a similarity measure like the Pearson correlation or a mutual information score. If the graph is undirected, where each edge that connects two nodes is bidirectional, A is symmetric. The two most common types of similarity graphs are the ε-neighborhood graph and the k-nearest neighbor graph.
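As a concrete sketch of the registration-and-correlation step described above, the pairwise correlations over the overlapping window T_ij = min(T_i, T_j) can be computed as follows. The three counties and their case counts are hypothetical toys; the 12-case threshold is from the text.

```python
import numpy as np

def register(cases, min_cases=12):
    """Curve registration: start a county's series on the first day
    its cumulative count reaches min_cases, then take logs."""
    cases = np.asarray(cases, dtype=float)
    start = int(np.argmax(cases >= min_cases))
    if cases[start] < min_cases:
        return np.array([])  # county never reaches the threshold
    return np.log(cases[start:])

def pairwise_correlation(series):
    """Sample Pearson correlation over the overlapping window
    T_ij = min(T_i, T_j) for every pair of registered series."""
    n = len(series)
    R = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            T = min(len(series[i]), len(series[j]))
            wi, wj = series[i][:T], series[j][:T]
            R[i, j] = R[j, i] = np.corrcoef(wi, wj)[0, 1]
    return R

# Toy example: three counties with different exponential growth rates.
t = np.arange(30)
counties = [12 * np.exp(0.20 * t), 12 * np.exp(0.10 * t), 12 * np.exp(0.15 * t)]
series = [register(c) for c in counties]
R = pairwise_correlation(series)
```

Note that for clean exponential curves the log series are linear in t, so all pairwise correlations are near one; on real case counts the off-diagonal entries spread out and carry the community structure.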
As we use the sample correlation as the similarity measure, an ε-neighborhood adjacency A_1 is defined as follows: counties i and j (i ≠ j) are connected, (A_1)_{ij} = 1, whenever their dissimilarity 1 − R_{ij} is at most ε, and (A_1)_{ij} = 0 otherwise. A k-nearest neighbor adjacency A_2 is defined as follows: (A_2)_{ij} = 1 whenever county j is among the k nearest neighbors of county i or vice versa, and (A_2)_{ij} = 0 otherwise, where the nearest neighbors are found with respect to R_{ij}. Depending on the parameters ε and k one chooses for A_1 and A_2, respectively, a significant amount of information can be lost in the process because of the thresholding operation. However, this operation also filters out many spurious correlations. Unlike the sparse A_1 and A_2, R retains all of the pairwise similarities between counties, which sheds more light on the within-group and between-group relationships. The matrices R, A_1, and A_2 are critical because they can help us recover Θ, an n × K membership matrix that reflects which community each county belongs to, where K is the number of communities. Letting Z_i ∈ {1, ..., K} be the community that county i belongs to, the i-th row of Θ has exactly one 1, in column Z_i, and 0 elsewhere. We estimate Θ under an SBM, where the probability that two counties are connected depends only on the memberships of these two counties. An SBM denoted by G(n, B, Θ) has n nodes and K communities and is parameterized by Θ and B, the K × K symmetric connectivity matrix. Essentially, B contains the inter- and intra-community connection probabilities: the probability of an edge between counties i and j is B_{Z_i Z_j}. The objective is to obtain an accurate estimate Θ̂ of Θ from an observed adjacency matrix A that is modeled as G(n, B, Θ). This yields a recovery of the partitions G_k := {i : Z_i = k, i = 1, ..., n} by Ĝ_k = {i : Ẑ_i = k, i = 1, ..., n}, k = 1, ..., K, up to a permutation of the cluster labels, where Ẑ_i indicates the location of the 1 in the i-th row of Θ̂. The population matrix P ∈ R^{n×n}, where P_{ij} is the probability that counties i and j are connected, is naturally expressed as P = ΘBΘ^T.
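The two adjacency constructions above can be sketched in a few lines. The exact ε thresholding rule is an assumption here: we connect counties whose correlation dissimilarity 1 − R_ij is at most ε, which is one standard reading of an ε-neighborhood graph built from a similarity matrix.

```python
import numpy as np

def eps_adjacency(R, eps):
    """epsilon-neighborhood graph: connect i and j (i != j) when the
    dissimilarity 1 - R_ij is at most eps (assumed thresholding rule)."""
    A = ((1.0 - R) <= eps).astype(int)
    np.fill_diagonal(A, 0)  # no self-loops
    return A

def knn_adjacency(R, k):
    """k-nearest-neighbor graph w.r.t. R_ij, symmetrized so that an
    edge exists if i is among j's k nearest neighbors or vice versa."""
    n = R.shape[0]
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        order = [j for j in np.argsort(-R[i]) if j != i]  # most similar first
        A[i, order[:k]] = 1
    return np.maximum(A, A.T)  # symmetrize: undirected graph
```

Both functions keep R intact, so one can compare the sparse graphs against clustering on R directly, as the paper does.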
Spectral clustering has been a popular choice for community detection (Rohe et al., 2011; Jin, 2015a; Lei and Rinaldo, 2015). The central idea is to relate the eigenvectors of the observable adjacency matrix A to those of P = ΘBΘ^T, which is not observed. This is accomplished by expressing A as a perturbation of its expected value: A = E[A] + (A − E[A]), where E[A] = P − diag(P). If U ∈ R^{n×K} collects the eigenvectors of P corresponding to its K nonzero eigenvalues, then the columns of U span the same linear space as the columns of P (ignoring diag(P)). Additionally, P has the same column space as Θ. Now, letting Û be the eigenspace corresponding to the K largest absolute eigenvalues of A, Û is a consistent estimate of U, or of the column space of Θ, under some mild conditions. To resolve the ambiguity created by rotation, the k-means algorithm is applied to the normalized rows of Û to identify the community memberships (Rohe et al., 2011; Lei and Rinaldo, 2015). Instead of examining the eigenvectors of A, spectral graph theory has long studied graph Laplacian matrices as a tool for spectral clustering. The symmetric Laplacian matrix is defined as follows: letting D = diag(d_1, ..., d_n) be the diagonal degree matrix with d_i = Σ_{j=1}^n A_{ij}, a popular definition of a normalized, symmetric Laplacian matrix is L = I − D^{−1/2} A D^{−1/2}. When clustering with L, one takes the eigenvectors corresponding to the smallest eigenvalues in absolute value. In our context, A can be taken as either A_1 or A_2 as outlined in subsection 2.1. As there are no exact rules for choosing the parameters ε and k of A_1 and A_2, respectively, clustering with L, which depends on the adjacency matrix, may be less than ideal. It is also an added, often computationally cumbersome step. Instead, we cluster directly on the similarity matrix R, the sample correlation matrix. Algorithm 1 delineates the detailed steps of this approach. The classic spectral clustering procedure with L, used as a benchmark, is outlined in the Supplementary Material.
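Clustering directly on R as just described can be sketched as follows. The tiny deterministic k-means (farthest-point initialization) is a dependency-free stand-in for any standard k-means implementation, and the block-structured correlation matrix is a hypothetical toy with two communities of three counties each.

```python
import numpy as np

def _kmeans(X, K, iters=100):
    """Tiny deterministic k-means (farthest-point init); a stand-in
    for any standard k-means implementation."""
    centers = [X[0]]
    for _ in range(1, K):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.stack(centers)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(K):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return labels

def spectral_cluster_corr(R, K):
    """Spectral clustering on the sample correlation matrix: keep the
    K eigenvectors with largest absolute eigenvalues, row-normalize
    to unit norm, then k-means on the rows."""
    eigvals, eigvecs = np.linalg.eigh(R)
    top = np.argsort(-np.abs(eigvals))[:K]
    U = eigvecs[:, top]
    norms = np.linalg.norm(U, axis=1, keepdims=True)
    U_norm = U / np.where(norms == 0, 1.0, norms)
    return _kmeans(U_norm, K)

# Toy block-correlation matrix: two communities of three counties each.
R_toy = 0.9 * np.ones((6, 6))
R_toy[:3, :3] = 0.98
R_toy[3:, 3:] = 0.98
np.fill_diagonal(R_toy, 1.0)
labels = spectral_cluster_corr(R_toy, K=2)
```

On this toy matrix the second eigenvector of R carries the block contrast, so the row-normalized embedding separates the two groups cleanly.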
There are several methods for choosing the number of spiked eigenvalues in the context of factor models: the scree plot, eigen-gap, eigen-ratio, and adjusted correlation thresholding. As our method involves correlations, we apply adjusted correlation thresholding.

Algorithm 1: Spectral clustering on the sample correlation matrix
Input: sample correlation matrix R ∈ R^{n×n} and the number of clusters K.
1: Compute the K eigenvectors u_1, ..., u_K of R corresponding to the K largest eigenvalues in absolute value, and construct U ∈ R^{n×K} with these eigenvectors as columns.
2: Normalize the rows of U to have unit norm to get U_norm.
3: Cluster the rows of U_norm with k-means.
Return: partition Ĝ_1, ..., Ĝ_K of the nodes.

This method leads to K = 2, which roughly divides the counties into faster and slower growth communities. It also agrees with the choice that maximizes the eigen-gap. From now on, all clustering analysis is based on the unit-norm normalization of the eigenvectors. For later analysis (section 2.7), it is useful to define the clusters that contain the counties with the fastest and slowest growth. After the clusters are produced with Algorithm 1, for every community k we calculate the average exponential growth rate of the counties in that community. This is done by fitting the total number of cases of each county i on day t to an exponential curve through nonlinear least squares, obtaining the approximated growth rate r_i for county i. Then, we compare the average fitted growth rate r̄_k = (1/|Ĝ_k|) Σ_{i∈Ĝ_k} r_i and its standard error across clusters k = 1, ..., K. The fastest growth cluster is defined as argmax_k r̄_k and the slowest growth cluster as argmin_k r̄_k.

We use the COVID-19 (2019-nCoV) Data Repository by the Johns Hopkins Center for Systems Science and Engineering (CSSE), which contains data on the number of confirmed cases and deaths in the United States and around the world, broken down by counties in the U.S. The public database is updated daily, and its virtual dashboard is widely used around the world.
Data sources of the database include the World Health Organization (WHO), the U.S. Centers for Disease Control and Prevention (CDC), BNO News, Worldometers, and 1point3acres. We take all counties that have 12 or more cumulative cases in the time frame of 01/22/2020 to 04/17/2020. We treat the day a county reaches 12 or more confirmed cases as day one and then discard all counties whose time series is shorter than 14 days after processing. This way we shift each county to a similar starting point in terms of the number of cases and retain a long enough period for a meaningful analysis. We also remove unassigned cases and U.S. territories, which ultimately results in a total of 950 counties. As mentioned before, we use w_{i,t} = log(x_{i,t}) to represent the logarithmic cumulative cases for county i on day t. We also repeat the community detection process with more recent data from 05/10/2020, when many states started to reopen, to 07/10/2020. The bulk of this part of the study concentrates on the beginning phase of the pandemic, given that health and government interventions to minimize the number of future cases should be executed as early as possible. However, we compare the resulting communities with more recent data that captures the second phase of the pandemic in the U.S.

Table 2: Average growth rates and the number of counties in each cluster for K = 2. Model R corresponds to Algorithm 1, where we use the sample correlation matrix. Model A_1 corresponds to Algorithm 4, where we use the k-nearest neighbors graph (k = 7). Model A_2 corresponds to Algorithm 4, where we use the ε-neighborhood graph (ε = 0.007). Groups 1 and 2 are the obtained partitions Ĝ_1 and Ĝ_2, respectively. Growth Rate is the approximated exponential growth rate, calculated as in subsection 2.5. Presented are the averages of these growth rates and their associated SEs for the counties in the two groups, clustered by the different methods. "R, second phase" refers to the clusters obtained for the period 05/10/2020 - 07/10/2020.
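The per-county growth rates behind Table 2 come from a nonlinear least-squares exponential fit. A minimal sketch, assuming the model x_t ≈ a·exp(r·t) implied by the text's "approximated exponential growth rate"; Gauss-Newton initialized from a log-linear fit is one way to carry out the nonlinear least squares:

```python
import numpy as np

def fit_growth_rate(cases, iters=50):
    """Fit x_t ~ a * exp(r * t) by nonlinear least squares
    (Gauss-Newton, initialized from a log-linear fit) and return
    the estimated exponential growth rate r."""
    x = np.asarray(cases, dtype=float)
    t = np.arange(len(x), dtype=float)
    # log-linear initialization: log x ~ log a + r t
    r, log_a = np.polyfit(t, np.log(x), 1)
    a = np.exp(log_a)
    for _ in range(iters):
        e = np.exp(r * t)
        res = x - a * e
        J = np.column_stack([e, a * t * e])  # partials w.r.t. a and r
        step, *_ = np.linalg.lstsq(J, res, rcond=None)
        a, r = a + step[0], r + step[1]
    return r
```

Averaging these r values within each cluster, as in subsection 2.5, then gives the per-community growth rates that Table 2 reports.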
States experienced a significant drop in cases when lockdowns were enforced and businesses were closed, but as they began to reopen, the number of cases saw an uptick once again. Since this second phase comes months after the initial outbreak, there may be meaningful differences worthy of analysis. We can see from Table 2 that for the clusters obtained by Algorithm 1 (R), the difference between the growth rates of Group 1 and Group 2 is the largest. This is validated by the discernible difference in trajectories shown in Figures 5 and 6. The standard error bands in Figure 5 underscore that the two groups become more distinct in their growth trajectories as time goes on: the overlap between the bands of the two groups decreases over time. For A_1 and A_2, the growth rates are much closer together between the two communities. Furthermore, the right panel of Figure 6 plots the average cases for the period after community detection was performed: 04/17/2020 - 09/03/2020. Evidently, the separation between the two groups becomes much more distinct as time goes on (a much larger number of cases). As for the community detection of the subsequent phase of the pandemic in the U.S. (05/10/2020 - 07/10/2020), the last row of Table 2 again shows a larger average growth rate for Group 2, albeit much smaller in magnitude, since cases increased at a slower rate once the country learned how to deal with the pandemic. Table 3 contains information on the average intra- and inter-group correlations, a sample reflection of B. Evidently, the intra-community correlations are higher than the inter-community correlations: Group 1's intra-correlation of 96.1% and Group 2's 96.8% are greater than 94.0%, the inter-group correlation between the two groups. As we only took counties with significant outbreaks as of 04/17/2020, it is logical to observe high correlations across the board. These results are also mirrored in Figure 4, a heatmap of the block correlations.
Some notable counties are partitioned to Group 2, the fast growth community. In addition, Figure 8 shows the same plots as those in Figure 6 but for a later phase. The curves are clearly much flatter in both groups, which is likely due to the increase in the number of cases plateauing in many counties. Furthermore, the distinction between the curves of Groups 1 and 2 is considerably larger than in the earlier data. This can be explained by the confluence of additional factors that separate each county's experience with the virus, including the nature of local government intervention, the degree and timing of re-openings, travel restrictions, etc.

An important subsequent question once the communities are obtained is what underlying factors play a role in which growth cluster a county belongs to. Since the growth of COVID-19 cases is also related to static, inherent factors that are not a consequence of the disease, we examine a variety of county demographic variables and how they differ among communities. In order to select the variables that are most statistically significant, or most relevant to the community assignment of a county, we perform independent two-sample t-tests on the fastest and slowest growth groups (subsection 2.5) with respect to various demographic variables. The null and alternative hypotheses of this t-test for the d-th feature are

H_0: µ_{d,1} = µ_{d,2} versus H_1: µ_{d,1} ≠ µ_{d,2},

where µ_{d,1} is the mean value of the d-th feature in cluster 1 and µ_{d,2} is the mean value of the d-th feature in cluster 2. We then compute the two-sample test statistic with a pooled estimate of the variance. After finding the p-values, we rank the features from lowest p-value (most significantly different between the two groups) to highest (least significantly different between the two groups).
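A sketch of this feature-ranking step, using the pooled-variance two-sample t statistic. Since every feature here is tested on the same two groups and thus shares the same degrees of freedom, ranking by |t| yields the same ordering as ranking by p-value, so no distribution function is needed; the feature matrices below are hypothetical toys.

```python
import numpy as np

def pooled_t_stat(x1, x2):
    """Two-sample t statistic with a pooled variance estimate."""
    n1, n2 = len(x1), len(x2)
    s2 = ((n1 - 1) * np.var(x1, ddof=1) + (n2 - 1) * np.var(x2, ddof=1)) / (n1 + n2 - 2)
    return (np.mean(x1) - np.mean(x2)) / np.sqrt(s2 * (1.0 / n1 + 1.0 / n2))

def rank_features(F1, F2, names):
    """Rank features most-significant first: with a common df across
    features, ordering by |t| matches ordering by p-value."""
    tvals = [abs(pooled_t_stat(F1[:, d], F2[:, d])) for d in range(F1.shape[1])]
    return [names[d] for d in np.argsort(tvals)[::-1]]
```

Here F1 and F2 hold one row per county (fastest and slowest growth groups, respectively) and one column per demographic feature.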
Furthermore, we repeat Algorithm 1 for K = 3, 4, 5, select the 'fastest' and 'slowest' growth clusters in each case, and carry out the independent two-sample t-tests described above for the same demographic features. This sensitivity analysis tests whether the demographic variables that are most significantly different between the two groups remain consistent when we allow a larger number of communities. Ultimately, we present the most statistically significant demographic features. For this section, we use data from Data Planet, a social science research database that compiles 12.6 billion U.S. and international datasets from over 80 sources. For our purposes, we look at the 2017 American Community Survey (ACS), the largest household survey in the U.S., conducted by the U.S. Census Bureau. We select 17 relevant county-level features, which are displayed and summarized in Table 4. Note that not all 950 counties from the Johns Hopkins CSSE data used in subsection 2.6 are available on Data Planet; thus, the analysis in this section is done on 633 counties. Now, we are left with 301 counties in Group 1 and 332 counties in Group 2, which is still a close split like that of R seen in Table 2. R corresponds to Algorithm 1, where we use the sample correlation matrix; Groups 1 and 2 are the obtained partitions Ĝ_1 and Ĝ_2, respectively.

Figure 9: The left panel is a geographical representation of counties according to median household income. Blue dots are counties with less than $50,000 median annual household income and purple dots are counties with more than $50,000 median annual household income. The right panel is a geographical representation of counties according to population density. Blue dots are counties with less than 150 persons per sq mile and purple dots counties with more than 150 persons per sq mile.
Population density is the number of people per sq mile; median household income is in US dollars; % Poverty is the poverty rate; % 1-person households is the percentage of one-person households; % 5 or more person households is the percentage of five or more person households; % households w 60 y/o and older is the percentage of households that have one or more members who are 60 years old or older; low access to stores is defined as living more than one mile (urban areas) or 10 miles (rural areas) from the nearest supermarket, supercenter, or large grocery store; /1,000 is per 1,000 persons; % take public transportation is the percentage of all persons who work in a county and take public transportation to work every day. All feature information is as of 2017.

It is evident that the percentage of people who take public transportation to work is around three times greater for Group 2 than for Group 1. These observations do not hold for A_1 and A_2 (see Supplementary Material for Tables 10 and 11); in fact, the differences in means and medians between the two groups are significantly smaller for almost all features. For instance, the differences between the population density means of the two groups are 191.609 and 228.966 people per sq mile for A_1 and A_2, respectively, much smaller than the difference for R's clusters: 637.407. Furthermore, there is no consistency in these trends for most features; for example, Group 2 of A_1 has the higher population density and median income averages but the lower average number of bars, grocery stores, and restaurants. On the other hand, unlike what one would expect regarding the relationship between the number of one-person households and the spread of COVID-19, there is not much differentiation in household size between the groups. Figure 10 displays the individual bar plots of average values for six features for R, A_1, and A_2. It is evident that the orange bars of R's clusters are plainly contrasted between Groups 1 and 2, unlike those of A_1 and A_2's clusters.
It is very likely that too much information was lost in A_1 and A_2 during the truncation process. After finding the growth communities and conducting t-tests to ascertain the significant features for the latter phase of the pandemic in the U.S. (05/10/2020 - 07/10/2020), the features with the lowest p-values diverge from those of the earlier data, as presented in the right table of Table 5. Population density and median income are still among the most meaningful, with income being more significant, but variables such as the number of grocery stores, the percentage of people with low access to stores, and the percentage living in poverty have become much more significant. This suggests that at later stages of the pandemic, poverty and other income-related measures become more indicative of, and responsible for, the differences in case growth among counties. Thus, the seven features for d_i for this latter phase are the top seven variables in Table 5: number of grocery stores, % low income with low access to stores, median household income, % poverty, % white, population density, and % 1-person households.

The final section of our COVID-19 methodology is to predict a county's growth trajectory a few days into the future. We propose a prediction methodology with the objective that, given a new county, its key demographic features, and its social distancing measures, we implement an algorithm that projects the new county's future growth. Before going in depth on the prediction models, it is necessary to first define some important variables. Let l be the number of days forward to be projected for a new county. To build such a predictive model, let y_{i,t+l} = log(x_{i,t+l}) − log(x_{i,t}) be county i's l-day forward log-growth rate, which is close to the growth rate by Taylor expansion and numerical verification, for t = 1, ..., T_i.
Here, T_i + l is the total number of days for which county i has 12 or more cases. Recall the partitions obtained from Algorithm 1 (the set of indices of counties that belong to group k): Ĝ_k = {i | Ẑ_i = k, i = 1, ..., n}, where Ẑ ∈ R^n is the recovered community label vector. For a community k and a county i ∈ Ĝ_k, let d_i ∈ R^q be county i's vector of significant features obtained from section 2.7, and let y_i = [y_{i,1+l}, ..., y_{i,T_i+l}]^T ∈ R^{T_i} be its l-day forward log case differences. Note that each row of S_i, s^i_t ∈ R^3, contains three different social distancing metrics at time t. In summary, we have data {S_i, y_i : i ∈ Ĝ_k} for training an l-day ahead predictive model for the k-th community. To enhance the effectiveness of the model, we take advantage of a special type of recurrent neural network (RNN): long short-term memory (LSTM) networks, which are designed for time-series forecasting. Unlike feedforward neural networks (FNNs), RNNs produce an output that depends on a "hidden" state vector carrying information from prior inputs and outputs. LSTMs build on a simple, vanilla RNN by adding a forget gate, an input gate, and an output gate to each module. Hence, they are able to "remember" information over longer time periods (lags). The output of an LSTM module at time t is computed as follows. The forget gate output is f_t = σ(W_f [h_{t−1}, x_t] + b_f), where W_f and b_f are its weights and biases, respectively. The input gate output is i_t = σ(W_i [h_{t−1}, x_t] + b_i), where W_i and b_i are its weights and biases, respectively. The cell state vector is then updated by forgetting the previous memory through the forget gate and adding new memory through the input gate: c_t = f_t ⊙ c_{t−1} + i_t ⊙ tanh(W_c [h_{t−1}, x_t] + b_c). The output gate output is o_t = σ(W_o [h_{t−1}, x_t] + b_o), where W_o and b_o are its weights and biases, respectively, and the module output is h_t = o_t ⊙ tanh(c_t). Here, σ is the sigmoid activation function. We also compare the LSTM's performance with that of an FNN, namely an MLP (multilayer perceptron). MLPs are a type of fully connected FNN first introduced and popularized by Rumelhart et al.
(1986), consisting of an input layer, an output layer, and hidden layers in between, where training is done through backpropagation. The total input x^{s+1}_i of a neuron i in layer s + 1 takes the form x^{s+1}_i = Σ_j h^s_{ij} y^s_j + b^{s+1}_i, where h^s_{ij} is the weight from neuron j of the previous layer s to neuron i of layer s + 1, y^s_j is the output of neuron j in layer s, and b^{s+1}_i is the threshold of neuron i in layer s + 1. The first prediction model, Algorithm 2 (which we refer to as SD-LSTM), is a prediction procedure that solely uses a nonlinear model (a neural network) to fit the data. The idea is to first train an LSTM for each of the K communities; then, given a new county, we select the corresponding fitted model from our repertoire according to its nearest neighbor county (in demographic variables, not geographical distance). That is, we use the nearest neighbor in demographic variables to classify the new county's community, and use the model for that community to forecast the county's cases. Specifically, for each community k ∈ {1, ..., K}, we train an LSTM with the data {(s^i_t, y_{i,t+l})_{t=1}^{T_i}, for all i ∈ Ĝ_k}; this depends on the number of steps forward, l, we are trying to forecast. For simplicity of notation, for community k we denote all such data items over all counties i ∈ Ĝ_k by {(s_t, y_{t+l}), t ∈ G^l_k}, and the fitted function by f^l_k(·). In the second part, the prediction, given a new county i′'s demographic data d_{i′} and social distancing information S_{i′} = [s^{i′}_1, ..., s^{i′}_{T_{i′}}]^T ∈ R^{T_{i′}×3}, we first find its nearest neighbor county j = argmin_j ‖d_{i′} − d_j‖_2 and its associated community Ẑ_j, and use that community's prediction model to predict ŷ_{i′,t+l} = f^l_k(s^{i′}_t), t = 1, ..., T_{i′}, with k = Ẑ_j. Algorithm 2 summarizes this method of prediction. To predict a future event, the above procedure gives a number of prediction methods.
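Two pieces of the procedure above can be made concrete in a short sketch: the LSTM gate equations (one module step, with random placeholder weights and toy dimensions of 3 social-distancing inputs and hidden size 2) and the nearest-neighbor dispatch of the prediction step. The per-community models below are simple stand-in callables, not trained LSTMs.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM module update following the gate equations in the text."""
    z = np.concatenate([h_prev, x_t])                        # [h_{t-1}, x_t]
    f_t = sigmoid(W["f"] @ z + b["f"])                       # forget gate
    i_t = sigmoid(W["i"] @ z + b["i"])                       # input gate
    c_t = f_t * c_prev + i_t * np.tanh(W["c"] @ z + b["c"])  # cell update
    o_t = sigmoid(W["o"] @ z + b["o"])                       # output gate
    return o_t * np.tanh(c_t), c_t                           # h_t, c_t

def predict_new_county(d_new, S_new, D, Z, models):
    """Nearest neighbor in demographic space picks the community;
    that community's model maps each day's social distancing
    scores to the l-day forward log growth."""
    j = np.argmin(np.linalg.norm(D - d_new, axis=1))  # nearest neighbor
    f = models[Z[j]]                                   # its community's model
    return np.array([f(s_t) for s_t in S_new])

# Toy LSTM run: 3 inputs, hidden size 2, random placeholder weights.
rng = np.random.default_rng(0)
W = {k: 0.1 * rng.standard_normal((2, 5)) for k in "fico"}
b = {k: np.zeros(2) for k in "fico"}
h, c = np.zeros(2), np.zeros(2)
for _ in range(4):
    h, c = lstm_step(rng.standard_normal(3), h, c, W, b)

# Toy dispatch: 4 training counties, 2 communities, stand-in models.
D = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
Z = np.array([0, 0, 1, 1])
models = {0: lambda s: 0.1 * s.sum(), 1: lambda s: 0.3 * s.sum()}
y_hat = predict_new_county(np.array([4.9, 5.2]), np.ones((3, 3)), D, Z, models)
```

In practice, `models[k]` would be the fitted f^l_k, one trained LSTM per community, as the text describes.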
For example, to predict tomorrow's outcome, we can use today's social distancing data with $l = 1$, yesterday's with $l = 2$, the day before yesterday's with $l = 3$, and so on. As verified later in Figure 12, it turns out that $l = 4$ is the best choice of lead, which aligns with the incubation period of the disease.

Algorithm 2 (SD-LSTM)
Part I: Training
1: for $k = 1, \ldots, K$ do
2: Train LSTM $\hat{f}^l_k(\cdot)$ using the data $\{(s_t, y_{t+l}), t \in G^l_k\}$.
3: end for
4: return fitted LSTMs $\hat{f}^l_k(\cdot)$, $k = 1, \ldots, K$.
Part II: Prediction
Input: new county $i'$'s demographic data $d_{i'}$ and social distancing data $S_{i'} \in \mathbb{R}^{T_{i'} \times 3}$; $\{d_i\}$, $\hat{Z}$, and $\hat{f}^l_k(\cdot)$, $k = 1, \ldots, K$ from Part I.
1: Find county $i'$'s nearest neighbor $j = \arg\min_j \|d_{i'} - d_j\|_2$.
2: Set $k = \hat{Z}_j$.
3: for $t = 1, \ldots, T_{i'}$ do
4: Compute $\hat{y}_{i',t+l} = \hat{f}^l_k(s^{i'}_t)$.
5: end for
6: return $\hat{y}_{i'} = [\hat{y}_{i',1+l}, \hat{y}_{i',2+l}, \ldots, \hat{y}_{i',T_{i'}+l}]^T \in \mathbb{R}^{T_{i'}}$.

Algorithm 3 takes SD-LSTM a step further by including a linear component: a linear model is first fit for each county, and the residuals pooled within each community are then further modeled by an LSTM. This idea is related to boosting, or nonparametric estimation using a parametric start (Fan et al., 2009), resulting in a semi-parametric fit. Again, the objective of the training part is to obtain $K$ fitted models, one for each community, using semi-parametric regression techniques. More specifically, for county $i$ with lead $l$, we first fit the linear regression model
$$y_{i,t+l} = \alpha^l_i + (\beta^l_i)^T s^i_t + \epsilon_{i,t+l}. \qquad (7)$$
After fitting the linear regression models for every county $i \in G^l_k$, we obtain the residuals $\{\hat{\epsilon}_{i,t+l}, t \in G^l_k\}$ and save all the coefficients $\hat{\alpha}^l_i, \hat{\beta}^l_i$ for $i \in G^l_k$, $k = 1, \ldots, K$. We then extract further information from $\{(s_t, \hat{\epsilon}_{t+l}), t \in G^l_k\}$ by fitting an LSTM to obtain the fitted $\hat{g}^l_k(\cdot)$. Then, for the prediction of the new county $i'$, we follow the same steps as those in SD-LSTM, but the final prediction instead adds the linear fit of the nearest neighbor county and the LSTM fit of that county's community:
$$\hat{y}_{i',t+l} = \hat{\alpha}^l_j + (\hat{\beta}^l_j)^T s^{i'}_t + \hat{g}^l_k(s^{i'}_t), \qquad t = 1, \ldots, T_{i'},$$
where $k = \hat{Z}_j$. We will refer to this model as SD-SP. The idea is summarized in Algorithm 3. We also include three other algorithms for comparison purposes.
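Both SD-LSTM and SD-SP route a new county to a community model through its nearest demographic neighbor. A minimal sketch of that routing-and-prediction step follows, with generic fitted functions standing in for the trained LSTMs; the function and argument names are ours.

```python
import numpy as np

def predict_new_county(d_new, S_new, d_train, Z_train, community_models):
    """Prediction step shared by Algorithms 2 and 3 (sketch). d_new: demographic
    vector of the new county; S_new: (T x 3) social distancing series; d_train:
    (n x q) training demographics; Z_train: community labels; community_models[k]:
    a fitted l-day-ahead model for community k."""
    # nearest neighbor in demographic space (Euclidean distance, not geography)
    j = int(np.argmin(np.linalg.norm(d_train - d_new, axis=1)))
    k = Z_train[j]                       # inherit the neighbor's community
    f = community_models[k]
    # one l-day-ahead forecast from each day's social distancing scores
    return np.array([f(s_t) for s_t in S_new]), k
```

The key design choice is that the new county never needs case history of its own: community membership is inferred entirely from demographics, so the method applies to counties with no outbreak yet.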
The first replaces the LSTM fit $\hat{f}^l_k(\cdot)$ of community $k$ in SD-LSTM with a linear model. This corresponds to fitting (7) without further boosting by an LSTM. For simplicity, we shall refer to this approach as SD-LM (social distancing linear model). The second uses both demographic and social distancing data to fit an LSTM; this approach is identical to SD-LSTM but includes the $q = 7$ significant demographic variables in Table 5 as additional inputs (we will refer to this model as DSD-LSTM). The third is identical to SD-LSTM but, instead of an LSTM, uses an MLP with two hidden layers (we will refer to this model as SD-MLP).

Algorithm 3 (SD-SP)
Part I: Training
1: for $k = 1, \ldots, K$ do
2: Fit the regression models (7) for $i \in G^l_k$ and obtain the residuals $\{\hat{\epsilon}_{t+l}, t \in G^l_k\}$.
3: Train LSTM $\hat{g}^l_k(\cdot)$ using $\{(s_t, \hat{\epsilon}_{t+l}), t \in G^l_k\}$.
4: end for
5: return fitted LSTMs $\hat{g}^l_k(\cdot)$ and all $\hat{\alpha}^l_i$ and $\hat{\beta}^l_i$ for $i \in G^l_k$, $k = 1, \ldots, K$.
Part II: Prediction
Input: new county $i'$'s demographic data $d_{i'}$ and social distancing data $S_{i'} \in \mathbb{R}^{T_{i'} \times 3}$; $\hat{Z}$, $\hat{g}^l_k(\cdot)$, $\hat{\alpha}^l_i$, and $\hat{\beta}^l_i$ for $i \in G^l_k$, $k = 1, \ldots, K$ from Part I.
1: Find county $i'$'s nearest neighbor $j = \arg\min_j \|d_{i'} - d_j\|_2$.
2: Select $\hat{\alpha}^l_j$ and $\hat{\beta}^l_j$ for county $j$.
3: Predict $\hat{y}_{i',t+l} = \hat{\alpha}^l_j + (\hat{\beta}^l_j)^T s^{i'}_t + \hat{g}^l_{\hat{Z}_j}(s^{i'}_t)$, $t = 1, \ldots, T_{i'}$.

For the LSTM, the optimization algorithm used is Adam with a learning rate of 0.01. We also test the performance of various lags to see which yields the highest out-of-sample $R^2$, defined as follows for a given new county $i'$ and lead $l$:
$$R^2_{i'} = 1 - \frac{\sum_{t=1}^{T_{i'}} (y_{i',t+l} - \hat{y}_{i',t+l})^2}{\sum_{t=1}^{T_{i'}} (y_{i',t+l} - \bar{y}_{i'})^2},$$
where $y_{i',t+l}$ is the observed value, $\hat{y}_{i',t+l}$ is the predicted value, and $\bar{y}_{i'} = (1/T_{i'}) \sum_{t=1}^{T_{i'}} y_{i',t+l}$. The network architectures are summarized in Table 7.

Table 7: Network architectures.
Model    | No. of hidden layers | Type and no. of nodes | Input shape
SD-LSTM  | 1 | LSTM, 10      | N × T × 3
SD-SP    | 1 | LSTM, 10      | N × T × 3
DSD-LSTM | 1 | LSTM, 10      | N × T × 10
SD-MLP   | 2 | Dense, 10, 10 |

Social distancing data is courtesy of Unacast and its COVID-19 Social Distancing Scoreboard. The scoreboard tracks mobile device movement and has three metrics that quantify the level of social distancing practiced in a particular county. The first metric is the percentage change in total distance traveled, averaged across all devices, compared to a pre-Corona baseline.
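The out-of-sample $R^2$ above penalizes a model relative to a constant (mean) fit; a negative value means the model is worse than a horizontal line. A direct implementation (the function name is ours):

```python
import numpy as np

def out_of_sample_r2(y_obs, y_pred):
    """Out-of-sample R^2: 1 minus the ratio of the model's squared error to that
    of a constant mean fit. Negative values indicate performance worse than a
    horizontal-line fit."""
    y_obs, y_pred = np.asarray(y_obs, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_obs - y_pred) ** 2)
    ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

A perfect forecast gives 1.0 and predicting the sample mean gives exactly 0.0, which is why models such as SD-LM and SD-MLP with negative values are described as worse than a constant fit.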
The second is the percentage change in the number of visitations to non-essential places compared to a pre-Corona baseline. For these two metrics, the pre-Corona baseline of a county on a particular day is defined as the average over the four corresponding pre-weekdays (at least four weeks before that day). For example, for Monday 3/30, the pre-Corona baseline of the first metric is the average of the first metric over the four Mondays 2/10, 2/17, 2/24, and 3/2. The final metric is the rate of human encounters as a fraction of the pre-Corona national baseline, which is the average of the metric taken over the four weeks immediately preceding the COVID-19 outbreak (02/10/2020 - 03/08/2020), as defined by Unacast. Since this data starts on 02/25/2020, after the start of the coronavirus case data (01/22/2020), we perform prediction on the period 02/25/2020 - 7/10/2020, which spans the start of the "initial phase" through the end of the "recent phase". Also note that not all counties from the Johns Hopkins CSSE data and Data Planet are available in Unacast's database, so of the 633 counties from subsection 3.1, this section uses 627 counties.

Table 8: 02/25/2020 - 7/10/2020 in-sample and out-of-sample $R^2$ for Model 1 (SD-LSTM), Model 2 (SD-SP), Model 3 (SD-LM), Model 4 (DSD-LSTM), and Model 5 (SD-MLP) for K = 1, 2, 3, 4, 5. The mean, median, and standard deviation are averaged over the 5 folds.

For K = 1, we assume that all counties belong to one group, so we take all counties in the training data to train the neural network. The results are based on $l = 4$ and five-fold cross-validation. 501 of the total 627 counties are used as training data (in-sample) and 126 counties are used as testing data (out-of-sample).
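The per-county baseline can be computed directly from the 3/30 example: the four same-weekday dates 2/10 through 3/2 are exactly 4 to 7 weeks prior. The sketch below encodes that reading of Unacast's definition; the function name is ours and the exact rule is inferred from the example, not from Unacast's specification.

```python
import pandas as pd

def pre_corona_baseline(series, day):
    """Pre-Corona baseline for a metric on `day`: the average over the four
    same-weekday dates falling 4 to 7 weeks earlier (inferred from the text's
    Monday 3/30 example). `series` is a pd.Series indexed by date."""
    day = pd.Timestamp(day)
    prior = [day - pd.Timedelta(weeks=w) for w in range(4, 8)]
    # reindex aligns on the four prior dates; missing days are skipped by mean()
    return series.reindex(prior).mean()
```

For Monday 3/30 this averages the metric over 3/2, 2/24, 2/17, and 2/10, matching the example in the text.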
Among the prediction models implemented using the county-level social distancing measures (see subsection 4.4), for K = 1, 2, 3, Model 4 (DSD-LSTM) slightly outperforms Model 1, owing to the seven additional demographic variables.

Table 9: 02/25/2020 - 7/10/2020 random-assignment in-sample and out-of-sample $R^2$ for Model 1 (SD-LSTM), K = 2. Each trial randomly assigns each test county of one of the train-test splits to either community 1 or community 2.

Model 1 (SD-LSTM) yields the highest average and median out-of-sample $R^2$ for K = 4, 5. Models 2 (SD-SP) and 3 (SD-LM) perform much worse across the board, which implies that these two models are worse than a horizontal-line fit. It is also worth mentioning that the neural network correction part of Model 2 is very hard to tune well enough to outperform the linear Model 3 on its own. In this case, not only did the correction fail to enhance Model 3's results, it actually worsened the model's predictive ability. Nonparametric methods other than a neural network, such as support vector regression, were also tried, with similarly lackluster results, implying that boosting or enhancing the linear estimator with a nonlinear estimator is not beneficial in this case. The superiority of Models 1 and 4 suggests a nonlinear effect that the LSTM was able to extract but that the linear, semi-parametric, and MLP models could not. For Models 1 and 4, stratifying the communities through our method does make a difference in-sample, since increasing K improves the models' mean and median in-sample $R^2$. However, this is not the case out-of-sample, as K = 1 produces the best results (no heterogeneity) and the out-of-sample $R^2$ continues to drop from K = 2 to 5. It is reasonable to conclude that the decrease in sample size for each community's training (e.g.
K = 1 uses all 501 counties to train, while K = 5 uses on average one-fifth of that number to train each community) hurts the model's ability to take advantage of the heterogeneity embedded in the communities. Thus, since neural networks have an advantage in large-sample settings, the effect of the reduction in sample size for larger K outweighs the community differences captured by community detection (Algorithm 1). We also include Model 5 (an FNN with two hidden layers, each with 50% dropout) as a contrast to the LSTM. Its performance is similar to Model 3 in that it is no better than a constant fit. The advantage of the LSTM is highlighted here: its output depends on previous computations, unlike the FNN, which assumes the inputs (as well as the outputs) are independent of each other. As COVID-19 cases are sequential information, the LSTM is clearly preferable for prediction. See Table 8 for the detailed breakdown by model and by the number of clusters K. Figure 11 contains the out-of-sample $R^2$ box plots for the four models with K = 1, 2, 3, 4, 5.

Figure 12: The left panel shows the average out-of-sample $R^2$ for Model 1, K = 1 and Model 4, K = 1 for l = 1, 2, 3, 4, 5, 6, 7 for one train-test split. The right panel shows the average out-of-sample $R^2$ for the same models, where one social distancing feature is left out each time. Both panels are for the phase 02/25/2020 - 7/10/2020 and based on five-fold cross-validation.

To ascertain whether the information from community detection still plays a role despite K = 1 being the best setting for out-of-sample prediction, we randomly assign each testing county to an existing community instead of using the nearest neighbor method. As shown in Table 9, after repeating this five times for Model 1, K = 2, the median in-sample $R^2$ values are much lower compared to those of the same model in Table 8 (median of 0.5569 vs. 0.6372, respectively).
Albeit with a smaller difference, the out-of-sample median of 0.4980 is also smaller than the 0.5437 in Table 8. This demonstrates that community detection can still categorize the nature of different counties' growth trajectories, but this effect is likely outweighed by the diminishing sample size as K increases. Also note that before obtaining the prediction results for each algorithm, the appropriate lead was chosen as a hyperparameter by comparing the average $R^2$ values for each lead. The left panel of Figure 12 presents the median out-of-sample $R^2$ versus l = 1, ..., 7 for the two best models, Model 1, K = 1 and Model 4, K = 1, as examples. Since the out-of-sample $R^2$ plateaus after a four-day lead, we fixed l = 4, as a larger lead would decrease precision, and this choice is consistent with studies showing that the median incubation period of COVID-19 is 4-5 days (Lauer et al., 2020). Furthermore, anything longer than a week or so is rarely used in epidemiological and sociological studies. In addition, we quantify social distancing feature importance by averaging the out-of-sample $R^2$ when we leave each feature out one at a time. The right panel of Figure 12 suggests that, although there is no distinct drop in performance, leaving out feature 1 (percent change in total distance traveled) results in the largest decline in $R^2$, whereas leaving out feature 2 (percent change in the number of visitations to non-essential places) results in the smallest decline.

By utilizing spectral clustering to recover communities, we develop a framework to detect COVID-19 communities and discover meaningful interpretations of the clusters. We use the correlation matrix instead of the canonical Laplacian, as it offers more meaningful insight and more distinct clusters. The resulting communities are distinct in the nature of their respective growth trajectories, and several demographic variables further distinguish these growth communities.
Singling out the significant demographic features that have explanatory power for a county's growth community membership, we discover that not all of these variables are intuitive in their role in impacting COVID-19 cases. After modeling and interpreting historical disease progression, we turn to studying future growth trajectories by incorporating social distancing information. We are able to reliably predict the logarithmic trends in case growth through the use of LSTMs and also verify that the counties are far from homogeneous: the obtained communities contain crucial information necessary for in-sample prediction. As for the LSTM's out-of-sample predictive power, the effect of the decline in sample size when stratifying counties into more communities dominates the heterogeneity between counties' growth curves that community detection uncovers. However, after comparing results to randomly assigning counties to different communities, the method we propose still demonstrates that using the community detection results boosts the models' predictive performance. We do, therefore, acknowledge that there could be other latent features that we did not capture in this study and that the three social distancing metrics used here may not paint the complete picture. Furthermore, we do not address the effect of government interventions at given time points that may have altered the disease progression. These are all points that could be further investigated. Despite these potential shortcomings, the analysis conducted here on the first phase of the disease can also be compared to the second phase, which we are currently experiencing. As the U.S. and many other countries are witnessing an even more extraordinary uptick in cases, we foresee several possible future applications of our study, including to other contagious disease outbreaks. Another interesting direction for future work is to utilize the confidence distribution framework of Xie et al.
(2011) to combine studies from independent data sources from different countries.

References
Community detection and stochastic block models: recent developments.
Entrywise eigenvector analysis of random matrices with low expected rank.
Noise thresholds for spectral clustering.
Accounting for incomplete testing in the estimation of epidemic parameters.
Connectivity of the mutual k-nearest-neighbor graph in clustering and outlier detection.
Community detection in partial correlation network models.
Detecting functional modules in the yeast protein-protein interaction network.
Demographic science aids in understanding the spread and fatality rates of COVID-19.
Local quasi-likelihood with a parametric guide.
Estimating number of factors by adjusted eigenvalues thresholding.
Analysis and forecast of COVID-19 spreading in China, Italy and France.
Consistent adjacency-spectral partitioning for the stochastic block model when the model parameters are unknown.
Preparedness and vulnerability of African countries against importations of COVID-19: a modelling study.
Clinical characteristics of coronavirus disease 2019 in China.
Stochastic blockmodels: First steps.
Estimation of time-varying reproduction numbers underlying epidemiological processes: A new statistical tool for the COVID-19 pandemic.
Artificial intelligence forecasting of COVID-19 in China.
An early examination: Psychological, health, and economic correlates and determinants of social distancing amidst COVID-19.
Fast community detection by SCORE.
Early dynamics of transmission and control of COVID-19: a mathematical modelling study.
The Lancet Infectious Diseases.
The incubation period of 2019-nCoV from publicly reported confirmed cases: estimation and application.
Consistency of spectral clustering in stochastic block models.
Clinical and demographic characteristics of patients dying from COVID-19 in Italy vs China.
Predicting the cumulative number of cases for the COVID-19 epidemic in China from early data.
On spectral clustering: Analysis and an algorithm.
Epidemic analysis of COVID-19 in China by dynamical modeling. medRxiv.
Fitting network models for functional brain connectivity.
Why is it difficult to accurately predict the COVID-19 epidemic?
Spectral clustering and the high-dimensional stochastic blockmodel.
Learning representations by back-propagating errors.
iFusion: Individualized fusion learning.
Normalized cuts and image segmentation.
Unacast social distancing dataset.
A tutorial on spectral clustering.
COVID-Net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images.
Stochastic a posteriori blockmodels: Construction and assessment.
Confidence distributions and a unifying framework for meta-analysis.
Modified SEIR and AI prediction of the epidemics trend of COVID-19 in China under public health interventions.
Deep learning-based detection for COVID-19 from chest CT using weak label. medRxiv.
Predicting COVID-19 in China using hybrid AI model.

Acknowledgments
We would like to thank Unacast Inc. for providing us with their extensive social distancing data. The work was in part supported by NSF Grants DMS-2013789, DMS-1712591, and DMS-2034022, and NIH grant 2R01-GM072611-15.

Algorithm 4 outlines the spectral clustering procedure with adjacency matrices $A_1$ and $A_2$, and Tables 10 and 11 present Groups 1 and 2's mean, median, and standard deviation for the 17 features.

Algorithm 4: Normalized Laplacian spectral clustering
Input: Similarity matrix $S \in \mathbb{R}^{n \times n}$ and the number of clusters K.
1: Obtain the adjacency matrix A.
2: Compute the normalized, symmetric Laplacian matrix $L = I - D^{-1/2} A D^{-1/2}$.
3: Compute the K eigenvectors $u_1, \ldots, u_K$ of L corresponding to the smallest eigenvalues in absolute value, and let $U \in \mathbb{R}^{n \times K}$ be the matrix with these eigenvectors as columns.
4: Normalize the rows of U to have unit norm to obtain $U_{norm}$.
5: Cluster the rows of $U_{norm}$ with k-means.
return Partition $\hat{G}_1, \ldots, \hat{G}_K$ of the nodes.

$A_2$ corresponds to Algorithm 4 where we use the $\epsilon$-neighborhood graph ($\epsilon = 0.007$). Groups 1 and 2 are the obtained partitions $\hat{G}_1$ and $\hat{G}_2$, respectively.
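Algorithm 4 can be sketched end-to-end in numpy. The final k-means step here is a tiny deterministic Lloyd's loop with farthest-point initialization, our simplification standing in for a full k-means implementation.

```python
import numpy as np

def laplacian_spectral_clustering(A, K):
    """Algorithm 4 sketch: form L = I - D^{-1/2} A D^{-1/2}, take the K
    eigenvectors of L with smallest |eigenvalue|, row-normalize, then cluster
    rows with a minimal deterministic k-means."""
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L)
    U = vecs[:, np.argsort(np.abs(vals))[:K]]            # smallest |eigenvalues|
    U = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    centers = [U[0]]
    for _ in range(1, K):                                # farthest-point init
        dist = np.min([((U - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(U[int(np.argmax(dist))])
    centers = np.array(centers)
    for _ in range(50):                                  # Lloyd iterations
        labels = np.argmin(((U[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(K):
            if np.any(labels == k):
                centers[k] = U[labels == k].mean(0)
    return labels
```

On a graph with two disconnected cliques, the two zero eigenvectors of L are constant on each component, so the row-normalized embedding separates the components exactly.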
In more recent years, spectral clustering (Balakrishnan et al., 2011; Rohe et al., 2011; Jin, 2015a; Lei and Rinaldo, 2015) has arisen as one of the most popular and widely studied approaches to recover these communities. Conventional spectral clustering algorithms mostly involve two steps: eigen-decompose the adjacency or Laplacian matrix of the data, and then apply a clustering algorithm, such as k-means, to the eigenvectors that correspond to the largest or smallest eigenvalues. There is extensive literature on such procedures, for instance von Luxburg (2007), Ng et al. (2001), Abbe (2017), etc. In this study, we introduce the unique procedure of conducting spectral clustering on the sample Pearson correlation coefficient matrix directly and compare its clusters to the standard Laplacian embedding. This complements the approach of Brownlees et al. (2020+), based on a latent covariance model for financial return data. Gilbert et al. (2020) used agglomerative clustering, an unsupervised learning method, on preparedness and vulnerability data in African countries using self-reported capacity measures and indicators. While a comprehensive study, it only considers possible exposures to travelers from China. Using a different dataset, Hu et al. (2020) clustered the data from China by implementing simple k-means clustering directly on various features of the provinces/cities, not on the eigenvectors of the correlation matrix.
It also does not take into account possible explanatory features that are not directly related to the number of cases, and it fails to predict for provinces that have yet to have cases. The data processing of some existing approaches also does not standardize and shift the data in a way that aligns with the nature of COVID-19. Once the communities are found, the subsequent part uncovers the statistically significant demographic features, pre-existing in the counties, that could largely explain a county's community membership. Most of the existing research on salient demographic information focuses on age-related features and the presence of co-morbidities or underlying health conditions, e.g., Dowd et al. (2020) and Lippi et al. (2020). In reality, what influences how the disease progresses in a county is most likely a confluence of variables, not one or two prevailing ones. Some studies also examine how various demographic determinants affect how well a county carries out social distancing (Im et al., 2020), but offer little or no connection to the nature of the growth curve. The variables extracted in the feature analysis part are then used in conjunction with time series of social distancing scores from Unacast (2020) to fit a recurrent neural network, ultimately predicting the progression of confirmed cases in a given county. It is important to note that for this prediction section, we use the period from the start of the pandemic until 07/20/2020, as this traces the first large spike in cases in the U.S. and the subsequent plateau. This gives a long enough time-series sample; including much more recent data would bring in the second large wave of the pandemic, which runs counter to the objective of capturing the growth trajectory of a county's peak and fall. Unacast has created a scoreboard of social distancing measures from mobile device tracking data, where a device is assigned to a specific county based on the location in which it spent the most time.
The neural network prediction takes these static, inherent county variables, community membership (the clustering results), and social distancing data to predict the future growth of confirmed cases. There have been several studies that predict, estimate, or model the growth curve of the disease, including Fanelli and Piazza (2020) on the cases in Italy, Roda et al. (2020) on the cases in Wuhan, and others on the cases in China. In addition, deep learning has been applied to COVID-19 research, such as Wang and Wong (2020), which detects positive cases through chest scans. Other studies, such as Zheng et al. (2020b), investigate when patients are most infectious by using a deep learning hybrid model that combines the epidemiological SIR model with an LSTM network. However, the literature on COVID-19 still lacks a comprehensive county-level approach that creates a throughline of the pandemic: the historical growth curve of confirmed cases, the characterization of this growth via clustering, the significant explanatory demographic features, and, finally, social distancing measures that give insight into the nature of the future growth trajectory, as displayed in Figure 1.

Figure 1: Pipeline of this study's three-part analysis of COVID-19. COVID-19 time series data is first used to perform community detection, clustering counties into several communities. Then, demographic features are incorporated to extract the most significant features that distinguish the growth communities. Finally, social distancing metric time series are added to the results of the previous two parts to carry out the prediction of COVID-19 cases for new counties.

Table 1 contains the specific time period, the number of counties N, and the data source used for each part of the paper (Part I: community detection; Part II: extraction of significant features; Part III: prediction), as outlined in Figure 1.

Table 1: Time period, number of counties, and data source used for each part of the paper.
The first part of this paper finds potential communities among the counties, in which clusters share similar growth patterns. To accomplish this, two fundamental concepts are necessary: the stochastic block model and spectral clustering. The former is a generative model through which community memberships are formed, and the latter is a methodology often utilized to recover these memberships. Compared to traditional clustering methods, spectral clustering has been shown to be effective in both statistical and computational efficiency (Abbe, 2017; Abbe et al., 2020). Our approach applies spectral clustering to the correlation matrix, instead of the commonly used adjacency or Laplacian matrix. The goal is to recover the county membership matrix embedded in the correlations of each county's logarithmic daily cumulative number of cases.

For each county, consider a daily time series of the cumulative number of confirmed cases, where we use curve registration (the time origin is set as the day on which the number of cases exceeds 12 for a particular county). This curve registration is important, as it takes into account the fact that counties may have different COVID-19 outbreak starting times. We denote by $w_{i,t} = \log(x_{i,t})$ the logarithmic cumulative number of cases of county $i$ on the $t$-th day since the county hit 12 or more cases. Then, we use the Pearson correlation as a similarity measure, defined as
$$R_{ij} = \frac{\sum_{t=1}^{T_{ij}} (w_{i,t} - \bar{w}_i)(w_{j,t} - \bar{w}_j)}{\sqrt{\sum_{t=1}^{T_{ij}} (w_{i,t} - \bar{w}_i)^2}\,\sqrt{\sum_{t=1}^{T_{ij}} (w_{j,t} - \bar{w}_j)^2}},$$
where $T_{ij} = \min(T_i, T_j)$, with $T_i$ and $T_j$ being the number of days counties $i$ and $j$ have had 12 or more cases, respectively, and $\bar{w}_i$ is the average of $w_{i,t}$ over the overlapping window. The sample correlation matrix $R \in \mathbb{R}^{n \times n}$ then contains the pairwise correlations among all $n$ counties. The logarithmic cumulative case counts are used to align with the exponential growth pattern implied by popular epidemic models. For example, we could distinguish between a faster exponential growth function such as $\exp(2t)$ and a slower growth function $\exp(t/2)$.
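The registered pairwise correlation can be sketched directly from the definition; the function name is ours, and `np.corrcoef` computes the same Pearson formula over the overlapping window.

```python
import numpy as np

def pairwise_registered_correlation(W):
    """Pairwise Pearson correlations of registered log-case curves. W is a list
    of 1-D arrays; W[i][t] is w_{i,t}, with day t counted from the day county i
    reached 12 cases. Each pair is compared over T_ij = min(T_i, T_j) days."""
    n = len(W)
    R = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            T = min(len(W[i]), len(W[j]))              # overlapping window
            R[i, j] = R[j, i] = np.corrcoef(W[i][:T], W[j][:T])[0, 1]
    return R
```

Because the curves are registered to a common origin, two counties with proportional growth trajectories but different outbreak dates still correlate highly.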
Another commonly used network representation is the adjacency matrix A, which encodes whether two counties are connected and is often constructed from a similarity measure such as the Pearson correlation or a mutual information score. If the graph is undirected, where each edge connecting two nodes is bidirectional, A is symmetric. The two most common types of similarity graphs are the $\epsilon$-neighborhood graph and the $k$-nearest neighbor graph. As we are using the sample correlation as the similarity measure, an $\epsilon$-neighborhood adjacency $A_1$ is defined as
$$(A_1)_{ij} = \begin{cases} 1 & \text{if } R_{ij} \geq \epsilon \text{ and } i \neq j, \\ 0 & \text{otherwise.} \end{cases}$$
A $k$-nearest neighbor adjacency $A_2$ is defined as
$$(A_2)_{ij} = \begin{cases} 1 & \text{if county } j \text{ is among the } k \text{ nearest neighbors of county } i \text{, or vice versa,} \\ 0 & \text{otherwise,} \end{cases}$$
where the nearest neighbors are found with respect to $R_{ij}$. Depending on the parameters $\epsilon$ and $k$ one chooses for $A_1$ and $A_2$, respectively, a significant amount of information could be lost in the process because of the thresholding operation. However, this operation also filters out many spurious correlations. Unlike the sparse $A_1$ and $A_2$, R retains all of the pairwise similarities between counties, which sheds more light on the within-group and between-group relationships.

The matrices R, $A_1$, and $A_2$ are critical because they can help us recover Θ, an $n \times K$ membership matrix that reflects which community each county belongs to, where K is the number of communities. Letting $Z_i \in \{1, \ldots, K\}$ be the community that county $i$ belongs to, the $i$-th row of Θ has exactly one 1, in column $Z_i$, and 0 elsewhere. We estimate Θ under an SBM, in which the probability that two counties are connected depends only on the memberships of the two counties. An SBM, denoted by G(n, B, Θ), has $n$ nodes and K communities and is parameterized by Θ and B, the $K \times K$ symmetric connectivity matrix. Essentially, B contains the inter- and intra-community connection probabilities: the probability of an edge between counties $i$ and $j$ is $B_{Z_i Z_j}$.
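Both graph constructions can be sketched in a few lines. The "or vice versa" symmetrization of the k-nearest-neighbor graph below is our reading of the usual definition; the function names are ours.

```python
import numpy as np

def eps_neighborhood_graph(R, eps):
    """A1: connect counties i and j when their correlation R_ij >= eps."""
    A = (R >= eps).astype(float)
    np.fill_diagonal(A, 0.0)  # no self-loops
    return A

def knn_graph(R, k):
    """A2: connect i and j when either is among the other's k most correlated
    counties (symmetrized k-nearest-neighbor graph; the symmetrization rule
    is an assumption on our part)."""
    n = len(R)
    A = np.zeros((n, n))
    for i in range(n):
        order = np.argsort(-R[i])                       # most correlated first
        neighbors = [j for j in order if j != i][:k]
        A[i, neighbors] = 1.0
    return np.maximum(A, A.T)                           # make the graph undirected
```

Note how both graphs discard the magnitude of the correlations after thresholding, which is exactly the information loss the text contrasts with clustering on R directly.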
The objective is to obtain an accurate estimate $\hat{\Theta}$ of Θ from an observed adjacency matrix A modeled as G(n, B, Θ). This yields a recovery of the partitions $G_k := \{i : Z_i = k, i = 1, \ldots, n\}$ by $\hat{G}_k = \{i : \hat{Z}_i = k, i = 1, \ldots, n\}$, $k = 1, \ldots, K$, up to a permutation of cluster labels, where $\hat{Z}_i$ indicates the location of the 1 in the $i$-th row of $\hat{\Theta}$. The population matrix $P \in \mathbb{R}^{n \times n}$, where $P_{ij}$ is the probability that counties $i$ and $j$ are connected, is naturally expressed as $P = \Theta B \Theta^T$.

Spectral clustering has been a popular choice for community detection (Rohe et al., 2011; Jin, 2015a; Lei and Rinaldo, 2015). The central idea is to relate the eigenvectors of the observable adjacency matrix A to those of $P = \Theta B \Theta^T$, which is not observed. This is accomplished by viewing A as a perturbation of its expected value $\mathbb{E}A = P - \mathrm{diag}(P)$. If $U \in \mathbb{R}^{n \times K}$ contains the eigenvectors of $\mathbb{E}A$ corresponding to its K largest absolute eigenvalues, then the columns of U span the same linear space as the columns of P (ignoring diag(P)). Additionally, P has the same column space as Θ. Now, letting $\hat{U}$ be the eigenspace corresponding to the K largest absolute eigenvalues of A, $\hat{U}$ is a consistent estimate of U, or of the column space of Θ, under some mild conditions. To resolve the ambiguity created by rotation, the k-means algorithm is applied to the normalized rows of $\hat{U}$ to identify community memberships (Rohe et al., 2011; Lei and Rinaldo, 2015).

Instead of examining the eigenvectors of A, spectral graph theory has long studied graph Laplacian matrices as a tool for spectral clustering. The symmetric Laplacian matrix is defined as follows: letting $D = \mathrm{diag}(d_1, \ldots, d_n)$ be the diagonal degree matrix with $d_i = \sum_{j=1}^n A_{ij}$, a popular definition of the normalized, symmetric Laplacian is $L = I - D^{-1/2} A D^{-1/2}$. When clustering with L, one takes the eigenvectors corresponding to the smallest eigenvalues in absolute value. In our context, A can be taken as either $A_1$ or $A_2$, as outlined in subsection 2.1.
As there are no exact rules for choosing the parameters ε and k of A_1 and A_2, respectively, clustering with L, which depends on the adjacency matrix, may be less than ideal. It is also an added, often computationally cumbersome step. Instead, we cluster directly on the similarity matrix R, the sample correlation matrix. Algorithm 1 delineates the detailed steps of this approach. The classic spectral clustering procedure with L, used as a benchmark, is outlined in the Supplementary Material. There are several methods for choosing the number of spiked eigenvalues in the context of factor models: the scree plot, eigen-gap, eigen-ratio, and adjusted correlation thresholding. As our method involves correlations, we apply the adjusted correlation thresholding method.

Algorithm 1 Spectral clustering on the sample correlation matrix
Input: Sample correlation matrix R ∈ R^{n×n} and the number of clusters K.
1: Compute the K eigenvectors u_1, ..., u_K of R with the largest eigenvalues in absolute value and let U ∈ R^{n×K} be the matrix with these eigenvectors as columns.
2: Normalize the rows of U to have unit norm to obtain U_norm.
3: Cluster the rows of U_norm with k-means.
Return: Partition Ĝ_1, ..., Ĝ_K of the nodes.

This method leads to K = 2, which roughly divides the counties into faster and slower growth communities. It also agrees with the choice that maximizes the eigen-gap. From now on, all clustering analysis is based on the unit-norm normalization of the eigenvectors. For the later analysis (section 2.7), it is useful to define the clusters that contain the counties with the fastest and slowest growth. After the clusters are produced with Algorithm 1, for every community k we calculate the average exponential growth rate of the counties in that community. This is done by fitting the total number of cases of each county i on day t to an exponential curve through nonlinear least squares, yielding an approximate growth rate r̂_i for county i. Then, we compare the average fitted growth rate r̄_k = |Ĝ_k|^{−1} Σ_{i∈Ĝ_k} r̂_i and its standard error across the clusters k = 1, ..., K.
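Algorithm 1 can be sketched in a few lines of numpy/scipy; we use scipy's `kmeans2` as a stand-in for whatever k-means implementation the authors used, and the toy trend/oscillation series are ours.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def correlation_spectral_clustering(R, K, seed=0):
    """Sketch of Algorithm 1: k-means on the unit-norm-normalized rows of the
    top-K (in absolute value) eigenvectors of the sample correlation matrix."""
    w, V = np.linalg.eigh(R)
    top = np.argsort(np.abs(w))[-K:]                      # K largest |eigenvalues|
    U = V[:, top]
    U_norm = U / np.linalg.norm(U, axis=1, keepdims=True)  # unit-norm rows
    np.random.seed(seed)
    _, labels = kmeans2(U_norm, K, minit='++')
    return labels

# toy data: three trending series and three oscillating series
np.random.seed(1)
t = np.linspace(0, 6 * np.pi, 60)
trend = np.vstack([t + 0.05 * np.random.randn(60) for _ in range(3)])
osc = np.vstack([np.sin(t) + 0.05 * np.random.randn(60) for _ in range(3)])
R = np.corrcoef(np.vstack([trend, osc]))
labels = correlation_spectral_clustering(R, 2)
```

With a clear block structure in R, the two groups are recovered exactly (up to the usual label permutation).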
The fastest growth cluster is defined as argmax_k r̄_k and the slowest growth cluster as argmin_k r̄_k. We use the COVID-19 (2019-nCoV) Data Repository by the Johns Hopkins Center for Systems Science and Engineering (CSSE), which contains the number of confirmed cases and deaths in the United States and around the world, broken down by counties in the U.S. The public database is updated daily and its virtual dashboard is used widely around the world. Data sources of the database include the World Health Organization (WHO), the US Centers for Disease Control and Prevention (CDC), BNO News, Worldometers, and 1point3acres. We take all counties that have 12 or more cumulative cases in the time frame 01/22/2020 to 04/17/2020. We treat the day a county reaches 12 or more confirmed cases as day one and then discard all counties whose time series is shorter than 14 days after this processing. In this way, we shift each county to a similar starting point in terms of the number of cases, with a long enough period for meaningful analysis. We also remove unassigned cases and U.S. territories, which ultimately results in a total of 950 counties. As mentioned before, we use w_{i,t} = log(x_{i,t}) to represent the logarithmic cumulative cases for county i on day t.

Table 2: Average growth rates and the number of counties in each cluster for K = 2. Model R corresponds to Algorithm 1, where we use the sample correlation matrix. Model A_1 corresponds to Algorithm 4, where we use the k-nearest neighbor graph (k = 7). Model A_2 corresponds to Algorithm 4, where we use the ε-neighborhood graph (ε = 0.007). Groups 1 and 2 are the obtained partitions Ĝ_1 and Ĝ_2, respectively. Growth Rate is the approximate exponential growth rate, calculated as in subsection 2.5. Presented are the averages of these growth rates and their associated SEs for the counties in the two groups, clustered by the different methods. The row "R, second phase" is for the clusters obtained for the period 05/10/2020 - 07/10/2020.
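The preprocessing described here (day-one alignment at the 12-case threshold, 14-day minimum length, log transform) might look like the following sketch; the function name is ours, and the thresholds default to the values in the text.

```python
import numpy as np

def align_county(cases, threshold=12, min_days=14):
    """Shift a county's cumulative-case series so day one is the first day
    with >= threshold cases; drop counties with fewer than min_days of data
    after alignment. Returns w_{i,t} = log(x_{i,t}), or None if dropped."""
    cases = np.asarray(cases, dtype=float)
    idx = np.argmax(cases >= threshold)   # first day reaching the threshold
    if cases[idx] < threshold:            # the threshold was never reached
        return None
    aligned = cases[idx:]
    if len(aligned) < min_days:
        return None
    return np.log(aligned)

# a toy cumulative series: crosses 12 cases on its fourth day
series = [0, 3, 8, 15, 22, 30] + list(range(40, 40 + 12 * 10, 10))
w = align_county(series)
```

Counties that never reach the threshold, or reach it too late to leave 14 days of data, are filtered out by returning `None`.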
We also repeat the community detection process with more recent data, from 05/10/2020, when many states started to reopen, to 07/10/2020. The bulk of this part of the study concentrates on the beginning phase of the pandemic, given that health and government interventions to minimize the number of future cases should be executed as early as possible. However, we compare the resulting communities with more recent data that capture the second phase of the pandemic in the U.S. States experienced a significant drop in cases when lockdowns were enforced and businesses were closed, but as they began to reopen, the number of cases saw an uptick once again. Since this second phase comes months after the initial outbreak, there may be meaningful differences worthy of analysis. We can see from Table 2 that for the clusters obtained by Algorithm 1 (R), the difference between the growth rates of Group 1 and Group 2 is the largest. This is validated by the discernible difference in trajectories shown in Figures 5 and 6. The standard error bands in Figure 5 underscore that the two groups become more distinct in their growth trajectories as time goes on: the overlap between the bands of the two groups decreases over time. For A_1 and A_2, the growth rates of the two communities are much closer together. Furthermore, the right panel of Figure 6 plots the average cases for the period after community detection was performed, 04/17/2020 - 09/03/2020. Evidently, the separation between the two groups becomes much more distinct as time goes on (a much larger number of cases). As for community detection in the subsequent phase of the pandemic in the U.S. (05/10/2020 - 07/10/2020), the last row of Table 2 again shows a larger average growth rate for Group 2, albeit much smaller in magnitude, since cases increased at a slower rate once the country had learned how to deal with the pandemic.
Table 3 contains information on the average intra- and inter-group correlations, a sample reflection of B. Evidently, the intra-community correlations are higher than the inter-community correlations: Group 1's intra-correlation of 96.1% and Group 2's 96.8% are both greater than 94.0%, the inter-group correlation between the two groups. As we only took counties with significant outbreaks as of 04/17/2020, it is logical to observe high correlations across the board. These results are also mirrored in Figure 4, a heatmap of the block correlations. Several notable counties are partitioned to Group 2, the fast growth community. In addition, Figure 8 shows the same plots as those in Figure 6 but for a later phase. The curves are clearly much flatter in both groups, which is likely due to the increase in the number of cases plateauing in many counties. Furthermore, the distinction between the curves of Groups 1 and 2 is also considerably bigger than in the earlier data. This can be explained by the confluence of additional factors that separate each county's experience with the virus, including the nature of local government intervention, the degree and timing of re-openings, travel restrictions, etc. An important question that arises once the communities are obtained is what underlying factors play a role in determining which growth cluster a county belongs to. Since the growth of COVID-19 cases is also related to static, inherent factors that are not a consequence of the disease, we examine a variety of county demographic variables and how they differ among the communities. In order to select the variables that are most statistically significant, i.e., most relevant to the community assignment of a county, we perform independent two-sample t-tests on the fastest and slowest growth groups (subsection 2.5) with respect to various demographic variables.
The null and alternative hypotheses of this t-test for the d-th feature are

H_0: μ_{d,1} = μ_{d,2} versus H_1: μ_{d,1} ≠ μ_{d,2},

where μ_{d,1} is the mean value of the d-th feature in cluster 1 and μ_{d,2} is the mean value of the d-th feature in cluster 2. We then compute the two-sample test statistic with the pooled estimate of the variance. After finding the p-values, we rank the features from the lowest p-value (most significantly different between the two groups) to the highest (least significantly different between the two groups). Furthermore, we repeat Algorithm 1 for K = 3, 4, 5, select the 'fastest' and 'slowest' growth clusters in each case, and carry out the independent two-sample t-tests as described above for the same demographic features. This sensitivity analysis tests whether the demographic variables that are most significantly different between the two groups remain consistent when we allow a larger number of communities. Ultimately, we present the most statistically significant demographic features. For this section, we use data from Data Planet, a social science research database that compiles 12.6 billion U.S. and international datasets from over 80 sources. For our purposes, we look at the 2017 American Community Survey (ACS), the largest household survey in the U.S., conducted by the U.S. Census Bureau. We select 17 relevant county-level features, which are displayed and summarized in Table 4. Note that not all 950 counties from the Johns Hopkins CSSE data used in subsection 2.6 are available on Data Planet, so the analysis in this section is done on 633 counties. This leaves 301 counties in Group 1 and 332 counties in Group 2, which is still a close split like that of R seen in Table 2.

Figure 9: The left panel is a geographical representation of counties according to median household income. Blue dots are counties with less than $50,000 median annual household income and purple dots are counties with more than $50,000 median annual household income. The right panel is a geographical representation of counties according to population density. Blue dots are counties with less than 150 persons per sq mile and purple dots are counties with more than 150 persons per sq mile.

Table 4: R corresponds to Algorithm 1, where we use the sample correlation matrix. Groups 1 and 2 are the obtained partitions Ĝ_1 and Ĝ_2, respectively. Population Density is the number of people per sq mile; Median Household Income is in US dollars; % Poverty is the poverty rate; % 1-person Households is the percentage of one-person households; % 5-or-more-person Households is the percentage of households with five or more persons; % Households w/ 60 y/o and Older is the percentage of households with one or more members who are 60 years old or older; Low Access to Stores is defined as living more than one mile (urban areas) or 10 miles (rural areas) from the nearest supermarket, supercenter, or large grocery store; /1,000 is per 1,000 persons; % Take Public Transportation is the percentage of all persons who work in a county and take public transportation to work every day. All feature information is as of 2017.

It is evident from Table 4 that the percentage of people who take public transportation to work is around three times greater for Group 2 than for Group 1. These observations do not hold for A_1 and A_2 (see the Supplementary Material for Tables 10 and 11); in fact, the differences in means and medians between the two groups are significantly smaller for almost all features. For instance, the differences between the population density means of the two groups of A_1 and A_2 are 191.609 and 228.966 people per sq mile, respectively, which are much smaller differences than that of R's clusters: 637.407. Furthermore, there is little consistency in these trends for most features; for example, Group 2 of A_1 has the higher population density and median income averages but also the lower average numbers of bars, grocery stores, and restaurants.
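The feature-selection step can be sketched with scipy's pooled-variance two-sample t-test; the feature names and the synthetic clusters below are hypothetical, for illustration only.

```python
import numpy as np
from scipy import stats

def rank_features(X1, X2, names):
    """Two-sample t-tests (pooled variance, equal_var=True) per demographic
    feature, ranked from lowest to highest p-value."""
    results = []
    for d, name in enumerate(names):
        t, p = stats.ttest_ind(X1[:, d], X2[:, d], equal_var=True)
        results.append((name, p))
    return sorted(results, key=lambda r: r[1])  # most significant first

np.random.seed(0)
# hypothetical clusters: feature 0 differs in mean, feature 1 does not
X1 = np.column_stack([np.random.normal(0, 1, 40), np.random.normal(5, 1, 40)])
X2 = np.column_stack([np.random.normal(2, 1, 40), np.random.normal(5, 1, 40)])
ranked = rank_features(X1, X2, ["pop_density", "pct_poverty"])
```

Features whose group means genuinely differ surface at the top of the ranking with very small p-values.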
On the other hand, contrary to what one might expect about the relationship between one-person households and the spread of COVID-19, there is not much differentiation between the groups in the number of persons per household. Figure 10 displays individual bar plots of the average values of six features for R, A_1, and A_2. It is evident that the orange bars of R's clusters are plainly contrasted between Groups 1 and 2, unlike those of A_1's and A_2's clusters. It is very likely that too much information was lost in A_1 and A_2 during the truncation process. After finding the growth communities and conducting t-tests to ascertain the significant features for the latter phase of the pandemic in the U.S. (05/10/2020 - 07/10/2020), the features with the lowest p-values diverge from those of the earlier data, as presented in the right table of Table 5. Population density and median income are still among the most meaningful, with income being more significant, but variables such as the number of grocery stores, the percentage of people with low access to stores, and the percentage living in poverty have become much more significant. This suggests that at later stages of the pandemic, poverty and other income-related measures become more indicative of, and responsible for, the differences in case growth among counties. Thus, the seven features constituting d_i for this latter phase are the top seven variables in Table 5: number of grocery stores, % low income with low access to stores, median household income, % poverty, % white, population density, and % 1-person households. The final section of our COVID-19 methodology is to predict a county's growth trajectory a few days into the future.
We propose a prediction methodology with the following objective: given a new county, its key demographic features, and its social distancing measures, we implement an algorithm that projects the new county's future growth. Before going in-depth on the prediction models, it is necessary to first define some important variables. Let l be the number of days forward to be projected for a new county. To build such a predictive model, let y_{i,t+l} = log(x_{i,t+l}) − log(x_{i,t}) be county i's l-day forward log-growth rate, which is close to the growth rate by Taylor's expansion and numerical verification, for t = 1, ..., T_i. Here, T_i + l is the total number of days on which county i has 12 or more cases. Recall the partitions obtained from Algorithm 1 (the set of indices of counties that belong to group k): Ĝ_k = {i : Ẑ_i = k, i = 1, ..., n}, where Ẑ ∈ R^n is the recovered community label vector. For a community k and a county i ∈ Ĝ_k, let d_i ∈ R^q be county i's significant feature vector obtained from section 2.7, and let y_i = [y_{i,1+l}, ..., y_{i,T_i+l}]^T ∈ R^{T_i} be its l-day forward log case differences. Let S_i ∈ R^{T_i×3} collect county i's social distancing scores; each row of S_i, s_t^i ∈ R^3, contains the three social distancing metrics at time t. In summary, we have the data {S_i, y_i : i ∈ Ĝ_k} for training an l-day ahead predictive model for the k-th community. To enhance the effectiveness of the model, we take advantage of a special type of recurrent neural network (RNN): long short-term memory (LSTM) networks, which are well suited to time-series forecasting. Unlike feedforward neural networks (FNNs), RNNs produce an output that depends on a "hidden" state vector containing information from prior inputs and outputs. An LSTM builds on a simple, vanilla RNN by including a forget gate, an input gate, and an output gate in each module. Hence, it is able to "remember" information over longer time periods (lags).
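Constructing the l-day forward log-growth targets from a cumulative case series is a one-liner; a minimal sketch:

```python
import numpy as np

def forward_log_growth(x, l):
    """y_{i,t+l} = log x_{i,t+l} - log x_{i,t}: the l-day forward
    log-growth rate for t = 1, ..., T_i, where T_i = len(x) - l."""
    w = np.log(np.asarray(x, dtype=float))
    return w[l:] - w[:-l]

x = [12, 15, 20, 26, 33]          # T_i + l = 5 days of cumulative cases
y = forward_log_growth(x, 2)      # l = 2, so T_i = 3 targets
```

Note that a series of T_i + l days yields exactly T_i usable targets, matching the indexing in the text.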
The output of an LSTM module at time t is the hidden state h_t, computed as follows. With σ the sigmoid activation function and [h_{t−1}, x_t] the concatenation of the previous hidden state and the current input:

f_t = σ(W_f · [h_{t−1}, x_t] + b_f) is the forget gate output, where W_f and b_f are its weights and biases, respectively;
i_t = σ(W_i · [h_{t−1}, x_t] + b_i) is the input gate output, where W_i and b_i are its weights and biases, respectively;
C̃_t = tanh(W_C · [h_{t−1}, x_t] + b_C) is the candidate memory.

The cell state vector then gets updated by forgetting the previous memory through the forget gate and adding new memory through the input gate:

C_t = f_t ⊙ C_{t−1} + i_t ⊙ C̃_t.

Finally, o_t = σ(W_o · [h_{t−1}, x_t] + b_o) is the output gate output, where W_o and b_o are its weights and biases, respectively, and the module output is h_t = o_t ⊙ tanh(C_t).

We also compare the LSTM's performance with that of an FNN, namely an MLP (multilayer perceptron). MLPs are a type of fully connected FNN, first introduced and popularized by Rumelhart et al. (1986), consisting of an input layer, an output layer, and hidden layers in between, where the training is done through backpropagation. The total input x_i^{s+1} of a neuron i of layer s + 1 takes the form

x_i^{s+1} = Σ_j h_{ij}^s y_j^s + b_i^{s+1},

where h_{ij}^s is the weight from neuron j of the previous layer s to neuron i of layer s + 1, y_j^s is the output of neuron j in layer s, and b_i^{s+1} is the threshold of neuron i in layer s + 1. The first prediction model, Algorithm 2 (which we will refer to as SD-LSTM), is a prediction procedure that solely uses a nonlinear model (a neural network) to fit the data. The idea is to first train an LSTM for each of the K communities; then, given a new county, we select the corresponding fitted model for prediction from our repertoire according to the new county's nearest neighbor county (in demographic variables, not geographical distance). That is, we apply the nearest neighbor in demographic variables to classify the new county's community, and use that community's model to forecast the county's cases. Specifically, for each community k ∈ {1, ..., K}, we train an LSTM with the data {(s_t^i, y_{i,t+l})_{t=1}^{T_i}, for all i ∈ Ĝ_k}; this depends on the number of steps forward, l, that we are trying to forecast.
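The gate equations can be implemented directly as a sanity check; the following is a plain-numpy forward pass of a single LSTM cell, not the paper's fitted model, with weight shapes and random initialization chosen by us.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step following the gate equations: W maps the concatenated
    [h_{t-1}, x_t] to each gate; b holds the corresponding biases."""
    z = np.concatenate([h_prev, x_t])
    f = sigmoid(W['f'] @ z + b['f'])        # forget gate
    i = sigmoid(W['i'] @ z + b['i'])        # input gate
    c_tilde = np.tanh(W['c'] @ z + b['c'])  # candidate memory
    c = f * c_prev + i * c_tilde            # forget old memory, add new memory
    o = sigmoid(W['o'] @ z + b['o'])        # output gate
    h = o * np.tanh(c)                      # hidden state / module output
    return h, c

np.random.seed(0)
n_h, n_x = 10, 3   # 10 hidden units (as in Table 7), 3 social distancing inputs
W = {g: 0.1 * np.random.randn(n_h, n_h + n_x) for g in 'fico'}
b = {g: np.zeros(n_h) for g in 'fico'}
h, c = np.zeros(n_h), np.zeros(n_h)
for x_t in np.random.randn(5, n_x):        # a 5-day window of scores
    h, c = lstm_step(x_t, h, c, W, b)
```

Because h_t = o_t ⊙ tanh(C_t) with o_t ∈ (0, 1), every component of the hidden state is strictly bounded in (−1, 1), which the loop above preserves at each step.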
For simplicity of notation, for community k we denote all such data items over all counties i ∈ Ĝ_k by {(s_t, y_{t+l}), t ∈ G_k^l} and the fitted function by f̂_k^l(·). In the second part, the prediction, given a new county i′ with demographic data d_{i′} and social distancing information S_{i′} = [s_1^{i′}, ..., s_{T_{i′}}^{i′}]^T ∈ R^{T_{i′}×3}, we first find its nearest neighbor county j = argmin_j ||d_{i′} − d_j||_2 and its associated community Ẑ_j, and use that community's prediction model to predict ŷ_{i′,t+l} = f̂_k^l(s_t^{i′}), t = 1, ..., T_{i′}, with k = Ẑ_j. Algorithm 2 summarizes this method of prediction. To predict a future event, the above procedure gives a number of prediction methods. For example, to predict tomorrow's outcome, we can use today's social distancing data with l = 1, yesterday's social distancing data with l = 2, the day before yesterday's social distancing data with l = 3, and so on. As verified later in Figure 12, it turns out that l = 4 is the best choice of lead, which aligns with the incubation period of the disease.

Algorithm 2 SD-LSTM
Part I: Training
1: for k = 1, ..., K do
2: Train the LSTM f̂_k^l(·) using the data {(s_t, y_{t+l}), t ∈ G_k^l}.
3: end for
4: return fitted LSTMs f̂_k^l(·), k = 1, ..., K.
Part II: Prediction
Input: d_{i′}, S_{i′} = [s_1^{i′}, ..., s_{T_{i′}}^{i′}]^T ∈ R^{T_{i′}×3}, Ẑ, and f̂_k^l(·), k = 1, ..., K, from Part I.
1: Find county i′'s nearest neighbor j = argmin_j ||d_{i′} − d_j||_2.
2: Set k = Ẑ_j.
3: for t = 1, ..., T_{i′} do
4: Compute ŷ_{i′,t+l} = f̂_k^l(s_t^{i′}).
5: end for
6: return ŷ_{i′} = [ŷ_{i′,1+l}, ŷ_{i′,2+l}, ..., ŷ_{i′,T_{i′}+l}]^T ∈ R^{T_{i′}}.

Algorithm 3 takes SD-LSTM a step further by including a linear component: a linear model is first fitted for each county, and the residuals within each community are then further modeled by an LSTM. This idea is related to boosting, or nonparametric estimation with a parametric start (Fan et al., 2009), resulting in a semi-parametric fit. Again, the objective of the training part is to obtain K fitted models, one for each community, using semi-parametric regression techniques.
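The prediction part of SD-LSTM reduces to a nearest-neighbor lookup in demographic space followed by the selected community's model; a sketch, where the toy "fitted models" are stand-in lambdas rather than trained LSTMs.

```python
import numpy as np

def predict_new_county(d_new, S_new, D_train, labels, models):
    """Part II of SD-LSTM (sketch): classify the new county by its nearest
    neighbor in demographic space, then apply that community's fitted model.
    models[k] is any fitted function mapping a day's scores to y_{t+l}."""
    j = np.argmin(np.linalg.norm(D_train - d_new, axis=1))  # nearest neighbor
    k = labels[j]                                           # its community
    return np.array([models[k](s_t) for s_t in S_new])

# toy setup: two communities with hypothetical fitted "models"
D_train = np.array([[0.0, 0.0], [10.0, 10.0]])
labels = [0, 1]
models = {0: lambda s: 0.1 * s.sum(), 1: lambda s: 0.5 * s.sum()}
y_hat = predict_new_county(np.array([9.0, 9.5]), np.ones((4, 3)),
                           D_train, labels, models)
```

Note that the distance is taken in the demographic feature space, not geographical distance, exactly as the text specifies.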
More specifically, for county i with lead l, we first fit the following linear regression model:

y_{i,t+l} = α_i^l + (β_i^l)^T s_t^i + ε_{i,t+l}.    (7)

After fitting the linear regression models for every county i ∈ Ĝ_k, we obtain the residuals {ε̂_{i,t+l}, t ∈ G_k^l} and save all the coefficients α̂_i^l, β̂_i^l for i ∈ Ĝ_k, k = 1, ..., K. We then extract further information from {(s_t, ε̂_{t+l}), t ∈ G_k^l} by fitting an LSTM to obtain the fitted ĝ_k^l(·). Then, for the prediction of a new county i′, we follow the same steps as in SD-LSTM, but the final prediction instead adds the linear fit of the nearest neighbor county and the LSTM fit of the community corresponding to the nearest neighbor county:

ŷ_{i′,t+l} = α̂_j^l + (β̂_j^l)^T s_t^{i′} + ĝ_k^l(s_t^{i′}),

where k = Ẑ_j. We will refer to this model as SD-SP. The idea is summarized in Algorithm 3.

Algorithm 3 SD-SP
Part I: Training
1: for k = 1, ..., K do
2: Fit the regression models (7) for i ∈ Ĝ_k and obtain the residuals {ε̂_{t+l}, t ∈ G_k^l}.
3: Train an LSTM using {(s_t, ε̂_{t+l}), t ∈ G_k^l}.
4: end for
5: return fitted LSTMs ĝ_k^l(·) and all α̂_i^l and β̂_i^l for i ∈ Ĝ_k, k = 1, ..., K.
Part II: Prediction
Input: d_{i′}, S_{i′} = [s_1^{i′}, ..., s_{T_{i′}}^{i′}]^T ∈ R^{T_{i′}×3}, Ẑ, and ĝ_k^l(·), α̂_i^l, and β̂_i^l for i ∈ Ĝ_k, k = 1, ..., K, from Part I.
1: Find county i′'s nearest neighbor j = argmin_j ||d_{i′} − d_j||_2.
2: Select α̂_j^l and β̂_j^l for county j.
3: Set k = Ẑ_j.
4: for t = 1, ..., T_{i′} do compute ŷ_{i′,t+l} = α̂_j^l + (β̂_j^l)^T s_t^{i′} + ĝ_k^l(s_t^{i′}).
5: end for
6: return ŷ_{i′} ∈ R^{T_{i′}}.

We also include three other algorithms for comparison purposes. The first replaces the LSTM fit f̂_k^l(·) of community k in SD-LSTM with a linear model; this corresponds to fitting (7) without further boosting by an LSTM. For simplicity, we shall refer to this approach as SD-LM (social distancing linear model). The second uses both demographic and social distancing data to fit an LSTM; this approach is identical to SD-LSTM but includes the q = 7 significant demographic variables in Table 5 as additional inputs (we refer to it as DSD-LSTM). The third is identical to SD-LSTM, but instead of an LSTM we use an MLP with two hidden layers (we will refer to this model as SD-MLP). For the LSTMs, the optimization algorithm used is Adam with a learning rate of 0.01.
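The per-county linear start of SD-SP, i.e., fitting the regression (7) and extracting residuals for the LSTM stage, can be sketched as follows; the synthetic scores and coefficients below are ours.

```python
import numpy as np

def fit_county_linear(S, y):
    """Fit y_{t+l} = alpha + beta^T s_t + eps for one county via least
    squares; return (alpha, beta, residuals). The residuals are what the
    community-level LSTM is trained on in the SD-SP sketch."""
    X = np.column_stack([np.ones(len(y)), S])   # intercept + 3 scores
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return coef[0], coef[1:], resid

np.random.seed(0)
S = np.random.randn(30, 3)                      # 30 days of 3 scores
true_beta = np.array([0.2, -0.1, 0.05])
y = 0.3 + S @ true_beta + 0.01 * np.random.randn(30)
alpha, beta, resid = fit_county_linear(S, y)
```

Because the design includes an intercept column, the residuals are mean-zero by construction, so the LSTM stage only has to model what the linear part leaves behind.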
We also test the performance of various leads to see which yields the highest out-of-sample R², defined as follows for a given new county i′ and lead l:

R²_{oos} = 1 − Σ_{t=1}^{T_{i′}} (y_{i′,t+l} − ŷ_{i′,t+l})² / Σ_{t=1}^{T_{i′}} (y_{i′,t+l} − ȳ_{i′})²,

where y_{i′,t+l} is the observed value, ŷ_{i′,t+l} is the predicted value, and ȳ_{i′} = (1/T_{i′}) Σ_{t=1}^{T_{i′}} y_{i′,t+l}. The network architectures are summarized in Table 7.

Table 7: Network architectures.
Model | No. of hidden layers | Type and no. of nodes | Input shape
SD-LSTM | 1 | LSTM, 10 | N × T × 3
SD-SP | 1 | LSTM, 10 | N × T × 3
DSD-LSTM | 1 | LSTM, 10 | N × T × 10
SD-MLP | 2 | Dense, 10, 10 |

Social distancing data is courtesy of Unacast and its COVID-19 Social Distancing Scoreboard. The scoreboard tracks mobile device movement and provides three metrics that quantify the level of social distancing practiced in a particular county. The first metric is the percentage change in total distance traveled, averaged across all devices, compared to a pre-Corona baseline. The second is the percentage change in the number of visits to non-essential places compared to a pre-Corona baseline. For these two metrics, the pre-Corona baseline of a county on a particular day is defined as the average over the four corresponding pre-pandemic weekdays (at least four weeks before the day). For example, for Monday 3/30, the pre-Corona baseline of the first metric is the average of the first metric over the four Mondays 2/10, 2/17, 2/24, and 3/2. The final metric is the rate of human encounters as a fraction of the pre-Corona national baseline. The pre-Corona national baseline for this metric is the average of the metric taken over the four weeks immediately preceding the COVID-19 outbreak (02/10/2020 - 03/08/2020), as defined by Unacast. Since this data starts on 02/25/2020, after the start of the coronavirus case data (01/22/2020), we perform prediction on the period 02/25/2020 - 07/10/2020, which spans the start of the "initial phase" to the end of the "recent phase".
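The out-of-sample R² above is straightforward to compute; note that negative values indicate doing worse than a horizontal-line (constant-mean) fit, which is how the weaker models are characterized later.

```python
import numpy as np

def out_of_sample_r2(y_true, y_pred):
    """R^2 = 1 - SSE/SST against the county's own mean; negative values
    mean the model predicts worse than a horizontal-line fit."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    sse = np.sum((y_true - y_pred) ** 2)
    sst = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - sse / sst

y = np.array([0.1, 0.2, 0.15, 0.3])
r2_perfect = out_of_sample_r2(y, y)                    # 1.0
r2_mean = out_of_sample_r2(y, np.full(4, y.mean()))    # 0.0: horizontal-line fit
```

Any prediction worse than the county's own mean drives the statistic below zero, so R² here is not bounded below.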
Also note that not all counties from the Johns Hopkins CSSE data and Data Planet are available in Unacast's database, so out of the 633 counties from subsection 3.1, this section is performed on 627 counties.

Table 8: 02/25/2020 - 07/10/2020 in-sample and out-of-sample R² for Model 1 (SD-LSTM), Model 2 (SD-SP), Model 3 (SD-LM), Model 4 (DSD-LSTM), and Model 5 (SD-MLP), for K = 1, 2, 3, 4, 5. The average values of the mean, median, and standard deviation are taken over the 5 folds. For K = 1, we assume that all counties belong to one group, so we take all counties in the training data to train the neural network. The results are based on l = 4 and five-fold cross-validation. 501 of the total 627 counties are used as training data (in-sample) and 126 counties are used as testing data (out-of-sample).

Table 9: 02/25/2020 - 07/10/2020 random-assignment in-sample and out-of-sample R² for Model 1 (SD-LSTM), K = 2. Each trial is completed by randomly assigning each test county of one of the train-test splits to either community 1 or community 2.

Among the four prediction models we implemented using the county-level social distancing measures (see subsection 4.4), for K = 1, 2, 3, Model 4 (DSD-LSTM) slightly outperforms Model 1 due to the use of the seven additional demographic variables. Model 1 (SD-LSTM) yields the highest average and median out-of-sample R² for K = 4, 5. Models 2 (SD-SP) and 3 (SD-LM) perform much more poorly across the board, which implies that these two models are worse than a horizontal-line fit. It is also worth mentioning that the neural network correction part of Model 2 is very hard to tune well enough to outperform the linear Model 3 on its own; in this case, not only did the correction fail to enhance Model 3's results, it actually worsened the model's predictive ability.
Nonparametric methods other than neural networks (such as support vector regression) were also tried, but all had a similarly lackluster effect, implying that boosting or enhancing the linear estimator with a nonlinear estimator is not beneficial in this case. The superiority of Models 1 and 4 suggests a nonlinear effect that the LSTM was able to extract but that the linear, semi-parametric, and MLP models could not. For Models 1 and 4, stratifying the communities through our method does make a difference in-sample, since increasing K improves the models' mean and median in-sample R². However, this is not the case out-of-sample, where K = 1 (no heterogeneity) produces the best results and the out-of-sample R² continues to drop from K = 2 to 5. It is reasonable to conclude that the decrease in the training sample size for each community (e.g., K = 1 uses all 501 counties for training, while K = 5 uses on average one-fifth of that number to train each community) is hurting the model's ability to take advantage of the heterogeneity embedded in the communities. Thus, since neural networks have an advantage in large-sample settings, the effect of the reduction in sample size for larger K outweighs the community differences captured by community detection (Algorithm 1). We also include Model 5 (an FNN with two hidden layers, each with 50% dropout) to contrast with the LSTM. Its performance is similar to Model 3's in that it is no better than a constant fit. The advantage of the LSTM is highlighted here, since its output depends on previous computations, unlike the FNN, which assumes the inputs (as well as the outputs) are independent of each other. As COVID-19 cases form sequential information, the LSTM is clearly preferable for prediction. See Table 8 for the detailed breakdown by model and by the number of clusters K. Figure 11 contains the out-of-sample R² box plots for the models with K = 1, 2, 3, 4, 5.
Figure 12: The left panel shows the average out-of-sample R² for Model 1, K = 1 and Model 4, K = 1 for l = 1, 2, 3, 4, 5, 6, 7 for one train-test split. The right panel shows the average out-of-sample R² for the same models when one social distancing feature is left out at a time. Both panels cover the phase 02/25/2020 - 07/10/2020 and are based on five-fold cross-validation.

To ascertain whether information from community detection still plays a role despite K = 1 being the best setting for out-of-sample prediction, we randomly assign each testing county to an existing community instead of using the nearest neighbor method. As shown in Table 9, after repeating this five times for Model 1, K = 2, the median in-sample R² values are much lower than those of the same model in Table 8 (a median of 0.5569 vs. 0.6372, respectively). Albeit by a smaller difference, the out-of-sample median of 0.4980 is also smaller than the 0.5437 in Table 8. This demonstrates that community detection can still categorize the nature of different counties' growth trajectories, but this effect is likely outweighed by the diminishing sample size as K increases. Also note that before obtaining the prediction results for each algorithm, the hyperparameter of the appropriate lead was chosen by comparing the average R² values for each lead. The left panel of Figure 12 presents the median out-of-sample R² against l = 1, ..., 7 for the two best models, Model 1, K = 1 and Model 4, K = 1, as examples. Since the out-of-sample R² plateaus after a four-day lead, we fixed l = 4, as a longer lead would decrease precision, and this choice is consistent with studies showing that the median incubation period of COVID-19 is 4-5 days (e.g., Lauer et al., 2020). Furthermore, leads longer than a week or so are rarely used in epidemiological and sociological studies. In addition, we quantify social distancing feature importance by averaging the out-of-sample R² when we leave out each feature one at a time.
Evidently, the right panel of Figure 12 suggests that, although there is no dramatic drop in performance, leaving out feature 1 (percent change in total distance traveled) results in the largest decline in R², whereas leaving out feature 2 (percent change in the number of visits to non-essential places) results in the smallest decline. By utilizing spectral clustering to recover communities, we have developed a framework to detect COVID-19 growth communities and discover meaningful interpretations of the clusters. We use the correlation matrix instead of the canonical Laplacian, as it offers more meaningful insight and more distinct clusters. The resulting communities differ distinctly in the nature of their growth trajectories, and several demographic variables further distinguish these growth communities. Singling out the significant demographic features that have explanatory power for a county's growth community membership, we discover that not all of these variables play an intuitive role in impacting COVID-19 cases. After modeling and interpreting the historical disease progression, we turn to studying future growth trajectories by incorporating social distancing information. We are able to reliably predict the logarithmic trends in case growth through the use of LSTMs and also verify that the counties are far from homogeneous: the obtained communities contain crucial information for in-sample prediction. As for the LSTM's out-of-sample predictive power, the effect of the decline in sample size when stratifying counties into more communities dominates the between-county heterogeneity in growth curves that community detection uncovers. However, comparison with randomly assigning counties to communities shows that using the community detection results still boosts the models' predictive performance.
We do, therefore, acknowledge that there could be other latent features that we did not capture in this study, and that the three social distancing metrics used here may not paint the complete picture. Furthermore, we do not address the effect of government interventions at given time points that may have altered the disease progression. These points could all be investigated further. Despite these potential shortcomings, the analysis conducted here on the first phase of the disease can also be compared to the second phase, which we are currently experiencing. As the U.S. and many other countries witness an even more extraordinary uptick in cases, we foresee several possible future applications of our study, including to other contagious disease outbreaks. Another interesting direction for future work is to utilize the confidence distribution framework of Xie et al. (2011) to combine studies from independent data sources in different countries.

References
- Community detection and stochastic block models: recent developments
- Entrywise eigenvector analysis of random matrices with low expected rank
- Noise thresholds for spectral clustering
- Accounting for incomplete testing in the estimation of epidemic parameters
- Connectivity of the mutual k-nearest-neighbor graph in clustering and outlier detection
- Community detection in partial correlation network models
- Detecting functional modules in the yeast protein-protein interaction network
- Demographic science aids in understanding the spread and fatality rates of COVID-19
- Local quasi-likelihood with a parametric guide
- Estimating number of factors by adjusted eigenvalues thresholding
- Analysis and forecast of COVID-19 spreading in China, Italy and France
- Consistent adjacency-spectral partitioning for the stochastic block model when the model parameters are unknown
- Preparedness and vulnerability of African countries against importations of COVID-19: a modelling study
- Clinical characteristics of coronavirus disease 2019 in China
- Stochastic blockmodels: First steps
- Estimation of time-varying reproduction numbers underlying epidemiological processes: A new statistical tool for the COVID-19 pandemic
- Artificial intelligence forecasting of COVID-19 in China
- An early examination: Psychological, health, and economic correlates and determinants of social distancing amidst COVID-19
- Fast community detection by SCORE
- Early dynamics of transmission and control of COVID-19: a mathematical modelling study. The Lancet Infectious Diseases
- The incubation period of 2019-nCoV from publicly reported confirmed cases: estimation and application
- Consistency of spectral clustering in stochastic block models
- Clinical and demographic characteristics of patients dying from COVID-19 in Italy vs China
- Predicting the cumulative number of cases for the COVID-19 epidemic in China from early data
- On spectral clustering: Analysis and an algorithm
- Epidemic analysis of COVID-19 in China by dynamical modeling. medRxiv
- Why is it difficult to accurately predict the COVID-19 epidemic?
- Fitting network models for functional brain connectivity
- Spectral clustering and the high-dimensional stochastic blockmodel
- Learning representations by back-propagating errors
- iFusion: Individualized fusion learning
- Normalized cuts and image segmentation
- Unacast social distancing dataset
- A tutorial on spectral clustering
- COVID-Net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images
- Stochastic a posteriori blockmodels: Construction and assessment
- Confidence distributions and a unifying framework for meta-analysis
- Nanshan Zhong, and Jianxing He. Modified SEIR and AI prediction of the epidemics trend of COVID-19 in China under public health interventions
- Deep learning-based detection for COVID-19 from chest CT using weak label. medRxiv
- Feng Ye, and Jingmin Xin. Predicting COVID-19 in China using hybrid AI model

Acknowledgments
We would like to thank Unacast Inc. for providing us with their extensive social distancing data. The work was supported in part by NSF Grants DMS-2013789, DMS-1712591, and DMS-2034022, and NIH grant 2R01-GM072611-15.

Supplementary Material
Algorithm 4 outlines the spectral clustering procedure with the adjacency matrices A_1 and A_2, and Tables 10 and 11 present Groups 1 and 2's means, medians, and standard deviations for the 17 features.

Algorithm 4 Normalized Laplacian spectral clustering
Input: Similarity matrix S ∈ R^{n×n} and the number of clusters K.
1: Obtain the adjacency matrix A.
2: Compute the normalized, symmetric Laplacian matrix L = I − D^{−1/2} A D^{−1/2}.
3: Compute the K eigenvectors u_1, ..., u_K of L with the smallest eigenvalues in absolute value and let U ∈ R^{n×K} be the matrix with these eigenvectors as columns.
4: Normalize the rows of U to have unit norm to obtain U_norm.
5: Cluster the rows of U_norm with k-means.
Return: Partition Ĝ_1, ..., Ĝ_K of the nodes.

A_2 corresponds to Algorithm 4 where we use the ε-neighborhood graph (ε = 0.007). Groups 1 and 2 are the obtained partitions Ĝ_1 and Ĝ_2, respectively.