key: cord-0213731-56qv2762 authors: Karavias, Yiannis (University of Birmingham); Narayan, Paresh (Monash University); Westerlund, Joakim (Lund University and Deakin University) title: Structural Breaks in Interactive Effects Panels and the Stock Market Reaction to COVID-19 date: 2021-11-04 journal: nan DOI: nan sha: 2ee279b798cfba3c824ecd6565d5d40694f0db8f doc_id: 213731 cord_uid: 56qv2762 Dealing with structural breaks is an important step in most, if not all, empirical economic research. This is particularly true in panel data comprised of many cross-sectional units, such as individuals, firms or countries, which are all affected by major events. The COVID-19 pandemic has affected most sectors of the global economy, and there is by now plenty of evidence to support this. The impact on stock markets is, however, still unclear. The fact that most markets seem to have partly recovered while the pandemic is still ongoing suggests that the relationship between stock returns and COVID-19 has been subject to structural change. It is therefore important to know if a structural break has occurred and, if it has, to infer the date of the break. In the present paper we take this last observation as a source of motivation to develop a new break detection toolbox that is applicable to different sized panels, easy to implement and robust to general forms of unobserved heterogeneity. The toolbox, which is the first of its kind, includes a test for structural change, a break date estimator, and a break date confidence interval. Application to a panel covering 61 countries from January 3 to September 25, 2020, leads to the detection of a structural break that is dated to the first week of April. The effect of COVID-19 is negative before the break and zero thereafter, implying that while markets did react, the reaction was short-lived. A possible explanation for this is the quantitative easing programs announced by central banks all over the world in the second half of March.
This paper considers what we believe to be a very common scenario in practice. We have in mind a researcher who seeks to infer a linear relationship between a dependent variable and a set of regressors. The data set has a panel structure, in which there are a large number of cross-sectional units, N, that are observed over T time periods. This scenario is relevant because while the number of time periods is always limited and cannot be increased other than by the passage of time, statistical agencies keep publishing time series data for individuals, firms and countries. Thus, while N is usually quite large, T need not be. One of the concerns here is therefore that T might not be large enough for many econometric approaches to work properly. Another concern is the presence of unobserved heterogeneity and the detrimental effect that this may have if said heterogeneity is correlated with the regressors. The main worry, however, is that the coefficients of some or indeed all of the regressors may be subject to structural change, because of major events that may have caused the relationship to change over time. The present paper develops a toolbox that enables the researcher to test for the presence of a common structural break and, if a break is detected, to also infer the date of the break. The tools are extremely easy to implement, accommodate general forms of unobserved heterogeneity, and can be used under quite relaxed conditions on T, provided that N is large. Accounting for structural change has always been an important issue in economics and elsewhere. Panel data are particularly susceptible to such change, because of the large number of time series that they contain. For example, a credit crunch or debt crisis may affect the returns of many firms and households, an oil price shock may affect the output of many countries, and a fad or fashion is likely to influence the livelihood of many individuals.
In our empirical application, we consider stock returns for 61 countries, which all plummeted around March 11, 2020, when COVID-19 was declared a global pandemic by the World Health Organisation (WHO). Fortunately, the panel data structure not only makes breaks likely, but it also makes for relatively easy detection. As is well known, with time series data consistent estimation of the breakpoint is not possible, but only consistent estimation of the break fraction. By contrast, in panels consistency is usually possible. Bai (2010) was among the first to make this point. He considered a highly stylized model with a breaking constant as the only regressor. The main finding is that the popular ordinary least squares (OLS) breakpoint estimator based on minimizing the sum of squared residuals is consistent as N → ∞ regardless of whether T is fixed or going to infinity. The accuracy of the procedure is therefore greatly enhanced when compared to the time series case. The increased estimation accuracy is one of the advantages of using panel data. Another major advantage is the ability to deal with unobserved heterogeneity. Such heterogeneity is important in general, and it is particularly relevant in the type of noisy panels that we have in mind where typically the regressors explain only a small fraction of the variation in the dependent variable (see Capelle-Blanchard and Desroziers, 2020, in the context of COVID-19 and stock returns). This consideration motivated Horváth and Hušková (2012), Kim (2014), and Westerlund (2019) to extend the work of Bai (2010) to the case when the stochastic component of the data admits to a common factor, or "interactive effects", representation. Such representations have been shown to be very effective at capturing unobserved heterogeneity, and they are therefore very popular. It is, however, still the same breaking constant-only model that is being considered, and many models of interest involve more general regressors.
Antoch et al. (2019), Baltagi et al. (2016), Boldea et al. (2020), Hidalgo and Schafgans (2017), and Li et al. (2016) allow for general regressors, thereby extending to the regression context what Horváth and Hušková (2012), Kim (2014), and Westerlund (2019) do for the constant-only model. In particular, while Antoch et al. (2019), and Hidalgo and Schafgans (2017) propose tests for the presence of a structural break, Baltagi et al. (2016), Boldea et al. (2020), and Li et al. (2016) take the existence of a break as given and focus instead on the breakpoint estimation problem. But while highly complementary in terms of the methods they propose, the assumptions employed are materially different. Antoch et al. (2019) only require N to be large. However, they assume instead that the factor loadings are negligible, which means that strong forms of cross-section dependence are not permitted. It also means that there is no need to account for the factors, and that their effect on the breakpoint estimation problem is in this sense trivial. The weak cross-sectional dependence condition is maintained also in Hidalgo and Schafgans (2017), who in addition require that N, T → ∞ with N/T 2 → 0, which in practice means that T >> N. Baltagi et al. (2016) do not require negligible loadings and are therefore more general in this regard. The way they do this is by applying the common correlated effects (CCE) approach of Pesaran (2006), which enables consistent estimation of (the space spanned by) the unknown factors. But then Baltagi et al. (2016) require that both N and T are large, which is again rarely the case in practice. Moreover, the authors only provide a consistency result and they do not consider the asymptotic distribution of the estimated breakpoint, which is necessary for the construction of confidence intervals with correct asymptotic coverage. The same critique applies to the paper of Li et al. (2016), which uses the principal components method instead of CCE to estimate factors. Boldea et al.
(2020) also do not consider the asymptotic distribution of their estimated breakpoint, although in their paper T is fixed. Their approach is similar to the one of Antoch et al. (2019) in the sense that the estimation is carried out while ignoring the factors. This simplicity does, however, come at a cost in terms of additional restrictive assumptions. Boldea et al. (2020) do not go as far as Antoch et al. (2019) and require negligible loadings, but they do assume that the omitted variables bias caused by the factors is time-invariant, up to the breakpoint, which limits the type of factors that can be permitted. Motivated by the above discussion, the present paper develops tools that enable researchers to both test for the presence of a common structural break, and to infer the breakpoint of an existing break. The unobserved heterogeneity is assumed to have a common factor structure that is handled by using a version of the CCE approach, which is similar yet clearly distinct from the one employed by Baltagi et al. (2016). The reason for focusing on CCE as opposed to the otherwise so popular principal components method is in part because of the extreme simplicity with which the factors are estimated in CCE, in part because CCE is valid even if T is fixed (see Westerlund et al., 2019). Needless to say, this last feature, which is not exploited by Baltagi et al. (2016), is an important advantage when wanting to entertain the possibility that T might not be large. The idea, which is laid out along with our model and assumptions in Section 2, is to use the cross-sectional averages of the regressors to estimate the unknown common factors, and to simply augment the regression model with these averages. We begin by considering the problem of estimating the unknown breakpoint given that a break has occurred. This is done in Section 3. Most papers in the literature are based on the OLS breakpoint estimator, and so is this paper.
However, instead of minimizing the OLS residuals, which will generally lead to inconsistency because of the unattended factors (Kim, 2011), we minimize the CCE residuals. We focus on the results for the case when the magnitude of the break is bounded from above and below, although we also allow for diverging and shrinking breaks. Moreover, T can be fixed or tending to infinity. According to the results, the proposed breakpoint estimator is consistent as N → ∞ with T fixed or as N, T → ∞ with T/N → 0, and the rate of convergence is given by 1/N. The asymptotic distribution of the breakpoint estimator is obtained under the same set of conditions on N and T, and is used to construct confidence intervals for the true breakpoint. As far as we are aware, this paper is the first to provide the rate of convergence and asymptotic distribution in the presence of common factors, and it is the first to establish consistency when T is fixed. While in Section 3 we assume that a break has occurred, in Section 4 we instead consider the problem of testing for the presence of a break. While very common in the literature, CUSUM-based test statistics like the one of Hidalgo and Schafgans (2017) can suffer from low power in certain directions (see, for example, Andrews, 1993). In this paper, we therefore follow the recommendation of Andrews (1993), and consider two Wald-type test statistics. One is designed to test the null hypothesis of no break against the alternative of a known breakpoint, while in the other the same null is tested against the alternative of a break at some unknown date. To the best of our knowledge, these tests are the first that enable break testing in the presence of common factors. The asymptotic analysis reveals that, while the consistency and asymptotic distribution of the breakpoint estimator only require N → ∞, the asymptotic distributions of the Wald test statistics are generally not free of nuisance parameters unless N, T → ∞ with T/N → 0.
Hence, in terms of the size of T, testing for the presence of a structural break is more demanding than estimating the breakpoint. This is what theory tells us. According to the Monte Carlo results reported in the online appendix, however, the new toolbox tends to perform well even if T is as small as 10, provided that N is large enough. Hence, even if in theory the Wald tests require T to be large, in small samples this requirement does not seem very critical. Section 5 is concerned with our empirical application to the relationship between stock returns and COVID-19, which is motivated in part by the many recent calls for econometric research into the effects of the pandemic (see, for example, the recent special issue of the Journal of Econometrics), in part by existing empirical research. By the end of February, 2020, COVID-19 had led to a world-wide drop in demand, which in turn brought down investment and employment. While stock markets initially reacted to news of the pandemic by losing substantial value, they quickly regained the vast majority of this loss. The fact that this rebound took place even though the numbers of new cases and deaths were still rising is suggestive of structural change. Most studies of the stock market reaction to COVID-19 either ignore this possibility altogether or split their sample into subperiods based on major events (see, for example, Capelle-Blanchard and Desroziers, 2020, and Ramelli and Wagner, 2020). This means that the breaks are treated as known, if treated at all, which is risky, as misplaced breaks are just as problematic as omitted breaks. In the empirical application of the present paper, we offer a more general treatment. This is done by applying the new toolbox to a sample covering 61 countries across 38 weeks, from January 3 to September 25, 2020, which means that T is relatively small.
According to the results, the COVID-19-stock return relationship has been affected by the presence of a structural break in the first week of April, at about the same time as most central banks announced that they were going to intervene to save the global economy from collapse. While before the break stock markets reacted significantly to news about the pandemic, after the break stock markets became insensitive to such news. This suggests that central banks play a central role in shaping stock market behaviour in pandemics. Section 6 concludes the paper. All proofs are provided in the online appendix. In this paper, we consider the following linear panel data model with a structural break at time b: y_i,t = β′x_i,t + δ′z_i,t(b) + e_i,t, (2.1) where i = 1, ..., N and t = 1, ..., T index the cross-sectional units and time periods, respectively. The k × 1 vector x_i,t contains the regressors and the r × 1 vector z_i,t(b) is defined as z_i,t(b) = R′x_i,t I(t > b), (2.2) where I(t > b) is the indicator function taking the value one when t > b and zero otherwise, and R is a k × r selection matrix of zeros and ones with full column rank r that picks out the elements of x_i,t whose coefficients are subject to structural change. For example, if k > r and R = (0_r×(k−r), I_r)′, then (2.1) is a partial structural change model in which only the last r regressors in x_i,t appear in z_i,t(b). If, on the other hand, k = r, then R = I_r, and so the model is one of pure structural change. In the empirical application of Section 5, y_i,t is stock returns for country i in week t, and x_i,t is comprised of controls and COVID-19 related variables, where the coefficients of the COVID-19 variables may be breaking. As we explain in that section, the model can easily be generalized to include multiple structural changes. The coefficient vectors β and δ are of dimension k × 1 and r × 1, respectively. The error e_i,t is assumed to admit to a factor structure, which means that it is allowed to be correlated across i.
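The role of the selection matrix R can be illustrated with a small numerical sketch (the data and dimensions below are hypothetical, chosen only for illustration). With k = 3 regressors and r = 1, setting R = (0_1×2, I_1)′ makes only the last coefficient break after period b:

```python
import numpy as np

def make_z(x, b, R):
    """z_{i,t}(b) = R' x_{i,t} * I(t > b) for one unit's (T, k) regressor matrix."""
    T = x.shape[0]
    indicator = (np.arange(1, T + 1) > b).astype(float)  # I(t > b), t = 1, ..., T
    return (x @ R) * indicator[:, None]                  # (T, r)

# Partial structural change: k = 3 regressors, only the last one (r = 1) breaks.
T, k, r = 6, 3, 1
R = np.vstack([np.zeros((k - r, r)), np.eye(r)])         # R = (0_{r x (k-r)}, I_r)'
x = np.arange(T * k, dtype=float).reshape(T, k)          # toy regressors
z = make_z(x, b=3, R=R)
# Rows t = 1, 2, 3 of z are zero; from t = 4 on, z equals the third regressor.
```

With k = r and R = I_r the same function would reproduce the pure structural change case, since z then duplicates every regressor after the break.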
Specifically, e_i,t = γ_i′f_t + ε_i,t, (2.3) where f_t and γ_i are m × 1 vectors of common factors and factor loadings, respectively, and ε_i,t is an idiosyncratic error term. In our empirical application, the presence of f_t in (2.3) is just natural because many well-known models in finance, like the capital asset pricing model (CAPM) and Fama-French (FF) three factor models, imply that returns should have a linear factor structure. In this section and the next, we assume that all the factors are unknown. In Section 5, we demonstrate how the toolbox can be implemented when some of the factors are observed, as when the CAPM holds and one has data on (world) market returns. We want to entertain the possibility that the factors are correlated with the regressors. We therefore follow Pesaran (2006) and assume that x_i,t = Γ_i′f_t + v_i,t, (2.4) where Γ_i is an m × k factor loading matrix and v_i,t is a k × 1 vector of idiosyncratic errors. For later use, it is convenient to write the above model in matrix form by stacking the time series observations for each cross-section. The stacked version of (2.1) is given by y_i = X_iβ + Z_i(b)δ + e_i, where y_i = (y_i,1, ..., y_i,T)′ and e_i = (e_i,1, ..., e_i,T)′ are T × 1, X_i = (x_i,1, ..., x_i,T)′ is T × k, and Z_i(b) = (z_i,1(b), ..., z_i,T(b))′ is T × r. Also, e_i = Fγ_i + ε_i, where F = (f_1, ..., f_T)′ and ε_i = (ε_i,1, ..., ε_i,T)′ are T × m and T × 1, respectively. The stacked version of (2.4) is given by X_i = FΓ_i + V_i, where V_i = (v_i,1, ..., v_i,T)′ is T × k. The model assumptions depend to a large extent on whether we are estimating the breakpoint or testing for its existence. Assumptions 2.1 and 2.2 will, however, be maintained throughout this paper. Assumption 2.1. (a) v_i,t is a covariance stationary process that is independent across i with absolutely summable autocovariances and finite fourth-order moments, where ‖A‖ = √tr(A′A) is the Frobenius norm of any matrix A. (b) ε_i,t is a covariance stationary process that is independent across i with absolutely summable autocovariances and finite fourth-order moments. (c) ε_i,t and v_j,s are independent for all i, j, s and t. Assumption 2.2. (a) T⁻¹F′F is positive definite with probability approaching one (w.p.a.1) for all T. (c) f_t is independent of ε_i,s and v_i,s for all i, s and t.
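To fix ideas, the data-generating process in (2.1), (2.3) and (2.4) can be simulated in a few lines. All dimensions, parameter values and variable names below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, k, m = 200, 20, 2, 1          # units, periods, regressors, factors
b0 = 10                             # true breakpoint
beta = np.array([1.0, 0.5])
delta = np.array([-1.0, 0.8])       # pure structural change (r = k)
F = rng.standard_normal((T, m))     # common factors f_t

Y = np.empty((N, T))
X = np.empty((N, T, k))
I_break = (np.arange(1, T + 1) > b0).astype(float)
for i in range(N):
    Gamma_i = rng.standard_normal((m, k))  # loadings of x_{i,t} on f_t, eq. (2.4)
    gamma_i = rng.standard_normal(m)       # loadings of e_{i,t} on f_t, eq. (2.3)
    X[i] = F @ Gamma_i + rng.standard_normal((T, k))        # x_{i,t} = Gamma_i' f_t + v_{i,t}
    e_i = F @ gamma_i + rng.standard_normal(T)              # e_{i,t} = gamma_i' f_t + eps_{i,t}
    Y[i] = X[i] @ beta + (X[i] * I_break[:, None]) @ delta + e_i   # eq. (2.1)
```

Because f_t enters both X[i] and e_i, the regressors are correlated with the error term, which is exactly why pooled OLS on (2.1) alone is inconsistent here.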
Assumption 2.1 is standard in the interactive effects literature (see, for example, Baltagi et al., 2016). The only exception known to us is Baltagi et al. (2017). They do not allow for cross-section dependence, but they do allow x_i,t and ε_i,t to be unit root non-stationary. We allow for serial correlation and possibly even unit roots in f_t (more later) and hence in x_i,t (and y_i,t), but not in v_i,t and ε_i,t. Bai (1997a) has shown that the existence of both serially correlated errors and lagged dependent variables leads to inconsistent estimation of the break date. Assumption 2.1 (c) therefore assumes that x_i,t is strictly exogenous. Without unit roots, Assumptions 2.1 (a)-(c) are the same as Assumptions 1-3 in Baltagi et al. (2017). Assumption 2.2 (a) and (b) are met if f_t is stationary and not collinear, which is again a standard requirement in the literature (see Baltagi et al., 2016, Assumption 8). Stationarity is not necessary, though. Note in particular how stationarity is not required if T is fixed. In fact, f_t does not even have to be stochastic but can also be deterministic. Assumption 2.2 (c) is an identifying condition that is not particularly restrictive. It ensures that f_t is the only source of cross-section dependence. Let us denote by b_0 the true value of b. The purpose of this section is to make inference regarding this parameter. Assumption 3.1 requires only that each regime contains at least as many observations as the number of free parameters. It is therefore very general. Baltagi et al. (2016) and Westerlund (2019) allow for common factors in very much the same way as we do. However, they require that the loadings follow certain probability laws, and that they are independent of all other random elements of the model. In this section, we treat the loadings as fixed, which means that we do not make any assumption regarding their distribution or their correlation with the other random elements of the model.
The main restriction is that x_i,t must load on the same factors as y_i,t, and that the number of regressors must be at least as large as the number of factors. This ensures that the factors can be estimated by applying CCE to x_i,t, as we will now explain. Unlike in Antoch et al. (2019), where the factor loadings are assumed to be negligible, under our conditions valid inference on b_0 is not possible without proper accounting for f_t. The reason is that the factors make x_i,t correlated with e_i,t, which means that (2.1) cannot be estimated consistently using OLS. However, we note that x_i,t has a pure factor model representation, suggesting that the factors can be estimated using methods designed for such models. In this paper, we follow Baltagi et al. (2016), and use the CCE approach of Pesaran (2006), which is based on using the cross-sectional average of the observables to estimate the space spanned by f_t. The difference is that we do not include the cross-sectional average of y_i,t, which in the current context is uninformative regarding f_t. This is shown in the online appendix. Hence, in contrast to Baltagi et al. (2016), in the current paper we only use x̄_t, where Ā_t = N⁻¹ ∑_{i=1}^N A_i,t is the cross-sectional average of any variable A_i,t. In view of (2.4), this average can be written as x̄_t = Γ̄′f_t + v̄_t. (3.1) Let A⁺ denote the Moore-Penrose inverse of any matrix A. If Assumption 3.2 is true, so that Γ̄ has full row rank, the Moore-Penrose inverse of Γ̄ is given by Γ̄⁺ = Γ̄′(Γ̄Γ̄′)⁻¹. Hence, Γ̄Γ̄⁺ = I_m, which in turn means that (3.1) can be solved for f_t by left-multiplication by Γ̄⁺′. It follows that if Assumption 2.1 is also true, so that v̄_t = o_p(1), then x̄_t = Γ̄′f_t + o_p(1). We say that x̄_t is "rotationally consistent" for f_t, because it is consistent up to an invertible rotation matrix. Hence, by augmenting (2.1) with x̄_t, provided that N is large, we can control for f_t, and in this way break the correlation between the regressors and the error term.
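The rotational-consistency argument can be checked numerically: with m ≤ k and Γ̄ of full row rank, the cross-sectional averages x̄_t span the factor space almost perfectly once N is large. A sketch under assumed dimensions and distributions (none of the numbers come from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, k, m = 2000, 15, 2, 1
F = rng.standard_normal((T, m))                    # true factors
Gamma = rng.standard_normal((N, m, k)) + 1.0       # loadings with nonzero mean, so that
                                                   # Gamma_bar has full row rank m
X = F[None] @ Gamma + 0.5 * rng.standard_normal((N, T, k))   # x_{i,t} = Gamma_i' f_t + v_{i,t}

X_bar = X.mean(axis=0)                             # (T, k) cross-sectional averages
# X_bar = F @ Gamma_bar + v_bar with v_bar = O_p(N^{-1/2}), so projecting the
# true factor on X_bar should leave almost no residual variation.
coef, *_ = np.linalg.lstsq(X_bar, F, rcond=None)
r2 = 1 - ((F - X_bar @ coef) ** 2).sum() / (F ** 2).sum()
```

Here r2 is essentially one: the averages recover f_t up to the rotation Γ̄′, with an error that vanishes at rate N^(−1/2).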
Define X̄ = (x̄_1, ..., x̄_T)′, let M_X̄ = I_T − X̄(X̄′X̄)⁺X̄′, and let Ã_i = M_X̄A_i for any T-rowed matrix A_i. The augmented model to be estimated can now be written as ỹ_i = X̃_iβ + Z̃_i(b)δ + ẽ_i. This model can be stacked also over the cross-section, giving (3.4). With b_0 known, the CCE estimator of δ, which is identically the OLS estimator obtained from (3.4), is denoted δ̂(b), and SSR(b) denotes the associated sum of squared residuals. Of course, in many scenarios of empirical relevance, b_0 is not known. The estimator that we will use in its stead is obtained by minimizing SSR(b) over all possible values of b; b̂ = arg min_{b∈B} SSR(b). We begin by showing that b̂ is consistent. For this to be possible, however, in addition to Assumptions 2.1-3.2, we need to ensure that the inverse appearing in δ̂(b) is well-behaved. This is where Assumption 3.3 comes in. It demands that the regressors in x_i,t have enough variation across both i and t after projecting out all variation that can be explained by f_t. This rules out cross-section-invariant regressors in x_i,t. Assumption 3.3. (a) (NT)⁻¹X̃′X̃ is positive definite w.p.a.1 for all N and T. We are now ready to state our first main result. Theorem 3.1. Suppose that Assumptions 2.1, 2.2 and 3.1-3.3 are met. Then, the following results hold: Theorem 3.1 states that b̂ is consistent and that the rate of convergence is ‖δ‖⁻²N⁻¹ or better. The fact that consistency is possible even if T is fixed is very useful in practice, because it means that breaks can be detected very quickly. When T → ∞ it matters whether m < k or m = k. Note in particular how the rate of convergence is faster when m < k than when m = k, and that this is true even if T/(N‖δ‖²) → 0 under m = k, so that the conditions for (a) and (b) are the same. The reason is that when m < k, unlike what one would expect based on standard theory for regressions in stationary variables, the effect of the redundant cross-sectional averages contained in X̃ is not negligible but impacts the asymptotic theory in very much the same way as unit root regressors do in a spurious regression.
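A bare-bones version of this estimation strategy, projecting each unit's data on the orthogonal complement of the cross-sectional averages and then minimizing the pooled sum of squared residuals over candidate breakpoints, can be sketched as follows. The simulated data, dimensions and parameter values are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, k, m = 300, 12, 2, 1
b0 = 6                                                    # true breakpoint
beta, delta = np.array([1.0, 0.5]), np.array([1.5, -1.0])
F = rng.standard_normal((T, m))
Gamma = rng.standard_normal((N, m, k))
gamma = rng.standard_normal((N, m))
X = F[None] @ Gamma + rng.standard_normal((N, T, k))      # x_{i,t} = Gamma_i' f_t + v_{i,t}
E = gamma @ F.T + 0.5 * rng.standard_normal((N, T))       # e_i = F gamma_i + eps_i
Ib0 = (np.arange(1, T + 1) > b0).astype(float)
Y = (X * beta).sum(-1) + ((X * Ib0[None, :, None]) * delta).sum(-1) + E

X_bar = X.mean(axis=0)                                    # CCE proxies for f_t
M = np.eye(T) - X_bar @ np.linalg.pinv(X_bar)             # M_Xbar, annihilates X_bar
Yt = Y @ M                                                # M is symmetric: rows are M y_i
Xt = M @ X                                                # broadcasts over units
y = Yt.reshape(-1)

def ssr(b):
    """Pooled OLS SSR with regressors (X_tilde_i, Z_tilde_i(b))."""
    Ib = (np.arange(1, T + 1) > b).astype(float)[:, None]
    W = np.vstack([np.hstack([Xt[i], M @ (X[i] * Ib)]) for i in range(N)])
    coef, *_ = np.linalg.lstsq(W, y, rcond=None)
    u = y - W @ coef
    return float(u @ u)

b_hat = min(range(1, T), key=ssr)                         # argmin over candidate breaks
```

With a break of this magnitude and N = 300, b_hat should sit at or immediately next to the true date b0 = 6, illustrating the fixed-T, large-N consistency that Theorem 3.1 formalizes.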
Moreover, the redundant averages are correlated with the breaking regressors in Z_i(b), and this increases the signal coming from Z̃_i(b). As far as we are aware, this is the first time redundant regressors have been shown to lead to increased accuracy in breakpoint estimation. If we are not interested in the distinction between m < k and m = k, the results contained in Theorem 3.1 can be stated as in Corollary 3.1. Corollary 3.1. Suppose that the conditions of Theorem 3.1 are met, and that T/(N‖δ‖²) → 0 and √N‖δ‖ → ∞. Then, as N → ∞ with T fixed, or as N, T → ∞, (3.10) Remark 3.1. As already mentioned, Baltagi et al. (2016) consider a model that is very similar to ours and that is estimated using CCE. They show that b̂ is consistent for b_0; however, they only consider the case when N, T → ∞, and they do not provide the rate of convergence. Moreover, the proof that they provide is based on the assumption that (ȳ_t, x̄_t′)′ is rotationally consistent, which we show in the online appendix not to be correct. Bai (2010) is the only other paper that we are aware of that proves consistency under both fixed and large T; however, his model is very simple in the sense that it does not contain any regressors except for a breaking mean. Under stationarity, the model considered by Baltagi et al. (2017) is very similar to ours but without interactive effects. The rate given in Corollary 3.1 is consistent with the one given in their Theorem 2. Remark 3.2. Corollary 3.1 requires that T/(N‖δ‖²) → 0 and √N‖δ‖ → ∞. The latter condition is similar to Assumption 2 in Bai (2010), and is tantamount to requiring ‖δ‖ = N^(−α)‖δ_0‖ with α < 1/2 and ‖δ_0‖ ∈ (0, ∞). Hence, while we allow for it, we do not require ‖δ‖ → 0, which is in contrast to studies such as Antoch et al. (2019), where the magnitude of the break must be shrinking. The condition that T/(N‖δ‖²) → 0, which is similar in spirit to Assumption 2 in Baltagi et al.
(2016), restricts the relative rate of expansion of N and T, and is only needed when T is large. For example, if ‖δ‖ = O(1), then we require that T/N → 0, as otherwise the error coming from the estimation of the factors will tend to accumulate as we sum over time. We also see that the larger is ‖δ‖, the weaker the condition on T/N, as is to be expected, because a larger break is easier to detect. As Corollary 3.1 makes clear, provided that T/(N‖δ‖²) → 0 and √N‖δ‖ → ∞, consistency holds irrespectively of whether m = k or m < k, which is of course very useful in practice, as m is unknown here. This invariance is reflected also in the asymptotic distribution of the estimated break date, as our next theorem, Theorem 3.2, makes clear. Before we state the theorem, however, we need to introduce a few more conditions, which are given in Assumption 3.4, where Ω_X is positive definite. Assumption 3.4 is restrictive, but is similar to the conditions used in the previous literature (see, for example, Bai, 1997a, 2010). It demands that ε_i,t is serially uncorrelated and that the large-N moments of x_i,t do not depend on time. While indeed quite strong, because of the presence of f_t, the first condition does not rule out serial correlation in e_i,t. The second requirement is stronger than necessary, and can be relaxed to accommodate moments that are constant within break regimes but potentially varying between regimes, as in, for example, Bai (1997a), and Perron and Yamamoto (2013). Theorem 3.2. Suppose that Assumptions 2.1, 2.2 and 3.1-3.4 are met, and that T/(N‖δ‖²) → 0 and √N‖δ‖ → ∞. Then, as N → ∞ with T fixed, or as N, T → ∞, b̂ − b_0 converges in distribution to arg max_v(−|v|/2 + B(v)), up to a scaling factor that depends on Ω_X, Φ_X and δ. Remark 3.3. Most papers stop at consistency and do not report the asymptotic distribution of the estimated breakpoint. There are, however, a few exceptions. The study of Bai (2010) is one of them. He provides the asymptotic distribution of the estimated breakpoint for a model with a break-in-mean only, and no other regressors or error cross-section dependence.
Kim (2011) extends this analysis to a model that allows for a break in both the mean and trend, and where the errors have a factor structure. Baltagi et al. (2018) allow for more general regressors. However, they assume instead that the errors are cross-sectionally independent. All three papers require that T is large in their distributional analyses. As far as we are aware, the asymptotic distribution reported in Theorem 3.2 is the first to allow for general regressors and common factors in panels where only N is required to be large. Theorem 3.2 can be used to construct confidence intervals for b_0 with asymptotically correct coverage. Under Assumption 3.4, consistent estimators of Ω_X and Φ_X can be constructed in the obvious manner using σ̂²_ε,i = T⁻¹ε̃_i′ε̃_i, where the T × 1 vector ε̃_i is the i-th block of the NT × 1 residual vector ε̃ = (ε̃_1′, ..., ε̃_N′)′ = M_X̃(Ỹ − Z̃(b̂)δ̂(b̂)). The probability density function of arg max_v(−|v|/2 + B(v)) is known analytically and is given in Bai (1997a). Let us denote by c_α the (1 − α/2)-th percentile of this distribution function, and let ⌊x⌋ be the integer part of x. In analogy to Bai (1997a), an asymptotically correctly sized 100(1 − α)% confidence interval for b_0 can now be constructed as in (3.14). Testing for the existence of a structural break is a key first step before estimating the date of the break. In terms of the parameters of (2.1), the null hypothesis of no structural change is given by H_0: δ = 0_r×1. The alternative hypothesis can be formulated in (at least) two ways. We begin by considering the alternative that there is a single structural change (δ ≠ 0_r×1) at a given date b, which may or may not be equal to b_0. This hypothesis, henceforth denoted H_1(b), can be tested using the following Wald test statistic: W(b) = δ̂(b)′Σ̂_δ(b)⁻¹δ̂(b), where Σ̂_δ(b) is a consistent estimator of the asymptotic covariance matrix of δ̂(b), whose construction will be discussed later.
Interestingly, W(b) will not have the expected asymptotic chi-squared distribution with r degrees of freedom, henceforth denoted χ²(r), under H_0. The intuition behind this result goes as follows. As already pointed out, because of the presence of f_t in both (2.3) and (2.4), X_i is generally endogenous. The exception is in large-N samples, since here X̄ is rotationally consistent for F, and in this sense X̃_i is "asymptotically exogenous". The problem is that while the use of M_X̄ takes care of the factors in X_i, it does not take care of those in Z_i(b), which are breaking. This is a problem because it means that while X̃_i is asymptotically exogenous, Z̃_i(b) is not, which in turn invalidates inference based on W(b). Because of the consistency of b̂, in Section 3 the endogeneity of Z̃_i(b) was not an issue. Of course, if we knew that there was a break present, as we did in Section 3, there would be no need to test for it in the first place. The situation considered here is therefore quite different and this requires some changes. The first change we make when compared to Section 3, which is quite natural given the discussion of the last paragraph, is to replace X̄ with H̄(b) = (X̄, Z̄(b)), where Z̄(b) = (z̄_1(b), ..., z̄_T(b))′. The definitions of δ̂(b) and W(b) are adapted accordingly. The idea here is that by augmenting X̄ with Z̄(b), we can eliminate the factors in both X_i and Z_i(b), which means that the endogeneity issue is gone. For this to happen, however, we need some additional assumptions. In order to appreciate this, note that (f_t′, f_t′I(t > b))′ are the factors in (x_i,t′, z_i,t(b)′)′. Hence, provided that rank(Γ̄) = rank(Γ̄R) = m, such that the corresponding (k + r) × 2m loading matrix has full column rank 2m ≤ k + r, analogous to the discussion of Section 2, (x̄_t′, z̄_t(b)′)′ is rotationally consistent for (f_t′, f_t′I(t > b))′. We also need to restrict the type of heterogeneity that can be permitted in γ_i.
The way we do this is by assuming that γ_i admits to a random coefficient representation, similarly to, for example, Pesaran (2006) and Karabiyik et al. (2017). Assumption 4.1 below replaces Assumption 3.2 and is enough to ensure that the effect of the estimation of f_t is asymptotically eliminated. Assumption 4.1. (a) rank(Γ̄) = m ≤ k and rank(Γ̄R) = m ≤ r for all N, including N → ∞. (b) ‖Γ̄‖ < ∞. (c) γ_i is independent across i, and of ε_j,t, v_j,t and f_t for all i and j, with E(γ_i) = γ and E(‖γ_i‖²) < ∞. Another difference when compared to Section 3, where T could be allowed to be fixed, is that here both N and T have to be large. The basic reason for this is that while before we took the break as given, which meant that we could make use of the consistency of b̂ to construct asymptotically valid confidence intervals, here we do not know if there is a break present and so we cannot rely on said consistency. This means that the asymptotic distribution of W(b) is generally not nuisance parameter free. The main exception is if T is large. But if T is large, we can relax the serial uncorrelatedness and time-invariant moment conditions, which is done in Assumption 4.2. We also require that Assumption 3.3 holds when H̄(b) is used in place of X̄. A third and final difference when compared to Section 3 is about the range of values considered for b. In Section 3, we only required that b ∈ B, which meant that in the large-T case b/T could take on any value in [0, 1]. Here this is not possible, for it is only when b/T is bounded away from zero and one that W(b) converges in distribution (see Andrews, 1993, for a discussion). In this section, we therefore assume that b ∈ B_ℓ, where B_ℓ contains only those breakpoints whose fractions b/T are bounded away from zero and one by a trimming parameter ℓ. The main implication of this in practice is that we have to truncate, or "trim", the range of values considered for b at both beginning and end.
A very common way to do this is to set ε = 0.15, so that the first and last 15% of the observations are discarded (see, for example, Andrews, 1993, and Bai, 1997a). The condition that b/T should be bounded away from zero and one should hold for all b, including b0. The following assumption reflects this. We now have all the conditions we need in order to obtain the asymptotic distribution of W(b), which is given in Theorem 4.1, where J(τ) is an r × 1 vector standard Brownian motion on τ ∈ T ⊂ (0, 1), and the stated convergence holds as N, T → ∞ with T/N → 0. This result holds for all b satisfying τ ∈ T ⊂ (0, 1), including b0. Remark 4.1. It is important to note that while we do require T to be large, N is still the most important dimension of the data. In the online appendix, we use Monte Carlo simulations as a means to evaluate the importance of the large-T requirement. According to the results, the Wald tests perform well even if T is as small as 10. The large-T requirement is therefore not very important in applied work. So far we have taken the date of the break as given. If the date of the break is unknown, as it usually is in practice, then H0 can be tested against the alternative hypothesis of a single structural break at some unknown date b ∈ B_ε, which we can formulate as H1 : ∪_{b∈B_ε} H1(b). Many researchers follow Andrews (1993) and take the supremum of the Wald test statistics over all possible breakpoints, and therefore so shall we. The test statistic that we will be considering is therefore given by SW = sup_{b∈B_ε} W(b). The asymptotic distribution of this test statistic depends on the distribution of W(b), and is presented in the following corollary to Theorem 4.1. Corollary 4.1. Suppose that H0 holds, and that the conditions of Theorem 4.1 are met. Then the limiting distribution of SW obtains as N, T → ∞ with T/N → 0. The limiting distribution in Corollary 4.1 is the supremum of the square of a standardized tied-down Bessel process of order r, which has appeared previously in Andrews (1993), and Hidalgo and Schafgans (2017), among others.
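As a concrete illustration of the sup-Wald construction, the following sketch computes SW over the trimmed range B_ε. The function `wald_stat` is a placeholder standing in for the statistic W(b) defined in the text, not part of the paper's own code:

```python
import math

def sup_wald(wald_stat, T, trim=0.15):
    """Supremum of Wald statistics over the trimmed break range.

    wald_stat : callable mapping a candidate break date b to W(b)
                (a stand-in for the statistic defined in the text).
    T         : number of time periods.
    trim      : trimming fraction epsilon; candidates are restricted
                to trim*T <= b <= (1 - trim)*T, as in Andrews (1993).
    """
    lo = math.ceil(trim * T)
    hi = math.floor((1 - trim) * T)
    stats = {b: wald_stat(b) for b in range(lo, hi + 1)}
    b_hat = max(stats, key=stats.get)  # the maximizer doubles as a break date estimate
    return stats[b_hat], b_hat
```

With T = 38 and ε = 0.15, as in the empirical application below, the candidate break dates are b = 6, ..., 32.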
The critical values only depend on r and ε, and can be found in Table I of Andrews (1993). The above results rely on the availability of a consistent estimator Σ̂_δ(b) of the asymptotic covariance matrix of δ̂(b), which is given by Σ_δ = Ω_V^(-1) Ψ_V Ω_V^(-1). A natural approach in the current large-T setting is to take a kernel-based estimator, in which k(·) is a real-valued kernel, S_T is the bandwidth parameter, and ε̂_{i,t} are the estimated regression errors, as in Pesaran and Tosetti (2011). In the special case when ε_{i,t} is independently and identically distributed across both i and t with variance σ²_ε, Σ_δ reduces to Σ_δ = σ²_ε Ω_V^(-1), which can in turn be estimated by replacing σ²_ε and Ω_V with their sample counterparts. Once the presence of a break has been established and its location determined, it is possible to make inference regarding θ = (β′, δ′)′. Let us therefore define w_{i,t}(b) = (x_{i,t}′, z_{i,t}(b)′)′, such that (2.1) can be written as in (4.14). The CCE estimator of θ is given by θ̂ = θ̂(b̂), which is asymptotically normal conditionally on F, which means that it supports standard normal and chi-squared inference. If T → ∞, then the asymptotic distribution of θ̂ is the one given by Theorem 4 of Pesaran (2006). COVID-19 broke out in China in December 2019. Roughly one year later, WHO (2021) reports 94 million confirmed cases and over two million deaths. We also know that because of lockdowns, travel restrictions and social distancing policies, in 2020 global GDP dropped by 4.2% and real world trade contracted by 10.3% (OECD, 2020). The economic impact of the pandemic has therefore been substantial. This is what we know. There are some signs of recovery in the years to come; however, the global outlook is extremely uncertain, even in the short term. As an indication of this, the OECD world GDP projections for 2021 range from −2.75% to 5%, depending on, among other things, the evolution of the pandemic, the actions taken to contain the spread of the virus and their economic impact, and the time until effective vaccines can be deployed.
Hence, even now, one year after the outbreak, much is uncertain. Still, the uncertainty we face today is nothing compared to that of one year ago. At that time, little was known about the new virus, but it was clear that it was very infectious and deadly, as, in contrast to previous infectious disease outbreaks, most countries began to announce the number of cases and deaths on a daily basis. Many were shocked by how quickly these numbers were increasing. Governments scrambled with emergency actions, such as closing schools and workplaces, travel bans, or even complete curfews, to try to contain the spread. However, since their effectiveness was far from clear and they made it impossible for firms and workers to continue their operations without knowing if and when they would be compensated, these actions added to the already existing uncertainty, leading to widespread public fear (see Mamaysky, 2020, and Phan and Narayan, 2020). This was visibly apparent with news coming in of supermarkets being stocked out of toilet paper (Aggarwal et al., 2020). In times of extreme uncertainty, stock markets often respond dramatically to news about the underlying economic and market conditions (see Mamaysky, 2020, and Wagner, 2020). Stock markets all over the world reacted similarly. The unprecedented stock market behaviour in the initial stage of COVID-19 has attracted considerable attention not only in the news but also in research. The bulk of the evidence seems to suggest that stock markets have generally responded negatively, although the channel through which this effect works is still largely unknown. Ashraf (2020a) uses data for 64 countries and finds that stock prices have reacted negatively to the pandemic, but only when it is measured by the number of confirmed cases, as opposed to the death count. This is largely in agreement with the results of Erdem (2020). Ashraf (2020b) employs data for 77 countries.
He finds that the COVID-19 effect operates not only through the number of cases, but also through government actions, such as social distancing measures, containment and health responses, and economic support packages. Similar findings have been reported by Aggarwal et al. (2020), and Capelle-Blancard and Desroziers (2020).7 The purpose of the current application is to contribute to the above-mentioned literature. This is done in three ways. First, we account for the rebound of returns. Faced with near economic collapse, starting with the Federal Reserve's decision on March 16 to buy USD 700 billion worth of US Treasury bonds and mortgage-backed securities, central banks around the world announced aggressive quantitative easing programs (see Hartley and Rebucci, 2020). These announcements were followed by an abrupt increase in stock prices. The US S&P500 stock market index, for example, increased by 29% between March 24 and April 17, a surge that left the index back where it stood in August of 2019, when the US economy was booming.6 The fact that this rebound took place while the pandemic was still ongoing is suggestive of a structural break. Most studies ignore this.

6 Not all news was about the economy and some of it was just rumors, but it still attracted considerable attention and was therefore important in setting the public sentiment at the time. For example, on February 17, a run on toilet paper in Hong Kong was mentioned for the first time, and became a highly contagious story. Some people in locked-down China reportedly were reduced to searching for minnows and ragworms to eat. In Italy, there were stories of medical workers in overwhelmed hospitals being forced to choose which patients would receive treatment (Shiller, 2020).

7 Many studies focus on single countries. There are also those that focus on the volatility of stock returns, as opposed to stock returns themselves. These are not reviewed here.
The only exceptions known to us are Capelle-Blancard and Desroziers (2020), Mamaysky (2020), and Ramelli and Wagner (2020), who divide their samples into subperiods based on major events. The breaks are therefore treated as known, which is risky, as misplaced breaks are just as problematic as omitted breaks. The second contribution is that we account for general forms of unobserved heterogeneity. Many studies recognize the problem but assume that it can be handled using country and period fixed effects. However, fixed effects do not work in general when the pair-wise cross-section covariances of the regression errors differ across countries, and there is plenty of evidence to support this (see, for example, Zhang et al., 2020). Capelle-Blancard and Desroziers (2020) use robust standard errors, but these can only handle weak cross-section dependence. The third contribution is that we account for the smallness of T. As alluded to in the previous paragraph, with COVID-19 being such a recent phenomenon, studies of it are constrained to data sets with a short time span (see, for example, Salisu and Vo, 2020, for a discussion). Some "compensate" by using a relatively high frequency, such as daily data, but not all. To take an extreme example, Aggarwal et al. (2020) use monthly data from December 2019 to May 2020, which means that T = 6. Moreover, even if data are daily, the sub-periods considered are very short. It is therefore important to use appropriate small-T techniques. Our dependent variable is stock returns (RET), which we compute as the log difference of the price index.
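As a sketch of this return computation (the scaling to percent is our assumption, chosen to match the scale of the summary statistics reported later):

```python
import numpy as np

def log_returns(prices):
    """Returns as the log difference of a price index, in percent
    (the percent scaling is an assumption, not stated in the text):
    RET_t = 100 * (log P_t - log P_{t-1})."""
    p = np.asarray(prices, dtype=float)
    return 100.0 * np.diff(np.log(p))
```

Applied country by country to the weekly price indices, this yields one fewer return observation than price observations per country.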
We utilize four control variables: the US dollar exchange rate (ER); stock market volatility (VOL), which we proxy using the Chicago Board Options Exchange (CBOE) volatility index; world market returns (MRET), as measured by the cross-country average of RET; and the US three-month Treasury bill rate (TBILL) (see, for example, Aggarwal et al., 2020, Capelle-Blancard and Desroziers, 2020, Mamaysky, 2020, and Salisu and Vo, 2020, for similar control variable lineups). ER is motivated by the theoretical work of Dornbusch and Fischer (1980), which says that exchange rates influence stock returns because they capture the value of firms' future cash flows. VOL can be motivated in part by its ability to predict returns (see, for example, Bollerslev et al., 2009, and Bollerslev et al., 2015), and in part by the theory of Glasserman et al. (2020), according to which information shocks can lead to large drops in stock prices and increases in volatility. TBILL captures both the risk-free interest rate and the importance of the US in shaping stock markets around the world. The need to control for MRET follows from the CAPM. We use all available measures of COVID-19 that have sufficient time series data. A total of six variables meet this criterion. The first two capture the spread of the virus. They are the number of confirmed cases (CASE) and deaths (DEATH). The next four variables are indices that capture the government response to COVID-19: a government stringency index (STR), a containment and health index (CONT), a government economic support index (ECON), and an overall government response index (RESP). STR records the strictness of government policies that primarily restrict people's behaviour, such as school and workplace closures, stay-at-home requirements, and travel bans. CONT captures mainly social distancing restrictions, but also health system policies such as testing policy, contact tracing, short-term investment in healthcare, and investment in vaccines.
ECON is an index that captures government income support and debt relief. RESP captures all of the above. All indices are on a scale of zero to 100, with a larger value indicating greater stringency, greater commitment to health, greater economic support, and greater overall government response. All data are obtained from Datastream, except for TBILL, which is from the Federal Reserve Bank of St. Louis. The data are weekly and cover N = 61 countries. As in many other empirical scenarios, the number of time periods is limited and cannot be increased other than by the passage of time. We take the largest sample period available to us, which covers T = 38 weeks, from January 3 to September 25, 2020. The smallness of T in this case means that it is important to use techniques that work even if T is not large. The Monte Carlo results reported in the online appendix suggest that the proposed toolbox should work well here. Both theory and empirical observations stress the importance of news (see, for example, Mamaysky, 2020). We therefore follow the bulk of the existing literature and express all regressors in innovation form by taking first differences. Ashraf (2020b) considers the same stringency, containment and health, and economic support indices as we do. While he includes all three indices at the same time, we do not. The reason is that STR, CONT and RESP are highly collinear, with correlations that range from 0.949 to 0.978 (see Capelle-Blancard and Desroziers, 2020, for a similar argument). We therefore include them one at a time. VOL, MRET and TBILL do not vary by country but only by week. We therefore want to treat these as observed common factors, a possibility that we did not consider in Sections 2-4. As pointed out in Section 2, the type of factors that can be permitted under Assumption 2.2 is very broad.
This suggests that there is no need to discriminate between known and unknown factors, but that one can just as well treat them all as unknown factors to be estimated from the data. In fact, this is the main rationale for writing (2.1) and (2.4) in terms of (the unknown) f_t only. The main drawback of this fully unknown factor treatment is that it puts strain on the Assumption 3.2 condition that m ≤ k, as k is fixed and additional factors increase m even if they are known. For this reason, it may be preferable to be able to distinguish between known and unknown factors. Fortunately, in CCE this is very easy. Let us therefore assume that there are two sets of factors, f_t and d_t, where f_t is an m × 1 vector of unknown factors, just as before, while d_t is an n × 1 vector of known common regressors, which in this section comprises a constant (country fixed effects), VOL, MRET and TBILL. Hence, the total number of factors is now equal to m + n, out of which n are known. The model is the same as before, but with (2.1) and (2.4) augmented with d_t, whose coefficient matrices α_i and A_i are n × 1 and n × k, respectively. Provided that Assumption 3.2 holds, so that rank(Γ) = m ≤ k, similarly to (3.2), we can show that (d_t′, x̄_t′)′ is rotationally consistent for (d_t′, f_t′)′. The above discussion suggests that in terms of the known factor-augmented version of (2.1) in (5.1), in this section y_{i,t} is RET, d_t comprises a constant, VOL, MRET and TBILL, and x_{i,t} comprises ER, CASE, DEATH, ECON, and one of STR, CONT and RESP. We allow the coefficients of the COVID-19 spread and response variables to be breaking, but not the coefficient of ER. The date of the break is treated as unknown not only in the estimation but also in the testing. We therefore focus on the SW test, which we implement using 15% trimming (ε = 0.15) (as in, for example, Andrews, 1993, and Bai, 1997a).
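The mixed treatment of known and unknown factors described above amounts to augmenting the observed common regressors d_t with the cross-sectional averages x̄_t and projecting both out. The following is a minimal numerical sketch of that augmentation step under an assumed data layout; it illustrates the idea, and is not the paper's estimation code:

```python
import numpy as np

def cce_annihilator(X, D):
    """Annihilator of the augmented factor proxies (d_t', xbar_t')'.

    X : (N, T, k) panel of regressors x_{i,t} (layout is our assumption).
    D : (T, n) observed common factors d_t (e.g. constant, VOL, MRET, TBILL).

    Under the rank condition rank(Gamma) = m <= k, (d_t', xbar_t')' is
    rotationally consistent for (d_t', f_t')', so projecting it out
    removes both the known and the unknown factors.
    """
    Xbar = X.mean(axis=0)            # (T, k) cross-sectional averages
    H = np.hstack([D, Xbar])         # (T, n + k) factor proxies
    T = H.shape[0]
    # M_H = I_T - H (H'H)^+ H' annihilates the column space of H
    M_H = np.eye(T) - H @ np.linalg.pinv(H.T @ H) @ H.T
    return M_H
```

The CCE regressions are then run on the M_H-transformed data, so that the known factors and the estimates of the unknown ones are both annihilated.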
Based on its good performance in the Monte Carlo study reported in the online appendix, the asymptotic covariance matrix of δ̂(b) is computed based on the Bartlett kernel with the bandwidth parameter S_T set equal to S_T = T^(1/3). As in the present paper, Bai (2010) focuses on the single break case, although he also discusses the possibility of having multiple breaks. As he remarks, if the number of breaks is given, the one-at-a-time approach of Bai (1997b) can be used to estimate the breakpoints, and if the number of breaks is unknown, a test for the existence of a break can be applied to each subsample before estimating another breakpoint. The same approach can also be used in the current, more general context. The results are discussed in the next section. Following the convention in the literature (see, for example, Ashraf, 2020a, 2020b, Capelle-Blancard and Desroziers, 2020, and Erdem, 2020), Table 1 reports the mean, standard deviation, minimum and maximum of each variable. RET has a mean value of −0.392 with a standard deviation of 4.44. The fact that the mean is negative indicates that the pandemic has affected stock markets negatively. The means of CASE and DEATH are positive, as expected, because the pandemic has not settled down yet. The results for the response variables show that governments have responded to the pandemic. In order to get a feeling for the validity of the conventional fixed effects assumption, we computed the CD test of Pesaran (2021), which tests the null hypothesis of no remaining cross-sectional correlation after controlling for fixed effects. The null is rejected at all conventional significance levels for all variables, suggesting that, as expected given the above discussion, fixed effects are not enough to account for the cross-section correlation. Table 1 also reports the results of the unit root tests of Elliott et al. (1996) and Pesaran (2007), the latter of which allows for cross-section correlation. The use of the test of Elliott et al.
(1996) is motivated by its good power properties. Notes: "Mean", "SD", "Min" and "Max" refer to the sample average, the standard deviation, the minimum value and the maximum value of each variable. The column labelled "UR" reports some unit root test results. If the variable exhibits cross-sectional variation, we employ the CIPS test of Pesaran (2007), which allows cross-sectional dependence in the form of a common factor. If the variable only varies over time, we employ the unit root test of Elliott et al. (1996). Both tests are augmented with four lags to capture any serial correlation in the regression errors. The column labelled "CD" reports the results obtained by applying Pesaran's (2004) CD test.

Most of the emerging economies' central banks made their announcements in the last 10 days of March. Our estimated break date is located directly after these announcements. Quantitative easing pushes interest rates down, and this has two possible effects, which both result in an increase in stock prices (see, for example, Bernanke, 2012). First, by decreasing the discount rate, quantitative easing increases the present value of future cash flows. Second, quantitative easing makes relatively safe assets unattractive, which creates an incentive for investors to rebalance their portfolios to include more stocks, and this in turn pushes stock prices up. We therefore speculate that it was the quantitative easing announcements that caused the break in the stock return-COVID-19 relationship.11 As explained earlier in this section, the SW test was applied not only to the full sample but also to the pre- and post-break periods. There were, however, no significant breaks in the pre- and post-break periods, and so we conclude that there is just one break. Another observation is that all the COVID-19 regressors enter significantly, but only before the break.
Specifically, the estimated pre-break coefficients (β) are all significant, as are the estimated breaks (δ), but they sum to zero, and the sum is insignificant in all cases. In other words, the estimated post-break effects (β + δ) are insignificant. Consider CASE and DEATH. Their pre-break effect is significantly negative, which is consistent with existing results (see, for example, Ashraf, 2020a, 2020b, Capelle-Blancard and Desroziers, 2020, and Erdem, 2020). Hence, as expected, stock markets initially responded negatively to the news of the outbreak of the virus. This negative effect is, however, completely eliminated by the break, which is estimated to be of the same magnitude but of opposite sign. The post-break effect of CASE and DEATH is therefore estimated to be zero, suggesting that the central bank interventions have had a substantial positive effect on stock markets. Let us now move on to the response regressors, STR, CONT, ECON and RESP. The estimated pre-break effect of ECON is significantly positive, meaning that stock markets initially responded positively to news of increased government support, which is again in accordance with our a priori expectations. After the break, however, stock markets became insensitive to such news. Similarly, while initially markets responded negatively to announcements of stricter and more extensive government restrictions, as measured by STR and RESP, after the break they did not respond at all. The same is true for CONT, which is probably due to the fact that while this variable captures both social distancing restrictions and investments in healthcare, the restrictions are weighted more heavily in the construction of the index and they did come first. As an illustration of the effect of STR, CONT and RESP, in Figure 1 we plot the cross-sectional averages of these variables against that of RET.
We see that while before the break the co-movement between average RET on the one hand and STR, CONT and RESP on the other hand is clearly negative, after the break the co-movement is much weaker. These results are quite different from existing ones. Capelle-Blancard and Desroziers (2020) find that STR has a positive but insignificant effect, which becomes significant only in the absence of fixed effects or other control variables. Ashraf (2020b) reports a significantly negative effect of STR, a significantly positive effect of CONT and an insignificant effect of ECON.

11 We also note that our estimated breakpoint does not coincide with the sample splits considered by Capelle-Blancard and Desroziers (2020), Mamaysky (2020), and Ramelli and Wagner (2020).

Notes: The main regressors are government stringency (STR), overall government response (RESP), government containment and health (CONT) and government economic support (ECON). All specifications include country fixed effects, the US dollar exchange rate (ER), stock market volatility (VOL), the three-month US Treasury Bill rate (TBILL) and average stock returns (MRET) as controls. As estimates of the unobserved factors we use the main regressors of each specification and ER. All specifications also include country fixed effects, VOL, TBILL and MRET as observed common factors. β and δ refer to the pre-break slope and the size of the break, respectively. r and n refer to the number of panel data regressors and observed common factors, respectively. SW refers to the sup-Wald test for the existence of a structural break, b̂ refers to the estimated breakpoint, and "95% CI" refers to the associated 95% confidence interval. The reported dates refer to the last day of the relevant week. The numbers within parentheses are standard errors. Finally, *, ** and *** denote statistical significance at the 10%, 5% and 1% levels, respectively.
However, these other studies only allow for fixed effects and they do not take into consideration our estimated breakpoint, which could very well explain the observed differences in the results. According to the results of Baker et al. (2020), and Mamaysky (2020), in the early phase of the pandemic (late February to late March) stock market movements were driven by news about the virus. In fact, markets were "hypersensitive" and overreacted not only to the news itself but also to other markets' reactions to news (Mamaysky, 2020). "Markets started to oscillate wildly, and people suddenly realized that the virus could affect them directly. Panic selling in the stock market went hand-in-hand with panic buying in supermarkets" (Wagner, 2020, page 440). This explains why initially stock markets reacted significantly to all COVID-19 related news (CASE, DEATH, STR, CONT, ECON and RESP). The powerful central bank interventions acted as a wake-up call. They signalled a clear commitment to deal with the pandemic, thereby bringing some certainty to an otherwise extremely uncertain future. Stock markets reacted positively and progressed on a path to recovery. This is noteworthy because economic conditions had been steadily deteriorating as a result of closures and social distancing (see IMF, 2020). As Krugman (2020) puts it, "[t]he relationship between stock performance - largely driven by the oscillation between greed and fear - and real economic growth has always been somewhere between loose and nonexistent". The explanation for our results given in the previous paragraph is consistent with (at least) two theories. The first is the so-called "overreaction" hypothesis of Daniel et al. (1998), and Hong and Stein (1999), which states that investors overreact to negative shocks, such as those that hit stock markets in the early phase of the pandemic.
As more information becomes available, however, and the central bank announcements were very informative, investors correct their behavior, which leads to market recovery. The second theory is that of Glasserman et al. (2020). It states that information shocks, such as the outbreak of COVID-19, can lead to large drops in prices and increases in volatility, which in turn cause prices to become hypersensitive to newsflow. However, information can also push prices out of hypersensitivity, and our results show that in the post-break regime returns are no longer reacting to news of the pandemic. The main aim of this paper is to provide a toolbox that meets the basic needs of researchers interested in a linear panel data model with a possible structural break. The toolbox allows researchers to test for the presence of a break and, if a break is detected, to also estimate the location of the break and construct a confidence interval for the true breakpoint. The toolbox does not require that the data are independent, nor that T is large, which means that it is widely applicable. The new toolbox is employed to investigate the relationship between stock market returns and COVID-19 in a sample covering 61 countries across 38 weeks. Stock markets all over the world plunged in the early phase of the pandemic, but they quickly rebounded, and this rebound took place although the end of the pandemic was still not in sight. Our analysis shows that the effect of COVID-19, while initially strong, stopped dead at the end of March to the beginning of April 2020. We attribute this break to the massive quantitative easing programs announced by central banks around the world in the second half of March.

References
What Caused Global Stock Market Meltdown during the COVID Pandemic: Lockdown Stringency or Investor Panic?
Tests for Parameter Instability and Structural Change with Unknown Change Point
Structural Breaks in Panel Data: Large Number of Panels and Short Length Time Series
Stock Markets' Reaction to COVID-19: Cases or Fatalities?
Economic Impact of Government Interventions during the COVID-19 Pandemic: International Evidence from Financial Markets
Estimation of a Change Point in Multiple Regression Models
Estimating Multiple Breaks One at a Time
Common Breaks in Means and Variances for Panel Data
The Unprecedented Stock Market Impact of COVID-19
Estimation of Heterogeneous Panels with Structural Breaks
Estimation and Identification of Change Points in Panel Models with Nonstationary or Stationary Regressors and Error Term
Monetary Policy Since the Onset of the Crisis. Remarks at the Federal Reserve Bank of Kansas City Economic Symposium
Change Point Estimation in Panel Data with Time-Varying Individual Effects
Expected Stock Returns and Variance Risk Premia
Stock Return and Cash Flow Predictability: The Role of Volatility Risk
The Stock Market is not the Economy? Insights from the COVID-19 Crisis
Investor Psychology and Security Market Under- and Overreactions
Exchange Rates and the Current Account
Efficient Tests for an Autoregressive Unit Root
Freedom and Stock Market Performance during COVID-19 Outbreak. Finance Research Letters 36
Dynamic Information Regimes in Financial Markets
An Event Study of COVID-19 Central Bank Quantitative Easing in Advanced and Emerging Economies
Inference and Testing Breaks in Large Dynamic Panels with Strong Cross Sectional Dependence
A Unified Theory of Underreaction, Momentum Trading, and Overreaction in Asset Markets
Special Series on COVID-19: The Disconnect between Financial Markets and the Real Economy
On the Role of the Rank Condition in CCE Estimation of Factor-Augmented Panel Regressions
Estimating a Common Deterministic Time Trend Break in Large Panels with Cross Sectional Dependence
Common Local Breaks in Time Trends for Large Panels
Crashing Economy, Rising Stocks: What's Going On?
Panel Data Models with Interactive Fixed Effects and Multiple Structural Breaks
Economic Outlook
Estimation and Inference in Large Heterogeneous Panels with a Multifactor Error Structure
A Simple Panel Unit Root Test in the Presence of Cross-Section Dependence
General Diagnostic Tests for Cross-Sectional Dependence in Panels. Forthcoming in Empirical Economics
Large Panels with Common Factors and Spatial Correlation
Country Responses and the Reaction of the Stock Market to COVID-19: A Preliminary Exposition
Feverish Stock Price Reactions to COVID-19. Forthcoming in Review of Corporate Finance Studies
Predicting Stock Returns in the Presence of COVID-19 Pandemic: The Role of Health News
Opinion: Robert Shiller Explains the Pandemic Stock Market and why it's Decoupled from the Economy. Project Syndicate
What the Stock Market Tells us about the Post-COVID-19 World
Common Breaks in Means for Cross-Correlated Fixed-T Panel Data
Testing for Predictability in Panels with General Predictors
CCE in Fixed-T Panels
Weekly Operational Update on COVID-19
Financial Markets under the Global Pandemic of COVID-19