title: Permutation-based tests for discontinuities in event studies
authors: Bugni, Federico A.; Li, Jia; Li, Qiyuan
date: 2020-07-20

We propose using a permutation test to detect discontinuities in an underlying economic model at a known cutoff point. Relative to the existing literature, we show that this test is well suited for event studies based on time-series data. The test statistic measures the distance between the empirical distribution functions of observed data in two local subsamples on the two sides of the cutoff. Critical values are computed via a standard permutation algorithm. Under a high-level condition that the observed data can be coupled by a collection of conditionally independent variables, we establish the asymptotic validity of the permutation test, allowing the sizes of the local subsamples either to be fixed or to grow to infinity. In the latter case, we also establish that the permutation test is consistent. We demonstrate that our high-level condition can be verified in a broad range of problems in the infill asymptotic time-series setting, which justifies using the permutation test to detect jumps in economic variables such as volatility, trading activity, and liquidity. These potential applications are illustrated in an empirical case study for selected FOMC announcements during the ongoing COVID-19 pandemic. Many econometric problems can be expressed in terms of the continuity or discontinuity of a certain component of the underlying economic model. In an influential paper, Chow (1960) tested the temporal stability of the demand for automobiles, and subsequently stimulated a large literature on structural breaks in time-series analysis; see, for example, Andrews (1993), Stock (1994), Bai and Perron (1998), and many references therein.
In microeconometrics, the regression discontinuity design (RDD) has been extensively used for causal inference. This literature identifies and estimates an average treatment effect by evaluating discontinuities of conditional expectation functions of outcome and treatment variables at a cutoff point of the running variable; see Imbens and Lemieux (2008) and Lee and Lemieux (2010) for comprehensive reviews. 1 Meanwhile, a more recent high-frequency financial econometrics literature has been devoted to studying discontinuities, or jumps, in various financial time series (e.g., price, volatility, trading activity, etc.). The high-frequency jump literature was pioneered by Barndorff-Nielsen and Shephard (2006), who proposed the first nonparametric test for asset price jumps using high-frequency data in an infill asymptotic setting. More recently, Bollerslev et al. (2018) study the jumps of volatility and trading intensity in high-frequency jump regressions (Li et al. (2017)) that closely resemble the classical RDD. Although these strands of literature involve apparently different terminology and technical tools, they share a common theme: The econometric goal is to learn about differences in the data generating processes between two subsamples separated by the cutoff. Imbens and Kalyanaraman (2011) emphasize that these subsamples should be "local" to the cutoff point, which is quite natural given the nonparametric nature of discontinuity inference (Hahn et al. (2001)). The issue under study is thus a local version of the classical two-sample problem. Correspondingly, the related inference is often carried out using nonparametric two-sample t-tests, which are based on kernel regressions in the RDD (Hahn et al. (2001), Imbens and Kalyanaraman (2011), Calonico et al. (2014)) or, in the same spirit, spot high-frequency estimators (Foster and Nelson (1996), Comte and Renault (1998), Jacod and Protter (2012), Li et al. (2017), Bollerslev et al.
(2018)) in the infill time-series setting. In an ideal scenario in which the subsamples separated by the cutoff are i.i.d. and independent of each other, the permutation test is an excellent tool to detect differences in their distributions. In particular, standard results for randomization inference (Lehmann and Romano (2005, Chapter 15.2)) indicate that a permutation test implemented with any test statistic is finite-sample valid under these conditions. The recent literature has investigated the properties of permutation tests to detect differences between two samples under less ideal conditions. One example is Canay and Kamat (2017), who consider an RDD and show that permutation-based inference is asymptotically valid for detecting discontinuities in the distribution of the baseline covariates at the cutoff. These authors implement their test with a finite number of observations that are located closest to the cutoff, effectively forcing them to concentrate on a small neighborhood of the cutoff as the sample size grows. In the same spirit, Cattaneo et al. (2017) propose using permutation-based inference to detect discontinuities at the cutoff under the "local randomization framework" introduced in Cattaneo et al. (2015). Outside of the RDD literature, Chung and Romano (2013) and DiCiccio and Romano (2017) investigate the asymptotic properties of permutation-based inference for testing differences in specific distributional features of two samples, such as the mean or the correlation coefficient. It is important to note that all of the references mentioned in this paragraph presume cross-sectional data. In the context of time-series applications, there is an active literature on change-point tests implemented via permutations. This approach was first suggested by Antoch and Hušková (2001) and later pursued by other authors. See Hušková (2004) or Horváth and Rice (2014) for surveys of this literature.
While most of this literature imposes independent errors, some papers allow for limited forms of weak dependence (Kirch and Steinebach (2006), Kirch (2007), and Jentsch and Pauly (2015)). In contrast, our econometric setting accommodates essentially unrestricted persistence and nonstationarity in the underlying state processes (e.g., volatility), which better suits our interest in their dynamics over short time windows around economic news events. In the context of machine-learning methods, Chernozhukov et al. (2018) propose using permutations to implement conformal inference that allows for time-series data. Set against this background, our main goal in this paper is to establish a general theory for permutation-based discontinuity tests, with a special emphasis on event studies based on time-series data. To capture the "local" nature of this problem, we adopt an infill asymptotic framework, under which the inference concentrates on observations "close" to the event time. Specifically, we consider the Cramér-von Mises test statistic formed as the squared L_2 distance between the empirical cumulative distribution functions for the two local subsamples near the cutoff, and compute the critical value via a standard permutation algorithm. As explained earlier, if the data were i.i.d., the behavior of this permutation test would follow directly from standard results for randomization inference. This "off-the-shelf" theory, however, is not applicable here because time-series data observed in a short event window can be highly serially dependent. The main theoretical contribution of the present paper is to establish the asymptotic validity of the permutation test in this non-standard setting. The theory has two components. The first is a new generic result for permutation tests. Specifically, we link the (feasible) permutation test formed using the original data with an infeasible test constructed in a "coupling" problem that involves conditionally i.i.d. coupling variables.
Since the latter resembles the classical two-sample problem, the infeasible test controls size exactly under the coupling null hypothesis (i.e., the coupling variables in the two subsamples are homogeneous), and is consistent under the complementary alternative hypothesis. Under a proper notion of coupling, which is customized for the permutation test, we show that the feasible test inherits the same asymptotic rejection properties from the infeasible one. Since this result is of independent theoretical interest well beyond our subsequent analysis in the infill time-series setting, we frame the theory under general high-level conditions so as to facilitate other types of applications. The second component of our analysis pertains to specializing the generic result to the infill time-series setting designed for event-study applications. The event-study framework is particularly relevant for studying macroeconomic and financial shocks, including monetary shocks triggered by FOMC announcements (Cochrane and Piazzesi (2002), Nakamura and Steinsson (2018a)), or "natural disasters" such as the ongoing COVID-19 pandemic. Following Li and Xiu (2016) and Bollerslev et al. (2018), we model observed data using a general state-space framework, in which the observations are discretely sampled from a latent state process "contaminated" by random disturbances. This framework has been used to model variables such as asset returns, trading volume, duration, and bid-ask spread, and readily accommodates both continuously and discretely valued variables. Under this state-space model, the temporal discontinuity in the data's distribution is mainly driven by the jump of the latent state process (e.g., asset volatility, trading intensity, and propensity of informed trading), which can be detected by the permutation test.
Under easy-to-verify primitive conditions, we construct coupling variables and apply the aforementioned general theory to establish the permutation test's asymptotic validity. We recognize two advantages of the proposed permutation test in comparison with the standard approach based on the nonparametric "spot" estimation of the underlying state process. First, the permutation test attains asymptotic size control even if the number of observations in each subsample is fixed. 2 This remarkable property is reminiscent of the finite-sample exactness of the permutation test in the classical two-sample problem for i.i.d. data. In contrast, the nonparametric estimation approach works in a fundamentally different way, as it relies on the asymptotic (mixed) normality of the estimator, which in turn requires the sizes of the local subsamples to grow to infinity. In empirical applications, however, it is often desirable to use a short time window, either to reduce the effect of confounding factors in the background, or simply because of the lack of observations soon after the occurrence of the economic event (say, in a real-time research situation). Not surprisingly, the conventional inference based on asymptotic Gaussianity often results in large size distortions in this "small-sample" scenario, as we demonstrate concretely in a realistically calibrated Monte Carlo experiment (see Section 3). Meanwhile, the permutation test exhibits much more robust size control in finite samples. The second advantage of the permutation test is its versatility: the same test can be applied in many different empirical contexts without any modification. On the other hand, the nonparametric estimation approach often relies on specific features of the problem, and needs to be designed on a case-by-case basis.
Therefore, the proposed permutation test may be particularly attractive in new empirical environments for which tests based on the conventional approach have not yet been developed or are not yet well understood. In Section 2.2, we illustrate this point more concretely in the context of testing for volatility jumps. In that case, the standard approach relies crucially on the assumption that the price shocks are Brownian in its design of the spot volatility estimator and the associated t-statistic, and it cannot be adapted easily to accommodate a more general setting with Lévy-driven shocks. 3 The permutation test, on the other hand, is valid even in the latter, more general, setting. That being said, we stress that the proposed permutation test is a complement to, rather than a substitute for, the conventional nonparametric estimation method, because it has two limitations. One is that the permutation test focuses exclusively on hypothesis testing, without producing a point estimate for the jump of the state process (e.g., volatility) of interest, whereas such an estimate is a by-product of the conventional approach. In addition, the proposed permutation test is purely nonparametric and does not exploit any parametric structure that one may be willing to impose. It is therefore conceivable that in certain semiparametric settings, more efficient tests may be designed to exploit a priori model restrictions. Put differently, the aforementioned versatility of the permutation test may come with an efficiency cost. A better understanding of the robustness-efficiency tradeoff might be an interesting topic for future research. In an empirical illustration, we apply the permutation test to a recent sample of high-frequency intraday returns of the SPY ETF for the S&P 500 index.
Specifically, we focus on four important FOMC announcements during the ongoing COVID-19 pandemic, and test whether each announcement induces discontinuities in volatility, trading activity, and two measures of market illiquidity. We document robust empirical evidence for discontinuities in volatility and trading activity. We also find evidence for announcement-induced discontinuity in transaction cost (measured by the bid-ask spread), but not in market impact (gauged by Amihud's measure). This application highlights one of the main advantages of the proposed test, namely, that it is applicable to a broad variety of high-frequency observations modeled in distinct ways, unlike, for example, the conventional t-test designed specifically for testing volatility jumps in the Brownian setting. The rest of the paper is organized as follows. We present the asymptotic theory for the permutation test in Section 2. Section 3 reports the test's finite-sample performance in Monte Carlo experiments, and Section 4 presents the empirical illustration. Section 5 concludes. The appendix contains all proofs. 3 As explained by Barndorff-Nielsen and Shephard (2001), these more general processes offer the possibility of capturing important deviations from Brownian shocks and allow for flexible modelling of dependence structures. However, to the best of our knowledge, the estimation and inference of the spot volatility (i.e., the scaling process) in the non-Brownian case remains an open question in the literature. There is some limited work on the inference of integrated volatility functionals for the non-Brownian case (see Todorov and Tauchen (2012)), which demonstrates various distinct complications in the non-Brownian setting. We use ‖x‖ to denote the Euclidean norm of a vector x. For any real number a, we use ⌈a⌉ to denote the smallest integer that is larger than a. For any constant p ≥ 1, ‖·‖_p denotes the L_p norm for random variables.
For two real sequences a_n and b_n, we write a_n ≍ b_n if a_n/C ≤ b_n ≤ C a_n for some finite constant C ≥ 1. We first prove a new result that is broadly useful for establishing the asymptotic validity of permutation tests. Because of its independent theoretical interest, we develop the theory under high-level conditions. In Section 2.2, below, we shall specialize this general result to event-study applications under a more specific infill time-series setting, for which the existing theory on permutation tests is not applicable. Consider an array (Y_{n,i})_{i∈I_n} of R-valued observed variables defined on a probability space (Ω, F, P), which may be either "raw" data or preliminary estimators. Our econometric goal is to decide whether the two subsamples (Y_{n,i})_{i∈I_{1,n}} and (Y_{n,i})_{i∈I_{2,n}} have "significantly" different distributions, where (I_{1,n}, I_{2,n}) is a partition of I_n ⊆ Z. For ease of exposition, we assume that I_{1,n} and I_{2,n} contain the same number of observations, denoted by k_n. 4 We stress from the outset that k_n may either be fixed or grow to infinity in the subsequent analysis. As such, our analysis speaks not only to the classical finite-sample analysis of permutation tests, but also to the large-sample analysis routinely used in econometrics. To implement the test, we first estimate the empirical cumulative distribution functions (CDF) for the two subsamples using F̂_{j,n}(x) ≡ (1/k_n) Σ_{i∈I_{j,n}} 1{Y_{n,i} ≤ x}, j ∈ {1, 2}. We then measure their difference via the Cramér-von Mises statistic, given by T̂_n ≡ ∫ (F̂_{1,n}(x) − F̂_{2,n}(x))² dF̂_n(x), (1) where F̂_n ≡ (F̂_{1,n} + F̂_{2,n})/2 denotes the pooled empirical CDF. For a significance level α ∈ (0, 1), we compute the critical value via a standard permutation algorithm as in Lehmann and Romano (2005, page 633), which we specify in Algorithm 1 below. We use π to denote a permutation of the elements of I_n, that is, a bijective mapping from I_n to itself. Let G_n denote the collection of all possible permutations of I_n, with M_n being its cardinality. Algorithm 1. Step 1.
For each permutation π ∈ G_n, compute the permuted test statistic T̂_n(π) as T̂_n, but with (Y_{n,i})_{i∈I_n} replaced by (Y_{n,π(i)})_{i∈I_n}. Step 2. Order {T̂_n(π) : π ∈ G_n} as T̂_n^(1) ≤ ... ≤ T̂_n^(M_n) and set the critical value T*_n ≡ T̂_n^(r_n) with r_n = M_n − ⌊M_n α⌋. Step 3. If T̂_n > T*_n, reject the null hypothesis. If T̂_n < T*_n, do not reject the null hypothesis. If T̂_n = T*_n, reject with probability p̂_n ≡ (M_n α − M_n^+)/M_n^0, where M_n^+ and M_n^0 denote the cardinalities of {π ∈ G_n : T̂_n(π) > T*_n} and {π ∈ G_n : T̂_n(π) = T*_n}, respectively. The resulting test then rejects according to φ̂_n ≡ 1{T̂_n > T*_n} + p̂_n 1{T̂_n = T*_n}. Remark 2.1. The test φ̂_n specified in Algorithm 1 is a randomized test and has a random outcome when T̂_n = T*_n. One can construct a non-randomized (and more conservative) version by replacing p̂_n with zero. Also, in practice, M_n may be too large to consider G_n in its entirety. In such cases, we could replace G_n with a random subset of it, denoted by Ĝ_n, composed of the identity permutation and an i.i.d. sample of permutations from G_n. All of the formal results in this paper apply if we use Ĝ_n instead of G_n in Algorithm 1. Remark 2.2. In this paper, we use the Cramér-von Mises statistic in (1) for concreteness and simplicity of exposition. However, the supplement of this paper shows that our main results extend well beyond this particular statistic. In particular, the asymptotic validity of the permutation test extends to any other rank statistic, i.e., a statistic that depends only on the ranks of the observations. Furthermore, our consistency result applies to any other rank statistic under mild regularity conditions. For example, both of these results hold if we replace (1) with the Kolmogorov-Smirnov statistic, given by sup_{x∈R} |F̂_{1,n}(x) − F̂_{2,n}(x)|. For details, see the supplement of this paper. If the data (Y_{n,i})_{i∈I_n} are i.i.d., then the null hypothesis of the classical two-sample problem holds, and Lehmann and Romano (2005, Theorem 15.2.1) implies that the aforementioned permutation test has exact size control in finite samples. This is a remarkable property of the permutation test, as it holds without requiring any specific distributional assumptions on the data.
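Algorithm 1 can be sketched in code. The following is a minimal illustration of ours, not the authors' implementation: for simplicity it uses the non-randomized version from Remark 2.1 (reject only when the observed statistic strictly exceeds the critical value) and a random subset of permutations that includes the identity permutation, as the remark suggests. The function names are hypothetical.

```python
import numpy as np

def cramer_von_mises(y1, y2):
    """Squared L2 distance between the two empirical CDFs, integrated
    against the pooled empirical distribution (the statistic in (1))."""
    pooled = np.concatenate([y1, y2])
    f1 = np.searchsorted(np.sort(y1), pooled, side="right") / len(y1)
    f2 = np.searchsorted(np.sort(y2), pooled, side="right") / len(y2)
    return np.mean((f1 - f2) ** 2)

def permutation_test(y1, y2, alpha=0.05, n_perm=999, seed=0):
    """Non-randomized permutation test: reject iff the observed statistic
    strictly exceeds T*_n, the (M_n - floor(M_n * alpha))-th order statistic
    of the permutation distribution (identity permutation included)."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([y1, y2])
    k = len(y1)
    t_obs = cramer_von_mises(y1, y2)
    stats = [t_obs]  # identity permutation
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        stats.append(cramer_von_mises(perm[:k], perm[k:]))
    stats = np.sort(stats)
    m = len(stats)  # M_n (here, the cardinality of the random subset)
    crit = stats[m - int(np.floor(m * alpha)) - 1]  # T*_n (0-based index)
    return bool(t_obs > crit)
```

With two well-separated subsamples the test rejects; with two identical subsamples the observed statistic is zero and the test does not reject.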
In contrast to the classical two-sample problem, however, we shall not assume that the data are independent, or even "weakly" dependent (e.g., mixing). As mentioned in the Introduction, the main goal of this paper is to study the permutation test for time-series data observed within a short event window (say, a few days or hours), which can be highly serially dependent in practice. Our key theoretical insight is that the permutation test is still asymptotically valid if the data (Y_{n,i})_{i∈I_n} can be approximated, or "coupled," by another collection of variables that are conditionally independent, as formalized by the following assumption. Assumption 2.1. There exists a collection of variables (U_{n,i})_{i∈I_n} such that the following conditions hold for a sequence (G_n)_{n≥1} of σ-fields: (i) for each n ≥ 1, the variables (U_{n,i})_{i∈I_n} are G_n-conditionally independent, and U_{n,i} has the same G_n-conditional distribution as U_{n,j} if i, j belong to the same subsample (i.e., I_{1,n} or I_{2,n}); (ii) for any real sequence η_n = o(1), we have sup_{x∈R} P(|U_{n,i} − x| ≤ η_n | G_n) = O_p(η_n) for each i ∈ I_n; (iii) max_{i∈I_n} |Ỹ_{n,i} − U_{n,i}| = o_p(k_n^{-2}), where (Ỹ_{n,i})_{i∈I_n} is an identical copy of (Y_{n,i})_{i∈I_n} in G_n-conditional distribution. Assumption 2.1 lays out the high-level structure for bridging our analysis with the classical theory on permutation tests, which we carry out in Theorem 2.1 below. Condition (i) sets up the "coupling" problem, which corresponds to a conditional version of the classical two-sample problem, treating the (U_{n,i})_{i∈I_{1,n}} and (U_{n,i})_{i∈I_{2,n}} variables as "data". In part (a) of Theorem 2.1, we consider the situation in which both subsamples have the same conditional distribution. In this case, our coupling variables (U_{n,i})_{i∈I_n} give rise to an infeasible permutation test that can be analyzed as a classical two-sample problem. In particular, this infeasible permutation test attains exact finite-sample size under our conditions.
This infeasible test, however, plays only an auxiliary role in our analysis, because our interest is in the feasible test φ̂_n formed using the original data (Y_{n,i})_{i∈I_n}. Therefore, a key component of our theoretical argument in Theorem 2.1 is to show that the feasible test for the original data asymptotically inherits the same rejection properties from the infeasible test. Conditions (ii) and (iii) in Assumption 2.1 are introduced for this purpose. Specifically, condition (ii) requires the variable U_{n,i} to be non-degenerate, in the sense that its conditional probability mass within any small [x − η, x + η] interval is of order O(η) in probability. 5 Condition (iii) specifies the requisite approximation accuracy of the coupling variables. We note that this condition is easier to satisfy when k_n is smaller, because the joint coupling requirement then involves a smaller number of variables and the o_p(k_n^{-2}) error bound is easier to attain. This condition can be verified under more primitive conditions pertaining to the smoothness of the underlying processes and an upper bound on the growth rate of k_n, as detailed in Section 2.2. 6 Theorem 2.1. Under Assumption 2.1, the following statements hold for the permutation test φ̂_n described in Algorithm 1: (a) If the variables (U_{n,i})_{i∈I_n} have the same G_n-conditional distribution, we have E[φ̂_n] → α. (b) Let Q_{j,n}(·) denote the G_n-conditional distribution function of U_{n,i} for i ∈ I_{j,n} and j ∈ {1, 2}, and let Q̄_n = (Q_{1,n} + Q_{2,n})/2. If k_n → ∞ and P(∫ (Q_{1,n}(x) − Q_{2,n}(x))² dQ̄_n(x) > δ_n) → 1 for any real sequence δ_n = o(1), we have E[φ̂_n] → 1. Theorem 2.1 characterizes the asymptotic rejection probabilities of the feasible test φ̂_n under the null and alternative hypotheses of the two-sample problem for the coupling variables.
Part (a) pertains to the situation in which the two subsamples of coupling variables, (U_{n,i})_{i∈I_{1,n}} and (U_{n,i})_{i∈I_{2,n}}, have the same conditional distribution, which corresponds to the null hypothesis. In this case, the theorem shows that the asymptotic rejection probability of the feasible test is equal to the nominal level α. It is relevant to note that this result holds whether k_n is fixed or divergent. This property is clearly reminiscent of the permutation test's finite-sample exactness in the classical setting. Part (b) of Theorem 2.1 concerns the power of the feasible test φ̂_n. It shows that the feasible test rejects with probability approaching one when the conditional distributions of the two coupling subsamples, Q_{1,n} and Q_{2,n}, are different, in the sense that their "distance" measured by ∫ (Q_{1,n}(x) − Q_{2,n}(x))² dQ̄_n(x) is asymptotically non-degenerate, where the mixture distribution Q̄_n captures approximately the distribution of the permuted data. This consistency-type result requires that the information available from each subsample grow with the sample size, i.e., k_n → ∞. This result appears to be new in the context of permutation-based tests under a fixed alternative for the coupling variables. In particular, we note that an analogous result is unavailable in Canay and Kamat (2017), as they restrict attention to an asymptotic framework with a fixed k_n, which precludes consistency. Our proof relies on applying Lehmann and Romano (2005, Theorem 15.2.3) to the infeasible test, for which we use the coupling construction developed by Chung and Romano (2013) to show that the so-called Hoeffding (1952) condition is satisfied. We note that this argument is used to establish the consistency of the permutation test rather than its asymptotic size property.
Theorem 2.1 establishes the relation between the rejection probability of the feasible test φ̂_n and the homogeneity (or the lack of it) across the two coupling subsamples (U_{n,i})_{i∈I_{1,n}} and (U_{n,i})_{i∈I_{2,n}}. This result does not speak directly to hypotheses formulated in terms of the original observations (Y_{n,i})_{i∈I_n}. Rather, its theoretical significance is to "absorb" all generic technicalities stemming from the (feasible) permutation test, which in turn considerably simplifies our overall analysis. The residual issue for any specific application is to explicitly construct the coupling variables and translate their homogeneity in terms of the primitive structures of the original empirical problem, which can be done using domain-specific techniques. We provide general results along this line in the infill time-series context, as detailed in Section 2.2 below. To help anticipate the general discussion, it is instructive to sketch the scheme in a basic running example. Let Y_{n,i} = Δ_n^{-1/2}(P_{(i+1)Δ_n} − P_{iΔ_n}) be the scaled increment of the asset price process P_t over the ith sampling interval. Let τ be a "cutoff" time point of interest that is known to the researcher (e.g., the announcement time of a news release) and i* be the unique integer such that τ ∈ [i*Δ_n, (i* + 1)Δ_n). 7 We consider two index sets I_{1,n} = {i* − k_n, ..., i* − 1} and I_{2,n} = {i* + 1, ..., i* + k_n}, which collect observations before and after the cutoff, respectively. We consider an asymptotic setting in which these subsamples are "local" in calendar time, that is, k_n Δ_n → 0. Note that this implies that Δ_n → 0, which means that we are considering an infill asymptotic setting. If P_t is an Itô process with respect to an information filtration (F_t)_{t≥0}, we may represent Y_{n,i} as Y_{n,i} = Δ_n^{-1/2} ∫_{iΔ_n}^{(i+1)Δ_n} b_s ds + Δ_n^{-1/2} ∫_{iΔ_n}^{(i+1)Δ_n} σ_s dW_s, (2) where b_t is the drift process, σ_t is the stochastic volatility process, and W_t is a standard Brownian motion.
If the σ_t process is smooth (e.g., Hölder continuous) in a local neighborhood before τ, then the volatility throughout the pre-event subsample I_{1,n} is approximately σ_{(i*−k_n)Δ_n}. Further recognizing that the drift term is negligible relative to the Brownian component, we can approximate Y_{n,i} for each i ∈ I_{1,n} using the coupling variables U_{n,i} ≡ σ_{(i*−k_n)Δ_n} Δ_n^{-1/2}(W_{(i+1)Δ_n} − W_{iΔ_n}) ∼ MN(0, σ²_{(i*−k_n)Δ_n}), (3) where MN denotes the mixed normal distribution. Since the Brownian motion has independent and stationary increments, it is easy to see that the coupling variables (U_{n,i})_{i∈I_{1,n}} are F_{(i*−k_n)Δ_n}-conditionally i.i.d. Moreover, if the volatility process σ_t does not jump at the cutoff time τ, we may follow the same logic to extend the approximation in (3) further to i ∈ I_{2,n}. In other words, if the volatility process does not jump, then the coupling variables (U_{n,i})_{i∈I_n} are conditionally i.i.d., which corresponds to the situation in part (a) of Theorem 2.1. On the other hand, if the volatility process jumps at time τ, say by a constant c ≠ 0, then the coupling variables for the I_{2,n} subsample will instead take the form U_{n,i} = (σ_{(i*−k_n)Δ_n} + c) Δ_n^{-1/2}(W_{(i+1)Δ_n} − W_{iΔ_n}). In this case, the two subsamples of U_{n,i}'s have distinct conditional distributions (i.e., mixed normal with different conditional variances), corresponding to the scenario in part (b) of Theorem 2.1. Within the context of this illustrative example, we can further clarify a key feature of the proposed test that holds more generally. It is not aimed at detecting "small" time-variations in the distribution of the observed data. In fact, by allowing the drift b_t and the volatility σ_t to be time-varying, a smooth form of heterogeneity is always built in. The test instead detects abrupt changes, or discontinuities, in the evolution of the distribution, which can be more plausibly associated with the "lumpy" information carried by the underlying economic announcement, as emphasized by Nakamura and Steinsson (2018b).
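The running example above can be simulated in a few lines. The sketch below is our own construction (not from the paper): it draws scaled returns from a volatility path that is smooth except for a possible jump of size c right after the cutoff, forms the two local subsamples I_{1,n} and I_{2,n}, and reports permutation p-values based on the Cramér-von Mises statistic. The function names and the specific volatility path are illustrative assumptions.

```python
import numpy as np

def cvm(y1, y2):
    # Cramér-von Mises distance between empirical CDFs (pooled integrator).
    pooled = np.concatenate([y1, y2])
    f1 = np.searchsorted(np.sort(y1), pooled, side="right") / len(y1)
    f2 = np.searchsorted(np.sort(y2), pooled, side="right") / len(y2)
    return np.mean((f1 - f2) ** 2)

def perm_pvalue(y1, y2, n_perm=999, seed=0):
    # Permutation p-value, counting the identity permutation.
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([y1, y2])
    k, t_obs = len(y1), cvm(y1, y2)
    count = 1  # identity permutation
    for _ in range(n_perm):
        p = rng.permutation(pooled)
        count += cvm(p[:k], p[k:]) >= t_obs
    return count / (n_perm + 1)

def event_returns(c, n=1000, k_n=30, seed=1):
    # Scaled returns Y_{n,i} ~ sigma_{i Delta_n} * eps_{n,i}, with a smooth
    # (hypothetical) volatility path that jumps by c right after the cutoff i*.
    rng = np.random.default_rng(seed)
    i_star = n // 2
    t = np.arange(n) / n
    sigma = 1.0 + 0.2 * np.sin(2 * np.pi * t) + c * (np.arange(n) > i_star)
    y = sigma * rng.standard_normal(n)
    pre = y[i_star - k_n:i_star]             # I_{1,n}
    post = y[i_star + 1:i_star + 1 + k_n]    # I_{2,n} (skips the i*-th return)
    return pre, post

pre0, post0 = event_returns(c=0.0)   # null: no volatility jump
pre1, post1 = event_returns(c=3.0)   # alternative: large jump at tau
print(perm_pvalue(pre0, post0), perm_pvalue(pre1, post1))
```

Across repeated seeds, the p-value under c = 0 should be roughly uniform, while under a sizable jump it should concentrate near zero as k_n grows, mirroring parts (a) and (b) of Theorem 2.1.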
Specifically in this example, the asset returns are locally centered Gaussian (due to the assumption that the price is an Itô process), and hence, the temporal discontinuity in the return distribution manifests itself as a volatility jump. The empirical scope of our permutation test, however, is far beyond the volatility-jump testing depicted in this illustration, as we shall demonstrate in the remainder of the paper.

Footnote 7: The integer i* depends on n. We suppress this dependence in our notation for simplicity and to avoid having nested subscripts.

Footnote 8: Note that I_n does not include the i*-th return observation. Therefore, although the returns in (2) do not contain price jumps, an event-induced price jump is allowed to occur at time τ.

We now specialize the generic Theorem 2.1 to an infill asymptotic time-series setting that is particularly suitable for event studies. By introducing a mild additional econometric structure, we shall establish the asymptotic validity of the permutation test under more primitive conditions that are easy to verify in a variety of concrete empirical settings. As in the running example above, we consider an event occurring at time τ ∈ [i*Δ_n, (i* + 1)Δ_n), which separates two subsamples indexed by I_{1,n} = {i* − k_n, ..., i* − 1} and I_{2,n} = {i* + 1, ..., i* + k_n}, respectively. All limits in the sequel are obtained under the infill asymptotic setting with Δ_n → 0. We suppose that the data are generated from an approximate state-space model of the form Y_{n,i} = g(ζ_{iΔ_n}, ε_{n,i}) + R_{n,i}, (4) where the state process ζ_t is càdlàg, adapted to a filtration F_t, and takes values in an open set Z; ε_{n,i} is a random disturbance; g(·, ·) is a "smooth" transform; and R_{n,i} is a residual term that is negligible relative to the leading term g(ζ_{iΔ_n}, ε_{n,i}) in a proper sense detailed below. A simpler version of this state-space model without the R_{n,i} residual term has been used by Li and Xiu (2016) and Bollerslev et al. (2018), among others, for modeling market variables such as trading volume and bid-ask spread.
By introducing the R_{n,i} residual term, we can use a unified framework to accommodate a broader class of models, which in particular includes increments of an Itô semimartingale. We now revisit the model in (2) as the first illustration. Example 1 (Brownian Asset Returns). We represent the Itô-process model (2) for asset returns in the form of (4) by setting ζ_t = σ_t, ε_{n,i} = Δ_n^{-1/2}(W_{(i+1)Δ_n} − W_{iΔ_n}), and g(z, ε) = zε; the resulting residual term has the form R_{n,i} = Δ_n^{-1/2} ∫_{iΔ_n}^{(i+1)Δ_n} b_s ds + Δ_n^{-1/2} ∫_{iΔ_n}^{(i+1)Δ_n} (σ_s − σ_{iΔ_n}) dW_s. Under mild and fairly standard regularity conditions, it is easy to show that max_{i∈I_n} |R_{n,i}| is o_p(1). On the other hand, the leading term g(ζ_{iΔ_n}, ε_{n,i}) has a non-degenerate centered mixed Gaussian distribution with conditional variance σ²_{iΔ_n}. This running example further illustrates the distinct roles played by ζ_t, ε_{n,i}, and R_{n,i} in our state-space model (4). The leading term g(ζ_{iΔ_n}, ε_{n,i}) captures the "main feature" of the observed data; in addition, since the ε_{n,i} disturbance terms are i.i.d., any "large" change in the empirical distribution across the two subsamples must be attributed to the time-τ discontinuity in the state process ζ_t. From this description, it follows that the hypothesis test for the continuity of the distribution of the main feature of the observed data can be formulated as H_0: Δζ_τ = 0 versus H_a: Δζ_τ ≠ 0, (6) where Δζ_τ ≡ ζ_τ − ζ_{τ−} ≡ ζ_τ − lim_{s↑τ} ζ_s denotes the jump of the state process at time τ. With the state-space model (4) in place, we can design more primitive sufficient conditions for establishing the asymptotic validity of the permutation test under the hypotheses in (6). We need some additional notation to describe these conditions. For each fixed z ∈ Z, let f_z(·) and F_z(·) denote the probability density function (PDF) and the CDF of the random variable g(z, ε_{n,i}), respectively. It is also convenient to introduce a "shifted" version of ζ_t defined as ζ̄_t ≡ ζ_t − Δζ_τ 1{t ≥ τ}, which has the same increments as ζ_t over time intervals not containing τ. Assumption 2.2. (i) The collection of variables (ε_{n,i})_{i∈I_n} are i.i.d.
and, for each k ∈ I_n, the variables (ε_{n,i})_{i≥k} are independent of F_{k∆_n}. Moreover, for any compact subset K ⊆ Z, we have (ii)

Assumption 2.3. There exist a sequence (T_m)_{m≥1} of stopping times increasing to infinity, a sequence of compact subsets (K_m)_{m≥1} of Z, and a sequence (K_m)_{m≥1} of constants such that, for some real sequence a_n ≥ 1 and each m ≥ 1: (i) ‖g(z, ε_{n,i}) − g(z′, ε_{n,i})‖₂ ≤ K_m a_n |z − z′| for all z, z′ ∈ K_m; (ii) ζ_t takes values in K_m for all t ≤ T_m, and ‖ζ̃_{t∧T_m} − ζ̃_{s∧T_m}‖₂ ≤ K_m |t − s|^{1/2} for all t, s in some fixed neighborhood of τ; (iii) the residual terms R_{n,i} are asymptotically negligible.

Assumption 2.2 entails regularity conditions pertaining to the random disturbance terms, which are often easy to verify in concrete examples, as demonstrated later in this subsection. Assumption 2.3 imposes a set of smoothness conditions that permit the approximation of the observed data by properly constructed coupling variables.9

9 Note that the assumption is framed in a localized fashion using the stopping times (T_m)_{m≥1}, which is a standard technique for weakening the regularity conditions in the infill asymptotic setting. See Jacod and Protter (2012, Section 4.4.1) for a comprehensive discussion of the localization technique.

Specifically, condition (i) requires that the random function z ↦ g(z, ε_{n,i}) is Lipschitz in z over compact sets under the L² distance. The a_n sequence captures the scale of the Lipschitz coefficient. In many applications, we can verify this condition simply with a_n ≡ 1, but allowing a_n to diverge to infinity is sometimes necessary (see Example 2 below). Condition (ii) states that the ζ_t process is locally compact (up to each stopping time T_m) and, upon removing the fixed-time discontinuity at τ, is (1/2)-Hölder continuous under the L² norm. This Hölder-continuity requirement can be easily verified using well-known results provided that the ζ̃ process is an Itô semimartingale or a long-memory process (see Jacod and Protter (2012, Chapter 2) and Li and Liu (2020)). Condition (iii) imposes the requisite assumptions on the residual terms. In some applications, this condition holds trivially with R_{n,i} ≡ 0, but, more generally, it
needs to be verified on a case-by-case basis using (relatively standard) infill asymptotic techniques.

Theorem 2.2, below, establishes the size and power properties of the permutation test under the hypotheses described in (6).

Theorem 2.2. In the state-space model (4), suppose that Assumptions 2.2 and 2.3 hold, and that a_n k_n³ ∆_n^{1/2} = o(1). Then, the following statements hold for the permutation test φ̂_n described in Algorithm 1: (a) Under the null hypothesis in (6), i.e., ∆ζ_τ = 0, we have E[φ̂_n] → α; (b) Under a fixed alternative hypothesis in (6), i.e., ∆ζ_τ = c for some (unknown) constant c ≠ 0, and if k_n → ∞, we have E[φ̂_n] → 1.

This theorem is proved by verifying the high-level conditions in Theorem 2.1 with properly constructed coupling variables analogous to those in equation (3). The condition a_n k_n³ ∆_n^{1/2} = o(1) mainly requires that the window size k_n not grow too fast, which ensures the closeness between the coupling variables and the original data. In the typical case with a_n = 1, it reduces to k_n = o(∆_n^{−1/6}).10 In general, a larger k_n allows one to utilize more data, but the associated longer event window may also lead to a larger nonparametric bias, and hence a more severe size distortion. Part (a) shows that the permutation test attains the desired asymptotic level under the null hypothesis in (6). Again, we stress that the test has valid asymptotic size control even in the "small-sample" case with fixed k_n. As in Theorem 2.1, the "large-sample" condition k_n → ∞ is needed only for establishing the consistency of the test under the alternative, as shown in part (b).

In the remainder of this subsection, we use a few prototype examples to demonstrate how the proposed test may be used in various empirical settings. In particular, we show how to cast the specific problems into the approximate state-space model (4), and discuss how to verify our sufficient regularity conditions. We start by revisiting the running example.

Example 1 (Brownian Asset Returns, Continued).
Recall that ζ_t = σ_t, ε_{n,i} = ∆_n^{−1/2}(W_{(i+1)∆_n} − W_{i∆_n}), and g(z, ε) = zε. In this context, the hypothesis testing problem in (6) represents a test of the continuity of the volatility process σ_t at time t = τ, i.e.,

H_0: ∆σ_τ = 0    versus    H_a: ∆σ_τ ≠ 0.

We suppose that the volatility process σ_t is non-degenerate by setting its domain to Z = (0, ∞). Since the Brownian motion has independent increments with respect to the underlying filtration, the disturbance term ε_{n,i} satisfies Assumption 2.2(i). In addition, for each point z ∈ Z, the random variable g(z, ε_{n,i}) has an N(0, z²) distribution. It is then easy to see that conditions (ii) and (iii) in Assumption 2.2 hold for any compact subset K ⊆ Z (note that K is necessarily bounded away from zero). To verify Assumption 2.3, first note that g(z, ε_{n,i}) − g(z′, ε_{n,i}) = (z − z′)ε_{n,i}, and hence ‖g(z, ε_{n,i}) − g(z′, ε_{n,i})‖₂ = |z − z′|. Assumption 2.3(i) thus holds with a_n = 1. It is well known that σ_t is locally (1/2)-Hölder continuous under the L² norm if it is an Itô semimartingale or a long-memory process; if so, Assumption 2.3(ii) is satisfied provided that the σ_t and σ_t^{−1} processes are both locally bounded. Finally, to verify Assumption 2.3(iii), we assume that the drift process b_t is locally bounded. It is then easy to show via routine calculations that max_{i∈I_n} |R_{n,i}| = O_p(k_n^{1/2} ∆_n^{1/2}). Since Theorem 2.2 imposes a_n k_n³ ∆_n^{1/2} = o(1), we have k_n^{1/2} ∆_n^{1/2} = o(1), and hence max_{i∈I_n} |R_{n,i}| = o_p(1), as needed in Assumption 2.3(iii). All conditions in Theorem 2.2 are now verified, and this shows that the permutation test φ̂_n is asymptotically valid for testing the null hypothesis ∆σ_τ = 0.

10 This sufficient condition on the growth rate of k_n is different from the conditions needed for conventional asymptotic-Gaussian-based spot inference, which requires k_n ≍ ∆_n^{−ι} for some ι ∈ (0, 1/2). For the permutation test, k_n may be fixed or grow to infinity. However, in the latter case, our condition on the growth rate of k_n is more stringent than what is needed for the conventional spot inference theory.
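Algorithm 1 is not reproduced in this excerpt; the following is a minimal sketch consistent with its description in the text (a Cramér-von Mises-type distance between the two local empirical CDFs, calibrated by random permutations). Function names and implementation details are ours, not the authors'.

```python
# Sketch of the permutation test: EDF distance between the two local
# subsamples, with critical values from the permutation distribution.
import numpy as np

def cvm_statistic(sample1, sample2):
    """Cramer-von Mises-type distance between the two empirical CDFs,
    evaluated at the pooled sample points."""
    pooled = np.concatenate([sample1, sample2])
    F1 = np.searchsorted(np.sort(sample1), pooled, side="right") / len(sample1)
    F2 = np.searchsorted(np.sort(sample2), pooled, side="right") / len(sample2)
    return np.mean((F1 - F2) ** 2)

def permutation_test(pre, post, alpha=0.05, n_perm=1000, seed=0):
    """Return (reject, p_value) for the null of no distributional change
    across the event time; the non-randomized p-value rule is used."""
    rng = np.random.default_rng(seed)
    k = len(pre)
    pooled = np.concatenate([pre, post])
    t_obs = cvm_statistic(pre, post)
    t_perm = np.empty(n_perm)
    for b in range(n_perm):
        perm = rng.permutation(pooled)          # relabel the 2*k observations
        t_perm[b] = cvm_statistic(perm[:k], perm[k:])
    p_value = (1 + np.sum(t_perm >= t_obs)) / (1 + n_perm)
    return p_value < alpha, p_value
```

Because the statistic depends on the data only through ranks, the test is scale-invariant, which is the property exploited in Example 2 below.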
Example 1 shows that the permutation test φ̂_n is asymptotically valid for testing the presence of a volatility jump. This is a relatively familiar problem in the literature. It is therefore useful to contrast the proposed permutation test with the standard approach, which is based on nonparametric "spot" estimators of the asset price's instantaneous variances before and after the event time, given by, respectively,

σ̂²_{τ−} ≡ (k_n ∆_n)^{−1} Σ_{i∈I_{1,n}} (P_{(i+1)∆_n} − P_{i∆_n})²,    σ̂²_τ ≡ (k_n ∆_n)^{−1} Σ_{i∈I_{2,n}} (P_{(i+1)∆_n} − P_{i∆_n})².    (7)

Assuming k_n → ∞ and k_n² ∆_n → 0, it can be shown that (see Jacod and Protter (2012, Chapter 13)) k_n^{1/2}(σ̂²_τ − σ̂²_{τ−}) converges stably in law to a mixed normal distribution MN(0, 2σ⁴_τ + 2σ⁴_{τ−}).    (8)

Thus, we can test H_0: ∆σ_τ = 0 by comparing the t-statistic k_n^{1/2}(σ̂²_τ − σ̂²_{τ−})/(2σ̂⁴_τ + 2σ̂⁴_{τ−})^{1/2} with critical values from the standard normal distribution.

Two remarks are in order. First, note that the asymptotic size control of the standard approach relies on the asymptotic normal approximation (8), which depends crucially on k_n → ∞ (in addition to ∆_n → 0) because the underlying central limit theorem is obtained by aggregating a "large" number of martingale differences. Hence, the t-test may suffer from severe size distortion when k_n is relatively small. This issue is empirically relevant because an applied researcher may use a short time window to capture short-lived "impulse-like" dynamics and/or to minimize the impact of other confounding economic factors in the background. Moreover, for "real-time" applications, the researcher may have no choice but to use a small k_n simply because of the limited amount of data available soon after the event time τ. In sharp contrast, the permutation test controls asymptotic size even when k_n is fixed. This remarkable property is inherited from the coupling two-sample problem, in which the permutation test controls size exactly, regardless of whether k_n is fixed or grows to infinity. The second, and perhaps practically more important, difference between the two tests is that the permutation test is more versatile.
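The conventional spot t-test in (7)-(8) can be sketched as follows; this is an assumed implementation written for illustration, not the authors' code.

```python
# Sketch of the conventional spot-variance t-test for a volatility jump.
import numpy as np

def spot_t_test(pre, post, delta_n):
    """pre, post: the k_n raw price increments on each side of tau,
    each over an interval of length delta_n. Returns the t-statistic
    for H0: no volatility jump, to be compared with N(0, 1) quantiles."""
    k = len(pre)
    sig2_minus = np.sum(pre ** 2) / (k * delta_n)   # spot variance before tau
    sig2_plus = np.sum(post ** 2) / (k * delta_n)   # spot variance after tau
    se = np.sqrt((2.0 * sig2_plus ** 2 + 2.0 * sig2_minus ** 2) / k)
    return (sig2_plus - sig2_minus) / se
```

As the text emphasizes, the standard-normal calibration of this statistic leans on both k_n → ∞ and the Gaussianity of the driving shocks.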
Under the spot-estimation-based approach, both the design of the spot estimators in (7) and the convergence in (8) depend heavily on the fact that the increments of the Brownian motion are not only i.i.d., but also Gaussian. Gaussianity is obviously essential for the conventional approach because, among other things, it ensures that the instantaneous variance of the normalized returns is well-defined.11 The permutation test, on the other hand, exploits only the i.i.d. property of the Brownian shocks, without relying on their Gaussianity. Therefore, the permutation test readily accommodates a more general model for asset returns with Lévy shocks, as we demonstrate in the following example.

Example 2 (Lévy-driven Asset Returns). We generalize the model in Example 1 by replacing the Brownian motion W with a Lévy martingale L, so that the asset return is driven by the increments of L. In this case, we define the random disturbance as ε_{n,i} ≡ ∆_n^{−1/β}(L_{(i+1)∆_n} − L_{i∆_n}) for some constant β ∈ (1, 2]. The more general normalizing sequence ∆_n^{−1/β} is used to ensure that ε_{n,i} has a non-degenerate distribution. For instance, if L is a stable process, we take β to be its jump-activity index, so that ε_{n,i} has a centered stable distribution (recall that Brownian motion is a stable process with index β = 2). We treat the value of β as unknown. Since the permutation test is scale-invariant with respect to the data, we can nonetheless regard the normalized return Y_{n,i} = ∆_n^{−1/β}(P_{(i+1)∆_n} − P_{i∆_n}) as directly observable (because tests implemented on P_{(i+1)∆_n} − P_{i∆_n} and on Y_{n,i} are identical). To apply our theory, we represent Y_{n,i} using the state-space model (4) with ζ_t = σ_t, g(z, ε) = zε, and a residual term analogous to that in Example 1, now involving the drift and the increments of L. Recognizing that the scaled Lévy increments (ε_{n,i})_{i∈I_n} are i.i.d., we can verify Assumptions 2.2 and 2.3 using similar arguments as in Example 1, but with a_n = ∆_n^{1/2−1/β}, which captures the rate at which ‖ε_{n,i}‖₂ diverges.
In particular, the condition a_n k_n³ ∆_n^{1/2} = o(1) now requires k_n to obey k_n = o(∆_n^{(1/β−1)/3}), which reduces to the earlier rate k_n = o(∆_n^{−1/6}) when β = 2. Then, we can apply Theorem 2.2 to show that the permutation test φ̂_n is asymptotically valid for testing the discontinuity in the volatility process σ_t at time τ, regardless of whether the driving Lévy process is a Brownian motion or not.

So far, we have illustrated the use of the permutation test for high-frequency asset returns data. Under the settings of Examples 1 and 2, the distributional change of asset returns is mainly driven by the time-τ discontinuity in volatility, and hence the permutation test is effectively a test for volatility jumps. Example 2, in particular, highlights the versatility and robustness of the permutation test compared with the conventional approach based on spot estimation. Going one step further, we now illustrate how to apply the permutation test to other types of economic variables.

11 Recall that many distributions used in continuous-time models do not have finite second moments. For example, within the class of stable distributions, the Gaussian distribution is the only one with a finite second moment. Moreover, Gaussianity also implies that the variance of ∆_n^{−1}(W_{i∆_n} − W_{(i−1)∆_n})² is 2, which explains the "2" factor in the denominator of the t-statistic.

Example 3 (Location-Scale Model for Volume). Consider a simple model for trading volume, under which the volume within the i-th sampling interval is given by Y_{n,i} = µ_{i∆_n} + v_{i∆_n} ε_{n,i}. The µ_t location process captures the local mean, or trading intensity, and the v_t scale process captures time-varying heterogeneity in order size. This location-scale model fits directly into the state-space model (4) with ζ_t = (µ_t, v_t), g((µ, v), ε) = µ + vε, and R_{n,i} ≡ 0. Let F_t be the filtration generated by the ζ_t process. If ε_{n,i} is independent of the ζ_t process and has a finite second moment and a bounded PDF, then it is easy to verify Assumptions 2.2 and 2.3 with a_n = 1.
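The location-scale volume model of Example 3 is easy to experiment with numerically. The following sketch uses hypothetical parameter values (µ jumping from 100 to 150, v = 10, Gaussian disturbances) and a permutation p-value computed as in our earlier description of the test; none of the specifics are the authors'.

```python
# Location-scale volume model with a jump in trading intensity mu at tau.
import numpy as np

def perm_pvalue(pre, post, n_perm=500, seed=0):
    """Permutation p-value for the CvM-type EDF distance."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([pre, post])
    k = len(pre)
    def stat(a, b):
        F1 = np.searchsorted(np.sort(a), pooled, side="right") / len(a)
        F2 = np.searchsorted(np.sort(b), pooled, side="right") / len(b)
        return np.mean((F1 - F2) ** 2)
    t_obs = stat(pre, post)
    exceed = 0
    for _ in range(n_perm):
        idx = rng.permutation(2 * k)
        exceed += stat(pooled[idx[:k]], pooled[idx[k:]]) >= t_obs
    return (1 + exceed) / (1 + n_perm)

rng = np.random.default_rng(7)
k_n = 30
pre = 100.0 + 10.0 * rng.standard_normal(k_n)    # mu = 100, v = 10 before tau
post = 150.0 + 10.0 * rng.standard_normal(k_n)   # mu jumps to 150 after tau
p_val = perm_pvalue(pre, post)
```

With a location shift this large relative to the scale, the two local EDFs barely overlap, so the test should reject even with k_n = 30 observations per side.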
Theorem 2.2 then applies, establishing the asymptotic validity of the permutation test in this setting.

The location-scale structure in Example 3 is by no means essential in applications, because the permutation test is valid provided that the more general conditions in Assumptions 2.2 and 2.3 hold. This illustration is pedagogically convenient in that it permits a straightforward verification of our high-level conditions. That being said, this example does reveal a limitation of the theory developed so far: the data variable needs to be continuously distributed, as required by Assumption 2.2(ii) (which in turn is related to Assumption 2.1(ii)). Observed data in actual applications are invariably discrete, but this continuous-distribution assumption is often deemed a reasonable approximation to reality. In some situations, however, the discreteness of the data is more salient. For example, the trading volume of a relatively illiquid asset may take values that are small integer multiples of the lot size (e.g., 100 shares).12 This motivates us to confront the discreteness of the data directly, as detailed in the next subsection.

The extension is carried out in steps similar to those of the theory developed above. We start by modifying the generic result in Theorem 2.1 to accommodate discretely valued observations; we then specialize the general theory to a high-frequency setting under more primitive conditions. Recall that Q_{j,n}(·) denotes the G_n-conditional distribution function of the coupling variable U_{n,i} for i ∈ I_{j,n} and j ∈ {1, 2}, and Q̄_n = (Q_{1,n} + Q_{2,n})/2.

Theorem 2.3. Suppose that there exists a collection of variables (U_{n,i})_{i∈I_n} that satisfies Assumption 2.1(i) for some sequence (G_n)_{n≥1} of σ-fields, and P(Ỹ_{n,i} ≠ U_{n,i}) = o(k_n^{−1}) uniformly in i ∈ I_n, where (Ỹ_{n,i})_{i∈I_n} is an identical copy of (Y_{n,i})_{i∈I_n} in G_n-conditional distribution. Then, the following statements hold for the test φ̂_n described in Algorithm 1: (a) If the variables (U_{n,i})_{i∈I_n} have the same G_n-conditional distribution, we have E[φ̂_n] → α.
(b) If k_n → ∞ and P(∫ (Q_{1,n}(x) − Q_{2,n}(x))² dQ̄_n(x) > δ_n) → 1 for any real sequence δ_n = o(1), we have E[φ̂_n] → 1.

Theorem 2.3 establishes exactly the same asymptotic properties of the permutation test as Theorem 2.1, but under different conditions: it does not impose the anti-concentration requirement on the coupling variables (i.e., Assumption 2.1(ii)), and the "distance" between the observed data and the coupling variables is measured by the probability mass of {Ỹ_{n,i} ≠ U_{n,i}}. These modifications are natural for the discrete-data setting.

Next, we specialize the generic result in Theorem 2.3 to the state-space model (4), starting with some motivating examples. The first is an alternative model for trading volume that explicitly features discretely valued data, which provides an interesting contrast to Example 3.

Example 4 (Poisson Model for Volume). Let Y_{n,i} be the trading volume of an asset within the i-th sampling interval. Following Andersen (1996), we model the discretely valued volume using a Poisson distribution with a time-varying mean. To form a state-space representation, let (ε_{n,i}(t))_{t≥0} be a copy of the standard Poisson process on R₊, independent across i, and let ζ_t be the time-varying mean process, independent of the ε_{n,i}'s. We then set Y_{n,i} = ε_{n,i}(ζ_{i∆_n}), which, conditional on the ζ process, is Poisson distributed with mean ζ_{i∆_n}. This representation is a special case of (4), with g(ζ, ε) = ε(ζ) being a time change and R_{n,i} = 0. We also note that although the ε_{n,i}'s are assumed to be i.i.d., the (Y_{n,i})_{i∈I_n} series can be highly persistent through its dependence on the stochastic mean process ζ_t.

To further broaden the empirical scope, we consider another example concerning the bid-ask spread of asset quotes. This example is econometrically interesting because of its resemblance to the discrete-choice models (e.g., probit and logit) commonly used for modeling binary and multinomial data.

Example 5 (Bid-Ask Spread).
Let Y_{n,i} be the bid-ask spread of an asset at time i∆_n. For a liquid asset, the spread is often maintained at 1 tick (e.g., 1 cent), but it may widen to several ticks due to a higher level of asymmetric information or dealers' inventory costs. For ease of exposition, we suppose that Y_{n,i} is a binary variable taking values in {1, 2}, while noting that a multinomial extension is straightforward. Motivated by the classical discrete-choice models, we model the spread as Y_{n,i} = 1 + 1{ζ_{i∆_n} ≥ ε_{n,i}}, and suppose that the variables (ε_{n,i})_{i∈I_n} are i.i.d. and independent of the ζ_t process. With the CDF of ε_{n,i} denoted by F(·), we have P(Y_{n,i} = 2 | ζ_{i∆_n}) = F(ζ_{i∆_n}). Evidently, upon redefining ζ_t as F(ζ_t), we can assume without loss of generality that ε_{n,i} is uniformly distributed on the [0, 1] interval. This normalization in turn allows us to interpret ζ_t as the stochastic propensity of a "wide" spread, which may serve as a measure of market illiquidity.

We now proceed to establish the asymptotic validity of the permutation test for the hypotheses described in (6) for discretely valued observations; see Theorem 2.4 below. Since the state-space representation (4) holds with the residual term R_{n,i} = 0 in the examples above, it seems reasonable to avoid unnecessary redundancy by restricting our analysis to the simpler version given by

Y_{n,i} = g(ζ_{i∆_n}, ε_{n,i}).    (9)

We replace Assumption 2.3 with the following assumption, where we recall that for each z ∈ Z, F_z(·) denotes the CDF of the random variable g(z, ε_{n,i}), and ζ̃_t = ζ_t − ∆ζ_τ 1{t ≥ τ}.

Assumption 2.4. There exist a sequence (T_m)_{m≥1} of stopping times increasing to infinity, a sequence of compact subsets (K_m)_{m≥1} of Z, and a sequence (K_m)_{m≥1} of constants such that for each m ≥ 1: (i) P(g(z, ε_{n,i}) ≠ g(z′, ε_{n,i})) ≤ K_m |z − z′| for all z, z′ ∈ K_m; (ii) ζ_t takes values in K_m for all t ≤ T_m, and ‖ζ̃_{t∧T_m} − ζ̃_{s∧T_m}‖₂ ≤ K_m |t − s|^{1/2} for all t, s in some fixed neighborhood of τ.

Theorem 2.4.
In the state-space model (9), suppose that Assumptions 2.2(i), 2.2(iii), and 2.4 hold, and that k_n³ ∆_n = o(1). Then, the following statements hold for the permutation test φ̂_n described in Algorithm 1: (a) Under the null hypothesis in (6), i.e., ∆ζ_τ = 0, we have E[φ̂_n] → α; (b) Under a fixed alternative hypothesis in (6), i.e., ∆ζ_τ = c for some (unknown) constant c ≠ 0, and if k_n → ∞, we have E[φ̂_n] → 1.

Theorem 2.4 depicts the same asymptotic behavior of the permutation test as Theorem 2.2. The sufficient conditions of these results differ mainly in how they gauge the closeness between the data and the coupling variables, as manifested in the difference between Assumption 2.3(i) and Assumption 2.4(i). The latter is easy to verify under more primitive conditions in concrete settings. Specifically, in Example 4, we note that |g(z, ε_{n,i}) − g(z′, ε_{n,i})| is a Poisson random variable with mean |z − z′|, and hence P(g(z, ε_{n,i}) ≠ g(z′, ε_{n,i})) = 1 − exp(−|z − z′|) ≤ |z − z′|, as desired. In Example 5, we can use ε_{n,i} ∼ Uniform[0, 1] to deduce that P(g(z, ε_{n,i}) ≠ g(z′, ε_{n,i})) = P(1{z ≥ ε_{n,i}} ≠ 1{z′ ≥ ε_{n,i}}) = |z − z′|, which, again, verifies Assumption 2.4(i). Therefore, in the context of Examples 4 and 5, the permutation test is asymptotically valid for detecting discontinuities in trading activity and illiquidity, respectively.

Our Monte Carlo experiment is based on the setting of Example 2. We simulate the (log) price process according to dP_t = σ_t dL_t under an Euler scheme on a 1-second mesh, and then resample the data at the ∆_n = 1 minute frequency. We simulate L either as a standard Brownian motion or as a (centered symmetric) stable process with index β = 1.5. To avoid unrealistic price paths, we truncate the stable distribution so that the normalized increment ∆_n^{−1/β}(L_{i∆_n} − L_{(i−1)∆_n}) is supported on [−C, C], and we consider C ∈ {10, 20, 30} to examine the effect of the support. The unit of time is one day.
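The price simulation just described can be sketched as follows. This is our reading of the design: the parameter values (β = 1.5, truncation at C, 1-second mesh) come from the text, but the implementation, including the Chambers-Mallows-Stuck draw of symmetric stable increments, is an assumption made to keep the sketch self-contained.

```python
# Sketch of the Monte Carlo price simulation: Euler scheme on a 1-second
# mesh for dP_t = sigma_t dL_t, with L a truncated symmetric stable process.
import numpy as np

def simulate_prices(sigma_path, beta=1.5, C=10.0, mesh=1.0 / 23400, seed=0):
    """sigma_path: volatility on the 1-second mesh; returns the log-price path.
    Stable increments are drawn via the Chambers-Mallows-Stuck formula and
    truncated so the normalized increment lies in [-C, C]."""
    rng = np.random.default_rng(seed)
    n = len(sigma_path)
    U = rng.uniform(-np.pi / 2, np.pi / 2, n)
    W = rng.exponential(1.0, n)
    # symmetric stable(beta) draw (CMS representation, beta != 1)
    S = (np.sin(beta * U) / np.cos(U) ** (1.0 / beta)
         * (np.cos((1.0 - beta) * U) / W) ** ((1.0 - beta) / beta))
    S = np.clip(S, -C, C)                       # truncate the normalized increment
    dL = mesh ** (1.0 / beta) * S               # scaled Levy increments
    return np.cumsum(sigma_path * dL)           # Euler scheme for dP = sigma dL
```

Resampling the resulting path to the 1-minute grid then yields the returns fed into the tests.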
To simulate the volatility process, we first simulate two volatility factors according to the factor dynamics of Bollerslev and Todorov (2011), in which B_{1,t} and B_{2,t} are independent standard Brownian motions that are also independent of L_t, and ρ = −0.7 captures the negative correlation between price and volatility shocks (namely, the "leverage" effect). The V_1 volatility factor is highly persistent, with a half-life of 2.5 months, while the V_2 volatility factor is quickly mean-reverting, with a half-life of only one day. The constant c determines the size of the volatility jump at the event time τ. In particular, c = 0 corresponds to the null hypothesis, and we consider a range of c values in (0, 5] in order to trace out a power curve for the corresponding alternative hypotheses. The range of the c parameter is calibrated according to Bollerslev et al.'s (2018) empirical estimates for FOMC announcements.13

13 Specifically, Bollerslev et al. (2018) estimate the average jump size of log(σ_t) for the S&P 500 ETF around FOMC announcements to be 1.037 (see Table 3 of that paper). This suggests that σ²_τ/σ²_{τ−} = (exp(1.037))² ≈ 8 on average, corresponding to c ≈ 3.5 in this Monte Carlo design.

We note that the two volatility factors, V_1 and V_2, capture the slow- and fast-mean-reverting volatility dynamics, respectively, with the former having "smoother" sample paths than the latter. With this in mind, we simulate σ_t using two models, A and B. In finite samples, Model A features relatively smooth volatility paths, which is close to the "ideal" scenario underlying the infill asymptotic theory. Meanwhile, Model B generates a more realistic, and rougher, sample path for σ, providing a nontrivial challenge for the proposed inference theory.

We implement the permutation test at the 5% significance level, with the window size k_n ∈ {15, 30, 60, 90}. The six-fold increase from the smallest window size to the largest one represents
a considerable range that allows us to explore the robustness of the proposed test with respect to the k_n tuning parameter.14 The critical value is computed as in Remark 2.1, based on 1,000 i.i.d. permutations. For comparison, we also implement the standard (two-sided) t-test based on (8). Rejection frequencies are computed from 2,000 Monte Carlo trials.

14 We also implemented simulations in which the two subsamples, I_{1,n} and I_{2,n}, have different sample sizes k_{1,n}, k_{2,n} ∈ {15, 30, 60, 90}. As anticipated in footnote 4, these results are quantitatively similar to those with a common sample size, i.e., k_{1,n} = k_{2,n}. These additional results are omitted for brevity but are available upon request.

We first examine the size properties of the permutation test φ̂_n and the t-test based on (8). Table 1 reports the rejection frequencies of these tests under the null hypothesis (i.e., c = 0) for various data generating processes. Column (1) corresponds to the case in which L is a standard Brownian motion, and columns (2), (3), and (4) report results when L is a truncated stable process with index 1.5 and truncation parameter C = 10, 20, and 30, respectively. The top panel of the table shows results for Model A, where the volatility is driven solely by the "slow" factor. Quite remarkably, the rejection frequencies of the permutation test are very close to the 5% nominal level for all specifications of L and, importantly, for a wide range of the window size k_n. In contrast, the rejection rates of the t-test appear far more sensitive to the choice of k_n. As we increase k_n from 15 to 90, the rejection rate increases from 1.1% to 4.8% when L is a Brownian motion.
A similar pattern emerges when L is a truncated stable process, except that the rejection rates now exceed the nominal level, reaching 7.4% and 9.1% in columns (3) and (4), respectively. It is relevant to note that the t-test is not formally justified when L is not a Brownian motion.

The more challenging case is Model B, with its two-factor volatility dynamics. Looking at the bottom panel of Table 1, we find that the permutation test still has rejection rates quite close to the nominal level, although we see a slight over-rejection of 6.4% when k_n = 90. This is likely due to the fact that the approximation error in the coupling has a nontrivial impact when the window size is large. That being said, the benchmark t-test is more severely affected by this bias issue, with rejection rates reaching 9.1% and 13.6% when k_n = 60 and k_n = 90, respectively.

We next examine the power of the two tests under the alternative hypothesis, for both Model A and Model B. The four specifications of L produce qualitatively similar results. The rejection frequencies increase with the window size k_n and the jump size c, which is expected given our consistency result obtained under k_n → ∞. The permutation test appears to be less powerful than the t-test under the alternative hypothesis. This is a natural consequence of the typical trade-off between efficiency and robustness. The t-test is based on the spot variance estimator, which is "locally" the maximum-likelihood estimator of the spot variance under Brownian shocks. The asymptotic validity and efficiency of the t-test rely on the Brownian assumption. In contrast, the permutation test is asymptotically valid regardless of whether the shocks are Brownian or not. As expected, this robustness is costly in terms of power. On the flip side, the power advantage of the t-test comes at the cost of size distortions when shocks are non-Brownian, which can be large, as shown in Table 1.
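A stylized miniature of the size exercise can be run in a few lines. This sketch is far smaller than the paper's design (constant volatility, no drift, Gaussian shocks only, 300 trials, 200 permutations) and is meant only to illustrate that the permutation test's null rejection rate sits near the nominal level even for small k_n.

```python
# Miniature size study: rejection rate of the permutation test under H0.
import numpy as np

def perm_reject(pre, post, alpha=0.05, n_perm=200, seed=0):
    """True if the CvM-type permutation test rejects at level alpha."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([pre, post])
    k = len(pre)
    def stat(a, b):
        F1 = np.searchsorted(np.sort(a), pooled, side="right") / len(a)
        F2 = np.searchsorted(np.sort(b), pooled, side="right") / len(b)
        return np.mean((F1 - F2) ** 2)
    t_obs = stat(pre, post)
    exceed = sum(
        stat(*np.split(pooled[rng.permutation(2 * k)], [k])) >= t_obs
        for _ in range(n_perm)
    )
    return (1 + exceed) / (1 + n_perm) < alpha

rng = np.random.default_rng(3)
k_n, n_trials = 15, 300
rejections = 0
for trial in range(n_trials):
    returns = rng.standard_normal(2 * k_n)      # H0: constant volatility
    rejections += perm_reject(returns[:k_n], returns[k_n:],
                              seed=int(rng.integers(1 << 31)))
rate = rejections / n_trials
```

Because the observations are exchangeable under this null, the permutation p-value is (slightly conservatively) valid for any fixed k_n, which is the finite-sample property the paper's theory builds on.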
Overall, we find that the permutation test controls size remarkably well under the null hypothesis. Although it appears to be less powerful than the t-test, it does not suffer from the latter's size distortion, which can be severe in the two-factor volatility model. Our results suggest that, given its robustness, the permutation test is a useful complement to the conventional test based on spot estimation and asymptotic Gaussian approximation.

As an empirical illustration, we apply the proposed permutation test to a recent sample of high-frequency price and volume observations for the S&P 500 ETF (NYSE: SPY). The sampling frequency is one minute; the data source is the TAQ database. With the permutation test, we are interested in testing for distributional discontinuities in the ETF's return, trading volume, and two measures of illiquidity around several FOMC announcements during the COVID-19 pandemic. This setting highlights one of the key merits of the proposed test: it is applicable to a broad variety of high-frequency observations modeled in distinct ways. This is in sharp contrast to the conventional t-test based on (8), which is designed specifically for testing volatility jumps and whose validity relies on the assumption of Brownian shocks.

We construct the high-frequency volume series as the total number of shares traded within each one-minute session. The illiquidity measures of interest are Amihud's measure, defined as the ratio between the absolute return and the trading volume (Amihud (2002)), and the bid-ask spread averaged within each one-minute session.

We focus on four important FOMC announcements in the 2020-2021 sample period that relate to distinct aspects of the Federal Reserve's monetary policy during the COVID-19 pandemic. The first is the announcement made on March 3, 2020, which was also the first FOMC announcement after COVID-19 hit the United States.
The Fed stated its decision to lower the federal funds rate by 1/2 percentage point as its first response to counter the pandemic's negative impact on the economy. The second event occurred on December 16, 2020. At that time, concerned with rising long-term yields, many market participants expected that the Fed might implement the so-called "operation twist" to tame the steepening of the yield curve. This turned out not to be on the Fed's agenda, and so the Fed's "inaction" may be deemed a shock relative to the market's anticipation. The third case pertains to the announcement on March 17, 2021. During the press conference, the Fed Chairman suggested that the central bank would be unlikely to raise the rate in the next 2-3 years, which may be regarded as forward guidance on the target rate. The final example is the announcement on September 22, 2021, when the Fed officially declared its intention to taper its large-scale asset purchase program.

Figure 2 plots the asset return and trading volume of SPY over one-hour windows centered at these announcement times. For ease of comparison, we plot the return and volume data for the four events on the same scale. These plots immediately reveal the highly distinct market conditions at those times. This highlights the usefulness of adopting a high-frequency event-study research design, which allows us to investigate each event separately, rather than pooling information across different announcements under a likely fragile homogeneity assumption. We also observe several interesting patterns in how the market responds to the "lumpy" information embedded in the announcements. We generally see a rise in trading activity after the announcement. The price also tends to fluctuate more in the post-announcement window, although the March 3, 2020 event may be an exception, as the market was already quite volatile before the announcement.
As mentioned above, we implement the permutation test described in Algorithm 1 on the ETF's returns, trading volume, Amihud's measure, and bid-ask spread, constructed at the one-minute sampling frequency. For ease of interpretation, we consider the non-randomized version of the test described in Remark 2.1. We consider two event windows: k_n = 10 minutes and k_n = 30 minutes. Recall that each FOMC announcement consists of two stages. The first is the immediate release of a short summary on the Federal Reserve's webpage, which is then further detailed in the Fed Chairman's opening statement during the first (roughly) 10 minutes of the press conference. The second part is a Q&A session in which the Chairman responds to questions from the media, with the first few questions generally being the more important ones. Given this setup, the shorter k_n = 10 window allows us to focus on the immediate impact of the FOMC statement, whereas the longer k_n = 30 window additionally covers the "more subtle" policy information conveyed to the public during the Q&A session. It is worth noting that the 30-minute event window is also adopted in prior work on the high-frequency identification of monetary policy shocks; see Nakamura and Steinsson (2018a).

The latter non-rejection is consistent with our previous observation that the volatility of SPY was high even before the announcement. Meanwhile, the test also strongly rejects the null hypothesis of distributional continuity for the volume series, echoing the burst of trading activity seen in Figure 2. The permutation test applied to the two illiquidity measures generates mixed results. We find some moderate evidence for a distributional discontinuity in Amihud's measure shortly after the March 3 and December 16 announcements in 2020. For the 30-minute window, we do not reject the null of continuity for any of the announcements.
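The one-minute series fed into the test can be built with a simple pass over the bar data. The function below is a hedged sketch: the data layout (end-of-minute price marks aligned with per-minute share counts) is our assumption, and actual TAQ processing is omitted; the Amihud ratio itself follows the definition in the text, |return|/volume.

```python
# Sketch: one-minute returns, volume, and Amihud's illiquidity measure.
import numpy as np

def one_minute_features(prices, volumes):
    """prices: end-of-minute price marks; volumes: shares traded within each
    minute, aligned so volumes[i] covers the return from prices[i-1] to prices[i]."""
    log_ret = np.diff(np.log(np.asarray(prices, dtype=float)))
    vol = np.asarray(volumes, dtype=float)[1:]
    amihud = np.abs(log_ret) / np.where(vol > 0, vol, np.nan)  # Amihud (2002)
    return log_ret, vol, amihud

# hypothetical bar data for four consecutive minutes
prices = [300.00, 300.30, 299.80, 300.10]
volumes = [5000, 7000, 6500, 8000]
r, v, a = one_minute_features(prices, volumes)
```

Each resulting series can then be split into the k_n pre- and post-announcement observations and passed to the permutation test.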
The overall evidence suggests that the FOMC announcements under study did not lead to an abrupt change in the market-impact coefficient gauged by Amihud's measure. Needless to say, this finding per se does not imply that liquidity conditions were unchanged after the announcements, as liquidity is a multifaceted concept. Indeed, we see that the permutation test applied to the bid-ask spread always strongly rejects the null of continuity. The post-announcement spread tends to be larger than its pre-announcement level, suggesting that it is significantly more costly to trade during the post-announcement session.

All in all, the empirical illustration above demonstrates how the proposed permutation test may be used to test for distributional discontinuities in a variety of market variables. This type of versatility is not easily attained by existing methods in the high-frequency econometrics literature. We also see that interesting empirical findings may be obtained even with a small number of observations, which confirms the practical relevance of allowing the k_n window to be fixed in our asymptotic theory for the permutation test.

In this paper, we propose using a permutation test to detect discontinuities in an economic model at a cutoff point. Relative to the existing literature, we show that the permutation test is well suited for event studies based on time-series data. While nonparametric t-tests have been widely used for this purpose in various empirical contexts, the permutation test proposed in this paper provides a distinct alternative. Instead of relying on asymptotic (mixed) Gaussianity derived from central limit theorems, we exploit the finite-sample properties of the permutation test in the approximating, or "coupling," two-sample problem.
We demonstrate that our new theory is useful in a wide range of problems in the infill asymptotic time-series setting, which justifies using the permutation test to detect jumps in economic variables such as volatility, trading activity, and liquidity. Compared with the conventional nonparametric t-test, the proposed permutation test has several distinct features. First, it provides asymptotic size control regardless of whether the sizes of the local subsamples are fixed or grow to infinity; in the latter case, we also establish that the test is consistent. Second, the permutation test is versatile: it can be applied without modification in many different contexts and under relatively weak conditions.

Throughout the proofs, we use $K$ to denote a positive constant that may change from line to line, and write $K_p$ to emphasize its dependence on some parameter $p$. For any event $E \in \mathcal{F}$, we identify it with the associated indicator random variable.

Proof of Theorem 2.1. Step 1. Define $\varphi_n$ in the same way as $\hat\varphi_n$ but with $(Y_{n,i})_{i \in I_n}$ replaced by $(U_{n,i})_{i \in I_n}$. In this step, we show that
$$\mathbb{E}[\hat\varphi_n] = \mathbb{E}[\varphi_n] + o(1). \tag{11}$$
Let $\tilde\varphi_n$ be defined in the same way as $\hat\varphi_n$, but with $(Y_{n,i})_{i \in I_n}$ replaced by $(\tilde Y_{n,i})_{i \in I_n}$, as defined in Assumption 2.1(iii). Since $(\tilde Y_{n,i})_{i \in I_n}$ and $(Y_{n,i})_{i \in I_n}$ have the same (conditional) distribution,
$$\mathbb{E}[\hat\varphi_n] = \mathbb{E}[\tilde\varphi_n]. \tag{12}$$
Let $E_n \in \mathcal{F}$ be the event on which the ordered values of $(U_{n,i})_{i \in I_n}$ and $(\tilde Y_{n,i})_{i \in I_n}$ correspond to the same permutation of $I_n$. Since the test statistic is a function only of the ranks of the observations, we have $\tilde\varphi_n = \varphi_n$ in restriction to $E_n$. Hence,
$$\big|\mathbb{E}[\tilde\varphi_n] - \mathbb{E}[\varphi_n]\big| \le \mathbb{P}(E_n^c). \tag{13}$$
By (12) and (13), (11) follows from $\mathbb{P}(E_n^c) = o(1)$, which will be proved below. Let $A_{n,i,j} \equiv \{U_{n,j} - U_{n,i} \ge 0,\ \tilde Y_{n,j} - \tilde Y_{n,i} < 0\}$ for every $(i,j) \in I_n \times I_n$ with $i \ne j$, and note that $E_n^c \subseteq \cup_{i \ne j} A_{n,i,j}$. Recall the elementary fact that if a sequence of random variables satisfies $X_n = o_p(1)$, then there exists a real sequence $\delta_n = o(1)$ such that $\mathbb{P}(|X_n| \le \delta_n) \to 1$.
Under Assumption 2.1(iii), by applying this result to $X_n = 2\max_{i \in I_n}|\tilde Y_{n,i} - U_{n,i}|\,k_n^2$, we can find a sequence $\delta_n = o(1)$ such that
$$\mathbb{P}\Big(\max_{i \in I_n}\big|\tilde Y_{n,i} - U_{n,i}\big| \le \delta_n k_n^{-2}/2\Big) \to 1. \tag{14}$$
We then observe that, for all $i \ne j$,
$$A_{n,i,j} \cap \Big\{\max_{i \in I_n}\big|\tilde Y_{n,i} - U_{n,i}\big| \le \delta_n k_n^{-2}/2\Big\} \subseteq \big\{0 \le U_{n,j} - U_{n,i} < \delta_n k_n^{-2}\big\}.$$
Therefore,
$$\mathbb{P}(E_n^c) \le \sum_{i \ne j} \mathbb{P}\big(0 \le U_{n,j} - U_{n,i} < \delta_n k_n^{-2}\big) + \mathbb{P}\Big(\max_{i \in I_n}\big|\tilde Y_{n,i} - U_{n,i}\big| > \delta_n k_n^{-2}/2\Big), \tag{15}$$
which, together with (14), implies that
$$\mathbb{P}(E_n^c) \le \sum_{i \ne j} \mathbb{P}\big(0 \le U_{n,j} - U_{n,i} < \delta_n k_n^{-2}\big) + o(1).$$
Next, consider the following argument:
$$\sum_{i \ne j} \mathbb{P}\big(0 \le U_{n,j} - U_{n,i} < \delta_n k_n^{-2} \,\big|\, \mathcal{G}_n\big) \le (2k_n)^2 \cdot K\delta_n k_n^{-2} = 4K\delta_n = o(1), \tag{16}$$
where the inequality holds by Assumption 2.1(ii). By (16) and the bounded convergence theorem,
$$\sum_{i \ne j} \mathbb{P}\big(0 \le U_{n,j} - U_{n,i} < \delta_n k_n^{-2}\big) = o(1). \tag{17}$$
By combining (15) and (17), we conclude that $\mathbb{P}(E_n^c) = o(1)$, as desired.

Step 2. We now prove the assertions in parts (a) and (b) of the theorem. In view of (11), we only need to prove $\mathbb{E}[\varphi_n] \to \alpha$ and $\mathbb{E}[\varphi_n] \to 1$ in these two parts, respectively. For part (a), note that $(U_{n,i})_{i \in I_n}$ are conditionally i.i.d., so permutations constitute a group of transformations that satisfies the randomization hypothesis in Lehmann and Romano (2005, Definition 15.2.1). Then, Lehmann and Romano (2005, Theorem 15.2.1) implies that $\mathbb{E}[\varphi_n \,|\, \mathcal{G}_n] = \alpha$, and $\mathbb{E}[\varphi_n] = \alpha$ then follows from the law of iterated expectations.

To prove part (b), we need some additional notation. To emphasize the dependence of $T_n$, $T_n^*$, and $\hat\varphi_n$ on the original data $(Y_{n,i})_{i \in I_n}$, we write them explicitly as $T_n(Y)$, $T_n^*(Y)$, and $\hat\varphi_n(Y)$. With this notation, we can write $\varphi_n = \hat\varphi_n(U)$, since it is computed in the same way as $\hat\varphi_n$ but with $(Y_{n,i})_{i \in I_n}$ replaced by $(U_{n,i})_{i \in I_n}$. We first analyze the asymptotic behavior of $T_n(U)$. Define the empirical analogue of $Q_{j,n}(\cdot)$ as $\hat Q_{j,n}(x) \equiv \frac{1}{k_n}\sum_{i \in I_{j,n}} 1\{U_{n,i} \le x\}$. Since the variables $(U_{n,i})_{i \in I_{j,n}}$ are $\mathcal{G}_n$-conditionally i.i.d.,
$$\mathbb{E}\Big[\big(\hat Q_{j,n}(x) - Q_{j,n}(x)\big)^2 \,\Big|\, \mathcal{G}_n\Big] \le \frac{K}{k_n}, \quad x \in \mathbb{R}.$$
By Markov's inequality and the law of iterated expectations, this implies that $\hat Q_{j,n}(x) - Q_{j,n}(x) = o_p(1)$ for each $x \in \mathbb{R}$. This and a classical Glivenko-Cantelli-type theorem (e.g., Davidson (1994, Theorem 21.5)) imply that
$$\sup_{x \in \mathbb{R}}\big|\hat Q_{j,n}(x) - Q_{j,n}(x)\big| = o_p(1). \tag{18}$$
By definition,
$$T_n(U) = \frac{1}{2k_n}\sum_{i \in I_n}\big(\hat Q_{1,n}(U_{n,i}) - \hat Q_{2,n}(U_{n,i})\big)^2.$$
In addition, we define $S_n \equiv \frac{1}{2k_n}\sum_{i \in I_n}\big(Q_{1,n}(U_{n,i}) - Q_{2,n}(U_{n,i})\big)^2$. Note that the functions $\hat Q_{j,n}(\cdot)$ and $Q_{j,n}(\cdot)$ are uniformly bounded.
Hence, by the triangle inequality and (18),
$$|T_n(U) - S_n| = o_p(1). \tag{19}$$
Conditional on $\mathcal{G}_n$, the bounded random functions $Q_{1,n}(\cdot)$ and $Q_{2,n}(\cdot)$ can be treated as deterministic functions. Next, note that
$$S_n = \frac{1}{2}\sum_{j \in \{1,2\}} \int \big(Q_{1,n}(x) - Q_{2,n}(x)\big)^2\, dQ_{j,n}(x) + o_p(1) = \int \big(Q_{1,n}(x) - Q_{2,n}(x)\big)^2\, dQ_n(x) + o_p(1), \tag{20}$$
where the first equality holds by a law of large numbers for the conditionally i.i.d. variables $(U_{n,i})_{i \in I_{j,n}}$ for $j = 1, 2$, and the second equality holds by the definition of $Q_n$. By combining (19) and (20), we deduce that
$$T_n(U) = \int \big(Q_{1,n}(x) - Q_{2,n}(x)\big)^2\, dQ_n(x) + o_p(1). \tag{21}$$
In turn, by (21) and the condition in part (b), we conclude that
$$\mathbb{P}\big(T_n(U) > \delta_n\big) \to 1 \quad \text{for any real sequence } \delta_n = o(1). \tag{22}$$
Next, we analyze the asymptotic behavior of $T_n^*(U)$. It is useful to consider the following representation of this variable. We denote $U_{\hat\pi} = (U_{n,\hat\pi(i)})_{i \in I_n}$, where $\hat\pi$ is a random permutation of $I_n$, independent of the data, drawn uniformly from the set of all permutations of $I_n$. By definition, $T_n^*(U)$ is the $1-\alpha$ quantile of $T_n(U_{\hat\pi})$, conditional on the sample, where the randomness comes from the random realization of $\hat\pi$. To analyze the permutation distribution, we construct an additional coupling sequence of $(U_{n,i})_{i \in I_n}$ following the method of Chung and Romano (2013, Section 5.3). We note that their coupling construction does not require the null hypothesis to hold, and it is thus suitable for our current purposes. The result of their construction is another random sequence $(\bar U_{n,i})_{i \in I_n}$ such that (i) $\bar U_{n,i} = U_{n,i}$ for all $i$ in some random subset $\bar I_n \subseteq I_n$; (ii) the cardinality of $I_n \setminus \bar I_n$, denoted $D_n$, satisfies $\mathbb{E}[D_n] = O(k_n^{1/2})$; and (iii) $(\bar U_{n,i})_{i \in I_n}$ are $\mathcal{G}_n$-conditionally i.i.d. with marginal distribution $Q_n$. For any fixed permutation $\pi$ and for $j \in \{1,2\}$, define
$$\hat Q_{j,n}(x;\pi) \equiv \frac{1}{k_n}\sum_{i \in I_{j,n}} 1\{U_{n,\pi(i)} \le x\} \quad \text{and} \quad \bar Q_{j,n}(x;\pi) \equiv \frac{1}{k_n}\sum_{i \in I_{j,n}} 1\{\bar U_{n,\pi(i)} \le x\}.$$
By repeatedly using the triangle inequality,
$$\begin{aligned}
\big|T_n(U_\pi) - T_n(\bar U_\pi)\big| &= \frac{1}{2k_n}\bigg|\sum_{i \in I_n}\Big[\big(\hat Q_{1,n}(U_{n,\pi(i)};\pi) - \hat Q_{2,n}(U_{n,\pi(i)};\pi)\big)^2 - \big(\bar Q_{1,n}(\bar U_{n,\pi(i)};\pi) - \bar Q_{2,n}(\bar U_{n,\pi(i)};\pi)\big)^2\Big]\bigg| \\
&\le \frac{K}{k_n}\sum_{j \in \{1,2\}}\sum_{i \in I_n}\big|\hat Q_{j,n}(U_{n,\pi(i)};\pi) - \bar Q_{j,n}(\bar U_{n,\pi(i)};\pi)\big| \le \frac{K D_n}{k_n} = o_p(1),
\end{aligned} \tag{23}$$
where the last inequality uses the fact that $(U_{n,i}, U_{n,k}) = (\bar U_{n,i}, \bar U_{n,k})$ whenever $(i,k) \in \bar I_n \times \bar I_n$, so that the double summation in the previous line contains at most $(2k_n)^2 - (2k_n - D_n)^2 \le 4k_n D_n$ bounded terms that can differ from zero; the $o_p(1)$ statement follows from $\mathbb{E}[D_n] = O(k_n^{1/2})$, $k_n \to \infty$, and Markov's inequality. For any fixed permutation $\pi$, $T_n(\bar U_\pi)$ is the Cramér-von Mises statistic for the $\mathcal{G}_n$-conditionally i.i.d. variables $(\bar U_{n,\pi(i)})_{i \in I_n}$. Hence, by an argument similar to the one leading to (21), we have $T_n(\bar U_\pi) = o_p(1)$. By combining this with (23), it follows that
$$T_n(U_\pi) = o_p(1). \tag{24}$$
Since this result holds for any fixed permutation $\pi$, it also holds for any pair of permutations drawn at random from the set of all permutations of $I_n$, independently of the data. By elementary properties of stochastic convergence, this implies the so-called Hoeffding's condition (e.g., Lehmann and Romano (2005, Equation (15.10))). By this and Lehmann and Romano (2005, Theorem 15.2.3), the permutation distribution associated with the test statistic $T_n(U)$, conditional on the data, concentrates at zero in probability. As a corollary,
$$T_n^*(U) = o_p(1). \tag{25}$$
From (22) and (25), it is easy to see that $T_n(U) > T_n^*(U)$ with probability approaching one. This further implies that $\mathbb{E}[\varphi_n] \to 1$, which, together with (11), proves the assertion of part (b).

Proof of Theorem 2.2. (a) We prove the assertion of part (a) by applying Theorem 2.1(a). We construct the coupling variables as
$$U_{n,i} = g\big(\zeta_{(i^*-k_n)\Delta_n}, \varepsilon_{n,i}\big), \quad i \in I_n, \tag{26}$$
and set $\mathcal{G}_n = \mathcal{F}_{(i^*-k_n)\Delta_n}$. By Assumption 2.2, $(\varepsilon_{n,i})_{i \in I_n}$ are i.i.d. and independent of $\mathcal{G}_n$. Since $\zeta_{(i^*-k_n)\Delta_n}$ is $\mathcal{G}_n$-measurable, the variables $(U_{n,i})_{i \in I_n}$ are $\mathcal{G}_n$-conditionally i.i.d.
This verifies the condition in part (a) of Theorem 2.1, which also implies Assumption 2.1(i). It remains to verify conditions (ii) and (iii) of Assumption 2.1. By a standard localization argument (see Jacod and Protter (2012, Section 4.4.1)), we can strengthen Assumption 2.3 by assuming that $T_1 = \infty$, $\mathcal{K}_m = \mathcal{K}$, and $K_m = K$ for some fixed compact set $\mathcal{K}$ and constant $K > 0$. In particular, $\zeta_{(i^*-k_n)\Delta_n}$ takes values in the compact set $\mathcal{K}$. By Assumption 2.2, it is then easy to see that the $\mathcal{G}_n$-conditional probability density of $U_{n,i} = g(\zeta_{(i^*-k_n)\Delta_n}, \varepsilon_{n,i})$ is uniformly bounded (and does not depend on $i$). This implies condition (ii) of Assumption 2.1.

Finally, we verify condition (iii) of Assumption 2.1. By Assumption 2.2(i), for each $i \in I_n$, $\varepsilon_{n,i}$ is independent of $\mathcal{F}_{i\Delta_n}$. Since $\zeta_{i\Delta_n}$ and $\zeta_{(i^*-k_n)\Delta_n}$ are $\mathcal{F}_{i\Delta_n}$-measurable, we deduce from Assumption 2.3(i) that
$$\big\|\tilde\zeta_{i\Delta_n} - \tilde\zeta_{(i^*-k_n)\Delta_n}\big\|_2 \le K (k_n\Delta_n)^{1/2}. \tag{27}$$
Note that under the null hypothesis, with $\Delta\zeta_\tau = 0$, the processes $\zeta_t$ and $\tilde\zeta_t$ are identical. Hence, by Assumption 2.3(ii) and (27),
$$\big\|g(\zeta_{i\Delta_n}, \varepsilon_{n,i}) - g(\zeta_{(i^*-k_n)\Delta_n}, \varepsilon_{n,i})\big\|_2 \le K a_n k_n^{1/2}\Delta_n^{1/2}. \tag{28}$$
By the maximal inequality under the $L_2$ norm (see, e.g., van der Vaart and Wellner (1996, Lemma 2.2.2)), we further deduce that
$$\Big\|\max_{i \in I_n}\big|g(\zeta_{i\Delta_n}, \varepsilon_{n,i}) - g(\zeta_{(i^*-k_n)\Delta_n}, \varepsilon_{n,i})\big|\Big\|_2 \le K a_n k_n \Delta_n^{1/2}.$$
Recall that $a_n k_n^3 \Delta_n^{1/2} = o(1)$ by assumption. Hence,
$$\max_{i \in I_n}\big|g(\zeta_{i\Delta_n}, \varepsilon_{n,i}) - g(\zeta_{(i^*-k_n)\Delta_n}, \varepsilon_{n,i})\big| = o_p\big(k_n^{-2}\big). \tag{29}$$
Note that, by the definitions in (4) and (26),
$$Y_{n,i} - U_{n,i} = g(\zeta_{i\Delta_n}, \varepsilon_{n,i}) - g(\zeta_{(i^*-k_n)\Delta_n}, \varepsilon_{n,i}) + R_{n,i}. \tag{30}$$
Combining (29), (30), and Assumption 2.3(iii), we deduce that $\max_{i \in I_n}|Y_{n,i} - U_{n,i}| = o_p(k_n^{-2})$, which verifies Assumption 2.1(iii). We have now verified all the conditions needed in Theorem 2.1(a), which proves the assertion of part (a) of Theorem 2.2.

(b) We prove the assertion of part (b) by applying Theorem 2.1(b). Under the maintained alternative hypothesis, we have $\Delta\zeta_\tau = c$ for some constant $c \ne 0$. The coupling variables now take the form
$$U_{n,i} = \begin{cases} g\big(\zeta_{(i^*-k_n)\Delta_n}, \varepsilon_{n,i}\big) & i \in I_{1,n}, \\ g\big(\zeta_{(i^*-k_n)\Delta_n} + c, \varepsilon_{n,i}\big) & i \in I_{2,n}. \end{cases} \tag{31}$$
Under Assumption 2.2, it is easy to see that, for each $j \in \{1,2\}$, the variables $(U_{n,i})_{i \in I_{j,n}}$ are $\mathcal{G}_n$-conditionally i.i.d., which verifies Assumption 2.1(i). We now turn to the remaining conditions in Assumption 2.1. As in part (a), we can invoke the standard localization procedure and assume that the $\zeta_t$ process takes values in a compact set $\mathcal{K}$. Note that
$$\zeta_{(i^*-k_n)\Delta_n} = \zeta_{\tau-} + o_p(1),$$
where the $o_p(1)$ statement follows from the fact that the $\zeta_t$ process is càdlàg and $k_n\Delta_n \to 0$. Therefore, by enlarging the compact set $\mathcal{K}$ slightly if necessary, we also have $\zeta_{(i^*-k_n)\Delta_n} + c \in \mathcal{K}$ with probability approaching one. Then, we can verify Assumption 2.1(ii) following the same argument as in part (a). The verification of Assumption 2.1(iii) is also similar.

Finally, we verify the condition in Theorem 2.1(b) pertaining to the conditional CDFs. Note that $Q_{1,n}(x) = F_{\zeta_{(i^*-k_n)\Delta_n}}(x)$ and $Q_{2,n}(x) = F_{\zeta_{(i^*-k_n)\Delta_n}+c}(x)$. It is then easy to see that
$$2\int \big(Q_{1,n}(x) - Q_{2,n}(x)\big)^2\, dQ_n(x) \ge \int \Big(F_{\zeta_{(i^*-k_n)\Delta_n}}(x) - F_{\zeta_{(i^*-k_n)\Delta_n}+c}(x)\Big)^2\, dF_{\zeta_{(i^*-k_n)\Delta_n}}(x).$$
Since $\zeta_{(i^*-k_n)\Delta_n}$ takes values in the compact set $\mathcal{K}$, Assumption 2.2(iii) implies that the lower bound in the above display is bounded away from zero. Hence, $\mathbb{P}\big(\int (Q_{1,n}(x) - Q_{2,n}(x))^2\, dQ_n(x) > \delta_n\big) \to 1$ for any real sequence $\delta_n = o(1)$. We have now verified all conditions for Theorem 2.1(b), which proves the assertion of part (b) of Theorem 2.2.

Proof of Theorem 2.3. This proof follows from arguments similar to those used to prove Theorem 2.1. For brevity, we focus on the only substantial difference, namely how we establish that $\mathbb{P}(E_n^c) = o(1)$. Recall that $E_n$ denotes the event on which the ordered values of $(U_{n,i})_{i \in I_n}$ and $(\tilde Y_{n,i})_{i \in I_n}$ correspond to the same permutation of $I_n$.
In the present case, this result follows from
$$\mathbb{P}(E_n^c) \le \mathbb{P}\Big(\bigcup_{i \in I_n}\{\tilde Y_{n,i} \ne U_{n,i}\}\Big) \le \sum_{i \in I_n} \mathbb{P}\big(\tilde Y_{n,i} \ne U_{n,i}\big) = o(1),$$
where the first inequality follows from $E_n^c \subseteq \cup_{i \in I_n}\{\tilde Y_{n,i} \ne U_{n,i}\}$ and the convergence follows from the assumption that $\mathbb{P}(\tilde Y_{n,i} \ne U_{n,i}) = o(k_n^{-1})$ uniformly in $i \in I_n$.

Proof of Theorem 2.4. (a) We prove this assertion by applying Theorem 2.3(a). We verify the conditions in Theorem 2.3 for $\tilde Y_{n,i} = Y_{n,i}$, $U_{n,i} = g(\zeta_{(i^*-k_n)\Delta_n}, \varepsilon_{n,i})$, and $\mathcal{G}_n = \mathcal{F}_{(i^*-k_n)\Delta_n}$. By assumption, the variables $(\varepsilon_{n,i})_{i \in I_n}$ are i.i.d. and independent of $\mathcal{G}_n$. Hence, the variables $(U_{n,i})_{i \in I_n}$ are $\mathcal{G}_n$-conditionally i.i.d. It remains to verify that $\mathbb{P}(Y_{n,i} \ne U_{n,i}) = o(k_n^{-1})$ uniformly in $i \in I_n$. By repeating the localization argument used in the proof of Theorem 2.2, we can strengthen Assumption 2.4 with $T_1 = \infty$ without loss of generality. In particular, $\zeta_t$ takes values in some compact subset $\mathcal{K} \subseteq \mathcal{Z}$. Note that for each $i \in I_n$, $\varepsilon_{n,i}$ is independent of $(\zeta_{i\Delta_n}, \zeta_{(i^*-k_n)\Delta_n})$. By Assumption 2.4(i), we thus have
$$\mathbb{P}\big(Y_{n,i} \ne U_{n,i} \,\big|\, \mathcal{G}_n\big) \le K\big\|\zeta_{i\Delta_n} - \zeta_{(i^*-k_n)\Delta_n}\big\|.$$
Then, by Assumption 2.4(ii), we further have $\mathbb{P}(Y_{n,i} \ne U_{n,i}) \le K(k_n\Delta_n)^{1/2}$. The condition $\mathbb{P}(Y_{n,i} \ne U_{n,i}) = o(k_n^{-1})$ then follows from $k_n^3\Delta_n = o(1)$. By Theorem 2.3(a), we have $\mathbb{E}[\hat\varphi_n] \to \alpha$, as asserted.

(b) We prove this assertion by applying Theorem 2.3(b). We verify the conditions in Theorem 2.3 for $\tilde Y_{n,i} = Y_{n,i}$, $\mathcal{G}_n = \mathcal{F}_{(i^*-k_n)\Delta_n}$, and
$$U_{n,i} = \begin{cases} g\big(\zeta_{(i^*-k_n)\Delta_n}, \varepsilon_{n,i}\big) & \text{if } i \in I_{1,n}, \\ g\big(\zeta_{(i^*-k_n)\Delta_n} + c, \varepsilon_{n,i}\big) & \text{if } i \in I_{2,n}. \end{cases}$$
Following the same argument as in part (a), we see that $(U_{n,i})_{i \in I_{j,n}}$ are $\mathcal{G}_n$-conditionally i.i.d. for each $j \in \{1,2\}$, and $\mathbb{P}(Y_{n,i} \ne U_{n,i}) = o(k_n^{-1})$ uniformly in $i \in I_n$. Assumption 2.2(iii) also ensures that $\mathbb{P}\big(\int (Q_{1,n}(x) - Q_{2,n}(x))^2\, dQ_n(x) > \delta_n\big) \to 1$ for any real sequence $\delta_n = o(1)$. By Theorem 2.3(b), we have $\mathbb{E}[\hat\varphi_n] \to 1$, as asserted.
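The Glivenko-Cantelli approximation in (18), which underlies the consistency arguments above, is easy to illustrate numerically. The short Python sketch below is our own illustration (the variable names are hypothetical): it draws a single window of conditionally i.i.d. observations and confirms that the empirical CDF tracks the true CDF uniformly once $k_n$ is large.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal CDF, playing the role of the conditional CDF Q_{j,n}.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

rng = np.random.default_rng(0)
kn = 20000                               # large local window: the k_n -> infinity regime
u = np.sort(rng.standard_normal(kn))     # conditionally i.i.d. draws, sorted
q_hat = np.arange(1, kn + 1) / kn        # empirical CDF evaluated at the order statistics
q_true = np.array([norm_cdf(x) for x in u])
sup_gap = np.abs(q_hat - q_true).max()   # finite-sample analogue of sup_x |Qhat - Q|
```

By the Dvoretzky-Kiefer-Wolfowitz inequality, this gap falls below 0.02 with overwhelming probability at this sample size, in line with the $o_p(1)$ statement in (18).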
As explained in Remark 2.2, our main results extend beyond the Cramér-von Mises statistic in (1). We characterize the relevant class of test statistics through the following high-level assumption.

Assumption 5.1. $T_n \equiv \Psi_n((Y_{n,i})_{i \in I_n})$, where $(\Psi_n)_{n \in \mathbb{N}}$ is a sequence of functions that satisfies the following conditions: (a) $T_n$ is a rank statistic; that is, for any $(Y_{n,i})_{i \in I_n}$ and $(Y'_{n,i})_{i \in I_n}$ with $\mathrm{sign}(Y_{n,i} - Y_{n,j}) = \mathrm{sign}(Y'_{n,i} - Y'_{n,j})$ for all $i, j \in I_n$, we have $\Psi_n((Y_{n,i})_{i \in I_n}) = \Psi_n((Y'_{n,i})_{i \in I_n})$. (b) For any $(Y_{n,i})_{i \in I_n}$ and $(Y'_{n,i})_{i \in I_n}$, $|\Psi_n((Y_{n,i})_{i \in I_n}) - \Psi_n((Y'_{n,i})_{i \in I_n})| = O_p(D_n/k_n)$, where $D_n = |\{i \in I_n : Y_{n,i} \ne Y'_{n,i}\}|$.

Assumption 5.1 is satisfied by a large class of test statistics, including the Cramér-von Mises and Kolmogorov-Smirnov statistics. In fact, the proof of Theorem 2.1 shows that the Cramér-von Mises statistic satisfies Assumption 5.1, and an analogous argument extends this to the Kolmogorov-Smirnov statistic. We now briefly discuss the assumption. Assumption 5.1(a) is an essential ingredient of our methodology. In turn, Assumption 5.1(b) is a mild regularity condition that limits the influence that a few sample observations can have on the test statistic, and it is only required for our consistency result. The following result establishes Theorem 2.1 for any test statistic that satisfies Assumption 5.1. Since Theorem 2.1 is the key to all of the results in the paper, this effectively implies that our findings extend to the class of statistics characterized by Assumption 5.1, as claimed in Remark 2.2.

Theorem 5.1. Suppose Assumptions 2.1 and 5.1 hold (with $T_n$ as in Assumption 5.1 instead of (1)). (a) If the variables $(U_{n,i})_{i \in I_n}$ have the same $\mathcal{G}_n$-conditional distribution, then $\mathbb{E}[\hat\varphi_n] \to \alpha$. (b) Let $T_n(U)$ denote the test statistic applied to $(U_{n,i})_{i \in I_n}$ instead of $(Y_{n,i})_{i \in I_n}$. If $k_n \to \infty$ and $\mathbb{P}(T_n(U) > \delta_n) \to 1$ for any real sequence $\delta_n = o(1)$, then $\mathbb{E}[\hat\varphi_n] \to 1$.

Proof.
This proof closely follows that of Theorem 2.1, which has two steps. Step 1 remains unchanged, as it relies only on Assumption 2.1 and the fact that $T_n$ is a rank statistic, imposed in Assumption 5.1(a). Part (a) of Step 2 also remains unchanged, as it is entirely based on Step 1. To complete the proof, it then suffices to cover the analog of part (b) of Step 2.

We begin by considering the asymptotic behavior of $T_n^*(U)$, i.e., the $1-\alpha$ quantile of $T_n(U_{\hat\pi})$, conditional on the sample, where the randomness comes from the realization of $\hat\pi$. As in the proof of Theorem 2.1, we rely on the coupling construction based on Chung and Romano (2013, Section 5.3), which produces a random sequence $(\bar U_{n,i})_{i \in I_n}$ such that (i) $\bar U_{n,i} = U_{n,i}$ for all $i$ in some random subset $\bar I_n \subseteq I_n$; (ii) $D_n = |\{i \in I_n : \bar U_{n,i} \ne U_{n,i}\}|$ satisfies $\mathbb{E}[D_n] = O(k_n^{1/2})$; and (iii) $(\bar U_{n,i})_{i \in I_n}$ are $\mathcal{G}_n$-conditionally i.i.d. Then, for any fixed permutation $\pi$,
$$\big|T_n(U_\pi) - T_n(\bar U_\pi)\big| = \big|\Psi_n\big((U_{n,\pi(i)})_{i \in I_n}\big) - \Psi_n\big((\bar U_{n,\pi(i)})_{i \in I_n}\big)\big| = O_p(D_n/k_n) = o_p(1), \tag{32}$$
where the second equality relies on $|\{i \in I_n : U_{n,\pi(i)} \ne \bar U_{n,\pi(i)}\}| \le D_n$ and Assumption 5.1(b), and the last equality relies on $\mathbb{E}[D_n] = O(k_n^{1/2})$, $k_n \to \infty$, and Markov's inequality. We can then repeat the arguments in the proof of Theorem 2.1 to conclude that $T_n(\bar U_\pi) = o_p(1)$ for any fixed permutation $\pi$. Combining this with (32), we conclude that $T_n(U_\pi) = o_p(1)$. Since $\pi$ was arbitrary, it then follows that
$$T_n^*(U) = o_p(1). \tag{33}$$
From the assumption in part (b) and (33), it is easy to see that $T_n(U) > T_n^*(U)$ with probability approaching one. We can then repeat the remaining arguments in the proof of Theorem 2.1 to complete this proof.
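To make Assumption 5.1(a) concrete, the following Python sketch (again our own illustration, with hypothetical helper names) numerically verifies that both the Cramér-von Mises statistic in (1) and a Kolmogorov-Smirnov analogue are rank statistics: applying a strictly increasing transformation to the data, which preserves all pairwise signs, leaves both statistics unchanged.

```python
import numpy as np

def ecdf(group, pts):
    # Empirical CDF of `group` evaluated at each point in `pts`.
    return np.array([(group <= x).mean() for x in pts])

def cvm(u, kn):
    # Cramer-von Mises-type statistic, as in (1).
    q1, q2 = ecdf(u[:kn], u), ecdf(u[kn:], u)
    return ((q1 - q2) ** 2).sum() / (2 * kn)

def ks(u, kn):
    # Kolmogorov-Smirnov analogue: max gap between the two empirical CDFs.
    q1, q2 = ecdf(u[:kn], u), ecdf(u[kn:], u)
    return np.abs(q1 - q2).max()

rng = np.random.default_rng(0)
kn = 25
u = rng.standard_normal(2 * kn)
v = np.exp(u)   # strictly increasing transform: all pairwise signs preserved
cvm_gap = abs(cvm(u, kn) - cvm(v, kn))
ks_gap = abs(ks(u, kn) - ks(v, kn))
```

Both gaps are zero, since the indicator comparisons inside each statistic depend on the data only through the ordering of the observations.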
References

Illiquidity and Stock Returns: Cross-Section and Time-Series Effects
Return Volatility and Trading Volume: An Information Flow Interpretation of Stochastic Volatility
Tests for Parameter Instability and Structural Change With Unknown Change Point
Permutation tests in change point analysis
Estimating and Testing Linear Models with Multiple Structural Changes
Non-Gaussian Ornstein-Uhlenbeck-Based Models and some of Their Uses in Financial Economics
Econometrics of Testing for Jumps in Financial Economics Using Bipower Variation
Public News Announcements
Estimation of Jump Tails
Testing Continuity of a Density via g-order statistics in the Regression Discontinuity Design
Robust Nonparametric Confidence Intervals for Regression-Discontinuity Designs
Approximate Permutation Tests and Induced Order Statistics in the Regression Discontinuity Design
Randomization inference in the regression discontinuity design: An application to party advantages in the US Senate
Comparing inference approaches for RD designs: A reexamination of the effect of Head Start on child mortality
Exact and Robust Conformal Inference Methods for Predictive Machine Learning With Dependent Data
Tests of Equality Between Sets of Coefficients in Two Linear Regressions
Exact and Asymptotically Robust Permutation Tests
The Fed and Interest Rates - A High-Frequency Identification
Long Memory in Continuous Time Stochastic Volatility Models
Stochastic Limit Theory
Robust Permutation Tests for Correlation and Regression Coefficients
Continuous Record Asymptotics for Rolling Sample Variance Estimators
Identification and Estimation of Treatment Effects with a Regression-Discontinuity Design
The Large-Sample Power of Tests Based on Permutations of Observations
Extensions of some classical methods in change point analysis
Permutation principle and bootstrap in change point analysis
Optimal Bandwidth Choice for the Regression Discontinuity Estimator
Regression Discontinuity Designs: A Guide to Practice
Discretization of Processes
Testing equality of spectral densities using randomization techniques
Block permutation principles for the change analysis of dependent data
Permutation principles for the change analysis of stochastic processes under strong invariance
Regression Discontinuity Designs in Economics
Testing Statistical Hypotheses
Efficient Estimation of Integrated Volatility Functionals under General Volatility Dynamics
Jump Regressions
Testing the Dimensionality of Policy Shocks
Generalized Method of Integrated Moments for High-frequency Data
High-Frequency Identification of Monetary Non-Neutrality: The Information Effect
Identification in Macroeconomics
Unit Roots, Structural Breaks and Trends
Regression-discontinuity Analysis: An alternative to the Ex Post Facto Experiment
Realized Laplace Transforms for Pure-jump Semimartingales
Weak Convergence and Empirical Processes