authors: Ammy-Driss, Ayoub; Garcin, Matthieu
title: Efficiency of the financial markets during the COVID-19 crisis: time-varying parameters of fractional stable dynamics
date: 2020-07-21

This paper investigates the impact of COVID-19 on financial markets. It focuses on the evolution of market efficiency, using two efficiency indicators: the Hurst exponent and the memory parameter of a fractional Lévy-stable motion. The second approach combines, in the same dynamic model, an alpha-stable distribution and a dependence structure between price returns. We provide a dynamic estimation method for the two efficiency indicators. This method introduces a free parameter, the discount factor, which we select so as to get the best alpha-stable density forecasts for observed price returns. The application to stock indices during the COVID-19 crisis shows a strong loss of efficiency for US indices. By contrast, Asian and Australian indices seem less affected, and the inefficiency of these markets during the COVID-19 crisis is even questionable.

The COVID-19 pandemic has strongly affected many people, either for medical reasons or through the economic aftermath of the various prophylactic measures decided by governments, in particular lockdowns. The evolution of financial markets during the pandemic provides an illustration of the economic impact of these measures. According to several empirical studies, financial markets have indeed been strongly disturbed during this period [4, 6, 40, 70]. The question of the reaction of financial markets to a crisis is not specific to the COVID-19 pandemic. For example, we can cite a study of the impact of the crises of the 80s and the 90s on a dynamic model for a stock market [21].
In general, these studies focus on variations of several statistics, such as jump intensity, implied volatility, parameters of factor models, divergence of price return densities, etc. To the best of our knowledge, no paper focuses on measuring the impact of the COVID-19 pandemic on market efficiency. Market efficiency is the ability of market prices to reflect all the available information, so that no arbitrage is possible. In other words, if markets are efficient, price returns are not correlated with each other and investors are not able to statistically determine what is more profitable between selling and buying a financial asset. Even if this dogma is sometimes questionable in calm periods, we wonder whether it can withstand a crisis. We also wonder whether we can observe regional disparities, hypothetically related to the magnitude of the outbreak in these regions, and how fast financial markets recover. The Hurst exponent is a widespread indicator of market efficiency. The long-range dependence associated with a Hurst exponent above 1/2 is indeed traditionally interpreted as indicating predictability of a time series. It is used in finance [58, 16, 63, 8] but also in many other fields such as meteorology, for models of temperature [22]. It is often related to a dynamic model with a specific fractal property, namely the fractional Brownian motion (fBm), introduced by Mandelbrot and van Ness [61]. The fractal property states that the variance of increments of duration τ is τ^{2H} σ^2, where H is the Hurst exponent and σ^2 the variance of increments of duration 1. The fBm assumes that increments follow a Gaussian distribution. The fractal property of the fBm is then obtained by introducing a positive (respectively negative) correlation among increments if the Hurst exponent is above (resp. below) 1/2. In the case where the Hurst exponent is 1/2, the fBm is simply a standard Brownian motion (Bm), that is, with independent increments.
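As a quick numerical check of this fractal property (a minimal sketch, not taken from the paper; the function name is ours), one can simulate an fBm by Cholesky factorization of the fractional Gaussian noise covariance and verify that the variance of increments of duration τ scales as τ^{2H}:

```python
import numpy as np

def simulate_fbm(n, hurst, sigma=1.0, seed=0):
    """Simulate an fBm path via Cholesky factorization of the
    covariance of its unit-step (fractional Gaussian noise) increments."""
    rng = np.random.default_rng(seed)
    k = np.arange(n)
    # autocovariance of unit-step fGn increments
    gamma = 0.5 * sigma**2 * (np.abs(k + 1)**(2 * hurst)
                              + np.abs(k - 1)**(2 * hurst)
                              - 2 * np.abs(k)**(2 * hurst))
    cov = gamma[np.abs(np.subtract.outer(k, k))]
    increments = np.linalg.cholesky(cov) @ rng.standard_normal(n)
    return np.concatenate(([0.0], np.cumsum(increments)))

# fractal property: Var of increments of duration tau grows like tau^{2H}
H = 0.7
path = simulate_fbm(2000, H)
v1 = np.var(np.diff(path))
v8 = np.var(path[8:] - path[:-8])
h_est = 0.5 * np.log(v8 / v1) / np.log(8)  # recovers a value near H = 0.7
```

Comparing the variances at two durations, as done here with τ = 1 and τ = 8, is the idea behind the absolute-moment estimators used later in the paper.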
In the fBm framework, this value of the Hurst exponent is thus the only one consistent with the efficient market hypothesis (EMH). However, the approach using the fBm assumes Gaussian price returns. This is not very realistic, since the presence of fat tails in the distribution of price returns is well documented [54, 9, 68, 37]. As an alternative to the Gaussian distribution, the alpha-stable distribution is appealing because it includes the Gaussian distribution as a particular case and entails fat tails, whose amplitude is directly related to α, a parameter of this distribution. When combining alpha-stable distributions and dependence between the successive price returns, we can get the fractional Lévy-stable motion (fLsm), which is thus a non-Gaussian extension of the fBm [78, 79, 80]. In this framework, the Hurst exponent can be decomposed as m + 1/α, where m is a memory parameter. If m = 0, adjacent price returns are independent and the market is efficient. We thus use this memory parameter as an alternative efficiency indicator. The estimation of the fLsm partly relies on the estimation of alpha-stable distributions. Many estimation methods exist for this kind of distribution [2, 15]. In particular, we will focus on McCulloch's method, which is based on empirical quantiles [62]. In this paper, we are additionally interested in the dynamic estimation of this distribution, because we want to depict the chronology of the crisis, day after day. Several articles deal with the estimation of time-varying non-parametric densities [43, 40]. The case of parametric densities is simpler, as it consists in estimating time-varying parameters. In both the non-parametric and parametric cases, the estimation at a given date takes into account the estimation at the previous date, updated by the new observation. The balance between the previous estimation and the new observation is tuned by a discount factor.
Several rules are possible for the selection of this free parameter. We focus on the minimization of a criterion coming from the field of density forecast validation [40, 26]. In the empirical part of the paper, we study the evolution of the two indicators of market efficiency, H and m, for several stock indices, with a significance analysis. We find that the Hurst exponent H detects market inefficiency less often than the memory parameter m of an fLsm does. We advocate the use of this parameter as an efficiency indicator instead of the Hurst exponent. It indeed improves the standard Hurst approach insofar as it filters out the kurtosis of price returns, which biases the Hurst indicator. Besides the analysis of the impact of COVID-19 on market efficiency, the innovative aspects of this paper include a selection rule for the discount factor of a dynamic parametric distribution, an estimation method for dynamic Hurst exponents, and the introduction of the memory parameter of an fLsm as an efficiency indicator. The rest of the paper is organized as follows. Section 2 introduces the estimation of dynamic alpha-stable distributions, along with the selection rule for the discount factor. Section 3 provides some elements on market efficiency and details how the indicators are built. Section 4 empirically studies the impact of COVID-19 on market efficiency. Section 5 concludes. The alpha-stable distribution is a generalization of the Gaussian distribution, appreciated for entailing fat tails. For this reason, it has been widely invoked in signal processing [77, 38, 39], with applications for example in medicine [76, 80] or in finance [54, 9, 68, 37]. We present below the estimation of a time-varying alpha-stable distribution, which we will apply later in this paper to financial time series. For this purpose, we first present the static estimation as well as the various representations of alpha-stable distributions.
Regarding the dynamic distribution, a free parameter, the discount factor, is to be selected. We propose a selection rule in the last subsection. Four parameters are used to depict a random variable following a stable distribution: X ∼ S_α(γ, β, µ). The parameter α ∈ (0, 2] is the one we will mostly be interested in. It determines the thickness of the tails. The parameter β ∈ [−1, 1] is a skewness parameter. If α = 2 and β = 0, we retrieve the Gaussian distribution. The last two parameters stand for the location (µ ∈ R) and the scale (γ > 0) of the distribution. We do not have any analytic expression for the probability density of X, but we can characterize the stable distribution by means of its characteristic function, which, for α ≠ 1, is:
E[exp(itX)] = exp( iµt − γ^α |t|^α [1 − iβ sign(t) tan(πα/2)] ).
We could use the Fourier transform to get the pdf from the characteristic function [76], but the above parameterization is not totally satisfactory insofar as the pdf is not continuous in the parameters, in particular when α = 1 [67, 2]. Indeed, when β > 0, the density is shifted right when α < 1 and left when α > 1, with a shift toward +∞ (respectively −∞) when α tends toward 1 from below (resp. above) [67]. For applications to data and interpretation of the coefficients, this parameterization is thus to be avoided. For this reason, Nolan has proposed to use Zolotarev's (M) parameterization [84], which is also often called the S^0 parameterization. The characteristic function corresponding to X ∼ S^0_α(γ, β, µ_0) is, for α ≠ 1 [67, 2]:
E[exp(itX)] = exp( iµ_0 t − γ^α |t|^α [1 + iβ sign(t) tan(πα/2) ((γ|t|)^{1−α} − 1)] ). (1)
This alternative parameterization is not far from the S_α one.
The only difference is about the location parameter, which, in this new setting, corrects the shift exposed above for values of α close to 1:
µ_0 = µ + βγ tan(πα/2) if α ≠ 1, and µ_0 = µ + β(2/π)γ ln γ if α = 1. (2)
A Fourier transform makes it possible to get the pdf of a standard variable S^0_α(1, β, 0) [67]:
f(x; α, 1, β, 0) = (1/π) ∫_0^∞ Re[ exp(−itx − t^α [1 + iβ tan(πα/2)(t^{1−α} − 1)]) ] dt.
We also obtain the pdf of a variable X ∼ S^0_α(γ, β, µ_0) with γ ≠ 1 and µ_0 ≠ 0, by means of a translation, a scaling, and the substitution s = γt, starting from the characteristic function provided in equation (1). The pdf formula contains an integral over an unbounded interval. For this reason, other formulations have been proposed [2, 57, 47, 15]. However, in the application to financial series, we find values of α far from 0, so that the truncated integral converges rapidly. We also get the corresponding cdf by numerically integrating f(x; α, γ, β, µ_0). Many estimation methods for stable distributions exist [2, 15]. Some of them focus on the sole α parameter, using for instance a regression of extreme quantiles along with extreme value theory [23]. Other methods make it possible to estimate all the parameters. In this class of methods, we can cite the estimation using L-moments, provided that α > 1 [2, 44], empirical quantiles [62], the empirical characteristic function [74, 69, 51, 50, 80], and the maximum likelihood method [27]. This last method relies on the knowledge of the pdf, which may be approximated as exposed in the previous subsection, using Nolan's work [67]. We focus on the method using empirical quantiles, which was first introduced by Fama and Roll, under the following assumptions: α > 1, β = 0, and µ = 0 [31]. This method is asymptotically biased. McCulloch proposed an extended version of the method, in which he corrected the asymptotic bias [62]. This version is also less restrictive with respect to the parameters, insofar as it only requires α ≥ 0.6.
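Before turning to quantile-based estimation, the Fourier inversion described above can be implemented directly in the symmetric case β = 0, where the S^0 and S_α parameterizations coincide (a minimal sketch under that simplifying assumption; the function name is ours):

```python
import numpy as np
from scipy.integrate import quad

def stable_pdf_symmetric(x, alpha, gamma=1.0, mu=0.0):
    """pdf of a symmetric (beta = 0) alpha-stable law, by Fourier
    inversion of the characteristic function exp(-(gamma*t)**alpha);
    the integrand decays quickly when alpha is far from 0."""
    z = x - mu
    integrand = lambda t: np.exp(-(gamma * t) ** alpha) * np.cos(z * t)
    value, _ = quad(integrand, 0.0, np.inf, limit=200)
    return value / np.pi

# sanity checks against closed-form special cases:
# alpha = 2 is a Gaussian with variance 2*gamma**2, alpha = 1 a Cauchy law
gauss = stable_pdf_symmetric(0.0, 2.0)   # equals 1/(2*sqrt(pi))
cauchy = stable_pdf_symmetric(0.0, 1.0)  # equals 1/pi
```

The cdf needed later for the probability integral transforms can then be obtained by numerically integrating this pdf, as the text indicates.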
McCulloch's method consists in linking the four parameters to five empirical quantiles, of probability levels 5%, 25%, 50%, 75%, and 95%. To this end, we first have to define two intermediate quantities:
v_α = (Q(0.95) − Q(0.05)) / (Q(0.75) − Q(0.25)) and v_β = (Q(0.95) + Q(0.05) − 2Q(0.5)) / (Q(0.95) − Q(0.05)),
where Q(p) is the theoretical quantile of probability p for a S_α(γ, β, µ) variable [62, 2, 15]. Neither v_α nor v_β depends on γ and µ, so that α and β are functions of v_α and v_β: α = φ_1(v_α, v_β) and β = φ_2(v_α, v_β). In practice, φ_1 and φ_2 are provided by tables [62]. Replacing the theoretical quantiles Q(p) by empirical quantiles, we get estimators of v_α and v_β, as well as the corresponding estimators of α and β through φ_1 and φ_2. The estimation of γ and µ relies on two other intermediate quantities, which only depend on the already estimated α and β. For simplicity, we introduce the variable ζ defined by:
ζ = µ + βγ tan(πα/2) if α ≠ 1, and ζ = µ if α = 1. (3)
The intermediate quantities are:
v_γ = (Q(0.75) − Q(0.25)) / γ and v_ζ = (ζ − Q(0.5)) / γ.
They are such that v_γ = φ_3(α, β) and v_ζ = φ_4(α, β). Their estimators are obtained by replacing the quantiles by empirical ones. We thus get estimators of γ and ζ by plugging the estimated α and β into the tabulated functions φ_3 and φ_4. The deduction of the estimator of µ from ζ is straightforward using equation (3), as is the version µ_0 of the location parameter in the parameterization S^0 using equation (2). In particular, µ_0 = ζ as soon as α ≠ 1. Applying McCulloch's method to time-varying estimates of the five quantiles, we get time-varying α and β parameters. Time-varying quantiles also make it possible to define the last two McCulloch statistics and finally to fully estimate a dynamic alpha-stable probability distribution. The subject of estimating dynamic quantiles is largely handled by the econometric literature. The favoured approach is based on quantile autoregression [49, 24, 41], like in the application to value-at-risk known as CAViaR [28]. A drawback of quantile regression is that different quantiles may cross: the monotonicity of quantiles is not necessarily preserved. The dynamic additive quantile, while keeping the autoregressive approach, deals with this limitation [42].
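As an illustration of McCulloch's intermediate quantities defined above (a sketch with our own function name; the tables φ_1 and φ_2 are omitted), v_α and v_β can be computed from empirical quantiles. For a Gaussian sample, i.e. α = 2 and β = 0, v_α is close to (2 × 1.645)/(2 × 0.674) ≈ 2.44 and v_β is close to 0:

```python
import numpy as np

def mcculloch_v(x):
    """Intermediate quantities v_alpha and v_beta of McCulloch's
    quantile method, computed from empirical quantiles."""
    q05, q25, q50, q75, q95 = np.quantile(x, [0.05, 0.25, 0.50, 0.75, 0.95])
    v_alpha = (q95 - q05) / (q75 - q25)
    v_beta = (q95 + q05 - 2.0 * q50) / (q95 - q05)
    return v_alpha, v_beta

rng = np.random.default_rng(1)
va, vb = mcculloch_v(rng.standard_normal(100_000))
# Gaussian sample (alpha = 2): va is near 2.44 and vb near 0
```

Inverting the tabulated functions then maps (v_α, v_β) back to estimates of (α, β); lower values of v_α indicate fatter tails, i.e. a lower α.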
All the quantile autoregressions are based on a dynamic model, and we prefer to introduce a method in which we do not specify the evolution of quantiles with respect to previously estimated ones. The inspiration for such a model-free approach comes from the non-parametric statistics literature, in which one can estimate for example time-varying moments or even time-varying probability densities [43]. In this perspective, we simply consider that the price returns X_t are alpha-stable random variables, each following its own alpha-stable distribution F_t, defined by parameters α_t, β_t, γ_t, and µ_t. In each distribution F_t, we only observe one variable X_t. This is not enough for estimating F_t. For this reason, we add another assumption on the dynamic of this time-varying distribution. We indeed consider that it evolves smoothly, so that price returns close in time follow close distributions. Thanks to this assumption, we can use several price returns at times close to t in order to estimate F_t or its related quantiles Q_t(p). We could estimate these quantiles Q_t(p) by their empirical versions. For example, if one considers 100 observations, the empirical quantile of probability p is the 100p-th smallest of the 100 observations. In other words, each of the 100 observations is associated with a probability 1/100. However, price returns closer in time induce a lower bias in the estimation of Q_t(p), so that the probability associated with these recent returns in the cdf related to Q_t should be higher. Thus, we use an exponentially-weighted quantile estimator (EWQ) and the corresponding discrete probability P^EWQ_t, such that P^EWQ_t(x) is the probability for the price return at time t to be equal to x [65]. To this end, we estimate the EWQ-style discrete probability function by assigning to each observation X_i, for i ∈ {1, ..., t}, a probability p^ω_{t,i} depending on a discount factor ω ∈ (0, 1), so that P^EWQ_t(X_i) = p^ω_{t,i}.
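A standard exponential-weight choice, made explicit in the next paragraph, is p^ω_{t,i} proportional to ω^{t−i}. As a quick check (a sketch, with our own function name) that the normalized weights form a probability distribution and favour recent observations:

```python
import numpy as np

def ewq_weights(t, omega):
    """Probabilities p_{t,i} = (1 - omega) * omega**(t - i) / (1 - omega**t)
    assigned to observations X_1, ..., X_t (i = t is the most recent)."""
    i = np.arange(1, t + 1)
    return (1.0 - omega) * omega ** (t - i) / (1.0 - omega ** t)

w = ewq_weights(500, 0.956)
# the weights sum to one, and the most recent observation receives
# approximately 1 - omega when t is large
```

With ω = 0.956, the value retained in the empirical section, the effective sample size 1/(1 − ω) is about 23 trading days.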
The probability p^ω_{t,i} follows a standard form of exponential weight:
p^ω_{t,i} = (1 − ω) ω^{t−i} / (1 − ω^t).
This expression ensures that the weights sum to 1, as required for probabilities:
Σ_{i=1}^{t} p^ω_{t,i} = [(1 − ω) / (1 − ω^t)] Σ_{i=1}^{t} ω^{t−i} = 1.
For ω < 1 and big values of t, ω^t is close to zero and we have the approximation p^ω_{t,i} ≈ (1 − ω) ω^{t−i}. In addition to the discrete probability P^EWQ_t, we can define the cdf F^EWQ_t associated with the EWQ:
F^EWQ_t(x) = Σ_{i=1}^{t} p^ω_{t,i} 1_{X_i ≤ x}. (4)
We can also write this cdf recursively [65]:
F^EWQ_t(x) = ω [(1 − ω^{t−1}) / (1 − ω^t)] F^EWQ_{t−1}(x) + [(1 − ω) / (1 − ω^t)] 1_{X_t ≤ x}.
More precisely, for each new observation X_t, the EWQ-style cdf associated with each past observation is discounted at the constant rate ω, whereas the new observation is provided with the highest probability, 1 − ω. The rationale is the following: the more recent the observation, the more likely its future occurrence. These discrete probabilities make it possible to calculate easily time-varying quantiles, as generalized inverse functions of F^EWQ_t. Compared to quantiles determined from dynamic kernel densities, which suffer from a high algorithmic complexity [83, 43], the EWQ approach seems computationally more efficient. Indeed, for extreme quantiles, the algorithmic complexity of our method is less than linear with respect to the number of observations considered at each date. We now describe precisely the procedure for calculating the EWQs. We begin with a first estimation, at date t_0, of the quantile of probability p, Q^ω_{t_0}(p), using the EWQ approach. To estimate the quantiles, we build the matrix M^ω_{t_0} containing the past observed price returns till t_0, {X_i}_{i ≤ t_0}, sorted in descending order, along with their corresponding historical EWQ-style probabilities:
M^ω_{t_0} = [ X_{π_{t_0}(1)}    p^ω_{t_0, π_{t_0}(1)}
              ...               ...
              X_{π_{t_0}(t_0)}  p^ω_{t_0, π_{t_0}(t_0)} ],
where X_{π_{t_0}(i)} is the i-th order statistic among the t_0 first observations, obtained with the help of a permutation π_{t_0} sorting the observations. Thanks to the definition of the generalized quantile and to equation (4), we get the following estimator for the quantile of probability p:
Q^ω_{t_0}(p) = min{ X_i : F^EWQ_{t_0}(X_i) ≥ p }, (5)
which is simply the lowest price return X such that the cumulated probability associated with lower returns, F^EWQ_{t_0}(X), reaches p. Iteratively, we can update this quantile in the following manner. We suppose we are given the probability distribution till time t − 1 and that we want to estimate a quantile at time t. We first apply a probability decay to M^ω_{t−1} by a simple matrix product, so that we get a new matrix containing the sorted past observations till time t − 1 along with their new, discounted probabilities at time t: the first column is unchanged and the second column is multiplied by ω. Then, we insert the new observation of time t in this matrix thanks to a binary search in its first column. In the inserted line, we write the corresponding probability 1 − ω in the second column. If we write I_t the position of the new observation, the new probability distribution matrix M^ω_t is the discounted matrix augmented by the line (X_t, 1 − ω) at position I_t. We can then calculate the empirical quantile in a manner similar to equation (5). In order to diminish the algorithmic complexity of this method, if we are looking for a quantile of probability above 0.5, we prefer an equivalent definition of the quantile, mathematically consistent with the one provided above, which cumulates the probabilities from the largest observation downward instead of from the smallest upward. Finally, using McCulloch's method, these time-varying estimations of quantiles make it possible to estimate the dynamic parameters of the stable distribution, which we denote α^ω_t, β^ω_t, γ^ω_t, and µ^ω_t. We stress the fact that the EWQ method is only intended to estimate non-parametric quantiles in order to infer time-varying parameters.
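The procedure above can be sketched as follows (our own minimal implementation, not the authors' code). For simplicity, instead of discounting a stored probability column at each step, the weights ω^{t−i} are recomputed lazily from the observation dates at query time, which is mathematically equivalent:

```python
import bisect
import numpy as np

class EWQuantile:
    """Exponentially-weighted quantiles: values are kept sorted (here in
    ascending order) and each observation X_i carries a probability
    proportional to omega**(t - i), so a quantile is read off the
    cumulated probabilities."""

    def __init__(self, omega):
        self.omega = omega
        self.values = []  # sorted observations (first matrix column)
        self.times = []   # observation dates, aligned with values
        self.t = 0

    def update(self, x):
        """Insert the new observation by binary search on the values."""
        self.t += 1
        j = bisect.bisect_left(self.values, x)
        self.values.insert(j, x)
        self.times.insert(j, self.t)

    def quantile(self, p):
        """Smallest value whose cumulated probability reaches p."""
        w = self.omega ** (self.t - np.array(self.times))
        w /= w.sum()                  # probabilities sum to one
        cum = np.cumsum(w)            # EWQ-style cdf over sorted values
        return self.values[int(np.searchsorted(cum, p))]

rng = np.random.default_rng(0)
ewq = EWQuantile(0.995)
for x in rng.standard_normal(3000):
    ewq.update(x)
q05, q50, q95 = (ewq.quantile(p) for p in (0.05, 0.5, 0.95))
```

These non-parametric quantiles are only an intermediate step: plugged into McCulloch's formulas, they yield the time-varying stable parameters.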
In particular, in what follows, the time-varying cdf on which we focus is not the non-parametric EWQ-style cdf but the parametric alpha-stable cdf with the time-varying parameters estimated above. The above dynamic estimation of quantiles and of the parameters of stable distributions relies on a free parameter, the discount factor ω, which determines how fast the dynamic distribution evolves. If ω is close to 1, the distribution is almost constant. If ω is lower, the evolution of the distribution is faster and the description of the last observations will be more accurate. Nevertheless, this accuracy may be excessive and the evolution of the distribution may be non-significant. Indeed, in the extreme situation where ω is close to zero, the distribution will be very narrow and centered on the last observation, with a big divergence between two successive distributions. In order to find a good balance between accuracy and robustness, we decide to select the ω maximizing the ability of the density f^ω_t estimated at date t to forecast the density of X_{t+1}, the price return at time t + 1. Several definitions of what a good density forecast is are possible. Indeed, the true density at time t + 1 is never observed, so that we can only rely on one observation drawn from this density. In the non-parametric literature about time-varying densities, we find for instance a selection rule for the free parameter of the densities based on the maximization of a likelihood criterion [43]. We think that this criterion does not properly take into account the possibility of the occurrence of extreme events. The alternative solution we follow is based on an adaptation of a method coming from the literature on density forecast evaluation [40]. Indeed, even if f^ω_t varies with t, so that we are provided with only one observation from this distribution, a simple transformation of each price return defines a distribution which remains the same through time.
This transformation is the probability integral transform (PIT), usually introduced in the perspective of density forecast evaluation [26]:
Z^ω_t = F^ω_{t−1}(X_t), (6)
where F^ω_{t−1} is the cdf corresponding to f^ω_{t−1}. In this literature, two conditions are required for the PITs: the Z^ω_t must follow a uniform distribution in [0, 1] and they must be independent from each other. The translation of these rules to the selection of free parameters in density estimation leads to two properties regarding ω [40]: uniformity of the PITs Z^ω_{t_0}, ..., Z^ω_T, meaning that ω is to be selected so as to minimize the divergence between their empirical distribution and a uniform distribution; and independence of the PITs, meaning that ω is to be selected so as to minimize the discrepancy, that is, for each subinterval of [t_0, T] of size greater than a threshold ν, the divergence between the empirical distribution of the corresponding PITs and a uniform distribution is to be minimized. In the perspective of the selection of a free parameter of a time-varying distribution, we can define the above divergence as a Kolmogorov-Smirnov statistic, with an adaptation making it possible to compare directly divergences of distributions estimated on samples of different sizes [40]. Moreover, the minimal size ν of the subintervals considered is intended to be a threshold above which the asymptotic framework required by the Kolmogorov-Smirnov statistic is satisfied. We consider ν = 22 days, so that we expect the PITs to be uniform at scales larger than one month. As a consequence, the criterion to be minimized is:
C(ω) = max_{t_0 ≤ s < t ≤ T, t−s ≥ ν} √(t − s) k(Z^ω_{s+1}, ..., Z^ω_t),
in which k is the standard Kolmogorov-Smirnov statistic with respect to the uniform distribution:
k(Z^ω_{s+1}, ..., Z^ω_t) = max_{s < u ≤ t} | (u − s)/(t − s) − Z^ω_{ρ(u;s,t)} |, (7)
where ρ(u; s, t) is the permutation sorting in increasing order the PITs of the subinterval [40]. In equation (7), (u − s)/(t − s) is the empirical cdf of the PITs, whereas the sorted PIT Z^ω_{ρ(u;s,t)} plays the role of the theoretical uniform cdf. Finally, the optimal discount factor is defined as the solution of the following minimization:
ω* = argmin_{ω ∈ (0,1)} C(ω). (8)
We can easily apply the above method to the case of an alpha-stable distribution.
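The uniformity requirement on the PITs can be illustrated with a Gaussian toy model (a sketch, not the paper's alpha-stable setting): when the forecast cdf is correct, the PITs are close to uniform and the Kolmogorov-Smirnov statistic is small, whereas a mis-specified forecast degrades it:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.standard_normal(1000)  # returns drawn from the true density

# PITs Z_t = F(X_t): uniform on [0, 1] under a correct density forecast
z_good = stats.norm.cdf(x)
ks_good = stats.kstest(z_good, "uniform")

# a forecast with the wrong scale concentrates the PITs around 0.5
z_bad = stats.norm.cdf(x, scale=2.0)
ks_bad = stats.kstest(z_bad, "uniform")
```

In the paper's setting, the forecast cdf is the estimated alpha-stable one, and the statistic is computed on every subinterval of length at least ν before taking the maximum over subintervals.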
We only have to pay attention to the definition of the PIT in equation (6). Indeed, it relies on the cdf, which does not have a simple expression for alpha-stable distributions. Nevertheless, as exposed in Subsection 2.1, numerical methods make it possible to calculate both the pdf and the cdf of an alpha-stable distribution. Therefore, the estimated cdf x → F^ω_{t−1}(x) in equation (6) is the numerically evaluated cdf of an alpha-stable distribution with estimated parameters:
F^ω_{t−1}(x) = F(x; α^ω_{t−1}, γ^ω_{t−1}, β^ω_{t−1}, µ^ω_{t−1}).
Market efficiency is a usual assumption in finance, which is in particular invoked when pricing derivatives. The EMH states that asset price series follow a random walk [30]. Investors and market makers update their expectations and their quotes at each instant, using the available information, so that it is not possible to beat the market. The EMH is convenient, because working with independent price returns makes the financial mathematics easier. But practitioners know that the EMH is not very realistic. The asset management industry in general aims at performing statistical arbitrages, whatever the transaction time scale, whether it is less than a second or more than a month. The asset manager may not win every time, but on average he should. Standard models relying on the EMH, such as the widespread geometric Brownian motion, are not consistent with the existence of statistical arbitrages. We can also stress other unrealistic components of the EMH, such as the availability of the same information for every investor and market maker, the rationality of all the agents, or the fact that we can substitute two assets provided that they are equally risky. Several alternatives to the EMH aim at describing financial markets more realistically. This is the case of the fractal market hypothesis [73] and of the adaptive market hypothesis (AMH) [60]. The AMH is a good compromise between the standard EMH and the reality of statistical arbitrage.
It states that a model can predict the market on average. But such a model can provide investors with successful forecasts during a limited time only, because other investors and market makers will progressively adapt their own models and decisions to this model. The AMH thus leads to a long-term efficiency of the market and allows statistical arbitrages at small time scales only. Besides, it is worth noting that the price series of some assets are not far from the EMH. It is the case for major stock indices. But empirical studies show that these indices have fluctuating efficiency and may encounter a loss of efficiency during financial crises [3, 59]. Since the decisions of market makers and investors may not be the same depending on whether markets are efficient or not, it is important for them to determine from time series of prices whether the markets are efficient. We can cite many statistical indicators of market efficiency [75]. Some of them look for a predictability of price returns [19], using for example the amplitude of the parameters of a time-varying AR model [46, 66]. Other indicators measure a deviation from a random walk, using for example variance ratios [18] or a combination of several statistics such as fractal dimension and entropy [53, 52, 32]. But the most widespread indicator of market efficiency seems to be the Hurst exponent [58, 16, 63, 34]. The Hurst exponent is an indicator of long-range memory. Mandelbrot and van Ness introduced the fBm as a model consistent with a given Hurst exponent and extending the standard Bm by making the increments dependent on each other [61]. For this reason, the fBm is a popular model in finance, provided that practitioners do not want to comply with the EMH. In the fBm, the Hurst exponent H is also linked to the fractal property of the series, insofar as the variance of the increments of duration τ is τ^{2H} times the variance of increments of duration 1.
Some estimators of the Hurst exponent use this fractal property instead of the long-range memory. However, given the fractal property estimated on the dataset, the fBm is not the only possible model. Other models may indeed have the same estimated Hurst exponent as the fBm but, relying on another specification, they may lead to other conclusions regarding the dependence of the increments and thus regarding the efficiency of the markets [1] . This fact has been documented for instance for foreign exchange rates, for which the stationarity of the time series biases the estimation of the Hurst exponent in the perspective of an fBm [35, 36] . Indicators of market efficiency should not bypass the diversity of models featuring a fractal property and consistent with the estimated Hurst exponent. Extensions of the fBm include some specificities for the Hurst exponent: it may vary deterministically through time, as in the multifractional Brownian motion [71, 11, 20, 34] , it may be a random process, as in the multifractional process with random exponent [5, 14, 33, 37] , or it may even be asymmetric [17, 81] . In what follows, after a presentation of the Hurst exponent in the perspective of the fBm, we will focus on a model in which increments are not necessarily Gaussian. In this framework, the fLsm is a natural extension of the fBm, in which increments follow an alpha-stable distribution [79, 82, 80, 37] . The fractal property of this process takes into account both the dependence between increments and the tail parameter of the alpha-stable distribution. This model thus makes it possible to disentangle the kurtosis of price returns and the efficiency of the market. It provides us with richer information than the sole Hurst exponent. This refinement is not superfluous. 
We will indeed see in the empirical section that the conclusions regarding the efficiency of stock indices are not the same if we consider the Hurst exponent or the fLsm-based indicator of market efficiency. The Hurst exponent was originally introduced by Harold Edwin Hurst as an indicator of long-range memory that could be obtained thanks to the rescaled range (R/S) analysis [45]. Later, alternative estimation methods appeared, such as the detrended fluctuation analysis [72] or the absolute-moment method [11, 10, 12, 34], which is related to the notion of generalized Hurst exponent (GHE) [7, 25]. The absolute-moment method is mainly used by the community of statisticians of stochastic processes because, contrary to the R/S analysis, it is strongly related to a stochastic process, namely the fBm. The fBm is a generalization of the standard Bm. Increments of the fBm are Gaussian, since the fBm is a fractional integral or a fractional derivative of a Bm, W_t [61]:
B_{H,σ}(t) = [σ / Γ(H + 1/2)] ( ∫_{−∞}^{0} [(t − s)^{H−1/2} − (−s)^{H−1/2}] dW_s + ∫_{0}^{t} (t − s)^{H−1/2} dW_s ).
The two parameters of the fBm are the Hurst exponent H and the volatility parameter σ. If H = 1/2, the fBm is a Bm. If H > 1/2 (respectively H < 1/2), the fBm is the fractional integral (resp. fractional derivative) of order H − 1/2 (resp. 1/2 − H) of a Bm; increments are thus positively (resp. negatively) correlated. The absolute-moment method uses another definition of the fBm, which is consistent with the integral form provided above. Indeed, an fBm is also the only zero-mean Gaussian process, with zero at the origin, such that, for s, t ≥ 0:
E[B_{H,σ}(s) B_{H,σ}(t)] = (σ^2/2) (s^{2H} + t^{2H} − |t − s|^{2H}).
From this covariance, we get the self-similarity property, for k > 0:
E[|B_{H,σ}(t) − B_{H,σ}(s)|^k] = C_k σ^k |t − s|^{kH},
where C_k is the k-order absolute moment of a standard Gaussian variable. This property states that the k-order absolute moment of the increments of duration |t − s| is proportional to |t − s|^{kH}. Comparing two scales thus makes it possible to estimate H. If we focus on the two smallest scales, we indeed get the following estimator, for a time series of log-prices X_1, X_2, ..., X_t:
H_{k,t} = (1/k) log_2( [Σ_{i=1}^{t−2} |X_{i+2} − X_i|^k / (t − 2)] / [Σ_{i=1}^{t−1} |X_{i+1} − X_i|^k / (t − 1)] ), (9)
which converges almost surely toward H [11, 10].
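This two-scale estimator is straightforward to implement (a minimal sketch with our own function name); on a standard Brownian path it recovers H = 1/2:

```python
import numpy as np

def hurst_absolute_moment(x, k=2):
    """Hurst estimator comparing the k-order absolute moments of the
    increments at the two smallest scales (durations 1 and 2)."""
    m1 = np.mean(np.abs(np.diff(x)) ** k)
    m2 = np.mean(np.abs(x[2:] - x[:-2]) ** k)
    return np.log2(m2 / m1) / k

rng = np.random.default_rng(3)
bm = np.cumsum(rng.standard_normal(20_000))  # Brownian log-prices, H = 1/2
h_bm = hurst_absolute_moment(bm)
```

A value of h_bm significantly above (below) 1/2 would instead signal positively (negatively) correlated increments.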
We could use this estimator of H as an indicator of market efficiency. But we are mostly interested here in evaluating the evolution of market efficiency through time. We therefore need a time-varying version of this estimator. This question is not new, and the solution put forward in the literature is often based on estimation in sliding windows [20, 13], possibly with a smoothing of the raw series of Hurst exponents as a post-processing [34]. But we prefer a method more consistent with the smoothing applied above for estimating the parameters of a distribution, that is, with an exponential weighting, insofar as it overweights more recent observations and thus gives a more relevant picture of the current state of the market. The closest method in the literature is a time-varying GHE using the exponentially-weighted moving average [64]. The main difference with the method we propose is that the GHE is always based on a linear regression of log-absolute moments of increments on several log time scales. Contrary to the GHE, we focus on only two scales, so that we get a simpler closed-form estimator:
H^ω_{k,t} = (1/k) log_2( Σ_{i=3}^{t} ω^{t−i} |X_i − X_{i−2}|^k / Σ_{i=2}^{t} ω^{t−i} |X_i − X_{i−1}|^k ). (10)
Beyond this simple formula, a more efficient implementation method is possible. Given the one-step and two-step increments, it consists of the following recurrence:
A^ω_{k,t} = ω A^ω_{k,t−1} + (1 − ω) |X_t − X_{t−1}|^k,
B^ω_{k,t} = ω B^ω_{k,t−1} + (1 − ω) |X_t − X_{t−2}|^k,
H^ω_{k,t} = (1/k) log_2( B^ω_{k,t} / A^ω_{k,t} ). (11)
We will discuss in Section 3.2 the selection of the parameter k in equation (9). The question of the optimal choice of the discount factor ω in the estimator H^ω_{k,t} is also to be addressed. We could imagine adapting the method exposed in Section 2.4 to the case of an fBm. But it sounds more relevant to use the same discount factor for all the statistics of our work. We will thus simply use the discount factor chosen for estimating the time-varying stable distribution. The fLsm is a generalization of the fBm in which increments follow an alpha-stable distribution, which admits the Gaussian distribution as a particular case.
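Before turning to the fLsm, the exponentially-weighted estimator above can be sketched as a one-pass recurrence (our own minimal implementation); on a Brownian path the estimates fluctuate around 1/2:

```python
import numpy as np

def ew_hurst(x, omega, k=2):
    """Time-varying Hurst estimates from exponentially-weighted k-order
    absolute moments of the increments at durations 1 and 2."""
    a = np.abs(x[1] - x[0]) ** k  # scale-1 moment, crude initialization
    b = np.abs(x[2] - x[0]) ** k  # scale-2 moment
    h = []
    for t in range(2, len(x)):
        a = omega * a + (1 - omega) * np.abs(x[t] - x[t - 1]) ** k
        b = omega * b + (1 - omega) * np.abs(x[t] - x[t - 2]) ** k
        h.append(np.log2(b / a) / k)
    return np.array(h)

rng = np.random.default_rng(5)
h = ew_hurst(np.cumsum(rng.standard_normal(5000)), omega=0.995)
```

Each new observation costs O(1), which is what makes a daily updated Hurst series over several years cheap to compute.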
The fLsm is defined as the fractional integral or fractional derivative of a symmetric Lévy-stable motion [78, 79, 80]:
$$X(t)=\int_{\mathbb{R}}\left[(t-s)_{+}^{H-1/\alpha}-(-s)_{+}^{H-1/\alpha}\right]\mathrm{d}L_{\alpha,\gamma}(s),$$
where $L_{\alpha,\gamma}(t)$ is a symmetric $\alpha$-stable process of scale parameter $\gamma$ and $x_{+}=\max(x,0)$. In other words, increments follow a symmetric stable law and, if and only if $H-1/\alpha>0$, non-overlapping increments are positively dependent, that is with a positive codifference or a positive covariation [56]. The parameter $H$ controls the scaling behaviour of the process, in the same manner as in the fBm, and the parameter $\alpha$ controls the thickness of the tails: the lower $\alpha$, the fatter the tails. If $\alpha=2$, increments are Gaussian and the fLsm is an fBm. When comparing an fBm and an fLsm, the fractal feature of the latter is not obtained only by adjusting the dependence of the increments but, in addition, by tuning the kurtosis of the underlying law. Therefore, we can write the Hurst exponent as the combination of a tail component, $1/\alpha$, and a memory parameter, $m$:
$$H=\frac{1}{\alpha}+m.$$
In this framework, the Hurst exponent is not the most relevant indicator of market efficiency and one should instead use $m$. As we have proposed time-varying estimators both for $\alpha$ and for $H$, we write the following time-varying estimator for $m$:
$$\widehat{m}^{\omega}_{t}=\widehat{H}^{\omega}_{k,t}-\frac{1}{\widehat{\alpha}^{\omega}_{t}},$$
where $\widehat{\alpha}^{\omega}_{t}$ is the estimator of the parameter of the stable law defined in Section 2.3. Efficient markets then correspond to $\widehat{m}^{\omega}_{t}$ close to zero. The multifractional multistable motion is a model allowing this kind of dynamics with smoothly time-varying parameters $\alpha_t$ and $\gamma_t$. It is a localisable process in the sense that such a process admits at each time a local form, also called tangent process, which in our case is an fLsm [29, 55]. We also have to select an appropriate $k$ for the estimator of $H$ in equation (11). A widespread choice is $k=2$, because it minimizes the variance of the estimator. However, for $\alpha$-stable variables, the absolute moments are only defined for $k<\alpha$ [48].
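To illustrate the decomposition of $H$ into a tail component and a memory component (a sketch of ours, using the standard Chambers-Mallows-Stuck sampler rather than the paper's estimation procedure): for a Lévy-stable motion with independent increments, the absolute-moment estimate of $H$ is close to $1/\alpha$, so the memory parameter $m$ is close to zero.

```python
import numpy as np

def symmetric_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck sampler for a standard symmetric
    alpha-stable variable (valid for alpha != 1)."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
            * (np.cos((1 - alpha) * U) / W) ** ((1 - alpha) / alpha))

def hurst_absolute_moment(X, k):
    m1 = np.mean(np.abs(X[1:] - X[:-1]) ** k)   # scale tau = 1
    m2 = np.mean(np.abs(X[2:] - X[:-2]) ** k)   # scale tau = 2
    return np.log2(m2 / m1) / k

rng = np.random.default_rng(1)
alpha = 1.5
# Levy-stable motion: cumulative sum of iid stable increments (no memory).
L = np.cumsum(symmetric_stable(alpha, 200_000, rng))
k = 0.5                   # k < alpha, so the absolute moment is finite
H = hurst_absolute_moment(L, k)
m = H - 1 / alpha         # memory parameter: close to 0 here
```

The choice $k=0.5<\alpha$ in this toy example anticipates the moment-existence constraint just mentioned.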
We can mitigate this theoretical constraint by noting that the statistics of equation (9) are finite even for higher values of $k$. In the empirical part of this paper, we first follow the widespread choice $k=2$. Then, we confirm the results with a lower value of $k$, namely $k=0.5$, which is consistent with the minimal estimated $\alpha$ appearing in Figure 5. We apply the above method to ten stock indices of various regions: USA (S&P 500, S&P 100), Europe (EURO STOXX 50, Euronext 100, DAX, CAC 40), Asia (Nikkei, KOSPI, SSE 180), and Australia (S&P/ASX 200). We have used data from Yahoo Finance in the time interval between the 1st May 2015 and the 29th June 2020. The first date at which we estimate stable densities and the parameters of an fBm and an fLsm, that is $t_0$, is the 1st November 2019. The period of study includes the financial crisis sparked by the COVID-19 pandemic. We first determine for each stock index the optimal discount factor $\omega$ as in equation (8). The results are displayed in Table 1. We observe that the optimal discount factors are close to 0.95, whatever the index considered. For the rest of the empirical study, we consider a common discount factor $\omega_m$, so that we can make fair comparisons between stock indices. We choose $\omega_m=0.956$, which is the highest optimal discount factor measured for the various stock indices. This conservative choice limits the risk of giving a new observation too high a weight in the dynamic estimators. A lower value could lead to spurious conclusions regarding market efficiency for some stock indices. Using $\omega_m$, we are able to determine the dynamic pdf of daily price returns. We display in Figure 1 these densities for four stock indices corresponding to regions with a different timing in the growth of the outbreak: the S&P 500, the French CAC 40, the Chinese SSE 180, and the S&P/ASX 200 indices.
For each of these indices, we plot the pdf before the crisis, in November 2019, at the peak of the crisis, and at the end of our sample, in late June 2020. The peak of the crisis is not the same for all the indices: we define the peak date as the one leading to the maximal value for $|m|$. This date is in March for the four indices; exact dates are provided in Table 3. For the four indices, the pdf at the peak shows fat tails and asymmetry. In late June, the pdf still has these features, except for the SSE 180 index, for which the pdf is very similar to the one before the crisis, indicating a very fast recovery in China. The case of the CAC 40 at the peak is also of interest, because the advent of fat tails and asymmetry is so abrupt that it does not crush the body of the pdf, contrary to the other indices. In this paper, we are mostly interested in determining whether the financial markets are efficient during a financial crisis. For this purpose, we have introduced two indicators. The first one is the widespread Hurst exponent, estimated here in a dynamic fashion as exposed in Section 3.1. But the Hurst exponent $H$ provided above is an indicator of dependence between price returns only if these price returns are Gaussian. In a more general framework, if we consider the possibility of fat tails by means of an alpha-stable distribution, we define another efficiency indicator as the memory parameter $m$ of an fLsm, as exposed in Section 3.2. In the first approach, the market is efficient for $H=1/2$. In the second approach, which is more accurate because it takes into account the kurtosis of price returns thanks to the $\alpha$ parameter, the market is efficient for $m=0$. The null hypothesis $H_0$ is the efficiency of each market. In order to know for which threshold of the efficiency indicator we can reject $H_0$ with a given confidence $p$, we perform a simulation. We consider that the right price model corresponding to $H_0$ is a geometric Bm. We thus simulate a time series following this model.
The length of the simulated series used for estimating the first parameters in $t_0$ is the same as in our financial dataset. We then simulate 4,000 other dates. For each of these dates, we estimate $H$, $m$, and $\alpha$ dynamically, using the discount factor $\omega_m$. We consider that the bounds of the confidence interval with confidence level $p$, for the estimated $H$, $m$, and $\alpha$, are the empirical quantiles of the corresponding parameters, estimated on the simulations, for probabilities $(1-p)/2$ and $(1+p)/2$. We display the time-varying Hurst exponent in Figure 2 for the four focal stock indices of our study. Before the crisis, which begins in February or March depending on the region, the Hurst exponent is not significantly different from 1/2, so that we cannot reject $H_0$ during this period. During the crisis, we observe very low Hurst exponents for the S&P 500 index, which make it possible to reject $H_0$ with a confidence of more than 99%. But for the three other indices, the drawdown of the Hurst exponent is much less significant, in particular for the SSE 180 index. According to this approach, the market becomes clearly inefficient if we consider the S&P 500 index, whereas we cannot ascertain the inefficiency of the three other indices. It is also worth noting that, after a downward peak, the Hurst exponent reaches abnormally high values for the French and the Australian indices. This suggests a persistence of the crisis: a short mean-reverting phenomenon followed by positively correlated price returns. If we consider another indicator of market efficiency, namely the memory parameter $m$ of an fLsm, the conclusions regarding the impact of COVID-19 on market efficiency are not the same. In Figure 3, we observe first that $m$ is in general negative, even before the crisis. It indicates a dominating mean-reversion phenomenon in the stock markets. But it is in fact often not significantly different from 0.
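The simulation of rejection thresholds described above can be sketched as follows (illustrative code of ours under simplified assumptions: a single long simulated path, the two-scale exponentially weighted Hurst estimator, and an arbitrary daily volatility):

```python
import numpy as np

def ew_hurst(X, omega, k=2):
    """Exponentially weighted two-scale Hurst estimates (one per date)."""
    s1 = s2 = 0.0
    out = np.full(len(X), np.nan)
    for t in range(2, len(X)):
        s1 = omega * s1 + abs(X[t] - X[t - 1]) ** k
        s2 = omega * s2 + abs(X[t] - X[t - 2]) ** k
        out[t] = np.log2(s2 / s1) / k
    return out

rng = np.random.default_rng(2)
omega, burn_in, n_dates = 0.956, 1_000, 4_000

# Under H0 the log-price is a Bm, i.e. the price is a geometric Bm.
log_price = np.cumsum(0.01 * rng.standard_normal(burn_in + n_dates))
H_hat = ew_hurst(log_price, omega)[burn_in:]

# Empirical quantiles of the simulated estimates give the bounds of the
# confidence interval for H under H0, at confidence level p.
p = 0.99
lo, hi = np.quantile(H_hat, [(1 - p) / 2, (1 + p) / 2])
# An observed estimate outside [lo, hi] rejects efficiency at level p.
```

The same recipe applies to $m$ and $\alpha$, with their own dynamic estimators in place of `ew_hurst`.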
When the crisis occurs, $m$ goes downward and $H_0$ is rejected with a confidence higher than 99%, whatever the stock index. The most significant drawdown of $m$ is again for the S&P 500 index. The duration of the significant inefficiency varies among the indices: the longest period is for the S&P 500 index, whereas the S&P/ASX 200 has a very short period of inefficiency. Using $k=0.5$ instead of $k=2$, as discussed in Section 3.2, leads to similar results. In particular, the loss of efficiency is very strong for the S&P 500 index, regardless of the indicator used, $H$ or $m$. For the CAC 40 index, the Hurst exponent is never significantly below 1/2, but $m$ becomes significantly negative during the first lockdown. We display the corresponding evolutions in Figure 4. It is also interesting to track another important parameter of the fLsm, namely $\alpha$, which depicts the size of the tails of the distribution of price returns. When $\alpha=2$, the price returns follow a Gaussian distribution; the lower $\alpha$, the fatter the tails. For the American, French, and Australian indices, we observe in Figure 5 a negative impact of the crisis on $\alpha$. It means that extreme events tend to occur more frequently and with a larger magnitude. This stylized fact is confirmed by another approach relying on non-parametric densities [40]. The evolution of $\alpha$ in the Chinese market is not similar to that of the three other indices and the values reached are less significantly different from 2. A progressive recovery toward high values of $\alpha$ is visible for the CAC 40 and the S&P/ASX 200 indices. For the S&P 500 index, we also observe an abrupt increase of $\alpha$ after the peak of the crisis, but it is of limited amplitude and $\alpha$ remains at a fairly low value. We display in Tables 2 and 3 the range of values reached during the period by the two efficiency indicators, $H$ and $m$, for the ten stock indices considered.
These tables confirm that the greatest impact of COVID-19 on market efficiency occurred for US indices, whatever the efficiency indicator. We also note that inefficiency always leads to negative values of $m$: the observed upper bounds in the period are never significantly different from 0. On the contrary, for the Hurst exponent, we find downward peaks below 1/2 as well as upward peaks above 1/2, so that it is unclear whether inefficiency leads to high or low values for $H$. In fact, the presence of fatter tails during the crisis biases this efficiency indicator. So, focusing on the $m$ indicator, we find fairly synchronized peaks for indices of the same region: on 10th March 2020 in the USA, on 16th March 2020 in Europe. Other regions seem less affected: the maximal $|m|$ is indeed lower in Asia and Australia. However, the biased $H$ indicator suggests a similar loss of efficiency in Europe and in Asia. This underpins again the relevance of refining the efficiency indicator to take into account the kurtosis. The two efficiency indicators also lead to opposite conclusions when considering the situation at the end of the sample: according to $H$, markets are efficient again everywhere, whereas they are significantly inefficient in the USA and in Japan according to $m$. We have shown to which extent the stock markets become inefficient during the COVID-19 crisis. The efficiency is clearly rejected in the case of the S&P 500 index. On the contrary, the Hurst exponent does not make it possible to conclude about a loss of efficiency for the CAC 40 index. With the memory parameter of an fLsm, by contrast, we observe the occurrence of an inefficiency period almost at the beginning of the crisis, even though it is less noticeable for the Chinese and the Australian indices.

Table 3: Range of values reached by the efficiency indicator $m$ between November 2019 and June 2020 for ten stock indices. $T$ is the 29th June 2020. An efficient market corresponds to $m$ close to 0.
We have also introduced in this paper the tools used for this analysis, namely the estimation of a dynamic stable distribution along with the estimation of the dynamic Hurst exponent and memory parameter of an fLsm. An important free parameter in this approach is the discount factor, which is related to the speed at which the weight of past information decreases. We have used a selection rule based on the minimization of a criterion depicting the uniformity and the independence of the PITs, consistently with the literature on the validation of density forecasts.

References
Optimizing a basket against the efficient market hypothesis. Quantitative Finance.
Méthodes d'estimation pour des lois stables avec des applications en finance.
Has the 2008 financial crisis affected stock market efficiency? The case of Eurozone.
Forecasting the effect of COVID-19 on the S&P500. Working paper.
Multifractional processes with random exponent.
The unprecedented stock market impact of COVID-19.
Multifractality of self-affine fractals.
The structure of gold and silver spread returns.
CAPM, risk and portfolio selection in α-stable markets.
Identifying the multifractional function of a Gaussian process.
Elliptic Gaussian random processes.
Pathwise identification of the memory function of multifractional Brownian motion with application to finance.
Pointwise regularity exponents and market cross-correlations. International Review of Business Research Papers.
Pointwise regularity exponents and well-behaved residuals in stock markets.
Stable distributions.
The Hurst exponent over time: testing the assertion that emerging markets are becoming more efficient.
Asymmetric multifractal scaling behavior in the Chinese stock market: based on asymmetric MF-DFA. Physica A: Statistical Mechanics and its Applications.
Variance-ratio tests of random walk: an overview.
Evidence on the speed of convergence to market efficiency.
Identification of multifractional Brownian motion.
Parameter stability in the market model: tests and time-varying parameter estimation with UK data.
How does temperature vary over time? Evidence on the stationary and fractal nature of temperature fluctuations.
Estimating the index of a stable distribution.
Time-varying quantiles. Working paper.
Multi-scaling in finance.
Evaluating density forecasts, with applications to financial risk management.
On the asymptotic normality of the maximum-likelihood estimate when sampling from a stable distribution.
CAViaR: conditional autoregressive value at risk by regression quantiles.
Localizable moving average symmetric stable and multistable processes.
Efficient capital markets: a review of theory and empirical work.
Parameter estimates for symmetric stable distributions.
Evolution of scaling behaviors in currency exchange rate series.
Modeling the time-changing dependence in stock markets.
Estimation of time-dependent Hurst exponents with variational smoothing and application to forecasting foreign exchange rates. Physica A: Statistical Mechanics and its Applications.
Hurst exponents and delampertized fractional Brownian motions.
A comparison of maximum likelihood and absolute moments for the estimation of Hurst exponents in a stationary framework.
Fractal analysis of the multifractality of foreign exchange rates. Mathematical Methods in Economics and Finance.
Probability density of the empirical wavelet coefficients of a noisy chaos.
Wavelet shrinkage of a noisy dynamical system with non-linear noise impact.
Estimation of time-varying kernel densities and chronology of the impact of COVID-19 on financial markets.
Bayesian time-varying quantile forecasting for value-at-risk in financial markets.
Dynamic quantile models.
Kernel density estimation for time series data.
L-moments: analysis and estimation of distributions using linear combinations of order statistics.
Long-term storage capacity of reservoirs. Transactions of the American Society of Civil Engineering.
International stock market efficiency: a non-Bayesian time-varying model approach.
Fast parallel α-stable distribution function evaluation and parameter estimation using OpenCL in GPGPUs.
Extrapolation of stable random fields.
Quantile autoregression.
Characteristic function based estimation of stable parameters.
Regression-type estimation of the parameters of stable laws.
On Bitcoin markets (in)efficiency and its evolution.
Measuring capital market efficiency: global and local correlations structure.
Analysing financial returns by using regression models based on non-symmetric stable distributions.
An estimation of the stability and the localisability functions of multistable processes.
The long-range dependence of linear log-fractional stable motion.
A survey on computing Lévy stable distributions and a new MATLAB toolbox.
The long memory of the efficient market.
Financial crisis and stock market efficiency: empirical evidence from Asian countries.
The adaptive markets hypothesis.
Fractional Brownian motions, fractional noises and applications.
Simple consistent estimators of stable distribution parameters. Communications in Statistics - Simulation and Computation.
Is Hurst exponent value useful in forecasting financial time series? Asian Social Science.
Dynamical generalized Hurst exponent as a tool to monitor unstable periods in financial time series.
Exponentially weighted simultaneous estimation of several quantiles. World Academy of Science, Engineering and Technology.
On the evolution of cryptocurrency market efficiency. Working paper.
An algorithm for evaluating stable densities in Zolotarev's (M) parameterization.
Modeling financial data with stable distributions.
The estimation of the parameters of the stable laws.
Regression approach for modeling COVID-19 spread and its impact on stock market. Working paper.
Multifractional Brownian motion: definition and preliminary results. Working paper.
Mosaic organization of DNA nucleotides.
Fractal market analysis: applying chaos theory to investment and analysis.
Estimation in univariate and multivariate stable distributions.
The dynamics of market efficiency. The Review of Financial Studies.
Parameterization of the distribution of white and grey matter in MRI using the α-stable distribution.
Modelling with mixture of symmetric stable distributions using Gibbs sampling.
Stable non-Gaussian processes: stochastic models with infinite variance.
Simulation methods for linear fractional stable motion and FARIMA using the Fast Fourier Transform.
The Hurst exponent of heart rate variability in neonatal stress, based on a mean-reverting fractional Lévy stable motion. Fluctuation and Noise Letters.
Asymmetric detrended fluctuation analysis in neonatal stress.
Complete description of all self-similar models driven by Lévy stable noise.
Local linear quantile regression.
Translations of Mathematical Monographs.

The authors deeply thank Akin Arslan, Thomas Barrat, and Sarah Bouabdallah for their valuable help in the implementation of some of the methods described in this paper. MG thanks the participants of the Econphysics Colloquium 2021 in Lyon for useful comments.