authors: Barczy, Matyas; Ned'enyi, Fanni K.; Pap, Gyula
title: On aggregation of subcritical Galton-Watson branching processes with regularly varying immigration
date: 2019-06-02

We study an iterated temporal and contemporaneous aggregation of $N$ independent copies of a strongly stationary subcritical Galton-Watson branching process with regularly varying immigration having index $\alpha \in (0, 2)$. Limits of finite dimensional distributions of appropriately centered and scaled aggregated partial sum processes are shown to exist when first taking the limit as $N \to \infty$ and then the time scale $n \to \infty$. The limit process is an $\alpha$-stable process if $\alpha \in (0, 1) \cup (1, 2)$, and a deterministic line with slope $1$ if $\alpha = 1$.

The field of temporal and contemporaneous (also called cross-sectional) aggregation of independent stationary stochastic processes is an important and very active research area in empirical and theoretical statistics and in other areas as well. Robinson [26] and Granger [9] started to investigate the scheme of contemporaneous aggregation of random-coefficient autoregressive processes of order 1 in order to obtain the long memory phenomenon in aggregated time series. For surveys on aggregation of different kinds of stochastic processes, see, e.g., Pilipauskaitė and Surgailis [19], Jirak [12, page 512] or the arXiv version of Barczy et al. [3]. Puplinskaitė and Surgailis [22] showed that limits of finite dimensional distributions of appropriately centered and scaled aggregated partial sum processes exist when first the number of copies N → ∞ and then the time scale n → ∞. Very recently, Pilipauskaitė et al.
[18] extended the results of Puplinskaitė and Surgailis [22] (idiosyncratic case) deriving limits of finite dimensional distributions of appropriately centered and scaled aggregated partial sum processes when first the time scale n → ∞ and then the number of copies N → ∞, and when n → ∞ and N → ∞ simultaneously with possibly different rates. The above listed references are all about aggregation procedures for time series, mainly for randomized autoregressive processes. The present paper investigates aggregation schemes for some branching processes under a low moment condition. Branching processes, especially Galton-Watson branching processes with immigration, have attracted a lot of attention due to the fact that they are widely used in mathematical biology for modelling the growth of a population in time. In Barczy et al. [4], we started to investigate the limit behavior of temporal and contemporaneous aggregations of independent copies of a stationary multitype Galton-Watson branching process with immigration under third order moment conditions on the offspring and immigration distributions in the iterated and simultaneous cases as well. In both cases, the limit process is a zero mean Brownian motion with the same covariance function. In Barczy et al. [4, page 54], one can also find a suggestion for a possible application of aggregation of integer-valued autoregressive processes of order 1 (a special branching process), namely, for modelling migration. In this paper we study the limit behavior of temporal and contemporaneous aggregations of independent copies of a strongly stationary Galton-Watson branching process (X_k)_{k≥0} with regularly varying immigration having index α ∈ (0, 2) (yielding infinite variance) in an iterated, idiosyncratic case, namely, when first the number of copies N → ∞ and then the time scale n → ∞. Our results are analogous to those of Puplinskaitė and Surgailis [22]. The present paper is organized as follows.
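The contemporaneous aggregation scheme of Robinson [26] and Granger [9] mentioned above can be sketched in a few lines of simulation code. This is a toy illustration under assumed ingredients (a uniformly distributed random coefficient and Gaussian noise), not the branching-process setting of this paper; all function names are ours.

```python
import random

def rc_ar1_path(n, rng):
    """One random-coefficient AR(1) path: the coefficient a is drawn once
    per copy, then X_k = a * X_{k-1} + eps_k (illustrative toy model)."""
    a = rng.uniform(0.0, 0.9)  # random coefficient, kept below 1
    x, path = 0.0, []
    for _ in range(n):
        x = a * x + rng.gauss(0.0, 1.0)
        path.append(x)
    return path

def contemporaneous_aggregate(N, n, seed=0):
    """Sum N independent copies pointwise in time: the cross-sectional step
    of the iterated scheme (first N -> infinity, then n -> infinity)."""
    rng = random.Random(seed)
    paths = [rc_ar1_path(n, rng) for _ in range(N)]
    return [sum(p[k] for p in paths) for k in range(n)]
```

A subsequent temporal aggregation would then take partial sums of the returned sequence in n.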
In Section 2, first we collect our assumptions that are valid for the whole paper, namely, we consider a sequence of independent copies of (X_k)_{k≥0} such that the expectation of the offspring distribution is less than 1 (so-called subcritical case). In case of α ∈ [1, 2), we additionally suppose the finiteness of the second moment of the offspring distribution. Under our assumptions, by Basrak et al. [5, Theorem 2.1.1] (see also Theorem E.1), the unique stationary distribution of (X_k)_{k≥0} is also regularly varying with the same index α. In Theorem 2.1, we show that the appropriately centered and scaled partial sum process of finite segments of independent copies of (X_k)_{k≥0} converges to an α-stable process. The characteristic function of the α-stable limit process is given explicitly as well. In Remarks 2.2 and 2.3, we collect some properties of the α-stable limit process in question, such as the support of its Lévy measure. The proof of Theorem 2.1 is based on a slight modification of Theorem 7.1 in Resnick [25], namely, on a result of weak convergence of partial sum processes towards Lévy processes, see Theorem D.1, where we consider a different centering. In the course of the proof of Theorem 2.1 one needs to verify that the so-called limit measure of finite segments of (X_k)_{k≥0} is in fact a Lévy measure. We determine these limit measures explicitly (see part (i) of Proposition E.3) applying an expression for the so-called tail measure of a strongly stationary regularly varying sequence based on the corresponding (whole) spectral tail process given in Planinić and Soulier [20, Theorem 3.1]. While the centering in Theorem 2.1 is the so-called truncated mean, in Corollary 2.4 we consider no centering if α ∈ (0, 1), and centering with the mean if α ∈ (1, 2). In both cases the limit process is an α-stable process, the same one as in Theorem 2.1 plus some deterministic drift depending on α.
Theorem 2.1 and Corollary 2.4 together yield the weak convergence of finite dimensional distributions of appropriately centered and scaled contemporaneous aggregations of independent copies of (X_k)_{k≥0} as the number of copies tends to infinity, see Corollary 2.6. Theorem 2.7 contains our main result, namely, we determine the weak limit of appropriately centered and scaled finite dimensional distributions of temporal and contemporaneous aggregations of independent copies of (X_k)_{k≥0}, where the limit is taken in a way that first the number of copies tends to infinity and then the time corresponding to temporal aggregation tends to infinity. It turns out that the limit process is an α-stable process if α ∈ (0, 1) ∪ (1, 2), and a deterministic line with slope 1 if α = 1. We consider different kinds of centerings, and we give the explicit characteristic function of the limit process as well. In Remark 2.8, we rewrite this characteristic function in case of α ∈ (0, 1) in terms of the spectral tail process of (X_k)_{k≥0}. We close the paper with five appendices. In Appendix A we recall a version of the continuous mapping theorem due to Kallenberg [14, Theorem 3.27]. Appendix B is devoted to some properties of the underlying punctured space R^d \ {0} and vague convergence. In Appendix C we recall the notion of a regularly varying random vector and its limit measure, and, in Proposition C.10, the limit measure of an appropriate positively homogeneous real-valued function of a regularly varying random vector. In Appendix D we formulate a result on weak convergence of partial sum processes towards Lévy processes by slightly modifying Theorem 7.1 in Resnick [25] with a different centering. In the end, we recall a result on the tail behavior and forward tail process of (X_k)_{k≥0} due to Basrak et al. [5], and we determine the limit measures of finite segments of (X_k)_{k≥0}, see Appendix E. Finally, we summarize the novelties of the paper.
According to our knowledge, studying aggregation of regularly varying Galton-Watson branching processes with immigration has not been considered before. In the proofs we make use of the explicit form of the (whole) spectral tail process and a very recent result of Planinić and Soulier [20, Theorem 3.1] about the tail measure of strongly stationary sequences. We explicitly determine the limit measures of finite segments of (X_k)_{k≥0}, see part (i) of Proposition E.3. In a companion paper, we will study the other iterated, idiosyncratic aggregation scheme, namely, when first the time scale n → ∞ and then the number of copies N → ∞. Let Z_+, N, Q, R, R_+, R_{++}, R_−, R_{−−} and C denote the set of non-negative integers, positive integers, rational numbers, real numbers, non-negative real numbers, positive real numbers, non-positive real numbers, negative real numbers and complex numbers, respectively. For each d ∈ N, the natural basis in R^d will be denoted by e_1, . . . , e_d. Put 1_d := (1, . . . , 1)^⊤ and S^{d−1} := {x ∈ R^d : ‖x‖ = 1}, and denote by B(S^{d−1}) the Borel σ-field of S^{d−1}. For a probability measure μ on R^d, its characteristic function will be denoted by μ̂, i.e., μ̂(θ) := ∫_{R^d} e^{i⟨θ,x⟩} μ(dx) for θ ∈ R^d. Convergence in distribution and almost sure convergence of random variables, and weak convergence of probability measures will be denoted by D→, a.s.→ and w→, respectively. Equality in distribution will be denoted by D=. We will use D(R_+, R^d) and C(R_+, R^d) to denote the space of all R^d-valued càdlàg and continuous functions on R_+, respectively. Let B(D(R_+, R^d)) denote the Borel σ-algebra on D(R_+, R^d) for the metric defined in Chapter VI, (1.26) of Jacod and Shiryaev [10]. With this metric D(R_+, R^d) is a complete and separable metric space and the topology induced by this metric is the so-called Skorokhod topology. Let (X_k)_{k∈Z_+} be a Galton-Watson branching process with immigration.
For each k, j ∈ Z_+, the number of individuals in the k-th generation will be denoted by X_k, the number of offsprings produced by the j-th individual belonging to the (k − 1)-th generation will be denoted by ξ_{k,j}, and the number of immigrants in the k-th generation will be denoted by ε_k. Then we have X_k = ∑_{j=1}^{X_{k−1}} ξ_{k,j} + ε_k, k ∈ N, where we define ∑_{j=1}^{0} := 0. Here {X_0, ξ_{k,j}, ε_k : k, j ∈ N} are supposed to be independent non-negative integer-valued random variables. Moreover, {ξ_{k,j} : k, j ∈ N} and {ε_k : k ∈ N} are supposed to consist of identically distributed random variables, respectively. For notational convenience, let ξ and ε be random variables such that ξ D= ξ_{1,1} and ε D= ε_1. If m_ξ := E(ξ) ∈ [0, 1) and ∑_{ℓ=1}^∞ log(ℓ) P(ε = ℓ) < ∞, then the Markov chain (X_k)_{k∈Z_+} admits a unique stationary distribution π, see, e.g., Quine [23]. Note that if m_ξ ∈ [0, 1) and P(ε = 0) = 1, then ∑_{ℓ=1}^∞ log(ℓ) P(ε = ℓ) = 0 and π is the Dirac measure δ_0 concentrated at the point 0. In fact, π = δ_0 if and only if P(ε = 0) = 1. Moreover, if m_ξ = 0 (which is equivalent to P(ξ = 0) = 1), then π is the distribution of ε. In what follows, we formulate our assumptions valid for the whole paper. We assume that m_ξ ∈ [0, 1) (so-called subcritical case) and ε is regularly varying with index α ∈ (0, 2), i.e., P(ε > x) ∈ R_{++} for all x ∈ R_{++} and the function R_{++} ∋ x ↦ P(ε > x) is regularly varying at infinity with index −α. Then P(ε = 0) < 1 and ∑_{ℓ=1}^∞ log(ℓ) P(ε = ℓ) < ∞, see, e.g., Barczy et al. [2, Lemma E.5], hence the Markov process (X_k)_{k∈Z_+} admits a unique stationary distribution π. We suppose that X_0 D= π, yielding that the Markov chain (X_k)_{k∈Z_+} is strongly stationary. In case of α ∈ [1, 2), we suppose additionally that E(ξ^2) < ∞. By Basrak et al. [5, Theorem 2.1.1] (see also Theorem E.1), X_0 is regularly varying with index α, yielding the existence of a sequence (a_N)_{N∈N} in R_{++} with N P(X_0 > a_N) → 1 as N → ∞, see, e.g., Lemma C.5. Let us fix an arbitrary sequence (a_N)_{N∈N} in R_{++} with this property.
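For concreteness, the branching recursion above can be simulated. The sketch below assumes Bernoulli(m_ξ) offspring and immigration equal to the integer part of a Pareto(α) variable, whose tail is regularly varying with index α; these concrete distributions and the function name are our illustrative choices, not assumptions of the paper.

```python
import random

def simulate_gw_immigration(n_steps, m_xi, alpha, seed=0):
    """Simulate a subcritical Galton-Watson process with immigration:
    X_k = sum_{j=1}^{X_{k-1}} xi_{k,j} + eps_k, with Bernoulli(m_xi)
    offspring (so m_xi < 1 gives subcriticality) and heavy-tailed
    immigration eps_k = floor of a Pareto(alpha) variable."""
    rng = random.Random(seed)
    x, path = 0, []
    for _ in range(n_steps):
        offspring = sum(1 for _ in range(x) if rng.random() < m_xi)
        immigration = int(rng.paretovariate(alpha))  # P(eps > x) ~ x**(-alpha)
        x = offspring + immigration
        path.append(x)
    return path
```

Running the chain long enough gives an approximate draw from the stationary distribution π, which is itself regularly varying with the same index α by the cited result.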
In fact, a_N = N^{1/α} L(N), N ∈ N, for some slowly varying continuous function L : R_{++} → R_{++}, see, e.g., Araujo and Giné [1, Exercise 6 on page 90]. Let X^{(j)} = (X^{(j)}_k)_{k∈Z_+}, j ∈ N, be a sequence of independent copies of (X_k)_{k∈Z_+}. We mention that we consider so-called idiosyncratic immigrations, i.e., the immigrations (ε^{(j)}_k)_{k∈N}, j ∈ N, belonging to the different copies are independent. One could study the case of common immigrations as well, i.e., when the copies share one and the same immigration sequence. In Theorem 2.1 we prove the convergence (2.1), where (X^{(k,α)}_t)_{t∈R_+} is a (k + 1)-dimensional α-stable process such that the characteristic function of the distribution μ_{k,α} of X^{(k,α)}_1 has the form given there with the convention 0 log(0) := 0 and with the constant C defined in (2.2). Note that C exists and is finite, since, by L'Hospital's rule, lim_{u→0} u^{−2}(sin(u) − u) = 0, hence the integrand u^{−2}(sin(u) − u) can be extended to [0, 1] continuously, yielding that its integral on [0, 1] is finite. Note that the scaling and the centering in (2.1) do not depend on j or k, since the copies are independent and the process (X_k)_{k∈Z_+} is strongly stationary; in particular, the common distribution of X^{(j)}_k, j ∈ N, k ∈ Z_+, is π. The next two remarks are devoted to the study of some properties of μ_{k,α}. 2.2 Remark. By the proof of Theorem 2.1 (see (3.4)), it turns out that the Lévy measure of μ_{k,α} is the measure ν_{k,α} appearing there, where the space R^{k+1}_0 := R^{k+1} \ {0} and its topological properties are discussed in Appendix B. The radial part of ν_{k,α} is u^{−α−1} du, and the spherical part of ν_{k,α} is a positive constant multiple of a measure concentrated on k + 1 directions; these directions form a basis in R^{k+1}, hence there is no proper linear subspace V of R^{k+1} covering the support of ν_{k,α}. Consequently, μ_{k,α} is a nondegenerate measure in the sense that there are no a ∈ R^{k+1} and a proper linear subspace V of R^{k+1} such that a + V covers the support of μ_{k,α}, see, e.g., Sato [27, Proposition 24.17 (ii)]. ✷ 2.3 Remark. If α ∈ (0, 1), then, for each θ ∈ R^{k+1}, the exponent in the characteristic function of μ_{k,α} can be rewritten in the form derived in the proof of Theorem 2.1. Consequently, the drift of μ_{k,α} is −(α/(1−α)) 1_{k+1}, see, e.g., Sato [27, Remark 14.6].
This drift is nonzero, hence μ_{k,α} is not strictly α-stable, see, e.g., Sato [27, Theorem 14.7 (iv) and Definition 13.2]. The 1-stable probability measure μ_{k,1} is not strictly 1-stable, since the spherical part of its nonzero Lévy measure ν_{k,1} is concentrated on R^{k+1}_+ ∩ S^k, and hence the condition (14.12) in Sato [27, Theorem 14.7 (v)] is not satisfied. If α ∈ (1, 2), then, for each θ ∈ R^{k+1}, the exponent in the characteristic function of μ_{k,α} can be rewritten in the form derived in the proof of Theorem 2.1. Consequently, the center of μ_{k,α} is (α/(α−1)) 1_{k+1}, which is, in fact, the expectation of μ_{k,α}, and it is nonzero, and hence μ_{k,α} is not strictly stable, see, e.g., Sato [27, Theorem 14.7 (vi) and Definition 13.2]. All in all, μ_{k,α} is not strictly α-stable, but α-stable for any α ∈ (0, 2). We also note that μ_{k,α} is absolutely continuous, see, e.g., Sato [27, Theorem 27.4 and Proposition 14.5]. ✷ The centering in Theorem 2.1 can be simplified in case of α ≠ 1. Namely, if α ∈ (0, 1], then, by Lemma C.6, (2.3) holds for each t ∈ R_{++}, and, in a similar way, if α ∈ (1, 2), then (2.4) holds for each t ∈ R_{++}. This shows that in case of α ∈ (0, 1) there is no need for centering, in case of α ∈ (1, 2) one can center with the expectation as well, while in case of α = 1 neither no centering nor centering with the expectation works even if the expectation does exist. More precisely, without centering in case of α ∈ (0, 1), or with centering with the expectation in case of α ∈ (1, 2), we have the following convergences. 2.4 Corollary. In case of α ∈ (0, 1), for each k ∈ Z_+, the convergence without centering holds as N → ∞, and, in case of α ∈ (1, 2), for each k ∈ Z_+, the convergence (2.5) with centering by the mean holds as N → ∞. Note that in case of α ∈ (1, 2), the scaling and the centering in (2.5) do not depend on j or k, since the copies are independent and the process (X_k)_{k∈Z_+} is strongly stationary, and especially, E(X^{(j)}_k) = E(X_0) = m_ε/(1 − m_ξ) for all j ∈ N and k ∈ Z_+ with m_ε := E(ε), see, e.g., Barczy et al. [4, formula (14)].
The next remark is devoted to the study of some distributional properties of the α-stable process (X^{(k,α)}_t + (α/(1−α)) t 1_{k+1})_{t∈R_+} in case of α ≠ 1. 2.5 Remark. The Lévy measure of the distribution of X^{(k,α)}_1 + (α/(1−α)) 1_{k+1} is the same as that of X^{(k,α)}_1, namely, ν_{k,α} given in Remark 2.2. If α ∈ (0, 1), then the drift of the distribution of X^{(k,α)}_1 + (α/(1−α)) 1_{k+1} is 0, hence the process (X^{(k,α)}_t + (α/(1−α)) t 1_{k+1})_{t∈R_+} is strictly α-stable, see, e.g., Sato [27, Theorem 14.7 (iv)]. If α ∈ (1, 2), then the center, i.e., the expectation of X^{(k,α)}_1 + (α/(1−α)) 1_{k+1} is 0, hence the process (X^{(k,α)}_t + (α/(1−α)) t 1_{k+1})_{t∈R_+} is strictly α-stable, see, e.g., Sato [27, Theorem 14.7 (vi)]. All in all, (X^{(k,α)}_t + (α/(1−α)) t 1_{k+1})_{t∈R_+} is strictly α-stable for any α ≠ 1. We also note that for each t ∈ R_{++}, the distribution of X^{(k,α)}_t + (α/(1−α)) t 1_{k+1} is absolutely continuous, see, e.g., Sato [27, Theorem 27.4 and Proposition 14.5]. ✷ Let (Y^{(α)}_k)_{k∈Z_+} be a strongly stationary process whose finite dimensional distributions are given by (2.6); its existence follows from the Kolmogorov extension theorem. Its strong stationarity is a consequence of (2.1) together with the strong stationarity of (X_k)_{k∈Z_+}. We note that the common distribution of Y^{(α)}_k, k ∈ Z_+, depends only on α; it does not depend on m_ξ, since its characteristic function has the form given in (2.6). Theorem 2.1 and Corollary 2.4 have the following consequences (see Corollary 2.6) for a contemporaneous aggregation of independent copies with different centerings, where (Y^{(α)}_k)_{k∈Z_+} is given by (2.6). Limit theorems will be presented for the aggregated stochastic process with different centerings and scalings. We will provide limit theorems in an iterated manner such that first N, and then n converges to infinity. In case of α ∈ (0, 1) and in case of α ∈ (1, 2), the limit laws admit explicit representations for ϑ ∈ R; moreover, in case of α ∈ (0, 1), the characteristic function can be rewritten in terms of the (forward) spectral tail process (Θ_ℓ)_{ℓ∈Z_+} of (X_ℓ)_{ℓ∈Z_+} given in (3.7) and (3.8). Indeed, this follows by (3.10), as desired.
We also remark that, using (3.13), one can check that (2.11) does not hold in case of α ∈ (1, 2), which is somewhat unexpected in view of page 171 in Mikosch and Wintenberger [17]. ✷ 2.9 Remark. If α ∈ (0, 1), then the drift of the distribution of Z^{(α)}_1, and, if α ∈ (1, 2), then the center, i.e., the expectation of Z^{(α)}_1, can be identified as in Remark 2.5; all in all, the process (Z^{(α)}_t)_{t∈R_+} shares the properties established there. ✷ Proof of Theorem 2.1. Let k ∈ Z_+. We are going to apply Theorem D.1. The aim of the following discussion is to check condition (D.1) of Theorem D.1. By the assumption, we have N P(X_0 > a_N) → 1 as N → ∞, yielding also a_N → ∞ as N → ∞, consequently, it is enough to show (3.1), where ν_{k,α} is a Lévy measure on R^{k+1}_0. In fact, by Theorem E.2, (X_0, . . . , X_k)^⊤ is regularly varying with index α, hence, by Proposition C.8, we know that the corresponding vague convergence holds, where ν_{k,α} is the so-called limit measure of (X_0, . . . , X_k)^⊤. Applying Proposition C.10 for the canonical projection p_0 : R^{k+1} → R given by p_0(x) := x_0 for x = (x_0, . . . , x_k)^⊤ ∈ R^{k+1}, which is continuous and positively homogeneous of degree 1, we obtain ν_{k,α}(T_1) = 1. Moreover, by the strong stationarity of (X_k)_{k∈Z_+}, the analogous relations hold for the other coordinates. In general, one does not know whether ν_{k,α} is a Lévy measure on R^{k+1}_0 or not. So, additional work is needed. We will determine ν_{k,α} explicitly, using a result of Planinić and Soulier [20]. The aim of the following discussion is to apply Theorem 3.1 in Planinić and Soulier [20] in order to determine ν_{k,α}, namely, we will prove that (3.2) holds for each Borel measurable function f : R^{k+1} → R_+ with f(0) = 0. Let (X_ℓ)_{ℓ∈Z} be a strongly stationary extension of (X_ℓ)_{ℓ∈Z_+}. For each i, j ∈ Z with i ≤ j, by Theorem E.2, (X_i, . . . , X_j)^⊤ is regularly varying with index α, hence, by the strong stationarity of (X_k)_{k∈Z} and the discussion above, we know that the corresponding vague convergence holds, where ν_{i,j,α} := ν_{j−i,α} is a non-null locally finite measure on R^{j−i+1}_0.
According to Basrak and Segers [6, Theorem 2.1], there exists a sequence (Y_ℓ)_{ℓ∈Z} of random variables, called the (whole) tail process of (X_ℓ)_{ℓ∈Z}, such that the finite dimensional conditional distributions of (x^{−1} X_ℓ)_{ℓ∈Z} given X_0 > x converge to those of (Y_ℓ)_{ℓ∈Z} as x → ∞. Let K be a random variable with geometric distribution P(K = k) = (1 − m_ξ^α) m_ξ^{αk}, k ∈ Z_+. Especially, if m_ξ = 0, then P(K = 0) = 1. If m_ξ ∈ (0, 1), then we have Y_ℓ = m_ξ^ℓ Y_0 1_{{ℓ ≥ −K}}, ℓ ∈ Z, where Y_0 is a random variable independent of K with Pareto distribution P(Y_0 > y) = y^{−α}, y ≥ 1. Indeed, as shown in Basrak et al. [5, Lemma 3.1], (Y_ℓ)_{ℓ∈Z_+} is the forward tail process of (X_ℓ)_{ℓ∈Z}. On the other hand, by Janssen and Segers [11, Example 6.2], (Y_ℓ)_{ℓ∈Z} is the tail process of the stationary solution (X'_ℓ)_{ℓ∈Z} to the corresponding stochastic recurrence equation. Since the distribution of the forward tail process determines the distribution of the (whole) tail process (see Basrak and Segers [6, Theorem 3.1 (ii)]), it follows that (Y_ℓ)_{ℓ∈Z} represents the tail process of (X_ℓ)_{ℓ∈Z}. If m_ξ = 0, then one can easily check that the corresponding condition of [20] is satisfied. Moreover, there exists a unique measure ν_α on R^Z endowed with the cylindrical σ-algebra such that ν_α ∘ p_{i,j}^{−1} = ν_{i,j,α} for all i ≤ j, where p_{i,j}(y) := (y_i, . . . , y_j) for y = (y_ℓ)_{ℓ∈Z} ∈ R^Z, see, e.g., Planinić and Soulier [20]. The measure ν_α is called the tail measure of (X_ℓ)_{ℓ∈Z}. If m_ξ ∈ (0, 1), then, by (3.5), the (whole) spectral tail process Θ = (Θ_ℓ)_{ℓ∈Z} of (X_ℓ)_{ℓ∈Z} is given by (3.7), and, if m_ξ = 0, then, by (3.6), it is given by (3.8). Let us introduce the so-called infargmax functional I : R^Z → Z ∪ {−∞, +∞}, where I(y) is the first time when the supremum sup_{ℓ∈Z} |y_ℓ| is achieved. We have P(I(Θ) = −K) = 1, hence the condition P(I(Θ) ∈ Z) = 1 of Theorem 3.1 in Planinić and Soulier [20] is satisfied. Consequently, we may apply Theorem 3.1 in Planinić and Soulier [20] for the nonnegative measurable function H : R^Z → R_+ given by H(y) = (f ∘ p_{0,k})(y), where f : R^{k+1} → R_+ is a measurable function with f(0) = 0.
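The tail process recalled above has a simple generative structure, which the following sketch samples directly. It assumes, following our reading of Basrak et al. [5], that Y_ℓ = m_ξ^ℓ Y_0 for ℓ ≥ −K and Y_ℓ = 0 otherwise, with Y_0 Pareto(α) independent of K; the exact geometric parametrization of K used in the code is our assumption, and all names are illustrative.

```python
import random

def sample_tail_process(m_xi, alpha, n_forward=5, rng=None):
    """Sample Y_l for l in a window around 0, ASSUMING the structure
    Y_l = m_xi**l * Y_0 for l >= -K and Y_l = 0 for l < -K, with
    Y_0 Pareto(alpha) and P(K = k) = (1 - m_xi**alpha) * m_xi**(alpha*k)
    (our reading of the cited results, not a verbatim quotation)."""
    rng = rng or random.Random(0)
    y0 = rng.paretovariate(alpha)  # P(Y_0 > y) = y**(-alpha), y >= 1
    # inverse-transform sampling of the geometric variable K
    u, k = rng.random(), 0
    p0 = 1.0 - m_xi ** alpha
    cum = p0
    while u > cum:
        k += 1
        cum += p0 * m_xi ** (alpha * k)
    return {l: (m_xi ** l) * y0 if l >= -k else 0.0
            for l in range(-k - 2, n_forward + 1)}
```

Note that the supremum of the sampled sequence is attained at l = −K, matching the identity P(I(Θ) = −K) = 1 used in the proof.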
By (3.2) in Planinić and Soulier [20], we obtain the desired expression for ν_{k,α}, where L denotes the backshift operator L : R^Z → R^Z given by (L y)_ℓ := y_{ℓ−1}, ℓ ∈ Z. For each k ∈ Z_+ and u ∈ R_+, on the event {K = 0}, by (3.7) and (3.8), we have the required identity. Consequently, we obtain (3.2), and hence (3.1), so condition (D.1) is satisfied. The aim of the following discussion is to check condition (D.2) of Theorem D.1, namely (3.9) for each j ∈ N and ℓ ∈ {0, . . . , k}. By Lemma C.6 with β = 2, we have the corresponding asymptotics of the truncated second moment of X_0, hence, for all ε ∈ R_{++}, using again that X_0 is regularly varying with index α, the required convergence holds as N → ∞, and, letting ε ↓ 0, we conclude (3.9). Consequently, we may apply Theorem D.1, and we obtain (2.1), where (X^{(k,α)}_t)_{t∈R_+} is an α-stable process such that the characteristic function of the distribution μ_{k,α} of X^{(k,α)}_1 takes the asserted form; the formal manipulations below are justified, since it will turn out that the real and imaginary parts of the exponent in the last expression are well defined. If α ∈ (0, 1), then ∫_0^∞ (e^{±ix} − 1) x^{−1−α} dx = Γ(−α) e^{∓iπα/2}, see, e.g., (14.18) in Sato [27] and its complex conjugate, thus we obtain the corresponding formula for each ϑ ∈ R_{++}, and, in a similar way, for each ϑ ∈ R_{−−}, hence for each ϑ ∈ R. Consequently, for each θ ∈ R^{k+1} and j ∈ {0, . . . , k}, the exponent can be computed explicitly, and we obtain (3.11), yielding the statement in case of α ∈ (0, 1). Note that the above calculation shows that (3.12) is valid for each α ∈ (0, 2). If α ∈ (1, 2), then ∫_0^∞ (e^{±ix} − 1 ∓ ix) x^{−1−α} dx = Γ(−α) e^{∓iπα/2}, see, e.g., (14.19) in Sato [27] and its complex conjugate, thus we obtain the corresponding formula for each ϑ ∈ R_{++}, and, in a similar way, for each ϑ ∈ R_{−−}, hence (3.13) for each ϑ ∈ R. Then, for each θ ∈ R^{k+1} and j ∈ {0, . . . , k}, the exponent can be computed explicitly. Consequently, we obtain (3.11) for all θ ∈ R^{k+1}, and, applying again (3.12), we conclude the statement in case of α ∈ (1, 2). Finally, we consider the case α = 1. For each ϑ ∈ R_{++}, the corresponding integral can be evaluated explicitly, where C is given in (2.2), see, e.g., (14.20) in Sato [27], and its complex conjugate has the analogous form for ϑ ∈ R_{++}; hence we obtain the expression for each ϑ ∈ R. Consequently, for each θ ∈ R^{k+1} and j ∈ {0, . . . , k}, applying again (3.12), we have the statement in case of α = 1. ✷ Proof of Corollary 2.4.
In case of α ∈ (0, 1), by (2.3) with t = 1, we have (3.14). Next, we may apply Lemma A.2 with suitable U, Φ and U^{(N)}, N ∈ N. The assumption of Lemma A.2 follows, since, by (3.14), the required convergence holds. Applying Lemma A.2, we obtain the convergence, where the limit is a (k + 1)-dimensional α-stable process. By Theorem 2.1 and Remark 2.3, the characteristic function of X^{(k,α)}_1 + (α/(1−α)) 1_{k+1} has the form given in the theorem, and hence we conclude the statement in case of α ∈ (0, 1). In case of α ∈ (1, 2), by (2.4) with t = 1, we have (3.15). Next, we may apply Lemma A.2 with U, Φ and U^{(N)}, N ∈ N, as defined above, and with a modified centering. This follows, since, by (3.15), the required convergence holds. Applying Lemma A.2, we obtain the statement in case of α ∈ (1, 2) as well. ✷ By Slutsky's lemma, (2.7) will be a consequence of (2.8). For each n ∈ N, by Corollary 2.6 and by the continuous mapping theorem, we obtain the convergence of the inner aggregation as N → ∞. Consequently, in order to prove (2.8) and (2.10), we need to show that for each α ∈ (0, 1) ∪ (1, 2), we have (3.16). For each α ∈ (0, 1) ∪ (1, 2), n ∈ N, d ∈ N, t_1, . . . , t_d ∈ R_{++} with t_1 < . . . < t_d and ϑ_1, . . . , ϑ_d ∈ R, we have the corresponding representation with t_0 := 0. For each α ∈ (0, 1) ∪ (1, 2), by the explicit form of the characteristic function of X^{(⌊nt_d⌋,α)}_1 given in Theorem 2.1, the exponent can be written out, and hence, for each α ∈ (0, 2), it splits into blocks. The aim of the following discussion is to show that, for each α ∈ (0, 2) and ℓ ∈ {1, . . . , d}, the convergence (3.17) holds, where, for each ℓ ∈ {1, . . . , d} and j ∈ {⌊nt_{ℓ−1}⌋ + 1, . . . , ⌊nt_ℓ⌋}, the corresponding terms are defined accordingly. In case of α ∈ (0, 1], we have |x|^α − |y|^α ≤ |x + y|^α ≤ |x|^α + |y|^α, x, y ∈ R. (3.19) In case of α ∈ (1, 2), by the mean value theorem and by (3.19), we have |x + y|^α − |x|^α ≤ α|y| max{|x + y|^{α−1}, |x|^{α−1}} ≤ α|y|(|x|^{α−1} + |y|^{α−1}), x, y ∈ R. Hence for each α ∈ (0, 2) and x, y ∈ R, we obtain |x|^α − 2|y|(|x|^{α−1} + |y|^{α−1}) ≤ |x + y|^α ≤ |x|^α + 2|y|(|x|^{α−1} + |y|^{α−1}), so, by (3.18) and the squeeze theorem, to prove (3.17), it is enough to check (3.20), (3.21) and (3.22). A direct calculation yields (3.20). In case of α ∈ (1, 2), for all x_1, . . . , x_k ∈ R, we have |x_1 + · · · + x_k|^α ≤ k^{α−1}(|x_1|^α + · · · + |x_k|^α), hence, by (3.19), for each α ∈ (0, 2), we obtain the corresponding bound. Consequently, we have the required estimate, yielding (3.21). For each n ∈ N and for each j ∈ {⌊nt_{ℓ−1}⌋ + 1, . . . , ⌊nt_ℓ⌋}, the corresponding bound holds, and hence its 1/n multiple tends to 0 as n → ∞, yielding (3.22). Thus we obtain (3.17). Next we show that for each ℓ ∈ {1, . . . , d}, the analogous convergence holds as n → ∞. If ϑ_ℓ = 0, then this readily follows from (3.18) and (3.21). If ϑ_ℓ ≠ 0, then we show that there exists C_ℓ ∈ R_{++} such that (3.23) holds for each n ∈ N and for each j ∈ {⌊nt_{ℓ−1}⌋ + 1, . . . , ⌊nt_ℓ⌋} with j < ⌊nt_ℓ⌋ + 1 − C_ℓ. First, observe that, by (3.18), the inequality in question holds if and only if an explicit condition on j is satisfied. Hence, for ϑ_ℓ ≠ 0, n ∈ N, and j ∈ {⌊nt_{ℓ−1}⌋ + 1, . . . , ⌊nt_ℓ⌋} with j < ⌊nt_ℓ⌋ + 1 − C_ℓ, we have (3.23). Moreover, for each n ∈ N and j ∈ {⌊nt_{ℓ−1}⌋ + 1, . . . , ⌊nt_ℓ⌋}, by (3.18), the corresponding bound holds. Consequently, by (3.17), we obtain the convergence, as desired. We conclude, for all α ∈ (0, 1) ∪ (1, 2), the convergence of the exponents. By the continuity theorem, we obtain, for all α ∈ (0, 1) ∪ (1, 2), the convergence of the finite dimensional distributions as n → ∞, hence the continuous mapping theorem yields (3.16), and we finished the proofs of (2.7), (2.8) and (2.10). Now we turn to prove (2.9). For each n ∈ N, by Corollary 2.6 and by the continuous mapping theorem, in case of α = 1, we obtain the analogous convergence. Consequently, in order to prove (2.9), we need to show (3.25). Since the limit in (3.25) is deterministic, by van der Vaart [28, Theorem 2.7, part (vi)], it is enough to show that for each t ∈ R_+, we have (3.26). For each n ∈ N, t ∈ R_+, and ϑ ∈ R, we have the corresponding representation. By the explicit form of the characteristic function of X^{(⌊nt⌋,1)}_1 given in Theorem 2.1, the exponent converges as n → ∞ for each ϑ ∈ R. Indeed, (1/(n log(n))) ⟨1_{⌊nt⌋+1}, 1_{⌊nt⌋+1}⟩ = (⌊nt⌋ + 1)/(n log(n)) → 0 as n → ∞, and hence, by (3.27), the convergence holds as n → ∞. By the continuity theorem, we obtain (3.26), hence we finished the proof of (2.9).
✷ If ξ and ξ_n, n ∈ N, are random elements with values in a metric space (E, d), then we also denote by ξ_n D→ ξ the weak convergence of the distributions of ξ_n on the space (E, B(E)) towards the distribution of ξ on the space (E, B(E)) as n → ∞, where B(E) denotes the Borel σ-algebra on E induced by the given metric d. The following version of the continuous mapping theorem can be found for example in Theorem 3.27 of Kallenberg [14]. A.1 Lemma. Let (S, d_S) and (T, d_T) be metric spaces and (ξ_n)_{n∈N}, ξ be random elements with values in S such that ξ_n D→ ξ as n → ∞. Let f : S → T and f_n : S → T, n ∈ N, be measurable mappings and C ∈ B(S) such that P(ξ ∈ C) = 1 and lim_{n→∞} d_T(f_n(s_n), f(s)) = 0 if lim_{n→∞} d_S(s_n, s) = 0 and s ∈ C, s_n ∈ S, n ∈ N. Then f_n(ξ_n) D→ f(ξ) as n → ∞. We will use the following corollary of this lemma several times. Proof. First, we check that R^d_0 furnished with the metric ̺ is a complete separable metric space. If (x_n)_{n∈N} is a Cauchy sequence in R^d_0, then for all ε ∈ (0, 1), there exists an N_ε ∈ N such that ̺(x_n, x_m) < ε for n, m ≥ N_ε. Hence ‖x_n − x_m‖ < ε and |1/‖x_n‖ − 1/‖x_m‖| < ε for n, m ≥ N_ε, i.e., (x_n)_{n∈N} and (1/‖x_n‖)_{n∈N} are Cauchy sequences in R^d and in R, respectively. Consequently, there exists an x ∈ R^d such that lim_{n→∞} ‖x_n − x‖ = 0, and (1/‖x_n‖)_{n∈N} is convergent as n → ∞, yielding that x ≠ 0, and so x ∈ R^d_0. By the continuity of the norm, lim_{n→∞} ̺(x_n, x) = 0, as desired. The separability of R^d_0 readily follows, since R^d_0 ∩ Q^d is a countable everywhere dense subset of R^d_0. Next, we check that B ⊂ R^d_0 is bounded with respect to the metric ̺ if and only if there exists ε ∈ R_{++} such that B ⊂ {x ∈ R^d_0 : ‖x‖ > ε}. If B ⊂ R^d_0 is bounded, then there exists r > 0 such that ̺(x, e_1) < r, x ∈ B, yielding that |1/‖x‖ − 1| < r, x ∈ B, and then ‖x‖ > 1/(1 + r), x ∈ B, so one can choose ε = 1/(1 + r).
If there exists ε ∈ R_{++} such that B ⊂ {x ∈ R^d_0 : ‖x‖ > ε}, then, by the definition (B.1) of the metric ̺, we have ̺(x, y) ≤ 1 + 2/ε for all x, y ∈ B, hence B is bounded with respect to ̺. Since R^d_0 is locally compact, second countable and Hausdorff, one could choose a metric such that the relatively compact sets are precisely the bounded ones, see Kallenberg [15, page 18]. The metric ̺ does not have this property, but we do not need it. Write B_b(R^d_0) for the class of bounded Borel sets with respect to the metric ̺ given in (B.1), and C_b(R^d_0) for the class of bounded, continuous functions f : R^d_0 → R_+ with bounded support. The space M(R^d_0) of locally finite measures on R^d_0 is constructed as in Chapter 4 in Kallenberg [15]. The associated notion of vague convergence of a sequence (ν_n)_{n∈N} in M(R^d_0) towards ν ∈ M(R^d_0) will be denoted by ν_n v→ ν. A set B ∈ B_b(R^d_0) is called a ν-continuity set if ν(∂B) = 0, and the class of bounded ν-continuity sets will be denoted by B_b(R^d_0)_ν. The following statement is an analogue of the portmanteau theorem for vague convergence, see, e.g., Kallenberg [13, 15.7.2]. B.2 Lemma. Let ν, ν_n ∈ M(R^d_0), n ∈ N. Then vague convergence ν_n v→ ν, convergence of ν_n(f) to ν(f) for all f ∈ C_b(R^d_0), and convergence of ν_n(B) to ν(B) for all B ∈ B_b(R^d_0)_ν are equivalent. The following statement is an analogue of the continuous mapping theorem for vague convergence, see, e.g., Kallenberg [13, 15.7.3]. Write D_f for the set of discontinuities of a function f : R^d_0 → R. B.3 Lemma. If ν_n v→ ν in M(R^d_0), then ν_n(f) → ν(f) as n → ∞ for every bounded measurable function f : R^d_0 → R_+ with bounded support satisfying ν(D_f) = 0. First, we recall the notions of slowly varying and regularly varying functions, respectively. C.1 Definition. A measurable function U : R_{++} → R_{++} is called regularly varying at infinity with index ρ ∈ R if for all c ∈ R_{++}, lim_{x→∞} U(cx)/U(x) = c^ρ. In case of ρ = 0, we call U slowly varying at infinity. C.2 Definition. A random variable Y is called regularly varying with index α ∈ R_{++} if P(|Y| > x) ∈ R_{++} for all x ∈ R_{++}, the function R_{++} ∋ x ↦ P(|Y| > x) is regularly varying at infinity with index −α, and a tail-balance condition holds: (C.1) lim_{x→∞} P(Y > x)/P(|Y| > x) = p and lim_{x→∞} P(Y < −x)/P(|Y| > x) = q, where p + q = 1. C.3 Remark. In the tail-balance condition (C.1), the second convergence can be replaced by (C.2). On the other hand, if Y is a random variable such that P(|Y| > x) ∈ R_{++} for all x ∈ R_{++}, the function R_{++} ∋ x ↦ P(|Y| > x) ∈ R_{++} is regularly varying at infinity with index −α, and (C.2) holds, then the second convergence in the tail-balance condition (C.1) can be derived in a similar way. ✷ C.4 Lemma.
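Definition C.1 can be probed numerically: for a function U that is regularly varying with index ρ, the ratio U(cx)/U(x) approaches c^ρ as x → ∞. A minimal sketch with an assumed test function (a power multiplied by a slowly varying logarithmic factor):

```python
import math

def U(x):
    """Test function, regularly varying at infinity with index rho = -1.5:
    a power times a slowly varying logarithmic factor."""
    return x ** (-1.5) * math.log(x)

def rv_ratio(c, x):
    """U(c*x)/U(x); tends to c**(-1.5) as x -> infinity, since the
    logarithmic factor's ratio log(c*x)/log(x) tends to 1."""
    return U(c * x) / U(x)
```

The convergence is slow precisely because of the slowly varying factor, which is why large evaluation points are needed.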
(i) A non-negative random variable Y is regularly varying with index α ∈ R_{++} if and only if P(Y > x) ∈ R_{++} for all x ∈ R_{++}, and the function R_{++} ∋ x ↦ P(Y > x) ∈ R_{++} is regularly varying at infinity with index −α. (ii) If Y is a regularly varying random variable with index α ∈ R_{++}, then for each β ∈ R_{++}, |Y|^β is regularly varying with index α/β. C.5 Lemma. If Y is a regularly varying random variable with index α ∈ R_{++}, then there exists a sequence (a_n)_{n∈N} in R_{++} such that n P(|Y| > a_n) → 1 as n → ∞. If (a_n)_{n∈N} is such a sequence, then a_n → ∞ as n → ∞. Proof. We are going to show that one can choose a_n := max{c_n, 1}, n ∈ N, where c_n denotes the (1 − 1/n) lower quantile of |Y|, namely, c_n := inf{x ∈ R : P(|Y| ≤ x) ≥ 1 − 1/n}. For each n ∈ N, by the definition of the infimum, there exists a sequence (x_m)_{m∈N} in R such that x_m ↓ c_n as m → ∞ and P(|Y| > x_m) ≤ 1/n, m ∈ N. Letting m → ∞, using that the distribution function of |Y| is right-continuous, we obtain P(|Y| > c_n) ≤ 1/n, thus n P(|Y| > c_n) ≤ 1, and hence (C.3) lim sup_{n→∞} n P(|Y| > c_n) ≤ 1. Moreover, for each n ∈ N, again by the definition of the infimum, we have 1/n < P(|Y| > c_n − 1), thus n P(|Y| > c_n − 1) > 1, and hence (C.4) lim inf_{n→∞} n P(|Y| > c_n − 1) ≥ 1. We have c_n → ∞ as n → ∞, since |Y| is regularly varying with index α ∈ R_{++} (see part (ii) of Lemma C.4), yielding that |Y| is unbounded. Thus for each q ∈ (0, 1) and for sufficiently large n ∈ N, we have c_n ≥ 1/(1 − q), and then c_n − 1 ≥ q c_n, and hence P(|Y| > c_n − 1) ≤ P(|Y| > q c_n). Consequently, for each q ∈ (0, 1), using (C.4) and that |Y| is regularly varying with index α ∈ R_{++}, we obtain 1 ≤ lim inf_{n→∞} n P(|Y| > c_n − 1) ≤ lim inf_{n→∞} n P(|Y| > q c_n) = lim inf_{n→∞} [P(|Y| > q c_n)/P(|Y| > c_n)] n P(|Y| > c_n) = q^{−α} lim inf_{n→∞} n P(|Y| > c_n). Hence for each q ∈ (0, 1), we have lim inf_{n→∞} n P(|Y| > c_n) ≥ q^α. Letting q ↑ 1, we get lim inf_{n→∞} n P(|Y| > c_n) ≥ 1, and hence, by (C.3), we conclude lim_{n→∞} n P(|Y| > c_n) = 1. Since c_n → ∞, we have a_n = c_n for all sufficiently large n ∈ N, so lim_{n→∞} n P(|Y| > a_n) = 1 as well.
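The quantile construction in the proof of Lemma C.5 can be checked in closed form for a concrete law. For a pure Pareto tail P(|Y| > x) = x^{−α}, x ≥ 1, the (1 − 1/n) lower quantile is n^{1/α}, and n P(|Y| > a_n) equals 1 exactly; the helper names below are ours.

```python
def pareto_tail(x, alpha):
    """P(|Y| > x) for a Pareto(alpha) variable supported on [1, infinity)."""
    return x ** (-alpha) if x >= 1.0 else 1.0

def quantile_scaling(n, alpha):
    """The (1 - 1/n) lower quantile of |Y|: the solution of
    x**(-alpha) = 1/n, i.e. a_n = n**(1/alpha)."""
    return n ** (1.0 / alpha)
```

This also illustrates the representation a_N = N^{1/α} L(N) from Section 2 with the slowly varying factor L identically equal to 1.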
If (a_n)_{n∈N} is a sequence in R_{++} such that n P(|Y| > a_n) → 1 as n → ∞, then a_n → ∞ as n → ∞, since |Y| is unbounded. ✷

C.6 Lemma. (Karamata's theorem for truncated moments) Consider a non-negative regularly varying random variable Y with index α ∈ R_{++}. Then for each β ∈ (α, ∞),

lim_{x→∞} E(Y^β 1_{{Y ≤ x}}) / (x^β P(Y > x)) = α/(β − α).

For Lemma C.6, see, e.g., Bingham et al. [7, pages 26-27].

C.7 Definition. A d-dimensional random vector Y is called regularly varying with index α ∈ R_{++} if there exists a probability measure ψ on the unit sphere S^{d−1} such that for all c ∈ R_{++},

P(‖Y‖ > cx, Y/‖Y‖ ∈ ·) / P(‖Y‖ > x) →^w c^{−α} ψ(·) as x → ∞,

where →^w denotes the weak convergence of finite measures on S^{d−1}. The probability measure ψ is called the spectral measure of Y.

The following equivalent characterization of multivariate regular variation can be derived, e.g., from Resnick [24, page 69].

C.8 Proposition. A d-dimensional random vector Y is regularly varying with some index α ∈ R_{++} if and only if there exists a non-null locally finite measure µ on R^d_0 satisfying the limit relation

(C.5)  P(x^{−1} Y ∈ ·) / P(‖Y‖ > x) →^v µ as x → ∞,

where →^v denotes vague convergence of locally finite measures on R^d_0 (see Appendix B for the notion →^v). Further, µ satisfies the property µ(cB) = c^{−α} µ(B) for any c ∈ R_{++} and B ∈ B(R^d_0) (see, e.g., Theorem 1.14 and 1.15 and Remark 1.16 in Lindskog [16]). The measure µ in Proposition C.8 is called the limit measure of Y.

Proof of Proposition C.8. Recall that a d-dimensional random vector Y is regularly varying with some index α ∈ R_{++} if and only if the convergence in Definition C.7 holds; the equivalence of this convergence with (C.5) can be derived as in Resnick [24, page 69]. ✷

The next statement follows, e.g., from part (i) in Lemma C.3.1 in Buraczewski et al. [8].

C.9 Lemma. If Y is a regularly varying d-dimensional random vector with index α ∈ R_{++}, then for each c ∈ R^d, the random vector Y − c is regularly varying with index α.

Recall that if Y is a regularly varying d-dimensional random vector with index α ∈ R_{++} and with limit measure µ given in (C.5), and f : R^d → R is a continuous function with f^{−1}({0}) = {0} which is positively homogeneous of degree β ∈ R_{++} (i.e., f(cv) = c^β f(v) for every c ∈ R_{++} and v ∈ R^d), then f(Y) is regularly varying with index α/β and with limit measure µ(f^{−1}(·)), see, e.g., Buraczewski et al. [8, page 282].
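Lemma C.6 can also be verified in closed form for an assumed Pareto(α) distribution, P(Y > x) = x^{−α} for x ≥ 1, whose truncated moments are explicit. The following Python sketch (illustrative parameters, not part of the paper's argument) checks the convergence of the Karamata ratio for one choice of β > α:

```python
# Illustration of Karamata's theorem for truncated moments (Lemma C.6) for an
# assumed Pareto(alpha) variable with P(Y > x) = x**(-alpha), x >= 1.
# For beta > alpha:  E[Y^beta 1{Y <= x}] / (x^beta P(Y > x)) -> alpha/(beta - alpha).

alpha, beta = 1.5, 2.0          # illustrative choice with beta > alpha

def truncated_moment(x):
    # E[Y^beta 1{Y <= x}] = int_1^x y^beta * alpha * y^(-alpha-1) dy, in closed form
    return alpha / (beta - alpha) * (x ** (beta - alpha) - 1.0)

def karamata_ratio(x):
    # E[Y^beta 1{Y <= x}] / (x^beta * P(Y > x))
    return truncated_moment(x) / (x ** beta * x ** (-alpha))

limit = alpha / (beta - alpha)
errors = [abs(karamata_ratio(x) - limit) for x in (1e2, 1e4, 1e6)]
assert errors == sorted(errors, reverse=True)   # the error decreases in x
assert errors[-1] < 1e-2
print("Karamata ratio tends to", limit)
```

Here the ratio equals (α/(β − α))(1 − x^{α−β}), so the limit α/(β − α) is approached at a polynomial rate.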
Next we describe the tail behavior of f(Y) for appropriate positively homogeneous functions f : R^d → R.

C.10 Proposition. Let Y be a regularly varying d-dimensional random vector with index α ∈ R_{++} and let f : R^d → R be a measurable function which is positively homogeneous of degree β ∈ R_{++}, continuous at 0 and µ(D_f) = 0, where µ is the limit measure of Y given in (C.5) and D_f denotes the set of discontinuities of f. Then

lim_{x→∞} P(f(Y) > x) / P(‖Y‖ > x^{1/β}) = µ(f^{−1}((1, ∞))),

and f(Y) is regularly varying with tail index α/β.

Proof. For all x ∈ R_{++}, we have

P(f(Y) > x) / P(‖Y‖ > x^{1/β}) = P(x^{−1/β} Y ∈ f^{−1}((1, ∞))) / P(‖Y‖ > x^{1/β}),

by the positive homogeneity of f. Next, we check that f^{−1}((1, ∞)) is a µ-continuity set being bounded with respect to the metric ̺ given in (B.1). Since f(0) = 0 (following from the positive homogeneity of f), we have f^{−1}((1, ∞)) ∈ B(R^d_0). The continuity of f at 0 implies the existence of an ε ∈ R_{++} such that for all x ∈ R^d with ‖x‖ ≤ ε we have |f(x)| ≤ 1, thus x ∉ f^{−1}((1, ∞)), hence f^{−1}((1, ∞)) ⊂ {x ∈ R^d : ‖x‖ > ε}, i.e., f^{−1}((1, ∞)) is separated from 0, and hence, by Lemma B.1, f^{−1}((1, ∞)) is bounded in R^d_0 with respect to the metric ̺. Further, we have

∂_{R^d_0}(f^{−1}((1, ∞))) ⊂ f^{−1}({1}) ∪ D_f.

Here µ(f^{−1}({1})) = 0, since if, on the contrary, we suppose that µ(f^{−1}({1})) ∈ (0, ∞], then for all u, v ∈ R_{++} with u < v, we have

µ(f^{−1}((u, v))) ≥ Σ_{c ∈ (u,v) ∩ Q} µ(f^{−1}({c})) = Σ_{c ∈ (u,v) ∩ Q} c^{−α/β} µ(f^{−1}({1})) = ∞,

where we used that µ(cB) = c^{−α} µ(B), c ∈ R_{++}, B ∈ B(R^d_0) (see Proposition C.8), and that f^{−1}({c}) = c^{1/β} f^{−1}({1}), c ∈ R_{++}, by the positive homogeneity of f. This leads us to a contradiction, since f^{−1}((u, v)) is separated from 0 (can be seen similarly as for f^{−1}((1, ∞))), so, by Lemma B.1, it is bounded with respect to the metric ̺, and hence µ(f^{−1}((u, v))) < ∞ due to the local finiteness of µ. Hence µ(∂_{R^d_0}(f^{−1}((1, ∞)))) = 0, as desired. Consequently, by the portmanteau theorem for vague convergence (see Lemma B.2), we have

P(f(Y) > x) / P(‖Y‖ > x^{1/β}) → µ(f^{−1}((1, ∞))) as x → ∞,

as desired. ✷

We formulate a slight modification of Theorem 7.1 in Resnick [25] with a different centering.

D.1 Theorem.
Suppose that for each N ∈ N, X_{N,j}, j ∈ N, are independent and identically distributed d-dimensional random vectors such that

(D.1)  N P(X_{N,1} ∈ ·) →^v ν as N → ∞,

where ν is a Lévy measure on R^d_0 such that ν({x ∈ R^d_0 : |⟨e_ℓ, x⟩| = 1}) = 0 for every ℓ ∈ {1, . . . , d}, and that

lim_{ε↓0} lim sup_{N→∞} N E(⟨e_ℓ, X_{N,1}⟩² 1_{{‖X_{N,1}‖ ≤ ε}}) = 0 for every ℓ ∈ {1, . . . , d}.

Then we have

( Σ_{j=1}^{⌊Nt⌋} (X_{N,j} − Σ_{ℓ=1}^d E(⟨e_ℓ, X_{N,1}⟩ 1_{{|⟨e_ℓ, X_{N,1}⟩| ≤ 1}}) e_ℓ) )_{t∈R_+} →^D (X_t)_{t∈R_+} as N → ∞,

where (X_t)_{t∈R_+} is a Lévy process such that the characteristic function of the distribution µ of X_1 has the form

E(e^{i⟨θ, X_1⟩}) = exp{ ∫_{R^d_0} (e^{i⟨θ, x⟩} − 1 − i Σ_{ℓ=1}^d ⟨θ, e_ℓ⟩ ⟨e_ℓ, x⟩ 1_{{|⟨e_ℓ, x⟩| ≤ 1}}) ν(dx) },  θ ∈ R^d.

Proof. There exists r ∈ R_{++} such that ν({x ∈ R^d_0 : ‖x‖ = r}) = 0, since the function R_{++} ∋ t ↦ ν({x ∈ R^d_0 : ‖x‖ > t}) is decreasing, hence it has at most countably many discontinuities. By an appropriate modification of Theorem 7.1 in Resnick [25], we obtain

( Σ_{j=1}^{⌊Nt⌋} (X_{N,j} − E(X_{N,1} 1_{{‖X_{N,1}‖ ≤ r}})) )_{t∈R_+} →^D (X̃_t)_{t∈R_+} as N → ∞,

where (X̃_t)_{t∈R_+} is a Lévy process such that the characteristic function of X̃_1 has the form

E(e^{i⟨θ, X̃_1⟩}) = exp{ ∫_{R^d_0} (e^{i⟨θ, x⟩} − 1 − i⟨θ, x⟩ 1_{{‖x‖ ≤ r}}) ν(dx) },  θ ∈ R^d.

Let us consider the decomposition

Σ_{j=1}^{⌊Nt⌋} (X_{N,j} − Σ_{ℓ=1}^d E(⟨e_ℓ, X_{N,1}⟩ 1_{{|⟨e_ℓ, X_{N,1}⟩| ≤ 1}}) e_ℓ) = Σ_{j=1}^{⌊Nt⌋} (X_{N,j} − E(X_{N,1} 1_{{‖X_{N,1}‖ ≤ r}})) + ⌊Nt⌋ Σ_{ℓ=1}^d E(g_ℓ(X_{N,1})) e_ℓ

for each t ∈ R_{++}. Here for each ℓ ∈ {1, . . . , d}, we have

E(g_ℓ(X_{N,1})) = E(⟨e_ℓ, X_{N,1}⟩ 1_{{‖X_{N,1}‖ ≤ r}}) − E(⟨e_ℓ, X_{N,1}⟩ 1_{{|⟨e_ℓ, X_{N,1}⟩| ≤ 1}}),

where g_ℓ : R^d → R, g_ℓ(x) := x_ℓ (1_{{‖x‖ ≤ r}} − 1_{{|x_ℓ| ≤ 1}}), x = (x_1, . . . , x_d)^⊤ ∈ R^d. For each ℓ ∈ {1, . . . , d}, the positive and negative parts g_ℓ^+ and g_ℓ^− of the function g_ℓ are bounded, measurable with a bounded support (following from Lemma B.1), and, due to ν({x ∈ R^d_0 : |⟨e_ℓ, x⟩| = 1}) = 0, ℓ ∈ {1, . . . , d}, and ν({x ∈ R^d_0 : ‖x‖ = r}) = 0, the sets of discontinuity points D_{g_ℓ^+} and D_{g_ℓ^−} have ν-measure 0, i.e., ν(D_{g_ℓ^+}) = ν(D_{g_ℓ^−}) = 0. Consequently, by (D.1) and Lemma B.3, we have

N E(g_ℓ(X_{N,1})) = N E(g_ℓ^+(X_{N,1})) − N E(g_ℓ^−(X_{N,1})) → ν(g_ℓ^+) − ν(g_ℓ^−) = ν(g_ℓ) ∈ R as N → ∞,

since ν(g_ℓ^+), ν(g_ℓ^−) ∈ R_+ due to the fact that ν is a Lévy measure. Next, we may apply Lemma A.2, and we obtain that

Φ(U)_t = X̃_t + t Σ_{ℓ=1}^d ν(g_ℓ) e_ℓ = X_t,  t ∈ R_+,

is a d-dimensional Lévy process, as desired. ✷

Here π denotes the unique stationary distribution of the Markov chain (X_k)_{k∈Z_+}, and consequently, π is also regularly varying with index α. Note that in case of α = 1 and m_ε = ∞, Basrak et al. [5, Theorem 2.1.1] assume additionally that ε is consistently varying (or in other words, intermediate varying), but, eventually, it follows from the fact that ε is regularly varying.
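The vague-convergence condition (D.1) of Theorem D.1 can be made concrete in a one-dimensional toy example (an assumption for illustration, not the paper's setting): take X_{N,j} = Y_j / a_N with Y_j iid Pareto(α) and a_N = N^{1/α}; then N P(X_{N,1} ∈ ·) converges vaguely to the one-sided α-stable Lévy measure ν(dx) = α x^{−α−1} dx on (0, ∞). A minimal Python check at a few test points:

```python
# Sketch of condition (D.1) in an assumed 1-dimensional example:
# X_{N,j} = Y_j / a_N with Y_j iid Pareto(alpha) and a_N = N**(1/alpha).
# Then N * P(X_{N,1} > x) -> nu((x, infinity)) = x**(-alpha), the tail of the
# limiting Levy measure nu(dx) = alpha * x**(-alpha-1) dx on (0, infinity).

alpha = 0.8

def pareto_tail(x):               # P(Y > x) for the assumed Pareto model
    return 1.0 if x <= 1.0 else x ** (-alpha)

def scaled_tail(N, x):            # N * P(Y / a_N > x)
    a_N = N ** (1.0 / alpha)
    return N * pareto_tail(a_N * x)

for x in (0.5, 1.0, 3.0):
    for N in (10**3, 10**6):
        assert abs(scaled_tail(N, x) - x ** (-alpha)) < 1e-9
print("vague convergence of N P(X_{N,1} in .) verified at the test points")
```

For this exact Pareto model the scaled tail equals x^{−α} as soon as a_N x ≥ 1; with a general regularly varying Y, equality would hold only in the limit N → ∞.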
Let (X_k)_{k∈Z} be a strongly stationary extension of (X_k)_{k∈Z_+}. Basrak et al. [5, Lemma 3.1] described the so-called forward tail process of the strongly stationary process (X_k)_{k∈Z}, and hence, due to Basrak and Segers [6, Theorem 2.1], the strongly stationary process (X_k)_{k∈Z} is jointly regularly varying.

E.2 Theorem. The finite dimensional conditional distributions of (x^{−1} X_k)_{k∈Z_+} with respect to the condition X_0 > x converge weakly to the corresponding finite dimensional distributions of (m_ξ^k Y)_{k∈Z_+} as x → ∞, where Y is a random variable with Pareto distribution P(Y ≤ y) = (1 − y^{−α}) 1_{[1,∞)}(y), y ∈ R. Consequently, the strongly stationary process (X_k)_{k∈Z} is jointly regularly varying with index α, i.e., all its finite dimensional distributions are regularly varying with index α.

The process (m_ξ^k Y)_{k∈Z_+} is the so-called forward tail process of (X_k)_{k∈Z}. Moreover, there exists a (whole) tail process of (X_k)_{k∈Z} as well. By the proof of Theorem 2.1 and Proposition C.10, we obtain the following results.

E.3 Proposition. For each k ∈ Z_+,

(i) the limit measure µ_{k,α} of (X_0, . . . , X_k)^⊤ given in (3.3) takes the form µ_{k,α} = ν_{k,α} / ν_{k,α}({x ∈ R^{k+1}_0 : x_0 > 1}), where ν_{k,α} is given by (3.4);

(ii) the tail behavior of X_0 + · · · + X_k is given by

lim_{x→∞} P(X_0 + · · · + X_k > x) / P(X_0 > x) = ν_{k,α}({x ∈ R^{k+1}_0 : x_0 + · · · + x_k > 1}) / ν_{k,α}({x ∈ R^{k+1}_0 : x_0 > 1}).

Proof. (i). In the proof of Theorem 2.1, we derived µ_{k,α} = ν_{k,α} / ν_{k,α}({x ∈ R^{k+1}_0 : x_0 > 1}), where, using Proposition C.10 with the 1-homogeneous function R^{k+1} ∋ x ↦ ‖x‖, we have

ν_{k,α}({x ∈ R^{k+1}_0 : ‖x‖ > 1}) = lim_{x→∞} P(‖(X_0, . . . , X_k)^⊤‖ > x) / P(‖(X_0, . . . , X_k)^⊤‖ > x) = 1,

by (3.4).

(ii). Applying Proposition C.10 for the 1-homogeneous functions R^{k+1} ∋ x ↦ x_0 and R^{k+1} ∋ x ↦ x_0 + · · · + x_k and formula (3.4), we obtain

lim_{x→∞} P(X_0 + · · · + X_k > x) / P(X_0 > x) = lim_{x→∞} [P(‖(X_0, . . . , X_k)^⊤‖ > x) / P(X_0 > x)] · [P(X_0 + · · · + X_k > x) / P(‖(X_0, . . . , X_k)^⊤‖ > x)] = ν_{k,α}({x ∈ R^{k+1}_0 : x_0 + · · · + x_k > 1}) / ν_{k,α}({x ∈ R^{k+1}_0 : x_0 > 1}),

as desired. ✷
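Theorem E.2 can be illustrated by simulation. The Python sketch below uses an assumed toy model chosen only for illustration, a Galton-Watson process with Bernoulli(m_ξ) offspring (binomial thinning) and integer-valued Pareto-type immigration with hypothetical parameters, and checks that, conditionally on X_0 exceeding a large level, the ratios X_1/X_0 and X_2/X_0 concentrate around m_ξ and m_ξ², as the forward tail process (m_ξ^k Y)_{k∈Z_+} predicts:

```python
# Monte Carlo sketch of Theorem E.2 for an assumed toy model (parameters are
# hypothetical): Galton-Watson process X_{t+1} = Binomial(X_t, m) + eps_t with
# Bernoulli(m) offspring and integer Pareto-type immigration of index alpha.
import numpy as np

rng = np.random.default_rng(20190602)
alpha, m = 1.5, 0.5             # immigration tail index, offspring mean m_xi
T, burn = 200_000, 1_000

# integer Pareto-type immigration: P(eps >= n) = (n + 1)**(-alpha), n = 0, 1, ...
eps = np.floor(rng.uniform(size=T + burn) ** (-1.0 / alpha)).astype(np.int64) - 1

X = np.zeros(T + burn + 1, dtype=np.int64)
for t in range(T + burn):
    # binomial thinning (Bernoulli offspring) plus immigration
    X[t + 1] = rng.binomial(X[t], m) + eps[t]

path = X[burn:]                 # discard the burn-in, roughly stationary part
level = 100                     # "large" threshold playing the role of x
idx = np.where(path[:-2] > level)[0]
r1 = path[idx + 1] / path[idx]  # X_1 / X_0 given X_0 > level
r2 = path[idx + 2] / path[idx]  # X_2 / X_0 given X_0 > level

assert len(idx) > 100                     # enough exceedances to inspect
assert abs(np.median(r1) - m) < 0.1       # close to m_xi
assert abs(np.median(r2) - m ** 2) < 0.1  # close to m_xi^2
print(len(idx), "exceedances; median ratios:",
      round(float(np.median(r1)), 3), round(float(np.median(r2)), 3))
```

Medians are used instead of means because the immigration has infinite variance, so sample means of the ratios are noisy even for long paths.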
References

[1] Araujo, A. and Giné, E. The Central Limit Theorem for Real and Banach Valued Random Variables.
[2] Barczy, M., Bősze, Zs. and Pap, G. On tail behaviour of stationary second-order Galton-Watson processes with immigration.
[3] Barczy, M., Nedényi, F. K. and Pap, G. Iterated limits for aggregation of randomized INAR(1) processes with Poisson innovations.
[4] Barczy, M., Nedényi, F. K. and Pap, G. On aggregation of multitype Galton-Watson branching processes with immigration.
[5] Basrak, B., Kulik, R. and Palmowski, Z. Heavy-tailed branching process with immigration.
[6] Basrak, B. and Segers, J. Regularly varying multivariate time series.
[7] Bingham, N. H., Goldie, C. M. and Teugels, J. L. Regular Variation, vol. 27 of Encyclopedia of Mathematics and its Applications.
[8] Buraczewski, D., Damek, E. and Mikosch, T. Stochastic Models with Power-Law Tails. The Equation X = AX + B. Springer Series in Operations Research and Financial Engineering.
[9] Granger, C. W. J. Long memory relationships and the aggregation of dynamic models.
[10] Jacod, J. and Shiryaev, A. N. Limit Theorems for Stochastic Processes.
[11] Janssen, A. and Segers, J. Markov tail chains.
[12] Jirak, M. Limit theorems for aggregated linear processes.
[13] Kallenberg, O. Random Measures.
[14] Kallenberg, O. Foundations of Modern Probability. Probability and its Applications.
[15] Kallenberg, O. Random Measures, Theory and Applications.
[16] Lindskog, F. Multivariate Extremes and Regular Variation for Stochastic Processes.
[17] Mikosch, T. and Wintenberger, O. The cluster index of regularly varying sequences with applications to limit theory for functions of multivariate Markov chains.
[18] Pilipauskaitė, V., Skorniakov, V. and Surgailis, D. Joint temporal and contemporaneous aggregation of random-coefficient AR(1) processes with infinite variance.
[19] Pilipauskaitė, V. and Surgailis, D. Joint temporal and contemporaneous aggregation of random-coefficient AR(1) processes. Stochastic Processes and their Applications.
[20] Planinić, H. and Segers, J. The tail process revisited.
[21] Puplinskaitė, D. and Surgailis, D. Aggregation of random-coefficient AR(1) process with infinite variance and common innovations.
[22] Puplinskaitė, D. and Surgailis, D. Aggregation of a random-coefficient AR(1) process with infinite variance and idiosyncratic innovations.
[23] Quine, M. P. The multi-type Galton-Watson process with immigration.
[24] Resnick, S. I. Point processes, regular variation and weak convergence.
[25] Resnick, S. I. Heavy-Tail Phenomena.
[26] Robinson, P. M. Statistical inference for a random coefficient autoregressive model.
[27] Sato, K. Lévy Processes and Infinitely Divisible Distributions. Translated from the 1990 Japanese original. Cambridge University Press.