title: On Aggregation of Subcritical Galton–Watson Branching Processes with Regularly Varying Immigration
authors: Barczy, Mátyás; Nedényi, Fanni K.; Pap, Gyula
date: 2020-09-17
journal: Lith Math J
DOI: 10.1007/s10986-020-09492-8

Abstract. We study an iterated temporal and contemporaneous aggregation of N independent copies of a strongly stationary subcritical Galton–Watson branching process with regularly varying immigration having index α ∈ (0, 2). We show that limits of finite-dimensional distributions of appropriately centered and scaled aggregated partial-sum processes exist when first taking the limit as N → ∞ and then the time scale n → ∞. The limit process is an α-stable process if α ∈ (0, 1) ∪ (1, 2) and a deterministic line with slope 1 if α = 1.

Puplinskaitė and Surgailis [22] (idiosyncratic case) derived limits of finite-dimensional distributions of appropriately centered and scaled aggregated partial-sum processes, first taking the time scale n → ∞ and then the number of copies N → ∞, and also taking n → ∞ and N → ∞ simultaneously with possibly different rates. The listed references all concern aggregation procedures for time series, mainly for randomized autoregressive processes. To our knowledge, this question has not been studied before in the literature. In the present paper, we investigate aggregation schemes for some branching processes under a low moment condition. Branching processes, especially Galton–Watson branching processes with immigration, have attracted a lot of attention because they are widely used in mathematical biology for modeling the growth of a population in time. In Barczy et al.
[3], we started to investigate the limit behavior of temporal and contemporaneous aggregations of independent copies of a stationary multitype Galton–Watson branching process with immigration under third-order moment conditions on the offspring and immigration distributions, in the iterated and simultaneous cases as well. In both cases the limit process is a zero-mean Brownian motion with the same covariance function. As of 2020, modeling the spread of COVID-19 in the population of a given region or country is of great importance. Multitype Galton–Watson processes with immigration have frequently been used to model the spread of a number of diseases, and they can be applied to this new disease as well. For example, Yanev et al. [29] applied a two-type Galton–Watson process with immigration to model the numbers of detected and undetected COVID-19-infected people in a population. The temporal and contemporaneous aggregation of the first coordinate process of the two-type branching process in question would give the total number of detected infected people up to some given time point across several regions. In this paper, we study the limit behavior of temporal and contemporaneous aggregations of independent copies of a strongly stationary Galton–Watson branching process (X_k)_{k≥0} with regularly varying immigration having index in (0, 2) (yielding infinite variance) in an iterated idiosyncratic case, namely, when first the number of copies N → ∞ and then the time scale n → ∞. Our results are analogous to those of Puplinskaitė and Surgailis [22]. The present paper is organized as follows. In Section 2, we first collect our assumptions, valid for the whole paper: we consider a sequence of independent copies of (X_k)_{k≥0} such that the expectation of the offspring distribution is less than 1 (the so-called subcritical case).
In case of α ∈ [1, 2), we additionally suppose the finiteness of the second moment of the offspring distribution. Under our assumptions, by Basrak et al. [6, Thm. 2.1.1] (see also Theorem D1), the unique stationary distribution of (X_k)_{k≥0} is also regularly varying with the same index α. In Theorem 1, we show that the appropriately centered and scaled partial-sum process of finite segments of independent copies of (X_k)_{k≥0} converges to an α-stable process. The characteristic function of the α-stable limit process is given explicitly as well. The proof of Theorem 1 is based on a slight modification of Theorem 7.1 in Resnick [25], namely, on a result on weak convergence of partial-sum processes toward Lévy processes; see Theorem C1, where we consider a different centering. In the course of the proof of Theorem 1, we need to verify that the so-called limit measures of finite segments of (X_k)_{k≥0} are in fact Lévy measures. We determine these limit measures explicitly (see part (i) of Proposition D1), applying an expression for the so-called tail measure of a strongly stationary regularly varying sequence based on the corresponding (whole) spectral tail process given by Planinić and Soulier [20, Thm. 3.1]. Whereas the centering in Theorem 1 is the so-called truncated mean, in Corollary 1 we consider no centering if α ∈ (0, 1) and centering with the mean if α ∈ (1, 2). In both cases the limit process is an α-stable process, the same one as in Theorem 1 plus some deterministic drift depending on α. Theorem 1 and Corollary 1 together yield the weak convergence of finite-dimensional distributions of appropriately centered and scaled contemporaneous aggregations of independent copies of (X_k)_{k≥0} toward the corresponding finite-dimensional distributions of a strongly stationary subcritical autoregressive process of order 1 with α-stable innovations as the number of copies tends to infinity; see Corollary 2 and Proposition 1.
Theorem 2 contains our main result: we determine the weak limit of appropriately centered and scaled finite-dimensional distributions of temporal and contemporaneous aggregations of independent copies of (X_k)_{k≥0}, where the limit is taken so that first the number of copies tends to infinity and then the time corresponding to temporal aggregation tends to infinity. It turns out that the limit process is an α-stable process if α ∈ (0, 1) ∪ (1, 2) and a deterministic line with slope 1 if α = 1. We consider different kinds of centerings, and we give the explicit characteristic function of the limit process as well. In Remark 2, we rewrite this characteristic function in case of α ∈ (0, 1) in terms of the spectral tail process of (X_k)_{k≥0}. We close the paper with four appendices. Appendix A is devoted to some properties of the underlying punctured space R^d \ {0} and vague convergence. In Appendix B, we recall the notion of a regularly varying random vector and its limit measure, and in Proposition B2 the limit measure of an appropriate positively homogeneous real-valued function of a regularly varying random vector. In Appendix C, we formulate a result on weak convergence of partial-sum processes toward Lévy processes by slightly modifying Theorem 7.1 in Resnick [25] with a different centering. In Appendix D, we recall a result on the tail behavior and forward tail process of (X_k)_{k≥0} due to Basrak et al. [6], and we determine the limit measures of finite segments of (X_k)_{k≥0}. Finally, we summarize the novelties of the paper. To our knowledge, aggregation of regularly varying Galton–Watson branching processes with immigration has not been considered before. In the proofs, we make use of the explicit form of the (whole) spectral tail process and a very recent result of Planinić and Soulier [20, Thm. 3.1] about the tail measure of strongly stationary sequences.
We explicitly determine the limit measures of finite segments of (X_k)_{k≥0}; see part (i) of Proposition D1. For brevity, we omit some (simple) proofs and calculation steps. However, all these details are included in the arXiv version of this paper, Barczy et al. [4]. In a companion paper, we will study another iterated idiosyncratic aggregation scheme, namely, when first the time scale n → ∞ and then the number of copies N → ∞. Let Z+, N, Q, R, R+, R++, R−, R−−, and C denote the set of nonnegative integers, positive integers, rational numbers, real numbers, nonnegative real numbers, positive real numbers, nonpositive real numbers, negative real numbers, and complex numbers, respectively. For each d ∈ N, the natural basis in R^d is denoted by e_1, …, e_d. Put 1_d := (1, …, 1) and S^{d−1} := {x ∈ R^d : ‖x‖ = 1}, where ‖x‖ denotes the Euclidean norm of x ∈ R^d, and denote by B(S^{d−1}) the Borel σ-field of S^{d−1}. For a probability measure μ on R^d, μ̂ denotes its characteristic function, that is, μ̂(θ) := ∫_{R^d} e^{i⟨θ,x⟩} μ(dx) for θ ∈ R^d. Convergence in distribution, almost sure convergence of random variables, and weak convergence of probability measures are denoted by →D, →a.s., and →w, respectively. Equality in distribution is denoted by =D. Let D(R+, R^d) denote the space of R^d-valued càdlàg functions on R+, equipped with the Borel σ-algebra on D(R+, R^d) for the metric defined by Jacod and Shiryaev [11, Chap. VI, (1.26)]. With this metric, D(R+, R^d) is a complete separable metric space, and the topology induced by this metric is the so-called Skorokhod topology. Convergence of finite-dimensional distributions of R^d-valued stochastic processes is denoted analogously. Let (X_k)_{k∈Z+} be a Galton–Watson branching process with immigration. For k, j ∈ Z+, we denote the number of individuals in the kth generation by X_k, the number of offspring produced by the jth individual belonging to the (k − 1)th generation by ξ_{k,j}, and the number of immigrants in the kth generation by ε_k. Then we have

X_k = Σ_{j=1}^{X_{k−1}} ξ_{k,j} + ε_k,  k ∈ N,

where Σ_{j=1}^{0} := 0.
Here {X_0, ξ_{k,j}, ε_k : k, j ∈ N} are supposed to be independent nonnegative integer-valued random variables. Moreover, {ξ_{k,j} : k, j ∈ N} and {ε_k : k ∈ N} are each supposed to consist of identically distributed random variables. For notational convenience, let ξ and ε be independent random variables such that ξ =D ξ_{1,1} and ε =D ε_1. If m_ξ := E(ξ) ∈ [0, 1) and Σ_{ℓ=1}^∞ log(ℓ) P(ε = ℓ) < ∞, then the Markov chain (X_k)_{k∈Z+} admits a unique stationary distribution π; see, for example, Quine [23]. Note that if m_ξ ∈ [0, 1) and P(ε = 0) = 1, then Σ_{ℓ=1}^∞ log(ℓ) P(ε = ℓ) = 0, and π is the Dirac measure δ_0 concentrated at the point 0. In fact, π = δ_0 if and only if P(ε = 0) = 1. Moreover, if m_ξ = 0 (which is equivalent to P(ξ = 0) = 1), then π is the distribution of ε. In what follows, we formulate our assumptions, valid for the whole paper. We assume that m_ξ ∈ [0, 1) (the so-called subcritical case) and that ε is regularly varying with index α ∈ (0, 2), that is, P(ε > x) ∈ R++ for all x ∈ R++, and

lim_{x→∞} P(ε > qx) / P(ε > x) = q^{−α}  for all q ∈ R++.

Then P(ε = 0) < 1 and Σ_{ℓ=1}^∞ log(ℓ) P(ε = ℓ) < ∞ (see, e.g., Barczy et al. [2, Lemma E.5]), and hence the Markov chain (X_k)_{k∈Z+} admits a unique stationary distribution π. We suppose that X_0 =D π, yielding that the Markov chain (X_k)_{k∈Z+} is strongly stationary. In case of α ∈ [1, 2), we additionally suppose that E(ξ^2) < ∞. By Basrak et al. [6, Thm. 2.1.1] (see also Theorem D1), X_0 is regularly varying with index α, yielding the existence of a sequence (a_N)_{N∈N} in R++ with N P(X_0 > a_N) → 1 as N → ∞; see, for example, Lemma B2. Let us fix an arbitrary sequence (a_N)_{N∈N} in R++ with this property. In fact, a_N = N^{1/α} L(N), N ∈ N, for some slowly varying continuous function L : R++ → R++; see, e.g., Araujo and Giné [1, p. 90, Exercise 6]. Let X^{(j)} = (X^{(j)}_k)_{k∈Z+}, j ∈ N, be a sequence of independent copies of (X_k)_{k∈Z+}. We mention that we consider so-called idiosyncratic immigrations, that is, the immigrations (ε^{(j)}_k)_{k∈N}, j ∈ N, belonging to the copies are independent.
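The standing assumptions are easy to exercise in simulation. The following sketch (assuming numpy is available; the Bernoulli(m_ξ) offspring law and the integer part of a standard Pareto(α) variable as immigration are illustrative choices, not taken from the paper) generates a path of a subcritical Galton–Watson process whose immigration is regularly varying with index α ∈ (0, 2):

```python
import numpy as np

rng = np.random.default_rng(20200917)

def simulate_gw_immigration(n, m_xi=0.5, alpha=1.5, x0=0, rng=rng):
    """Simulate X_0, ..., X_n of X_k = sum_{j=1}^{X_{k-1}} xi_{k,j} + eps_k.

    Offspring xi are Bernoulli(m_xi), so m_xi = E(xi) < 1 (subcritical),
    and eps_k is the integer part of a standard Pareto(alpha) variable,
    which is regularly varying with index alpha.
    """
    x = np.empty(n + 1, dtype=np.int64)
    x[0] = x0
    for k in range(1, n + 1):
        offspring = rng.binomial(x[k - 1], m_xi)    # sum of X_{k-1} Bernoulli(m_xi) trials
        immigration = int(rng.pareto(alpha) + 1.0)  # floor of a Pareto(alpha) sample, >= 1
        x[k] = offspring + immigration
    return x

path = simulate_gw_immigration(1000)
```

With these illustrative choices, m_ξ = 0.5 and α = 1.5 ∈ [1, 2), so the extra assumption E(ξ²) < ∞ holds automatically, and the scaling sequence can be taken as a_N ≈ N^{1/α} up to a slowly varying factor.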
One could study the case of common immigrations as well, that is, when (ε ) t∈R+ is a (k + 1)-dimensional α-stable process such that the characteristic function of the distribution μ k,α of X (k,α) 1 has the form Moreover, for θ ∈ R k+1 , with 0 log(0) := 0, The next remark is devoted to some properties of μ k,α . Remark 1. By the proof of Theorem 1 (see (3.4) ) it turns out that the Lévy measure of μ k,α is where the space R k+1 0 := R k+1 \ {0} and its topological properties are discussed in Appendix A. The radial part of ν k,α is u −α−1 du, and the spherical part of ν k,α is any positive constant multiple of the measure k form a basis in R k+1 , and hence there is no proper linear subspace V of R k+1 covering the support of ν k,α . Consequently, μ k,α is a nondegenerate measure in the sense that there are no a ∈ R k+1 and a proper linear subspace V of R k+1 such that a + V covers the support of μ k,α ; see, for example, Sato [27, Prop. 24.17(ii) ]. The centering in Theorem 1 can be simplified in case of α = 1. Namely, if α ∈ (0, 1], then for each t ∈ R ++ , by Lemma B3, In a similar way, if α ∈ (1, 2), then for each t ∈ R ++ , This shows that in case of α ∈ (0, 1), there is no need for centering; in case of α ∈ (1, 2), we can center with the expectation as well, whereas in case of α = 1, neither noncentering nor centering with the expectation works even if the expectation does exist. More precisely, without centering in case of α ∈ (0, 1) or with centering with the expectation in case of α ∈ (1, 2), we have the following convergences. k ) k∈Z+ be a strongly stationary process such that The existence of (Y (α) k ) k∈Z+ follows from the Kolmogorov extension theorem. Its strong stationarity is a consequence of Theorem 1 together with the strong stationarity of (X k ) k∈Z+ . We note that the common distribution of Y (α) k , k ∈ Z + , depends only on α; it does not depend on m ξ , since its characteristic function has the form Proposition 1. 
For each α ∈ (0, 2), the strongly stationary process (Y^{(α)}_k)_{k∈Z+} is a subcritical autoregressive process of order 1 with autoregressive coefficient m_ξ and α-stable innovations; in particular, (Y^{(α)}_k)_{k∈Z+} is a strongly stationary time-homogeneous Markov process. Theorem 1 and Corollary 1 have the following consequences for a contemporaneous aggregation of independent copies with different centerings, where (Y^{(α)}_k)_{k∈Z+} is given by (2.2). We will present limit theorems for the aggregated stochastic process with different centerings and scalings, in an iterated manner such that first N and then n converge to infinity, for ϑ ∈ R, where (Θ_ℓ)_{ℓ∈Z+} is the (forward) spectral tail process of (X_ℓ)_{ℓ∈Z+} given in (3.7) and (3.8). We also remark that (2.7) does not hold in case of α ∈ (1, 2), which is somewhat unexpected in view of Mikosch and Wintenberger [17, p. 171]. Proof of Theorem 1. Let k ∈ Z+. We are going to apply Theorem C1 with d = k + 1. The aim of the following discussion is checking condition (C.1) of Theorem C1, namely (3.1). By the assumption we have N P(X_0 > a_N) → 1 as N → ∞, yielding also a_N → ∞ as N → ∞, and, consequently, it suffices to show (3.2), where ν_{k,α} is a Lévy measure on R^{k+1}_0. In fact, by Theorem D2, (X_0, …, X_k) is regularly varying with index α, and hence by Proposition B1 we know (3.3), where ν̃_{k,α} is the so-called limit measure of (X_0, …, X_k). Applying Proposition B2 to the canonical projection R^{k+1} ∋ x ↦ x_0, which is continuous and positively homogeneous of degree 1, we obtain ν̃_{k,α}(T_1) ≤ 1, where T_1 := {x ∈ R^{k+1}_0 : x_0 > 1}. Moreover, by the strong stationarity of (X_k)_{k∈Z+}, and since X_0 is regularly varying with index α, we have ν̃_{k,α}(T_1) ∈ (0, 1], as desired. Consequently, (3.2) holds with ν_{k,α} = ν̃_{k,α} / ν̃_{k,α}(T_1). In general, we do not know whether ν_{k,α} is a Lévy measure on R^{k+1}_0 or not. So, additional work is needed.
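Proposition 1 describes the N → ∞ limit as an AR(1) recursion with coefficient m_ξ and i.i.d. α-stable innovations. A minimal sketch of such a recursion (assuming numpy; the symmetric α-stable sampler below, based on the Chambers–Mallows–Stuck method, is a stand-in for the specific stable innovation law appearing in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def sym_stable(alpha, size, rng=rng):
    """Symmetric alpha-stable samples via Chambers-Mallows-Stuck (alpha != 1)."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform angle
    w = rng.exponential(1.0, size)                 # unit-mean exponential
    return (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
            * (np.cos((1 - alpha) * u) / w) ** ((1 - alpha) / alpha))

def ar1_stable(n, m_xi=0.5, alpha=1.5, rng=rng):
    """AR(1) path y_k = m_xi * y_{k-1} + v_k with i.i.d. stable innovations."""
    v = sym_stable(alpha, n, rng)
    y = np.empty(n)
    y[0] = v[0]
    for k in range(1, n):
        y[k] = m_xi * y[k - 1] + v[k]
    return y

y = ar1_stable(10_000)
```

Since m_ξ < 1, the recursion is subcritical, and sums of independent α-stable variables are again α-stable, so the marginal law is α-stable; the innovations in Proposition 1 are not symmetric, so this sketch only illustrates the dependence structure.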
We will determine ν k,α explicitly using a result of Planinić and Soulier [20] . The aim of the following discussion is applying Theorem 3.1 in Planinić and Soulier [20] to determine ν k,α , namely, we will prove that for each Borel-measurable function f : R k+1 Let (X ) ∈Z be a strongly stationary extension of (X ) ∈Z+ . For all i, j ∈ Z with i j, by Theorem D2, (X i , . . . , X j ) is regularly varying with index α, and hence by the strong stationarity of (X k ) k∈Z and the previous discussion we know that where ν i,j,α := ν j−i,α is a nonzero locally finite measure on R j−i+1 0 . According to Basrak and Segers [7, Thm. 2.1], there exists a sequence (Y ) ∈Z of random variables, called the (whole) tail process of (X ) ∈Z , such that Let K be a random variable with geometric distribution Especially, if m ξ = 0, then P(K = 0) = 1. If m ξ ∈ (0, 1), then we have where Y 0 is a random variable independent of K with Pareto distribution , where p i,j denotes the canonical projection p i,j : R Z → R j−i+1 given by p i,j (y) := (y i , . . . , y j ) for y = (y ) ∈Z ∈ R Z ; see, for example, Planinić and Soulier [20] . The measure ν α is called the tail measure of (X ) ∈Z . If m ξ ∈ (0, 1), then by (3.5) the (whole) spectral tail process Θ = (Θ ) ∈Z of (X ) ∈Z is given by If m ξ = 0, then by (3.6) We have P(I(Θ) = −K) = 1, and hence the condition P(I(Θ) ∈ Z) = 1 of Theorem 3.1 in Planinić and Soulier [20] is satisfied. Consequently, we may apply Theorem 3.1 in Planinić and Soulier [20] for the nonnegative measurable function H : R Z → R + given by H(y) = f • p 0,k , where f : R k+1 → R + is a measurable function with f (0) = 0. By (3.2) in Planinić and Soulier [20] we obtain where L denotes the backshift operator L : R Z → R Z given by L(y) = (L(y) k ) k∈Z := (y k−1 ) k∈Z for y = (y k ) k∈Z ∈ R Z . 
Using P(I(Θ) = −K) = 1, we obtain For all k ∈ Z + and u ∈ R + , on the event {K = 0}, by (3.7) and (3.8) we have and hence, using P( The measure ν k,α is a Lévy measure on R k+1 0 , since (3.4) implies Consequently, we obtain (3.2) and hence (3.1), so condition (C.1) is satisfied. The aim of the following discussion is checking condition (C.2) of Theorem C1, namely, for all j ∈ N and ∈ {0, . . . , k}. By Lemma B3 with β = 2 we have and hence for all ε ∈ R ++ , using again that X 0 is regularly varying with index α, we have as N → ∞, and taking the limit as ε ↓ 0, we conclude (3.9). Consequently, we may apply Theorem C1, and we obtain the desired convergence, where (X (k,α) t ) t∈R+ is an α-stable process such that the characteristic function of the distribution μ k,α of X (k,α) 1 has the form given in Theorem 1. Indeed, (3.4) is valid for each Borel-measurable function f : R k+1 0 → C as well, for which the real and imaginary parts of the right-hand side of (3.4) are well defined. Hence for all θ ∈ R k+1 , by (C.3), since the real and imaginary parts of the exponent in the last expression are well defined. The rest is a standard calculation as we can see in our arXiv preprint Barczy et al. [4] . Proof of Corollary 1. It follows by the continuous mapping theorem; the reader can find the details in our arXiv preprint Barczy et al. [4] . Proof of Proposition 1. It is a consequence of Theorem 1 and (2.2); for a detailed proof, see our arXiv preprint Barczy et al. [4] . Proof of Corollary 2. It follows from Theorem 1 and Corollary 1 using the continuous mapping theorem. Proof of Theorem 2. In case of α ∈ (0, 1), by (2.1) with t = 1 we have and hence, by Slutsky's lemma, (2.3) is a consequence of (2.4). For each n ∈ N, by Corollary 2 and the continuous mapping theorem we obtain in case of α ∈ (0, 1) and ∈ (1, 2) . 
Consequently, to prove (2.4) and (2.6), we need to show that for each α ∈ (0, 1) ∪ (1, 2), we have with t 0 := 0 and For each α ∈ (0, 1)∪(1, 2), by the explicit form of the characteristic function of X ( ntd ,α) 1 given in Theorem 1 we have We further have and hence for each α ∈ (0, 1) ∪ (1, 2), The aim of the following discussion is showing that for all α ∈ (0, 1) ∪ (1, 2) and ∈ {1, . . . , d}, Here for all ∈ {1, . . . , d} and j ∈ { nt −1 + 1, . . . , nt }, In case of α ∈ (0, 1], we have In case of α ∈ (1, 2), by the mean value theorem and (3.13) we have Hence for all α ∈ (0, 2) and x, y ∈ R, we obtain so, by (3.12) and the squeeze theorem, to prove (3.11) , it suffices to check that yielding (3.14) . In case of α ∈ (1, 2), for all x 1 , . . . , x k ∈ R, we have |x 1 + · · · + x k | α k α−1 (|x 1 | α + · · · + |x k | α ), and hence by (3.13) for each α ∈ (0, 2), we obtain Consequently, we have yielding (3.15) . For each n ∈ N and for each j ∈ { nt −1 + 1, . . . , nt }, we have yielding (3.16). Thus we obtain (3.11). Next, we show that for each ∈ {1, . . . , d}, If ϑ = 0, then this readily follows from (3.12) and (3.15 ). If ϑ = 0, then we show that there exists C ∈ R ++ such that for each n ∈ N and each j ∈ { nt −1 + 1, . . . , nt } with j < nt + 1 − C . First, observe that, by (3.12), the inequality Hence, for ϑ = 0, n ∈ N, and j ∈ { nt −1 + 1, . . . , nt } with j < nt + 1 − C , we have (3.17) . Moreover, for all n ∈ N and j ∈ { nt −1 + 1, . . . , nt }, by (3.12) we have Consequently, by (3.11) we have as desired. We conclude that for all α ∈ (0, 1) ∪ (1, 2), By the continuity theorem we obtain that for all α ∈ (0, 1) ∪ (1, 2), Now we turn to prove (2.5). For each n ∈ N, by Corollary 2 and the continuous mapping theorem, in case of α = 1, we obtain Consequently, to prove (2.5), we need to show that Since the limit in (3.19 ) is deterministic, by van der Vaart [28, Thm. 
2.7, part (vi)] it suffices to show that for each t ∈ R + , we have (3.20) For all n ∈ N, t ∈ R + , and ϑ ∈ R, we have By the explicit form of the characteristic function of X ( nt ,1) 1 given in Theorem 1, as n → ∞ for each ϑ ∈ R. Indeed, and By the continuity theorem we obtain (3.20) , and hence we have finished the proof of (2.5). d(x, y) := x − y , x, y ∈ R d 0 , respectively. For the proof of Lemma A1, see our arXiv preprint Barczy et al. [4] . Since R d 0 is locally compact, second countable, and Hausdorff, we can choose a metric such that the relatively compact sets are precisely the bounded ones; see Kallenberg [15, p. 18] . The metric does not have this property, but we do not need it. Write (R d 0 ) for the class of bounded Borel sets with respect to the metric given in (A.1). A measure ν on is constructed as in Kallenberg [15, Chap. 4] . The associated notion of vague convergence of is called a ν-continuity set if ν(∂B) = 0, and the class of bounded ν-continuity sets is denoted by (R d 0 ) ν . The following statement is an analogue of the portmanteau theorem for vague convergence; see, for example, Kallenberg [14, 15.7.2] . Lemma A2. Let ν, ν n ∈ M(R d 0 ), n ∈ N. Then the following statements are equivalent: First, we recall the notions of slowly varying and regularly varying functions, respectively. In case of ρ = 0, we call U slowly varying at infinity. for all x ∈ R ++ , the function R ++ x → P(|Y | > x) ∈ R ++ is regularly varying at infinity with index −α, and the following tail-balance condition holds: Remark B1. In the tail-balance condition (B.1) the second convergence can be replaced by see our arXiv preprint Barczy et al. [4] . Lemma B1. (i) A nonnegative random variable Y is regularly varying with index α ∈ R ++ if and only if P(Y > x) ∈ R ++ for all x ∈ R ++ and the function R ++ x → P(Y > x) ∈ R ++ is regularly varying at infinity with index −α. 
(ii) If Y is a regularly varying random variable with index α ∈ R ++ , then for each β ∈ R ++ , |Y | β is regularly varying with index α/β. If Y is a regularly varying random variable with index α ∈ R ++ , then there exists a sequence (a n ) n∈N in R ++ such that nP(|Y | > a n ) → 1 as n → ∞. If (a n ) n∈N is such a sequence, then a n → ∞ as n → ∞. Lemma B3 [Karamata's theorem for truncated moments]. Consider a nonnegative regularly varying random variable Y with index α ∈ R ++ . Then where w → denotes the weak convergence of finite measures on S d−1 . The probability measure ψ is called the spectral measure of Y . where v → denotes the vague convergence of locally finite measures on R d 0 (see Appendix A for the notion v →). Further, μ satisfies the property μ(cB) = c −α μ(B) for any c ∈ R ++ and B ∈ B(R d 0 ) (see, e.g., Theorems 1.14 and 1.15 and Remark 1.16 in Lindskog [16] ). The measure μ in Proposition B1 is called the limit measure of Y . For the proof of Proposition B1, see our arXiv preprint Barczy et al. [4] . The next statement follows, for example, from part (i) in Lemma C.3.1 in Buraczewski et al. [9] . Lemma B4. If Y is a regularly varying d-dimensional random vector with index α ∈ R ++ , then for each c ∈ R d , the random vector Y − c is regularly varying with index α. Recall that if Y is a regularly varying d-dimensional random vector with index α ∈ R ++ and limit measure μ given in (B.2), and f : R d → R is a continuous function with f −1 ({0}) = {0} that is positively homogeneous of degree β ∈ R ++ (i.e., f (cv) = c β f (v) for every c ∈ R ++ and v ∈ R d ), then f (Y ) is regularly varying with index α/β and limit measure μ(f −1 (·)); see, for example, Buraczewski et al. [9, p. 282 ]. Next we describe the tail behavior of f (Y ) for appropriate positively homogeneous functions f : R d → R. Proposition B2. 
Let Y be a regularly varying d-dimensional random vector with index α ∈ R++, and let f : R^d → R be a measurable function that is positively homogeneous of degree β ∈ R++, continuous at 0, and such that μ(D_f) = 0, where μ is the limit measure of Y given in (B.2), and D_f denotes the set of discontinuities of f. Then f(Y) is regularly varying with tail index α/β. We formulate a slight modification of Theorem 7.1 in Resnick [25] with a different centering. Proof. See our arXiv preprint Barczy et al. [4]. Appendix D: Tail behavior of (X_k)_{k∈Z+}. Theorem D1. We have lim_{x→∞} P(X_0 > x) / P(ε > x) = 1/(1 − m_ξ^α), where π denotes the unique stationary distribution of the Markov chain (X_k)_{k∈Z+} and X_0 =D π, and, consequently, π is also regularly varying with index α. Note that in case of α = 1 and m_ε = ∞, Basrak et al. [6, Thm. 2.1.1] additionally assume that ε is consistently varying (or, in other words, intermediate varying), but, eventually, this follows from the fact that ε is regularly varying. Let (X_k)_{k∈Z} be a strongly stationary extension of (X_k)_{k∈Z+}. Basrak et al. [6, Lemma 3.1] described the so-called forward tail process of the strongly stationary process (X_k)_{k∈Z}, and hence, due to Basrak and Segers [7, Thm. 2.1], the strongly stationary process (X_k)_{k∈Z} is jointly regularly varying. Theorem D2. The finite-dimensional conditional distributions of (x^{−1} X_k)_{k∈Z+} with respect to the condition X_0 > x converge weakly to the corresponding finite-dimensional distributions of (m_ξ^k Y)_{k∈Z+} as x → ∞, where Y is a random variable with Pareto distribution P(Y ≤ y) = (1 − y^{−α}) 1_{[1,∞)}(y), y ∈ R. Consequently, the strongly stationary process (X_k)_{k∈Z} is jointly regularly varying with index α, that is, all its finite-dimensional distributions are regularly varying with index α. The process (m_ξ^k Y)_{k∈Z+} is the so-called forward tail process of (X_k)_{k∈Z}. Moreover, there exists a (whole) tail process of (X_k)_{k∈Z} as well.
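Theorem D2 can be probed by Monte Carlo: on the event {X_0 > x} with x large, the one-step ratio X_1/X_0 should concentrate near m_ξ, because the forward tail process is (m_ξ^k Y)_{k∈Z+} and the immigration term becomes negligible. A sketch (assuming numpy; Bernoulli(0.5) offspring, floor-of-Pareto(1.5) immigration, and the threshold 50 are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(7)

# Subcritical Galton-Watson chain with heavy-tailed immigration, started
# from 0 (the short burn-in barely affects the tail statistics).
m_xi, alpha, n = 0.5, 1.5, 200_000
x = np.empty(n + 1, dtype=np.int64)
x[0] = 0
for k in range(1, n + 1):
    x[k] = rng.binomial(x[k - 1], m_xi) + int(rng.pareto(alpha) + 1.0)

# Conditionally on X_k > 50, the ratio X_{k+1}/X_k should be close to
# m_xi on average, in line with the forward tail process of Theorem D2.
big = x[:-1] > 50
ratios = x[1:][big] / x[:-1][big]
print(big.sum(), ratios.mean())
```

The sample mean of the ratios exceeds m_ξ slightly, because the immigration contributes a positive term of order E(ε)/X_0 on the conditioning event; raising the threshold shrinks this bias.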
By the proof of Theorem 1 and Proposition B2 we obtain the following results. Proposition D1. For each k ∈ Z+, (i) the limit measure ν̃_{k,α} of (X_0, …, X_k) given in (3.3) takes the form ν̃_{k,α} = ν̃_{k,α}({x ∈ R^{k+1}_0 : x_0 > 1}) ν_{k,α}, where ν_{k,α} is given by (3.4), and (ii) the tail behavior of X_0 + · · · + X_k is given by lim_{x→∞} P(X_0 + · · · + X_k > x)/P(X_0 > x) = ν_{k,α}({x ∈ R^{k+1}_0 : x_0 + · · · + x_k > 1}). Proof. (i) In the proof of Theorem 1, we derived ν_{k,α} = ν̃_{k,α} / ν̃_{k,α}({x ∈ R^{k+1}_0 : x_0 > 1}). Consequently, using Proposition B2 with the 1-homogeneous function R^{k+1} ∋ x ↦ ‖x‖, we have ν_{k,α}({x ∈ R^{k+1}_0 : ‖x‖ > 1}) = lim_{x→∞} P(‖(X_0, …, X_k)‖ > x) / P(‖(X_0, …, X_k)‖ > x) = 1, and, by (3.4), the statement follows. (ii) Applying Proposition B2 to the 1-homogeneous functions R^{k+1} ∋ x ↦ x_0 and R^{k+1} ∋ x ↦ x_0 + · · · + x_k and formula (3.4), we obtain lim_{x→∞} P(X_0 + · · · + X_k > x) / P(X_0 > x) = lim_{x→∞} [P(‖(X_0, …, X_k)‖ > x) / P(X_0 > x)] · [P(X_0 + · · · + X_k > x) / P(‖(X_0, …, X_k)‖ > x)] = ν̃_{k,α}({x ∈ R^{k+1}_0 : x_0 + · · · + x_k > 1}) / ν̃_{k,α}({x ∈ R^{k+1}_0 : x_0 > 1}) = ν_{k,α}({x ∈ R^{k+1}_0 : x_0 + · · · + x_k > 1}), which yields the statement as in part (i).

References
- The Central Limit Theorem for Real and Banach Valued Random Variables
- On tail behaviour of stationary second-order Galton-Watson processes with immigration
- On aggregation of multitype Galton-Watson branching processes with immigration
- On aggregation of subcritical Galton-Watson branching processes with regularly varying immigration
- Iterated limits for aggregation of randomized INAR(1) processes with Poisson innovations
- Heavy-tailed branching process with immigration
- Regularly varying multivariate time series
- Regular Variation
- Stochastic Models with Power-Law Tails: The equation X = AX + B, Springer Ser
- Long memory relationships and the aggregation of dynamic models
- Limit Theorems for Stochastic Processes
- Markov tail chains
- Limit theorems for aggregated linear processes
- Random Measures
- Multivariate Extremes and Regular Variation for Stochastic Processes
- The cluster index of regularly varying sequences with applications to limit theory for functions of multivariate Markov chains
- Joint temporal and contemporaneous aggregation of random-coefficient AR(1) processes with infinite variance
- Joint temporal and contemporaneous aggregation of random-coefficient AR(1) processes, Stochastic Processes Appl
- The tail process revisited
- Aggregation of random-coefficient AR(1) process with infinite variance and common innovations
- Aggregation of a random-coefficient AR(1) process with infinite variance and idiosyncratic innovations
- The multi-type Galton-Watson process with immigration
- Point processes, regular variation and weak convergence
- Heavy-Tail Phenomena
- Statistical inference for a random coefficient autoregressive model, Scand
- Lévy Processes and Infinitely Divisible Distributions
- Stochastic modeling and estimation of COVID-19 population dynamics