title: Strong convergence rate of truncated Euler-Maruyama method for stochastic differential delay equations with Poisson jumps
authors: Gao, Shuaibin; Hu, Junhao; Tan, Li; Yuan, Chenggui
date: 2021-04-13
journal: Front Math China
DOI: 10.1007/s11464-021-0914-9

We study a class of super-linear stochastic differential delay equations with Poisson jumps (SDDEwPJs). The convergence, and the rate of convergence, of the truncated Euler-Maruyama numerical solutions to SDDEwPJs are investigated under a generalized Khasminskii-type condition.

Since the establishment of stochastic differential equations (SDEs) driven by Brownian motion, many scholars have contributed to the study of their properties; see, for example, [1, 3, 16, 20] and the references therein. Stochastic systems are widely applied in fields such as biology, chemistry, finance, and economics. When studying realistic models, one finds that the present state of a system is often related not only to the current time but also to the past. Stochastic differential delay equations (SDDEs) are used to describe such systems [4, 5, 23]. Moreover, if an emergency occurs, its impact on the system must be taken into account. For example, the sudden outbreak of the new coronavirus has had a huge impact on the global economy, leading to a shock in the stock market. Hence, SDEs with jumps, which take both continuous and discontinuous random effects into consideration, have been studied to analyze such situations [10, 19, 28]. In this paper, we take both delay and jumps into consideration; that is, we study SDDEs with jumps [15, 29, 30]. Generally speaking, the true solutions of many equations cannot be computed explicitly, so it is meaningful to investigate numerical solutions. For instance, explicit Euler-Maruyama (EM) schemes are very popular for approximating true solutions [16].
However, when the coefficients grow super-linearly, Hutzenthaler et al. [13] proved that the pth moments of the EM approximations diverge to infinity for all p ∈ [1, ∞). Thus, many implicit methods have been put forward to approximate the solutions of equations with nonlinearly growing coefficients [2, 11, 24, 27]. In addition, since explicit schemes require less computation, some modified EM methods have also been established for nonlinear stochastic equations [14, 18, 25, 26]. In particular, the truncated EM method was originally proposed by Mao [21] for equations whose drift and diffusion coefficients grow super-linearly. The rate of convergence of the truncated EM method was obtained in [22]. Afterwards, many papers studied the truncated EM method for stochastic equations whose coefficients grow super-linearly; we refer to [7-9, 12, 17] and the references therein. Additionally, there are many results on numerical solutions for SDEs with jumps and SDDEs with jumps. For example, the convergence in probability of the EM method for SDDEs with jumps was discussed in [15]. The strong convergence of the EM method for SDDEs with jumps, as well as of the modified split-step backward Euler approximation, to the true solution was presented in [30]. The semi-implicit Euler method for SDDEs with jumps is convergent with strong order 1/2 [29]. However, there are few papers studying numerical solutions of super-linear SDDEs with Poisson jumps (SDDEwPJs) in which all three coefficients may grow super-linearly. Therefore, in this paper, we investigate the strong convergence rate of the truncated EM method for super-linear SDDEwPJs in the L^p (p > 0) sense. This paper is organized as follows. We introduce some necessary notation in Section 2. The rate of convergence in L^p for p ≥ 2 is discussed in Section 3. In Section 4, the rate of convergence in L^p for 0 < p < 2 is presented.
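The divergence mechanism described above can be seen in a tiny deterministic experiment (our own sketch, not taken from the paper): with the noise switched off, one explicit EM step for the super-linear drift f(x) = -x^3 amplifies any state with |x| > sqrt(2/∆), while a truncated scheme evaluates the drift at a clipped state and stays bounded. The step size, initial value, and truncation radius below are illustrative choices.

```python
# Explicit EM step for dx = -x^3 dt (noise switched off to isolate the
# instability): x_{k+1} = x_k - dt * x_k^3.  Once |x_k| > sqrt(2/dt) the
# iterates grow in magnitude, which is the mechanism behind the moment
# divergence proved by Hutzenthaler et al. for super-linear coefficients.
dt = 0.5
x_plain = 3.0   # explicit EM state
x_trunc = 3.0   # truncated EM state
R = 1.0         # illustrative truncation radius (plays the role of phi^{-1}(alpha(dt)))

for _ in range(6):
    x_plain = x_plain - dt * x_plain ** 3
    # Truncated EM evaluates the drift at the truncated state pi(x).
    pi = max(-R, min(R, x_trunc))
    x_trunc = x_trunc - dt * pi ** 3

print(abs(x_plain) > 1e10, abs(x_trunc) <= 3.0)  # prints: True True
```

After only six steps the plain iterate has magnitude above 1e200, while the truncated iterate never leaves [0, 3].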
Section 5 contains an example to illustrate that our main result covers a large class of super-linear SDDEwPJs.

Throughout this paper, unless otherwise specified, we use the following notation. If A is a vector or matrix, its transpose is denoted by A^T. For x ∈ R^n, |x| denotes its Euclidean norm. If A is a matrix, we let |A| = sqrt(tr(A^T A)) be its trace norm. By A ≤ 0 and A < 0, we mean A is non-positive definite and negative definite, respectively. If a, b are real numbers, then a ∨ b = max{a, b} and a ∧ b = min{a, b}. Let ⌊a⌋ denote the largest integer which does not exceed a. Let τ > 0 and R_+ = [0, +∞). Denote by C([−τ, 0]; R^n) the family of continuous functions ϕ from [−τ, 0] to R^n with the norm ‖ϕ‖ = sup_{−τ ≤ θ ≤ 0} |ϕ(θ)|. If H is a set, denote by I_H its indicator function; that is, I_H(x) = 1 if x ∈ H and I_H(x) = 0 otherwise. Let C stand for a generic positive real constant whose value may change between appearances. Let (Ω, F, {F_t}_{t ≥ 0}, P) be a complete probability space with a filtration {F_t}_{t ≥ 0} satisfying the usual conditions (i.e., it is increasing and right continuous, and F_0 contains all P-null sets). Let E denote the expectation with respect to P. For p > 0, denote by L^p_{F_0}([−τ, 0]; R^n) the family of F_0-measurable, C([−τ, 0]; R^n)-valued random variables ξ with E‖ξ‖^p < ∞. Let B(t) = (B_1(t), . . . , B_m(t))^T be an m-dimensional Brownian motion defined on the probability space. Let N(t) be a scalar Poisson process with compensated Poisson process Ñ(t) = N(t) − λt, where the parameter λ > 0 is the jump intensity. Furthermore, we assume that B(t) and N(t) are independent.

In this paper, we study the truncated EM method for super-linear SDDEwPJs of the form

dx(t) = f(x(t), x(t − τ)) dt + g(x(t), x(t − τ)) dB(t) + h(x(t), x(t − τ)) dN(t), t ≥ 0, (2.1)

with the initial value x(θ) = ξ(θ) for θ ∈ [−τ, 0]. In order to estimate the convergence rate of the truncated EM method, we assume that the initial value ξ is γ-Hölder continuous, which is a standard constraint.

Assumption 2.1 There exist constants K > 0 and γ ∈ (0, 1] such that, for any u, v ∈ [−τ, 0],

|ξ(u) − ξ(v)| ≤ K|u − v|^γ.

3 Rate of convergence in L^p (p ≥ 2)

Now, in order to obtain the rate of convergence of the truncated EM method for (2.1) in the L^p (p ≥ 2) sense, we need to impose the following assumptions on the coefficients.
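For a concrete feel of the driving processes just introduced, the discrete increments ∆B_k ~ N(0, ∆ I_m) and ∆N_k ~ Poisson(λ∆) (with compensated version ∆Ñ_k = ∆N_k − λ∆) used by any discretization of (2.1) can be sampled as in the following sketch; the function name and parameter values are our own illustration.

```python
import numpy as np

def driving_increments(m, n_steps, dt, lam, rng):
    """Sample the Brownian and Poisson increments driving one path.

    dB[k] ~ N(0, dt * I_m) and dN[k] ~ Poisson(lam * dt); the two are
    sampled independently, matching the assumed independence of B(t)
    and N(t).
    """
    dB = rng.normal(0.0, np.sqrt(dt), size=(n_steps, m))
    dN = rng.poisson(lam * dt, size=n_steps)
    # Compensated increments dN_tilde = dN - lam * dt have mean zero.
    dN_tilde = dN - lam * dt
    return dB, dN, dN_tilde

rng = np.random.default_rng(0)
dB, dN, dNt = driving_increments(m=2, n_steps=1000, dt=0.01, lam=3.0, rng=rng)
```

Note that the Poisson increments are non-negative integers, while the compensated increments are centered real numbers.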
Assumption 3.1 There exist constants K₁ > 0 and β ≥ 0 such that

|f(x, y) − f(x̄, ȳ)| ∨ |g(x, y) − g(x̄, ȳ)| ≤ K₁(1 + |x|^β + |y|^β + |x̄|^β + |ȳ|^β)(|x − x̄| + |y − ȳ|)

and

|h(x, y) − h(x̄, ȳ)| ≤ K₁(|x − x̄| + |y − ȳ|)

for any x, y, x̄, ȳ ∈ R^n. By Assumption 3.1, we obtain

|f(x, y)| ∨ |g(x, y)| ≤ C(1 + |x|^{1+β} + |y|^{1+β}) (3.1)

and

|h(x, y)| ≤ C(1 + |x| + |y|) (3.2)

for any x, y ∈ R^n. Before stating the next assumption, we need more notation. Let U denote the family of continuous functions U : R^n × R^n → R_+ such that for any b > 0, there exists a constant κ_b > 0 satisfying

U(x, x̄) ≤ κ_b |x − x̄|²

for any x, x̄ ∈ R^n with |x| ∨ |x̄| ≤ b.

Assumption 3.2 There exist constants K₂ > 0, η > 2, and U ∈ U such that

(x − x̄)^T (f(x, y) − f(x̄, ȳ)) + ((η − 1)/2)|g(x, y) − g(x̄, ȳ)|² ≤ K₂(|x − x̄|² + |y − ȳ|²) + U(y, ȳ) − U(x, x̄)

for any x, y, x̄, ȳ ∈ R^n.

Remark 1 We use an example to illustrate the necessity of introducing U(·, ·). For the coefficients of the example in Section 5, one can observe that there is no K₂ satisfying the above inequality with U ≡ 0, but Assumption 3.2 is satisfied. The detailed proof will be provided in Section 5.

Assumption 3.3 There exist constants K₃ > 0 and p > η > 2 such that

x^T f(x, y) + ((p − 1)/2)|g(x, y)|² ≤ K₃(1 + |x|² + |y|²)

for any x, y ∈ R^n.

By the standard method, we can derive that the moments of the true solution are bounded, as follows.

Lemma 3.4 Let Assumptions 3.1 and 3.3 hold. Then SDDEwPJs (2.1) has a unique global solution x(t). In addition, for any q ∈ [2, p] and T > 0,

sup_{0 ≤ t ≤ T} E|x(t)|^q ≤ C.

To the best of our knowledge, there are few results on the strong convergence for super-linear SDDEwPJs. On the other hand, the truncated EM method developed in [21] is a useful tool for dealing with super-linear terms. To define the truncated EM scheme, we first choose a strictly increasing continuous function ϕ : R_+ → R_+ such that ϕ(r) → ∞ as r → ∞ and

sup_{|x|∨|y| ≤ r} (|f(x, y)| ∨ |g(x, y)|) ≤ ϕ(r), ∀ r ≥ 1.

Let ϕ^{−1} denote the inverse function of ϕ; thus, ϕ^{−1} is a strictly increasing continuous function from [ϕ(1), ∞) to [1, ∞). Then we choose a constant K₀ ≥ 1 ∨ ϕ(1) and a strictly decreasing function α : (0, 1] → (0, ∞) such that

α(∆) → ∞ as ∆ → 0 and ∆^{1/4}α(∆) ≤ K₀, ∀ ∆ ∈ (0, 1]. (3.4)

For example, we could choose α(∆) = K₀∆^{−ε} for some ε ∈ (0, 1/4]. For a given step size ∆ ∈ (0, 1], define the truncated mapping π_∆ : R^n → R^n by

π_∆(x) = (|x| ∧ ϕ^{−1}(α(∆))) x/|x|, (3.5)

where we let x/|x| = 0 when x = 0. It is easy to see that the truncated mapping π_∆ maps x to itself when |x| ≤ ϕ^{−1}(α(∆)) and to ϕ^{−1}(α(∆))x/|x| when |x| > ϕ^{−1}(α(∆)).
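The truncated mapping π_∆ just defined is straightforward to implement. The sketch below uses the illustrative choices ϕ(r) = r³ (so ϕ^{−1}(r) = r^{1/3}) and α(∆) = K₀∆^{−ε} with our own example values K₀ = 2 and ε = 1/4; only the radial projection itself reflects the construction in the text.

```python
import numpy as np

# Illustrative truncation machinery (phi, alpha, K0, eps are example
# choices, not the paper's): phi(r) = r**3, alpha(Delta) = K0 * Delta**(-eps).
K0, eps = 2.0, 0.25

def alpha(delta):
    return K0 * delta ** (-eps)

def phi_inv(r):
    return r ** (1.0 / 3.0)

def pi_delta(x, delta):
    """Truncated mapping: return x itself if |x| <= phi^{-1}(alpha(delta)),
    otherwise project x radially onto the sphere of that radius."""
    x = np.asarray(x, dtype=float)
    norm = np.linalg.norm(x)
    radius = phi_inv(alpha(delta))
    if norm <= radius or norm == 0.0:
        return x
    return (radius / norm) * x
```

Small states pass through unchanged; large states are clipped to the sphere of radius ϕ^{−1}(α(∆)) while keeping their direction.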
We now define the truncated functions By definition, we could easily find that (3.6) Moreover, we could obtain that By Assumption 3.1, we derive that for any x, y ∈ R n , If Assumption 3.3 hold, then, for any ∆ ∈ (0, 1], x, y ∈ R n , one has Let us now introduce the discrete-time truncated EM numerical scheme to approximate the true solution of (2.1). Without loss of generality, we assume that τ is a positive number. For some positive integer M, we take step size ∆ = τ /M. Obviously, when we choose M sufficiently large, ∆ will become sufficiently small. Define t k = k∆ for k = −M, −M + 1, . . . , −1, 0, 1, 2, . . . . Set X ∆ (t k ) = ξ(t k ) for k = −M, −M + 1, . . . , −1, 0 and then form As usual, there are two kinds of the continuous-time truncated EM solutions. The first one is The second one is defined as follows: It is easy to see that Additionally, x ∆ (t) is an Itô process on t 0 with its Itô differential We now prepare some useful lemmas. Before stating the next lemma, we define where c p is a positive constant which is independent of ∆. Proof Fix any p 2. Then by the Hölder inequality and the Burkholder-Davis-Gundy inequality, we derive from (3.6) that By the characteristic function's argument [6] , for ∆ ∈ (0, 1], we could get where c 0 is a positive constant independent of ∆. Then, by (3.1), we have Thus, we derive that When 0 < p < 2, an application of Jensen's inequality yields that We complete the proof. Lemma 3.6 Let Assumptions 3.1 and 3.3 hold. Then, for any q ∈ (2, p], we have sup Proof For any ∆ ∈ (0, 1] and t > 0, by Itô's formula and (3.9), we derive that We now handle A 1 , A 2 , and A 3 , respectively. First, we could see that and Then we get By Lemma 3.5, (3.4), (3.6) , and Young's inequality, we have By (3.6), we obtain that This, together with (3.4) and Lemma 3.5, implies Combining (3.20) and (3.22) together, we derive that By (3.1), one can see that An application of Gronwall's inequality yields that where C is independent of ∆. 
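The discrete-time truncated EM scheme above, X_{k+1} = X_k + f_∆(X_k, X_{k−M})∆ + g_∆(X_k, X_{k−M})∆B_k + h_∆(X_k, X_{k−M})∆N_k, can be sketched as follows for a scalar equation. The coefficients, truncation radius R (standing in for ϕ^{−1}(α(∆))), and parameter values are our own illustrative choices, not the paper's example.

```python
import numpy as np

def truncated_em_sddewpj(f, g, h, xi, tau, M, T, lam, R, rng):
    """One truncated EM path for a scalar SDDE with Poisson jumps:
    X_{k+1} = X_k + f_D(X_k, X_{k-M}) dt + g_D(...) dB_k + h_D(...) dN_k,
    where f_D(x, y) = f(pi(x), pi(y)) etc. and pi clips to radius R.

    xi : callable giving the initial segment on [-tau, 0].
    """
    dt = tau / M
    n = round(T / dt)                      # number of steps on [0, T]
    pi = lambda x: max(-R, min(R, x))      # scalar truncation mapping
    X = np.empty(M + n + 1)
    for k in range(M + 1):                 # X[0..M] covers [-tau, 0]
        X[k] = xi(-tau + k * dt)
    for k in range(M, M + n):
        x, y = pi(X[k]), pi(X[k - M])      # current and delayed states
        dB = rng.normal(0.0, np.sqrt(dt))
        dN = rng.poisson(lam * dt)
        X[k + 1] = X[k] + f(x, y) * dt + g(x, y) * dB + h(x, y) * dN
    return X

# Illustrative super-linear coefficients (ours, not the paper's Example 5.1):
f = lambda x, y: -x**3 + y
g = lambda x, y: 0.5 * x**2
h = lambda x, y: 0.1 * (x + y)
rng = np.random.default_rng(1)
X = truncated_em_sddewpj(f, g, h, xi=lambda t: 1.0, tau=1.0, M=50,
                         T=2.0, lam=2.0, R=2.0, rng=rng)
```

Because the truncated coefficients are evaluated only at clipped states, every increment is bounded and the path stays finite for any step size.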
Since this inequality holds for any ∆ ∈ (0, 1], the desired result follows. The proof is therefore complete. and Proof By Lemma 3.6 and (3.13), we get (3.25) . Then for any q ∈ (0, 2), Jensen's inequality gives that We complete the proof. By using Lemmas 3.4, 3.6, and the Chebyshev inequality, we can immediately have the following lemma. (3.28) Then we have Let us now discuss the rate of convergence in L 2 sense for the truncated EM solutions to the true solution. Theorem 3.9 Let Assumptions 2.1 and 3.1-3.3 hold. Suppose that there exists a real number q ∈ (2, p) such that q > (1 + β)η. Then, for any ∆ ∈ (0, 1], we have (3.31) In particular, let Then it holds for any ∆ ∈ (0, 1] that and We write ρ ∆,L = ρ for simplicity. Note that for q ∈ (2, η), we have q > (1 + β)q. By Itô's formula, for any t ∈ [0, T ] and ∆ ∈ (0, 1], we have (3.35) First, we estimate I 1 . Note that Then we have (3.36) By Assumptions 2.1, 3.2, and (3.2), we derive that and κ(s) is defined as before. As for I 12 , we have By Young's inequality, Hölder's inequality, Assumption 3.1, Lemma 3.6, and (3.7), we have · [E|ξ(κ(s))| q ] 2/(q−2β) ) (q−2β)/q ds. Then by Chebyshev's inequality, we get (3.37) We can use the same technique to handle I 122 . By Young's inequality, Hölder's inequality, Lemmas 3.6, 3.7, (3.8), and the inequality 2q/(q −2β) 2, we obtain Let us now estimate I 2 . By Assumptions 2.1, 3.1, and Lemma 3.7, we obtain Combining (3.35)-(3.39) together, one can see that E|e ∆ (T ∧ ρ)| 2 C((ϕ −1 (α(∆))) 2β+2−q + (α(∆)) 2 ∆ + ∆ (q−2β)/q + ∆ 2γ ). The desired results (3.30) and (3.31) follow by letting L → ∞ and using Lemmas 3.7 and 3.8. In particular, by the definition of ϕ, we can derive (3.33) and (3.34). The proof is therefore complete. In the following remark, we get the optimal convergence rate in L 2 when imposing the stronger condition on q. 
Remark 2 In Theorem 3.9, if there exists a real number q ∈ ((1 + β)η, p) such that q > (1 + β)/ε, then, for any ∆ ∈ (0, 1], we have (3.40) and For q > (1 + β)/ε, we can derive that Then, by (3.33) and (3.34), we get (3.40) and (3.41), respectively. To obtain the rate of convergence in L p sense for p > 2, we need to replace Assumption 3.2 with the following assumption. Assumption 3.10 There exist constants K 2 > 0 and η ∈ (2, p) such that Many techniques used in Theorem 3.9 are applied to give the following theorem, so we omit some similar proof procedures. Theorem 3.11 Let Assumptions 2.1, 3.1, 3.3, and 3.10 hold. Suppose that there exists a real number q ∈ (2, p) such that q > (1 + β) η. Then, for any p ∈ (2, η) and ∆ ∈ (0, 1], we have and Proof Let e ∆ (t) and ρ be the same as before. Note that for q ∈ (p, η), we have q > (1 + β)q. By Itô's formula, for any t ∈ [0, T ] and ∆ ∈ (0, 1], we get Then we derive that By Young's inequality, Assumptions 2.1, 3.10, and (3.2), we obtain that Moreover, Similar to (3.37) and (3.38), we derive that and (3.47) In addition, we derive from Assumptions 2.1, 3.1, and Lemma 3.7 that Combining (3.44)-(3.48) together yields that E|e ∆ (s ∧ ρ)| p ds + (ϕ −1 (α(∆))) pβ+p−q + (α(∆)) p ∆ p/2 + ∆ (q−pβ)/q + ∆ pγ . E|e ∆ (T ∧ ρ)| p C((ϕ −1 (α(∆))) pβ+p−q + (α(∆)) p ∆ p/2 + ∆ (q−pβ)/q + ∆ pγ ). We can get (3.42), (3.43) by letting L → ∞ and using Lemmas 3.7, 3.8. The proof is complete. 4 Convergence in L p for 0 < p < 2 In this section, we will discuss the convergence and the rate of the convergence of the truncated EM method for (2.1) in L p for 0 < p < 2. To achieve this goal, we need to impose the following assumptions on coefficients. Assumption 4.1 There exists a positive constant K L such that with |x| ∨ |y| ∨ |x| ∨ |y| L. Assumption 4.2 There exist constants K 5 > 0, K 6 0, and σ > 2 such that We could get the following lemma in the similar way as Lemma 3.4 was proved. 
In the previous section, the jump coefficient h grows at most linearly, but under Assumptions 4.1 and 4.2, h is allowed to grow super-linearly. Thus, we need to truncate all three coefficients. In the same way as in Section 3, we first choose a strictly increasing continuous function ϕ : R_+ → R_+ such that ϕ(r) → ∞ as r → ∞ and

sup_{|x|∨|y| ≤ r} (|f(x, y)| ∨ |g(x, y)| ∨ |h(x, y)|) ≤ ϕ(r), ∀ r ≥ 1.

Choose K₀ and α : (0, 1] → (0, ∞) as in (3.4). For a given step size ∆ ∈ (0, 1], the truncated mapping π_∆ is defined as in (3.5), and the truncated functions are defined as follows:

f_∆(x, y) = f(π_∆(x), π_∆(y)), g_∆(x, y) = g(π_∆(x), π_∆(y)), h_∆(x, y) = h(π_∆(x), π_∆(y)).

It is easy to see that

|f_∆(x, y)| ∨ |g_∆(x, y)| ∨ |h_∆(x, y)| ≤ ϕ(ϕ^{−1}(α(∆))) = α(∆), ∀ x, y ∈ R^n. (4.2)

Moreover, if Assumption 4.2 holds, then for any ∆ ∈ (0, 1] and x, y ∈ R^n the truncated coefficients satisfy the corresponding condition (4.3). Since |h_∆(x, y)| ≤ α(∆), similarly to Lemma 3.5, we have the following lemma, in which c_p is a positive constant independent of ∆. The following lemma states that the numerical solution is bounded in mean square. Proof Since the proof is similar to that of Lemma 3.6, we only highlight how to deal with the jump term. By Itô's formula and (4.3), we derive the desired estimate for any ∆ ∈ (0, 1] and t ∈ [0, T]. Moreover, by (3.4) and Lemma 4.4, we can bound the jump term. We observe that the resulting right-hand side is nondecreasing in t; hence, an application of Gronwall's inequality yields the assertion, where C is independent of ∆. We complete the proof. Given the boundedness of the numerical solution, the estimates of the stopping times in Lemma 3.8 still hold. Now, we are going to state the convergence of the truncated EM method for SDDEwPJs in L^p for 0 < p < 2. Proof Let e_∆(t) = x(t) − x_∆(t) for t ≥ 0 and ∆ ∈ (0, 1]. Define ρ_{∆,L} = τ_L ∧ τ_{∆,L}, and write ρ_{∆,L} = ρ for simplicity. Let δ > 0 be arbitrary. By Young's inequality, we obtain the estimate (4.11); note also that E|e_∆(T)|² ≤ C.
Inserting (4.12) and (4.13) into (4.11) yields that (4.14) Let ε be arbitrary. We choose δ sufficiently small such that and choose L sufficiently large such that Thus, Moreover, we could use the similar technique in the proof of [9, Theorem 3.5] to prove that Combining (4.10), (4.15) , and (4.16) together, we have Hence, we get the desired result (4.8) . Then combining (4.8) and Lemma 4.4 yields (4.9). We complete the proof. Next, in order to estimate the rate of the convergence at time T, we have to impose an extra condition. Assumption 4.7 There exists a positive constant K 7 such that Here, U (·, ·) is defined as before. (4.17) where ρ ∆,L := τ L ∧ τ ∆,L and τ L , τ ∆,L are defined in Lemma 3.8. Proof Let e ∆ (t) = x(t) − x ∆ (t) for t 0 and ∆ ∈ (0, 1]. We write ρ ∆,L = ρ for simplicity. For 0 s t ∧ ρ, we observe that Recalling the definition of f ∆ , g ∆ , and h ∆ , we obtain for 0 s t ∧ ρ that By Itô's formula and Assumption 4.7, for any t ∈ [0, T ], we have By Assumption 2.1 and Lemma 4.4, similar to the proof of Theorem 3.9, we derive that These imply that The required assertion follows by the Gronwall inequality. Then, for any T > 0, we have and (4.20) Proof We use the notations of e ∆ (t) and ρ ∆,L as before. We write ρ ∆,L = ρ for simplicity. By (4.10) and (4.14), one can see that for any ∆ ∈ (0, 1), L > ξ , and δ > 0. Choosing we obtain By condition (4.18), we derive that Using Lemma 4.8, one has Combining Lemma 4.4 and (4.19) together, we can derive (4.20) . We complete the proof. If we impose an additional condition: assume that there exist constants K 8 > 0 and β ∈ [0, 1) such that for any x, y, x, y ∈ R n , then we could obtain better convergence rate. However, our main result can cover more equations without this condition (4.21). In this section, we give an example to illustrate our theories. Consider the super-linear scalar SDDEwPJs Let η = 3. 
In the same way, we can derive that

((η − 1)/2)|g(x, y) − g(x̄, ȳ)|² = |(1/2)|x|^{3/2} + y − (1/2)|x̄|^{3/2} − ȳ|²
≤ (1/2)||x|^{3/2} − |x̄|^{3/2}|² + 2|y − ȳ|²
≤ (9/8)|x − x̄|²(|x|^{1/2} + |x̄|^{1/2})² + 2|y − ȳ|²
≤ (9/2)|x − x̄|² + (9/4)|x − x̄|²(|x|² + |x̄|²) + 2|y − ȳ|².

Combining this with the corresponding estimate for (x − x̄)(f(x, y) − f(x̄, ȳ)) yields a bound of the form C(|x − x̄|² + |y − ȳ|²) − (1/4)|x − x̄|²(|x|² + |x̄|²) + (1/4)|y − ȳ|²(|y|² + |ȳ|²). Therefore, Assumption 3.2 is satisfied with U(x, x̄) = (1/4)|x − x̄|²(|x|² + |x̄|²). For p > 3, we get that

x^T f(x, y) + ((p − 1)/2)|g(x, y)|² = x(−5x³ + (1/8)|y|^{5/4} + 2x) + ((p − 1)/2)|(1/2)|x|^{3/2} + y|² ≤ C(1 + |x|² + |y|²).

Thus, Assumption 3.3 is satisfied as well. Additionally, it is easy to see that

sup_{|x|∨|y| ≤ r} (|f(x, y)| ∨ |g(x, y)| ∨ |h(x, y)|) ≤ 5r³, ∀ r ≥ 1.

Hence, we can choose ϕ(r) = 5r³, which gives ϕ^{−1}(r) = (r/5)^{1/3}. In order for q ≥ (1 + β)/ε to hold, we set p = 26 so that Assumption 3.3 is satisfied. Then q ∈ ((1 + β)η, p), i.e., q ∈ (9, 26); we choose q = 25. Moreover, let ε = 1/8; then q ≥ (1 + β)/ε is satisfied, so α(∆) = K₀∆^{−1/8}. By Theorem 3.9, we have

E|x(T) − x_∆(T)|² ≤ C∆^{(2γ)∧(3/4)},

which means that the L²-convergence rate of the truncated EM method for SDDEwPJs (5.1) is (2γ) ∧ (3/4). On the other hand, let us verify Assumptions 4.1, 4.2, and 4.7. By (5.2) and (5.3), we find that Assumption 4.1 is satisfied. In addition, we have

2xf(x, y) + |g(x, y)|² + λ(2xh(x, y) + |h(x, y)|²)
= 2x(−5x³ + (1/8)|y|^{5/4} + 2x) + |(1/2)|x|^{3/2} + y|² + λ(2x(x + y) + |x + y|²)
≤ C(1 + |x|² + |y|²).

Thus, Assumption 4.2 is satisfied. By (5.4) and (5.5), we obtain

2(x − x̄)(f(x, y) − f(x̄, ȳ)) + |g(x, y) − g(x̄, ȳ)|² + 2λ(x − x̄)(h(x, y) − h(x̄, ȳ)) + λ|h(x, y) − h(x̄, ȳ)|²
≤ 11(|x − x̄|² + |y − ȳ|²) − (11/4)|x − x̄|²(|x|² + |x̄|²) + (11/4)|y − ȳ|²(|y|² + |ȳ|²) + 5λ|x − x̄|² + 3λ|y − ȳ|²
≤ 12(|x − x̄|² + |y − ȳ|²) − (11/4)|x − x̄|²(|x|² + |x̄|²) + (11/4)|y − ȳ|²(|y|² + |ȳ|²).

Hence, Assumption 4.7 is satisfied with U(x, x̄) = (11/4)|x − x̄|²(|x|² + |x̄|²). Choose ϕ(r) = 5r³ and c₂ = (1/5)^{1/3}.
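The bound sup_{|x|∨|y| ≤ r}(|f| ∨ |g| ∨ |h|) ≤ 5r³ used for the example coefficients f(x, y) = −5x³ + (1/8)|y|^{5/4} + 2x, g(x, y) = (1/2)|x|^{3/2} + y, h(x, y) = x + y can be spot-checked numerically on a grid (a sketch of our own; the grid resolution and test radii are illustrative choices):

```python
import numpy as np

# Coefficients of the scalar example and the dominating function
# phi(r) = 5 r^3 (with inverse phi^{-1}(r) = (r/5)^{1/3}).
f = lambda x, y: -5 * x**3 + abs(y)**1.25 / 8 + 2 * x
g = lambda x, y: 0.5 * abs(x)**1.5 + y
h = lambda x, y: x + y
phi = lambda r: 5 * r**3

def sup_on_box(r, n=201):
    """Maximum of |f| v |g| v |h| over an n-by-n grid on [-r, r]^2."""
    s = np.linspace(-r, r, n)
    X, Y = np.meshgrid(s, s)
    return max(np.abs(f(X, Y)).max(), np.abs(g(X, Y)).max(),
               np.abs(h(X, Y)).max())

# Check the claimed bound at several radii r >= 1.
checks = {r: sup_on_box(r) <= phi(r) for r in (1.0, 2.0, 5.0, 10.0)}
print(checks)
```

Since the grid maximum never exceeds the true supremum, a passing check is consistent with (though of course weaker than) the analytic bound in the text.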
Let 0 < p < 2/(12γ + 1), and define α(∆) as before. Then condition (4.18) is satisfied, that is,

α(∆) ≤ ϕ(c₂([(α(∆))^p ∆^{p/4}] ∨ ∆^{pγ})^{−1/(2−p)}).

By Theorem 4.9, we obtain the corresponding error bound, which means that the L^p-convergence (p ∈ (0, 2)) rate of the truncated EM method for SDDEwPJs (5.1) is p((1/4 − ε) ∧ γ).

References

[1] Modeling with Itô Stochastic Differential Equations
[2] Preserving positivity in solutions of discretised stochastic differential equations
[3] Stochastic Differential Equations: Theory and Applications
[4] Numerical analysis of explicit one-step methods for stochastic delay differential equations
[5] Exponential stability in p-th mean of solutions, and of convergent Euler-type solutions, of stochastic delay differential equations
[6] Convergence rate of numerical solutions to SFDEs with jumps
[7] The partially truncated Euler-Maruyama method for highly nonlinear stochastic delay differential equations with Markovian switching
[8] The truncated EM method for stochastic differential equations with Poisson jumps
[9] The truncated Euler-Maruyama method for stochastic differential delay equations
[10] Numerical methods for nonlinear stochastic differential equations with jumps
[11] Strong convergence of Euler-type methods for nonlinear stochastic differential equations
[12] Convergence rate and stability of the truncated Euler-Maruyama method for stochastic differential equations
[13] Strong and weak divergence in finite time of Euler's method for stochastic differential equations with non-globally Lipschitz continuous coefficients
[14] Strong convergence of an explicit numerical method for SDEs with nonglobally Lipschitz continuous coefficients
[15] Stochastic differential delay equations with jumps, under nonlinear growth condition
[16] Numerical Solution of Stochastic Differential Equations
[17] Strong convergence rates of modified truncated EM method for stochastic differential equations
[18] Strong convergence of the stopped Euler-Maruyama method for nonlinear stochastic differential equations
[19] On the asymptotic stability and numerical analysis of solutions to nonlinear stochastic differential equations with jumps
[20] Stochastic Differential Equations and Applications
[21] The truncated Euler-Maruyama method for stochastic differential equations
[22] Convergence rates of the truncated Euler-Maruyama method for stochastic differential equations
[23] Stochastic differential delay equations
[24] Balanced implicit methods for stiff stochastic system
[25] A note on tamed Euler approximations
[26] Euler approximations with varying coefficients: the case of superlinearly growing diffusion coefficients
[27] Numerical simulation of a strongly nonlinear Ait-Sahalia-type interest rate model
[28] Convergence rates of truncated theta-EM scheme for SDDEs
[29] The semi-implicit Euler method for stochastic differential delay equation with jumps
[30] Numerical methods for nonlinear stochastic delay differential equations with jumps

Acknowledgements The authors would like to thank the associate editor and referees for their helpful comments and suggestions. This work was supported by the National Natural