key: cord-0168077-stgo8x6t authors: Czyżewska, Natalia; Morkisz, Paweł M.; Przybyłowicz, Paweł title: Approximation of solutions of DDEs under nonstandard assumptions via Euler scheme date: 2021-06-07 journal: nan DOI: nan sha: 20732572368a2fd24cba36b39fb8b8c1c7940827 doc_id: 168077 cord_uid: stgo8x6t We deal with the approximation of solutions of delay differential equations (DDEs) via the classical Euler algorithm. We investigate the pointwise error of the Euler scheme under nonstandard assumptions imposed on the right-hand side function f. Namely, we assume that f is globally of at most linear growth, satisfies a global one-sided Lipschitz condition, but is only locally Hölder continuous. We provide a detailed error analysis of the Euler algorithm under such nonstandard regularity conditions. Moreover, we report results of numerical experiments. Key words and phrases: delay differential equations, one-sided Lipschitz condition, locally Hölder continuous right-hand side function, Euler scheme. In this paper we deal with the problem of approximation of solutions z : [0, +∞) → R^d of delay differential equations (DDEs) of the following form

z′(t) = f(t, z(t), z(t − τ)), t ∈ [0, (n + 1)τ],
z(t) = η, t ∈ [−τ, 0], (1.1)

with a constant time lag τ ∈ (0, +∞). Here η ∈ R^d is the (constant) initial value, n ∈ Z_+ is a (finite and fixed) horizon parameter, and f : [0, +∞) × R^d × R^d → R^d is a right-hand side function that satisfies suitable regularity conditions. We investigate the error of the Euler scheme in the case when the right-hand side function is not necessarily globally Lipschitz continuous. There are two main streams of commonly used methods for the approximation of solutions of delay differential equations: those connected with the theory of solving ordinary differential equations (ODEs) and those connected with the theory of solving partial differential equations (PDEs). The former group relies, e.g., on Runge-Kutta methods [4, Chapter 9] and [1, 7, 28] (including special cases such as the Euler, midpoint or Heun methods), continuous Runge-Kutta methods [3, 7], linear multistep methods [3, 18] (including Adams-Bashforth methods) or predictor-corrector methods, cf. [7]. The latter takes advantage of reformulating a DDE as a PDE and uses, for instance, the infinitesimal generator approach [8], an optimal asymptotic homotopy method [5] or the method of lines [6, 7, 20]. All of the above, as well as the classical literature on solving DDEs [12, 13, 26], mainly assumes global Lipschitz regularity of the right-hand side function f of the underlying DDE (1.1), similarly to the classical literature on ordinary differential equations [9, 17, 13]. Meanwhile, it turns out that real-world applications call for nonstandard assumptions. For example, a phase change of metallic materials can be described by a delay differential equation due to the delay in the response to the change in processing conditions [11, 14, 23, 24, 25, 27]. A novel case with a local Lipschitz condition and uniform boundedness of the right-hand side function was studied in [1]. In [11, 24] the authors weakened these assumptions by considering the case when the right-hand side function is one-dimensional, locally Hölder continuous and monotone. In this paper we generalize the techniques used in [11] and study the error of the classical Euler scheme in the multidimensional case, when the right-hand side function is locally Hölder continuous and satisfies a one-sided Lipschitz condition. To the best of our knowledge, until now there have been no results in the literature on the error analysis of the Euler scheme under such nonstandard assumptions.
Our paper can be viewed as a first step in that direction. Moreover, we believe that our approach can be adapted to other numerical schemes, especially those of higher order. The main contributions of the paper are as follows. (i) We provide a detailed and rigorous theoretical analysis of the error of the Euler scheme under nonstandard assumptions on the right-hand side function f. In particular, we show the dependence of the error of the Euler scheme on the Hölder exponents of the right-hand side function f (see Theorem 4.2). (ii) We perform many numerical experiments that confirm our theoretical conclusions. The paper is organized as follows. The statement of the problem, basic notions and definitions are given in Section 2. In this section we also recall the definition of the Euler scheme. Auxiliary analytical properties of the DDE (1.1) (such as existence and uniqueness of the solution) under nonstandard assumptions are settled in Section 3. The proof of the main result, which states the upper bound on the error of the Euler algorithm, is given in Section 4. Section 5 contains results of numerical experiments performed for three exemplary DDEs. In the Appendix (Section 6) we gather and prove some auxiliary results concerning properties of solutions of ODEs in the case when the right-hand side function is of at most linear growth, satisfies a one-sided Lipschitz condition, and is only locally Hölder continuous (Lemma 6.1). We also show an error estimate for the Euler scheme applied to ODEs under such regularity conditions (Lemma 6.2). These results are used in the proof of Theorem 4.2. We denote x ∧ y = min{x, y}, x ∨ y = max{x, y}, and x_+ = max{0, x} for all x, y ∈ R. For x, y ∈ R^d we take ⟨x, y⟩ = Σ_{k=1}^d x_k y_k and ‖x‖ = ⟨x, x⟩^{1/2}. For the right-hand side function f : [0, +∞) × R^d × R^d → R^d in the equation (1.1) we impose the following assumptions:

(F1) f ∈ C([0, +∞) × R^d × R^d; R^d).

(F2) There exists K ∈ (0, +∞) such that for all t ∈ [0, +∞), y, z ∈ R^d,
‖f(t, y, z)‖ ≤ K(1 + ‖y‖)(1 + ‖z‖).

(F3) There exists H ∈ R such that for all t ∈ [0, +∞), y_1, y_2, z ∈ R^d,
⟨y_1 − y_2, f(t, y_1, z) − f(t, y_2, z)⟩ ≤ H(1 + ‖z‖)‖y_1 − y_2‖².

(F4) There exist L ∈ (0, +∞), α, β_1, β_2, γ ∈ (0, 1] such that for all t_1, t_2 ∈ [0, +∞), y_1, y_2, z_1, z_2 ∈ R^d,
‖f(t_1, y_1, z_1) − f(t_2, y_2, z_2)‖ ≤ L( (1 + ‖y_1‖ + ‖y_2‖)(1 + ‖z_1‖ + ‖z_2‖)|t_1 − t_2|^α + (1 + ‖z_1‖ + ‖z_2‖)(‖y_1 − y_2‖^{β_1} + ‖y_1 − y_2‖^{β_2}) + (1 + ‖y_1‖ + ‖y_2‖)‖z_1 − z_2‖^γ ).

The assumptions above, especially (F3) and (F4), are inspired by the real-life model describing the evolution of dislocation density, see [11, 14, 23, 24, 25, 27] for a detailed description and discussion of the DDE involved. In Lemma 3.1 below we prove that under the assumptions (F1)-(F3) the equation (1.1) has a unique solution z = z(t) on the whole interval [−τ, (n + 1)τ]. After [7] we recall the definition of the Euler scheme for DDEs of the form (1.1). For the fixed horizon parameter n ∈ Z_+ the Euler scheme that approximates a solution z = z(t) of (1.1) for t ∈ [0, (n + 1)τ] is defined recursively on subsequent intervals in the following way ([7]). This iterative way of solving DDEs is often called the method of steps (see [26]). We fix the discretization parameter N ∈ Z_+ and set

h := τ/N, t^j_k := jτ + kh, k = 0, 1, . . . , N, j = 0, 1, . . . , n. (2.1)

Note that for each j the sequence {t^j_k}_{k=0}^N provides a uniform discretization of the subinterval [jτ, (j + 1)τ]. The discrete approximation of z in [0, τ] is defined by

y^0_0 := η, (2.2)
y^0_{k+1} := y^0_k + h · f(t^0_k, y^0_k, η), k = 0, 1, . . . , N − 1. (2.3)

Let us assume that the approximations y^{j−1}_k ≈ z(t^{j−1}_k), k = 0, 1, . . . , N, have already been defined in [(j − 1)τ, jτ]. (Notice that for j = 1 it was done in (2.2) and (2.3).) Then for j = 1, 2, . . . , n we take

y^j_0 := y^{j−1}_N, (2.4)
y^j_{k+1} := y^j_k + h · f(t^j_k, y^j_k, y^{j−1}_k), k = 0, 1, . . . , N − 1. (2.5)

So as the output we obtain the sequence {y^j_k}_{k=0,1,...,N}, j = 0, 1, . . . , n, that provides a discrete approximation of the values {z(t^j_k)}_{k=0,1,...,N}, j = 0, 1, . . . , n.
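To make the scheme above concrete, the following is a minimal Python sketch of the method of steps combined with the Euler recursion (2.2)-(2.5). The function name euler_dde, its signature, and the example right-hand side are our own illustrative choices, not taken from the authors' implementation; the sketch assumes a constant lag τ and the constant initial condition η, as in (1.1).

```python
import numpy as np

def euler_dde(f, eta, tau, n, N):
    """Euler scheme for z'(t) = f(t, z(t), z(t - tau)) on [0, (n+1)*tau]
    with the constant initial condition z(t) = eta for t in [-tau, 0].

    Returns an array y of shape (n+1, N+1, d) with y[j, k] ~ z(j*tau + k*h).
    """
    eta = np.atleast_1d(np.asarray(eta, dtype=float))
    d = eta.size
    h = tau / N
    y = np.empty((n + 1, N + 1, d))

    # Interval [0, tau]: the delayed argument is the initial value eta, cf. (2.2)-(2.3).
    y[0, 0] = eta
    for k in range(N):
        t = k * h
        y[0, k + 1] = y[0, k] + h * f(t, y[0, k], eta)

    # Intervals [j*tau, (j+1)*tau], j = 1, ..., n: the delayed value at
    # t_k^j - tau = t_k^{j-1} is the already computed y[j-1, k], cf. (2.4)-(2.5).
    for j in range(1, n + 1):
        y[j, 0] = y[j - 1, N]
        for k in range(N):
            t = j * tau + k * h
            y[j, k + 1] = y[j, k] + h * f(t, y[j, k], y[j - 1, k])
    return y

if __name__ == "__main__":
    # Illustrative right-hand side in the spirit of Fact 6.5; the parameter
    # values below are arbitrary and only serve as an example.
    A, B, C, D, rho = 1.0, 1.0, 0.5, -0.1, 0.5
    f = lambda t, y, z: (A - B * np.sign(y) * np.abs(y)
                         - C * np.sign(y) * np.abs(y) ** rho * np.abs(z)
                         + D * y * z)
    y = euler_dde(f, eta=0.1, tau=1.0, n=5, N=1000)
    print(y[-1, -1])  # approximation of z((n+1)*tau)
```

Note that a single array row per interval is enough here because, for the equidistant grid (2.1), the delayed node t^j_k − τ coincides exactly with the node t^{j−1}_k of the previous interval, so no interpolation is needed.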
The aim is to investigate the error of Euler scheme under the (mild and nonstandard) assumptions (F1)-(F4), i.e.: upper bound on the following quantity Unless otherwise stated, all constants appearing in the estimates will only depend on τ , η, n, d, K, H, L, α, β 1 , β 2 , and γ, but not on the discretization parameter N . Moreover, we use the same symbol to denote different constants. Remark 2.1. For a time-dependent delay, i.e. τ = τ (t) or a state-dependent delay, i.e. τ = τ (t, z(t)) one can notice that it is not possible to use methods of steps. When we deal with a variable delay it is highly unlikely to hit the approximation mesh {t j k } 0≤j≤n 0≤k≤N . Because of that, it is required to use an ODE solver with additional dense output (called continuous extension, cf. [4, Chapter 9] ). However, the direct imitation of the method of steps with a vanishing lag case (a DDE is said to have a vanishing lag at the point t * if τ (t, z(t)) → 0 as t → t * ) encounters a natural barrier beyond which the solution cannot proceed [3] . In the following lemma we show, by using results from the Appendix, that the delay differential equation (1.1) has a unique solution under the assumptions (F1)-(F3). Note that the assumptions are weaker than those known from the standard literature. Namely, we use only one-side Lipschitz assumption and local Hölder condition for the right-hand side function f instead of a global Lipschitz continuity. Let us introduce following notation. By φ j = φ j (t) we denote the solution z of (1.1) on the interval t ∈ [jτ, (j + 1)τ ] for j ∈ Z + ∪ {0}. Let φ −1 (t) := η for all t ∈ [−τ, 0]. Lemma 3.1. Let η ∈ R d and let f satisfy (F1)-(F3). Moreover, fix τ ∈ (0, +∞) and n ∈ Z + ∪ {0}. Then the equation (1.1) has a unique solution z ∈ C 1 ([0, (n + 1)τ ]; R d ). (3.1) Moreover, then there exist K 0 , K 1 , . . . , K n ≥ 0 such that for j = 0, 1, . . . , n and, for all t, s ∈ [jτ, (j + 1)τ ] Proof. We proceed by induction with respect to j. For j = 0, the equation (1.1) can be written as with the initial condition z(0) = η. Denoting by we get, by the properties of f namely (F1) and (F2), that withK 0 = K(1 + η ), and by (F3) for all y 1 , y 2 ∈ R d and t ∈ [0, τ ] and we set K −1 := η . In that way it depends only on values of η , K, τ . Let us now assume that there exists 0 ≤ j ≤ n − 1 such that the statement of the lemma holds for the solution φ j : [jτ, (j + 1)τ ] → R d . Consider the equation We get by the induction assumption and from the properties of f that , and for all t ∈ [(j + 1)τ, (j + 2)τ ], y 1 , y 2 ∈ R d Hence, by Lemma 6.1 we get that there exists a unique continuously differentiable solution φ j+1 : From the above inductive construction we see that the solution of (1.1) is continuous. Moreover, due to the continuity of f , φ j , and φ j−1 we get for any 0 ≤ j ≤ n − 1 that Hence, the solution of (1.1) is continuously differentiable and the proof is completed. . Additionally, fix τ ∈ (0, +∞) and n ∈ Z + ∪ {0}. For j = 0, 1, . . . , n consider the functions g j : . Then the following holds: (iv) There existL 0 ,L 1 . . . ,L n ∈ [0, +∞) such that for all j = 0, 1, . . . , n, Proof. Conditions (i), (ii), and (iii) follow from the proof of Lemma 3.1. By the assumption (F4) and Lemma 3.1 we get for all t 1 , t 2 ∈ [jτ, (j + 1)τ ], y 1 , y 2 ∈ R d that In this section we provide a proof of the main result that consists of the upper bound on the error (2.6) for the Euler algorithm. In the proof we shall use the following lemma. Proof. We proceed by induction. 
Note that where y 0 0 = η and g 0 (t, y) = f (t, y, η), so by Lemmas 3.2, 6.2 we get that max whereK 0 =K 0 (K, η, τ ). Now, let us assume that there exist j = 0, 1, . . . , n − 1 and which is obviously satisfied for j = 0. By (2.5) and (F2) assumption we get for k = 0, 1, . . . , N − 1 that where y j+1 . This completes the proof. Below we stated and prove the main result of the paper. and for j = 1, 2, . . . , n where φ j = φ j (t) is the solution of (1.1) on the interval [jτ, (j + 1)τ ]. In particular, if γ = 1 then for j = 1, 2, . . . , n Proof. For t ∈ [0, τ ] we approximate the solution z of (1.1) by the Euler method where g 0 (t, y) = f (t, y, η). Applying Lemmas 6.2, 3.2 to ξ := η, g : Therefore (4.7) is proved. Starting from the interval [τ, 2τ ] we proceed by induction. Namely, for j = 1 and t ∈ [τ, 2τ ] the DDE (1.1) reduces to the following ODE We approximate the solution z of (4.13) by the auxiliary Euler schemẽ 14) and from (4.12) we get and max Therefore, we have for k = 0, 1, . . . , N and we need to estimate ỹ 1 k − y 1 k . Let us denote by e 1 k :=ỹ 1 k − y 1 k , k = 0, 1, . . . , N, (4.20) where, by (4.14), e 1 0 =ỹ 1 0 − y 1 0 = 0. From (4.15) and (2.5) we have for k = 0, 1, . . . , N − 1 that L (4.23) From (4.21) we obtain that where while from the assumption (F3) and from Lemma 3.1 we get The fact above together with (4.25) implies Hence, by using (4.24), (4.26), and (4.27) we obtain Moreover, by the Cauchy-Schwarz inequality Hence, combining (4.28) with (4.29) we have whereĈ 1 := 2H + (1 + K 0 ). For N ≥ 2⌈τ ⌉ we have that h ∈ (0, 1/2), and by the Fact 6.6 we obtain e 1 for k = 0, 1, . . . , N − 1. Recall the well-known facts that for all ̺ ∈ (0, 1] and x, y ≥ 0 it holds and Thereby, from the assumption (F4) and by Lemma 3.1 we have the following estimate whereC 1 := L(1 + 2K 1 )C γ 2 (1 + η ) γ . Hence, by (4.31), (4.33) and (4.34) e 1 for all h ∈ (0, 1 2 ), k = 0, 1, . . . , N − 1. We stress thatD 1 , D 1 , M 1 do not depend on N . Since e 1 0 2 = 0, from the discrete Gronwall's inequality we get for all k = 0, 1, . . . , N e 1 From (4.19) and (4.36) we get for k = 0, 1, . . . , N where C 1 does not depend on N . Let us now assume that there exist j ∈ {1, 2, . . . , n − 1} and C j , which does not depend on N , such that for all k = 0, 1, . . . , N it holds (For j = 1 the statement has already been proven.) For t ∈ [(j + 1)τ, (j + 2)τ ] we consider the following ODE z ′ (t) = g j+1 (t, z(t)), t ∈ [(j + 1)τ, (j + 2)τ ], (4.39) with the initial value z((j + 1)τ ) = φ j ((j + 1)τ ) = φ j (t j N ) = φ j (t j+1 0 ) and g j+1 (t, y) = f (t, y, φ j (t − τ )). We approximate (4.39) by the following auxiliary Euler schemẽ Hence, from the induction assumption (4.38) Applying Lemmas 3.1, 3.2 and 6.2 for ξ := φ j (t j N ), g := g j+1 , [a, b] := [(j + 1)τ, (j + 2)τ ], , (4.45) and, in particular by using (4.38) and Lemma 4.1, we get whereC j+1 := L(1 + 2K j+1 )C γ j . Hence, from (4.44), (4.45), and (4.46) we have for h ∈ (0, 1 2 ) and k = 0, 1, . . . , N e j+1 where D j+1 , M j+1 ,D j+1 does not depend on N . Since e j+1 0 2 = 0, by the discrete Gronwall's lemma we get that for all k = 0, 1, . . . , N e j+1 Therefore, from (4.43) and (4.47) we get for k = 0, 1, . . . , N where C j+1 does not depend on N . This ends the proof. Instead of (F4) we can consider the following general assumption: (F4*) There exist L ≥ 0, p, q ∈ Z + , α, β i , γ j ∈ (0, 1] for i = 1, . . . , p, j = 1, . . . 
, q, such that for all t_1, t_2 ∈ [0, +∞) and y_1, y_2, z_1, z_2 ∈ R^d the analogue of the estimate in (F4) holds, with the terms ‖y_1 − y_2‖^{β_i}, i = 1, . . . , p, and ‖z_1 − z_2‖^{γ_j}, j = 1, . . . , q. For notational simplicity we considered only the case when (F4) is assumed. However, we stress that the general case, when (F4*) is assumed, can be treated by using the same proof technique as in the proof of Theorem 4.2. We have conducted numerical experiments with a Python implementation of the method described in (2.4) and (2.5). The numerical experiments were carried out on an Intel® Xeon® CPU E5-2650 v4 @ 2.20GHz. The exact solutions of the analyzed problems are not known, so we proceeded as follows to estimate the empirical rate of convergence. Theorem 4.2 provides the theoretical convergence rate of the Euler method; hence, the approximation computed by the Euler method on a dense mesh is used as the reference solution (1000 times more points than the densest mesh used, i.e. N = 9 · 10^3 · 2^13, and N = 9 · 10^3 · 2^10 for the SIR model). The results are presented in the figures comprising the test results for the convergence rate: on the y-axis there is −log10(err) and on the x-axis log10 N, where err and N are given by (2.6). The slope of the fitted linear regression then corresponds to the empirical convergence rate and is always reported (a minimal code sketch of this estimation procedure is given at the end of Section 5.2). The empirical convergence rate for the presented examples is close to one. This may indicate that in the considered cases the lower bounds of the error are not sharp.

5.1. Metal phase change model. Firstly, we consider a modified model describing a phase change of metallic materials from [25, Chapter 3.3], see also [11, 14, 23, 24, 27]. It can be described by a delay differential equation due to the delay in the response to the change in processing conditions. We consider the following case; the approximate solution is presented in Figure 1. Note that the solution graph is similar to the solutions from [11, 24, 27]. The test results for this case are given in Figure 2. The computation time is linear with respect to the number of computation points: it equals 0.0023 s for N · n = 90 and 5.775 s for N · n = 368640. We also tested equation (5.1) with different values of the parameters; exemplary solutions are presented in Figure 3. The empirical convergence rates for all these examples are again close to one. We also consider a similar case with subtle changes, i.e.

f(t, y, z) = A − B · sgn(y) · |y| − C · sgn(y) · |y|^̺ · |z| + D · y · z, (5.2)

where ̺, γ ∈ (0, 1], A, B, C > 0, and D ∈ R. This function (5.2) also satisfies the assumptions (F1)-(F4), see Fact 6.5. For the numerical experiment we take the same values of the parameters A, B, C, D, ̺, γ, τ, z_0, t_0, n as in the case of (5.1). The approximate solution of (5.2) is presented in Figure 1 and the test results for this case in Figure 2. As before, the computation time behaves linearly with respect to the number of computation points: it equals 0.002 s for N · n = 90 and 5.3525 s for N · n = 368640.

5.2. Model of releasing mature cells into the blood stream. Another example models the release of mature cells into the blood stream, the so-called Mackey-Glass equation (see Example 1.1.7, page 7 in [7]; originally introduced in [21]), a classical test problem on which the method behaves properly. The solution of (5.3) is presented in Figure 4 and the test results for this case in Figure 5. As before, the computation time is linear with respect to the number of computation points: it equals 0.01793 s for N · n = 900 and 25.6793 s for N · n = 3686400.
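The empirical rates reported in this section were obtained by the procedure described at the beginning of Section 5. The Python sketch below shows one possible way to implement it; it is an illustration only, with the solver passed in as a callable (e.g. the euler_dde sketch given after Section 2), and the reference-mesh factor and mesh sizes are our own example choices.

```python
import numpy as np

def empirical_rate(solver, f, eta, tau, n, Ns, ref_factor=1000):
    """Estimate the empirical convergence rate of the Euler scheme.

    solver(f, eta, tau, n, N) is expected to return an array of shape
    (n+1, N+1, d) of approximations on the grid (2.1).  A solution on a mesh
    with ref_factor * max(Ns) points per interval serves as the reference,
    and the slope of the regression of -log10(err) against log10(N) is
    returned, as in the description of the experiments in Section 5.
    Assumes every N in Ns divides the reference mesh size.
    """
    N_ref = ref_factor * max(Ns)
    y_ref = solver(f, eta, tau, n, N_ref)
    errs = []
    for N in Ns:
        y = solver(f, eta, tau, n, N)
        stride = N_ref // N
        # maximum error over all mesh points t_k^j, cf. (2.6)
        err = np.max(np.linalg.norm(y - y_ref[:, ::stride], axis=-1))
        errs.append(err)
    slope = np.polyfit(np.log10(Ns), -np.log10(errs), 1)[0]
    return slope, errs
```

For geometrically increasing values of N (e.g. doublings of a base mesh) the returned slope plays the role of the empirical convergence rate reported in the figures.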
5.3. SIR model. The last example is a SIR-type model describing the spread of a disease. Following [22], we consider the multidimensional delay differential equation (5.4). As the initial value vector we take η = (35280000, 20, 0, 0, 0, 0, 0, 0). It can be noticed that in (5.4) we have to deal with four different values of the delay. In order to compute the approximation we take one common delay τ = 0.5 and we compute a solution with that value of the time lag for n = 480 (which is equivalent to 240 days). At the same time we had to slightly modify the algorithm so that the proper delayed values are passed as arguments to the right-hand side function. The solution of the SIR model can be analyzed in Figure 6 (the values of S are not presented because of the big difference in the order of magnitude) and the convergence rate is presented in Figure 7. The computation time is linear with respect to the number of computation points: it equals 0.586 s for N · n = 8640 and 85.902 s for N · n = 4423680.

This section consists of some auxiliary results for solutions of ordinary differential equations and their Euler approximation that are used in the paper, especially for proving the main result, Theorem 4.2.

Lemma 6.1. Let us consider the following ODE

z′(t) = g(t, z(t)), t ∈ [a, b], z(a) = ξ, (6.1)

where −∞ < a < b < +∞, ξ ∈ R^d and g : [a, b] × R^d → R^d satisfies the following conditions:

(G1) g ∈ C([a, b] × R^d; R^d).
(G2) There exists K ∈ (0, +∞) such that for all t ∈ [a, b], y ∈ R^d, ‖g(t, y)‖ ≤ K(1 + ‖y‖).
(G3) There exists H ∈ R such that for all t ∈ [a, b], y_1, y_2 ∈ R^d,
⟨y_1 − y_2, g(t, y_1) − g(t, y_2)⟩ ≤ H‖y_1 − y_2‖².

Then the equation (6.1) has a unique solution z ∈ C^1([a, b]; R^d), the bounds (6.2) and (6.5) hold, and for all t, s ∈ [a, b]

‖z(t) − z(s)‖ ≤ K̃|t − s|. (6.3)

Proof. Since the right-hand side function g is continuous and of at most linear growth, Peano's theorem guarantees the existence of a C^1 solution (e.g. see Theorem 70.4, page 292 in [16]). Now we show that uniqueness follows from the one-sided Lipschitz assumption (G3). Namely, let us assume that (6.1) has two solutions z = z(t) and x = x(t) with the same initial value z(a) = x(a) = ξ, and set ϕ(t) := ‖z(t) − x(t)‖². By (G3) we have for all t ∈ [a, b] that ϕ′(t) = 2⟨z(t) − x(t), g(t, z(t)) − g(t, x(t))⟩ ≤ 2Hϕ(t) ≤ 2H_+ϕ(t). Integrating both sides of the preceding inequality and using ϕ(a) = 0 we get ϕ(t) ≤ 2H_+ ∫_a^t ϕ(s) ds. Hence, from Gronwall's lemma we obtain that ϕ(t) = 0 for all t ∈ [a, b], which in turn implies that z(t) = x(t) for all t ∈ [a, b]. Note that, by the assumption (G2), for all t ∈ [a, b] it holds that ‖z(t)‖ ≤ ‖ξ‖ + K(b − a) + K ∫_a^t ‖z(s)‖ ds. Again by Gronwall's lemma we obtain for all t ∈ [a, b] that ‖z(t)‖ ≤ (‖ξ‖ + K(b − a)) e^{K(b−a)}. This implies (6.2). By (G2) and (6.2) we obtain for all t, s ∈ [a, b] that ‖z(t) − z(s)‖ ≤ K(1 + sup_{u∈[a,b]} ‖z(u)‖)|t − s| =: K̃|t − s|. Hence, the proof of (6.3) is completed. As in (6.6) we get an analogous inequality for all t ∈ [a, b], which implies the required bound; by using Gronwall's lemma we get (6.5).

Lemma 6.2. Let us consider the following ordinary differential equation

z′(t) = g(t, z(t)), t ∈ [a, b], z(a) = ξ,

where −∞ < a < b < +∞, ξ ∈ R^d and g : [a, b] × R^d → R^d satisfies the following conditions:

(G1) g ∈ C([a, b] × R^d; R^d).
(G2) There exists K ∈ (0, +∞) such that for all t ∈ [a, b], y ∈ R^d, ‖g(t, y)‖ ≤ K(1 + ‖y‖).
(G3) There exists H ∈ R such that for all t ∈ [a, b], y_1, y_2 ∈ R^d,
⟨y_1 − y_2, g(t, y_1) − g(t, y_2)⟩ ≤ H‖y_1 − y_2‖².
(G4) There exist L ≥ 0, p ∈ N, α, β_1, β_2, . . . , β_p ∈ (0, 1] such that for all t_1, t_2 ∈ [a, b], y_1, y_2 ∈ R^d a local Hölder-type continuity estimate holds (cf. (F4)), with exponent α in the time variable and exponents β_1, . . . , β_p in the state variable.

Let us consider the Euler method based on equidistant discretization. Namely, for n ∈ Z_+, ∆ ∈ [0, +∞) we set h = (b − a)/n, t_k = a + kh, k = 0, 1, . . . , n, and let y_0 ∈ R^d be any vector from the ball B(ξ, ∆) = {y ∈ R^d : ‖ξ − y‖ ≤ ∆}. We take

y_{k+1} = y_k + h · g(t_k, y_k), k = 0, 1, . . . , n − 1.

Then the following holds:
(i) There exists C̃_1 = C̃_1(b − a, K) ∈ (0, +∞) such that for all n ∈ Z_+, ∆ ∈ [0, +∞), ξ ∈ R^d, y_0 ∈ B(ξ, ∆) we have
max_{0≤k≤n} ‖y_k‖ ≤ C̃_1(1 + ‖ξ‖)(1 + ∆). (6.14)
(ii) There exists C̃_2 = C̃_2(b − a, L, K, H, p, α, β_1, . . . , β_p) ∈ (0, +∞) such that for all n ∈ Z_+, ∆ ∈ [0, +∞), ξ ∈ R^d, y_0 ∈ B(ξ, ∆) we have the error estimate (6.15).
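For illustration, here is a minimal Python sketch of the Euler method analyzed in Lemma 6.2: equidistant steps on [a, b] and a starting point y0 that is only required to lie in the ball B(ξ, ∆) around the exact initial value ξ. The perturbed starting point models the error inherited from the previous interval in the method of steps; the function name, the test right-hand side and all parameter values below are our own illustrative choices.

```python
import numpy as np

def euler_ode(g, a, b, y0, n):
    """Euler method y_{k+1} = y_k + h * g(t_k, y_k) on [a, b] with n steps.

    y0 may be any point of the ball B(xi, Delta) around the exact initial
    value xi, as in Lemma 6.2.
    """
    h = (b - a) / n
    y = np.empty((n + 1,) + np.shape(np.atleast_1d(y0)))
    y[0] = np.atleast_1d(y0)
    for k in range(n):
        t = a + k * h
        y[k + 1] = y[k] + h * g(t, y[k])
    return y

# Example g: at most linear growth, one-sided Lipschitz (the first term is
# non-increasing in y), and only Hölder continuous in y with exponent 1/2.
g = lambda t, y: -np.sign(y) * np.abs(y) ** 0.5 + np.cos(t)
xi, Delta = np.array([1.0]), 0.05
y0 = xi + Delta * np.array([-1.0])   # a point of B(xi, Delta)
traj = euler_ode(g, a=0.0, b=2.0, y0=y0, n=1000)
print(traj[-1])
```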
From the above considerations we see that C 4 depends only on α, β 1 , . . . , β p , p, L, K, b − a. By (6.19) and (6.21) we get Let us denote e k = z(t k ) − y k , k = 0, 1, . . . , n. (6.23) Of course e 0 = ξ − y 0 ≤ ∆. Hence, we arrive at the following recursive inequality for k = 0, 1, . . . , n − 1. Applying the Gronwall's lemma we get when H + > 0 for k = 0, 1, . . . , n, and when H + = 0 for k = 0, 1, . . . , n. By elementary calculations we arrive at (6.15). The following fact is well-known, see, for example, pages 3-4 in [15] . f (t, y, z) = A − B · sgn(y) · |y| − C · sgn(y) · |y| ̺ · |z| γ + D · y · |z| γ . Then the function f satisfies the assumptions (F1)-(F4). Proof. Let us define h 1 (y) = − sgn(y) · |y| = −y, h 2 (y) = − sgn(y) · |y| ̺ for all y ∈ R. Then we can write that f (t, y, z) = A + B · h 1 (y) + C · h 2 (y) · |z| γ + D · y · |z| γ . therefore h 2 ∈ C(R). In particular, this implies that f ∈ C([0, ∞) × R × R), so f satisfies (F1). By Lemma 6.3 we have |h 1 (y)| = |y| ≤ 1 + |y|, |h 2 (y)| = |y| ̺ ≤ 1 + |y| for all y ∈ R. Hence, for (t, y, z) ∈ [0, +∞) × R × R |f (t, y, z)| ≤ A + B · |h 1 (y)| + C · |h 2 (y)| · |z| γ + |D||y||z| γ ≤ A + B(1 + |y|) + C(1 + |y|)(1 + |z|) + |D|(1 + |y|)(1 + |z|) ≤ (A + B + C + |D|)(1 + |y|)(1 + |z|), and therefore f satisfies (F2). For all t ≥ 0,z, y 1 , y 2 ∈ R (y 1 − y 2 ) f (t, y 1 , z) − f (t, y 2 , z) = B(y 1 − y 2 ) h 1 (y 1 ) − h 1 (y 2 ) + C · |z| γ (y 1 − y 2 ) h 2 (y 1 ) − h 2 (y 2 ) + D · |z| γ (y 1 − y 2 ) 2 Since h 1 , h 2 are decreasing, it holds for all y 1 , y 2 ∈ R, i = 1, 2 that (y 1 − y 2 )(h i (y 1 ) − h i (y 2 )) ≤ 0. Moreover B, C, |z| γ ≥ 0, hence (y 1 − y 2 ) f (t, y 1 , z) − f (t, y 2 , z) ≤ D · |z| γ (y 1 − y 2 ) 2 ≤ |D| · (1 + |z|)(y 1 − y 2 ) 2 , and f satisfies (F3). For all y 1 , y 2 ∈ R we have that |h 1 (y 1 ) − h 1 (y 2 )| = |y 1 − y 2 |. We now justify that h 2 satisfies for all y 1 , y 2 ∈ R |h 2 (y 1 ) − h 2 (y 2 )| ≤ 2|y 1 − y 2 | ̺ . (6.31) When y 1 , y 2 < 0 or y 1 , y 2 ≥ 0, by Lemma 6.3, we have |h 2 (y 1 ) − h 2 (y 2 )| = |y 1 | ̺ − |y 2 | ̺ ≤ |y 1 − y 2 | ̺ ≤ 2|y 1 − y 2 | ̺ . For the case when y 1 < 0, y 2 ≥ 0 (the case y 1 ≥ 0, y 2 < 0 is analogous) we have |h 2 (y 1 ) − h 2 (y 2 )| = ||y 1 | ̺ + y ̺ 2 | = |y 1 | ̺ + y ̺ 2 ≤ 2|y 1 − y 2 | ̺ , since −y 1 > 0, |y 1 − y 2 | = y 2 + (−y 1 ) ≥ y 2 ≥ 0, |y 1 − y 2 | = y 2 + (−y 1 ) ≥ −y 1 = |y 1 | ≥ 0, and [0, +∞) ∋ x → x ̺ is increasing. Combining the facts above we obtain for all t 1 , t 2 ≥ 0, y 1 , y 2 , z 1 , z 2 ∈ R |f (t 1 , y 1 , z 1 ) − f (t 2 , y 2 , z 2 )| ≤ B|h 1 (y 1 ) − h 1 (y 2 )| + C |z 1 | γ · h 2 (y 1 ) − |z 2 | γ · h 2 (y 2 ) + |D| · y 1 |z 1 | γ − y 2 |z 2 | γ ≤ B|y 1 − y 2 | + C · |z 1 | γ · |h 2 (y 1 ) − h 2 (y 2 )| + C · |h 2 (y 2 )| · |z 1 | γ − |z 2 | γ + |D| · |y 2 | · |z 1 | γ − |z 2 | γ + |D| · |z 1 | γ · |y 1 − y 2 | ≤ B|y 1 − y 2 | + 2C(1 + |z 1 |)|y 1 − y 2 | ̺ + C(1 + |y 2 |) · |z 1 − z 2 | γ + |D| · |y 2 | · |z 1 − z 2 | γ + |D| · (1 + |z 1 |) · |y 1 − y 2 | ≤ max{B + |D|, 2C, C + |D|} (1 + |z 1 | + |z 2 |)(1 + |y 1 | + |y 2 |)|t 1 − t 2 | + (1 + |z 1 | + |z 2 |) · |y 1 − y 2 | + (1 + |z 1 | + |z 2 |) · |y 1 − y 2 | ̺ + (1 + |y 1 | + |y 2 |) · |z 1 − z 2 | γ . That ends the proof. The proof of the fact below is analogous to the proof of Fact 6.4 and is omitted. Fact 6.5. Let A, B, C ≥ 0, D ∈ R, ̺, γ ∈ (0, 1] and define a function f : [0, +∞)×R×R → R as follows f (t, y, z) = A − B · sgn(y) · |y| − C · sgn(y) · |y| ̺ · |z| + D · y · z. Then the function f satisfies the assumptions (F1)-(F4). 
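As a quick sanity check of Facts 6.4 and 6.5, the right-hand side function can be written down directly and its one-sided Lipschitz property (F3) spot-checked numerically on random samples. The script below is only an illustration with arbitrarily chosen parameter values; it verifies on samples that the ratio bounded in the proof of Fact 6.4 indeed stays below |D|.

```python
import numpy as np

A, B, C, D, rho, gamma = 1.0, 2.0, 0.5, -0.3, 0.5, 0.7

def f_fact64(t, y, z):
    # f(t, y, z) = A - B*sgn(y)|y| - C*sgn(y)|y|^rho |z|^gamma + D*y*|z|^gamma
    return (A - B * np.sign(y) * np.abs(y)
            - C * np.sign(y) * np.abs(y) ** rho * np.abs(z) ** gamma
            + D * y * np.abs(z) ** gamma)

rng = np.random.default_rng(0)
worst = -np.inf
for _ in range(10_000):
    t = rng.uniform(0, 10)
    y1, y2, z = rng.uniform(-5, 5, size=3)
    lhs = (y1 - y2) * (f_fact64(t, y1, z) - f_fact64(t, y2, z))
    rhs = (1 + abs(z)) * (y1 - y2) ** 2
    # the ratio should stay bounded above by some H; H = |D| works, cf. Fact 6.4
    if rhs > 0:
        worst = max(worst, lhs / rhs)
print("largest observed ratio:", worst, "  |D| =", abs(D))
```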
The proof of the following fact is straightforward.

Fact 6.6. For all h ∈ (0, 1/2) it holds that 0 < 1/(1 − h) ≤ 1 + 2h ≤ 2.

The authors declare that they have no conflict of interest. The datasets generated and analysed during the current study are available from the corresponding author on reasonable request.

References
[1] Retarded differential equations
[2] A bibliography on the numerical solution of delay differential equations
[3] Issues in the numerical solution of evolutionary delay differential equations
[4] Delay differential equations: recent advances and new directions
[5] An optimal method for approximating the delay differential equations of noninteger order
[6] Numerical solution of constant coefficient linear delay differential equations as abstract Cauchy problems
[7] Numerical methods for delay differential equations
[8] Stability of linear delay differential equations: a numerical approach with MATLAB
[9] Numerical methods for ordinary differential equations
[10] Numerical approximation of the solutions of delay differential equations on an infinite interval using piecewise constant arguments
[11] On mathematical aspects of evolution of dislocation density in metallic materials
[12] Delay equations: functional-, complex-, and nonlinear analysis
[13] Ordinary and delay differential equations
[14] A unified phenomenological description of work hardening and creep based on one parameter models
[15] Hölder and locally Hölder continuous functions, and open sets of class C^k, C^{k,λ}
[16] Mathematical analysis for physicists (in Polish)
[17] Solving ordinary differential equations I: nonstiff problems
[18] Delay-dependent stability of linear multistep methods for DAEs with multiple delays
[19] History of delay equations
[20] Method of lines approximation of delay differential equations
[21] Oscillation and chaos in physiological control systems
[22] Modeling and forecasting of COVID-19 spreading by delayed stochastic differential equations
[23] Kinetics of flow and strain-hardening
[24] Prediction of distribution of microstructural parameters in metallic materials described by differential equations with recrystallization term
[25] Computational materials engineering: achieving high accuracy and efficiency in metals processing simulations
[26] An introduction to delay differential equations with applications to the life sciences
[27] Sensitivity analysis, identification and validation of the dislocation density-based model for metallic materials
[28] Stability of numerical methods for delay differential equations
[29] A brief history of information-based complexity
[30] Information-based complexity

This research was partly supported by the National Science Centre, Poland, under project 2017/25/B/ST1/00945. We would like to thank two anonymous referees for their valuable comments and suggestions that helped to improve the presentation of the results and the quality of this paper.

Proof. Fix n ∈ Z_+, ∆ ∈ [0, +∞), ξ ∈ R^d and y_0 ∈ B(ξ, ∆). By the assumption (G2) we have for all k = 0, 1, . . . , n − 1 that
‖y_{k+1}‖ ≤ ‖y_k‖ + h‖g(t_k, y_k)‖ ≤ (1 + hK)‖y_k‖ + hK,
and, since y_0 ∈ B(ξ, ∆), ‖y_0‖ ≤ ∆ + ‖ξ‖. Hence, by the discrete version of Gronwall's lemma we get for all k = 0, 1, . . . , n that
‖y_k‖ + 1 ≤ e^{K(b−a)}(1 + ‖y_0‖) ≤ e^{K(b−a)}(1 + ‖ξ‖)(1 + ∆).
This proves (6.14) with C̃_1 = e^{K(b−a)}. We now prove (6.15). For k = 0, 1, . . . , n − 1 we consider the following local ODE
z_k′(t) = g(t, z_k(t)), t ∈ [t_k, t_{k+1}], z_k(t_k) = y_k. (6.16)
By Lemma 6.1 there exists a unique solution z_k : [t_k, t_{k+1}] → R^d of (6.16). From the assumption (G2) and by (6.14) we get, for all t ∈ [t_k, t_{k+1}], k = 0, 1, . . . , n − 1, a bound on ‖z_k(t)‖; the use of Gronwall's lemma then yields (6.17). By (G2) and (6.17) we get the estimate (6.18) for all t ∈ [t_k, t_{k+1}], k = 0, 1, . . . , n − 1.
From Lemma 6.1 (ii) we arrive at the corresponding bound for k = 0, 1, . . . , n − 1. Using (G4), (6.14), (6.17) and (6.18) we get the required estimate. It is easy to see that for all x ∈ R_+ ∪ {0} and ̺ ∈ (0, 1]

(1 + x)^̺ ≤ 2(1 + x). (6.20)

Hence