key: cord-0059597-22927l3z
title: Analysis of Markov Jump Processes under Terminal Constraints
authors: Backenköhler, Michael; Bortolussi, Luca; Großmann, Gerrit; Wolf, Verena
date: 2021-03-01
journal: Tools and Algorithms for the Construction and Analysis of Systems
DOI: 10.1007/978-3-030-72016-2_12
sha: 801a14f6c13c9b7ffccbbbd5a216fee8c08591a1
doc_id: 59597
cord_uid: 22927l3z

Abstract. Many probabilistic inference problems such as stochastic filtering or the computation of rare event probabilities require model analysis under initial and terminal constraints. We propose a solution to this bridging problem for the widely used class of population-structured Markov jump processes. The method is based on a state-space lumping scheme that aggregates states in a grid structure. The resulting approximate bridging distribution is used to iteratively refine relevant and truncate irrelevant parts of the state-space. This way, the algorithm learns a well-justified finite-state projection yielding guaranteed lower bounds for the system behavior under endpoint constraints. We demonstrate the method's applicability to a wide range of problems such as Bayesian inference and the analysis of rare events.

1 Introduction

Discrete-valued continuous-time Markov jump processes (MJPs) are widely used to model the time evolution of complex discrete phenomena in continuous time. Such problems naturally occur in a wide range of areas such as chemistry [16], systems biology [49, 46], epidemiology [36] as well as queuing systems [10] and finance [39]. In many applications, an MJP describes the stochastic interaction of populations of agents. The state variables are counts of individual entities of different populations. Many tasks, such as the analysis of rare events or the inference of agent counts under partial observations, naturally introduce terminal constraints on the system. In these cases, the system's initial state is known, as well as the system's (partial) state at a later time-point.
The probabilities corresponding to this so-called bridging problem are often referred to as bridging probabilities [17, 19]. For instance, if the exact, full state of the process X_t has been observed at times 0 and T, the bridging distribution is given by

    Pr(X_t = x | X_0 = x_0, X_T = x_T)

for all states x and times t ∈ [0, T]. Often, the condition is more complex, such that in addition to an initial distribution, a terminal distribution is present. Such problems typically arise in a Bayesian setting, where the a priori behavior of a system is filtered such that the posterior behavior is compatible with noisy, partial observations [11, 25]. For example, time-series data of protein levels may be available while the mRNA concentration is not [1, 25]. In such a scenario our method can be used to identify a good truncation to analyze the probabilities of mRNA levels. Bridging probabilities also appear in the context of rare events. Here, the rare event is the terminal constraint because we are only interested in paths containing the event. Typically, researchers have to resort to Monte Carlo simulations in combination with variance reduction techniques in such cases [14, 26]. Efficient numerical approaches that are not based on sampling or ad-hoc approximations have rarely been developed.

Here, we combine state-of-the-art truncation strategies based on a forward analysis [28, 4] with a refinement approach that starts from an abstract MJP with lumped states. We base this lumping on a grid-like partitioning of the state-space. Throughout a lumped state, we assume a uniform distribution, which gives an efficient and convenient abstraction of the original MJP. Note that the lumping does not follow the classical paradigm of Markov chain lumpability [12] or its variants [15]. Instead of an approximate block structure of the transition matrix used in that context, we base our partitioning on a segmentation of the molecule counts.
Moreover, during the iterative refinement of our abstraction, we identify those regions of the state-space that contribute most to the bridging distribution. In particular, we refine those lumped states that have a bridging probability above a certain threshold δ and truncate all other macro-states. This way, the algorithm learns a truncation capturing most of the bridging probability. This truncation provides guaranteed lower bounds because it is at the granularity of the original model. In the rest of the paper, after presenting related work (Section 2) and background (Section 3), we discuss the method (Section 4) and several applications, including the computation of rare event probabilities as well as Bayesian smoothing and filtering (Section 5).

2 Related Work

The problem of endpoint-constrained analysis occurs in the context of Bayesian estimation [41]. For population-structured MJPs, this problem has been addressed by Huang et al. [25] using moment closure approximations and by Wildner and Köppl [48] further employing variational inference. Golightly and Sherlock modified stochastic simulation algorithms to approximately augment generated trajectories [17]. Since a statistically exact augmentation is only possible for a few simple cases, diffusion approximations [18] and moment approximations [35] have been employed. Such approximations, however, do not give any guarantees on the approximation error and may suffer from numerical instabilities [43]. The bridging problem also arises during the estimation of first passage times and rare event analysis. Approaches for first-passage times are often of heuristic nature [42, 22, 8]. Rigorous approaches yielding guaranteed bounds are currently limited by the performance of state-of-the-art optimization software [6]. In biological applications, rare events of interest are typically related to the reachability of certain thresholds on molecule counts or to mode switching [45].
Most methods for the estimation of rare event probabilities rely on importance sampling [26, 14]. For other queries, alternative variance reduction techniques such as control variates are available [5]. Apart from sampling-based approaches, dynamic finite-state projections have been employed by Mikeev et al. [34], but are lacking automated truncation schemes. The analysis of countably infinite state-spaces is often handled by a predefined truncation [27]. Sophisticated state-space truncations for the (unconditioned) forward analysis have been developed to give lower bounds and rely on a trade-off between computational load and tightness of the bound [37, 28, 4, 24, 31]. Reachability analysis, which is relevant in the context of probabilistic verification [8, 38], is a bridging problem where the endpoint constraint is the visit of a set of goal states. Backward probabilities are commonly used to compute reachability likelihoods [2, 50]. Approximate techniques for reachability, based on moment closure and stochastic approximation, have also been developed in [8, 9], but lack error guarantees. There is also a conceptual similarity between computing bridging probabilities and the forward-backward algorithm for computing state-wise posterior marginals in hidden Markov models (HMMs) [40]. Like MJPs, HMMs are generative models that can be conditioned on observations. We only consider two observations (initial and terminal state) that are not necessarily noisy, but the forward and backward probabilities admit the same meaning.

3 Background

A population-structured Markov jump process (MJP) describes the stochastic interactions among agents of distinct types in a well-stirred reactor. The assumption that all agents are equally distributed in space allows us to keep track of only the overall copy number of agents for each type. Therefore the state-space is S ⊆ N^{n_S}, where n_S denotes the number of agent types or populations. Interactions between agents are expressed as reactions.
These reactions have associated gains and losses of agents, given by non-negative integer vectors v_j^- and v_j^+ for reaction j, respectively. The overall effect is given by v_j = v_j^+ - v_j^-. A reaction between agents of types S_1, ..., S_{n_S} is specified in the following form:

    v_{j1}^- S_1 + ... + v_{j n_S}^- S_{n_S}  --α_j-->  v_{j1}^+ S_1 + ... + v_{j n_S}^+ S_{n_S}.   (1)

The propensity function α_j gives the rate of the exponentially distributed firing time of the reaction as a function of the current system state x ∈ S. In population models, mass-action propensities are most common. In this case the firing rate is given by the product of the number of reactant combinations in x and a rate constant c_j, i.e.

    α_j(x) = c_j ∏_{i=1}^{n_S} binom(x_i, v_{ji}^-).   (2)

In this case, we give the rate constant in (1) instead of the function α_j. For a given set of n_R reactions, we define a stochastic process {X_t}_{t≥0} describing the evolution of the population sizes over time t. Due to the assumption of exponentially distributed firing times, X is a continuous-time Markov chain (CTMC) on S with infinitesimal generator matrix Q, whose entries are

    Q_{x,y} = Σ_{j : x + v_j = y} α_j(x)  for x ≠ y,   Q_{x,x} = -Σ_j α_j(x).   (3)

The probability distribution over time can be analyzed as an initial value problem. Given an initial state x_0, the distribution

    π(x, t) = Pr(X_t = x | X_0 = x_0)   (4)

evolves according to the Kolmogorov forward equation

    d/dt π(t) = π(t) Q,   (5)

where π(t) is an arbitrary vectorization (π(x_1, t), π(x_2, t), ..., π(x_{|S|}, t)) of the states. Let x_g ∈ S be a fixed goal state. Given the terminal constraint X_T = x_g for some T ≥ 0, we are interested in the so-called backward probabilities

    β(x, t) = Pr(X_T = x_g | X_t = x).   (6)

Note that β(·, t) is a function of the conditional event and thus is no probability distribution over the state-space. Instead, β(·, t) gives the reaching probabilities for all states over the time span [t, T]. To compute these probabilities, we can employ the Kolmogorov backward equation

    d/dt β(t) = -Q β(t),   (7)

where we use the same vectorization to construct β(t) as we used for π(t). The above equation is integrated backwards in time and yields, for each state x_i and time t < T, the probability of ending up in x_g at time T.
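As a concrete illustration of the generator construction above, the following sketch (our own code, not the authors' prototype) assembles a dense Q-matrix for a truncated one-dimensional birth-death model; the rate constants and truncation size are arbitrary choices.

```python
import numpy as np

def build_generator(n_states, propensities, changes):
    """Dense generator matrix Q for a truncated 1-D population model.

    propensities: list of functions alpha_j(x); changes: state changes v_j.
    Transitions that would leave the truncation are dropped entirely
    (neither off-diagonal nor diagonal contribution), so every row of Q
    still sums to zero on the truncated space."""
    Q = np.zeros((n_states, n_states))
    for x in range(n_states):
        for alpha, v in zip(propensities, changes):
            y = x + v
            if 0 <= y < n_states:
                Q[x, y] += alpha(x)   # rate of jumping from x to y
                Q[x, x] -= alpha(x)   # diagonal: negative total exit rate
    return Q

# Birth-death model: birth at constant rate c1, death at mass-action rate c2*x.
c1, c2 = 10.0, 0.1
Q = build_generator(50, [lambda x: c1, lambda x: c2 * x], [+1, -1])
assert np.allclose(Q.sum(axis=1), 0.0)  # generator rows sum to zero
```

Note that a finite-state-projection analysis as in Section 4.1 would instead redirect the dropped out-of-range transitions to a sink state to track the truncation error.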
The state-space of many MJPs with population structure, even simple ones, is countably infinite. In this case, we have to truncate the state-space to a reasonable finite subset. The choice of this truncation heavily depends on the goal of the analysis. If one is interested in the most "common" behavior, for example, a dynamic mass-based truncation scheme is most appropriate [32]. Such a scheme truncates states with small probability during the numerical integration. However, common mass-based truncation schemes are not as useful for the bridging problem, because trajectories that meet the specific terminal constraints can be far off the main bulk of the probability mass. We solve this problem by a state-space lumping in connection with an iterative refinement scheme.

Consider as an example a birth-death process. This model can be used to describe a wide variety of phenomena and often constitutes a sub-module of larger models. For example, it can be interpreted as an M/M/1 queue with service rates depending linearly on the queue length. Note that even for this simple model, the state-space is countably infinite. The initial condition X_0 = 0 holds with probability one.

The process' probability distribution given both initial and terminal constraints is formally described by the conditional probabilities

    γ(x, t) = Pr(X_t = x | X_0 = x_0, X_T = x_g)   (8)

for fixed initial state x_0 and terminal state x_g. We call these probabilities the bridging probabilities. It is straightforward to see that γ admits the factorization

    γ(x, t) = π(x, t) β(x, t) / β(x_0, 0)   (9)

due to the Markov property. The normalization factor is given by the reachability probability β(x_0, 0) = Pr(X_T = x_g | X_0 = x_0). We call each γ(·, t) a bridging distribution. From the Kolmogorov equations (5) and (7) we can obtain both the forward probabilities π(·, t) and the backward probabilities β(·, t) for t < T. We can easily extend this procedure to deal with hitting times constrained by a finite time-horizon by making the goal state x_g absorbing.
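The factorization of the bridging distribution can be checked numerically on a small truncation. The sketch below (our own, assuming SciPy is available) uses the matrix exponential in place of ODE integration; the birth-death rates, goal state, and time points are illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

# Truncated birth-death generator: constant birth rate 1.0, death rate 0.1*x.
# The stationary distribution is close to Poisson(10), so N = 40 loses
# negligible mass.
N = 40
Q = np.zeros((N, N))
for x in range(N):
    if x + 1 < N:
        Q[x, x + 1] = 1.0
        Q[x, x] -= 1.0
    if x - 1 >= 0:
        Q[x, x - 1] = 0.1 * x
        Q[x, x] -= 0.1 * x

x0, xg, T, t = 0, 10, 8.0, 4.0

pi0 = np.zeros(N)
pi0[x0] = 1.0
pi_t = pi0 @ expm(Q * t)              # forward probabilities pi(., t)

beta_T = np.zeros(N)
beta_T[xg] = 1.0
beta_t = expm(Q * (T - t)) @ beta_T   # backward probabilities beta(., t)

# Bridging distribution: gamma(x, t) proportional to pi(x, t) * beta(x, t);
# the normalizer equals the reachability probability beta(x0, 0).
gamma_t = pi_t * beta_t
gamma_t /= gamma_t.sum()
assert np.isclose(gamma_t.sum(), 1.0)
```

On realistic models the dense matrix exponential is infeasible and the forward and backward ODEs are integrated instead, but on toy truncations it makes the construction transparent.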
In Figure 1 we plot the forward, backward, and bridging probabilities for Model 1. The probabilities are computed on a [0, 100] state-space truncation. The approximate forward solution π̂ shows how the probability mass drifts upwards towards the stationary distribution Poisson(100). The backward probabilities are highest for states below the goal state x_g = 40. This is expected because the upwards drift makes reaching x_g more probable for "lower" states. Finally, the approximate bridging distribution γ̂ can be recognized to be proportional to the product of the forward probabilities π̂ and the backward probabilities β̂.

4 Method

We first discuss the truncation of countably infinite state-spaces to analyze backward and forward probabilities (Section 4.1). To identify effective truncations, we employ a lumping scheme. In Section 4.2, we explain the construction of macro-states and the assumptions made, as well as the efficient calculation of transition rates between them. Finally, in Section 4.3 we present an iterative refinement algorithm yielding a suitable truncation for the bridging problem.

Even in simple models such as the birth-death process (Model 1), the reachable state-space is countably infinite. Direct analyses of the backward (6) and forward (4) probabilities are therefore often infeasible. Instead, the integration of the corresponding differential equations requires working with a finite subset of the infinite state-space [37]. If states are truncated, their incoming transitions from states that are not truncated can be re-directed to a sink state. The accumulated probability in this sink state is then used as an error estimate for the forward integration scheme. Consequently, many truncation schemes, such as dynamic truncations [4], aim to minimize the amount of "lost mass" of the forward probability. We use the same truncation method but base the truncation on bridging probabilities rather than the forward probabilities.
When dealing with bridging problems, the most likely trajectories from the initial to the terminal state are typically not known a priori. Especially if the event in question is rare, obtaining a state-space truncation adapted to its constraints is difficult.

We devise a lumping scheme that groups nearby states, i.e. similar molecule counts, into larger macro-states. A macro-state is a collection of states treated as one state in a lumped model, which can be seen as an abstraction of the original model. These macro-states form a partitioning of the state-space. In this lumped model, we assume a uniform distribution over the constituent micro-states inside each macro-state. Thus, given that the system is in a particular macro-state, all of its micro-states are equally likely. This partitioning allows us to analyze significant regions of the state-space efficiently, albeit under a rough approximation of the dynamics. Iterative refinement of the state-space after each analysis moves the dynamics closer to the original model. In the final step of the iteration, the considered system states are at the granularity of the original model, such that no approximation error is introduced by the assumptions of the lumping scheme. Computational efficiency is retained by truncating, in each iteration step, those states that contribute little probability mass to the (approximated) bridging distributions.

We choose a lumping scheme based on hypercube macro-states whose endpoints belong to a predefined grid. This topology makes the computation of transition rates between macro-states particularly convenient. Mass-action reaction rates, for example, can be given in closed form due to the Faulhaber formulae. More complicated rate functions such as Hill functions can often be handled as well by taking appropriate integrals. Our choice is a scheme that uses n_S-dimensional hypercubes.
A macro-state x̄_i(ℓ^(i), u^(i)) (denoted by x̄_i for notational ease) can therefore be described by two vectors ℓ^(i) and u^(i). The vector ℓ^(i) gives the corner closest to the origin, while u^(i) gives the corner farthest from the origin. Formally,

    x̄_i(ℓ^(i), u^(i)) = { x ∈ S | ℓ^(i) ≤ x ≤ u^(i) },   (10)

where '≤' stands for the element-wise comparison. This choice of topology makes the computation of transition rates between macro-states particularly convenient. Suppose we are interested in the set of micro-states in macro-state x̄_i that can transition to macro-state x̄_k via reaction j. It is easy to see that this set is itself an interval-defined macro-state x̄_i^{j→k}. To compute this macro-state we can simply shift x̄_i by v_j, take the intersection with x̄_k, and project this set back:

    x̄_i^{j→k} = ((x̄_i + v_j) ∩ x̄_k) − v_j,   (11)

where the additions are applied element-wise to all states making up the macro-states. For the correct handling of the truncation it is useful to define a general exit state x̄_i^{j→}. This state captures all micro-states inside x̄_i that can leave the state via reaction j. Note that all operations preserve the structure of a macro-state as defined in (10).

Since a macro-state is based on intervals, the computation of the transition rate is often straightforward. Under the assumption of polynomial rates, as is the case for mass-action systems, we can compute the sum of rates over this transition set efficiently using Faulhaber's formula. We define the lumped transition function

    ᾱ_j(x̄) = Σ_{x ∈ x̄} α_j(x)   (13)

for macro-state x̄ and reaction j. As an example, consider a mass-action reaction with reactant 2X and propensity α(x) = c x(x−1)/2. For a macro-state x̄ = {ℓ, ..., u} we can compute the corresponding lumped transition rate ᾱ(x̄) in closed form via the Faulhaber formulae for Σx and Σx², eliminating the explicit summation in the lumped propensity function. For polynomial propensity functions α such formulae are easily obtained automatically. For non-polynomial propensity functions, we can use the continuous integral as an approximation. This is demonstrated on a case study in Section 5.2.
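For the degradation example above, the Faulhaber formulae Σ_{x=0}^{n} x = n(n+1)/2 and Σ_{x=0}^{n} x² = n(n+1)(2n+1)/6 give the lumped rate in closed form. A minimal sketch (the function names are ours):

```python
def S1(n):
    """Sum of x for x = 0..n (Faulhaber, degree 1)."""
    return n * (n + 1) // 2

def S2(n):
    """Sum of x^2 for x = 0..n (Faulhaber, degree 2)."""
    return n * (n + 1) * (2 * n + 1) // 6

def lumped_rate(c, l, u):
    """Closed-form sum of alpha(x) = c*x*(x-1)/2 over x in {l, ..., u}."""
    sum_x2 = S2(u) - S2(l - 1)
    sum_x = S1(u) - S1(l - 1)
    return c * (sum_x2 - sum_x) / 2

# Cross-check against the explicit summation it replaces.
c, l, u = 0.5, 8, 15
brute = sum(c * x * (x - 1) / 2 for x in range(l, u + 1))
assert abs(lumped_rate(c, l, u) - brute) < 1e-9
```

The closed form is evaluated in constant time per macro-state, independent of the interval width, which is what makes coarse initial lumpings cheap.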
Using the transition set computation (11) and the lumped propensity function (13) we can populate the Q̄-matrix of the finite lumping approximation:

    Q̄_{x̄_i, x̄_k} = Σ_j ᾱ_j(x̄_i^{j→k}) / |x̄_i|   for i ≠ k.   (14)

In addition to the lumped rate function over the transition state x̄_i^{j→k}, we need to divide by the total volume |x̄_i| of the lumped state x̄_i. This is due to the assumption of a uniform distribution inside the macro-states. Using this Q̄-matrix, we can compute the forward and backward solutions using the respective Kolmogorov equations (5) and (7). Interestingly, the lumped distribution tends to be less concentrated, again due to the assumption of a uniform distribution inside macro-states. This effect is illustrated by the example of a birth-death process in Figure 2. Due to this effect, an iterative refinement typically keeps an over-approximation in terms of state-space area. This is a desirable feature, since relevant regions are less likely to be pruned due to lumping approximations.

The iterative refinement algorithm (Alg. 1) starts with a set of large macro-states that are iteratively refined based on approximate solutions to the bridging problem. We start by constructing square macro-states of side length 2^m for some m ∈ N such that they form a large-scale grid S^(0). Hence, each initial macro-state has a volume of (2^m)^{n_S}. This choice of grid size is convenient because we can halve states in each dimension. Moreover, it ensures that all states have equal volume and that we end up with states of volume 2^0 = 1, which is equivalent to a truncation of the original non-lumped state-space. An iteration of the state-space refinement starts by computing both the forward and backward probabilities (lines 2 and 3) via integration of (5) and (7), respectively, using the lumped Q̄-matrix. Based on the resulting approximate forward and backward probabilities, we compute an approximation of the bridging distributions (line 4). This is done for each time-point in an equispaced grid on [0, T].
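For a one-dimensional birth-death model the lumped rates are particularly simple, which makes the division by the macro-state volume explicit. The sketch below is our own illustration: only the highest micro-state of a macro-state can exit to the right neighbour via a birth, and only the lowest can exit to the left via a death.

```python
# Macro-state {l, ..., u} of width w = u - l + 1, uniform assumption inside.

def lumped_birth_rate(lam, l, u):
    """Birth (x -> x+1) at constant rate lam: the exit set to the right
    neighbour is the single micro-state {u}, so the lumped rate is lam/w."""
    return lam / (u - l + 1)

def lumped_death_rate(mu, l, u):
    """Death (x -> x-1) at mass-action rate mu*x: the exit set to the left
    neighbour is {l}, so the lumped rate is mu*l/w."""
    return mu * l / (u - l + 1)

lam, mu = 10.0, 0.1
l, u = 8, 11  # width-4 macro-state {8, ..., 11}
assert abs(lumped_birth_rate(lam, l, u) - 10.0 / 4) < 1e-12
assert abs(lumped_death_rate(mu, l, u) - 0.1 * 8 / 4) < 1e-12
```

The division by the width w reflects that, under the uniform assumption, the chain occupies the single exiting micro-state only a 1/w fraction of the time it spends in the macro-state.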
The time grid granularity is a hyper-parameter of the algorithm. If the grid is too fine, the memory overhead of storing the backward solutions β̂^(i) and forward solutions π̂^(i) increases. If, on the other hand, the granularity is too low, too much of the state-space might be truncated. Based on a threshold parameter δ > 0, states are either removed or split (line 7), depending on the mass assigned to them by the approximate bridging probabilities γ̂_t^(i). A state is split by the split-function, which halves the state in each dimension; otherwise, it is removed. Thus, each macro-state is either split into 2^{n_S} new states or removed entirely. The result forms the next lumped state-space S^(i+1). The Q̄-matrix is adjusted (line 10) such that transition rates for S^(i+1) are calculated according to (14). Entries of truncated states are removed from the transition matrix, and transitions leading to them are re-directed to a sink state (see Section 4.1). After m iterations (we started with states of side length 2^m) we have a standard finite state projection scheme on the original model, tailored to computing an approximation of the bridging distribution.

In Figure 3 we demonstrate how Algorithm 1 iteratively refines the state-space. Starting with an initial lumped state-space S^(0) covering a large area of the state-space, repeated evaluations of the bridging distributions are performed. After five iterations, the remaining truncation includes all states that significantly contribute to the bridging probabilities over the times [0, T].

It is important to realize that determining the most relevant states is the main challenge. The above algorithm solves this problem by considering only those parts of the state-space that contribute most to the bridging probabilities. The truncation is tailored to this condition and might ignore regions that are likely in the unconditioned case. For instance, in Fig.
1, the bridging probabilities mostly remain below a population threshold of #X = 60 (as indicated by the lighter/darker coloring), while the forward probabilities mostly exceed this bound. Hence, in this example a significant portion of the forward probabilities π̂_t^(i) is captured by the sink state. However, the condition in line 7 of Algorithm 1 ensures that states contributing significantly to γ̂_t^(i) are kept and refined in the next iteration.

5 Results

We present four examples in this section to evaluate our proposed method. A prototype was implemented in Python 3.8. For numerical integration we used the Scipy implementation [47] of the implicit method based on backward-differentiation formulas [13]. The analysis as a Jupyter notebook is made available online.

Table 1. Estimated reachability probabilities based on varying truncation thresholds δ; the true probability is 1.8625e-29. We also report the size of the final truncation and the accumulated size of all truncations during the refinement iterations (overall states).

We consider a simple model of two parallel Poisson processes describing the production of two types of agents. The corresponding probability distribution has Poisson product form at all time points t ≥ 0, and hence we can compare the accuracy of our numerical results with the exact analytic solution. We use the proposed approach to compute lower bounds for rare event probabilities. The initial condition X_0 = (0, 0) holds with probability one. After t time units each species' abundance is Poisson distributed with rate λ = t. We consider the terminal constraint of reaching a state where both processes exceed a threshold of 64 at time 20. Without prior knowledge, a reasonable truncation would have been 160 × 160 states. But our analysis shows that just 20% of these states are necessary to capture over 99.6% of the probability mass reaching the target event (cf. Table 1).
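Since the unconstrained terminal distribution is a product of Poisson distributions, the reported true probability can be reproduced independently. A short sketch (assuming SciPy's `poisson`; we read "exceed a threshold of 64" as each count reaching at least 64):

```python
from scipy.stats import poisson

# Two independent Poisson processes observed at T = 20: each count is
# Poisson(20). Terminal event: both counts are at least 64.
p_single = poisson.sf(63, 20)  # Pr(Poisson(20) > 63) = Pr(count >= 64)
p_event = p_single ** 2        # independence of the two processes
assert 1.5e-29 < p_event < 2.2e-29  # consistent with the reported 1.8625e-29
```

This kind of closed-form cross-check is exactly why the parallel Poisson model is a useful benchmark for the truncation-based lower bounds.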
Decreasing the threshold δ leads to a larger set of states retained after truncation, as more of the bridging distribution is included (cf. Figure 4). We observe an increase in truncation size that is approximately logarithmic in δ, which, in this example, indicates robustness of the method with respect to the choice of δ.

Comparison to other methods. The truncation approach we apply is similar to the one used by Mikeev et al. [34] for rare event estimation. However, they used a given linearly biased MJP model to obtain a truncation; a general strategy to compute an appropriate biasing was not proposed. It is possible to adapt our truncation approach to the dynamic scheme in Ref. [34], where states are removed in an on-the-fly fashion during numerical integration. A finite state-space truncation covering the same area as the initial lumping approximation would contain 25,600 states. The standard approach would be to build up the entire state-space for such a model [27]. Even using a conservative truncation threshold δ = 1e-5, our method yields an accurate estimate using only about a fifth of this number (5,450 states), accumulated over all intermediate lumped approximations.

Mode switching occurs in models exhibiting multi-modal behavior [44] when a trajectory traverses a potential barrier from one mode to another. Often, mode switching is a rare event and occurs in the context of gene regulatory networks, where a mode is characterized by the set of genes being currently active [30]. Similar dynamics also commonly occur in queuing models, where a system may, for example, switch its operating behavior stochastically if traffic increases above or decreases below certain thresholds. Using the presented method, we can get both a qualitative and quantitative understanding of switching behavior without resorting to Monte Carlo methods such as importance sampling.

Exclusive Switch. The exclusive switch [7] has three different modes of operation, depending on the DNA state, i.e.
on whether a protein of type one or two is bound to the DNA. The parameter values are ρ = 1e-1, λ = 1e-3, β = 1e-2, γ = 8e-3, and α = 1e-1. Since we know a priori of the three distinct operating modes, we adjust the method slightly: the state-space for the DNA states is not lumped. Instead, we "stack" lumped approximations of the P1-P2 phase space upon each other. Special treatment of DNA states is common for such models [28]. To analyze the switching, we choose the transition from x_1 = (32, 0, 0, 0, 1) to x_2 = (0, 32, 0, 1, 0) (variable order: P1, P2, D, D.P1, D.P2) over the time interval t ∈ [0, 10]. The initial lumping scheme covers up to 80 molecules of P1 and P2 for each mode. Macro-states have size 8 × 8 and the truncation threshold is δ = 1e-4.

In the analysis of biological switches, not only the switching probability but also the switching dynamics are a central part of understanding the underlying biological mechanisms. In Figure 5 (left), we therefore plot the time-varying probabilities of the gene state conditioned on the mode. We observe a rapid unbinding of P2, followed by a slow increase of the binding probability for P1. These dynamics are already qualitatively captured by the first lumped approximation (dashed lines).

Toggle Switch. Next, we apply our method to a toggle switch model exhibiting non-polynomial rate functions. This well-known model considers two proteins A and B, each inhibiting the production of the respective other protein [29], with the following reactions and reaction rates. The parameterization is ρ = 10, λ = 0.1. Due to the non-polynomial rate functions α_1 and α_2, the transition rates between macro-states are approximated by the continuous integral over a macro-state x̄ = {a, ..., b}. We analyze the switching scenario from (0, 120) to the first visit of state (120, 0) up to time T = 10. The initial lumping scheme covers up to 352 molecules of A and B, and macro-states have size 32 × 32.
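The continuous-integral approximation of lumped rates can be illustrated on a Hill-type propensity. The rate function below is our own stand-in, not necessarily the model's α_1, and the macro-state bounds are arbitrary:

```python
from scipy.integrate import quad

# Hypothetical Hill-type propensity (our stand-in): the repressor count x
# inhibits production at baseline rate rho.
rho = 10.0
alpha = lambda x: rho / (1.0 + x)

a, b = 32, 63  # a width-32 macro-state {32, ..., 63}

# Exact lumped rate: explicit sum over the micro-states.
exact = sum(alpha(x) for x in range(a, b + 1))

# Continuous approximation: integrate over [a - 1/2, b + 1/2] so each
# micro-state contributes a unit-width cell centred at the integer.
approx, _err = quad(alpha, a - 0.5, b + 0.5)

assert abs(approx - exact) / exact < 1e-3  # sub-0.1% error here
```

For smooth, slowly varying propensities the midpoint-style integral is very accurate, which is why the non-polynomial toggle-switch rates can be lumped without Faulhaber-type closed forms.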
The truncation threshold is δ = 1e-4. The resulting truncation is shown in Figure 5 (right), which also illustrates the kind of insight that can be obtained from the bridging distributions. For an overview of the switching dynamics, we look at the expected occupation time under the terminal constraint of having entered state (120, 0). Letting the corresponding hitting time be τ = inf{t ≥ 0 | X_t = (120, 0)}, the expected occupation time for a state x is

    E[ ∫_0^τ 1_{X_t = x} dt | τ ≤ 10 ].

We observe that in this example the switching behavior appears to be asymmetric: the main probability mass seems to pass through an area where initially a small number of A molecules is produced, followed by a total decay of the B molecules.

We now turn to the method's application in recursive Bayesian estimation. This is the problem of estimating the system's past, present, and future behavior under given observations. Thus, the MJP becomes a hidden Markov model (HMM). The observations in such models are usually noisy, meaning that we cannot infer the system state with certainty. This estimation problem entails more general distributional constraints on the terminal distribution β(·, T) and initial distribution π(·, 0) than the point-mass distributions considered up until now. We can easily extend the forward and backward probabilities to a general initial distribution π(0) and terminal distribution β(T). For the forward probabilities we get

    π(x, t) = Σ_{x'} Pr(X_t = x | X_0 = x') π(x', 0),   (16)

and similarly the backward probabilities are given by

    β(x, t) = Σ_{x'} Pr(X_T = x' | X_t = x) β(x', T).   (17)

We apply our method to an SEIR (susceptible-exposed-infected-removed) model. This model is widely used to describe the spreading of an epidemic such as the current COVID-19 outbreak [23, 20]. Temporal snapshots of the epidemic spread are mostly available only for a subset of the population and suffer from inaccuracies of diagnostic tests. Bayesian estimation can then be used to infer the spreading dynamics given uncertain temporal snapshots. A population of susceptible individuals can contract a disease from infected agents.
In this case, they are exposed, meaning they will become infected but cannot yet infect others. After the infectious period, infected individuals change to the removed state. The mass-action reactions are as follows:

    S + I → E + I,   E → I,   I → R.

The parameter values are λ = 0.5, μ = 3, ρ = 3. Due to the stoichiometric invariant S + E + I + R = const., we can eliminate R from the system. We consider the following scenario: we know that initially (t = 0) one individual is infected and the rest are susceptible. At time t = 0.3, all individuals are tested for the disease. The test, however, only identifies infected individuals with probability 0.99. Moreover, the probability of a false positive is 0.05. We would like to identify the distribution given both the initial state and the measurement at time t = 0.3. In particular, we want to infer the distribution over the latent counts of S and E by recursive Bayesian estimation. The posterior for n_I infected individuals at time t, given a measurement Y_t, can be computed using Bayes' rule:

    Pr(X_t = x | Y_t) ∝ Pr(Y_t | X_t = x) π(x, t).   (18)

This problem is an extension of the bridging problem discussed up until now. The difference is that the terminal posterior is estimated via (18) using the result of the lumped forward equation and the measurement distribution. Based on this estimated terminal posterior, we compute the bridging probabilities and refine the truncation tailored to the location of the posterior distribution. In Figure 6 (left), we illustrate the bridging distribution between the terminal posterior and the initial distribution. In the context of filtering problems this is commonly referred to as smoothing. Using the learned truncation, we can obtain the posterior distribution for the number of infected individuals at t = 0.3 (Figure 6 (middle)). Moreover, we can infer a distribution over the unknown number of susceptible and exposed individuals (Figure 6 (right)).
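The Bayes update at the observation time can be sketched as follows. The binomial observation model matches the stated sensitivity (0.99) and false-positive rate (0.05), but the population size, the observed count, and the uniform prior (standing in for the lumped forward solution) are our own toy choices:

```python
import numpy as np
from scipy.stats import binom

def likelihood(y, n_i, n_total, sens=0.99, fpr=0.05):
    """Pr(Y = y positives | n_i infected out of n_total individuals).

    Each infected individual tests positive with probability `sens`,
    each non-infected one with probability `fpr`; the observed count is
    therefore a convolution of two binomials."""
    ks = np.arange(0, min(y, n_i) + 1)  # true positives among the infected
    return np.sum(binom.pmf(ks, n_i, sens)
                  * binom.pmf(y - ks, n_total - n_i, fpr))

N, y = 20, 6  # toy population size and observed positive count
prior = np.full(N + 1, 1.0 / (N + 1))  # placeholder for the forward solution
post = np.array([likelihood(y, n, N) * prior[n] for n in range(N + 1)])
post /= post.sum()  # Bayes' rule: posterior over the infected count
assert np.isclose(post.sum(), 1.0)
```

In the method itself the prior at t = 0.3 is the (lumped) forward solution rather than a uniform vector, and the resulting posterior serves as the terminal distribution β(T) for the backward pass.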
6 Conclusion

The analysis of Markov jump processes with constraints on the initial and terminal behavior is an important part of many probabilistic inference tasks, such as parameter estimation using Bayesian or maximum likelihood estimation, inference of latent system behavior, the estimation of rare event probabilities, and reachability analysis for the verification of temporal properties. If endpoint constraints correspond to atypical system behaviors, standard analysis methods fail, as they have no strategy to identify those parts of the state-space relevant for meeting the terminal constraint.

Here, we proposed a method that is not based on stochastic sampling and statistical estimation but provides a direct numerical approach. It starts with an abstract lumped model, which is iteratively refined such that only those parts of the model are considered that contribute to the probabilities of interest. In the final step of the iteration, we operate at the granularity of the original model and compute lower bounds for the bridging probabilities that are rigorous up to the error of the numerical integration scheme. Our method exploits the population structure of the model, which is present in many important application fields of MJPs. Based on experience with other truncation-based approaches, the method can be expected to scale up to at least a few million states [33].

Compared to previous work, our method neither relies on approximations of unknown accuracy nor on additional information such as a suitable change of measure in the case of importance sampling. It only requires a truncation threshold and an initial choice for the macro-state sizes. In future work, we plan to extend our method to hybrid approaches, in which a moment representation is employed for large populations while discrete counts are maintained for small populations. Moreover, we will apply our method to model checking, where constraints are described by some temporal logic [21].
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

References
- Flow cytometry: basic principles and applications
- Backward solution of Markov chains and Markov regenerative processes: Formalization and applications
- On-the-fly uniformization of time-inhomogeneous infinite Markov population models
- Parameter identification for Markov models of biochemical reactions
- Control variates for stochastic simulation of chemical reaction networks
- Bounding mean first passage times in population continuous-time Markov chains
- Calculation of switching times in the genetic toggle switch and other bistable systems
- Stochastic approximation of global reachability probabilities of Markov population models
- Model checking Markov population models by stochastic approximations
- From Markov jump processes to spatial queues
- Bayesian inference for stochastic processes
- Exact and ordinary lumpability in finite Markov chains
- A polyalgorithm for the numerical solution of ordinary differential equations
- Automated estimation of rare event probabilities in biochemical systems
- Quasi lumpability, lower-bounding coupling matrices, and nearly completely decomposable Markov chains
- Exact stochastic simulation of coupled chemical reactions
- Efficient sampling of conditioned Markov jump processes
- Bayesian inference for stochastic kinetic models using a diffusion approximation
- Bayesian parameter inference for stochastic biochemical network models using particle Markov chain Monte Carlo
- Importance of interaction structure and stochasticity for epidemic spreading: A COVID-19 case study
- Data-informed parameter synthesis for population Markov chains
- Fluid computation of passage-time distributions in large Markov models
- SEIR modeling of the COVID-19 and its dynamics
- Sliding window abstraction for infinite Markov chains
- Reconstructing dynamic molecular states from single-cell time series
- An efficient and exact stochastic simulation method to analyze rare events in biochemical systems
- PRISM 4.0: Verification of probabilistic real-time systems
- SHAVE: Stochastic hybrid analysis of Markov population models
- Genetic toggle switch without cooperative binding
- Stochastic simulations of genetic switch systems
- On-the-fly verification and optimization of DTA-properties for large Markov chains
- Approximate numerical integration of the chemical master equation for stochastic reaction networks
- Efficient calculation of rare event probabilities in Markovian queueing networks
- Numerical approximation of rare event probabilities in biochemically reacting systems
- Moment closure based parameter inference of stochastic kinetic models
- Stochastic processes in epidemiology: HIV/AIDS, other infectious diseases, and computers
- The finite state projection algorithm for the solution of the chemical master equation
- STAMINA: Stochastic approximate model-checker for infinite-state analysis
- Markov processes and applications: algorithms, networks, genome and finance
- An introduction to hidden Markov models
- Bayesian filtering and smoothing
- Efficient low-order approximation of first-passage time distributions
- Validity conditions for moment closure approximations in stochastic chemical kinetics
- Emergence of switch-like behavior in a large family of simple biochemical networks
- Stability and multiattractor dynamics of a toggle switch based on a two-stage model of stochastic gene expression
- Stochastic approaches for systems biology
- SciPy 1.0: Fundamental algorithms for scientific computing in Python
- Moment-based variational inference for Markov jump processes
- Stochastic modelling for systems biology
- Safe on-the-fly steady-state detection for time-bounded reachability

Acknowledgements This project was supported by the DFG project MULTI-MODE and Italian PRIN project SEDUCE.