authors: Finkel, Alain; Haddad, Serge; Khmelnitsky, Igor
title: Minimal Coverability Tree Construction Made Complete and Efficient
date: 2020-04-17
journal: Foundations of Software Science and Computation Structures
DOI: 10.1007/978-3-030-45231-5_13

(This work was carried out in the framework of ReLaX, UMI2000, and was also supported by the ANR-17-CE40-0028 project BRAVAS.)

Downward closures of Petri net reachability sets can be finitely represented by their set of maximal elements, called the minimal coverability set or Clover. Many properties (coverability, boundedness, ...) can be decided using Clover, in time proportional to the size of Clover. So it is crucial to design algorithms that compute it efficiently. We present a simple modification of the original but incomplete Minimal Coverability Tree algorithm (MCT), computing Clover, which makes it complete: it memorizes accelerations and fires them as ordinary transitions. Contrary to the other alternative algorithms, for which no bound on the size of the required additional memory is known, we establish that the additional space of our algorithm is at most doubly exponential. Furthermore we have implemented a prototype, MinCov, which is already very competitive: on benchmarks it uses less space than all the other tools and its execution time is close to that of the fastest tool.

Coverability and coverability set in Petri nets. Petri nets are an iconic infinite-state model used for verifying concurrent systems. Coverability is the most studied property of Petri nets for several reasons: (1) many properties like mutual exclusion, safety and control-state reachability reduce to coverability, (2) the coverability problem is EXPSPACE-complete (while reachability is non-elementary), and (3) there exist efficient prototypes and numerous case studies. To solve the coverability problem, there are backward and forward algorithms. But these algorithms do not address relevant problems like repeated coverability, LTL model checking, boundedness and regularity of the traces. However these problems are EXPSPACE-complete [4, 1] and are also decidable using the Karp-Miller tree algorithm (KMT) [11], which computes a finite tree labeled by a set of ω-markings C ⊆ N_ω^P (where N_ω is the set of naturals enlarged with an upper bound ω and P is the set of places) such that the reachability set and the finite set C have the same downward closure in N^P. Thus a marking m is coverable if there exists some m′ ≥ m with m′ ∈ C. Hence, C can be seen as one among all the possible finite representations of the infinite downward closure of the reachability set. This set C allows, for instance, solving multiple instances of coverability in time linear w.r.t. the size of C, avoiding many calls to a costly algorithm. Informally, the KMT algorithm builds a reachability tree but, in order to ensure termination, substitutes ω for some finite components of the marking of a vertex when the marking of some ancestor is smaller. Unfortunately C may contain comparable markings while only the maximal elements are important. The set of maximal elements of C can be defined independently of the KMT algorithm; it was called the minimal coverability set (MCS) in [6] and abbreviated as the Clover in the more general framework of Well Structured Transition Systems (WSTS) [7].
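As an illustration of this use of C, the following Python sketch (Python is also the language of the prototype described later) answers a coverability query by a component-wise comparison against each element of a given finite representation. The dictionary encoding of ω-markings and all identifiers are illustrative choices, not taken from any of the tools discussed below.

# Sketch: deciding coverability against a finite representation C of the
# downward closure of the reachability set (e.g. the Clover).
# An (omega-)marking is a dict from place names to a number, with omega
# encoded as float('inf'); these conventions are ours, for illustration only.
OMEGA = float("inf")

def leq(m1, m2):
    """Component-wise comparison of two (omega-)markings over the same places."""
    return all(m1[p] <= m2[p] for p in m1)

def is_coverable(m, representation):
    """m is coverable iff some element of the representation dominates it."""
    return any(leq(m, c) for c in representation)

# Example: C = {p1 + omega.p2}; the marking p1 + 3.p2 is coverable, 2.p1 is not.
C = [{"p1": 1, "p2": OMEGA}]
print(is_coverable({"p1": 1, "p2": 3}, C))  # True
print(is_coverable({"p1": 2, "p2": 0}, C))  # False

Each query thus costs one comparison per element of C, i.e. time linear in the size of C.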
The minimal coverability tree algorithm. In [5, 6] the author computes the minimal coverability set by modifying the KMT algorithm in such a way that at each step of the algorithm, the set of ω-markings labelling vertices is an antichain. But this aggressive strategy, implemented by the so-called Minimal Coverability Tree algorithm (MCT), contains a subtle bug and may compute a strict under-approximation of Clover, as shown in [8, 10].

Alternative minimal coverability set algorithms. Since the discovery of this bug, three algorithms (with variants) [10, 14, 13] have been designed for computing the minimal coverability set without building the full Karp-Miller tree. In [10] the authors proposed a minimal coverability set algorithm (called CovProc) that is not based on the Karp-Miller tree algorithm but uses a similar yet restricted introduction of ω's. In [14], Reynier and Servais proposed a modification of MCT, called the Monotone-Pruning algorithm (MP), that keeps but "deactivates" vertices labeled with smaller ω-markings where MCT would have deleted them. Recently, in [15], the authors simplified their original proof of correctness. In [16], Valmari and Hansen proposed another algorithm (denoted below as VH) for constructing the minimal coverability set without deleting vertices. Their algorithm builds a graph and not a tree as usual. In [13], Piipponen and Valmari improved this algorithm by designing appropriate data structures and heuristics for the exploration strategy that may significantly decrease the size of the graph.

Our contributions. 1. We introduce the concept of abstraction as an ω-transition that mimics, w.r.t. coverability, the effect of an infinite family of firing sequences. As a consequence, adding abstractions to the net does not modify its coverability set. Moreover, the classical Karp-Miller acceleration can be formalized as an abstraction whose incidence on places is either ω or null. The set of accelerations of a net is upward closed and well-ordered. Hence there exists a finite subset of minimal accelerations, and we show that the size of every minimal acceleration is bounded by a double exponential. 2. Despite the current opinion that "The flaw is intricate and we do not see an easy way to get rid of it....Thus, from our point of view, fixing the bug of the MCT algorithm seems to be a difficult task" [10], we have found a simple modification of MCT which makes it correct. It mainly consists in memorizing discovered accelerations and using them as ordinary transitions. 3. Contrary to all existing minimal coverability set algorithms, which use an unknown amount of additional memory that could be non primitive recursive, we show, by applying a recent result of Leroux [12], that the additional memory required for accelerations is at most doubly exponential. 4. We have developed a prototype in order to empirically evaluate the efficiency of our algorithm, and the benchmarks (either from the literature or randomly generated) have confirmed that our algorithm requires significantly less memory than the other algorithms and is close to the fastest tool w.r.t. execution time.

Organization. Section 2 introduces abstractions and accelerations and studies their properties. Section 3 presents our algorithm and establishes its correctness. Section 4 describes our tool and discusses the results of the benchmarks. We conclude and give some perspectives on this work in Section 5. One can find all the missing proofs and an illustration of the behavior of the algorithm in [9].
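To give an informal picture of the Karp-Miller acceleration mentioned in contribution 1 before the formal definitions of Section 2, the following Python sketch shows its effect on an ω-marking: when a marking m2 reached from a smaller marking m1 dominates it, the places that strictly grew can be pumped arbitrarily and are set to ω. The encoding and the function name are illustrative only.

# Sketch of the effect of the classical Karp-Miller acceleration: given
# m1 <= m2 with m2 reached from m1, every place that strictly increased is
# set to omega. Encoding: dict from places to int or float('inf') for omega.
OMEGA = float("inf")

def accelerate(m1, m2):
    """Omega-marking obtained by accelerating the pair m1 <= m2."""
    assert all(m1[p] <= m2[p] for p in m1)
    return {p: (OMEGA if m2[p] > m1[p] else m2[p]) for p in m2}

# A firing sequence added one token in p2 while preserving p1: p2 is pumped.
print(accelerate({"p1": 1, "p2": 0}, {"p1": 1, "p2": 1}))  # {'p1': 1, 'p2': inf}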
Here we define Petri nets differently from the usual way but in an equivalent manner, i.e. based on the backward incidence matrix Pre and the incidence matrix C. The forward incidence matrix is implicitly defined by C + Pre. Such a choice is motivated by the introduction of abstractions in Section 2.2. A Petri net is a tuple N = ⟨P, T, Pre, C⟩ where:
-P is a finite set of places;
-T is a finite set of transitions, with P ∩ T = ∅;
-Pre ∈ N^{P×T} is the backward incidence matrix;
-C ∈ Z^{P×T} is the incidence matrix, which fulfills: for all p ∈ P and t ∈ T, C(p, t) + Pre(p, t) ≥ 0.
A marked Petri net (N, m_0) is a Petri net N equipped with an initial marking m_0 ∈ N^P. The column vector of matrix Pre (resp. C) indexed by t ∈ T is denoted Pre(t) (resp. C(t)). A transition t ∈ T is fireable from a marking m ∈ N^P if m ≥ Pre(t). When t is fireable from m, its firing leads to the marking m′ := m + C(t), denoted by m −t→ m′. One extends fireability and firing to a sequence σ ∈ T* by recurrence on its length. The empty sequence ε is always fireable and leaves the marking unchanged. Let σ = tσ′ be a sequence with t ∈ T and σ′ ∈ T*. Then σ is fireable from m if m −t→ m′ and σ′ is fireable from m′. The firing of σ from m leads to the marking m″ reached by σ′ from m′. One also denotes this firing by m −σ→ m″. In order to introduce the coverability set of a Petri net, let us recall some definitions and results related to ordered sets. Let (X, ≤) be an ordered set. The downward (resp. upward) closure of a subset E ⊆ X is denoted by ↓E (resp. ↑E) and defined by: ↓E = {x ∈ X | ∃y ∈ E, x ≤ y} (resp. ↑E = {x ∈ X | ∃y ∈ E, y ≤ x}). An antichain E is a set which fulfills: ∀x ≠ y ∈ E, ¬(x ≤ y ∨ y ≤ x). X is said to be FAC (for Finite AntiChains) if all its antichains are finite. A non-empty set E ⊆ X is directed if for all x, y ∈ E there exists z ∈ E such that x ≤ z and y ≤ z. An ideal is a set which is downward closed and directed. There exists an equivalent characterization of FAC sets which provides a finite description of any downward closed set: a set is FAC if and only if every downward closed set admits a finite decomposition into ideals (a proof of this well-known result can be found in [3]). X is well founded if all its (strictly) decreasing sequences are finite. X is well ordered if it is FAC and well founded. There are many equivalent characterizations of well order. For instance, a set X is well ordered if and only if every sequence (x_n)_{n∈N} in X admits an infinite non-decreasing subsequence. This characterization allows one to design algorithms that compute trees whose finiteness is ensured by the well order. Let us recall that (N, ≤) and (N^P, ≤) are well ordered sets. We are now ready to introduce the cover (also called the coverability set) of a net, namely the downward closure of its reachability set, Cover(N, m_0) := ↓{m ∈ N^P | ∃σ ∈ T*, m_0 −σ→ m}, and to state some of its properties. Since the coverability set is downward closed and N^P is FAC, it admits a finite decomposition into ideals. The ideals of N^P can be defined in an elegant way as follows. One first extends the sets of naturals and integers: N_ω := N ∪ {ω} and Z_ω := Z ∪ {ω}. Then one extends the order relation and the addition to Z_ω: for all n ∈ Z, ω > n, and for all n ∈ Z_ω, n + ω = ω + n = ω. N_ω^P is also a well ordered set and its members are called ω-markings. There is a one-to-one mapping between ideals of N^P and ω-markings. Let m ∈ N_ω^P. Define ⟦m⟧ := {m′ ∈ N^P | m′ ≤ m}. Then ⟦m⟧ is an ideal of N^P (and every ideal can be defined in such a way). Let Ω be a set of ω-markings; ⟦Ω⟧ denotes the set ⋃_{m∈Ω} ⟦m⟧. Due to the above properties, there exists a unique finite set of minimal size Clover(N, m_0) ⊆ N_ω^P such that: ⟦Clover(N, m_0)⟧ = Cover(N, m_0). A more general result can be found in [3] for well structured transition systems.
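As a small illustration of these definitions, the following Python sketch encodes a net by its Pre and C matrices and implements the firing rule, together with its extension from markings to ω-markings used below. The dict-of-dicts encoding and the class name are illustrative choices, not the data structures of our tool.

# Sketch of the net model of this section: a net is given by Pre and C; the
# forward incidence matrix is implicitly Pre + C. Matrices are encoded as
# dicts indexed by transitions then places; names are illustrative.
OMEGA = float("inf")   # encoding omega so that n + omega = omega automatically

class PetriNet:
    def __init__(self, places, transitions, pre, c):
        self.places, self.transitions = places, transitions
        self.pre, self.c = pre, c            # pre[t][p] in N, c[t][p] in Z

    def fireable(self, m, t):
        """t is fireable from m iff m >= Pre(t) component-wise."""
        return all(m[p] >= self.pre[t][p] for p in self.places)

    def fire(self, m, t):
        """Firing t from m leads to m + C(t)."""
        assert self.fireable(m, t)
        return {p: m[p] + self.c[t][p] for p in self.places}

    def fire_sequence(self, m, sigma):
        """Extension of firing to a sequence of transitions."""
        for t in sigma:
            m = self.fire(m, t)
        return m

# A net with one place and one transition adding a token (Pre(p,t)=0, C(p,t)=1).
net = PetriNet(["p"], ["t"], pre={"t": {"p": 0}}, c={"t": {"p": 1}})
print(net.fire_sequence({"p": 0}, ["t", "t", "t"]))   # {'p': 3}
print(net.fireable({"p": OMEGA}, "t"))                # True: omega covers any Pre

Encoding ω as float('inf') directly provides the extended order and addition on N_ω recalled above.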
Example 1. The marked net of Figure 1 is unbounded. Its Clover is the following set: For instance, the marking p l +p bk +αp ba +βp c is reached thus covered by sequence In order to introduce abstractions and accelerations, we generalize the transitions to allow the capability to mark a place with ω tokens. -Pre(a) ∈ N P ω its backward incidence; -C(a) ∈ Z P ω its incidence with Pre(a) + C(a) ≥ 0. For sake of homogeneity, one denotes Pre(a)(p) (resp. C(a)(p)) by Pre(p, a) (resp. C(p, a)). An ω-transition a is fireable from an ω-marking m ∈ N P ω if m ≥ Pre(a). When a is fireable from m, its firing leads to the ω-marking m def = m + C(a), denoted as previously m a −→ m . One observes that if Pre(p, a) = ω then for all values of C(p, a), m (a) = ω. So without loss of generality, one assumes that for all ω-transition a, Pre(p, a) = ω implies C(p, a) = ω. In order to define abstractions, we first define the incidences of a sequence σ of ω-transitions by recurrence on its length. As previously, we denote Pre(p, σ) def = Pre(σ)(p) and C(p, σ) def = C(σ)(p). The base case corresponds to the definition of an ω-transition. Let σ = tσ , with t an ω-transition and σ a sequence of ω-transitions, then: One checks by recurrence that σ is firable from m if and only if m ≥ Pre(σ) and in this case, m σ −→ m + C(σ). An abstraction of a net is an ω-transition which concisely expresses the behaviour of the net w.r.t. covering (see Proposition 1). One will observe that a transition t of a net is by construction (with σ n = t for all n) an abstraction. Let N = P, T, Pre, C be a Petri net and a be an ω-transition. a is an abstraction if for all n ≥ 0, there exists σ n ∈ T * such that for all p ∈ P with Pre(p, a) ∈ N: The following proposition justifies the interest of abstractions. Let us check that σ n is fireable from m . Let p ∈ P , An easy way to build new abstractions consists in concatenating them. We now introduce the underlying concept of the Karp and Miller construction. Definition 6. Let N = P, T, Pre, C be a Petri net. One says that a is an acceleration if a is an abstraction such that C(a) ∈ {0, ω} P . The following proposition provides a way to get an acceleration from an arbitrary abstraction. Then a is an acceleration. Let us study more deeply the set of accelerations. First we equip the set of ω-transitions with a"natural" order w.r.t. covering. In other words, a ≤ a if given any ω-marking m, if a is fireable from m then a is also fireable and its firing leads to a marking greater or equal that the one reached by the firing of a . Let N be a Petri net. Then the set of abstractions of N is upward closed. Similarly, the set of accelerations is upward closed in the set of ω-transitions whose incidence belongs to {0, ω} P . Proof. The set of accelerations is a subset of N P × {0, ω} P (where P is the set of places) with the order obtained by iterating cartesian products of sets (N, ≤) and ({0, ω}, ≥). These sets are well ordered and the cartesian product preserves this property. So we are done. Since the set of accelerations is well ordered and it is upward closed, it is equal to the upward closure of the finite set of minimal accelerations. Let us study the size of a minimal acceleration. Given some Petri net, one denotes d = |P | and e = max p,t (max(Pre(p, t), Pre(p, t) + C(p, t)). We are going to use the following result of Jérôme Leroux (published on HAL in June 2019) which provides a bound for the lengths of shortest sequences between two markings m 1 and m 2 mutually reachable. Theorem 1. 
(Theorem 2, [12] ) Let N be a Petri net, m 1 , m 2 be markings, One deduces an upper bound on the size of minimal accelerations. N) . Proposition 6. Let N be a Petri net and a be a minimal acceleration. Proof. Let us consider the net N = P , T , Pre , C obtained from N by deleting the set of places {p | Pre(p, a) = ω} and adding the set of transitions One denotes P 1 = {p | Pre(p, a) < ω = C(p, a)}. One introduces m 1 the marking obtained by restricting Pre(a) to P and m 2 = m 1 + p∈P1 p. Let {σ n } n∈N be a family of sequences associated with a. Let n * = ||Pre(a)|| ∞ +1. Then σ n * is fireable in N from m 1 and its firing leads to a marking that covers m 2 . By concatenating some occurrences of transitions of T 1 , one gets a firing sequence in N m 1 σ1 −→ m 2 . Using the same process, one gets a firing sequence m 2 σ2 −→ m 1 . Let us apply Theorem 1. There exists a sequence σ 1 with m 1 By deleting the transitions of T 1 occurring in σ 1 , one gets a sequence σ 1 ∈ T * such that m 1 The ω-transition a , defined by Pre(p, a ) = Pre(p, σ 1 ) for all p ∈ P , Pre(p, a ) = ω for all p ∈ P \ P and C(a ) = C(a), is an acceleration whose associated family is {σ 1 n } n∈N . By definition of m 1 , a ≤ a. Since a is minimal, a = a. Observing that |σ 1 Thus given any acceleration, one can easily obtain a smaller acceleration whose (representation) size is exponential. is an acceleration. Proof. Let a ≤ a, be a minimal acceleration. For all p such that Pre(p, a) = ω, P re(p, a ) ≤ e(3de) (d+1) 2d+4 . So a ≤ trunc(a). Since the set of accelerations is upward closed, one gets that trunc(a) is an acceleration. As discussed in the introduction, to compute the clover of a Petri net, most algorithms build coverability trees (or graphs), which are variants of the Karp and Miller tree with the aim of reducing the peak memory during the execution. The seminal algorithm [6] is characterized by a main difference with the KMT construction: when finding that the marking associated with the current vertex strictly covers the marking of another vertex, it deletes the subtree issued from this vertex, and when the current vertex belonged to the removed subtree it substitutes it to the root of the deleted subtree. This operation drastically reduces the peak memory but as shown in [8] entails incompleteness of the algorithm. Like the previous algorithms that ensure completeness with deletions, our algorithm also needs additional memory. However unlike the other algorithms, it memorizes accelerations instead of ω-markings. This approach has two advantages. First, we are able to exhibit a theoretical upper bound on the additional memory which is doubly exponential, while the other algorithms do not have such a bound. Furthermore, accelerations are reused in the construction and thus may even shorten the execution time and peak space w.r.t. the algorithm in [6] . Before we delve into a high level description of this algorithm, let us present some of the variables, functions, and definitions used by the algorithm. Algorithm 1, denoted from now on as MinCov takes as an input a marked net (N , m 0 ) and constructs a directed labeled tree CT = (V, E, λ, δ), and a set Acc of ωtransitions (which by Lemma 2 are accelerations). Each v ∈ V is labeled by an ω-marking, λ(v) ∈ N P ω . Since CT is a directed tree, every vertex v ∈ V , has a predecessor (except the root r) denoted by prd(v) and a set of descendants denoted by Des(v). By convention, prd(r) = r. 
Each edge e ∈ E is labeled by a firing sequence δ(e) ∈ T·Acc*, consisting of an ordinary transition followed by a sequence of accelerations, which by Lemma 1 fulfills λ(prd(v)) −δ(e)→ λ(v). In addition, again by Lemma 1, m_0 −δ(r,r)→ λ(r). Let γ = e_1 e_2 ... e_k ∈ E* be a path in the tree; we denote by δ(γ) := δ(e_1)δ(e_2)...δ(e_k) ∈ (T ∪ Acc)*. The subset Front ⊂ V is the set of vertices 'to be processed'. MinCov may call the function Delete(v), which removes from V a leaf v of CT, and the function Prune(v), which removes from V all descendants of v ∈ V except v itself. First MinCov does some initializations and sets the tree CT to be a single vertex r with marking λ(r) = m_0 and Front = {r}. Afterwards the main loop builds the tree, where each iteration consists in processing some vertex of Front as follows. MinCov picks a vertex u ∈ Front (line 3). From λ(u), MinCov fires a sequence σ ∈ Acc* reaching some m_u that maximizes the number of ω's produced, i.e. |{p ∈ P | λ(u)(p) ≠ ω ∧ m_u(p) = ω}|. Thus in σ no acceleration occurs twice and its length is bounded by |P|. Then MinCov updates λ(u) with m_u (line 5) and the label of the edge incoming to u by concatenating σ. Afterwards it performs one of the following actions according to the marking λ(u): if λ(u) is smaller than or equal to the marking of some vertex outside Front, then u is deleted (line 7); if λ(u) is strictly greater than the marking of one of its ancestors, then a new acceleration is built from the path between this ancestor and u, truncated, added to Acc, and the tree is pruned accordingly (lines 8-16); otherwise, every vertex whose marking is strictly smaller than λ(u) is deleted together with its subtree (line 19), a child of u is added for every transition t ∈ T fireable from λ(u), and u is removed from Front (lines 18-25). For a detailed example of a run of the algorithm see Example 2 in [9]. We now establish the correctness of Algorithm 1 by proving the following properties (where for all W ⊆ V, λ(W) denotes {λ(v) | v ∈ W}):
-its termination;
-the incomparability of ω-markings associated with vertices in V: λ(V) is an antichain;
-its consistency: ⟦λ(V)⟧ ⊆ Cover(N, m_0);
-its completeness: Cover(N, m_0) ⊆ ⟦λ(V)⟧.
We get termination by using the well order of N_ω^P and König's Lemma. Proof. Consider the following variation of the algorithm. Instead of deleting the current vertex when its marking is smaller than or equal to the marking of another vertex, one marks it as 'cut' and extracts it from Front. Instead of cutting a subtree when the marking of the current vertex v is greater than the marking of a vertex which is not an ancestor of v, one marks its vertices as 'cut' and extracts from Front those that belong to it. Instead of cutting a subtree when the marking of the current vertex v is greater than the marking of a vertex which is an ancestor of v, say v*, one marks the vertices on the path from v* to v (except v) as 'accelerated', one marks the other vertices of the subtree as 'cut' and extracts from Front those that belong to it. All the vertices marked as 'cut' or 'accelerated' are ignored for comparisons and for discovering accelerations. This alternative algorithm behaves as the original one except that the size of the tree never decreases, and so if the algorithm does not terminate the tree is infinite. Since this tree is finitely branching, due to König's Lemma it contains an infinite path. On this infinite path, no vertex can be marked as 'cut' since it would belong to a finite subtree. Observe that the marking labelling the vertex following an accelerated subpath has at least one more ω than the marking of the first vertex of this subpath; since a marking has at most |P| components equal to ω, there is an infinite subpath whose vertices are unmarked and thus remain in V. But N_ω^P is well-ordered, so there should be two vertices v and v′, where v′ is a descendant of v with λ(v′) ≥ λ(v), which contradicts the behaviour of the algorithm.
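Coming back to lines 4-5 of the main loop described above, the saturation of λ(u) by the stored accelerations can be sketched as follows; the representation of an acceleration as a pair of dictionaries (Pre(a), C(a)) is an illustrative choice. Since every useful firing sets at least one fresh place to ω, at most |P| accelerations are fired.

# Sketch of lines 4-5: from an omega-marking m, fire accelerations of Acc as
# long as they are fireable and produce a new omega. Since C(a) has entries in
# {0, omega}, firings only increase m, so this greedy loop reaches a maximal
# set of omega's. Accelerations are encoded as (Pre(a), C(a)) pairs of dicts.
OMEGA = float("inf")

def saturate(m, accelerations):
    m = dict(m)
    changed = True
    while changed:
        changed = False
        for pre_a, c_a in accelerations:
            fireable = all(m[p] >= pre_a[p] for p in m)
            produces_omega = any(m[p] != OMEGA and c_a[p] == OMEGA for p in m)
            if fireable and produces_omega:
                m = {p: m[p] + c_a[p] for p in m}
                changed = True
    return m

# One acceleration pumping p2 as soon as p1 holds a token.
acc = ({"p1": 1, "p2": 0}, {"p1": 0, "p2": OMEGA})
print(saturate({"p1": 1, "p2": 0}, [acc]))   # {'p1': 1, 'p2': inf}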
Since we are going to use recurrence on the number of iterations of the main loop of Algorithm 1, we introduce the following notations: CT n = (V n , E n , λ n , δ n ), Front n , and Acc n are the the values of variables CT, Front, and Acc at line 2 when n iterations have been executed. (1) and (2), marking λ n+1 (u) is incomparable to any marking in λ n+1 (V n ). Since V n ⊆ V n , λ n+1 (V n ) is an antichain. Combining this fact with the incomparability between λ n+1 (u) and any marking in λ n+1 (V n ), we conclude that the set λ n+1 (V n+1 ) = λ n+1 (V n ) ∪ {λ n+1 (u)} is an antichain. In order to establish consistency, we prove that the labelling of vertices and edges is compatible with the firing rule and that Acc is a set of accelerations. Assume that the assertions hold for CT n . Observe that MinCov may change the labeling function λ and/or add new vertices in exactly two places: at lines 4-6 and at lines 22-25. Therefore in order to prove the assertion, we show that after each group of lines it still holds. • After lines 4-6: MinCov computes (1) a maximal fireable sequence σ ∈ Acc * n from λ n (u) (line 4), and updates u's marking to m u = λ n (u) + C(σ) (line 5). Since the assertions hold for CT n , (2) if u = r, λ n (prd(u)) Proof. At most one acceleration is added per iteration. Let us prove by induction on the number n of iterations of the main loop that Acc n is a set of accelerations. Since Acc 0 = ∅, the base case is straightforward. Assume that Acc n is a set of accelerations and consider Acc n+1 . In an iteration, MinCov may add an ω-transition a to Acc. Due to the inductive hypothesis, δ(γ) is a sequence of abstractions where γ is defined at line 9. Consider b, the ω-transition defined by Pre(b) = Pre(δ(γ)) and C(b) = C(δ(γ)). Due to Proposition 2, b is an abstraction. Due to Proposition 3, the loop of lines 11-15 transforms b into an acceleration a. Due to Proposition 7, after truncation at line 16, a is still an acceleration. Cover(N , m 0 ) . Let v ∈ V . Consider the path u 0 , . . . , u k of CT from the root r = u 0 to u k = v. Let σ ∈ (T ∪ Acc) * denote δ(prd(u 0 ), u 0 ) · · · δ(prd(u k ), u k ). Due to Lemma 1, m 0 σ −→ λ(v). Due to Lemma 2, σ is a sequence of abstractions. Due to Proposition 2, the ω-transition a defined by Pre(a) = Pre(σ) and C(a) = C(σ) is an abstraction. Due to Proposition 1, λ(v) ⊆ Cover(N , m 0 ) . The following definitions are related to an arbitrary execution point of MinCov and are introduced to establish its completeness. Definition 8. Let σ = σ 0 t 1 σ 1 . . . t k σ k with for all i, t i ∈ T and σ i ∈ Acc * . Then the firing sequence m σ −→ m is an exploring sequence if: In order to prove completeness of the algorithm, we want to prove that at the beginning of every iteration, any m ∈ Cover(N , m 0 ) is quasi-covered. To establish this assertion, we introduce several lemmas showing that this assertion is preserved by some actions of the algorithm with some prerequisites. More precisely, Lemma 3 corresponds to the deletion of the current vertex, Lemma 4 to the discovery of an acceleration, Lemma 5 to the deletion of a subtree whose marking of the root is smaller than the marking of the current vertex and Lemma 6 to the creation of the children of the current vertex. Then all m ∈ Cover(N , m 0 ) are quasi-covered after performing Prune(u) and then adding u to Front. cover intermediate markings of exploring sequences. Assertion 2 follows from Proposition 9 since V \ Front is unchanged. Assertion 3 follows immediately from lines 4-6. 
Assertion 4 follows with v = u . Thus using this lemma the induction is proved in this case. Lines 8-16. Let us check the hypotheses of Lemma 4. Assertions 1 and 2 are established as in the previous case. Assertion 3 holds due to Lemma 1, and the fact that no edge has been added since the beginning of iteration. Thus using this lemma the induction is proved in this case. Lines 18-25. We first show that the hypotheses of Lemma 6 hold before line 21. Let us denote the values of CT and Front after line 20 by CT n and Front n . Observe that for all iteration of Line 19 in the inner loop, the hypotheses of Lemma 5 are satisfied. Therefore, in order to apply Lemma 6 it remains only to check assertions 2 and 3 of this lemma. Assertion 2 holds since (1) λ(V \ Front) is an antichain, (2) due to Line 7 there is no w ∈ V \ Front such that λ(w) ≥ λ(u), and (3) by iteration of Line 19 all w ∈ V \ Front such that λ(w) < λ(u) have been deleted. Assertion 3 holds due to Line 5 (all useful enabled accelerations have been fired) and Line 8 (no acceleration has been added). Lines 21-25 correspond to the operations related to Lemma 6. Thus using this lemma, the induction is proved in this case. The completeness of MinCov is an immediate consequence of the previous proposition. In order to empirically evaluate our algorithm, we have implemented a prototype tool which computes the clover and solves the coverability problem. This tool is developed in the programming language Python, using the Numpy library. It can be found on GitHub 3 . All benchmarks were performed on a computer equipped by Intel i5-8250U CPU with 4 cores, 16GB of memory and Ubuntu Linux 18.03. Minimal coverability set. We compare MinCov with the tool MP [14] , the tool VH [16] , and the tool CovProc [10] . We have also implemented the (incomplete) minimal coverability tree algorithm denoted by AF in order to measure the additional memory needed for the (complete) tools. Both MP and VH tools were sent to us by the courtesy of the authors. The tool MP has an implementation in Python and another in C ++ . For comparison we selected the Python one to avoid biases due to programming language. We ran two kinds of benchmarks: (1) 123 standard benchmarks from the literature in Table 1 , (which were taken from [2] ), (2) 100 randomly generated Petri nets also in Table 1 , since the benchmarks from the literature do not present all the features that lead to infinite state systems. These random Petri nets have the following properties: (1) 50 < |P |, |T | < 100, (2) the number of places connected of each transition is bounded by 10, and (3) they are not structurally bounded. The execution time of the tools was limited to 900 seconds. Table 1 contains a summary of all the instances of the benchmarks. The first column shows the number of instances on which the tool timed out. The time column consists of the total time on instances that did not time out plus 900 seconds for any instance that led to a time out. The #Nodes column consists of the peak number of nodes in instances that did not time out on any of the tools (except CovProc which does not provide this number). For MinCov we take the peak number of nodes plus accelerations. In the benchmarks from the literature we observed that the instances that timed out from MinCov are included in those of AF and MP. However there were instances the timed out on VH but did not time out on MinCov and vice versa. MinCov is the second fastest tool, and compared to VH it is 1.2 times slower. 
A possible explanation would be that VH is implemented in C++. As could be expected, w.r.t. memory requirements MinCov has the smallest number of nodes. In the benchmarks from the literature MinCov has approximately 10 times fewer nodes than MP and 1.6 times fewer than VH. In the random benchmarks these ratios are significantly higher.

Coverability. We compare MinCov to the tool qCover [2] on the set of benchmarks from the literature in Table 2. In [2], qCover is compared to the most competitive tools for coverability and achieves a score of 142 solved instances, while the second best tool achieves a score of 122. We split the results into safe instances (not coverable) and unsafe ones (coverable). In both categories we counted the number of instances on which the tools failed (columns T/O) and the total time (columns Time), as in Table 1. We observed that the tools are complementary, i.e. qCover is faster at proving that an instance is safe and MinCov is faster at proving that an instance is unsafe. Therefore, by splitting the processing time between them we get better results. The third row of Table 2 represents a parallel execution of the tools, where the time for each instance is computed as follows: Time(MinCov ∥ qCover) = 2 · min(Time(MinCov), Time(qCover)). Combining both tools is 2.5 times faster than qCover and 3.5 times faster than MinCov. This confirms the above statement. We could still get better results by dynamically deciding which ratio of CPU to share between the tools depending on some predicted status of the instance.

We have proposed a simple and efficient modification of the incomplete minimal coverability tree algorithm for building the clover of a net. Our algorithm is based on the introduction of the concepts of covering abstractions and accelerations. Compared to the alternative algorithms previously designed, we have theoretically bounded the size of the additional space. Furthermore we have implemented a prototype which is already very competitive. From a theoretical point of view, we plan to study how abstractions and accelerations could be defined in the more general context of well structured transition systems. From an experimental point of view, we will follow three directions in order to increase the performance of our tool. First, as in [13], we have to select appropriate data structures to minimize the number of comparisons between ω-markings. Then we want to precompute a set of accelerations using linear programming, since the correctness of the algorithm is preserved and the efficiency could be significantly improved. Last, we want to take advantage of parallelism in a more general way than simultaneously running several tools.
References.
[1] Model checking coverability graphs of vector addition systems
[2] Approaching the coverability problem continuously
[3] Well behaved transition systems
[4] On selective unboundedness of VASS
[5] Reduction and covering of infinite reachability trees
[6] The minimal coverability graph for Petri nets
[7] Forward analysis for WSTS, part II: Complete WSTS
[8] A counter-example to the minimal coverability tree algorithm
[9] Minimal coverability tree construction made complete and efficient
[10] On the efficient computation of the minimal coverability set of Petri nets
[11] Parallel program schemata
[12] Distance between mutually reachable Petri net configurations
[13] Constructing minimal coverability sets
[14] Minimal coverability set for Petri nets: Karp and Miller algorithm with pruning
[15] On the computation of the minimal coverability set of Petri nets
[16] Old and new algorithms for minimal coverability sets

Lemma 5. Let CT, Front and Acc be the values of the corresponding variables at some execution point of MinCov, u ∈ Front and u′ ∈ V such that the following items hold: [...] Then after performing Prune(u′); Delete(u′), [...]

Lemma 6. Let CT, Front and Acc be the values of the corresponding variables at some execution point of MinCov, and u ∈ Front such that the following items hold: [...] Then after removing u from Front and, for all t ∈ T fireable from λ(u), adding a child v_t to u, [...]