title: Bridging Arrays and ADTs in Recursive Proofs
authors: Fedyukovich, Grigory; Ernst, Gidon
date: 2021-02-26
journal: Tools and Algorithms for the Construction and Analysis of Systems
DOI: 10.1007/978-3-030-72013-1_2

We present an approach to synthesize relational invariants to prove equivalences between object-oriented programs. The approach bridges the gap between recursive data types and arrays that serve to represent internal states. Our relational invariants are recursively defined, and thus are valid for data structures of unbounded size. Based on introducing recursion into the proofs by observing and lifting the constraints from joint methods of the two objects, our approach is fully automatic and can be seen as an algorithm for solving Constrained Horn Clauses (CHC) of a specific sort. It has been implemented on top of the SMT-based CHC solver AdtChc and evaluated on a range of benchmarks.

Relational verification is widely applicable during an iterative process of software development, when a high-level specification, a prototype implementation, or even an arbitrary previous version is compared to the current version and verified for the absence of newly introduced bugs. As software grows large, compositionality becomes a crucial factor in achieving scalability of relational verification tasks: reasoning about pairs of entire programs is reduced to reasoning about pairs of modules or isolated components of code. Proofs found for one component can be reused while reasoning about another component, or even the system as a whole. Successful examples in large-scale verification projects include step-wise refinement in seL4 [30] and the integration of model checking into the software development workflow in AWS C Common [11]. In this work, we represent relational verification problems over object-oriented programs as Constrained Horn Clauses (CHC).
A CHC is an implication in first-order logic that involves a set of unknown predicates. For a system of CHCs, we wish to find an interpretation for all predicates that validates all implications. CHCs are used in various tasks appearing in verification, e.g., finding loop invariants or function summaries. For relational verification, a system of CHCs can be constructed by pairing components of code of two versions in lockstep and supplying it with relational pre- and post-conditions [14, 39, 44, 53]. State-of-the-art tools for solving CHCs, e.g., [9, 19, 21, 27, 32], are based on Satisfiability Modulo Theories (SMT), e.g., [40, 47]. They gradually become more robust, but only as long as the programs under analysis do not make mixed use of data structures. Verification conditions of real-world problems involve data structures such as arrays and Algebraic Data Types (ADTs) of unknown size, expecting the proofs to capture (quantified or recursive) properties over countably infinite sets of elements. Arrays are typically manipulated in loops and often require finding universally-quantified loop invariants [21]. ADTs, such as lists, maps, and sets, require reasoning by structural induction [47] and often rely on additional helper lemmas which are difficult to synthesize automatically. For relational verification tasks where one program is over arrays and another is over ADTs, the solvers likely need to reason about quantified formulas and induction at the same time, which is currently challenging for most automated tools. We propose a set of new algorithms for solving CHCs constructed by pairing programs over arrays and ADTs. Because we deal with object-oriented programs, the data structures might be accessed and modified in any given method, and our pairing is done for each method separately. Relational proofs are synthesized over the data structures: they describe a relation that holds while simultaneously traversing pairs of elements by any of the methods.
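To make the notion of a CHC system and its interpretation concrete, here is a minimal sketch (our own toy example, not from the paper): the implications for a simple counting loop are checked for a candidate interpretation of the unknown predicate on a finite slice of the integers, standing in for an SMT query.

```python
# A toy CHC system for the loop `x := 0; while x < 5: x := x + 1`:
#   (init)        x = 0                 =>  inv(x)
#   (consecution) inv(x) and x < 5      =>  inv(x + 1)
#   (query)       inv(x) and not x < 5  =>  x = 5
# A candidate interpretation of the unknown predicate `inv` is checked
# on a finite integer slice (a stand-in for a real SMT validity query).

def holds(inv, domain=range(-10, 20)):
    init = inv(0)
    consecution = all(inv(x + 1) for x in domain if inv(x) and x < 5)
    query = all(x == 5 for x in domain if inv(x) and not x < 5)
    return init and consecution and query

candidate = lambda x: 0 <= x <= 5   # a valid interpretation
too_weak  = lambda x: True          # passes init/consecution, fails the query

assert holds(candidate)
assert not holds(too_weak)
```

The solver's job is exactly this search for an interpretation, but symbolically and over unbounded domains.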
Our key idea is that not all methods may be needed for the actual synthesis. In fact, our algorithm generates a candidate proof by bridging a single pair of methods and then validates/repairs it on all others. In essence, we observe how pairs of inputs (or pairs of outputs) change the states, guess a candidate relation between elements of states, and (dis-)prove it on all other methods using an SMT-based theorem prover. Our synthesis strategy is customized for different classes of benchmarks via so-called recipes. We present two recipes for the list ADT that are applicable, respectively, for (1) stacks and queues, and (2) sets, multisets, and maps. Both discover nontrivial invariants that need a recursive interpretation, whose base and recursive cases we generate independently. The key point in determining the relations is to automatically investigate how an input or an output affects the state. Finally, we discover auxiliary lemmas that provide additional properties about objects in isolation and help prove that the inferred invariants are valid. Importantly, in contrast to a more lightweight CHC setting over numerical theories (and even arrays) that can rely on an SMT solver to validate its recursion-free solutions, the validation of our recursive solutions is conducted by structural induction. We thus rely on recent advances in SMT-based fully automated theorem proving [55] that recently gained support for arrays. The experiments have shown that the approach is reasonably fast in practice. Our contribution, while presented in the CHC context, can be lifted to the program-analysis context and implemented in a range of robust verification tools that are designed to support compositionality [7, 24]. The rest of the paper is structured as follows. A short outline of background and notation is given in Sect. 2. In Sect. 3, we give an overview of the approach. Then, Sect. 4 and Sect. 5 present our recipes. Finally, we give the evaluation details in Sect.
6, related work in Sect. 7, and conclude the paper in Sect. 8. An object O = (St, Init, (Op_n)_{n∈[1,N]}) is defined over internal states St, with initialization Init(s) denoting initial states s, and methods Op_n, also called operations, for some identifier n (which for simplicity is treated as a natural number in some finite interval, but later sections liberally refer to Op_n by their names). Each operation Op_n(in, s, s′, out) defines transitions between a pair of states s and s′ for a given input in, producing an output out. Moreover, each operation has an associated precondition pre_n(in, s), ranging over the input and the pre-state. In this paper, we take a syntactic approach by representing states as tuples of variables. Specifically, we assume that Init(s) and each operation Op(in, s, s′, out) are given as predicates, i.e., as characteristic formulas over the specified parameters, that hold for initial states and, respectively, when the program can take a particular transition. Such a formula can be obtained from the source code by symbolic execution, and we assume that the effect of loops inside operations is captured by quantified formulas, the creation of which is an orthogonal problem. Hence, our approach is language agnostic. We assume that the programs under consideration are deterministic, and we assume that pre_n(in, s) =⇒ ∃s′, out. Op_n(in, s, s′, out). Note that for deterministic programs, the existential quantifier in ∃s′, out. Op_n(in, s, s′, out) can be eliminated if pre_n(in, s) holds, as s′ and out are functionally determined by in and s. We aim at solving a relational verification problem over two objects and reduce it to inductive invariant inference over a composition of the two objects. Definition 1. Two objects A and C are equivalent if there exists an inductive invariant R over a composition of these objects which satisfies all clauses below.
It connects the two states (of types St_A and St_C) before and after each pair of operations:

initialization: Init_A(as) ∧ Init_C(cs) =⇒ R(as, cs)

consecution: R(as, cs) ∧ Op^A_n(in, as, as′, out_A) ∧ Op^C_n(in, cs, cs′, out_C) =⇒ R(as′, cs′), for each n ∈ [1, N]

safety: R(as, cs) ∧ Op^A_n(in, as, as′, out_A) ∧ Op^C_n(in, cs, cs′, out_C) =⇒ out_A = out_C, for each n ∈ [1, N]

The implications in Def. 1 define a set of Constrained Horn Clauses (CHC) over an uninterpreted relation symbol R. There are three types of constraints: (1) initialization, (2) consecution, and (3) safety. The third, safety, reflects the actual relational specification, i.e., the correspondence between the programs under analysis, in terms of the user-visible variables, namely the input in and the respective outputs out_A and out_C. Here, safety is divided into applicability (coincidence of preconditions) and equivalence of outputs, which together ensure that the two programs are observationally equivalent. To prove that this equivalence holds, one needs to infer a more complicated invariant R over the internal state. For this reason, we need the initialization and the consecution constraints: whatever happens due to each operation, the invariant is maintained, and by safety, the programs remain observationally equivalent indefinitely. Problem Statement: We seek an interpretation of R that satisfies all constraints in Def. 1 simultaneously. This conventional formulation of a CHC task lets us use any off-the-shelf CHC solver. However, the problem is undecidable in general, so no solver is guaranteed to handle our specific tasks. Furthermore, existing solvers mainly support lightweight arithmetic theories; a few exceptions also support ADTs [27] and arrays [21, 32].
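As a hedged illustration of Def. 1, the following sketch (our own toy objects and names; a bounded exploration standing in for a CHC solver) pairs two "counter" objects, one keeping a list whose length is the count and one keeping an integer, and checks initialization, consecution, and output equivalence on all reachable joint states up to a depth.

```python
# Two "counter" objects: A keeps a list xs (its length is the count),
# C keeps an integer n.  Each operation returns (next_state, out).
# The objects, names, and bounded harness are illustrative, not from the paper.

def inc_A(xs): return (xs + ["*"], len(xs) + 1)
def dec_A(xs): return (xs[:-1], len(xs) - 1)     # pre: xs != []
def inc_C(n):  return (n + 1, n + 1)
def dec_C(n):  return (n - 1, n - 1)             # pre: n > 0

def R(xs, n):                                    # candidate relational invariant
    return len(xs) == n

def check(depth=5):
    assert R([], 0)                              # initialization clause
    states = [([], 0)]
    for _ in range(depth):
        nxt = []
        for xs, n in states:
            # preconditions of joint dec coincide whenever R holds
            ops = [(inc_A, inc_C)] + ([(dec_A, dec_C)] if xs and n > 0 else [])
            for opA, opC in ops:
                (xs2, outA), (n2, outC) = opA(xs), opC(n)
                assert outA == outC              # safety clause
                assert R(xs2, n2)                # consecution clause
                nxt.append((xs2, n2))
        states = nxt
    return True

assert check()
```

A CHC solver performs the same checks symbolically, for all states at once, rather than by enumeration.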
To the best of our knowledge, there is no CHC solver that supports ADTs and arrays at the same time, and there is no CHC solver that synthesizes recursive solutions. Context: The system of CHCs ensures that A and C can be substituted interchangeably in any calling context, and it is applicable to a wide range of techniques for formal program development. The focus on equivalence instead of subsumption is not essential for our work, and the presented approach works for the asymmetric case just the same. Specifically, Liskov and Wing's substitution principle [36] follows (precondition strengthening is reflected by the applicability constraints from pre A to pre C , and all postconditions with respect to the outputs are equivalent). Data Refinement [15, 25] follows similarly (Def. 1 characterizes that R is a forward simulation [37] ). See Sect. 7 for more details. In this section, we present the fundamentals of the approach to synthesize recursive relational invariants for systems over arrays and ADTs that we instantiate and illustrate on examples in the subsequent sections. Our approach is purely symbolic and fully automatic in both stages: generating a candidate relational invariant, and proving it correct (i.e., validating). The key insight is an analysis of the operations joint in the constraints of Def. 1. We follow a strategy of introducing recursion into the interpretation based on ADTs, and by aligning the base case to initialization and the recurrence conditions to joint operations. 
In particular, a relational invariant R that bridges an algebraic list xs and an array (with auxiliary variables, such as index) cs is defined recursively over the structure of xs, which produces this general schema:

R(xs, cs) = φ_b(cs), if xs = nil
R(xs, cs) = ∃cs_r. φ_r(y, ys, cs, cs_r) ∧ R(ys, cs_r), if xs = cons(y, ys)    (1)

This schema has two placeholders for constraints, φ_b in the base case and φ_r in the recursive case, that may refer to the variables in scope (as indicated by their respective parameter lists). Moreover, we seek a Skolem function to eliminate the existentially-quantified state variable cs_r in the recursive position. Intuitively, the desired Skolem function captures the delta between two array states that corresponds to the delta between xs and ys.

Algorithm 1: Automated synthesis of recursive relational invariants
Input: Objects A = (as, Init_A, (Op^A_n)_{n∈N}) and C = (cs, Init_C, (Op^C_n)_{n∈N}), where as, cs are the state variables, and xs is a list variable of as
Output: relational invariant R between A and C
1  R(nil, cs) ← Init_A(as[xs := nil]) ∧ Init_C(cs);
2  φ_r ← true;
3  let y and ys be fresh variables;
4  while true do ...

Alg. 1 gives our top-level synthesis procedure for interpretations of R. It takes as input two objects, A and C, where as and cs are tuples of variables that represent their respective states. We refer to the primed versions of these state variables as as′ and cs′, assuming that as, cs, as′, and cs′ are all distinct. The algorithm works with algebraic lists specifically, and thus as is assumed to have such a component, given by the state variable xs. We denote by as[xs := e] the updated vector of variables such that xs is replaced in as by the symbolic expression e. The base case of the interpretation of R is straightforward (line 1): the algorithm uses the predicate Init_C and the predicate Init_A in which the variable xs is instantiated to nil. The inductive case of the interpretation of R is trickier (line 7). Because several different operations that produce state, consume state, or do nothing with a state are possible (see Def.
2 later in the section), some of them might contribute to different parts of the interpretation being synthesized. In particular, the methods Match and Update are responsible for generating a body of R. They are instantiated differently for our two recipes in Sect. 4 (applicable for stacks and queues) and Sect. 5 (applicable for (multi)sets and maps). The first method, Update, synthesizes an updated symbolic state cs_r, a tuple of symbolic expressions, to be used in the nested inductive call of R. It can therefore be understood to compute a witness (or Skolem function) for the existential quantifier in Eq. (1) as an expression over the remaining variables in scope, y, ys, as, cs. The second method, Match, then collects constraints φ_r from suitable transitions w.r.t. this cs_r. For each candidate interpretation of R, our algorithm runs an automated SMT-based theorem prover [55] to validate it (line 8). The algorithm can iterate several times and converges after a successful theorem-prover run. A noteworthy feature of our framework is that Update and Match need not be synchronized in pairs. Although cs_r and the result of Match are eventually going to be combined and used in a single formula, the nondeterministic nature of our synthesis procedure suggests that the two ingredients may originate from potentially non-joint operations, thereby enlarging the search space of possible relational invariants. Our particular strategies for choosing ingredients for the inductive interpretation of R are based on a classification of the operations of the abstract object. We define a partial ordering "≼" on ADT states that connects the constructors discerned by the recurrence in R to the transitions of operations. With respect to this ordering, we can for example recognize operations that leave the ADT unchanged ("noops", which play a special role in Sect.
5), operations that "produce" constructors and thereby enlarge the internal state by additional elements, and conversely operations that "consume" constructors. A natural choice for ≼ is the reflexive closure of the subterm ordering, where xs ≼ ys for lists specifies that xs is a suffix of ys. In general, this ordering can be used to control the result of the synthesis for specific applications, and is a heuristic choice. A choice which works well for our examples is that xs is a non-strict subsequence of ys. The ordering naturally extends to tuples of variables (and thus, states), and lets us classify operations into the following three kinds. Definition 2. Let Op be an operation of an abstract object. Then Op is a producer if s ≼ s′, a consumer if s′ ≼ s, and a noop if s′ = s, for every transition Op(in, s, s′, out). The class of an operation can often be identified by a cheap syntactic check that recognizes when cons is applied to a current-state or a next-state variable. In the upcoming stack example in Fig. 1, from xs′ = cons(in, xs) we have that push is a producer operation, and from cons(out, xs′) = xs we classify pop as a consumer operation. A top operation, not shown in Fig. 1, would be recognized as a noop (see also hasElement in the upcoming example in Fig. 3). In the next two subsections, we introduce our particular strategies for the implementations of Update and Match of Alg. 1, in reference to Def. 2. Some operations fall into neither of the classes; or it may be hard to determine whether they do, given that Def. 2 is semantic; and different operations may contribute different ingredients to a correct definition of R. To make use of as many operations as possible, we suggest strategies for all three classes of operations, so as to be able to synthesize a relational invariant in complex cases, even when complete information about the system is difficult to obtain. We identify a class of problems that require scanning the arrays in implementations of stacks and queues linearly.
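The classification can be approximated on concrete transitions. The sketch below (our own modeling: lists consed at the head, the suffix relation as ≼, and a sampling harness in place of the syntactic/semantic check) classifies push, pop, and top accordingly.

```python
# Classify an ADT operation as producer / consumer / noop by sampling
# transitions and comparing pre- and post-state under the ordering
# "is a suffix of" (the reflexive subterm ordering on lists).

def is_suffix(xs, ys):
    return len(xs) <= len(ys) and ys[len(ys) - len(xs):] == xs

def classify(op, samples):
    kinds = set()
    for s, inp in samples:
        s2, _out = op(inp, s)
        if s2 == s:             kinds.add("noop")
        elif is_suffix(s, s2):  kinds.add("producer")   # state grew
        elif is_suffix(s2, s):  kinds.add("consumer")   # state shrank
        else:                   kinds.add("unknown")
    return kinds.pop() if len(kinds) == 1 else "mixed"

push = lambda inp, s: ([inp] + s, None)   # cons at the head
pop  = lambda inp, s: (s[1:], s[0])       # pre: s != []
top  = lambda inp, s: (s, s[0])           # pre: s != []

samples = [([1], 9), ([2, 1], 9), ([3, 2, 1], 9)]
assert classify(push, samples) == "producer"
assert classify(pop,  samples) == "consumer"
assert classify(top,  samples) == "noop"
```

Sampling can of course only refute or suggest a class; Def. 2 quantifies over all transitions, which is why the paper's cheap check is syntactic.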
A distinguishing feature of this class is the presence of a numeric variable in cs through which array cells are accessed (denoted index in the rest of the section). We first illustrate the synthesis process on the following example and then present the algorithmic details. Two realizations of a LIFO stack are shown in Fig. 1, where xs ≠ nil and n > 0 are the respective preconditions, and out captures the return value. As an illustration, the formula Op^ListStack_pop(s, _, 7) holds for all states s in which pop terminates and returns 7 (by convention, we use _ to denote terms that are irrelevant in a particular context). Note also that in the implementation of ArrStack, the popped value is not erased from the array: in order for a[n] to be considered in the future, it has to be overwritten by some push operation. In general, the array always contains infinitely many unknown values outside the range of cells a[0], ..., a[n − 1] which are never accessed. A possible relational invariant R(xs, n, a) bridging ListStack and ArrStack is defined as follows:

R(xs, n, a) = n = 0, if xs = nil
R(xs, n, a) = n > 0 ∧ a[n − 1] = y ∧ R(ys, n − 1, a), if xs = cons(y, ys)    (2)

Intuitively, this R captures that the list xs has the same content as the portion of the array a between indexes 0 (inclusive) and n (exclusive). When xs is empty, then the portion of a should be empty too, thus n = 0. Otherwise, xs is created by cons-ing some other list ys and an element y; then (1) n should be strictly positive, and (2) y should belong to the designated portion of a. The schema in Sect. 3.1 has two placeholders for constraints, φ_b in the base case and φ_r in the recursive case, that may refer to the variables in scope (as indicated by their respective parameter lists). Moreover, we seek a state cs_r in the recursive position. Placeholder φ_b is instantiated by constraints from the initialization operations, such as n = 0 from ArrStack. This alignment of base case and initialization is not just a coincidence: many data structures start initially empty and are gradually populated by calling operations (e.g., collections).
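The schema of Sect. 3.1 and the stack invariant can be made concrete as follows (a hedged Python sketch; arrays are modeled as dictionaries, the state cs as a pair (n, a), and all names are ours). The schema is a higher-order definition parameterized by φ_b, φ_r, and the Skolem function for cs_r, instantiated here to the stack invariant.

```python
# Schema (1) as a higher-order definition: phi_b constrains the base case,
# phi_r the recursive case, and `update` is the Skolem function producing
# the witness cs_r for the existential quantifier.
def make_R(phi_b, phi_r, update):
    def R(xs, cs):
        if not xs:                              # xs = nil
            return phi_b(cs)
        y, ys = xs[0], xs[1:]                   # xs = cons(y, ys)
        cs_r = update(y, ys, cs)                # witness for "exists cs_r"
        return phi_r(y, ys, cs, cs_r) and R(ys, cs_r)
    return R

# Instantiation for the stack invariant: cs = (n, a), a modeled as a dict.
R = make_R(
    phi_b=lambda cs: cs[0] == 0,                            # n = 0
    phi_r=lambda y, ys, cs, cs_r: cs[0] > 0                 # n > 0
          and cs[1].get(cs[0] - 1) == y                     # a[n-1] = y
          and cs_r == (cs[0] - 1, cs[1]),                   # index shrinks, a unchanged
    update=lambda y, ys, cs: (cs[0] - 1, cs[1]),
)

# xs = cons(5, cons(3, cons(7, nil))) corresponds to a[0..2] = 7, 3, 5 and n = 3
assert R([5, 3, 7], (3, {0: 7, 1: 3, 2: 5}))
assert not R([5, 3, 7], (3, {0: 7, 1: 3, 2: 9}))   # wrong top element
assert R([], (0, {0: 7}))                          # stale cells above n are fine
```

The last assertion mirrors the remark above: values outside a[0..n-1] are unconstrained by the invariant.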
The purpose of φ_r in the recursive case of Eq. (1) is twofold. First, it connects a portion of the ADT state (specifically y) to the array state cs, in the example via a[n − 1] = y. Second, it determines a suitable array state cs_r as the argument of the recursive occurrence of R. For instance, we take n − 1 for the recursive call but leave a unchanged. This is motivated by the observation that a state where xs = cons(y, ys) for some y, ys is consumed by pop. Using this information, the recurrence of R must align with the corresponding array transitions, too, as shown in Fig. 2 on the left. The constraint n > 0 is the precondition of the array operation, whereas y = a[n − 1] follows from comparing the outputs. As shown in Fig. 2 on the right, we can dually base the recurrence on push, which produces a cons, i.e., a transition from ys to xs = cons(y, ys) for some y. In this case, both transitions need to be viewed in reverse, such that the respective successor states of push now match the left side R(xs, cs) of the schema. Then, the assignment n′ = n + 1 can be rewritten to yield the equation n_r = n − 1. To make this intuition practical, our approach suggests a particular strategy for picking operations to take constraints from, for recognizing consumers and producers more generally, and for validating the guessed relational invariants using induction and lemmas. Alg. 2 and Alg. 3 show the implementations of Update and Match, respectively, that suit stacks and queues. Recall that these algorithms are called from Alg. 1 and take as input pairs of nondeterministically chosen joint operations of A and C; the state variables cs of C; the current version of the state variables cs_r to be used in the recursive call of R; and the fresh variables y and ys introduced in Alg. 1 to define the inductive rule of R. The outputs of Update and Match are, respectively, an updated tuple of variables cs_r and a subformula ψ to be conjoined with the inductive definition of R.
If the producing operation is picked (line 1 of Alg. 2), then we have to find a term index_r that Op^C transitions to index. In particular, after assigning a new value to an array cell, index is updated monotonically (i.e., incremented, as in the example in Fig. 1, or decremented). Thus, to access the array cell containing the new value using the updated value of index, we have to invert the arithmetic operation and obtain index − 1 (for Fig. 1) or index + 1 (in the case of decrementing). Technically, in Alg. 2, this is realized by taking the index variable from cs, through which cells of the array are accessed (e.g., n in the example in Fig. 1), and finding a term index_r that Op^C transitions to index. The resulting cs_r is composed from the same ingredients as cs, where index_r replaces index. If the consuming operation is picked (line 4), then we proceed in the reverse direction and find the index_r that results from transitioning index through Op^C. Alg. 3 for this recipe relies on the output of Alg. 2. Interestingly, it is supported even if cs_r is computed using the producer, but ψ in Alg. 3 is computed using the consumer. Our particular strategy for the consumers in this recipe is (1) to use the precondition of Op^C, and (2) to bridge the outputs of Op^A and Op^C via an equality. The inference via the producer in line 1, in comparison, misses an important constraint in the example, as the precondition of push is trivial. Such a situation can be mitigated by discovering a loop invariant (line 2) over index, usually just over Linear Integer Arithmetic (LIA), adding it, and blocking the initial state (to distinguish it from the base case of the definition of R) in the inductive case of the interpretation of R being synthesized.
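As a much-simplified stand-in for this step, the sketch below (our own modeling; a real implementation would use quantifier elimination over LIA rather than enumeration) forward-propagates the initial index through the numeric projections of push and pop, and keeps only candidate invariants that hold on all reachable states and are inductive.

```python
# Lightweight loop-invariant inference over the index variable:
# propagate the initial state through the numeric projections of the
# operations, then filter candidate LIA invariants for inductiveness.

def infer(init_n, ops, candidates, steps=30):
    reachable, frontier = {init_n}, {init_n}
    for _ in range(steps):
        frontier = {op(n) for n in frontier for op in ops} - {None} - reachable
        reachable |= frontier
    return [c for c in candidates
            if all(c(n) for n in reachable)                    # holds on reachable states
            and all(c(m) for n in reachable for op in ops
                    if (m := op(n)) is not None)]              # preserved by each op

inc = lambda n: n + 1                       # push increments the index
dec = lambda n: n - 1 if n > 0 else None    # pop requires n > 0

invs = infer(0, [inc, dec], [lambda n: n >= 0, lambda n: n > 0, lambda n: n <= 10])
assert len(invs) == 1          # only n >= 0 survives: n > 0 fails at 0, n <= 10 is not invariant
assert invs[0](0) and invs[0](999)
```

Here the surviving invariant n ≥ 0 is exactly the kind of missing constraint that the trivial precondition of push fails to supply.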
Loop invariants are generated as interpretations of a predicate inv satisfying the following two implications:

Init_C(cs) =⇒ inv(cs)
inv(cs) ∧ Op^C_n(in, cs, cs′, out) =⇒ inv(cs′)

Note that these CHCs (over LIA) can be solved by numerous existing approaches. Without a query, the strongest loop invariant would ideally be desirable; in practice, however, it suffices to apply lightweight techniques based on forward propagation of initial states using quantifier elimination, followed by the computation of its inductive subset [20]. This often finds an adequately strong invariant. In this subsection we present a recipe that suits sets, multisets, and maps, which are in some sense non-linear. That is, these data structures do not maintain an index variable through which elements are accessed. Instead, arrays are viewed as maps, and the corresponding ADTs are equipped with recursive functions that traverse the data structure anew for each input. Oftentimes, these objects have noop operations, and our synthesis procedure makes use of them. The array-based implementation handles a map a from elements to Booleans. Initially, all cells in a are false. Inserting and removing an element is implemented by storing true and false to the corresponding cell, respectively. The difficulty here is to support the shown implementation of insert and erase in Fig. 3, as well as possible variants that, e.g., eagerly prune duplicate entries in the list-based implementation (see Sect. 6). The expected output of our synthesis procedure is as follows:

R(xs, a) = ∀z. ¬a[z], if xs = nil
R(xs, a) = a[y] ∧ R(ys, a[y := contains(y, ys)]), if xs = cons(y, ys)    (3)

Algorithm 4: Update (recipe 2)
Input: Operations Op^A and Op^C such that isNo(Op^A) holds, as[xs := cons(y, ys)] the shape of the state of A, cs the state variables of C
Output: Updated arguments cs_r
1  let cs′ be fresh variables;
2  φ ← Op^A(y, as[xs := ys], as[xs := ys], out) ∧ Op^C(y, cs′, _, out);
3  ψ ← ∀z. z ≠ y =⇒ ∃out′. Op^C(z, cs, _, out′) ∧ Op^C(z, cs′, _, out′);
4  assume QE(∃out.
φ ∧ ψ) simplifies to (cs′ = cs_r);
5  return cs_r;

Algorithm 5: Match (recipe 2)
Input: Operations Op^A and Op^C such that isNo(Op^A) holds, as[xs := cons(y, ys)] the shape of the state of A, denoted as_0 below, cs the state variables of C, cs_r the updated state of C
Output: Formula φ_r
1  φ ← Op^A(y, as_0, as_0, out) ∧ Op^C(y, cs, cs_r, out);
2  return simplify(QE(∃out. φ));

Alg. 4 and Alg. 5 show the implementations of Update and Match, respectively, for this recipe. The arguments cs_r of the nested call to R in the inductive case of the definition of R are computed in Alg. 4 using the symbolic encoding of a noop. In the set example, the noop is the hasElement operation, which allows observing the status of the internal state and does not modify it. We furthermore assume that the input of Op_n coincides with the type of elements stored in the list, i.e., it is meaningful to call Op_n(y, ···) with the list head y from the recursive case of (1), where xs = cons(y, ys). The key idea behind Alg. 4 is to make the adjustments to cs needed to construct a cs_r that mirrors any changes observable via Op^A when transitioning from the list xs to ys in (1). This update is determined in terms of auxiliary variables cs′ that are constrained to satisfy certain input/output pairs for the corresponding Op^C, by case analysis on whether the input is the particular y that is removed by the recurrence. The primary intention is to reassign a[y] appropriately. We do this by collecting constraints φ such that the output observed for Op^C on y and cs′ matches that of the corresponding Op^A on the smaller state with ys. This is also the key difference to Sect. 4, where we heuristically keep a unchanged in the recursive call in (1). The outputs for all other inputs z, however, are enforced to be unchanged w.r.t. the original cs, which is expressed by the constraint ψ.
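Concretely, for the set example, this update can be mimicked as follows (a hedged sketch with our own names; arrays are modeled as dictionaries, and contains plays the role of the list-side noop): cell a[y] is reassigned to agree with the noop on the tail ys, and all other cells keep their values.

```python
# The recipe-2 update, computed concretely: the recursive-call array a_r
# must answer membership queries exactly like the tail ys for input y,
# and like the original array a for every other input.

def contains(x, xs):                 # the ADT-side noop (hasElement)
    return x in xs

def update(y, ys, a):
    a_r = dict(a)                    # all z != y: a_r[z] = a[z]   (constraint psi)
    a_r[y] = contains(y, ys)         # a_r[y] matches the noop on ys (constraint phi)
    return a_r

# After inserting 4 twice, the list (with duplicates) is [4, 4],
# while the array marks a single cell:
a = {4: True}
a_r = update(4, [4], a)
assert a_r == {4: True}              # 4 still occurs in the tail
a_rr = update(4, [], a_r)
assert a_rr == {4: False}            # tail is empty, so cell 4 flips to false
```

The duplicate-list scenario shows why the array cannot simply stay unchanged in the recursive call, unlike in the stack recipe.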
We then eliminate the quantifier for out (which is straightforward, as the operations are deterministic) and rewrite the formula into closed expressions cs_r for the variables cs′ as the result. Example 3. Specifically for the example in Sect. 5.1, the algorithm proceeds by symbolic execution of hasElement, yielding the following constituents: Op^A gives out = contains(y, ys), and Op^C gives out = a′[y]. The result ∃out. φ ∧ ψ of Alg. 4 is now solved for a′. The only free variables refer to the states of the systems. The bound variables out and out′ can be eliminated by merging the equalities over out and out′, yielding a′[y] = contains(y, ys) together with ∀z. z ≠ y =⇒ a′[z] = a[z]. The first conjunct provides the update for a′[y], whereas the second conjunct states that a′[z] should not be changed at indices other than y. After applying the axioms of the theory of arrays, we get as result the equality a′ = a[y := contains(y, ys)], which pattern-matches the expected shape in line 4. This transformation requires us to "reverse-apply" the axiom of extensionality, i.e., to switch from the pointwise comparison of a and a′ to an equality between the entire arrays. Note that while quantifier elimination is difficult in general, our current implementation has limited, but often sufficient, support that can be extended by supplying rules to the underlying SMT-based theorem prover. While Alg. 4 predicts future outputs of Op^A for input y, Alg. 5 executes Op^A on the state where xs = cons(y, ys) to obtain the current output of Op^A for the same y. The generated constraint simply expresses that the output of Op^C has to match. For hasElement we obtain the formula contains(y, cons(y, ys)) = a[y]. Unfolding the definition of contains and simplification produces true = a[y], which is then used as the "body" of the inductive case of R in (3).
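Putting the pieces together for the set example, the inductive definition with the update a[y := contains(y, ys)] can be checked on concrete states (a hedged sketch; the base case "no cell is marked" is our reading of the initialization in which all cells are false):

```python
# Recursive relational invariant for the set example, checked concretely:
# the head y must be marked in the array, and peeling it off re-aligns
# cell a[y] with membership of y in the tail.

def contains(x, xs):
    return x in xs

def R(xs, a):
    if not xs:
        return all(v is False for v in a.values())   # base case: nothing marked
    y, ys = xs[0], xs[1:]
    a_r = dict(a)
    a_r[y] = contains(y, ys)                         # a[y := contains(y, ys)]
    return a.get(y, False) and R(ys, a_r)

assert R([], {})
assert R([4, 4], {4: True})                  # duplicate in the list, one true cell
assert R([2, 4], {2: True, 4: True})
assert not R([2], {2: False})                # listed element not marked
assert not R([], {2: True})                  # marked element not listed
```

The recursion visits each list element once, exactly mirroring how the synthesized invariant (3) peels off one cons per unfolding.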
We have implemented the approach in a prototype CHC solver called AdtChc, relying on AdtInd [55] as an inductive prover, which in turn uses the Z3 [40] SMT solver to quickly perform the satisfiability checks over uninterpreted functions and linear arithmetic that are needed at various solving stages. AdtChc automatically determines the appropriate synthesis recipe by analyzing the syntax of the program (i.e., the presence of index variables) and is able to successfully find relational invariants and prove them valid for all considered benchmarks. We have evaluated the approach from Sect. 3 on different realizations of textbook data structures. The evaluation aims at answering two questions. Is the approach effective in the first place at discovering suitable relational invariants, and how well can the necessary induction proofs be automated? The latter is relevant since Alg. 1 crucially depends on Validate in its refinement loop. All our benchmarks require recursive invariants. They fall into two categories. First, stacks and queues from Sect. 4 (with variations that store values only at even indexes of the array) are solved based on a linear scan. Second, sets, multisets, and maps (which differ in whether, e.g., duplicate elements are stored in the respective lists) are solved with the approach in Sect. 5. We include such variations to reflect different trade-offs when designing specifications, and to demonstrate that our technique is reasonably flexible. The only user-provided lemma was required for the multiset benchmark (marked * in Table 1): ∀a, xs. num(a, xs) = 0 =⇒ remove(a, xs) = xs. The results from the evaluation of both groups of benchmarks (resp., recipes used) are shown in Table 1. The choice of which recipe to use was made by the tool itself at synthesis time. Total time (wall-clock seconds) is entirely dominated by proof search in AdtInd, and includes the time for SMT queries.
We remark that the time to synthesize the relational invariant is negligible in comparison to the proof time (and the proof time is often proportional to the number of internal SMT calls). Most proofs are found using the default proof strategy (the same for every benchmark) within 20s; the longer-running cases are caused by the large proof search space created by a combination of array simplification and forward rewriting. We have also tested our tool on buggy implementations, e.g., in which the consumer operations are correct (and can be used for correct guesses of relational invariants), but the producers are not. As expected, the tool is unable to synthesize a relational invariant for the whole system in these cases. We have already presented the relational invariant found for the stack (2); for the stack variant that stores to even array indices only, the counter n is decreased by 2 instead of 1 in the recursive call, as expected. The relational invariant R(xs, m, n, a) for the queue benchmarks keeps two indices into the array a; depending on the variant, the first element of the list xs is found at a[m] or a[n], and the recursion either increases m or decreases n. The relational invariants for the multiset and map examples are analogous. All necessary lemmas are automatically discovered and proved by AdtInd, for example, for the set benchmarks: ∀xs, s, x. R(xs, s) =⇒ contains(x, xs) = s[x]. Although there exist automated techniques to synthesize relational invariants, none was proposed to deal simultaneously with ADTs and arrays. Conceptually, our approach is related to SimAbs, an SMT-based algorithm for simulation synthesis [18]. SimAbs explores a space of possible simulations and (dis-)proves them using an off-the-shelf decision procedure. Guesses for the simulation relation are also obtained from the source code, by matching variables of the two programs. Alternatively, simulation relations can be inferred from test runs [49] or through translation validation [41].
Our approach allows dealing with objects (not just imperative code) and contributes several novel strategies for guessing and proving non-trivial simulation relations.

Discovery of invariants to relate the behaviors of two programs, or other ways of establishing program equivalence, is an active research area [5, 14, 22, 23, 39, 44, 51]. These approaches typically reduce the relational verification problem to a safety verification problem and rely on existing tools, often solvers for Constrained Horn Clauses (CHC). Currently, since ADTs and arrays are challenging for the underlying solvers, the applicability of these approaches to our tasks is also limited. There are decision procedures for abstraction of ADTs to lists, sets, and multisets [52]; however, these apply to certain predefined abstractions only.

Our approach can be seen as an application of Syntax-Guided Synthesis (SyGuS) [2]. Strategies dependent on the types of benchmarks essentially represent sets of syntactic templates that are filled iteratively and checked using an SMT solver. SyGuS is also successfully used in CHC solving [19, 21] and in lemma synthesis [46, 47, 55]. There are only a few approaches [21, 28, 31, 55] that apply SyGuS to synthesize formulas over ADTs, arrays, or quantifiers. Data-driven approaches, e.g., [38], are complementary to such syntax-based approaches. None of them deals with arrays, quantifiers, and ADTs at the same time.

Unno et al. [53] support recursive predicates by taking the least solution of initialization and consecution as the definition of R; however, this may lead to rather cumbersome inductive cases (e.g., for pop in the stack). We avoid the problem by basing the recurrence scheme on the data structure, and by inferring constraints that are well aligned to that scheme from the operations. Jennisys [34] tackles the related problem of generating recursive implementations from an abstract model, where the simulation relation is given.
More generally, the problem addressed in this work relates to the idea of step-wise refinement, originally conceived by [16] and [54] as a guideline to organize software development and later studied extensively in a formal setting for rigorous assessment of functional correctness (e.g., [1, 4, 15, 25, 29, 33, 36]). The standard proof technique relies on simulation relations [37] that couple the two state spaces, which is directly reflected in the CHC system of Def. 1. Many methods and tools support development using formal refinement [1, 4, 8, 17, 26, 29, 33, 45]. Large-scale verification projects that are based on refinement include seL4 [30], FSCQ [10], Flashix [48], and CompCert [35], with high human effort involved. Correct-by-construction correspondence between low-level code and high-level data types helps to some extent in, e.g., [13] and Cogent [3]. Recent work on "push-button" verification includes a verified TLS library [12], the AWS C Common library [11], a file system [50], a hyperkernel [42], and network functions [56], where the high degree of proof automation is in part achieved by statically bounding the state space of the systems. The latter work [56] specifically notes how non-experts can formulate high-level correctness requirements (their specifications are written in Python), as evidence that refinement-based approaches may ultimately overcome the "specification bottleneck" [6, 43].

We have demonstrated an approach that can fully automatically synthesize and prove relational invariants over recursive data types and arrays. The approach is based on introducing quantifiers and recursion into the definition of such relations in a systematic way, and on instantiating this schema with constraints from joint transitions of the two systems. A somewhat surprising insight was that it is useful to view such transitions both forward and in reverse, leading to the classification into producers and consumers as a guideline for the search.
We have presented a general synthesis algorithm and two concrete instantiations for data structures of different sorts. The approach is fully automatic in guessing a relation and proving it correct. It relies on the recently developed CHC solver AdtChc, which in turn is based on AdtInd, an SMT-based theorem prover featuring support for arrays, quantifiers, and structural induction. The approach is modular and can be extended by further synthesis strategies in the future. In particular, since it is based on CHC techniques, it can be integrated with other existing CHC solvers tailored to non-ADT reasoning, and it can be used in large-scale verification frameworks such as [24] that reduce safety verification to CHC tasks.

Many more interesting benchmarks lend themselves to further investigation: positional insertion and removal in lists, amortized data structures, benchmarks based on trees or nested arrays, and ultimately some real-world software systems. With a growing search space, it becomes more important to quickly recognize incorrect simulation relations, e.g., by evaluation-based counter-examples (cf. [31]), to prevent costly proof attempts. Similarly, incorporating external tools for invariant generation is another topic for future work.
References

[1] Modeling in Event-B: System and Software Engineering
[2] Syntax-Guided Synthesis
[3] Cogent: Verifying high-assurance file system implementations
[4] Refinement calculus: a systematic introduction
[5] Relational verification using product programs
[6] Lessons learned from microkernel verification—specification is the new bottleneck
[7] CPAchecker: A Tool for Configurable Software Verification
[8] The ASM refinement method
[9] HoIce: An ICE-Based Non-linear Horn Clause Solver
[10] Using Crash Hoare Logic for certifying the FSCQ file system
[11] Code-level model checking in the software development workflow
[12] Continuous formal verification of Amazon s2n
[13] Verifying low-level implementations of high-level datatypes
[14] Solving Horn Clauses on Inductive Data Types Without Induction
[15] Data refinement: Model-oriented proof methods and their comparison
[16] A constructive approach to the problem of program correctness
[17] KIV: Overview and VerifyThis competition. Software Tools for Technology Transfer (STTT)
[18] Automated discovery of simulation between programs
[19] Sampling Invariants from Frequency Distributions
[20] Solving Constrained Horn Clauses Using Syntax and Data
[21] Quantified Invariants via Syntax-Guided Synthesis
[22] Automating regression verification
[23] Inference rules for proving the equivalence of recursive procedures
[24] The SeaHorn Verification Framework
[25] Data refinement refined
[26] Unified theories of programming
[27] The ELDARICA Horn Solver
[28] Synthesis of recursive ADT transformations from reusable templates
[29] Systematic software development using VDM
[30] seL4: Formal verification of an operating-system kernel
[31] Synthesis modulo recursive functions
[32] SMT-Based Model Checking for Recursive Programs
[33] Specifying systems: the TLA+ language and tools for hardware and software engineers
[34] Program extrapolation with Jennisys
[35] Formal verification of a realistic compiler
[36] A behavioral notion of subtyping
[37] An algebraic definition of simulation between programs
[38] Data-driven inference of representation invariants
[39] Property Directed Inference of Relational Invariants
[40] Z3: An efficient SMT solver
[41] Witnessing program transformations
[42] Hyperkernel: Push-button verification of an OS kernel
[43] Continuous reasoning: scaling the impact of formal methods
[44] Exploiting Synchrony and Symmetry in Relational Verification
[45] Composition and refinement in the B-method
[46] cvc4sy: Smart and Fast Term Enumeration for Syntax-Guided Synthesis
[47] Induction for SMT solvers
[48] Development of a verified Flash file system
[49] Data-driven Equivalence Checking
[50] Push-button verification of file systems via crash refinement
[51] Regression verification for unbalanced recursive functions
[52] Decision procedures for algebraic data types with abstractions. SIGPLAN Notices
[53] Automating Induction for Solving Horn Clauses
[54] Program development by stepwise refinement
[55] Lemma Synthesis for Automating Induction over Algebraic Data Types
[56] Verifying software network functions with no verification expertise