The Spirit of Node Replication
Delia Kesner, Loïc Peyrot, Daniel Ventura
Foundations of Software Science and Computation Structures, 2021-03-23. DOI: 10.1007/978-3-030-71995-1_18

Abstract. We define and study a term calculus implementing higher-order node replication. It is used to specify two different (weak) evaluation strategies: call-by-name and fully lazy call-by-need, which are shown to be observationally equivalent by means of type-theoretical tools.

Computation in the λ-calculus is based on higher-order substitution, a complex operation able to erase and copy terms during evaluation. Several formalisms have been proposed to model higher-order substitution, going from explicit substitutions (ES) [1] (see a survey in [41]) and labeled systems [15] to pointer graphs [60] or optimal sharing graphs [49]. The model of copying behind each of these formalisms is not the same. Indeed, suppose one wants to substitute all the free occurrences of some variable x in a term t by some term u. We can imagine at least four ways to do that. (1) A drastic solution is a one-shot substitution, called non-linear (or full) substitution, based on simultaneously replacing all the free occurrences of x in t by the whole term u. This notion is generally defined by induction on the structure of the term t. (2) A refined method substitutes one free occurrence of x at a time, the so-called linear (or partial) substitution. This notion is generally defined by induction on the number of free occurrences of x in the term t. An orthogonal approach replicates one term constructor of u at a time, instead of replicating u as a whole; we call it here node replication. This notion can be defined by induction on the structure of the term u, and also admits two versions: (3) non-linear, i.e. simultaneously replacing all the occurrences of x in t, or (4) linear. The linear version of the node replication approach can be formally defined by combining (2) and (3).

It is not surprising that different notions of substitution give rise to different evaluation strategies. Indeed, linear substitution is the common model in well-known abstract machines for call-by-name and call-by-value (see e.g. [3]), while (linear) node replication is used to implement fully lazy sharing [60]. However, node replication, originally introduced to implement optimal graph reduction in a graphical formalism, has only been studied from a Curry-Howard perspective by means of a term language known as the atomic λ-calculus [33].

The Atomic Lambda-Calculus. The Curry-Howard isomorphism uncovers a deep connection between logical systems and term calculi. It is then not surprising that different methods to implement substitution correspond to different ways to normalize logical proofs. Indeed, full substitution (1) can be explained in terms of natural deduction, while partial substitution (2) corresponds to cut elimination in Proof-Nets [2]. Replication of nodes (3)-(4) is based on a Curry-Howard interpretation of deep inference [32, 33]. Indeed, the logical aspects of intuitionistic deep inference are captured by the atomic λ-calculus [33], where copying of terms proceeds atomically, i.e. node by node, similarly to the optimal graph reduction of Lamping [49]. The atomic λ-calculus is based on explicit control of resources such as erasure and duplication.
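To make the contrast between these copying models concrete, here is a minimal, non-authoritative Haskell sketch of full (meta-level) substitution t{x\u}, defined by induction on t; node replication would instead proceed by induction on the structure of u, copying one constructor at a time, as formalised by the λR-calculus below. The data type and function names are ours, not the paper's, and the sketch assumes Barendregt's convention (bound and free names are kept distinct), so no renaming is performed.

```haskell
-- Pure λ-terms (illustrative names; not from the paper's formal development).
data Term = Var String
          | App Term Term
          | Abs String Term
  deriving Show

-- Full (non-linear) substitution t{x\u}: every free occurrence of x in t is
-- replaced by the whole term u, by induction on t.  We assume Barendregt's
-- convention, i.e. no bound variable of t occurs free in u, so no capture
-- can happen and no renaming is needed.
fullSubst :: String -> Term -> Term -> Term
fullSubst x u (Var y)
  | y == x    = u
  | otherwise = Var y
fullSubst x u (App t1 t2) = App (fullSubst x u t1) (fullSubst x u t2)
fullSubst x u (Abs y t)
  | y == x    = Abs y t                 -- x is shadowed: nothing to substitute
  | otherwise = Abs y (fullSubst x u t)

-- One full β-step: (λx.t) u reduces to t{x\u} in a single shot.
betaFull :: Term -> Maybe Term
betaFull (App (Abs x t) u) = Just (fullSubst x u t)
betaFull _                 = Nothing
```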
The operational semantics of the atomic λ-calculus explicitly handles the structural constructors of weakening and contraction, as in the calculus of resources λlxr [43, 44]. As a result, understanding the meta-properties of the term calculus at a higher level, as well as applying it to concrete implementations of reduction strategies in programming languages, turns out to be quite difficult. In this paper, we take one step back, by studying the paradigm of node replication based on implicit, rather than explicit, weakening and contraction. This gives a new concise formulation of node replication which is simple enough to model different programming languages based on reduction strategies.

Call-by-Name, Call-by-Value, Call-by-Need. Call-by-name is used to implement programming languages in which arguments of functions are first copied, then evaluated. This is frequently expensive, and may be improved by call-by-value, in which arguments are evaluated first, then consumed. The difference can be illustrated by the term t = ∆(II), where ∆ = λx.xx and I = λz.z: call-by-name first duplicates the argument II, so that its evaluation is also duplicated, while call-by-value first reduces II to (the value) I, so that duplications of the argument do not cause any duplicated evaluation. Call-by-value is not always the best solution, though, because evaluating erasable arguments is useless. Call-by-need, instead, takes the best of call-by-name and call-by-value: as in call-by-name, erasable arguments are not evaluated at all, and as in call-by-value, reduction of arguments occurs at most once. Furthermore, call-by-need implements a demand-driven evaluation, in which erasable arguments are never needed (so they are not evaluated), and non-erasable arguments are evaluated only if needed. Technically, some sharing mechanism is necessary, for example by extending the λ-calculus with explicit substitutions/let constructs [7]. Then β-reduction is decomposed into at least two steps: one creating an explicit (pending) substitution, and the other ones (linearly) substituting values. Thus for example, (λx.xx)(II) reduces to (xx)[x\II], and the substitution argument is then evaluated in order to find a value before performing the linear substitution. Even when adopting this wise evaluation scheme, there are still some unnecessary copies of redexes: while only values (i.e. abstractions) are duplicated, they may contain redexes as subterms, e.g. λz.z(II), whose subterm II is a redex. Duplication of such values might cause redex duplications in weak call-by-need (i.e. when evaluation is forbidden inside abstractions). This happens in particular in the confluent variant of weak reduction in [52].

Full laziness. Alas, it is not possible to keep all values shared forever, typically when they potentially contribute to the creation of a future β-reduction step. The key idea to gain in efficiency is then to keep the subterm II as a shared redex. Therefore, the (full) value λz.z(II) to be copied is split into two separate parts. The first one, called skeleton, contains the minimal information preserving the bound structure of the value, i.e. the linked structure between the binder and each of its (bound) variables. In our example, this is the term λz.zy, where y is a fresh variable. The second one is a multiset of maximal free expressions (MFE), representing all the shareable expressions (here only the term II).
Only the skeleton λz.zy is then copied, while the problematic redex II remains shared in an explicit substitution binding the fresh variable y to II. When the subterm II is needed later, it is first reduced inside the ES, as is usual in (standard) call-by-need, thus avoiding computing the redex twice. This optimization is called fully lazy sharing and is due to Wadsworth [60]. In the confluent weak setting evoked earlier [52], the fully lazy optimization is even optimal in the sense of Lévy [51]. This means that the strategy reaches the weak normal form in the same number of β-steps as the shortest possible weak reduction sequence in the usual λ-calculus without sharing. Thus, fully lazy sharing turns out to be a decidable optimal strategy, in contrast to other weak evaluation strategies in the λ-calculus without sharing, which are also optimal but not decidable [11].

Contributions. The first contribution of this paper is a term calculus implementing (full) node replication and internally encoding skeleton extraction (Sec. 2). We study some of its main operational properties: termination of the substitution calculus, confluence, and its relation with the λ-calculus. Our second contribution is the use of the node replication paradigm to give an alternative specification of two evaluation strategies usually described by means of full or linear substitution: call-by-name (Sec. 4.1) and weak fully lazy reduction (Sec. 4.2), based on the key notion of skeleton. The former can be related to (weak) head reduction, while the latter is a fully lazy version of (weak) call-by-need. In contrast to other implementations of fully lazy reduction relying on (external) meta-level definitions, our implementation is based on formal operations internally defined over the term syntax of the calculus. Furthermore, while it is known that call-by-name and call-by-need specified by means of full/linear substitution are observationally equivalent [7], it was not clear at first whether the same property would hold in our case. Our third contribution is a proof of this result (Sec. 6) using semantical tools coming from proof theory, notably intersection types. This proof technique [42] considerably simplifies other approaches [7, 54] based on syntactical tools. Moreover, the use of intersection types has another important consequence: standard call-by-name and call-by-need turn out to be observationally equivalent to call-by-name and call-by-need with node replication, as well as to the more semantical notion of neededness (see [45]). Intersection types provide quantitative information about fully lazy evaluation, so that a fourth contribution of this work is a measure based on type derivations which turns out to be an upper bound to the length of reduction sequences to normal forms in a fully lazy implementation. More generally, our work bridges the gap between the Curry-Howard theoretical understanding of node replication and concrete implementations of fully lazy sharing. Related work is presented in the concluding Sec. 7.

We now present the syntax and operational semantics of the λR-calculus (R for Replication), as well as a notion of level playing a key role in the next sections.

Syntax. Given a countably infinite set X of variables x, y, z, ..., we consider the following grammars of terms and pure terms: t, u ::= x | t u | λx.t | t[x\u] | t[x\\λy.u] and p, q ::= x | p q | λx.p. The set of terms (resp. pure terms) is denoted by Λ_R (resp. Λ). We write |t| for the size of t, i.e. for its number of constructors. We write I for the identity function λx.x.
The construction [x\u] is an explicit substitution (ES), and [x\\λy.u] an explicit distributor: the first one is used to copy arbitrary terms, while the second one is used specifically to duplicate abstractions. We write [x u] to denote an explicit cut in general, which is either [x\u], or [x\\u] when u is an abstraction λy.u', typically to factorize definitions and proofs where both cases behave similarly. When using the general notation t[x u], we associate with the cut a weight equal to 1 if the cut is an ES, and 0 otherwise.

We use two notions of contexts. Term contexts C extend those of the λ-calculus to explicit cuts. List contexts L denote an arbitrary list of explicit cuts. They will be used to implement reduction at a distance in the operational semantics defined below. Free/bound variables of terms are defined as usual, notably fv(t[x u]) := (fv(t)\{x}) ∪ fv(u). These notions are extended to contexts as expected, in particular fv(◊) := ∅, where ◊ denotes the hole of a context. The domain of a list context is given by dlc(◊) := ∅ and dlc(L[x u]) := dlc(L) ∪ {x}. α-conversion [13] is extended to λR-terms as expected and used to avoid capture of free variables. We write t{x\u} for the meta-level (capture-free) substitution simultaneously replacing all the free occurrences of the variable x in t by the term u. The application of a context C to a term t, written C⟨t⟩, replaces the hole ◊ of C by t. For instance, ◊⟨t⟩ = t and (λx.◊)⟨t⟩ = λx.t. This operation is not defined modulo α-conversion, so that capture of variables may happen. Thus, we also consider another kind of application of contexts to terms, denoted with double brackets, which is only defined if there is no capture of variables. For instance, (λy.◊)⟨⟨x⟩⟩ = λy.x, while (λx.◊)⟨⟨x⟩⟩ is undefined.

Operational semantics. ES may block some expected meaningful (i.e. non-structural) reductions. For instance, β-reduction is blocked in (λx.t)[y\s] u because an ES lies between the function and its argument. This kind of stuck redex does not happen in graphical representations (e.g. [28]), but it is typical of the sequential structure of term syntaxes. There are at least two ways to handle this issue. The first one is based on structural/permutation rules, as in [33], where the substitution is first pushed outside the application node, as in (λx.t)[y\s] u → ((λx.t) u)[y\s], so that β-reduction is finally unblocked. The second, less elementary, possibility is given by an operational semantics at a distance [6, 4], where the β-rule can be fired by a rule like L⟨λx.t⟩ u → L⟨t[x\u]⟩, L being an arbitrary list context. The distance paradigm is therefore used to gather meaningful and permutation rules into only one reduction step. In λR, we combine these two technical tools. First, we consider the following permutation rules, all of which are constrained by the condition x ∉ fv(t). The reduction relation →π is defined as the closure of the permutation rules under all contexts. It does not carry any computational content, only a structural one that unblocks redexes by moving explicit cuts outwards. In order to highlight the computational content of node replication, we combine distance and permutations within the λR-calculus, given by the closure of the following rules under all contexts. Notice in the five rules above that the (meta-level) substitution is full (it is performed simultaneously on all free occurrences of the variable x), and the list context L is always pushed outside the term t. We will highlight in green such list contexts in the forthcoming examples to improve readability.
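As a small, non-authoritative illustration of the syntax and of reduction at a distance, here is a Haskell sketch (constructor and function names are ours, not the paper's): terms of Λ_R with explicit substitutions and distributors, and the dB rule L⟨λx.t⟩ u → L⟨t[x\u]⟩, which fires a β-redex through an arbitrary list of surrounding cuts.

```haskell
-- Terms of the λR-calculus (illustrative encoding).
data TermR
  = Var String
  | App TermR TermR
  | Lam String TermR
  | ES   TermR String TermR   -- t[x\u]  : explicit substitution
  | Dist TermR String TermR   -- t[x\\u] : explicit distributor (u an abstraction)
  deriving Show

-- The dB rule at a distance: L⟨λx.t⟩ u → L⟨t[x\u]⟩.
-- We look under an arbitrary list of explicit cuts L for an abstraction in
-- function position and create an ES in place, without first permuting the
-- cuts out.  By α-conversion we assume the bound names of L and x do not
-- occur free in the argument, so no capture happens.
stepDB :: TermR -> Maybe TermR
stepDB (App fun arg) = go fun
  where
    go (Lam x body) = Just (ES body x arg)            -- L is empty
    go (ES t x u)   = fmap (\t' -> ES t' x u)   (go t)
    go (Dist t x u) = fmap (\t' -> Dist t' x u) (go t)
    go _            = Nothing
stepDB _ = Nothing

-- Example: ((λx.x x)[y\I]) (I I) steps to ((x x)[x\I I])[y\I] in one dB step.
```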
Apart from rule dB, used to fire β-reductions, there are four substitution rules used to copy abstractions, applications and variables, pushing outside all the cuts surrounding the node to be copied. Rule app copies one application node, while rule var copies one variable node. The case of abstractions is more involved, as explained below. The specificity in copying an abstraction λy.u is due to the (binding) relation between λy and all the free occurrences of y in its body u. Abstractions are thus copied in two stages. The first one is implemented by the rule dist, creating a distributor in which a potentially replaceable abstraction is placed, while moving its body inside a new ES. There are then two ways to replicate nodes of the body. Either they can be copied inside the distributor (where the binding relation between λy and the bound occurrences of y is kept intact), or they can be pushed outside the distributor, by means of the (non-deterministic) rule abs. In the second case, however, free occurrences of y cannot be pushed outside the abstraction (with binder y) to be duplicated, at the risk of breaking consistency: only shared components without links to y can then be pushed outside. These components are gathered together into a list context L, which is pushed outside by using permutation rules, before performing the substitution of the pure body containing all the bound occurrences of y. Specifying this operation using only distance is hard, thus permutation rules are also used in our rule abs.

The s-substitution relation →s (resp. distant Beta relation →dB) is defined as the closure of →app ∪ →dist ∪ →abs ∪ →var (resp. →dB) under all contexts, and the reduction relation →R is the union of →s and →dB. In what follows, we underline the subterm where the reduction is performed.

Let R be any reduction relation. We write →*R for the reflexive-transitive closure of →R. A term t is said to be R-confluent iff t →*R u and t →*R s imply that there is t' such that u →*R t' and s →*R t'. The relation R is confluent iff every term is R-confluent. A term t is said to be in R-normal form (also written R-nf) iff there is no t' such that t →R t'. A term t is said to be R-terminating or R-normalizing iff there is no infinite R-sequence starting at t. The reduction R is said to be terminating iff every term is R-terminating.

Levels. The notion of level plays a key role in this work. Intuitively, the level of a variable in a term indicates the maximal depth of its free occurrences w.r.t. ES (and not w.r.t. explicit distributors). However, in order to keep soundness w.r.t. the permutation rules, levels are computed along linked chains of ES. For instance, the level of w in both x[x\y[y\w]] and x[x\y][y\w] is 2. Formally, the level of a variable z in a term t is defined by (structural) induction, assuming by α-conversion that z is not a bound variable in t. Notice that lv_w(t) = 0 whenever w ∉ fv(t) or t is pure. We illustrate the concept of level by an example. Consider t = x[x\z[y\w]][w\w']; then lv_z(t) = 1, lv_{w'}(t) = 3 and lv_y(t) = 0 because y ∉ fv(t). This notion is also extended to contexts as expected, i.e. lv_◊(C) := lv_z(C⟨z⟩), where z is a fresh variable. It is worth noticing that there are two cases in which the level of a variable in a term may decrease: using a permutation rule to push an explicit cut out of another cut when the first one is a void cut, or using rule →var. Hence, levels alone are not enough to prove termination of →s.
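The inductive definition of levels did not survive extraction above. The following Haskell sketch is one plausible reconstruction, offered purely as an assumption: it follows the informal description (levels grow by one when crossing an ES, chains of ES are followed through the cut variable, and distributors do not increase the level) and agrees with the examples in the text, but it is not the paper's official definition.

```haskell
import qualified Data.Set as Set

-- Same illustrative term type as in the earlier sketch.
data TermR = Var String | App TermR TermR | Lam String TermR
           | ES TermR String TermR | Dist TermR String TermR

-- Free variables.
fv :: TermR -> Set.Set String
fv (Var x)      = Set.singleton x
fv (App t u)    = fv t `Set.union` fv u
fv (Lam x t)    = Set.delete x (fv t)
fv (ES t x u)   = Set.delete x (fv t) `Set.union` fv u
fv (Dist t x u) = Set.delete x (fv t) `Set.union` fv u

-- Plausible reconstruction of lv_z(t): 0 on variables, maximum over
-- applications and abstractions; for a cut t[x ◇ u] with z free in u, the
-- chain is followed through x, adding 1 only when the cut is an ES.
lv :: String -> TermR -> Int
lv _ (Var _)   = 0
lv z (App t u) = max (lv z t) (lv z u)
lv z (Lam _ t) = lv z t
lv z (ES t x u)
  | z `Set.member` fv u = max (lv z t) (1 + max (lv x t) (lv z u))
  | otherwise           = lv z t
lv z (Dist t x u)
  | z `Set.member` fv u = max (lv z t) (max (lv x t) (lv z u))
  | otherwise           = lv z t

-- Checks against the examples in the text:
-- lv "w"  (ES (Var "x") "x" (ES (Var "y") "y" (Var "w")))                            == 2
-- lv "w"  (ES (ES (Var "x") "x" (Var "y")) "y" (Var "w"))                            == 2
-- lv "w'" (ES (ES (Var "x") "x" (ES (Var "z") "y" (Var "w"))) "w" (Var "w'"))        == 3
```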
We then define a decreasing measure for →s in which not only variables but also constructors are indexed by a level. For instance, in t[x\λy.yz], we can consider that all the constructors of λy.yz have level lv_x(t). This will ensure that the level of an abstraction decreases when applying rule dist, as well as the level of an application when applying rule app. This is what we do next.

We now prove three key properties of the λR-calculus: termination of the reduction system →s, the relation between λR and the λ-calculus, and confluence of the reduction system →λR.

Termination of →s. Some (rather informal) arguments are provided in [33] to justify termination of the substitution subrelation of their whole calculus. We expand these ideas into an alternative full formal proof adapted to our case, which is based on a measure that is strictly decreasing w.r.t. →s. We consider a set O of objects of the form a(k, n) or b(k) (k, n ∈ N), equipped with a suitable ordering >_O. We write >_O^mul for the multiset extension of the order >_O on O, which turns out to be well-founded [8] by Lem. 3. We are now ready to (inductively) define our cuts level measure C(·) on terms, which relies on an auxiliary operation on multisets. Intuitively, the integer k in a(k, n) and b(k) counts the level of variables bound by explicit cuts, while n counts the size of terms to be substituted by an ES. Remark that for every pure term p we have C(p) = [ ].

Corollary 5. The reduction relation →s is terminating.

Simulations. We show the relation between λR and the λ-calculus, as well as the atomic λ-calculus. For that, we introduce a projection t↓ from λR-terms to λ-terms implementing the unfolding of all the explicit cuts: x↓ := x, (λx.t)↓ := λx.t↓, (t u)↓ := t↓ u↓, and t[x u]↓ := t↓{x\u↓} for both kinds of cuts.

Lemma 7 (Simulation of the λ-calculus). Let p0 ∈ Λ. If p0 →β p1, then p0 →dB →+s p1.

The previous results have an important consequence relating the original atomic λ-calculus and the λR-calculus. Indeed, it can be shown that reduction in the atomic λ-calculus is captured by λR, and vice versa. More precisely, the λR-calculus can be simulated in the atomic λ-calculus by Lem. 6 and [33], while the converse holds by [33] and Lem. 7. A more structural correspondence between λR and the atomic λ-calculus could also be established. Indeed, λR can first be refined into a (non-linear) calculus without distance, say λR', so that permutation rules are integrated in the intermediate calculus as independent rules. Then a structural relation can be established between λR and λR' on one side, and λR' and the atomic λ-calculus on the other side (as done for example in [43] for the λ-calculus).

Confluence. By Cor. 5 the reduction relation →s is terminating. It is then not difficult to conclude confluence of →s by using the unfolding function ↓. Therefore, by termination of →s, any t ∈ Λ_R has an s-nf, and by confluence this s-nf is unique (and computed by the unfolding function). Using the interpretation method [35] together with Lem. 6, Cor. 5, and Lem. 7, one obtains:

Theorem 8. The reduction relation →R is confluent.

In the theory of programming languages [56], the notion of calculus is usually based on a non-deterministic rewriting relation, providing an equational system of calculation, while the deterministic notion of strategy is associated with a concrete machinery able to implement a specific evaluation procedure.
Typical evaluation strategies are call-by-name, call-by-value, call-by-need, etc. Although the atomic λ-calculus was introduced as a technical tool to implement full laziness, only its (non-deterministic) equational theory was studied. In this paper we bridge the gap between the theoretical presentation of the atomic λ-calculus and concrete specifications of evaluation strategies. Indeed, we use the λR-calculus to investigate two concrete cases: a call-by-name strategy implementing weak head reduction, based on full substitution, and the call-by-need fully lazy strategy, which uses linear substitution.

In both cases, explicit cuts can in principle be placed anywhere inside distributors, which would require diving deep into such terms to deal with them. We therefore restrict the set of terms to a subset U, which simplifies formal reasoning about explicit cuts inside distributors. Indeed, distributors will all be of the shape [x\\λy.L⟨p⟩], where p is a pure term (and L is a commutative list defined below). We argue that this restriction is natural in a weak implementation of the λ-calculus: it holds on pure terms and is preserved through evaluation. We consider corresponding grammars, defining in particular the sets T and U; a term t generated by any of the grammars G defined in this way is written t ∈ G. The set T is stable under the relation →s, but U is clearly not stable under the whole →R relation, where dB-reductions may occur under abstractions. However, U is stable under both weak strategies to be defined: call-by-name and call-by-need. We factorize the proofs by proving stability for a more general relation, defined as the relation →R with dB-reductions forbidden under abstractions and inside distributors.

Lemma 9 (Stability of the grammars). 1. If t ∈ T and t →s t', then t' ∈ T. 2. If t ∈ U and t reduces to t' in the restricted relation above, then t' ∈ U.

The call-by-name (CBN) strategy →name (Fig. 1) is defined on the set of terms U as the union of the relations →ndb and →ns. The strategy is weak, as there is no reduction under abstractions. It is also worth noticing (as a particular case of Lem. 9) that t ∈ U and t →name t' imply t' ∈ U. Although the strategy →name is not deterministic, it enjoys the remarkable diamond property, guaranteeing in particular that all reduction sequences starting from t and ending in a normal form have the same length.

It is worth noticing that simulation lemmas also hold between call-by-name in the λ-calculus, known as weak head reduction and denoted by →whr, and the λR-calculus. Indeed, →whr is defined as the β-reduction rule closed under the contexts E ::= ◊ | E t. Then, as a consequence of Lem. 7, p0 →whr p1 implies p0 →*R p1, and as a consequence of Lem. 6, t0 →name t1 implies t0↓ →*β t1↓. More importantly, call-by-name in the λ-calculus and call-by-name in the λR-calculus are also related. Indeed,

Lemma 11 (Relating Call-by-Name Strategies). Let p0 ∈ Λ. If p0 →whr p1, then p0 →+name p1.

We now specify a deterministic strategy flneed implementing demand-driven computation and only linearly replicating nodes of values (i.e. pure abstractions). Given a value λx.p, only the piece of structure containing the paths between the binder λx and all the free occurrences of x in p, named skeleton, will be copied. All the other components of the abstraction will remain shared, thus avoiding some future duplications of redexes, as explained in the introduction.
By copying only the smallest possible substructure of the abstraction, the strategy flneed implements an optimization of call-by-need called fully lazy sharing [60]. First, we formally define the key notions we are going to use.

A free expression [39, 9] of a pure term p is a strict subterm q of p such that every free occurrence of a variable in q is also a free occurrence of that variable in p. A free expression of p is maximal if it is not a subterm of another free expression of p. From now on, we will consider the multiset of all maximal free expressions (MFE) of a term. Thus e.g. the MFEs of λy.p, where p = (Iy)I(λz.zyw), are given by the multiset [I, I, w]. An n-ary context (n ≥ 0) is a term with n holes. A skeleton is an n-ary pure context where the maximal free expressions w.r.t. a variable set θ are replaced with holes. Formally, the θ-skeleton {{p}}_θ of a pure term p, where θ = {x1, ..., xk}, is the n-ary pure context {{p}}_θ such that {{p}}_θ⟨q1, ..., qn⟩ = p, for [q1, ..., qn] the maximal free expressions of λx1...λxk.p. Thus, for the same p as before, λy.{{p}}_{y} = λy.(◊y)◊(λz.zy◊).

The Splitting Operation. Splitting a term into a skeleton and a multiset of MFEs is at the core of full laziness. This can naturally be implemented in the node replication model, as observed in [33]. Here, we define a (small-step) strategy →st on the set of terms T to achieve it (Fig. 2), which is indeed a subset of the reduction relation →R. The relation →st makes use of four basic rules, parameterized by the variable y upon which the skeleton is built and written →y. There are also two contextual (inductive) rules. Notice that the focused variable changes from y to z, then back to y. This is because →st constructs the innermost skeletons first.

Lemma 13. The reduction relation →st is confluent and terminating.

Thus, from now on, we denote by ⇓st the function relating a term of T to its unique st-nf. Since this small-step semantics is contained in the λR-calculus, we use it to build our call-by-need strategy of λR.

The strategy. The call-by-need strategy →flneed (Fig. 3) is defined on the set of terms U, by using closure under need contexts N, whose grammar includes in particular the clause N⟨⟨x⟩⟩[x\N], where ⟨⟨·⟩⟩ denotes capture-free application of contexts (Sec. 2). As for call-by-name (Sec. 4.1), the call-by-need strategy is weak, because no meaningful reduction steps are performed under abstractions. Rule dB is the same one used to define name. Although rules spl and sub could have been presented as a unique rule substituting the skeleton of the abstraction in one shot, we prefer to keep them separate since they represent different stages of the strategy. Indeed, rule spl only uses node replication operations to compute the skeleton of the abstraction, while rule sub implements one-shot linear substitution. Notice that, as a particular case of Lem. 9, t ∈ U and t →flneed t' imply t' ∈ U. Another interesting property is that t →sub t' implies lv_z(t) ≥ lv_z(t'). Moreover, →flneed is deterministic.

Example 15. Let t0 = (λx.I(Ix)) λy.yI. Needed variable occurrences are highlighted in orange.
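Before moving to the type system, here is a small, non-authoritative Haskell sketch of the splitting operation computed directly at the meta level; it is not the paper's small-step →st strategy, and the data type and function names are ours. Given a set θ of variables that must not escape, it replaces every maximal subterm containing no variable of θ (and no variable bound above it) by a hole, collecting those subterms as MFEs.

```haskell
import qualified Data.Set as Set

-- Pure λ-terms extended with a hole, used to represent n-ary skeleton contexts.
data Term = Var String | App Term Term | Abs String Term | Hole
  deriving Show

fv :: Term -> Set.Set String
fv (Var x)   = Set.singleton x
fv (App t u) = fv t `Set.union` fv u
fv (Abs x t) = Set.delete x (fv t)
fv Hole      = Set.empty

-- split theta p returns (mfes, skel): every maximal subterm of p containing no
-- variable of theta (nor any variable bound above it in p) is replaced by a
-- hole in skel and collected, left to right, in mfes.  For instance, with
-- theta = {y} and p = (I y) I (λz. z y w), we get mfes = [I, I, w] and
-- skel = (◊ y) ◊ (λz. z y ◊), i.e. the y-skeleton of p from the text.
split :: Set.Set String -> Term -> ([Term], Term)
split theta t
  | Set.null (fv t `Set.intersection` theta) = ([t], Hole)  -- a maximal free expression
split theta (App t u) = (mt ++ mu, App t' u')
  where (mt, t') = split theta t
        (mu, u') = split theta u
split theta (Abs x t) = (mt, Abs x t')
  where (mt, t') = split (Set.insert x theta) t             -- x must not escape either
split _ t = ([], t)                                         -- a variable of theta

-- Example from the text:
-- let i = Abs "x" (Var "x")
--     p = App (App (App i (Var "y")) i)
--             (Abs "z" (App (App (Var "z") (Var "y")) (Var "w")))
-- in  split (Set.fromList ["y"]) p
```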
Indeed, not only typability and normalization can be proved to be equivalent, but a measure based on type derivations provides an upper bound to normalizing reduction sequences. This was extensively investigated in different logical/computational frameworks [5, 18, 20, 25, 42, 47] . However, no quantitative result based on types exists in the literature for the node replication model, including the attempts done for deep inference [30] . The typing rules of our system are in themselves not surprising (see [46] ), but they provide a handy quantitative characterization of fully lazy normalization (Sec. 6). Types are built on the following grammar of types and multi-types, where α ranges over a set of base types and a is a special type constant used to type terms reducing to normal abstractions. ). This notion is extended to several contexts as expected, so that + i∈I Γ i denotes a finite union of contexts, and the empty context when I = ∅. We write Γ ; ∆ for Γ + ∆ when dom(Γ ) ∩ dom(∆) = ∅. Type judgments have the form Γ t : σ, where Γ is a typing context, t is a term and σ is a type. A (typing) derivation is a tree obtained by applying the (inductive) typing rules of system V (Fig. 4) , introduced in [46]. The notation Φ £ Γ t : σ means there is a derivation named Φ of the judgment Γ t : σ in system V. A term t is typable in system V, or V-typable, iff there is a context Γ and a type σ such that Φ £ Γ t : σ. The size of a type derivation sz(Φ) is defined as the number of its abs, app and ans rules. The typing system is relevant in the sense that Φ £ Γ t : σ implies dom(Γ ) ⊆ fv(t). Type derivations can be measured by 3-tuples. We use a + operation on 3-tuples as pointwise addition: (a, b, c) + (e, f, g) = (a + e, b + f, c + g). These 3tuples are computed by a weighted derivation level function defined on typing derivations as D (Φ) := M (Φ, 1), where M (−, −) is inductively defined below. In the cases (abs), (app) and (cut), we let Φ t (resp. Φ u ) be the subderivation of the type of t (resp. Φ u ) and in (many) we let Φ i t be the i-th derivation of the type of t for each i ∈ I. 1, m, 0) . m . Notice that the first and the third components of any 3-tuple M (Φ, m) do not depend on m. Intuitively, the first (resp. third) component of the 3-tuple counts the number of application/abstraction (resp. (ax)) rules in the typing derivation. The second one takes into account the number of application/abstraction rules as well, but weighted by the level of the constructor. The 3-tuples are ordered lexicographically. The type system V characterizes normalization of both name and flneed strategies as follows: every typable term normalizes and every normalisable term is typable. In this sense, system V can be seen as a (quantitative) model [17] of our call-by-name and call-by-need strategies. We prove these results by studying the appropriate lemmas, notably weighted subject reduction and weighted subject expansion. We then deduce observational equivalence between the name and the flneed strategies from the fact that their associated normalization properties are both fully characterized by the same typing system. Soundness. Soundness of system V w.r.t. both → name and → flneed is investigated in this section. More precisely, we show that typable terms are normalizing for both strategies. In contrast to reducibility techniques needed to show this kind of result for simple types [34] , soundness is achieved here by relatively simple combinatorial arguments based again on decreasing measures. 
We start by studying the interaction between system V and linear as well as full substitution. The key idea to show soundness is that the measure D(·) decreases w.r.t. the reduction relations →name and →flneed (Lem. 19).

Proof. By induction on r ∈ {π, s, ndb, flneed}, using Lem. 17 and Cor. 18.

Theorem 20 (Typability implies name-Normalization). Let Φt ▷ Γ ⊢ t : σ. Then t is name-normalizing.

Proof. Suppose t is not name-normalizing. Since →s is terminating by Cor. 5, every infinite →name-reduction sequence starting at t must necessarily contain an infinite number of dB-steps. Moreover, all terms in such an infinite sequence are typed, by Lem. 19. Therefore, Lem. 19:3 (resp. Lem. 19:2) guarantees that all dB (resp. s) reduction steps involved in such a →name-reduction sequence strictly decrease (resp. do not increase) the measure D(·). This leads to a contradiction, because the order > on the 3-tuples D(·) is well-founded. Hence t is necessarily name-normalizing.

Theorem 21 (Typability implies flneed-Normalization). Let Φt ▷ Γ ⊢ t : σ. Then t is flneed-normalizing. Moreover, D(Φt) is an upper bound to the length of the flneed-reduction sequence to flneed-nf.

Proof. The property holds by Lem. 19:4, since the lexicographic order on 3-tuples is well-founded.

Completeness. We address here completeness of system V with respect to →name and →flneed. More precisely, we show that terms that are normalizing in each strategy are typable. The basic property needed to show this consists in guaranteeing that normal forms are typable. The corresponding lemma (Lem. 22) makes use of a notion of needed variable. Because name-nfs are also flneed-nfs, we infer the following corollary for free.

Corollary 23 (name-nfs are Typable). Let t be in name-nf. Then there is a derivation Φ ▷ Γ ⊢ t : τ.

Now we need lemmas stating the behavior of partial and full (anti-)substitution w.r.t. typing. To achieve completeness, we show that typing is preserved by anti-reduction. We decompose the property into the statement of Lem. 26.

Proof. The proof is by induction on →r and uses Lem. 24 and Cor. 25.

Theorem 27 (name-Normalization implies Typability). Let t be a term. If t is name-normalizing, then t is V-typable.

Proof. Let t be name-normalizing. Then t reduces in n →name-steps to a name-nf u. We reason by induction on n. If n = 0, then t = u is typable by Cor. 23. Otherwise, we have t →name t', with t' reaching u in n−1 steps. By the i.h. t' is typable, and thus by Lem. 26 (because →ns is included in →s), t turns out to be typable as well.

Theorem 28 (flneed-Normalization implies Typability). Let t be a term. If t is flneed-normalizing, then t is V-typable.

Proof. Similar to the previous proof, but using Lem. 22 instead of Cor. 23.

Summing up, Thms. 20, 27, 21 and 28 give: a term t is V-typable iff t is name-normalizing iff t is flneed-normalizing (Thm. 29).

All the technical tools are now available to conclude observational equivalence between our two evaluation strategies based on node replication. Let R be any reduction notion on Λ_R. Then two terms t, u ∈ Λ_R are said to be R-observationally equivalent, written t ≡_R u, if for any context C, C⟨t⟩ is R-normalizing iff C⟨u⟩ is R-normalizing.

Theorem 30. For all terms t, u ∈ Λ_R, t and u are name-observationally equivalent iff t and u are flneed-observationally equivalent.

Proof. By Thm. 29, t ≡name u means that C⟨t⟩ is V-typable iff C⟨u⟩ is V-typable, for all C. By the same theorem, this is also equivalent to saying that C⟨t⟩ is flneed-normalizing iff C⟨u⟩ is flneed-normalizing for any C, i.e. t ≡flneed u.
Several calculi with ES bridge the gap between formal higher-order calculi and concrete implementations of programming languages (see a survey in [40]). The first such calculi, e.g. [1, 16], were all based on structural substitution, in the sense that the ES operator is syntactically propagated step by step through the term structure until a variable is reached, when the substitution finally takes place. The correspondence between ES and Linear Logic Proof-Nets [24] led to the more recent notion of calculi at a distance [6, 4, 2], shedding light on a natural and new application of the Curry-Howard interpretation. These calculi implement linear/partial substitution at a distance, where the search for variable occurrences is abstracted out by context-based rewriting rules, and thus no ES propagation rules are necessary. A third model was introduced by the seminal work of Gundersen, Heijltjes, and Parigot [33, 34], which introduced the atomic λ-calculus to implement node replication. Inspired by this last approach, we introduced the λR-calculus, capturing the essence of node replication. In contrast to [33], we work with an implicit (structural) mechanism of weakening and contraction, a design choice which aims at focusing on and highlighting the node replication model, the core of our calculus, so that we obtain a rather simple and natural formalism used in particular to specify evaluation strategies. Indeed, besides the proof of the main operational meta-level properties of our calculus (confluence, termination of the substitution calculus, simulations), we use linear and non-linear versions of λR to specify evaluation strategies based on node replication, namely call-by-name and call-by-need.

The first description of call-by-need was given by Wadsworth [60], where reduction is performed on graphs instead of terms. Weak call-by-need on terms was then introduced by Ariola and Felleisen [7], and by Maraist, Odersky and Wadler [54, 53]. Reformulations were introduced by Accattoli, Barenbaum and Mazza [3] and by Chang and Felleisen [22]. Our call-by-need strategy is inspired by the calculus in [3], which uses the distance paradigm [6] to gather meaningful and permutation rules together, clearly separating multiplicative from exponential rules, in the sense of Linear Logic [27].

Full laziness has been formalized in different ways. Pointer graphs [60, 59] are DAGs allowing for an elegant representation of sharing. Labeled calculi [15] implement pointer graphs by adding annotations to λ-terms, which makes the syntax more difficult to handle. Lambda-lifting [38, 39] implements full laziness by resorting to translations from λ-terms to supercombinators. In contrast to all the previous formalisms, our calculus is defined on standard λ-terms with explicit cuts, without the use of any complementary syntactic tool. So is Ariola and Felleisen's call-by-need [7]; however, their notion of full laziness relies on external (ad-hoc) meta-level operations used to extract the skeleton. Our specification of call-by-need enjoys fully lazy sharing, where the skeleton extraction operation is internally encoded in the operational semantics of the term calculus. Last but not least, our calculus has strong links with proof theory, notably deep inference.

Balabonski [10, 9] relates many formalisms of full laziness and shows that they are equivalent when considering the number of β-steps to a normal form.
It would then be interesting to understand whether his unified approach, (abstractly) stated by means of the theory of residuals [50, 51], applies to our own strategy.

We have also studied the calculus from a semantical point of view, by means of intersection types. Indeed, the type system can be seen as a model of our implementations of call-by-name and call-by-need, in the sense that typability and normalization turn out to be equivalent. Intersection types go back to [23] and have been used to provide characterizations of qualitative [14] as well as quantitative [21] models of the λ-calculus, where typability and normalization coincide. Quantitative models specified by means of non-idempotent types [26, 48] were first applied to the λ-calculus (see a survey in [19]) and ever since to several other formalisms, such as call-by-value [25, 20], call-by-need [42, 5], call-by-push-value [31, 18] and classical logic [47]. In the present work, we achieve for the first time a quantitative characterization of fully lazy normalization, which provides upper bounds for the length of reduction sequences to normal forms. The characterizations provided by intersection type systems sometimes lead to observational equivalence results (e.g. [42]). In this work we succeed in proving observational equivalence for a fully lazy implementation of weak call-by-need, a result which would be extremely involved to prove by means of the syntactical tools of rewriting, as done for weak call-by-need in [7]. Moreover, our result implies that our node replication implementation of full laziness is observationally equivalent to standard call-by-name and to weak call-by-need (see [42]), as well as to the more semantical notion of neededness (see [45]).

A Curry-Howard interpretation of the logical switch rule of deep inference is given in [58, 57] as an end-of-scope operator, thus introducing the spinal atomic λ-calculus. That calculus implements a refined optimization of call-by-need, where only the spine of the abstraction (tighter than the skeleton) is duplicated. It would be interesting to adapt the λR-calculus to spine duplication by means of an appropriate end-of-scope operator, such as the one in [37]. Further optimizations might also be considered. Finally, this paper only considers weak evaluation strategies, i.e. with reductions forbidden under abstractions, but it would be interesting to extend our notions to full (strong) evaluation too [29, 12]. Extending full laziness to classical logic would be another interesting research direction, possibly taking preliminary ideas from [36]. We would also like to investigate (quantitative) tight types for our fully lazy strategy, as done for weak call-by-need in [5], which does not seem evident in our node replication framework.

References.
Explicit substitutions
Proof nets and the linear substitution calculus
Distilling abstract machines
A nonstandard standardization theorem
Types by need
The structural lambda-calculus
The call-by-need lambda calculus
Term rewriting and all that
La pleine paresse, une certaine optimalité : partage de sous-termes et stratégies de réduction en réécriture d'ordre supérieur
A unified approach to fully lazy sharing
Weak optimality, and the meaning of sharing
Foundations of strong call by need
The lambda calculus - its syntax and semantics
Lambda Calculus with Types (Perspectives in Logic)
Sharing in the weak lambda-calculus
Preservation of strong normalization in named lambda calculi with explicit substitution and garbage collection
On phase semantics and denotational semantics: the exponentials
The bang calculus revisited
Non-idempotent intersection types for the lambda-calculus
A semantical and operational account of call-by-value solvability
Sémantiques de la logique linéaire et temps de calcul
The call-by-need lambda calculus, revisited
A new type assignment for λ-terms
Proof nets and explicit substitutions
Collapsing non-idempotent intersection types
Discovering needed reductions using type theory
Linear logic
Proof-nets: The parallel syntax for proof-theory
A compiled implementation of strong reduction
A deep quantitative type system
The bang calculus and the two Girard's translations
A proof calculus which reduces syntactic bureaucracy
Atomic lambda calculus: A typed lambda-calculus with explicit sharing
A proof of strong normalisation of the typed atomic lambda-calculus
Confluence results for the pure strong categorical logic CCL: lambda-calculi as subsystems of CCL
The Atomic Lambda-Mu Calculus
The design and implementation of programming languages
The Implementation of Functional Programming Languages
The theory of calculi with explicit substitutions revisited
A theory of explicit substitutions with safe and full composition
Reasoning about call-by-need by means of types
Resource operators for lambda-calculus
A prismoid framework for languages with resources
Call-by-need, neededness and all that
Quantitative types for the linear substitution calculus
Non-idempotent types for classical calculi in natural deduction style
A linearization of the lambda-calculus and consequences
An algorithm for optimal lambda calculus reduction
Réductions correctes et optimales dans le lambda-calcul
Optimal reductions in the lambda-calculus
Explicit substitutions and programming languages
Call-by-name, call-by-value, call-by-need and the linear lambda calculus
The call-by-need lambda calculus
Deep-inference intersection types
Call-by-name, call-by-value and the lambda-calculus
Spinal atomic lambda-calculus
A lambda-calculus that achieves full laziness with spine duplication
Bottom-up beta-reduction: Uplinks and lambda-dags. Fundam.
Semantics and Pragmatics of the Lambda Calculus

Open Access. This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.