key: cord-0046578-r3iofe7p
authors: Ullrich, Sebastian; de Moura, Leonardo
title: Beyond Notations: Hygienic Macro Expansion for Theorem Proving Languages
date: 2020-06-06
journal: Automated Reasoning
DOI: 10.1007/978-3-030-51054-1_10
sha: b3b2b6b45a7467795fbee65e7be66262beb856fb
doc_id: 46578
cord_uid: r3iofe7p

In interactive theorem provers (ITPs), extensible syntax is not only crucial to lower the cognitive burden of manipulating complex mathematical objects, but plays a critical role in developing reusable abstractions in libraries. Most ITPs support such extensions in the form of restrictive “syntax sugar” substitutions and other ad hoc mechanisms, which are too rudimentary to support many desirable abstractions. As a result, libraries are littered with unnecessary redundancy. Tactic languages in these systems are plagued by a seemingly unrelated issue: accidental name capture, which often produces unexpected and counterintuitive behavior. We take ideas from the Scheme family of programming languages and solve these two problems simultaneously by proposing a novel hygienic macro system custom-built for ITPs. We further describe how our approach can be extended to cover type-directed macro expansion resulting in a single, uniform system offering multiple abstraction levels that range from supporting simplest syntax sugars to elaboration of formerly baked-in syntax. We have implemented our new macro system and integrated it into the upcoming version (v4) of the Lean theorem prover. Despite its expressivity, the macro system is simple enough that it can easily be integrated into other systems.

ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this chapter (10.1007/978-3-030-51054-1_10) contains supplementary material, which is available to authorized users.

Mixfix notation systems have become an established part of many modern ITPs for attaching terse and familiar syntax to functions and predicates of arbitrary arity. As a further extension, all shown systems also allow binding names inside mixfix notations. While these extensions differ in the exact syntax used, all of them share the property that at the time of the notation declaration, the system already statically knows what parts of the term are bound by the newly introduced variable. This is in stark contrast to macro systems in Lisp and related languages where the expansion of a macro (a syntactic substitution) can be specified not only by a template expression with placeholders like above, but also by arbitrary syntax transformers, i.e. code evaluated at compile time that takes and returns a syntax tree. 1 As we move to more and more expressive notations and ideally remove the boundary between built-in and user-defined syntax, we argue that we should no longer be limited by the static nature of existing notation systems and should instead introduce syntax transformers to the world of ITPs. However, as usual, with greater power comes greater responsibility. By using arbitrary syntax transformers, we lose the ability to statically determine what parts of the macro template can be bound by the macro input (and vice versa). Thus it is no longer straightforward to avoid hygiene issues (i.e. accidental capturing of identifiers; [11]) by automatically renaming identifiers.
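To ground this discussion, here is a small, hedged Lean 4 sketch of the two kinds of notation mentioned above; myAvg and mySum are hypothetical helpers introduced only for illustration, and precedence annotations are kept minimal.

```lean
-- Hypothetical helpers used only for this illustration.
def myAvg (a b : Nat) : Nat := (a + b) / 2
def mySum (f : Nat → Nat) : Nat := f 0 + f 1

-- A plain mixfix notation: no names are bound by the notation itself.
infixl:65 " <~> " => myAvg

-- A binding notation: the identifier supplied for `x` is bound inside the
-- right-hand side's `fun`, so `i` in the body below refers to that binder.
notation "sum " x ", " b => mySum (fun x => b)

#eval 2 <~> 4       -- 3
#eval sum i, i * 2  -- mySum (fun i => i * 2) = 0*2 + 1*2 = 2
```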
We propose to learn from and adapt to interactive theorem provers the macro hygiene systems implemented in the Scheme family of languages, in order to obtain more general but still well-behaved notation systems. After giving a practical overview of the new, macro-based notation system we implemented in the upcoming version of Lean (Lean 4) in Sect. 2, we describe the issue of hygiene and our general hygiene algorithm, which should be just as applicable to other ITPs, in Sect. 3. Section 4 gives a detailed description of the implementation of this algorithm in Lean 4. In Sect. 5, we extend the use case of macros from mere syntax substitutions to type-aware elaboration. Finally, we have already encountered hygiene issues in the current version of Lean in a different part of the system: the tactic framework. We discuss how these issues are inevitable when implementing reusable tactic scripts and how our macro system can be applied to this hygiene problem as well in Sect. 6.

Contributions. We present a system for hygienic macros optimized for theorem proving languages as implemented 2 in the next version of the Lean theorem prover, Lean 4.
- We describe a novel, efficient hygiene algorithm to employ macros in ITP languages at large: a combination of a white-box, effect-based approach for detecting newly introduced identifiers and an efficient encoding of scope metadata.
- We show how such a macro system can be seamlessly integrated into existing elaboration designs to support type-directed expansion even if they are not based on homogeneous source-to-source transformations.
- We show how hygiene issues also manifest in tactic languages and how they can be solved with the same macro system. To the best of our knowledge, the tactic language in Lean 4 is the first tactic language in an established theorem prover that is automatically hygienic in this regard.

Lean's current notation system as shown in Sect. 1 is still supported in Lean 4, but based on a much more general macro system; in fact, the notation keyword itself has been reimplemented as a macro, more specifically as a macro-generating macro making use of our tower of abstraction levels. The corresponding Lean 4 command 3 for the example from the previous section expands to a macro declaration in which the syntactic category (term) of placeholders and of the entire macro is now specified explicitly. The right-hand side uses an explicit syntax quasiquotation to construct the syntax tree, with syntax placeholders (antiquotations) prefixed with $. As suggested by the explicit use of quotations, the right-hand side may now be an arbitrary Lean term computing a syntax object; in other words, there is no distinction between pattern-based and procedural macros in our system. We can now use this abstraction level to implement simple command-level macros, for example. Syntactic categories can be specified explicitly for antiquotations as in $id:ident where otherwise ambiguous. The macro command itself is another command-level macro that, for our example, expands to two commands, that is, a pair of a parser extension (which we will not further discuss in this paper) and a syntax transformer. Our reason for ultimately separating these two concerns is that we can now obtain a well-structured syntax tree pre-expansion, i.e. a concrete syntax tree, and use it to implement source code tooling such as auto-completion, go-to-definition, and refactorings. Implementing even just the most basic of these tools for the Lean 3 frontend, which combined parsing and notation expansion, meant that they had to be implemented right inside the parser, which was not an extensible or even maintainable approach in our experience.
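As a hedged illustration of this tower of abstraction levels, the following sketch shows a macro declaration and, separately, roughly what such a declaration abbreviates as a pair of a parser extension (syntax) and a syntax transformer (macro_rules); the operators <+> and <++> and the use of Nat.add are illustrative and not taken from the paper's elided listings.

```lean
-- A `macro` declaration with explicit syntactic categories.
macro:65 a:term " <+> " b:term : term => `(Nat.add $a $b)

-- Roughly what such a declaration abbreviates: a parser extension plus a
-- separately declared syntax transformer.
syntax:65 term " <++> " term : term

macro_rules
  | `($a <++> $b) => `(Nat.add $a $b)

#eval 1 <+> 2   -- 3
#eval 1 <++> 2  -- 3
```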
Both of these commands are in fact further macros for regular Lean definitions encoding procedural metaprograms, though users should rarely need to make use of this lowest abstraction level explicitly. Both commands can only be used at the top level; we are not currently planning support for local macros. There is no longer any need for the complicated scoped syntax since the desired translation can now be specified naturally, without any need for further annotations. The lack of static restrictions on the right-hand side ensures that this works just as well with custom binding notations, even ones whose translation cannot statically be determined before substitution. Here we explicitly make use of the lower abstraction level for its convenient syntactic pattern matching. Such transformer declarations are "open" in the sense that multiple transformers for the same declaration can be defined; they are tried in reverse declaration order by default up to the first match (though this can be customized using explicit priority annotations). As a final example, we present a partial reimplementation of the arithmetic "bigop" notations found 4 in Coq's Mathematical Components library [12], such as the notation for summing over a filtered sequence of elements. The specific bigop notations are defined in terms of a single fold operator; however, because Coq's notation system is unable to abstract over this new indexing syntax, every specific bigop notation has to redundantly repeat every specific index notation before delegating to the fold operator. In total, the 12 index notations are duplicated for 3 different bigops in the file. In contrast, using our system, we can introduce a new syntactic category for index notations, interpret it once, and define new bigops on top of it without any redundancy. The full example is included in the supplement.
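The following is a minimal, hedged sketch of the reuse pattern just described: a dedicated syntactic category for index notations that is interpreted once by a generic fold notation, with new bigops defined on top of it. All names (index, fold, foldRange, bigsum, bigprod) are illustrative and not taken from the paper's supplement.

```lean
-- A new syntactic category for index notations, interpreted only once below.
declare_syntax_cat index

syntax ident " < " term : index              -- e.g. `i < 5`
syntax ident " < " term " | " term : index   -- e.g. `i < 6 | i % 2 == 0`

-- A single folding helper (illustrative only).
def foldRange (n : Nat) (p : Nat → Bool) (f : Nat → Nat)
    (op : Nat → Nat → Nat) (init : Nat) : Nat :=
  (List.range n).foldl (fun acc i => if p i then op acc (f i) else acc) init

-- The index category is interpreted here, exactly once.
syntax "fold " term ", " term " over " index ", " term : term

macro_rules
  | `(fold $op, $init over $i:ident < $n, $body) =>
    `(foldRange $n (fun _ => true) (fun $i => $body) $op $init)
  | `(fold $op, $init over $i:ident < $n | $p, $body) =>
    `(foldRange $n (fun $i => $p) (fun $i => $body) $op $init)

-- New bigops abstract over the whole index category without repeating it.
macro "bigsum " idx:index ", " body:term : term => `(fold Nat.add, 0 over $idx, $body)
macro "bigprod " idx:index ", " body:term : term => `(fold Nat.mul, 1 over $idx, $body)

#eval bigsum i < 5, i               -- 0+1+2+3+4 = 10
#eval bigsum i < 6 | i % 2 == 0, i  -- 0+2+4 = 6
#eval bigprod i < 4 | i != 0, i     -- 1*2*3 = 6
```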
In this section, we will give a mostly self-contained description of our algorithm for automatic hygiene applied to a simple recursive macro expander; we postpone comparisons to existing hygiene algorithms to Sect. 7. Hygiene issues occur when transformations such as macro expansions lead to an unexpected capture (rebinding) of identifiers. For example, given a notation whose right-hand side wraps its argument in a binder for x, we would not expect the term x passed to it to become closed, because intuitively there is no x in scope at the notation's argument position; that the implementation of the macro makes use of the name internally should be of no concern to the macro user. Thus hygiene issues can also be described as a confusion of scopes when syntax parts are removed from their original context and inserted into new contexts, which makes name resolution strictly after macro expansion (such as in a compiler preceded by a preprocessor) futile. Instead we need to track scopes as metadata before and during macro expansion so as not to lose information about the original context of identifiers. Specifically,
1. when an identifier captured in a syntax quotation matches one or more top-level symbols 5, the identifier is annotated with a list of these symbols as top-level scopes to preserve its extra-macro context (which, because of the lack of local macros, can only contain top-level bindings), and
2. when a macro is expanded, all identifiers freshly introduced by the expansion are annotated with a new macro scope to preserve the intra-macro context.

Macro scopes are appended to a list, i.e. ordered by expansion time. This full "history of expansions" is necessary to treat macro-producing macros correctly, as we shall see in Sect. 3.2. Thus, in the expansion of the above term, a fresh macro scope 1 is appended to the macro-introduced x, preventing it from capturing the x from the original input. In general, we will style hygienic identifiers in the following as n.msc_1.msc_2. ... .msc_n{tsc_1, ..., tsc_n}, where n is the original name, the msc_i are macro scopes, and the tsc_i are top-level scopes, eliding the braces if there are no top-level scopes as in the example above. We use the dot notation to suggest both the ordered nature of macro scopes and their eventual implementation in Sect. 4.

We will now describe how to implement these operations in a standard macro expander. The macro expander described in this section bundles the execution of macros and insertion of their results with interspersed name resolution to track scopes and ensure hygiene of identifiers. As we shall see below, top-level scopes on binding names are always discarded by it. Thus we will define a symbol more formally as an identifier together with a list of macro scopes, such as x.1 above. Given a global context (a set of symbols), the expander does a conventional top-down expansion, keeping track of an initially-empty local context (another set of symbols). When a binding is encountered, the local context is extended with that symbol; top-level scopes on bindings are discarded since they are only meaningful on references. When a reference, i.e. an identifier not in binding position, is encountered, it is resolved according to the following rules:
1. If the local context has an entry for the same symbol, the reference binds to the corresponding local binding; any top-level scopes are ignored.
2. Otherwise, if the identifier is annotated with one or more top-level scopes or matches one or more symbols in the global context, it binds to all of these (to be disambiguated by the elaborator).
3. Otherwise, the identifier is unbound and an error is generated.

In the common incremental compilation mode of ITPs, every command is fully processed before subsequent commands. Thus, an expander for such a system will not extend the global context by itself, but pass the fully expanded command to the next compilation step before being called again with the next command's unexpanded syntax tree and a possibly extended global context. Notably, our expander does not add macro scopes to identifiers by itself, either, much in contrast to other expansion algorithms. We instead delegate this task to the macro itself, though in a completely transparent way for all pattern-based and for many procedural macros. We claim that a macro should in fact be interpreted as an effectful computation, since two expansions of the same identifier-introducing macro should not return the same syntax tree, so as to avoid unhygienic interactions between them. Thus, as a side effect, it should apply a fresh macro scope to each captured identifier. In particular, a syntax quotation should not merely be seen as a datum, but implemented as an effectful value that obtains and applies this fresh scope to all the identifiers contained in it to immediately ensure hygiene for pattern-based macros. Procedural macros producing identifiers not originating from syntax quotations might need to obtain and make use of the fresh macro scope explicitly.
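Concretely, the behavior this machinery guarantees for pattern-based macros can be observed in the following hedged Lean 4 sketch (the macro name constfn is illustrative):

```lean
-- The right-hand side introduces a binder `x` internally.
macro "constfn " e:term : term => `(fun x => $e)

-- The user's `x` is *not* captured by the macro's `x`:
-- `constfn x` below still refers to the parameter of `f`.
def f (x : Nat) : Nat → Nat := constfn x

#eval f 5 0  -- 5, not 0
```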
We give a specific monad-based [14] implementation of effectful syntax quotations as a regular macro in Sect. 4. Given the following input, we incrementally parse, expand, and elaborate each declaration before advancing to the next one. For a first, trivial example, let us focus on the expansion of the second line. At this point, the global context contains the symbol x (plus any default imports that we will ignore here). Descending into the right-hand side of the definition, we first add y to the local context. The reference x does not match any local definitions, so it binds to the matching top-level definition. In the next line, the built-in macro used there expands to further definitions. When a top-level macro application unfolds to multiple declarations, we expand and elaborate these incrementally as well to ensure that earlier declarations are in the global context of subsequent declarations. When recursively expanding the generated transformer declaration (we will assume for this example that this kind of declaration is itself primitive) in the extended global context, we first visit the syntax quotation on the left-hand side. The identifier e inside of it is in an antiquotation and thus not captured by the quotation. It is in binding position for the right-hand side, so we add e to the local context. Visiting the right-hand side, we find the quotation-captured identifier x and annotate it with the matching top-level definition of the same name; we do not yet know that it is in a binding position. When visiting the reference e, we see that it matches a local binding and do not add top-level scopes. When expanding the final definition with the extended global context, we descend into the right-hand side. We expand the macro given a fresh macro scope 2, which is applied to any captured identifiers. We add the symbol x.2 (discarding the top-level scope x) to the local context and finally visit the reference x. The reference does not match the local binding x.2 but does match the top-level binding x, so it binds to the latter.

Now let us briefly look at a more complex macro-macro example demonstrating use of the macro scopes stack. If we call m f, we apply a macro scope 1 to all captured identifiers, then incrementally process the two new declarations. If we call the new macro mm, we apply one more macro scope 2. When processing these new definitions, we see that the scopes ensure the expected name resolution. In particular, we now have global declarations f.1, f.2, and f.1.2 that show that storing only a single macro scope would have led to a collision.

Syntax objects in Lean 4 are represented as an inductive type of nodes (or nonterminals), atoms (or terminals), and, as a special case of nonterminals, identifiers. An additional constructor represents missing parts from syntax error recovery. Atoms and identifiers are annotated with source location metadata unless generated by a macro. Identifiers carry macro scopes inline in their Name while top-level scopes are held in a separate list. The additional Nat is an implementation detail of Lean's hierarchical name resolution. The type Name of hierarchical names precedes the implementation of the macro system and is used throughout Lean's implementation for referring to (namespaced) symbols. The syntax `a.b is a literal of type Name for use in meta-programs. The numeric part of Name is not accessible from the surface syntax and reserved for internal names; similar designs are found in other ITPs.
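For readers unfamiliar with these types, here is a hedged sketch of their shape using stand-in names; the actual Lean 4 definitions carry further fields (source information, the Nat mentioned above, cached data) and have evolved since the paper.

```lean
-- Simplified stand-ins for Lean's Name and Syntax types.
-- Hierarchical names: `a.b is str (str anonymous "a") "b"; the num constructor
-- provides the numeric parts reserved for internal names such as macro scopes.
inductive MyName where
  | anonymous : MyName
  | str : MyName → String → MyName
  | num : MyName → Nat → MyName

-- Syntax trees: nodes, atoms, identifiers, and a constructor for error recovery.
-- Identifiers carry their (macro-scoped) name plus a separate list of
-- pre-resolved top-level scopes; source information is omitted here.
inductive MySyntax where
  | missing : MySyntax
  | node  (kind : MyName) (args : Array MySyntax) : MySyntax
  | atom  (val : String) : MySyntax
  | ident (val : MyName) (preresolved : List MyName) : MySyntax
```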
By reusing Name for storing macro scopes, but not top-level scopes, we ensure that the new definition of symbol from Sect. 3.1 coincides with the existing Lean type and no changes to the implementation of the local or global context are necessary for adopting the macro system. A Lean 4 implementation of the expansion algorithm described in the previous section is given in Fig. 1 (abbreviated implementation of a recursive expander for our macro system); the full implementation including examples is included in the supplement. As a generalization, syntax transformers have the type Syntax → TransformerM Syntax, where the TransformerM monad gives access to the global context and a fresh macro scope per macro expansion. The expander itself uses an extended ExpanderM monad that also stores the local context and the set of registered macros. We use the Lean equivalent of Haskell's do notation [13] to program in these monads. As usual, the expander has built-in knowledge of some "core forms" (lines 3-17) with special expansion behavior, while all other forms are assumed to be macros and expanded recursively (lines 20-22). Identifiers form one base case of the recursion. As described in the algorithm, they are first looked up in the local context (recall that the val of an identifier includes macro scopes), then, as a fallback, in the global context plus their own top-level scopes. mkTermId : Name → Syntax creates an identifier without source information or top-level scopes, which are not needed after expansion. mkOverloadedConstant implements the Lean special case of overloaded symbols to be disambiguated by elaboration; systems without overloading support should throw an ambiguity error instead in this case. As an example of a core binding form, the expansion of a single-parameter fun is shown in lines 13-17 of Fig. 1. It recursively expands the given parameter type, then expands the body in a new local context extended with the val of id; this in particular implements the discarding of top-level scopes from binders. Finally, in the macro case, we fetch the syntax transformer for the given node kind, call it in a new context with a fresh current macro scope, and recurse.

Syntax quotations are given as one example of a macro: they do not have built-in semantics but transform into code that constructs the appropriate syntax tree (expandStxQuot in Fig. 2). More specifically, a syntax quotation will, at runtime, query the current macro scope msc from the surrounding TransformerM monad and apply it to all captured identifiers, which is done in quoteSyntax. quoteSyntax recurses through the quoted syntax tree, reflecting its constructors. Basic datatypes such as String and Name are turned into Syntax via the typeclass method quote. For antiquotations, we return their contents unreflected. In the case of identifiers, we resolve possible global references at compile time and reflect them, while msc is applied at runtime. Thus a quotation `(a + $b) inside a global context where the symbol a matches declarations a.a and b.a is transformed to (the equivalent of) code that constructs the corresponding node at runtime, annotating a with the top-level scopes a.a and b.a and applying msc to it. This implementation of syntax quotations itself makes use of syntax quotations for simplicity and thus is dependent on its own implementation in the previous stage of the compiler. Indeed, the helper variable msc must be renamed should the name already be in scope and used inside an antiquotation. Note that quoteSyntax is allowed to reference the same msc as expandStxQuot because they are part of the same macro call and the current macro scope is unchanged between them.
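The following is a minimal sketch, not Lean's actual API, of the monads described above, assuming simple set- and map-based contexts:

```lean
import Lean

-- A minimal sketch (not Lean's actual API) of the monads described in the text.
abbrev MacroScope := Nat

structure TransformerContext where
  globalCtx      : Lean.NameSet  -- symbols visible at the top level
  currMacroScope : MacroScope    -- fresh per macro expansion

abbrev TransformerM := ReaderT TransformerContext (Except String)
abbrev Transformer  := Lean.Syntax → TransformerM Lean.Syntax

structure ExpanderContext extends TransformerContext where
  localCtx : Lean.NameSet              -- symbols bound by enclosing binders
  macros   : Lean.NameMap Transformer  -- registered transformers, by node kind

abbrev ExpanderM := ReaderT ExpanderContext (Except String)

-- Reading the current macro scope, as an effectful syntax quotation would:
def currentMacroScope : TransformerM MacroScope := do
  return (← read).currMacroScope
```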
The macro system as described so far can handle most syntax sugars of Lean 3 except for ones requiring type information. For example, the anonymous constructor ⟨e, ...⟩ is sugar for (c e ...) if the expected type of the expression is known and it is an inductive type with a single constructor c. While this syntax is trivial to parse, there is no way to implement it as a macro if expansion is done strictly prior to elaboration. To the best of our knowledge, none of the ITPs listed in the introduction support hygienic elaboration extensions of this kind, but we will show how to extend their common elaboration scheme in that way in this section. Elaboration 6 can be thought of as a function in an appropriate monad ElabM 7 from a (concrete or abstract) surface-level syntax tree type Syntax to a fully-specified core term type Expr [15]. We have presented the (concrete) definition of Syntax in Lean 4 in Sect. 4; the particular definition of Expr is not important here. While such an elaboration system could readily be composed with a type-insensitive macro expander such as the one presented in Sect. 3, we would rather like to intertwine the two to support type-sensitive but still hygienic-by-default macros (henceforth called elaborators) without having to reimplement macros of the kind discussed so far. Indeed, these can automatically be adapted to the new type given an adapter between the two monads, similarly to the adaptation of macros to expanders in [6]. Because most parts of our hygiene system are implemented by the expander for syntax quotations, the only changes to an elaboration system necessary for supporting hygiene are storing the current macro scope in the elaboration monad (to be passed to the expansion monad in the adapter) and allocating a fresh macro scope in elabTerm and other recursion points, which morally now represent the starting point of a macro's expansion. Thus elaborators immediately benefit from hygiene as well whenever they use syntax quotations to construct unelaborated helper syntax objects to pass to elabTerm. In order to support syntax quotations in these two and other monads, we generalize their implementation to a new monad typeclass MonadQuotation implemented by both monads. Its second operation is not used by syntax quotations directly, but can be used by procedural macros to manually enter new macro call scopes.

As an example, the following is a simplified implementation of the anonymous constructor syntax mentioned above. The [termElab] attribute registers this elaborator for the given syntax node kind. $args* is an antiquotation splice that extracts/injects a syntactic sequence of elements into/from an Array Syntax. The array by default includes separators such as "," as Syntax.atoms in order to be lossless, which we here filter out using getSepElems. The constructor reference is synthesized hygienically from the given constant name by storing it as a top-level scope and applying a reserved macro scope to the constructed identifier. This implementation fails if the expected type is not yet sufficiently known at this point. The actual implementation 8 of this elaborator extends the code by postponing elaboration in this case. When an elaborator requests postponement, the system returns a fresh metavariable as a placeholder and associates the input syntax tree with it. Before finishing elaboration, postponed elaborators associated with unsolved metavariables are retried until they all ultimately succeed, or else elaboration is stuck because of cyclic dependencies and an error is signaled.
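As a hedged sketch of such an elaborator, the following reimplements the idea in current Lean 4 terminology (attribute term_elab, monad TermElabM), which differs in detail from the paper's presentation; the surface syntax ⟪...⟫ and all names are chosen to avoid clashing with Lean's built-in anonymous constructor and are not the paper's code.

```lean
import Lean
open Lean Elab Term Meta

-- Hypothetical surface syntax, to avoid clashing with the built-in ⟨...⟩.
syntax (name := myAnonCtor) "⟪" term,* "⟫" : term

@[term_elab myAnonCtor] def elabMyAnonCtor : TermElab := fun stx expectedType? => do
  match stx with
  | `(⟪$elems,*⟫) =>
    let some expectedType := expectedType?
      | throwError "anonymous constructor: expected type is unknown"
    -- inspect the (whnf of the) expected type for a single-constructor inductive
    let .const declName _ := (← whnf expectedType).getAppFn
      | throwError "anonymous constructor: expected type is not an inductive application"
    let .inductInfo indVal ← getConstInfo declName
      | throwError "anonymous constructor: {declName} is not an inductive type"
    let [ctor] := indVal.ctors
      | throwError "anonymous constructor: {declName} does not have exactly one constructor"
    -- build the syntax `c e ...` hygienically and elaborate it against the expected type
    elabTerm (← `($(mkCIdent ctor) $(elems.getElems)*)) expectedType?
  | _ => throwUnsupportedSyntax

-- Usage: the expected type `Nat × Nat` has the single constructor `Prod.mk`.
#eval (⟪1, 2⟫ : Nat × Nat).1  -- 1
```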
Lean 3 includes a tactic framework that, much like macros, allows users to write custom automation either procedurally inside a Tactic monad (renamed to TacticM in Lean 4), "by example" using tactic language quotations, or in a mix of both [9]. For example, Lean 3 uses a short tactic block to prove injection lemmas for data constructors. Unfortunately, this code unexpectedly broke in Lean 3 when used from a library for homotopy type theory that defined its own propext and Iff.intro declarations; 9 in other words, Lean 3 tactic quotations are unhygienic and required manual intervention in this case. Just like with macros, the issue with tactics is that binding structure in such embedded terms is not known at declaration time. Only at tactic run time do we know all local variables in the current context that preceding tactics may have added or removed, and therefore the scope of each captured identifier. Arguably, the Lean 3 implementation also exhibited a lack of hygiene in the handling of tactic-introduced identifiers: it did not prevent users from referencing such an identifier outside of the scope it was declared in. Coq's similar Ltac tactic language [5] exhibits the same issue, and users are advised not to introduce fixed names in tactic scripts but to generate fresh names using the fresh tactic first, 10 which can be considered a manual hygiene solution.

Lean 4 instead extends its automatically hygienic macro implementation to tactic scripts by allowing regular macros in the place of tactic invocations. By the same hygiene mechanism described above, introduced identifiers such as h are renamed so as not to be accessible outside of their original scope, while references to global declarations are preserved as top-level scope annotations. Thus Lean 4's tactic framework resolves both hygiene issues discussed here without requiring manual intervention by the user. Expansion of tactic macros in fact does not precede evaluation but is integrated into the tactic evaluator evalTactic : Syntax → TacticM Unit such that recursive macro calls are expanded lazily. Here the quotation kind tactic followed by a pipe symbol specifies the parser to use for the quotation, since tactic syntax may otherwise overlap with term syntax. The macro command automatically infers it from the given syntax category, but cannot be used here because the parser for repeat would not yet be available in the right-hand side. When $t eventually fails, the recursion is broken without visiting and expanding the subsequent repeat macro call. The try tactical is used to ignore this eventual failure. While we believe that macros will cover most use cases of tactic quotations in Lean 3, their use within larger TacticM metaprograms can be recovered by passing such a quotation to evalTactic; TacticM implements the MonadQuotation typeclass for this purpose.
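A hedged sketch of the recursive tactic macro just described, using a hypothetical name my_repeat to avoid Lean's built-in repeat; the separate syntax and macro_rules commands are needed precisely because the right-hand side mentions the new syntax recursively.

```lean
-- Parser extension first, so that the recursive call below can be parsed.
syntax "my_repeat " tactic : tactic

-- Transformer: `try` ignores the eventual failure that breaks the recursion;
-- the recursive `my_repeat` call is only expanded lazily, when reached.
macro_rules
  | `(tactic| my_repeat $t) => `(tactic| try ($t; my_repeat $t))

example : True := by
  my_repeat constructor
```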
The main inspiration behind our hygiene implementation was Racket's new Sets of Scopes [10] hygiene algorithm. Much like in our approach, Racket annotates identifiers both with scopes from their original context as well as with additional macro scopes when introduced by a macro expansion. However, there are some significant differences: Racket stores both types of scopes in a homogeneous, unordered set and does name resolution via a maximum-subset check. For both simplicity of implementation and performance, we have reduced scopes to a bare-minimum representation using only strict equality checks, which we can easily encode in our existing Name implementation. In particular, we only apply scopes to matching identifiers and only inside syntax quotations. This optimization is of special importance because top-level declarations in Lean and other ITPs are not part of a single, mutually recursive scope as in Racket, but each open their own scope over all subsequent declarations, which would lead to a total number of scope annotations quadratic in the number of declarations when using the Sets of Scopes algorithm. Finally, Racket detects macro-introduced identifiers using a "black-box" approach without the macro's cooperation, following the marking approach of [11]: a fresh macro scope is applied to all identifiers in the macro input, then inverted on the macro output. While elegant, a naive implementation of this approach can result in quadratic runtime compared to unhygienic expansion and requires further optimizations in the form of lazy scope propagation [7], which is difficult to implement in a pure language such as Lean. Our "white-box" approach based on the single primitive of an effectful syntax quotation, while slightly easier to escape from in procedural syntax transformers, is simple to implement, incurs minimal overhead, and is equivalent for pattern-based macros.

The idea of automatically handling hygiene in the macro, and not in the expander, was introduced in [4], though only for pattern-based macros. MetaML [18] refined this idea by tying hygiene more specifically to syntax quotations that could be used in larger metaprogram contexts, which Template Haskell [17] interpreted as effectful (monadic) computations requiring access to a fresh-names generator, much like in our design. However, both of the latter systems should perhaps be characterized more as metaprogramming frameworks than Scheme-like macro systems: there are no "macro calls" but only explicit splices, and so only built-in syntax with known binding semantics can be captured inside syntax quotations. Thus the question of which captured identifiers to rename becomes trivial again, just like in the basic notation systems discussed in Sect. 1.

While the vast majority of research on hygienic macro systems has focused on S-expression-based languages, there have been previous efforts on marrying that research with non-parenthetical syntax, with different solutions for combining syntax tree construction and macro expansion. The Dylan language requires macro syntax to use predefined terminators and eagerly scans for the end of a macro call using this knowledge [2], while in Honu [16] the syntactic structure of a macro call is discovered during expansion by a process called "enforestation". The Fortress [1] language strictly separates the two concerns into grammar extensions and transformer declarations, much like we do. Dylan and Fortress are restricted to pattern-based macro declarations and thus can make use of simple hygiene algorithms, while Honu uses the full generality of the Racket macro expander. On the other hand, Honu's authors "explicitly trade expressiveness for syntactic simplicity" [16]. In order to express the full Lean language and desirable extensions in a macro system, we require both unrestricted macro syntax and procedural transformers. Many theorem provers not already based on a macro-powered language, such as Coq, Agda, Idris, and Isabelle, provide restricted syntax extension mechanisms, circumventing hygiene issues by statically determining binding as seen in Sect. 1. Extensions that go beyond that do not come with automatic hygiene guarantees.
Agda's macros 11, for example, operate on the De Bruijn index-based core term level and are not hygienic. 12 The ACL2 prover, in contrast, uses a subset of Common Lisp as its input language and adapts the hygiene algorithm of [7] based on renaming [8]. The experimental Cur [3] theorem prover is a kind of dual to our approach: it takes an established language with hygienic macros, Racket, and extends it with a dependent type system and theorem proving tools. ACL2 does not support tactic scripts, while in Cur they can be defined via regular macros. However, this approach does not currently provide tactic hygiene as defined in Sect. 6. 13

We have proposed a new macro system for interactive theorem provers that enables syntactic abstraction and reuse far beyond the usual support of mixfix notations. Our system is based on a novel hygiene algorithm designed with a focus on minimal runtime overhead as well as ease of integration into pre-existing codebases, including integration into standard elaboration designs to support type-directed macro expansion. Despite that, the algorithm is general enough to provide a complete hygiene solution for pattern-based macros and provides flexible hygiene for procedural macros. We have also demonstrated how our macro system can address unexpected name capture issues that haunt existing tactic frameworks. We have implemented our method in the upcoming version (v4) of the Lean theorem prover; it should be sufficiently attractive and straightforward to implement to be adopted by other interactive theorem proving systems as well.

References
[1] The Fortress language specification
[2] D-expressions: Lisp power, Dylan style
[3] Dependent type systems as macros
[4] Macros that work
[5] A tactic language for the system Coq
[6] Expansion-passing style: beyond conventional macros
[7] Syntactic abstraction in Scheme
[8] Hygienic macros for ACL2
[9] A metaprogramming framework for formal verification
[10] Binding as sets of scopes
[11] Hygienic macro expansion
[12] Mathematical components
[13] Haskell 2010 language report
[14] Notions of computation and monads
[15] Elaboration in dependent type theory
[16] Honu: syntactic extension for algebraic notation through enforestation
[17] Template meta-programming for Haskell
[18] MetaML and multi-stage programming with explicit annotations

Acknowledgments. We are very grateful to the anonymous reviewers, David Thrane Christiansen, Gabriel Ebner, Matthew Flatt, Sebastian Graf, Alexis King, Daniel Selsam, and Max Wagner for extensive comments, corrections, and advice.