key: cord-0046554-g76pbauj authors: Jakubův, Jan; Chvalovský, Karel; Olšák, Miroslav; Piotrowski, Bartosz; Suda, Martin; Urban, Josef title: ENIGMA Anonymous: Symbol-Independent Inference Guiding Machine (System Description) date: 2020-06-06 journal: Automated Reasoning DOI: 10.1007/978-3-030-51054-1_29 sha: f9a57fc9b6351b94fe9e26565afba84e92576845 doc_id: 46554 cord_uid: g76pbauj We describe an implementation of gradient boosting and neural guidance of saturation-style automated theorem provers that does not depend on consistent symbol names across problems. For the gradient-boosting guidance, we manually create abstracted features by considering arity-based encodings of formulas. For the neural guidance, we use symbol-independent graph neural networks (GNNs) and their embedding of the terms and clauses. The two methods are efficiently implemented in the E prover and its ENIGMA learning-guided framework. To provide competitive real-time performance of the GNNs, we have developed a new context-based approach to evaluation of generated clauses in E. Clauses are evaluated jointly in larger batches and with respect to a large number of already selected clauses (context) by the GNN that estimates their collectively most useful subset in several rounds of message passing. This means that approximative inference rounds done by the GNN are efficiently interleaved with precise symbolic inference rounds done inside E. The methods are evaluated on the MPTP large-theory benchmark and shown to achieve comparable real-time performance to state-of-the-art symbol-based methods. The methods also show high complementarity, solving a large number of hard Mizar problems. In this work, we develop two symbol-independent (anonymous) inference guiding methods for saturation-style automated theorem provers (ATPs) such as E [25] and Vampire [20] . Both methods are based on learning clause classifiers from previous proofs within the ENIGMA framework [5, 13, 14] implemented in E. By symbol-independence we mean that no information about the symbol names is used by the learned guidance. In particular, if all symbols in a particular ATP problem are consistently renamed to new symbols, the learned guidance will result in the same proof search and the same proof modulo the renaming. Symbol-independent guidance is an important challenge for learning-guided ATP, addressed already in Schulz's early work on learning guidance in E [23] . With ATPs being increasingly used and trained on large ITP libraries [2, 3, 6, 8, 16, 18] , it is more and more rewarding to develop methods that learn to reason without relying on the particular terminology adopted in a single project. Initial experiments in this direction using concept alignment [10] methods have already shown performance improvements by transferring knowledge between the HOL libraries [9] . Structural analogies (or even terminology duplications) are however common already in a single large ITP library [17] and their automated detection can lead to new proof ideas and a number of other interesting applications [11] . This system description first briefly introduces saturation-based ATP with learned guidance (Sect. 2). Then we discuss symbol-independent learning and guidance using abstract features and gradient boosting trees (Sect. 3) and graph neural networks (Sect. 4). The implementation details are explained in Sect. 5 and the methods are evaluated on the MPTP benchmark in Sect. 6. 
Saturation-Based Automated Theorem Provers (ATPs) such as E and Vampire are used to prove goals G using a set of axioms A. They clausify the formulas A ∪ {¬G} and try to deduce a contradiction using the given clause loop [22] as follows. The ATP maintains two sets of processed (P) and unprocessed (U) clauses. At each loop iteration, a given clause g is selected from U, moved to P, and U is extended with the new inferences from g and P. This process continues until a contradiction is found, U becomes empty, or a resource limit is reached. The search space grows quickly and the selection of the right given clauses is critical.

Learning Clause Selection over a set of related problems is a general method for guiding the proof search. Given a set of FOL problems P and an initial ATP strategy S, we can evaluate S over P, obtaining training samples 𝒯. For each successful proof search, the training samples 𝒯 contain the set of clauses processed during the search. Positive clauses are those that were useful for the proof search (they appeared in the final proof), while the remaining clauses were useless and form the negative examples. Given the samples 𝒯, we can train a machine learning classifier M which predicts the usefulness of clauses in future proof searches. Some clause classifiers are described in detail in Sects. 3, 4, and 5.

Once a clause classifier M is trained, we can use it inside an ATP. An ATP strategy S is a collection of proof search parameters such as term ordering, literal selection, and also the given clause selection mechanism. In E, the given clause selection is defined by a collection of clause weight functions which alternate in selecting the given clauses. Our ENIGMA framework uses two methods of plugging the trained classifier M into S. Either (1) we use M to select all given clauses (solo mode), or (2) we combine the predictions of M with the clause selection mechanism from S so that roughly 50% of the clauses are selected by M (cooperative mode, denoted S ⊕ M). Proof search settings other than clause selection are inherited from S in both cases. See [5] for details. The phases of learning and ATP guidance can be iterated in a learning/evaluation loop [29], yielding growing sets of proofs 𝒯i and stronger classifiers Mi trained over them. See [15] for such a large-scale experiment.

Clause Features are used by ENIGMA to represent clauses as sparse vectors for the machine learners. They are based mainly on vertical/horizontal cuts of the clause syntax tree. We use simple feature hashing to handle theories with a large number of symbols. A clause C is represented by the vector ϕC whose i-th index stores the value of the feature with hash index i. Values of conflicting features (mapped to the same index) are summed. Additionally, we embed conjecture features into the clause representation and work with vector pairs (ϕC, ϕG) of size 2 × base, where ϕG is the feature vector of the current goal (conjecture). This allows us to provide goal-specific predictions. See [15] for more details.
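As a concrete illustration of the hashed clause representation just described, here is a minimal Python sketch. The helpers are not the actual ENIGMA code; the string-encoded features (obtained from vertical/horizontal cuts of the clause syntax tree) are assumed to be given as (feature, count) pairs.

```python
# A minimal sketch of the hashed clause featurization (not the ENIGMA implementation).
import hashlib

BASE = 2 ** 15                     # hash base, i.e., the length of one feature vector

def feature_index(feature: str) -> int:
    """Map a string-encoded feature to a bucket in [0, BASE)."""
    h = hashlib.md5(feature.encode()).digest()
    return int.from_bytes(h[:4], "little") % BASE

def hashed_vector(features):
    """features: iterable of (feature, count) pairs."""
    vec = [0.0] * BASE
    for feat, count in features:
        vec[feature_index(feat)] += count   # colliding features are summed
    return vec

def clause_representation(clause_feats, conjecture_feats):
    """The pair (phi_C, phi_G): clause vector concatenated with the goal vector."""
    return hashed_vector(clause_feats) + hashed_vector(conjecture_feats)
```

The resulting vector pair of size 2 × base (later extended by the problem features ϕP introduced below) is what the decision-tree learners described next consume.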
Gradient Boosting Decision Trees (GBDTs), implemented by the XGBoost library [4], currently provide the strongest ENIGMA classifiers. Their speed is comparable to that of the weaker linear logistic classifier used previously [14], implemented by the LIBLINEAR library [7]. In this work, we newly employ the LightGBM [19] GBDT implementation. A decision tree is a binary tree whose nodes contain Boolean conditions on the values of different features. Given a feature vector ϕC, the decision tree is navigated from the root to the unique tree leaf which contains the classification of clause C. GBDTs combine the predictions of a sequence of decision trees, each trained to correct the errors of its predecessors. While the inputs, outputs, and APIs of XGBoost and LightGBM are compatible, each employs a different method of tree construction. XGBoost constructs trees level-wise, while LightGBM constructs them leaf-wise. This implies that XGBoost trees are well-balanced. On the other hand, LightGBM can produce much deeper trees, and the tree depth limit is indeed an important learning meta-parameter which must additionally be set.

We develop a feature anonymization method based on symbol arities. Each function symbol name s with arity n is substituted by the special name "fn", while a predicate symbol name q with arity m is substituted by "pm". Such features lose the ability to distinguish different symbol names, and many features are merged together. The vector representations of two clauses that differ only by a renaming of symbols are clearly equal, hence the underlying machine learning method will provide equal predictions for such clauses. For a more detailed discussion and a comparison with related work see Appendix B.

To improve the ability to distinguish different anonymized clauses, we add the following features. Variable statistics of a clause C contain (1) the number of variables in C without repetitions, (2) the number of variables with repetitions, (3) the number of variables with exactly one occurrence, (4) the number of variables with more than one occurrence, and (5-10) the numbers of occurrences of the most/least (and second/third most/least) occurring variable. Symbol statistics do the same for symbols instead of variables. Recall that we embed conjecture features in the clause vector pair (ϕC, ϕG). As ϕG embeds information about the conjecture but not about the problem axioms, we propose to additionally embed some statistics of the problem P that C and G come from. We use 22 problem features that the E prover already computes for each input problem in order to choose a suitable strategy. These are (1) the number of goals, (2) the number of axioms, (3) the number of unit goals, etc. See E's manual for more details. Hence we work with vector triples (ϕC, ϕG, ϕP).

Another clause classifier newly added to ENIGMA is based on graph neural networks (GNNs). We use the symbol-independent network architecture developed in [21] for premise selection. As [21] contains all the details, we only briefly explain the basic ideas behind this architecture here.

Hypergraph. Given a set of clauses C we create a directed hypergraph with three kinds of nodes, corresponding to clauses, to function and predicate symbols N, and to unique (sub)terms and literals U occurring in C. There are two kinds of hyperedges that describe the relations between the nodes according to C. The first kind encodes literal occurrences in clauses by connecting the corresponding nodes. The second kind encodes the relations between nodes from N and U. For example, for f(t1, ..., tk) ∈ U we, loosely speaking, connect the nodes f ∈ N and t1, ..., tk ∈ U with the node f(t1, ..., tk), and similarly for literals, where their polarity is also taken into account.
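To make the hypergraph construction more concrete, here is a minimal sketch on a toy term representation. The data structures and the exact edge format are illustrative assumptions only and do not reproduce the implementation of [21]; in particular, variables receive no special treatment here.

```python
# A minimal sketch of hypergraph construction from clauses (not the code of [21]).
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass(frozen=True)
class Term:
    """Function application f(t1, ..., tk); arity 0 encodes constants."""
    f: str
    args: Tuple["Term", ...] = ()

@dataclass(frozen=True)
class Literal:
    pred: str
    args: Tuple[Term, ...]
    positive: bool = True

Clause = Tuple[Literal, ...]

def build_hypergraph(clauses: List[Clause]):
    symbols: Dict[str, int] = {}       # N: function and predicate symbols
    subterms: Dict[object, int] = {}   # U: unique subterms and literals
    clause_edges = []                  # 1st kind: (clause id, literal node)
    symbol_edges = []                  # 2nd kind: (symbol, arg nodes, result node, polarity)

    def term_node(t: Term) -> int:
        if t not in subterms:
            arg_ids = tuple(term_node(a) for a in t.args)
            subterms[t] = len(subterms)
            sym = symbols.setdefault(t.f, len(symbols))
            symbol_edges.append((sym, arg_ids, subterms[t], None))
        return subterms[t]

    for ci, clause in enumerate(clauses):
        for lit in clause:
            arg_ids = tuple(term_node(a) for a in lit.args)
            if lit not in subterms:
                subterms[lit] = len(subterms)
                sym = symbols.setdefault(lit.pred, len(symbols))
                symbol_edges.append((sym, arg_ids, subterms[lit], lit.positive))
            clause_edges.append((ci, subterms[lit]))
    return symbols, subterms, clause_edges, symbol_edges

# Example: the clause p(f(a), X) ∨ ¬q(X); the variable X is treated as a constant here.
clause = (Literal("p", (Term("f", (Term("a"),)), Term("X"))),
          Literal("q", (Term("X"),), positive=False))
print(build_hypergraph([clause]))
```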
Message-Passing. The hypergraph describes the relations between the various kinds of objects occurring in C. Every node in the hypergraph is initially assigned a constant vector, called the embedding, based only on its kind (C, N, or U). These node embeddings are updated in a fixed number of message-passing rounds, based on the embeddings of each node's neighbors. The underlying idea of such neural message-passing methods is to make the node embeddings encode more and more precisely the information about the connections (and thus various properties) of the nodes. For this to work, we have to learn the initial embeddings for our three kinds of nodes and the update function.

Classification. After the message-passing phase, the final clause embeddings are available in the corresponding clause nodes. The estimated probability of a clause being a good given clause is then computed by a neural network that takes the final embedding of this clause together with the aggregated final embeddings of all clauses obtained from the negated conjecture.

In order to use either GBDTs (Sect. 3) or GNNs (Sect. 4), a prediction model must be learned. Learning starts with the training samples 𝒯, that is, a set of pairs (C+, C−) of positive and negative clauses. For each training sample T ∈ 𝒯, we additionally know the source problem P and its conjecture G. Hence we can consider one sample T ∈ 𝒯 as a quadruple (C+, C−, P, G) for convenience.

For GBDTs, each clause C ∈ C+ ∪ C− is translated to the feature vector triple (ϕC, ϕG, ϕP) described in Sect. 3. Vectors where C ∈ C+ are labeled as positive, and the remaining vectors as negative. All the labeled vectors are fed together to a GBDT trainer, yielding a model D𝒯. When a generated clause is to be evaluated, its feature vector is computed and D𝒯 is asked for the prediction. The GBDT's binary predictions (positive/negative) are turned into E's clause weights (positives get weight 1 and negatives weight 10).

Given T = (C+, C−, P, G) ∈ 𝒯 as above, we construct a hypergraph for the set of clauses C+ ∪ C− ∪ G. This hypergraph is translated to a tensor representation (vectors and matrices), marking the clause nodes as positive, negative, or goal. These tensors are fed as input to our GNN training, yielding a GNN model N𝒯. The training works in iterations, and N𝒯 contains one GNN per training epoch. Only the GNN from one selected epoch is used for predictions during the evaluation.

In the evaluation, it is more efficient to compute predictions for several clauses at once. This also improves the prediction quality, as the queried data then resembles more closely the training hypergraphs, where multiple clauses are also encoded at once. During an ATP run on a problem P with the conjecture G, we postpone the evaluation of newly inferred clauses until we reach a certain number of clauses Q to query. To resemble the training data even more, we add a fixed number of the given clauses processed so far. We call these the context clauses X. To evaluate Q, we construct the hypergraph for Q ∪ X ∪ G, and mark the clauses from G as goals. Then the model N𝒯 is asked for predictions on Q (the predictions for X are dropped). The numeric predictions computed by N𝒯 are directly used as E's clause weights.
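The following sketch outlines this context-based evaluation loop. The helpers build_tensors and gnn_predict, as well as the callback structure, are hypothetical and only stand in for the actual C-level integration with E.

```python
# A minimal sketch of context-based batched GNN evaluation (hypothetical helpers).
QUERY_SIZE = 256          # q: how many generated clauses to evaluate at once
CONTEXT_SIZE = 768        # c: how many already selected (given) clauses to add

query_buffer, context = [], []

def on_generated_clause(clause, conjecture_clauses, model):
    """Called for every newly generated clause; returns weights once a batch is full."""
    query_buffer.append(clause)
    if len(query_buffer) < QUERY_SIZE:
        return {}                                   # evaluation is postponed
    # Query clauses come first, followed by the context and the conjecture clauses.
    batch = query_buffer + context[-CONTEXT_SIZE:] + list(conjecture_clauses)
    tensors = build_tensors(batch, goals=conjecture_clauses)   # hypergraph -> tensors
    scores = gnn_predict(model, tensors)            # one score per clause in `batch`
    weights = {c: s for c, s in zip(query_buffer, scores)}     # context scores dropped
    query_buffer.clear()
    return weights                                  # used directly as E's clause weights

def on_given_clause(clause):
    """Called whenever a clause is selected as the given clause."""
    context.append(clause)
```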
Implementation and Performance. We use the GBDTs implemented by the XGBoost [4] and LightGBM [19] libraries. For the GNN we use TensorFlow [1]. All the libraries provide Python interfaces and C/C++ APIs. We use the Python interfaces for training and the C APIs for the evaluation inside E. The Python interfaces of XGBoost and LightGBM already include the C APIs, while for TensorFlow the C API must be compiled manually, which is further complicated by poor documentation. The libraries support training both on CPUs and on GPUs. We train LightGBM on CPUs, and XGBoost and TensorFlow on GPUs. However, we always evaluate on a single CPU, as we aim at practical usability on standard hardware. This is non-trivial and it distinguishes this work from evaluations done with large numbers of GPUs or TPUs and/or in prohibitively high real times. The LightGBM training can be parallelized much better: with 60 CPUs it is much faster than XGBoost on 4 GPUs. Neither using GPUs for LightGBM nor many CPUs for XGBoost provided better training times. The GNN training is slower than the GBDT training, and it is not easy to make TensorFlow evaluate reasonably on a single CPU. It has to be compiled with all CPU optimizations and restricted to a single thread, using TensorFlow's poorly documented experimental C API.

Setup. We experimentally evaluate our GBDT and GNN guidance on a large benchmark of 57 880 Mizar40 [18] problems exported by MPTP [28]. Hence this evaluation is compatible with our previous symbol-dependent work [15]. We evaluate GBDT and GNN separately. We start with a well-performing E strategy S (see [5, Appendix A]) which solves 14 966 problems with a 10 s limit per problem. This gives us the training data 𝒯0 = eval(S) (see Sect. 5), and we start three iterations of the learning/evaluation loop (see Sect. 2).

For GBDT, we train several models (with hash base 2^15) and conduct a small grid search over the learning meta-parameters. For XGBoost, we try different tree depths (d ∈ {9, 12, 16}), and for LightGBM various combinations of tree depths and leaf counts ((d, l) ∈ {10, 20, 30, 40} × {1200, 1500, 1800}). We evaluate all these models in the cooperative mode with S on a random (but fixed) 10% of all problems (Appendix A). The best performing model is then evaluated on the whole benchmark in both the cooperative (⊕) and the solo runs. These give us the next samples 𝒯i+1. We perform three iterations and obtain the models D0, D1, and D2.

For GNN, we train a model with 100 epochs, obtaining 100 different GNNs. We evaluate the GNNs from selected epochs (e ∈ {10, 20, 50, 75, 100}) and we try different settings of the query (q) and context (c) sizes (see Sect. 5). In particular, q ranges over {64, 128, 192, 256, 512} and c over {512, 768, 1024, 1536}. All combinations of (e, q, c) are again evaluated in a grid search on the small benchmark subset (Appendix A), and the best performing model is selected for the next iteration. We run three iterations and obtain the models N0, N1, and N2.

Results are presented in Table 1. (1) Model accuracies are computed on samples extracted from the problems newly solved by each model, that is, on testing data not known during the training. The columns TPR/TNR show the accuracies on positive/negative testing samples. (2) Train sizes measure the training data in millions of clauses. (4) The letter "X" stands for XGBoost models, while "L" stands for LightGBM. (5) For real time we use a 10 s limit per problem, and (6) in abstract time we limit the number of generated clauses to 5000. We show the number of problems solved and the gain (in %) over S. The abstract time evaluation is useful to assess the methods modulo the speed of the implementation. The first row shows the performance of S without learning.

Evaluation. The GNN models start better, but the GBDT models catch up and beat the GNNs in later iterations. The GBDT models show a significant gain even in the 3rd iteration, while the GNN models start stagnating. The GNN models report better testing accuracy, but their ATP performance is not as good. For GBDTs, we see that the first two best models (D0 and D1) were produced by XGBoost, while D2 was produced by LightGBM. While both libraries can provide similar results, LightGBM is significantly faster.
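For illustration, here is a minimal sketch of the LightGBM part of this grid search. The feature matrix X, the labels y, the number of boosting rounds, the learning rate, and the file names are assumptions rather than the actual experimental setup.

```python
# A minimal sketch of the LightGBM meta-parameter grid search (illustrative values).
import itertools
import lightgbm as lgb

def train_lightgbm_grid(X, y):
    """Train one binary classifier per (tree depth, leaf count) combination."""
    models = {}
    data = lgb.Dataset(X, label=y)
    for depth, leaves in itertools.product([10, 20, 30, 40], [1200, 1500, 1800]):
        params = {
            "objective": "binary",
            "max_depth": depth,
            "num_leaves": leaves,
            "learning_rate": 0.2,       # eta; also varied in some runs (Appendix A)
        }
        booster = lgb.train(params, data, num_boost_round=200)
        booster.save_model(f"enigma_d{depth}_l{leaves}.lgb")
        models[(depth, leaves)] = booster
    return models
```

Each saved model is then plugged into E (cooperative mode with S) and evaluated on the fixed 10% subset; only the best one is run on the whole benchmark.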
For comparison, the XGBoost training in the third iteration took 7 h, while LightGBM was about 10 times faster. The higher speed of LightGBM can compensate for its more complicated parameter settings, as more models can be trained and evaluated. For GNNs, we observe higher training times and better models coming from earlier epochs. The training in the 1st and 2nd iterations was done on 1 GPU, while in the 3rd on 4 GPUs. The good abstract time performance indicates that a further gain could be obtained by a faster implementation. Note, however, that this is the first time that NNs have been made comparable to GBDTs in real time.

We have developed and evaluated symbol-independent GBDT and GNN ATP guidance. This is the first time symbol-independent features and GNNs are tightly integrated with E and provide good real-time results on a large corpus. Both the GBDT and the GNN predictors display a high ability to learn from previous proof searches even in the symbol-independent setting.

To provide competitive real-time performance of the GNNs, we have developed a context-based evaluation of generated clauses in E. This introduces a new paradigm for clause ranking and selection in saturation-style proving. The generated clauses are not ranked immediately and independently of other clauses. Instead, they are judged in larger batches and with respect to a large number of already selected clauses (the context) by a neural network that estimates their collectively most useful subset by several rounds of message passing. This also allows new ways of parameterizing the search that result in complementary methods with many new solutions.

The new GBDTs show even better performance than their symbol-dependent versions from our previous work [15]. This is most likely due to the parameter grid search and the new features not used before. The union of the problems solved by the twelve ENIGMA strategies (both solo and cooperative ⊕) in real time adds up to 28 247. When we add S to this portfolio we solve 28 271 problems. This shows that the ENIGMA strategies learned quite well from S, not losing many solutions. When we add eight more strategies developed here, we solve 29 130 problems, of which 2109 are among the hard Mizar40 problems. This is done in general in 200 s and without any additional help from premise selection methods. For comparison, Vampire solves 27 842 problems in 300 s. Future work includes a joint evaluation of the system on problems translated from different ITP libraries, similar to [9].

This appendix presents additional data from the experiments in Sect. 6. Figure 3 shows the results of the grid search for the GNN models, performed in order to find the best-performing query and context sizes. The x-axis plots the query size, the y-axis the context size, and the z-axis the ATP performance, that is, the number of solved problems. Recall that the grid search was performed on a randomly selected but fixed tenth of all benchmark problems with a 10 s real-time limit per problem. For N0 and N1, there is a separate graph for each iteration, showing only the best epochs. For N2, there are two graphs, for the models from epochs 20 and 50. Note how the later epoch 50 becomes more independent of the context size. The ranges of the grid search parameters were extended in later iterations when the best-performing value was at the edge of the graph.

Figure 4 shows the grid search results for the best LightGBM GBDT models from iterations 1, 2, and 3 (denoted here D0, D1, and D2).
The x-axis plots the number of tree leaves, the y-axis the tree depth, and the z-axis the number of solved problems. There are two models from the second iteration (D1), showing the effect of different learning rates (η). Again, the ranges of the meta-parameters were updated between the iterations by a human engineer.

Figure 5 shows the training accuracies and the training loss for the LightGBM model D2. The accuracies (TPR and TNR) are computed on the training data from the first iteration (𝒯0). The values of the loss (z) are inverted (1 − z) so that higher values correspond to better models, which makes the visual comparison easier. We can see a clear correlation between the accuracies and the loss, but a less clear correlation with the ATP performance. The ATP performance of D2 is the same as in Figure 4, repeated here for convenience.

Finally, we have compared the feature vectors of the symbol-dependent and symbol-independent versions of the GBDTs. On the same data, the symbol-independent version produces roughly twice as many collisions: around 1% of its feature vectors collide, compared to 0.42% for the symbol-dependent version.

Our use of symbol-independent arity-based features for GBDTs differs from Schulz's anonymous clause patterns [23, 24] (CPs), used in E for proof guidance, and from Gauthier and Kaliszyk's (GK) anonymous abstractions, used for their concept alignments between ITP libraries [10], in two ways:

1. In both CP and GK, serial (de Bruijn-style) numbering of abstracted symbols of the same arity is used. I.e., the term h(g(a)) gets abstracted to F11(F12(F01)), whereas our encoding is just F1(F1(F0)). It is even more lossy, because it is the same for h(h(a)).

2. ENIGMA with gradient boosting decision trees (GBDTs) can be (approximately) thought of as implementing weighted feature-based clause classification where the feature weights are learned, whereas both in CP and GK exact matching is used after the abstraction is done.

In CP, this is used for hint-style guidance of E. There, however, such serial numbering of clauses is not stable under literal reordering and subsumption. Partial heuristics can be used, such as the normalization based on a fixed global ordering done in both CP and GK. Addressing the latter issue (stability under reordering of literals and subsumption) leads to the NP-hardness of (hint) matching/subsumption. I.e., the abstracted subsumption task can be encoded as standard first-order subsumption for clauses where terms like F11(F12(F01)) are encoded as apply1(X1, apply1(X2, apply0(X3))). The NP-hardness of subsumption is, however, more serious in practice here than in standard ATP, because only the applications behave as non-variable symbols during the matching.

Thus, the difference between our anonymous approach and CP is practically the same as between the standard symbol-based ENIGMA guidance and the standard hint-based [30] guidance. In the former, the matching (actually, clause classification) is approximate, weighted, and learned, while with hints the clause matching/classification is crisp, logic-rooted, and preprogrammed, sometimes running into the NP-hardness issues. Our latest comparison [12], done over the Mizar/MPTP corpus in the symbol-based setting, showed better performance of ENIGMA over using hints, most likely due to the better generalization behavior of ENIGMA based on the statistical (GBDT) learning. Note also that the variable and symbol statistics features to some extent alleviate the conflicts obtained with our encoding. E.g., h(g(a)) and h(h(a)) will have different symbol statistics (Sect. 3) features. To some extent, such features are similar to Schulz's feature vector and fingerprint indexing [26, 27].
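The following minimal sketch, on a toy nested-tuple term representation (not E's or ENIGMA's actual data structures), illustrates the difference between the two abstractions and why ours is more lossy.

```python
# Contrast between arity-based anonymization (ours) and CP/GK-style serial numbering.
def arity_abstract(term):
    """Our encoding: every symbol of arity n becomes 'fn'."""
    head, *args = term
    return ("f%d" % len(args), *map(arity_abstract, args))

def serial_abstract(term, table=None):
    """CP/GK-style encoding: symbols of the same arity get serial numbers."""
    if table is None:
        table = {}
    head, *args = term
    key = (head, len(args))
    if key not in table:
        table[key] = sum(1 for (_, a) in table if a == len(args)) + 1
    return ("f%d_%d" % (len(args), table[key]),
            *(serial_abstract(a, table) for a in args))

t1 = ("h", ("g", ("a",)))     # h(g(a))
t2 = ("h", ("h", ("a",)))     # h(h(a))
print(arity_abstract(t1))     # ('f1', ('f1', ('f0',)))  -- identical to t2's encoding
print(arity_abstract(t2))     # ('f1', ('f1', ('f0',)))
print(serial_abstract(t1))    # ('f1_1', ('f1_2', ('f0_1',)))
print(serial_abstract(t2))    # ('f1_1', ('f1_1', ('f0_1',)))
```

The symbol statistics of Sect. 3 (e.g., the occurrence count of the most frequent symbol) still differ between t1 and t2, which is what partially compensates for the lossiness of the arity-based encoding.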
References

[1] TensorFlow: large-scale machine learning on heterogeneous systems.
[2] A learning-based fact selector for Isabelle/HOL.
[3] Hammering towards QED.
[4] XGBoost: a scalable tree boosting system.
[5] ENIGMA-NG: efficient neural and gradient-boosted inference guidance for E.
[6] Hammer for Coq: automation for dependent type theory.
[7] LIBLINEAR: a library for large linear classification.
[8] Premise selection and external provers for HOL4.
[9] Sharing HOL4 and HOL Light proof knowledge.
[10] Aligning concepts across proof assistant libraries.
[11] Initial experiments with statistical conjecturing over large formal corpora.
[12] ENIGMAWatch: ProofWatch meets ENIGMA.
[13] ENIGMA: efficient learning-based inference guiding machine.
[14] Enhancing ENIGMA given clause guidance.
[15] Hammering Mizar by learning clause guidance.
[16] Learning-assisted automated reasoning with Flyspeck.
[17] HOL(y)Hammer: online ATP service for HOL Light.
[18] MizAR 40 for Mizar 40.
[19] LightGBM: a highly efficient gradient boosting decision tree.
[20] First-order theorem proving and Vampire.
[21] Property invariant embedding for automated reasoning.
[22] A new class of automated theorem-proving algorithms.
[23] Learning Search Control Knowledge for Equational Deduction (DISKI).
[24] Learning search control knowledge for equational theorem proving.
[25] E: a brainiac theorem prover.
[26] Fingerprint indexing for paramodulation and rewriting.
[27] Simple and efficient clause subsumption with feature vector indexing.
[28] MPTP 0.2: design, implementation, and initial experiments.
[29] MaLARea SG1: machine learner for automated reasoning with semantic guidance.
[30] Using hints to increase the effectiveness of an automated reasoning program: case studies.

Acknowledgments. We thank Stephan Schulz and Thibault Gauthier for discussing with us their methods for symbol-independent term and formula matching.