key: cord-0479076-alewb9c7 authors: Ostrolenk, Kiran (kiran.ostrolenk@manchester.ac.uk); Mattelaer, Olivier (olivier.mattelaer@uclouvain.be) title: Speeding up MadGraph5_aMC@NLO date: 2021-02-01 journal: nan DOI: nan sha: 6c8931c735ec86b6756f4cc7838c6b1a7888f519 doc_id: 479076 cord_uid: alewb9c7

In this paper we describe two new optimisations implemented in MadGraph5_aMC@NLO, both of which are designed to speed up the computation of leading-order processes (for any model). First, we implement a new method to evaluate the squared matrix element, dubbed helicity recycling, which results in a factor-of-two speed-up. Second, we have modified the multi-channel handling of the phase-space integrator, providing tremendous speed-ups for VBF-like processes (up to a thousand times faster).

While the LHC is still running, preparation is starting for the High-Luminosity LHC. As part of this preparation, the CPU efficiency of our Monte-Carlo generators is crucial due to the sheer number of events that need to be generated. Given the current constraints on the LHC IT budget, this will not be possible without significant software improvement [1, 2]. While the full CPU time of the LHC experiments is not dominated by event generation, event generation is still estimated to represent between ten and twenty percent of it. Consequently, we received a specific request to speed up our code by at least 20% and ideally by a factor of two [1]. In addition to the High-Luminosity LHC, work is also starting for future high-energy accelerators [3]. Going to the high-energy regime will amplify multi-scale issues, which in turn can reduce the efficiency of event generation. This issue is particularly striking for VBF-like processes, where the current phase-space integration algorithm either fails to generate the requested number of events or takes an unreasonable time to do so.

The different avenues for speeding up Monte-Carlo integration are a well-covered topic. Such efforts can be classified into four categories. First, one can optimise the evaluation of the function being integrated, which in our case is the matrix element [4-7]. Second, one can optimise the integration method to minimise the number of times such functions need to be evaluated [8-12]. Third, one can try to use the various types of hardware (e.g. GPU, MPI, vectorisation) more efficiently [13-16]. Finally, one can play with the weights of the sample to optimise/re-use information [17-22]. In the context of this work, we will focus on optimising MadGraph5_aMC@NLO (MG5aMC) [23, 24], one of the main Monte-Carlo generators [25-27], and we will combine two different methods to achieve our goal: one optimising the time needed to evaluate the matrix element and one optimising the phase-space integrator. The strategy we employed was to keep the main design choices in place (in particular the helicity amplitude method [28] and the single diagram enhancement method [29]) and to study them in depth to see how they could be further improved.

To reduce the time needed to evaluate a given squared matrix element, we use a fairly standard memory/CPU trade-off guided by physics considerations. We have identified parts of the computation that can be stored in memory in order to avoid their re-computation later. We dubbed this method helicity recycling since the additional terms stored in memory correspond to identical sub-expressions shared between different helicity configurations.
This improvement will be presented in section (2). We start in section (2.1) by presenting the helicity formalism used in MG5aMC, then we continue in section (2.2) by explaining the main idea behind helicity recycling. The details of the associated speed-up will then be presented in section (2.3).

The second improvement that we have implemented concerns the phase-space integrator. Contrary to the first method, it is more challenging to introduce an improvement that acts positively on all processes. On the other hand, the expected (and obtained) gain can be much more impressive with this method. For this paper we will mainly focus on the speed-up of VBF-like processes since they are the ones where MG5aMC has some specific issues. This will be covered in section (3), where we start by reviewing the current multi-channel strategy (the single diagram enhancement method) in section (3.1). We then explain in section (3.2) the methods used to obtain a better handling of the integration over t-channel propagators. We then turn in section (3.3) to the deeper changes we made to the multi-channel strategy, as well as their impact on some physical observables (which are only accurate beyond LO). Our speed-ups are then compared with the older version of the code in section (3.4). Our conclusions are presented in section (4). We also provide two appendices. First, in Appendix A, we give a brief manual on how to tune the optimisation parameters. Finally, in Appendix B, we describe the modifications (and the associated conventions) to the Aloha package [30] related to the helicity recycling algorithm.

2 Helicity recycling within the Helicity Amplitude Method

When evaluating a matrix element one can identify two key structures: the Lorentz structure of the matrix element and its colour structure. Within MG5aMC the evaluation of these two structures factorises at the amplitude level [5]. Hence it is possible to discuss one without the other. The helicity-recycling optimisation only impacts the evaluation of the Lorentz structure, and in this section we will explain how this evaluation is performed.

Matrix elements typically contain factors of spinors (from the external fermions) and Lorentz vectors (from internal propagators). When analytically evaluating the square of such a matrix element it is common to remove the dependence on the spinors via their on-shell (completeness) relation, $\sum_{h} u(p,h)\,\bar{u}(p,h) = \slashed{p} + m$ (and similarly for $v$), where the sum is over helicity. The squared matrix element can then be reduced to a series of scalar products between Lorentz vectors. However, such a formalism is typically not used by programs that perform numerical evaluations of matrix elements. This is because it causes the computational complexity to grow quadratically with the number of Feynman diagrams (due to the presence of interference terms). One solution is to use the helicity amplitude formalism [32-34], where the summation over helicity is postponed. Under this formalism the matrix element is reduced to a series of spinor products, rather than Lorentz products. These spinors will depend on the specific helicities of the external particles. The advantage of this formalism is that its complexity grows linearly with the number of diagrams (since interference terms can be handled by a simple sum over amplitudes).

An example will help illustrate the key principles here. Following Ref. [37], let us work in the massless limit and consider the unpolarised e − e + → µ − µ + s-channel matrix element with a photon mediator. Let us use p 1 , p 2 to label the momenta of the electrons and p 3 , p 4 for the muons; this process is pictured in Fig. (1).
Fig. 1: S-channel e − e + → µ − µ + diagram with a photon mediator. We use p 1 , p 2 , p 3 and p 4 to label the external momenta.

In what follows it will also be helpful to use the standard Mandelstam variables, defined as $s = (p_1+p_2)^2$, $t = (p_1-p_3)^2$ and $u = (p_1-p_4)^2$. Accordingly, this matrix element is, up to an overall phase, $\mathcal{M} = \frac{e^2}{s}\,[\bar v(p_2)\gamma^\mu u(p_1)]\,[\bar u(p_3)\gamma_\mu v(p_4)]$ (Eq. (12)). Remember that u(p) and v(p) are Dirac spinors and can be either left- or right-handed; in the massless limit this corresponds to them having +1 or −1 helicity eigenvalues respectively. In what follows we will also use ψ to denote a general Dirac spinor. The subscripts L and R will be used to denote left- and right-handedness respectively. It is common to use a bracket notation when expressing helicity amplitudes, writing $|n\rangle$ and $|n]$ for the two helicity states of a spinor of momentum $p_n$ (and $\langle n|$, $[n|$ for their conjugates). One can show that $[n|\gamma^\mu|m] = \langle n|\gamma^\mu|m\rangle = 0$. Hence, for the matrix element to be non-zero, the incoming fermions must have opposite helicity, and similarly for the outgoing ones. Hence, of all the possible helicity combinations we could construct, the only ones that give a non-zero matrix element are $1^-2^+3^-4^+$, $1^+2^-3^+4^-$, $1^+2^-3^-4^+$ and $1^-2^+3^+4^-$ (Eqs. (8)-(11)). Here, the notation $n^m$ means the fermion with momentum $p_n$ has helicity $m$. The first helicity combination gives the amplitude of Eq. (13). Using the relation $[nm]\langle mn\rangle = \langle mn\rangle[nm] = p_m \cdot p_n$, one finds its square to be proportional to $e^4\,u^2/s^2$ (Eq. (14)). By parity, the helicity combination $1^+2^-3^+4^-$ will give the same result. The only remaining contributing helicity combinations are the same two but with 1 ↔ 2. By symmetry arguments, one can see that they will give the same result as in equation (14) but with u → t. Hence the final result for this matrix element, Eq. (15), is the sum of these four contributions and is proportional to $e^4\,(t^2+u^2)/s^2$.

The main points from the simple example looked at in section (2.1.1) are:
1. Diagrams are evaluated at the amplitude level (before squaring the matrix element).
2. To get the final result these diagrams must be summed over helicity combinations.
3. Only some of these helicity combinations will actually contribute.

MG5aMC follows the HELAS strategy [28] to compute amplitudes. It starts by computing the spinorial representation of all external particles. These spinors are then iteratively combined using model-specific functions (generated thanks to the Aloha [30] package). In this way the full diagram can be built up. This process is represented in Fig. (2). In the helicity amplitude formalism one distinguishes three types of functions:
- external wave function: a function calculating the spinorial representation (e.g. $u$, $v$, $\epsilon^\mu$) of an external particle, evaluated for a given helicity.
- internal wave function: a function calculating the spinorial representation (see Eq. (16)) of an internal particle (i.e. of a propagator), evaluated for a given helicity combination.
- amplitude: a function fully contracting all spinorial representations and therefore returning, for a given helicity combination, the value of the amplitude of the associated Feynman diagram.

As depicted in Fig. (2), the evaluation of the matrix element starts by evaluating all the external wave functions, before combining them to get the internal wave functions associated to the propagators. Finally it reaches the last vertex and at that point returns an amplitude. For example, consider again the e − e + → µ − µ + process. After the computation of the spinors associated to the electrons and muons (which depend only on their momenta and helicities), the algorithm will call the routine for calculating the internal photon wave function. The analytic expression of this photon wave function φ γ, given in Eq. (16), is (schematically) the muon current contracted with the photon propagator, where ψ µ has been used to represent the muon spinors. As already mentioned, these depend on the helicities of the muons, labelled h 3 and h 4. Note that the wave function associated to this propagator is not unique, since it depends on which vertex is used to compute it. In this example, we are already at the last vertex and all that remains is to contract the various wave functions, taking into account the Lorentz/spinor structure of the associated vertex. Analytically, this corresponds to contracting φ γ with the electron current (Eq. (17)), where, just as with the muons, ψ e has been used to represent the spinors of the electrons. This is the same as the expression in (12) but without the specific choice of helicity combination.
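To make the role of the three routine types concrete for this example, their content can be sketched as follows (our schematic notation, with couplings and overall factors omitted; the precise expressions are Eqs. (16) and (17) of the text):
\[
\text{external:}\quad \psi_e^{h_1}(p_1),\;\; \psi_e^{h_2}(p_2),\;\; \psi_\mu^{h_3}(p_3),\;\; \psi_\mu^{h_4}(p_4),
\]
\[
\text{internal (photon):}\quad \phi_\gamma^{\mu}(h_3,h_4)\;\propto\;\frac{1}{s}\,\bar\psi_\mu^{h_3}(p_3)\,\gamma^\mu\,\psi_\mu^{h_4}(p_4),
\]
\[
\text{amplitude:}\quad \mathcal{M}(h_1,h_2,h_3,h_4)\;\propto\;\bar\psi_e^{h_2}(p_2)\,\gamma_\mu\,\psi_e^{h_1}(p_1)\;\phi_\gamma^{\mu}(h_3,h_4).
\]
Note that the internal wave function depends only on the muon helicities, while the final amplitude call depends on all four; this separation is what the helicity-recycling optimisation of section (2.2) exploits.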
MG5aMC generates a FORTRAN subroutine that carries out the above process for a given matrix element and a given helicity combination, and then returns the squared amplitude. MG5aMC then performs a loop over all possible helicity combinations, at each point calling this matrix-element subroutine. The results from this loop are then summed to produce the final matrix element. For our e − e + → µ − µ + example this can be represented by pseudo-code which loops over the helicity combinations and, at each step, calls the external and internal wave-function routines followed by the amplitude routine; a sketch is given below, after Table 1.

Obviously, MG5aMC has already implemented a couple of optimisations in this loop. First, none of the external wave functions depend on the Feynman-diagram representation and they can therefore be moved outside of the loop over diagrams. Furthermore, the same propagator can appear in multiple Feynman diagrams, and in such a case it is also highly beneficial to compute the associated internal wave function only once and re-use it. Secondly, recall the point made at the start of this section that only a subset of helicity combinations contributes. Another optimisation already included in the above pseudo-code exploits this fact and makes sure the loop over helicity combinations only includes "contributing" ones. We shall refer to this optimisation as 'helicity filtering'.

The matrix routine discussed in this section is called by the madevent binary. This binary is also responsible for generating phase-space points, evaluating the PDFs and writing the events (amongst other things). We can evaluate the computational cost of the matrix routine by comparing the number of instructions it executes to the total number executed by madevent. This is presented in table (1) for top-quark pair production plus gluons. The diagrams for tt are simpler and fewer in number than those of ttgg, and even more so than those of ttggg. Hence the total number of instructions executed by madevent increases across these processes. Furthermore, this means the matrix routine is also responsible for an increasing percentage of the instructions. We also see that for ttgg and ttggg (the more complex processes) the amplitude routines account for 44% and 51% of the computation respectively, making them the dominant contribution to the matrix-element calculation. This is again due to the higher number of diagrams and to the fact that the number of unique propagators does not scale as fast as the diagram multiplicity. Hence it is important that any future optimisation targets not just the wave-function routines but also the amplitude routines.

amp: 530M (4.0%) for gg → tt, 210G (44%) for gg → ttgg, 5.5T (51%) for gg → ttggg
Table 1: Here we present the number of instructions evaluated by the matrix routine (matrix1) and the total number of instructions evaluated by madevent. In brackets we also present these numbers as a percentage of the madevent total. These results are presented for three processes: gg → tt, gg → ttgg and gg → ttggg. We have also broken down the matrix routine into the number of instructions (again alongside their percentage of the total) evaluated by calls to the functions that evaluate external spinors (ext), internal wave functions (int) and amplitudes (amp). The data in this table was obtained using a combination of Valgrind [39] and KCachegrind [40].
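To make the structure that Table 1 profiles concrete, the pseudo-code referred to above can be sketched as follows. This is an illustrative Python-style toy, not the Fortran that MG5aMC actually generates: the routine names and the dummy numerics are ours, and only the call pattern (external and internal wave functions plus amplitude, all re-evaluated inside the helicity loop) is meant to be representative.

import numpy as np

# Toy stand-ins for the generated helicity routines (the real ones are Fortran, produced by Aloha).
def get_external_wf(p, hel):
    # spinor/polarisation of one external leg for a given helicity (dummy implementation)
    return np.append(p, hel).astype(complex)

def get_internal_wf(wf_a, wf_b):
    # combine two wave functions into the wave function of a propagator (dummy implementation)
    return wf_a * wf_b

def amplitude(wf_a, wf_b, internal):
    # contract the remaining wave functions with the propagator wave function (dummy implementation)
    return np.sum(wf_a * wf_b * internal)

def matrix_element(momenta, contributing_helicities):
    """Original structure: every routine is re-evaluated for every helicity combination."""
    total = 0.0
    for hel in contributing_helicities:                 # helicity filtering: only non-zero combos
        ext = [get_external_wf(p, h) for p, h in zip(momenta, hel)]
        photon = get_internal_wf(ext[2], ext[3])        # internal wave function (cf. Eq. (16))
        amp = amplitude(ext[0], ext[1], photon)         # last vertex returns the amplitude (cf. Eq. (17))
        total += abs(amp) ** 2
    return total

momenta = [np.array([1.0, 0.0, 0.0, 1.0])] * 4          # dummy kinematics for e- e+ -> mu- mu+
contributing = [(-1, 1, -1, 1), (1, -1, 1, -1), (1, -1, -1, 1), (-1, 1, 1, -1)]
print(matrix_element(momenta, contributing))

Every call inside the loop is repeated for each helicity combination, which is exactly the redundancy that helicity recycling removes.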
Finally, one can also see from the table that the wave-function and amplitude routines do not add up to the total number of matrix-routine instructions. This is because the routine has other things to evaluate, most notably the colour factors for the relevant amplitudes. This computation is even the second hot-spot for the three-gluon multiplicity case and will therefore limit the potential impact of our optimisation.

In general, when summing over helicity combinations, the same spinor with the same helicity can appear multiple times. For example, in the combinations (8)-(11) each helicity spinor (such as 1 +) appears twice. Hence when MG5aMC loops over the combinations it will calculate the same wave function for these spinors multiple times (see the pseudo-code sketch above). This is a waste of computation time, even if it is ultimately irrelevant for external wave functions. It would be cheaper to calculate each wave function only once, save its value and reuse it when appropriate. The same principle also applies to the wave functions of internal particles. Such routines take other wave functions as input, so for a subset of helicity combinations the same input will be given and the code will waste time by re-computing the same quantity. For example, when looking at the internal (photon) wave function given in Eq. (16), the helicity combinations (8) and (10) will give exactly the same result. Hence, the computational efficiency can again be improved by only calculating such internal wave functions once and then re-using the value when necessary. This technique of calling the wave-function routines only once is what we refer to as "helicity recycling" and can be written in terms of pseudo-code as:

1 loop over external particles i
2 | loop over helicities j
3 | | ext_ij = get_spinor ( p_i , hel_j )

Here you can note that all stored variables carry an additional index indicating which helicity (combination) was used to compute them. While such an optimisation sounds natural, it has two potential issues. First, the amount of RAM needed for the computation will increase. However, the RAM is currently dominated by meta-data related to the phase-space integrator and not by the amount of RAM used in the matrix element. The increase in memory needed by helicity recycling is actually shadowed by such meta-data; we did not observe a sizeable increase and certainly did not face any issues with RAM assignment, even for the most complex processes. Second, while the previous strategy allowed helicity filtering at run time, this method requires us to know which helicity combinations do not contribute when creating the code. In order to determine the null helicities numerically, we have designed the code's work-flow in the following way:
1. We first allow MG5aMC to create the matrix-element subroutine as it normally would.
2. We then sample a couple of events in order to determine which helicity combinations and amplitudes do not contribute.
3. Next, the matrix-element subroutine is rewritten in the new paradigm.

The conversion of the code is done by using a directed acyclic graph to represent how the various wave-function and amplitude routines depend on one another. Having established this graph, the program is then able to easily evaluate which wave-function routines are needed to calculate a given diagram. This allows us to avoid duplicate calls to these routines when unrolling the helicity loop. It also allows us to efficiently prune any wave-function (and amplitude) call that is only associated with vanishing amplitudes. Compared to the helicity filtering discussed in section (2.1.2), this new algorithm is more efficient since it can identify any non-contributing component of the computation, such as an amplitude which vanishes only for some helicity combinations.
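Continuing the illustrative toy from the previous sketch, the effect of helicity recycling can be pictured with the following Python sketch (again ours and purely illustrative: in the real code the helicity loop is unrolled at generation time and the dependency graph decides which calls to keep, rather than a run-time cache):

import numpy as np
from functools import lru_cache

momenta = [np.array([1.0, 0.0, 0.0, 1.0]) * (i + 1) for i in range(4)]

@lru_cache(maxsize=None)            # each external spinor is computed once per (leg, helicity)
def ext_wf(leg, hel):
    return tuple(momenta[leg] * hel)            # hashable stand-in for a spinor

@lru_cache(maxsize=None)            # each internal wave function is computed once per (h3, h4)
def photon_wf(h3, h4):
    return tuple(a * b for a, b in zip(ext_wf(2, h3), ext_wf(3, h4)))

def amplitude(h1, h2, h3, h4):      # final contraction for one helicity combination
    return sum(a * b * c for a, b, c in zip(ext_wf(0, h1), ext_wf(1, h2), photon_wf(h3, h4)))

# Only the contributing helicity combinations are kept when the code is rewritten.
contributing = [(-1, 1, -1, 1), (1, -1, 1, -1), (1, -1, -1, 1), (-1, 1, 1, -1)]
total = sum(abs(amplitude(*hel)) ** 2 for hel in contributing)
print(total)
print(photon_wf.cache_info())       # 2 misses, 2 hits: the photon wave function is evaluated only twice

With four contributing combinations, the eight distinct external spinors and the two distinct photon wave functions are each evaluated once, instead of once per combination; in the generated Fortran the same effect is obtained by storing each quantity in its own variable, indexed by the helicities it depends on.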
So far we have discussed how helicity recycling is used to minimise calls to external and internal wave functions. However, the impact of such an optimisation is at best quite limited, since the computation is actually dominated by the amplitude routines. Thankfully, this implementation also allows us to optimise these routines by a significant factor. For the sake of example (and without any loss of generality) let us assume that the final vertex of a given Feynman diagram is a Standard Model fermion-fermion-vector vertex. This corresponds to (see Eq. (17)), up to couplings,
\[
\mathcal{M}^{h_1 h_2 h_\phi} \;=\; \bar\psi_1^{\,h_1}\,\gamma^\mu\,\psi_2^{\,h_2}\;\phi_\mu^{\,h_\phi},
\]
where we have explicitly added indices representing the associated helicity (or helicity combination, for internal wave functions) of each component. Now, when computing $\mathcal{M}^{h_1 h_2 h_\phi}$ and $\mathcal{M}^{h_1 h_2 \bar h_\phi}$, the factor $\bar\psi_1^{\,h_1}\gamma^\mu\psi_2^{\,h_2}$ will be identical and can therefore be re-used multiple times. We have therefore been able to optimise the code further by implementing a recycling of this factor. However, a similar factor can also be recycled between $\mathcal{M}^{h_1 h_2 h_\phi}$ and $\mathcal{M}^{\bar h_1 h_2 h_\phi}$. Therefore it is advantageous to compare, for each Feynman diagram, which one of the partial contractions of Eqs. (20)-(22) can be re-used with the highest frequency, and to use the most optimal recycling strategy. This optimisation requires us to define a new type of helicity routine; the details of its implementation in Aloha are presented in Appendix B.

In this section we will quantify the speed-up resulting from using the improvements detailed in section (2.2). First we reproduce table (1) with helicity recycling switched on. This is shown in table (2). One can see that for all processes the total number of evaluated instructions has been reduced: from 13G to 11G for tt (15% reduction), from 470G to 180G for ttgg (62% reduction) and from 11T to 5T for ttggg (55% reduction). The latter reductions are much larger because evaluating the matrix element represents a larger percentage of the overall computation for those processes. This is because the diagrams are more complex and numerous for ttgg and ttggg.

Table 2: Same as table (1) but this time with helicity recycling enabled. We also present the percentage reduction between the two tables.

Looking at table (2), we observe that both the external and internal wave-function routines represent, after helicity recycling, a relatively insignificant computational cost. First, they were not that significant before the optimisation, and second they have been greatly reduced by helicity recycling (by at least a factor of 10). The final speed-up is therefore more dependent on the reduction in calls to amplitude routines. Focusing on the ttgg process, one can see that the amplitude routines have seen a large reduction in the number of associated instructions (by a factor of two) but still represent 46% of the overall computation. Although not shown in this table, roughly half of this computation (19% of the total) is spent evaluating simple scalar products (the contraction of Eqs. (20)-(22) with the remaining wave function), which strongly limits the hope of further optimisation.
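For concreteness, the three partial contractions referred to here can be written schematically as follows (our notation, couplings omitted; Eqs. (20)-(22) in the text give the exact definitions):
\[
J^{\mu}_{h_1 h_2} \;=\; \bar\psi_1^{\,h_1}\gamma^\mu\psi_2^{\,h_2},\qquad
\bar\chi_{h_1 h_\phi} \;=\; \bar\psi_1^{\,h_1}\gamma^\mu\phi_\mu^{\,h_\phi},\qquad
\chi_{h_2 h_\phi} \;=\; \gamma^\mu\phi_\mu^{\,h_\phi}\,\psi_2^{\,h_2}.
\]
Each choice turns the amplitude into a single scalar product with the remaining wave function, e.g. $\mathcal{M}^{h_1 h_2 h_\phi} = J^{\mu}_{h_1 h_2}\,\phi^{\,h_\phi}_\mu$, so the cached object can be re-used for every helicity of the leftover leg; which of the three is cached is decided diagram by diagram, depending on which re-use happens most often.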
For the three-gluon final state the situation is similar, even if the reduction in amplitude-routine instructions is closer to a factor of 4. However, for this process the limiting factor is now the computation of the colour factors (taking around 60% of the computation). We have also investigated how that step could be optimised and introduced two simple improvements to the code. First, we use a common sub-expression reduction algorithm [41] on that segment of the code. Second, we merge the numerator and denominator of the colour matrix into a single matrix, reducing the number of operations and allowing for a better memory pattern. Combined, those modifications lead to a speed-up of around 20%.

Having looked at how the computational cost of madevent breaks down into different functions, we now present its overall speed increase. This is shown for a range of processes in table (3). If t_with and t_without are the times it takes to run madevent with and without helicity recycling respectively, then the speed-up is reported as the ratio t_without / t_with. As has already been alluded to in tables (1) and (2), the speed-up we gain from helicity recycling is highly process dependent. Helicity recycling reduces the number of times we must calculate wave functions and amplitudes, and so processes with more complicated diagrams and with a higher number of total diagrams see the biggest boost. For example, consider the gg → tt results shown in table (3a). As more gluons are added to the final state the gain increases, with the ttgg final state seeing a dramatic 2.27× speed increase. In contrast to this large increase, the similar process qq → ttqq (where q ∈ {u, d, c, s}) sees a noticeably lower speed increase of 1.27×. This is because processes involving fermions have a higher number of non-contributing helicity combinations, so helicity filtering will have a bigger effect. Hence, there will be fewer wave functions/amplitudes to evaluate and helicity recycling will have a smaller impact. One can see that the other processes presented in table (3) also follow these general principles regarding diagram complexity and fermion multiplicity. W bosons allow for minimal helicity filtering and so table (3b) displays a large speed increase, whereas electrons, being fermions, suppress the speed-up in table (3c).

In table (4) we present these results for a wider range of processes and with a more detailed breakdown. In the 'hel' column we present how many helicity combinations are evaluated per phase-space point. In this column we use a '→' to indicate that the maximum value across the various matrix elements is being shown. The columns "survey" and "refine" present the timing of the two main phases of the phase-space integration/event generation. The "survey" is designed to get a first (imprecise) estimate of the cross-section and to determine the relative importance of each contribution. The amount of time spent in the "survey" is therefore independent of the number of requested events. The "refine" targets the generation of the requested number of events and therefore scales linearly with the number of events (the scaling is not perfectly linear due to various thresholds in the method of integration; for this table, and similar ones, we always request the code to generate ten thousand unweighted events). In both cases, the timing is the time to solution observed on an i7-7700HQ CPU (2016 MacBook Pro laptop) using up to 8 threads in the standard multi-core mode of the code.
The last column presents the global speed-up between the version of the code including all the optimisations introduced in this paper (2.9.0) and the same code without helicity recycling (2.9.0 nohel). The optimisations related to the colour computation are present in both columns. The detailed description of the cuts, scale choices and so on is given as supplementary material, which also provides all the material needed to reproduce this (and subsequent) tables. Again one can see the same principles at play: the biggest speed increase is seen for complex processes dominated by QCD and vector-boson interactions, as they have the most non-vanishing helicity combinations and diagrams. The disparity of the gain between "survey" and "refine" can be explained by the different relative importance of the various matrix elements in each step. Notice that helicity recycling's tendency to speed up the more intensive processes the most is very convenient. For example, in a pp → 4j simulation, the processes with j = g will heavily dominate the calculation, and so speeding those up is the most effective way to speed up the overall simulation. This is why the simulation sees an impressive 2.1× speed-up in table (4).

(a) A selection of tt processes.
(c) A selection of e + e − processes: qq → e + e − 1.02×; qq → e + e − g 1.03×; gg → e + e − qq 1.09×.
Table 3: The factor speed-up of madevent as a result of using helicity recycling for a selection of SM processes. Here q ∈ {u, d, c, s}. One can see two general principles at play. Firstly, the more diagrams a process has and the more complicated those diagrams are, the bigger the speed increase. For example, see the effect of adding more gluons in (a). Secondly, the more helicity combinations we filter away, the lower the speed increase. This is why processes with more fermions see a lower speed-up.

In addition to the speed of the matrix-element evaluation, the efficiency of the phase-space integrator is another crucial factor determining the speed of the computation, since it controls how many times we need to evaluate the matrix element. Due to both the number of dimensions of integration and the requirement to generate uncorrelated events, the only suitable method is Monte-Carlo integration. However, the convergence of this method is quite slow ($1/\sqrt{N}$, where N is the number of times the matrix element is evaluated). In Monte-Carlo methods (see [42] for a nice review), one evaluates the integrand for random values of the variables of integration. The estimator of the integral ($I_N$) is simply given by the average of the function evaluated across these random points $x_i$,
\[
I_N \;=\; \frac{1}{N}\sum_{i=1}^{N} f(x_i).
\]
The statistical error ($\Delta I_N$) of this estimator is controlled by the variance of the integrand and can be estimated as
\[
\Delta I_N \;\simeq\; \sqrt{\frac{\mathrm{Var}(f)}{N}}\,.
\]
Since the convergence rate is fixed to $1/\sqrt{N}$, the various avenues of optimisation consist in modifying the function being integrated so as to effectively reduce its variance.

In MG5aMC, we use the single diagram enhancement method [29], which combines various importance-sampling methods (both analytical [42] and numerical [43, 44]) on top of a multi-channel strategy. The integral is decomposed as a sum of various contributions,
\[
\int \mathrm{d}\vec{x}\; |M|^2 \;=\; \sum_i \int \mathrm{d}\vec{x}\; \frac{|M_i|^2}{\sum_j |M_j|^2}\,|M|^2 ,
\]
where the indices i and j run over individual (or potentially subsets of) Feynman diagrams. The values $\alpha_i \equiv \frac{|M_i|^2}{\sum_j |M_j|^2}$ are called the channel weights; they do not modify the value of the integral (as long as $\sum_i \alpha_i = 1$ for every phase-space point) but they do impact the variance of the integrand and therefore the speed of the computation. While in general the $\alpha_i$ could be arbitrary functions, the single diagram enhancement method makes a specific choice a priori. This choice is particularly motivated by the classical limit, where interference terms between the Feynman diagrams are small. In that case $|M|^2 \approx \sum_j |M_j|^2$ and therefore we have $\alpha_i\,|M|^2 \approx |M_i|^2$. In other words, with this choice the assumption is that each of the terms of the sum (called channels of integration) mainly behaves as a single diagram squared, for which the poles are easily identifiable and importance sampling is relatively easy to implement (see section (3.2)). The role of the machine-learning algorithm (MG5aMC uses a modified VEGAS algorithm [43]) is then mainly to catch the impact of the $|M|^2 / \sum_j |M_j|^2$ term as well as other sources of deviation from the ideal case (like, for example, the impact of generation cuts or of the running of the strong coupling).
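The mechanism can be illustrated with a deliberately over-simplified one-dimensional toy (ours, not MG5aMC's integrator): two "diagrams" with well separated peaks are integrated either with flat Monte-Carlo sampling or channel by channel, each channel using the α_i weight above together with an importance-sampling mapping matched to its own peak. Function names, peak positions and widths are all invented for the illustration.

import numpy as np

rng = np.random.default_rng(1)

# Toy "matrix element": two interfering poles, mimicking |M|^2 = |M_1 + M_2|^2 on [0, 1].
def M1(x): return 1.0 / (x - 0.25 + 0.01j)
def M2(x): return 1.0 / (x - 0.75 + 0.01j)
def f(x):  return np.abs(M1(x) + M2(x)) ** 2

# Channel weights alpha_i, as in the single diagram enhancement method.
def alpha(x, i):
    d1, d2 = np.abs(M1(x)) ** 2, np.abs(M2(x)) ** 2
    return (d1 if i == 0 else d2) / (d1 + d2)

# Importance sampling for one channel: a Cauchy shape matched to |M_i|^2, truncated to [0, 1].
def sample_channel(n, pole, width):
    lo, hi = np.arctan((0.0 - pole) / width), np.arctan((1.0 - pole) / width)
    u = rng.uniform(lo, hi, n)
    x = pole + width * np.tan(u)
    pdf = 1.0 / (width * (1.0 + ((x - pole) / width) ** 2) * (hi - lo))
    return x, pdf

n = 20_000

# Flat Monte-Carlo: correct on average, but the variance of f is huge.
x_flat = rng.uniform(0.0, 1.0, n)
flat, flat_err = f(x_flat).mean(), f(x_flat).std() / np.sqrt(n)

# Multi-channel: each channel integrates alpha_i * f with its own, well matched, mapping.
total, err2 = 0.0, 0.0
for i, (pole, width) in enumerate([(0.25, 0.01), (0.75, 0.01)]):
    x, pdf = sample_channel(n, pole, width)
    w = alpha(x, i) * f(x) / pdf
    total, err2 = total + w.mean(), err2 + w.var() / n

print(f"flat MC      : {flat:8.1f} +- {flat_err:.1f}")
print(f"multi-channel: {total:8.1f} +- {np.sqrt(err2):.1f}")

With the same number of points, the channel-by-channel estimate has a far smaller statistical error, which is the whole point of the single diagram enhancement method; in MG5aMC the per-channel mappings are built from the propagator structure of the corresponding diagram and the residual ratio is handled by the VEGAS-like grid.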
While this method works extremely well in general, it is not the most suitable approach for VBF-like processes, especially at high energy where large interferences occur due to gauge symmetry. As a matter of fact, MG5aMC is much slower than many other programs (in particular VBFNLO [45]) for this type of generation. When running MG5aMC, one can easily identify that the slowest channels of integration are the ones with multiple t-channel propagators.

In MG5aMC, we handle t-channel propagators with a dedicated change of variables/phase-space measure [46, 47] in which the virtuality of the t-channel propagator is an explicit integration variable, where, as in Fig. 3, p and q are the initial-state momenta and k 1 and k 2 the final-state ones, E 1 (respectively E 2) is the energy of k 1 (respectively k 2), S = (p + q) 2 is the centre-of-mass energy of the collision, and t 1 is the virtuality of the t-channel propagator, given by $t_1 = (p - k_1)^2$. The integration over t 1 is bounded by kinematic limits, $t_1^- \le t_1 \le t_1^+$ (Eq. (31)). In the presence of multiple t-channel propagators, MG5aMC writes the measure as a product of such factors and performs the integration in the ordered way
\[
\mathrm{d}t_1\,\mathrm{d}t_2\cdots\mathrm{d}t_{n-1},
\]
meaning that we first integrate over t 1 , then t 2 , and continue up to t n−1 (Eq. (33)). The combination of such an ordering with the boundary condition of Eq. (31) creates correlations between the variables of integration. This type of correlation is problematic for any type of VEGAS-like algorithm [42, 43].

The importance of the ordering can be seen in figure (4), where we compare various orderings for a given channel of integration (the associated diagram is displayed in the figure). We present both the un-weighting efficiency and the estimated statistical uncertainty after three and five iterations, which gives a hint of the convergence of the integration grid. In this example, it is clear that the original ordering strategy picked by MG5aMC is sub-optimal (between 3 and 6 times slower than the optimal strategy, depending on the number of iterations). Not only is the best strategy more efficient at generating events, but the associated grid also seems to need fewer iterations to converge.

Fig. 4: Comparison of various orderings of the three variables of integration corresponding to the invariants of time-like particles, for the channel associated with the Feynman diagram represented on the left. We present the relative error, the number of events generated at a given iteration and the associated un-weighting efficiency.
While we hand-picked an example of a channel of integration where the ordering was sub-optimal, it would not have been difficult to also present cases where it was optimal (e.g. just flip the initial state in the example). Indeed, the optimal ordering depends strongly on the channel of integration under consideration. On top of the old ordering, we have added support for three additional orderings (see the details in Appendix A). Each channel of integration is then associated with one of those orderings, such that the virtuality of the most singular t-channel propagator is integrated first. In principle one could use the values of t + i (Eq. 31) to choose such an ordering, but this is technically not possible and we used a simple heuristic approach for this determination. The resulting timings are presented in table (5), which contains the same type of information as table (4). The comparison is between the version of the code containing all the optimisations of this paper (2.9.0) and the same version of the code where the ordering strategy is forced to the old one (Eq. 33). While in general the "survey" presents only a small speed-up, a more sizeable gain is achieved during the "refine". The actual speed-up will therefore increase slightly when requesting more events. Additionally, one can note that the biggest gains are achieved for the slowest processes. One process (VBF with transversely polarised W bosons) shows a significant slow-down (1.6 times slower). Our investigation shows that the ordering picked in that case was correct, but the convergence of the grid was actually slower than for the previous strategy, leading to the slow-down. Such an issue can only happen for relatively easy channels of integration, since more complex processes need more iterations and this effect then disappears (as observed for the VBF process at 100 TeV).

A recent paper [48] pointed to the importance of the gauge choice when generating events within MG5aMC. Even if the amplitude is fully gauge invariant, the definition of the channel weights (α i) is not. The presence of large gauge cancellations is related to large interference terms, and therefore the assumption used by the phase-space integrator (Eq. (27)) does not hold anymore. Consequently, the α i associated with Feynman diagrams containing t-channel propagators will be artificially large at high energy. This will increase the variance of the function and reduce the efficiency of the method. We propose here a second strategy for the single diagram enhancement method. Instead of using Eq. (25), we replace the |M i | 2 by the product of the propagator denominators (both s- and t-channel) associated with the single diagram under consideration (normalised as needed). Such a change in the definition of the multi-channel weights does not, in principle, impact the cross-section (see Eq. 26). However, in practice the choice of the dynamical scale is based on a CKKW-inspired clustering [49], which depends on the Feynman diagram selected by the single diagram enhancement method. This modification of the channel weights will therefore impact the scale associated with each event and hence the cross-section and shapes, both within scale uncertainties. In general this can be avoided, if needed, by changing the scale computation to H T /2 or another purely kinematical scale choice [50]. However, one should note that it is not possible to avoid such effects when running matched/merged generation within MLM or shower-kt MLM [51, 52], since the CKKW clustering is mandatory in those types of generation.
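One natural way to write the new channel weight (a sketch of the idea only; the exact normalisation and the treatment of widths in the code may differ) is
\[
\alpha_i \;\propto\; \prod_{k\,\in\,\text{propagators of diagram } i}\;\frac{1}{(p_k^2 - m_k^2)^2 + m_k^2\Gamma_k^2}\,,
\qquad
\alpha_i \;\to\; \frac{\alpha_i}{\sum_j \alpha_j}\,,
\]
with the product running over both the s- and t-channel propagators of the single diagram that defines channel i (the width term mattering only for s-channel resonances). Unlike the default |M_i|^2-based weight, this expression contains no gauge-dependent numerators, which is why it behaves better in the presence of the large gauge cancellations discussed above.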
A more subtle effect of the modification of the channel weights is related to the parton shower. When writing an event to the output file, MG5aMC provides, in addition to the kinematical variables, the colour-dipole representation in the leading-colour approximation. The determination of the eligible dipoles depends on the Feynman diagram (e.g. in the presence of a mixed expansion) and therefore the modification of the multi-channel strategy can impact this selection. One can therefore expect some changes, within theoretical uncertainties, after parton shower for some QCD-related observables.

In table (6), we compare the time needed to generate ten thousand events with the previous strategy and with the one introduced in this paper (all other optimisations of this paper are included in both cases). As one can expect for such a deep modification of the phase-space integration strategy, the observed spectrum of speed-ups/slow-downs is extremely broad, going from three orders of magnitude faster to five times slower.

Table 6: Integration timing (time to solution) on a MacBook Pro (quad-core, 2016) to generate 10k events, depending on the multi-channel strategy. If the generation fails to generate enough events the timing is re-scaled accordingly; in such cases the timing is set in bold and the number of events actually generated is indicated. A timing in red means an actual slow-down of the new strategy compared to the previous one.

It is clear that such an optimisation is a must-have in the context of VBF processes but must be avoided for most QCD multi-jet processes. While the user can easily switch from one strategy to the other (see Appendix A), we have configured the code such that the default value is process dependent. All processes with only one colour flow will use the new method, while other processes will use the old method. We made an exception for pure multi-jet processes, which use the new method as well. The default for each process is indicated in the last column of table (6). Since for most QCD processes we keep the previous integration strategy, the caveats on the change in the scale/leading-colour choice mentioned above are naturally mitigated.

In table (7), we compare the speed of the code between two versions of MG5aMC (2.8.1 and 2.9.0); 2.9.0 is the first version of the code containing the modifications described in this paper. Let us stress that every optimisation flag is set to its default value, so this is the speed-up that a user will observe without playing with any options. The combined impact of all our optimisations is striking for VBF-like processes, with a speed-up of more than a factor of 30,000 for one process. While this process is very specific and probably not the most important one for many users, all the VBF processes show massive speed-ups, passing from hour-long runs to a couple of minutes. Actually, in many cases the previous version of the code had a lot of trouble generating the requested number of events and sometimes even failed to converge to the correct cross-section. All those problems are now solved with 2.9.0: the cross-section converges quickly and events are generated very efficiently.
Gains for the other processes are more modest, firstly because the phase-space integration was much better handled to start with, and secondly because those processes are less sensitive to the t-channel diagrams on which we have focused. Nevertheless, in combination with helicity recycling, the code is, for processes heavily used at the LHC, around three times faster, a very valuable gain.

In order to evaluate an amplitude MG5aMC must sum it over all contributing helicity combinations. Before the work of this paper MG5aMC would calculate every wave function and amplitude separately for each helicity combination. We have successfully restructured the code so that it now calculates a given wave function only once and then reuses the output for all the different helicity combinations. We have also been able to split up the amplitude calculation such that part of it can be recycled across different helicity combinations. This restructuring of the code has also allowed us to avoid calculating diagrams which are null for a given helicity combination. All these optimisations mean that for complex processes with few fermions we can see a speed-up of the code of around 2×. At the other end of the scale, simple processes dominated by fermions may see a speed-up of only a few percent. Additionally, we have studied the efficiency issues of MG5aMC for VBF-like processes at high energy. We have identified that modifying the order of integration of the virtualities of t-channel particles and changing the multi-channel weights is highly valuable, providing game-changing speed-ups for such computations and fixing many of the issues faced in the past for these processes. Combining all those optimisations allows us to exceed the target speed-up requested by the HSF community [1, 2], since we provide a code at least three times faster for the CPU-intensive processes, and typically much faster than that for VBF processes. Optimisation is important for codes which are massively used by the LHC experiments and run for a non-negligible amount of time on the grid and on local clusters. We believe that this paper is a significant milestone, providing significant speed improvements both in the time to evaluate a phase-space point and in the phase-space integrator. However, this is certainly not the end of the road and this effort has to (and will) continue. First, the techniques developed in this paper need to be applied to the NLO processes within MG5aMC. We do not expect any big difficulties in such a porting and expect similar gains in speed. Second, there is still room to optimise the evaluation of the matrix element, even at leading order: work is in progress on a tighter link to the hardware, with investigations of a GPU port but also of the use of SIMD operations on CPUs [53].

When generating a LO process within MG5aMC, the optimisations described in this paper are activated by default (since version 2.9.0). If for some reason one wants to deactivate some optimisation or change some internal parameter, we offer the possibility to do so via a couple of parameters that can be included in the run_card.dat, which is the main configuration file. Most of those parameters are not present by default in that card since we consider them as advanced parameters; it is enough to add them to the file when needed. The only parameter present by default is the one allowing the choice of the multi-channel strategy. In table (8), we briefly present the various parameters that can be modified in this way.
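As an illustration, the advanced parameters of Table 8 (below) can simply be appended to the run_card.dat in the usual "value = name" layout of that file; the values shown here are only an example, and the exact formatting expected for the string-valued entries may differ:

 True        = hel_recycling            ! use the helicity-recycling output (False reverts to the old behaviour)
 2           = sde_strategy             ! multi-channel strategy: 1 = Eq. (25), 2 = Eq. (35)
 0           = hard_survey              ! 0/1/2/3: increase the number of events/iterations of the survey
 0.9         = second_refine_threshold  ! threshold used to forbid the second refine
 -O          = global_fflag             ! compilation flags used for all the Fortran code
 -ffast-math = aloha_fflag              ! additional compilation flags for the aloha routines

Remember (cf. Table 8) that changing the compilation flags does not trigger an automatic re-compilation: a "make clean" in the Source directory is needed for them to take effect.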
Table 8: Short description of the various hidden parameters that can be specified within the run_card (configuration file) to tweak the behaviour of the new methods introduced in this paper.
Options controlling the phase-space integration:
  sde_strategy (values: 1/2): integration strategy; "1" means Eq. (25), "2" means Eq. (35).
  hard_survey (default: 0; values: 0/1/2/3): increase the number of events and the maximum number of iterations.
  second_refine_threshold (default: 0.9; values: [0,1]): threshold used to forbid the second refine.
Options controlling advanced compilation flags:
  global_fflag (default: "-O"; values: string): compilation flags used for all compilation.
  aloha_fflag (default: " "; values: string): additional compilation flags for the aloha routines (suggested: -ffast-math). Note: re-compilation is not automatic (a "make clean" in the Source directory is needed).

One should notice that the parameter "hel_recycling" is a switch that forbids the use of helicity recycling; when it is set to False, the other parameters related to helicity recycling (prefixed with "hel_") have no impact. It is also possible to ask MG5aMC to generate code without any possibility of using helicity recycling. This gives code closer to the previous version and avoids spending time generating the new aloha routines. To do this one needs to modify the "output" command and add the flag "--hel_recycling=False", for example:

1 output MYDIR --hel_recycling=False

Another new optional flag for the "output" command allows one to control the ordering of the variables of integration corresponding to the invariant masses of t-channel propagators. For example:

1 generate p p > W+ W- j j QCD=0
2 output MYDIR --t_strategy = X

The possible values and their associated meaning are described below. A concrete example of the ordering for the Feynman diagram represented in Fig. (4) is also given, to provide a simpler comparison:
-1: always use the one-sided ordering ending with particle "1". For the example, this corresponds to dt 1 dt 2 dt 3 .
-2: always use the one-sided ordering ending with particle "2". This was the only option in older versions of the code. For the example, this corresponds to dt 3 dt 2 dt 1 .
--1: go from external to internal t-variables, starting with the one closer to particle "2". For the example, this corresponds to dt 2 dt 1 dt 3 .
--2: go from external to internal t-variables, starting with the one closer to particle "1". For the example, this corresponds to dt 2 dt 3 dt 1 .

A final flag for the "output" command allows one to deactivate the common sub-expression reduction algorithm for the colour-factor part of the code (which is activated by default). In some cases this flag can be relevant, since the upfront cost of this function can be large (here 5 min for the example), while the run-time gain (around 8%) might not always be enough to justify it. In general both the upfront cost and the reduction are barely noticeable. Note that this flag can also be used for the Fortran standalone output.

Aloha [30] is a program shipped with MG5aMC which automatically computes the set of helicity-amplitude routines needed for a given matrix-element computation (like Eq. (16)). After the generation of the Feynman diagrams by MG5aMC, MG5aMC requests Aloha to generate the HELAS [28] functions needed for the associated computation. In the context of helicity recycling a new type of helicity routine has been introduced (Eqs. (20)-(21)). Contrary to the other types of routines, MG5aMC does not know at generation time which of those functions will effectively be used. Consequently MG5aMC requests Aloha to generate all possible choices (so in general three routines) such that any strategy can be picked when relevant.
The implementation strategy is to ask Aloha to generate a standard internal wave-function routine but with a custom propagator. This was possible thanks to a previous extension of Aloha adding support for custom propagators [54, 55]. The definition of such propagators is essentially trivial: the propagator factor is replaced by the identity, with, in the vector case, an explicit metric tensor $g^{\mu\nu}$. The presence of the metric term in the vector case means that the metric does not need to be included in the final scalar product, and therefore gives code which is easier for the compiler to optimise (opening the possibility of using some SIMD instructions); this can be critical since a large part of the computation is spent in such simple scalar products (≈ 20% for gg → ttgg). Concerning the ALOHA naming-scheme convention, such a routine carries the suffix "P1N"; for instance, this is the type of routine generated for the γ µ Lorentz structure [56] used in the examples above.
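Schematically (our notation, couplings omitted), for a vector line attached to a fermion current the difference between the standard internal wave-function routine and the new "P1N" routine is
\[
\text{standard:}\quad \phi^{\mu}(p) \;=\; P^{\mu\nu}(p)\;\bar\psi_1\gamma_\nu\psi_2,
\qquad\qquad
\text{P1N:}\quad \phi^{\mu} \;=\; g^{\mu\nu}\;\bar\psi_1\gamma_\nu\psi_2,
\]
where $P^{\mu\nu}(p)$ is the usual vector propagator. The P1N output is just the bare current of Eq. (20), ready to be contracted, for each helicity of the remaining leg, with the corresponding wave function; because the metric is already included, that last step is a plain four-component scalar product, which is what makes it amenable to SIMD optimisation.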