key: cord-1001844-e1x52hck
cord_uid: e1x52hck
doc_id: 1001844
sha: 33d86344728d8ef1f26b83c0b6895d952a40a7a6
title: Speeding up MadGraph5_aMC@NLO
authors: Mattelaer, O.; Ostrolenk, K.
date: 2021-05-20
journal: Eur Phys J C Part Fields
DOI: 10.1140/epjc/s10052-021-09204-7

Abstract: In this paper we will describe two new optimisations implemented in MadGraph5_aMC@NLO, both of which are designed to speed up the computation of leading-order processes (for any model). First, we implement a new method to evaluate the squared matrix element, dubbed helicity recycling, which results in a factor-of-two speed-up. Second, we have modified the multi-channel handling of the phase-space integrator, providing a tremendous speed-up for VBF-like processes (up to thousands of times faster).

SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1140/epjc/s10052-021-09204-7.

While the LHC is still running, preparation is starting for the High-Luminosity LHC. As part of this preparation, the CPU efficiency of our Monte-Carlo generators is crucial due to the sheer number of events that need to be generated. Given the current constraints on the LHC IT budget, this will not be possible without significant software improvement [1, 2]. While event generation does not dominate the full CPU time of the LHC experiments, it is still estimated to represent between ten and twenty percent of it. Consequently, we have received a specific request to speed up that step by at least 20% and ideally by a factor of two [1]. In addition to the High-Luminosity LHC, work is also starting for future high-energy accelerators [3]. Going to the high-energy regime will amplify multi-scale issues, which in turn can reduce the efficiency of event generation.
This issue is particularly striking within MadGraph5_aMC@NLO [4, 5] (MG5aMC) for VBF-like processes, where the current phase-space integration algorithm either fails to generate the requested events or does so extremely inefficiently.

The different avenues for speeding up Monte-Carlo integration are a well-covered topic. Such efforts can be classified into four categories. First, one can optimise the evaluation of the function being integrated, which in our case is the matrix element [6-9]. Second, one can optimise the integration method to minimise the number of times such functions need to be evaluated [10-14]. Third, one can try to use the various types of hardware (e.g. GPU, MPI, vectorisation) more efficiently [15-18]. Finally, one can play with the weights of the sample to optimise/re-use information [19-24].

In the context of this work, we will focus on optimising MG5aMC, one of the main Monte-Carlo generators [25-27], and we will combine two different methods to achieve our goal: one optimising the time to evaluate the matrix element and one optimising the phase-space integrator. The strategy we employed was to keep the main design choices in place (in particular the helicity amplitude method [28] and the single diagram enhancement method [29]) and to study them deeply to see how they could be further improved.

To reduce the time needed to evaluate a given squared matrix element, we use a fairly standard memory/CPU trade-off guided by physics considerations. We have identified parts of the computation that can be stored in memory in order to avoid their re-computation later. We dubbed this method helicity recycling, since the additional terms stored in memory correspond to identical sub-expressions shared between different helicity configurations. This improvement will be presented in Sect. 2. We start in Sect. 2.1 by presenting the helicity formalism used in MG5aMC, then we continue in Sect.
2.2 by explaining the main idea behind helicity recycling. The details of the associated speed-up will then be presented in Sect. 2.3.

The second improvement concerns the phase-space integrator. Contrary to the first method, it is more challenging here to introduce an improvement that acts positively on all processes. On the other hand, the expected (and obtained) gain can be much more impressive. For this paper we will mainly focus on the speed-up of VBF-like processes, since they are the ones where MG5aMC has some specific issues. This will be covered in Sect. 3, where we start by reviewing the current multi-channel strategy (the single diagram enhancement method) in Sect. 3.1. We then explain in Sect. 3.2 the methods used to better handle the integration of t-channel propagators. We then turn in Sect. 3.3 to the deeper changes we made to the multi-channel strategy, as well as their impact on some physical observables (which are beyond-LO accurate). Our speed-ups are then compared to the older version of the code in Sect. 3.4. Our conclusions are presented in Sect. 4. We also provide two appendices. In the first, we give a brief manual on how to tune the optimisation parameters. In the second, we describe the modifications (and the associated conventions) to the Aloha package [30] related to the helicity recycling algorithm.

When evaluating a matrix element one can identify two key structures: the Lorentz structure of the matrix element and its colour structure. Within MG5aMC, the evaluation of these two structures factorises at the amplitude level [9]. Hence it is possible to discuss one without the other. The helicity-recycling optimisation only impacts the evaluation of the Lorentz structure, and in this section we will explain how this evaluation is performed. Matrix elements typically contain factors of spinors (from the external fermions) and Lorentz vectors (from internal propagators).
When analytically evaluating the square of this matrix element, it is common to remove the dependence on spinors via their completeness relation,

$$\sum_{h} u^{h}(p)\,\bar{u}^{h}(p) = \slashed{p} + m,$$

where the sum is over helicity. The squared matrix element can then be reduced to a series of scalar products between Lorentz vectors. However, such a formalism is typically not used by programs that perform numerical evaluations of matrix elements. This is because it causes the computational complexity to grow quadratically with the number of Feynman diagrams (due to the presence of interference terms). One solution is to use the helicity amplitude formalism [32-34], where the summation over helicity is postponed. Under this formalism the matrix element is reduced to a series of spinor products, rather than Lorentz products. These spinors will depend on the specific helicities of the external particles. The advantage of this formalism is that its complexity grows linearly with the number of diagrams (since interference terms can be handled by a simple sum over amplitudes).

[Fig. 1: S-channel e−e+ → μ−μ+ diagram with photon mediator. We use p_1, p_2, p_3 and p_4 to label the external momenta.]

An example will help illustrate the key principles here. Following Ref. [37], let us work in the massless limit and consider the unpolarised e−e+ → μ−μ+ s-channel matrix element with a photon mediator. Let us use p_1, p_2 to label the momenta of the electrons and p_3, p_4 for the muons. This process is pictured in Fig. 1. In what follows it will also be helpful to use the standard Mandelstam variables, defined as

$$s = (p_1 + p_2)^2, \qquad t = (p_1 - p_3)^2, \qquad u = (p_1 - p_4)^2.$$

Accordingly, this matrix element is written as

$$\mathcal{M} = \frac{e^2}{s}\,\bar{v}(p_2)\,\gamma^{\mu}\,u(p_1)\;\bar{u}(p_3)\,\gamma_{\mu}\,v(p_4).$$

Remember that u(p) and v(p) are Dirac spinors and can be either left- or right-handed. In the massless limit this corresponds to them having +1 or -1 helicity eigenvalues respectively. In what follows we will also use ψ to denote a general Dirac spinor.
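As a quick numerical sanity check of the Mandelstam definitions above, the following sketch builds massless 2 → 2 momenta and verifies the well-known constraint s + t + u = 0 (the beam energy and scattering angle are illustrative values, not taken from the paper):

```python
import math

# Minkowski product with (+,-,-,-) metric; momenta are (E, px, py, pz)
def minkowski(p, q):
    return p[0] * q[0] - p[1] * q[1] - p[2] * q[2] - p[3] * q[3]

def mandelstam(p1, p2, p3, p4):
    add = lambda a, b: [x + y for x, y in zip(a, b)]
    sub = lambda a, b: [x - y for x, y in zip(a, b)]
    s = minkowski(add(p1, p2), add(p1, p2))
    t = minkowski(sub(p1, p3), sub(p1, p3))
    u = minkowski(sub(p1, p4), sub(p1, p4))
    return s, t, u

# back-to-back massless beams of energy E, outgoing pair at angle theta
E, theta = 50.0, 0.3
p1 = [E, 0.0, 0.0, E]
p2 = [E, 0.0, 0.0, -E]
p3 = [E, E * math.sin(theta), 0.0, E * math.cos(theta)]
p4 = [E, -E * math.sin(theta), 0.0, -E * math.cos(theta)]
s, t, u = mandelstam(p1, p2, p3, p4)
# for massless external particles, s + t + u vanishes and s = (2E)^2
```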
The subscripts L and R will be used to denote left- and right-handedness respectively. It is common to use the following notation when expressing helicity amplitudes, for a spinor of momentum p_n:

$$|n\rangle = u_R(p_n), \quad |n] = u_L(p_n), \quad \langle n| = \bar{u}_L(p_n), \quad [n| = \bar{u}_R(p_n),$$

with the spinor products written as $\langle nm\rangle$ and $[nm]$. One can show that $[n|\gamma^{\mu}|m] = \langle n|\gamma^{\mu}|m\rangle = 0$. Hence, for the matrix element to be non-zero, the incoming fermions must have opposite helicity, and similarly for the outgoing ones. Hence, of all the possible helicity combinations we could construct, the only ones that give a non-zero matrix element are:

$$1^-2^+3^-4^+ \;(8), \qquad 1^-2^+3^+4^- \;(9), \qquad 1^+2^-3^-4^+ \;(10), \qquad 1^+2^-3^+4^- \;(11).$$

Here, the notation n^m means the fermion with momentum p_n has helicity m. The first helicity combination will give

$$\mathcal{M}_{1^-2^+3^-4^+} = \frac{e^2}{s}\,\bar{v}^{+}(p_2)\,\gamma^{\mu}\,u^{-}(p_1)\;\bar{u}^{-}(p_3)\,\gamma_{\mu}\,v^{+}(p_4). \qquad (12)$$

Using the relation $[nm]\langle mn\rangle = \langle nm\rangle[mn] = 2\,p_m \cdot p_n$, one finds the square to be

$$|\mathcal{M}_{1^-2^+3^-4^+}|^2 = 4e^4\,\frac{u^2}{s^2}. \qquad (14)$$

By parity, the helicity combination 1^+2^-3^+4^- will give the same result. The only remaining contributing helicity combinations are the same two but with 1 <-> 2. By symmetry arguments, one can see they will give the same result as in Eq. (14) but with u -> t. Hence the final result for this matrix element becomes

$$\overline{|\mathcal{M}|^2} = 2e^4\,\frac{t^2 + u^2}{s^2}. \qquad (15)$$

The main points from the simple example looked at in Sect. 2.1.1 are:

1. Diagrams are evaluated at the amplitude level (before squaring the matrix element).
2. To get the final result these diagrams must be summed over helicity combinations.
3. Only some of these helicity combinations will actually contribute.

MG5aMC follows the HELAS strategy [28] to compute amplitudes. It starts by computing the spinorial representation of all external particles. These spinors are then iteratively combined using model-specific functions (generated thanks to the Aloha [30] package). In this way the full diagram can be built up. This process is represented in Fig. 2. In the helicity amplitude formalism one distinguishes three types of functions:

external wave function: a function calculating the spinorial representation (e.g. u, v, ε_μ) of an external particle, evaluated for a given helicity.

internal wave function: a function calculating the spinorial representation (see Eq.
(16)) of an internal particle (i.e. of a propagator), evaluated for a given helicity combination.

amplitude: a function fully contracting all spinorial representations, and therefore returning, for a given helicity combination, the value of the amplitude of the associated Feynman diagram.

As depicted in Fig. 2, the evaluation of the matrix element starts by evaluating all the external wave functions, before combining them to get the internal wave functions associated to the propagators. Finally it reaches the last vertex and at that point returns an amplitude. For example, consider again the e−e+ → μ−μ+ process. After the computation of the spinors associated to the electrons and muons (which depend only on their momenta and helicities), the algorithm will call the routine for calculating the internal photon wave function. The analytic expression of this photon wave function φ_γ will be

$$\phi_{\gamma}^{\nu}(h_3, h_4) = \frac{e}{s}\,\bar{\psi}^{h_3}_{\mu}\,\gamma^{\nu}\,\psi^{h_4}_{\mu}, \qquad (16)$$

where ψ_μ has been used to represent the muon spinors. As already mentioned, these will depend on the helicity of the muons, labelled h_3 and h_4. Note that the wave function associated to this propagator is not unique, since it depends on which vertex is used to compute it. In this example, we are already at the last vertex and all that remains is to contract the various wave functions, taking into account the Lorentz/spinor structure of the associated vertex. Analytically, this is written as

$$\mathcal{M}^{h_1 h_2 h_3 h_4} = e\,\bar{\psi}^{h_2}_{e}\,\gamma_{\nu}\,\psi^{h_1}_{e}\;\phi_{\gamma}^{\nu}(h_3, h_4), \qquad (17)$$

where, just as with the muons, ψ_e has been used to represent the spinors of the electrons. This is the same as the expression in (12) but without the specific choice of helicity combination. MG5aMC generates a FORTRAN subroutine that carries out the above process for a given matrix element and helicity combination and then returns the squared amplitude. MG5aMC then performs a loop over all possible helicity combinations, at each point calling this matrix-element subroutine. The results from this loop are then summed to produce the final matrix element.
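The chaining of the three routine types can be sketched with toy stand-ins. The "algebra" below is dummy arithmetic (the real HELAS/Aloha routines manipulate spinors and Lorentz vectors), so only the call structure is meaningful:

```python
# Toy stand-ins for the three HELAS-style routine types.
def external_wf(p, hel):
    # external wave function of a particle with momentum stand-in p, helicity hel
    return [p + hel, p - hel]

def internal_wf(wf_a, wf_b):
    # internal wave function: combine two wave functions across a propagator
    return [x * y for x, y in zip(wf_a, wf_b)]

def amplitude(wf_a, wf_b, wf_c):
    # final contraction at the last vertex -> one diagram's amplitude
    return sum(x * y * z for x, y, z in zip(wf_a, wf_b, wf_c))

# e-e+ -> mu-mu+ for one helicity combination: externals first, then the
# photon wave function built from the muon side, then the amplitude
ext = [external_wf(p, h) for p, h in zip([1.0, 2.0, 3.0, 4.0], [1, -1, 1, -1])]
photon = internal_wf(ext[2], ext[3])
amp = amplitude(ext[0], ext[1], photon)
```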
This can be represented, for our e−e+ → μ−μ+ example, by the following pseudo-code:

1 loop over helicity combinations
2 | loop over diagrams
3 | | loop over external particles
4 | | | ext_i = get_spinor(p_i, hel_i)
5 | | int = get_internal_wf(ext_3, ext_4)
6 | | amp = get_amplitude(ext_1, ext_2, int)
7 | add |sum of amps|^2 to the result

Obviously, MG5aMC has already implemented a couple of optimisations. First, none of the external wave functions depend on the Feynman diagram under consideration, and their computation can therefore be moved outside of the loop over diagrams. Furthermore, the same propagator can appear in multiple Feynman diagrams, and in such a case it is also highly beneficial to move its computation out of the loop over diagrams. Therefore a more accurate representation of MG5aMC, prior to this paper, is:

1 loop over helicity combinations
2 | loop over external particles
3 | | ext_i = get_spinor(p_i, hel_i)
4 | loop over unique propagators
5 | | int_j = get_internal_wf(ext, ...)
6 | loop over diagrams
7 | | amp_k = get_amplitude(ext, int, ...)
8 | add |sum of amps|^2 to the result

Secondly, recall the point made at the start of this section that only a subset of helicity combinations contributes. Another optimisation already included in the above meta-code exploits this fact and makes sure the loop over helicity combinations only includes "contributing" ones. We shall refer to this optimisation as 'helicity filtering'.

The matrix routine discussed in this section is called by the madevent binary. This binary is also responsible for generating phase-space points, evaluating the PDFs and writing the events (amongst other things). We can evaluate the computational cost of the matrix routine by comparing the number of instructions it executes to the total number executed by madevent.

[Table 1: The number of instructions evaluated by the matrix routine (matrix1) and the total number of instructions evaluated by madevent, presented for three processes: gg → tt, gg → ttgg and gg → ttggg. In brackets these numbers are also given as a percentage of the madevent total. The matrix routine is further broken down into the instructions (again alongside their percentage of the total) evaluated by calls to the functions that evaluate external spinors (ext), internal wave functions (int) and amplitudes (amp). The data was obtained using a combination of Valgrind [39] and KCachegrind [40].]

This comparison is presented in Table 1 for top-quark pair production plus gluons. The diagrams for tt are simpler and fewer in number than those of ttgg, and even more so than those of ttggg. Hence the total number of instructions executed by madevent increases across these processes. Furthermore, this means the matrix routine is also responsible for an increasing percentage of the instructions: 23% → 96% → ~100%. We also see that for ttgg and ttggg (the more complex processes) the amplitude routines account for 44% and 51% of the computation respectively, making them the dominant contribution to the matrix-element calculation. This is again due to the higher number of diagrams, and to the fact that the number of unique propagators does not scale as fast as the diagram multiplicity. Hence it is important that any future optimisation targets not just the wave-function routines but also the amplitude routines. Finally, one can also see from the table that the wave-function and amplitude instruction counts do not add up to the number of matrix-routine instructions. This is because the routine has other things to evaluate, most notably the colour factors for the relevant amplitudes. That computation is even the second hot-spot for the three-gluon multiplicity and will therefore limit the potential impact of our optimisation.

In general, when summing over helicity combinations, the same spinor with the same helicity can appear multiple times. For example, in the combinations (8)-(11) each helicity spinor (such as 1+) appears twice. Hence when MG5aMC loops over the combinations it will calculate the same wave function for these spinors multiple times (see the above meta-code). This is a waste of computation time (even if in this simple case the external wave functions are cheap to evaluate). It would be more efficient to calculate each wave function only once, save its value and reuse it when appropriate. The same principle also applies to the wave functions of internal particles.
Such routines take other wave functions as input; for a subset of helicity combinations the same input will therefore be given and the code will waste time re-computing the same quantity. For example, looking at the internal (photon) wave function given in Eq. (16), the helicity combinations (8) and (10) will give exactly the same result. Hence, the computational efficiency can again be improved by calculating such internal wave functions only once and then re-using the value when necessary. This technique of calling the wave-function routines only once is what we refer to as "helicity recycling" and can be written in terms of pseudo-code as:

1 loop over external particles
2 | loop over helicity
3 | | ext_ij = get_spinor(p_i, hel_j)

Here you can note that all stored variables have an additional index indicating which helicity combination was used to compute them. While such an optimisation sounds natural, it has two potential issues. First, the amount of RAM needed for the computation will increase. However, the RAM usage is currently dominated by meta-data related to the phase-space integrator and not by the matrix element itself. The increase in memory required by helicity recycling is shadowed by such meta-data; we did not observe a sizeable increase and certainly did not face any issues with RAM assignment, even for the most complex processes. Second, while the previous strategy allowed helicity filtering at run time, this method requires us to know which helicity combinations do not contribute when creating the code. In order to numerically determine the null helicities, we have designed the code's work-flow in the following way:

1. We first allow MG5aMC to create the matrix-element subroutine as it normally would.
2. This subroutine is then evaluated on a number of phase-space points to numerically identify the helicity combinations that never contribute.
3. Next, the matrix-element subroutine is rewritten in the new paradigm.
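The effect of recycling the external wave functions can be illustrated with a small memoisation sketch. `get_spinor` is a hypothetical stand-in for the real routine; the point is only the call count across the four contributing combinations (8)-(11):

```python
# Toy illustration of helicity recycling: memoise external wave functions so
# that each (particle, helicity) pair is computed exactly once.
from functools import lru_cache

calls = {"n": 0}

@lru_cache(maxsize=None)
def get_spinor(i, hel):
    calls["n"] += 1          # count actual evaluations
    return (i, hel)          # placeholder for the actual spinor computation

# the four contributing combinations of the e-e+ -> mu-mu+ example
combos = [(-1, 1, -1, 1), (-1, 1, 1, -1), (1, -1, -1, 1), (1, -1, 1, -1)]
for hel in combos:
    for i, h in enumerate(hel):
        get_spinor(i, h)

# without recycling: 4 combinations x 4 particles = 16 evaluations
# with recycling:    4 particles x 2 helicities  =  8 evaluations
```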
The conversion of the code is done by using a directed acyclic graph to represent how the various wave-function and amplitude routines depend on one another. Having established this graph, the program is able to easily evaluate which wave-function routines are needed to calculate a given diagram. This allows us to avoid duplicate calls to these routines when unrolling the helicity loop. It also allows us to efficiently prune any wave-function (and amplitude) calls that are only associated with vanishing amplitudes. Compared to the helicity filtering discussed in Sect. 2.1.2, this new algorithm is more efficient, since it can identify any non-contributing component of the computation, such as an amplitude that vanishes only for one particular helicity combination.

So far we have discussed how helicity recycling is used to minimise calls to external and internal wave functions. However, the impact of such an optimisation is at best quite limited, since the computation is actually dominated by the amplitude routines. Thankfully, this implementation also allows us to optimise these routines by a significant factor. For the sake of example (and without any loss of generality), let us assume that the final vertex of a given Feynman diagram is a Standard Model fermion-fermion-vector vertex. This corresponds to (see Eq. (17))

$$\mathcal{M}^{h_1 h_2 h_{\phi}} = \bar{\psi}^{h_1}_{1}\,\gamma^{\mu}\,\psi^{h_2}_{2}\;\phi^{h_{\phi}}_{\mu},$$

where we have explicitly added indices representing the associated helicity (or helicity combination, for internal wave functions) of each component. Now, when computing $\mathcal{M}^{h_1 h_2 h_{\phi}}$ and $\mathcal{M}^{h_1 h_2 \bar{h}_{\phi}}$, the factor $\bar{\psi}^{h_1}_{1}\gamma^{\mu}\psi^{h_2}_{2}$ will be identical and can therefore be re-used multiple times. Therefore we have been able to optimise the code further by implementing a recycling of this factor. However, a similar factor can also be recycled between $\mathcal{M}^{h_1 h_2 h_{\phi}}$ and $\mathcal{M}^{\bar{h}_1 h_2 h_{\phi}}$.
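A minimal sketch of this amplitude-level recycling (hypothetical names; the real implementation works on the generated FORTRAN): the fermion current depends only on (h_1, h_2), so it can be computed once and reused for every vector helicity h_φ:

```python
# Count how often the shared fermion current is actually evaluated.
current_calls = {"n": 0}
cache = {}

def fermion_current(h1, h2):
    key = (h1, h2)
    if key not in cache:
        current_calls["n"] += 1
        cache[key] = (h1, h2)    # placeholder for psibar_1 gamma^mu psi_2
    return cache[key]

def amplitude(h1, h2, h_phi):
    cur = fermion_current(h1, h2)
    return (cur, h_phi)          # placeholder for the contraction with phi(h_phi)

for h1 in (-1, 1):
    for h2 in (-1, 1):
        for h_phi in (-1, 0, 1):   # a massive vector would have 3 helicities
            amplitude(h1, h2, h_phi)

# 4 distinct (h1, h2) currents instead of 4 x 3 = 12 evaluations
```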
Therefore it is advantageous to compare (for each Feynman diagram) which one of the following expressions

$$\bar{\psi}^{h_1}_{1}\,\gamma^{\mu}\,\psi^{h_2}_{2}, \qquad (20)$$
$$\bar{\psi}^{h_1}_{1}\,\gamma^{\mu}\,\phi^{h_{\phi}}_{\mu}, \qquad (21)$$
$$\gamma^{\mu}\,\phi^{h_{\phi}}_{\mu}\,\psi^{h_2}_{2} \qquad (22)$$

can be re-used at the highest frequency, and to use the most optimal recycling strategy. This optimisation requires us to define a new type of helicity routine; the details of its implementation in Aloha are presented in the second appendix.

In this section we will quantify the speed-up resulting from the improvements detailed in Sect. 2.2. First we reproduce Table 1 with helicity recycling switched on. This is shown in Table 2. One can see that for all processes the total number of evaluated instructions has been reduced:

- from 13G to 11G for tt (15% reduction)
- from 470G to 180G for ttgg (62% reduction)
- from 11T to 5T for ttggg (55% reduction)

The latter reductions are much larger because evaluating the matrix element represents a larger percentage of the overall computation for those processes, the diagrams being more complex and numerous for ttgg and ttggg. Looking at Table 2, we observe that, after helicity recycling, both the external and internal wave-function routines represent a relatively insignificant computational cost. Firstly, they were not that significant before the optimisation, and secondly they have been greatly reduced by helicity recycling (by at least a factor of 10). The final speed-up actually depends more on the reduction in calls to the amplitude routines. Focusing on the ttgg process, one can see the amplitude routines have seen a large reduction in the number of associated instructions (by a factor of two) but still represent 42% of the overall computation. Although not shown in this table, roughly half of this computation (19% of the total) is spent simply evaluating scalar products (the contraction of Eqs. (20)-(22) with the remaining wave function), which strongly limits the hope of further optimisation.
For the three-gluon final state the situation is similar, even if the reduction in amplitude-routine instructions is closer to a factor of 4. However, for this process the limiting factor is now the computation of the colour factor (taking around 60% of the computation). We have also investigated how that step could be optimised and introduced two simple improvements to the code. First, we use a common sub-expression elimination algorithm [41] on that segment of the code. Second, we merge the numerator and denominator of the colour matrix into a single matrix, reducing the number of operations and allowing for a better memory pattern. Combined, those modifications lead to a speed-up of around 20%.

[Table 2: The same as Table 1 but this time with helicity recycling enabled. We also present the percentage reduction between the two tables.]

Having looked at how the computational cost of madevent breaks down into different functions, we now present its overall speed-up factor. This is shown for a range of processes in Table 3. If t_with and t_without are the times it takes to run madevent with and without helicity recycling respectively, then the speed-up is

$$\text{speed-up} = \frac{t_{\text{without}}}{t_{\text{with}}}.$$

As has already been alluded to in Tables 1 and 2, the speed-up gained from helicity recycling is highly process dependent. Helicity recycling reduces the number of times we must calculate wave functions and amplitudes, and so processes with more complicated diagrams and a higher number of total diagrams see the biggest boost. For example, consider the gg → tt results shown in Table 3a. As more gluons are added to the final state the gain increases, with the ttgg final state seeing a dramatic 2.27× speed increase. In contrast to this large increase, the similar process qq → ttqq (where q ∈ {u, d, c, s}) sees a noticeably lower speed increase of 1.27×.
This is because processes involving fermions have a higher number of non-contributing helicity combinations, and so helicity filtering will have a bigger effect. Hence, there will be fewer wave functions/amplitudes to evaluate and helicity recycling will have a smaller impact. One can see that the other processes presented in Table 3 also follow these general principles regarding diagram complexity and fermion multiplicity. W bosons allow for minimal helicity filtering and so Table 3b displays a large speed increase, whereas electrons, being fermions, suppress the speed-up in Table 3c.

[Table 3: The factor speed-up of madevent as a result of using helicity recycling for a selection of SM processes, with (a) a selection of tt processes. Here q ∈ {u, d, c, s}. One can see two general principles at play. Firstly, the more diagrams a process has and the more complicated those diagrams are, the bigger the speed increase (for example, see the effect of adding more gluons in (a)). Secondly, the more helicity combinations are filtered away, the lower the speed increase; this is why processes with more fermions see a lower speed-up.]

In Table 4 we present these results for a wider range of processes and with a more detailed breakdown. In the 'hel' column we present how many helicity combinations are evaluated per phase-space point. In this column we use a '→' to indicate that the maximum value across the various matrix elements is being shown. The columns "survey" and "refine" present the timing of the two main phases of the phase-space integration/event generation for a single run. The "survey" is designed to get a first (imprecise) estimate of the cross section and to determine the relative importance of each contribution. The amount of time spent in the "survey" is therefore independent of the number of requested events. The "refine" stage aims to generate the number of requested events and therefore scales linearly with the number of events.
In both cases, the timing is the time-to-solution observed on an i7-7700HQ CPU (2016 MacBook Pro laptop) using up to 8 threads in the standard multi-core mode of the code when requesting ten thousand events. (To first approximation, the precision on the cross section is directly related to the number of events generated, which means all integrals are estimated at the percent level.) The last column presents the global speed-up (computed by comparing the sum of the timings of the survey and of the refine) between the version of the code including all the optimisations introduced in this paper (2.9.0) and the same code without helicity recycling (2.9.0 nohel). The optimisations related to the colour computation are present in both columns. A detailed description of the cuts, scale choices and so on is given as supplementary material, where one can also find all the material required to reproduce this table (and subsequent ones).

Again one can see the same principles at play: the biggest speed increase is seen for complex processes dominated by QCD and vector-boson interactions, as they have the most non-vanishing helicity combinations and diagrams. The disparity of the gain between "survey" and "refine" can be explained by the different relative importances of the various matrix elements in each of the steps. Notice that helicity recycling's tendency to more greatly speed up the more intensive processes is very convenient. For example, in a pp → 4j simulation, the processes with j = g will heavily dominate the calculation, and so speeding those up is the most effective way to speed up the overall simulation. This is why that simulation sees an impressive 2.1× speed-up in Table 4.

In addition to the speed of the matrix-element evaluation, the efficiency of the phase-space integrator is another crucial factor determining the speed of the computation, since it controls how many times we need to evaluate the matrix element.
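As a reference point for the integrator discussion that follows, a plain Monte-Carlo estimate and its statistical error can be sketched in a few lines (the integrand x^2 on [0, 1], with true value 1/3, is purely illustrative, and the run is seeded for reproducibility):

```python
import math, random
random.seed(4)

def mc_integrate(f, n):
    # uniform sampling on [0, 1]: mean of f estimates the integral,
    # and sqrt(Var/n) estimates the 1/sqrt(n) statistical error
    vals = [f(random.random()) for _ in range(n)]
    mean = sum(vals) / n
    var = sum(v * v for v in vals) / n - mean * mean
    return mean, math.sqrt(var / n)

estimate, error = mc_integrate(lambda x: x * x, 100000)
# the estimate should land within a few error bars of 1/3
```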
Due to both the number of dimensions of integration and the requirement to generate uncorrelated events, the only suitable method is Monte-Carlo integration. However, the convergence of the method is quite slow ($1/\sqrt{N}$, where N is the number of times the matrix element is evaluated). In Monte-Carlo methods (see [42] for a nice review), one evaluates the integrand for random values of the variables of integration. The estimator of the integral ($I_N$) is simply given by the average of the function estimated across these random points $x_i$:

$$I_N = \frac{1}{N}\sum_{i=1}^{N} f(x_i).$$

The statistical error ($\Delta I_N$) of this estimator is controlled by the variance of the integrand and can be estimated as

$$\Delta I_N = \sqrt{\frac{\mathrm{Var}(f)}{N}}.$$

Since the convergence rate is fixed to $1/\sqrt{N}$, the various avenues of optimisation consist in modifying the function being integrated so as to effectively reduce its variance. In MG5aMC, we use the single diagram enhancement method [29], which combines various importance-sampling methods, both analytical [42] and numerical [43, 44], on top of a multi-channel strategy. The integral is decomposed as a sum of various contributions,

$$\int |\mathcal{M}|^2\, d\Phi = \sum_i \int \alpha_i\, |\mathcal{M}|^2\, d\Phi, \qquad \alpha_i = \frac{|\mathcal{M}_i|^2}{\sum_j |\mathcal{M}_j|^2}, \qquad (25)$$

where the indices i and j range over individual (or potentially a subset of) Feynman diagrams. The values $\alpha_i$ are called the channel weights; they do not modify the value of the integral (as long as $\sum_i \alpha_i = 1$ for every phase-space point) but do impact the variance of the integrand and therefore the speed of the computation. While in general the value $\alpha_i$ could be any arbitrary function, the single diagram enhancement method makes a specific choice a priori. This choice is particularly motivated by the classical limit, where interference terms between the Feynman diagrams are small. In that case

$$|\mathcal{M}|^2 \approx \sum_j |\mathcal{M}_j|^2 \qquad (26)$$

and therefore we have

$$\alpha_i\,|\mathcal{M}|^2 \approx |\mathcal{M}_i|^2. \qquad (27)$$

In other words, with this choice the assumption is that each of the terms of the sum (called channels of integration) mainly behaves as a single diagram squared, from which the poles are easily identifiable and importance sampling is relatively easy to implement (see Sect.
3.2). The role of the machine-learning algorithms (MG5aMC uses a modified VEGAS algorithm [43]) is then mainly to catch the impact of the $|\mathcal{M}|^2 / \sum_j |\mathcal{M}_j|^2$ term, as well as other sources of deviation from the ideal case (like, for example, the impact of generation cuts or of the running of the strong coupling).

While this method works extremely well in general, it is not the most suited approach for VBF-like processes, especially at high energy where large interference occurs due to gauge symmetry. As a matter of fact, MG5aMC is much slower than many other programs [45] (in particular VBFNLO [46]) for such types of generation. When running MG5aMC, one can easily identify that the slowest channels of integration are the ones with multiple t-channel propagators. In MG5aMC, we handle t-channel propagators with a change of variables that integrates directly over the virtuality of the propagator [47, 48]. Here, as in Fig. 3, p and q are the initial-state momenta and k_1 and k_2 the final-state ones, E_1 (respectively E_2) is the energy of k_1 (respectively k_2), and $S = (p + q)^2$ is the squared centre-of-mass energy of the collision. The virtuality t_1 of the t-channel propagator is given by

$$t_1 = (p - k_1)^2.$$

The integration over t_1 is bounded by kinematically determined values $t_1^- \le t_1 \le t_1^+$ (Eq. (31)). In the presence of multiple t-channel propagators, MG5aMC parametrises the phase space by the virtualities $t_1, \ldots, t_{n-1}$ and performs the integration in the following (ordered) way,

$$\int dt_1 \int dt_2 \cdots \int dt_{n-1}, \qquad (33)$$

meaning that we first integrate over t_1, then t_2, and continue up to t_{n-1}. The combination of such an ordering with the boundary condition of Eq. (31) creates correlations between the variables of integration. This correlation is problematic for any type of VEGAS-like algorithm [42, 43]. The importance of the ordering can be seen in Fig. 4, where we compare various orderings for a given channel of integration (the associated diagram is displayed in the figure).
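The mechanics of the multi-channel decomposition of Eq. (25) can be illustrated on a one-dimensional toy integrand with two sharp poles, where each channel is importance-sampled with a Cauchy-shaped mapping matched to its own pole (all numbers and mappings are illustrative, not MG5aMC's actual ones; here each mapping matches its pole exactly, so the per-channel variance vanishes):

```python
import math, random
random.seed(7)

a, b, w = 0.25, 0.75, 0.01          # two pole positions and a common width

def f1(x): return 1.0 / ((x - a) ** 2 + w ** 2)
def f2(x): return 1.0 / ((x - b) ** 2 + w ** 2)
def f(x):  return f1(x) + f2(x)     # the full "matrix element squared"

def channel(c):
    # truncated-Cauchy importance sampling mapped to the pole at x = c
    lo, hi = math.atan(-c / w), math.atan((1 - c) / w)
    def sample():
        x = c + w * math.tan(random.uniform(lo, hi))
        g = w / (((x - c) ** 2 + w ** 2) * (hi - lo))   # normalised density
        return x, g
    return sample

N = 4000
estimate = 0.0
for fi, sample in ((f1, channel(a)), (f2, channel(b))):
    acc = 0.0
    for _ in range(N):
        x, g = sample()
        alpha = fi(x) / f(x)      # channel weights sum to 1 at every point
        acc += alpha * f(x) / g   # alpha * f = fi: each channel sees only its pole
    estimate += acc / N

exact = sum((math.atan((1 - c) / w) - math.atan(-c / w)) / w for c in (a, b))
```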
We present both the unweighting efficiency and the estimated statistical uncertainty after three and five iterations, which gives a hint of the convergence of the integration grid. In this example, it is clear that the original ordering strategy picked by MG5aMC is sub-optimal (between 3 and 6 times slower than the optimal strategy, depending on the number of iterations). Not only is the best strategy more efficient at generating events, but the associated grid also seems to need fewer iterations to converge. While we hand-picked an example of a channel of integration where the ordering was sub-optimal, it would not have been difficult to also present cases where it was optimal (e.g. just flip the initial state in the example). Indeed, the optimal ordering depends strongly on the channel of integration under consideration.

On top of the old ordering, we have added support for three additional orderings (see details in the first appendix). Each channel of integration is then associated to one of those orderings, such that the virtuality of the most singular t-channel propagator is integrated first. In principle one could use the values of $t_i^+$ (Eq. (31)) to choose such an ordering, but this is technically not possible and we used a simple heuristic approach for this determination. The resulting timings are presented in Table 5, which contains the same type of information as Table 4. The comparison is between the version of the code containing all the optimisations of this paper (2.9.0) and the same version of the code where the ordering strategy was forced to the old one (Eq. (33)). While in general the "survey" presents only a small speed-up, a more sizeable gain is achieved during the "refine". The actual speed-up will therefore slightly increase when requesting more events. Additionally, one can note that the biggest gain is achieved for the slowest processes. One process (VBF with transversely polarised W bosons) shows a significant slow-down (1.6 times slower).
Our investigation shows that the ordering picked in that case was correct, but the convergence of the grid was slower than with the previous strategy, leading to the slow-down. Such an issue can only happen for relatively easy channels of integration, since more complex processes need more iterations and the effect then disappears (as observed for the VBF process at 100 TeV). A recent paper [49] pointed out the importance of the gauge choice when generating events within MG5aMC. Even if the amplitude is fully gauge invariant, the definition of the channel weights (α_i) is not. The presence of large gauge cancellations is related to large interference terms, and therefore the assumption used by the phase-space integrator (Eq. (27)) no longer holds. Consequently, the α_i associated with Feynman diagrams containing t-channel propagators will be artificially large at high energy. This increases the variance of the function and reduces the efficiency of the method. We propose here a second strategy for the single diagram enhancement method. Instead of using Eq. (25), we replace |M_i|^2 by the product of the propagator denominators (both s- and t-channel) associated with the single diagram under consideration (normalised as needed). Such a change in the definition of the multi-channel weights does not, in principle, impact the cross section (see Eq. 26). However, in practice the choice of the dynamical scale is based on a CKKW-inspired clustering [50], which depends on the Feynman diagram selected by the single diagram enhancement method. This modification of the channel weights will therefore impact the scale associated with each event, and consequently the cross section and shapes, both within scale uncertainties. In our tests, the impact of this effect was at the percent level, well below the scale uncertainty. In general it can be avoided, if needed, by changing the scale computation to H_T/2 or some other purely kinematical scale choice [51].
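A minimal sketch of this second strategy (illustrative code with hypothetical names, not the MG5aMC internals): each channel weight is built from the product of the Breit-Wigner denominators of the propagators of diagram i, evaluated at the current phase-space point and normalised across channels.

```python
def channel_weights(diagrams, invariants):
    """Toy version of the propagator-based multi-channel weights.

    `diagrams` is a list of diagrams, each given as a list of
    (invariant_name, mass, width) tuples for its internal propagators;
    `invariants` maps an invariant name to its value p^2 at the
    current phase-space point.  The weight of channel i is the product
    of 1/((p^2 - m^2)^2 + (m*Gamma)^2) over its propagators,
    normalised so that the alphas sum to one.
    """
    raw = []
    for diagram in diagrams:
        w = 1.0
        for name, mass, width in diagram:
            p2 = invariants[name]
            w *= 1.0 / ((p2 - mass**2) ** 2 + (mass * width) ** 2)
        raw.append(w)
    norm = sum(raw)
    return [w / norm for w in raw]

# two toy channels sharing one phase-space point: an s-channel
# resonance sitting on-shell and a space-like t-channel propagator
alphas = channel_weights(
    [[("s12", 91.2, 2.5)], [("t1", 0.0, 0.0)]],
    {"s12": 91.2**2, "t1": -500.0},
)
assert abs(sum(alphas) - 1.0) < 1e-12
assert alphas[0] > alphas[1]   # the near-on-shell channel dominates
```

On this toy point the channel whose propagator is close to on-shell receives the larger α_i, which is the behaviour the multi-channel decomposition relies on, without importing the large interference terms contained in |M_i|^2.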
However, one should note that it is not possible to avoid such effects when running matched/merged generation with MLM or shower-kT MLM [52, 53], since the CKKW clustering is mandatory for those types of generation.

Table 4 (caption, continued): We present both the relative error, the number of events generated at a given iteration, and the associated unweighting efficiency.

Table 5: Integration timing (time to solution) on a MacBook Pro (quad-core, 2016) to generate 10k events, depending on the t-channel ordering strategy. If the generation fails to generate enough events, the timing is re-scaled accordingly; in such a case the timing is set in bold and the number of events actually generated is indicated.

A more subtle effect of the modification of the channel weights is related to the parton shower. When writing an event to the output file, MG5aMC provides (in addition to the kinematic variables) the colour-dipole representation in the leading-colour approximation. The determination of the eligible dipoles depends on the Feynman diagram (e.g. in the presence of a mixed expansion), and therefore the modification of the multi-channel strategy can impact this selection. One can therefore expect some changes, within theoretical uncertainties, after parton showering for some QCD-related observables. In Table 6, we compare the time needed to generate ten thousand events with the previous strategy and with the one introduced in this paper (all other optimisations of this paper are included in both cases). As one can expect for such a deep modification of the phase-space integration strategy, the observed spectrum of speed-ups/slow-downs is extremely broad, going from a three-orders-of-magnitude speed-up to five times slower.

Table 6: Integration timing (time to solution) on a MacBook Pro (quad-core, 2016) to generate 10k events, depending on the multi-channel strategy. If the generation fails to generate enough events, the timing is re-scaled accordingly; in such a case the timing is set in bold and the number of events actually generated is indicated.

It is clear that such an optimisation is a must-have for VBF processes but must be avoided for most QCD multi-jet processes. While the user can easily switch from one strategy to the other (see Sect. 1), we have configured the code such that the default value is process dependent. All processes with only one colour flow use the new method, while the other processes use the old method. We make an exception for pure multi-jet processes, which now also use the new method. The default for each process is indicated in the last column of Table 6. Since for most QCD processes we keep the previous integration strategy, the caveats about the change in the scale/leading-colour choice mentioned above are naturally mitigated. In Table 7, we compare the speed of the code between two versions of MG5aMC (2.8.1 and 2.9.0); 2.9.0 is the first version containing the modifications described in this paper. Let us stress that every optimisation flag is set to its default value, so this is the speed-up that a user will observe without adjusting any options. The combined impact of all our optimisations is striking for VBF-like processes, with a speed-up of more than a factor of 30,000 for one process. While this process is very specific and probably not the most important one for many users, all the VBF processes show massive speed-ups, passing from hour-long runs to a couple of minutes. Actually, in many cases the previous version of the code had a lot of trouble generating the requested number of events, and sometimes even converging to the correct cross section. All those problems are solved in 2.9.0: the cross section converges quickly and events are generated very efficiently.
The gain for the other processes is more modest: firstly because their phase-space integration was much better handled to start with, and secondly because those processes are less sensitive to the t-channel diagrams on which we have focused. Nevertheless, in combination with helicity recycling, the code is around three times faster for processes heavily used at the LHC, a very valuable gain.

Table 7: Integration timing (time to solution) on a MacBook Pro (quad-core, 2016) to generate 10k events. If it was not possible to generate ten thousand events in 2.8.1, the time is re-scaled accordingly (and the timing is set in bold); the second number presented in that case is the actual number of events that was generated.

In order to evaluate an amplitude, MG5aMC must sum it over all contributing helicity combinations. Before the work of this paper, MG5aMC would calculate every wave function and amplitude separately for each helicity combination. We have restructured the code so that it now calculates a given wave function only once and then reuses the output for all the different helicity combinations. We have also been able to split up the amplitude calculation such that part of it can be recycled across different helicity combinations. This restructuring of the code has also allowed us to avoid calculating the parts of the helicity computation that contribute only to null diagrams. All these optimisations mean that for complex processes with few fermions we see a speed-up of around 2×. At the other end of the scale, simple processes dominated by fermions may see a speed-up of only a few percent. Additionally, we have studied the efficiency issues of MG5aMC for VBF-like processes at high energy. We have identified that modifying the integration order of the virtualities of the t-channel particles, and changing the multi-channel weights, is highly valuable, providing game-changing speed-ups for such computations.
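The caching idea behind helicity recycling can be sketched as follows (a toy model with invented names; the real optimisation is applied to the generated Fortran code): external wavefunctions are computed once per (leg, helicity) pair and reused across all helicity combinations, instead of being recomputed for each of the 2^n combinations.

```python
import itertools

def sum_over_helicities(n_legs, wavefunction, combine):
    """Toy illustration of helicity recycling.

    `wavefunction(leg, hel)` is expensive; without recycling it would
    be re-evaluated for every helicity combination.  Here each
    external wavefunction is computed once per (leg, helicity) pair
    and reused, mirroring the caching idea (not the MG5aMC code).
    """
    cache = {}

    def wf(leg, hel):
        if (leg, hel) not in cache:
            cache[(leg, hel)] = wavefunction(leg, hel)
        return cache[(leg, hel)]

    total = 0.0
    for hels in itertools.product((-1, +1), repeat=n_legs):
        amp = combine([wf(i, h) for i, h in enumerate(hels)])
        total += abs(amp) ** 2
    return total, len(cache)

calls = []
def toy_wf(leg, hel):
    calls.append((leg, hel))        # count the expensive evaluations
    return hel * (leg + 1)          # stand-in for a spinor/vector

total, n_cached = sum_over_helicities(
    4, toy_wf, combine=lambda wfs: sum(wfs))
assert n_cached == 8                # 4 legs x 2 helicities...
assert len(calls) == 8              # ...instead of 4 x 2^4 = 64 calls
```

For 4 external legs the toy evaluator performs 8 wavefunction calls instead of 64; eliminating this kind of redundancy (plus the partial recycling of amplitudes) is what produces the factor-of-two speed-ups quoted above for complex processes.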
This has fixed many of the issues faced in the past for such processes. Combining all these optimisations allows us to exceed the target speed-up requested by the HSF community [1, 2], since we provide a code at least three times faster for CPU-intensive processes, and far faster still for VBF processes. Optimisation is important for any program heavily used by the LHC experiments, and MG5aMC represents a non-negligible amount of grid/local-cluster usage. We believe that this paper is a significant milestone for MG5aMC, providing significant speed improvements both in the time to evaluate a phase-space point and in the phase-space integrator. However, this is certainly not the end of the road, and this effort has to (and will) continue. First, the techniques developed in this paper need to be applied to the NLO processes within MG5aMC. We do not expect any big difficulties in such a port and expect similar gains in speed. Second, there is still room to optimise the evaluation of the matrix element, even at leading order: work is in progress on a tighter link to the hardware, with investigations of a GPU port and of the use of SIMD operations on CPUs [54]. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Funded by SCOAP3. When generating an LO process within MG5aMC, the optimisations described in this paper are activated by default (since version 2.9.0). If for some reason one wants to deactivate an optimisation or change an internal parameter, we offer the possibility to do so via a few parameters that can be included in run_card.dat, the main configuration file. Most of these parameters are not present by default in that card, since we consider them advanced parameters; it is enough to add them to the file when needed. The only parameter present by default is the one selecting the multi-channel strategy. In Table 8, we briefly present the various parameters that can be modified and in what way. One should note that the parameter "hel_recycling" is a switch that forbids the use of helicity recycling; when it is set to False, the other parameters related to helicity recycling (prefixed with "hel_") have no effect. It is also possible to ask MG5aMC directly to generate code without any helicity recycling. This allows for the generation of code closer to the previous version and avoids spending time generating the new Aloha routines. To do this, one needs to modify the "output" command and add the flag "--hel_recycling=False", for example: generate g g > t t~ g g g output MYDIR --hel_recycling=False The possible values of the t-channel ordering parameter and their meaning are described below. A concrete example, for the ordering of the Feynman diagram represented in Fig. 4, is also given to allow a simpler comparison. Note: re-compilation is not automatic (a "make clean" in the "Source" directory is needed).
- 1 Always use one-sided ordering, integrating the t invariant masses from the bottom of the Feynman diagram to the top (the initial-state particle with positive p_z is conventionally displayed at the top of the diagram). For the example, this corresponds to dt_1 dt_2 dt_3.
- 2 Always use one-sided ordering, integrating the t invariant masses from the top of the Feynman diagram to the bottom.
This was the only option in older versions of the code. For the example, this corresponds to dt_3 dt_2 dt_1.
- −1 Go from external to internal t-variables, starting with the t invariant mass at the bottom of the Feynman diagram, then the one at the top, then the second one from the bottom, followed by the second one from the top, and so on until depletion. For the example, this corresponds to dt_2 dt_1 dt_3; for an example with 4 t-channel propagators, it corresponds to dt_2 dt_3 dt_1 dt_4.
- −2 Same as the previous ordering, but starting from the top-most invariant mass. For the example, this corresponds to dt_2 dt_3 dt_1, and for the 4 t-channel-propagator case to dt_3 dt_2 dt_4 dt_1.
A final flag for the "output" command allows for the deactivation of the common sub-expression reduction algorithm for the colour-factor part of the code (activated by default): generate g g > t t~ g g g output MYDIR --jamp_optim=False In some cases this flag can be relevant, since the upfront cost of this optimisation can be large (here 5 min for the example) while the run-time gain (around 8%) might not always be enough to justify it. In general, both the upfront cost and the reduction are barely noticeable. Note that this flag can also be used for Fortran standalone output. Aloha [30] is a program shipped with MG5aMC which automatically computes the set of helicity-amplitude routines needed for a given matrix-element computation (like Eq. 16). After the generation of the Feynman diagrams by MG5aMC, MG5aMC requests Aloha to generate the HELAS [28] functions needed for the associated computation. In the context of helicity recycling, a new type of helicity routine has been introduced (Eqs. 20–22). Contrary to the other types of routines, MG5aMC does not know at generation time which of those functions will effectively be used.
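The four t-channel orderings listed earlier in this appendix can be encoded compactly. The sketch below is our own illustrative code (not the MG5aMC implementation); it labels the top-most propagator t_1 and writes the returned measure so that the right-most variable is integrated first, a convention under which it reproduces all the dt examples given in the text.

```python
def t_ordering(strategy, n):
    """Return the t-channel integration measure as written, e.g.
    [1, 2, 3] means dt_1 dt_2 dt_3 with the right-most variable
    integrated first.  t_1 labels the propagator at the top of the
    Feynman diagram; strategies follow the appendix description.
    """
    if strategy == 1:       # bottom -> top: integrate t_n first
        return list(range(1, n + 1))         # dt_1 ... dt_n
    if strategy == 2:       # top -> bottom: integrate t_1 first (old default)
        return list(range(n, 0, -1))         # dt_n ... dt_1
    # alternating "external to internal" orderings (-1 and -2)
    bottom = list(range(n, 0, -1))           # t_n, t_{n-1}, ...
    top = list(range(1, n + 1))              # t_1, t_2, ...
    first, second = (bottom, top) if strategy == -1 else (top, bottom)
    seq, used = [], set()
    for a, b in zip(first, second):
        for x in (a, b):
            if x not in used:
                used.add(x)
                seq.append(x)
        if len(seq) == n:
            break
    return seq[::-1]        # written order: first-integrated rightmost

assert t_ordering(1, 3) == [1, 2, 3]
assert t_ordering(2, 3) == [3, 2, 1]
assert t_ordering(-1, 3) == [2, 1, 3]
assert t_ordering(-1, 4) == [2, 3, 1, 4]
assert t_ordering(-2, 3) == [2, 3, 1]
assert t_ordering(-2, 4) == [3, 2, 4, 1]
```

The assertions are exactly the worked examples quoted in the appendix, so the sketch can serve as a cross-check of which ordering a given strategy value selects.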
Consequently, MG5aMC requests Aloha to generate all possible choices (in general, three routines) such that any strategy can be picked when relevant. The implementation strategy is to ask Aloha to generate a standard internal wave-function routine but with a custom propagator. This was possible thanks to a previous extension of Aloha adding support for custom propagators [55, 56]. The reason for the presence of a metric term in the vector propagator is that it allows us not to include the metric in the final scalar product, and therefore to have code that is easier for the compiler to optimise (opening the possibility of using some SIMD instructions). This can be critical, since a large part of the computation is spent evaluating such simple scalar products (≈ 20% for gg → tt̄gg). Concerning the Aloha naming-scheme convention, such a routine will carry the suffix "P1N"; so, for the following Lorentz structure (which corresponds to a γ^μ Lorentz structure), the generated routine name carries this suffix.

References

- HL-LHC Computing Review: Common Tools and Community Software
- A roadmap for HEP software and computing R&D for the 2020s
- Update of the European Strategy for Particle Physics
- The automated computation of tree-level and next-to-leading order differential cross sections, and their matching to parton shower simulations
- MadGraph 5: going beyond
- The chirality-flow formalism for the standard model
- Recursive calculations for processes with n gluons
- Exact expressions for processes involving a vector boson and up to five partons
- Color flow decomposition of QCD amplitudes
- Efficient Monte Carlo integration using boosted decision trees and generative deep neural networks
- Exploring phase space with neural importance sampling
- Event generation with normalizing flows
- Neural network-based approach to phase space integration
- Challenges in scaling NLO generators to leadership computers
- Fast computation of MadGraph amplitudes on graphics processing unit (GPU)
- Calculation of HELAS amplitudes for QCD processes using graphics processing unit (GPU)
- Fast calculation of HELAS amplitudes using graphics processing unit (GPU)
- A positive resampler for Monte Carlo events with negative weights
- Efficient multi-jet merging at high multiplicities
- On the reduction of negative weights in MC@NLO-type matching procedures
- OASIS: optimal analysis-specific importance sampling for event generation
- On the maximal use of Monte Carlo samples: reweighting events at NLO accuracy
- Neural resampler for Monte Carlo reweighting with preserved uncertainties
- Event generation with Sherpa 2.2
- Matching NLO QCD computations with parton shower simulations: the POWHEG method
- WHIZARD: simulating multi-particle processes at LHC and ILC
- HELAS: HELicity amplitude subroutines for Feynman diagram evaluations
- MadEvent: automatic event generation with MadGraph
- ALOHA: automatic libraries of helicity amplitudes for Feynman diagram computations
- CalcHEP 3.4 for collider physics within and beyond the Standard Model
- Helicity amplitudes for massless QED
- Multiple bremsstrahlung in gauge theories at high energies. 1. General formalism for quantum electrodynamics
- The helicity method: a review
- New recursion relations for tree amplitudes of gluons
- Direct proof of tree-level recursion relation in Yang-Mills theory
- Quantum Field Theory and the Standard Model
- TikZ-Feynman: Feynman diagrams with TikZ
- Valgrind: a framework for heavyweight dynamic binary instrumentation
- Sequential performance analysis with callgrind and kcachegrind
- The State of the Art of Computer Programming
- Introduction to Monte Carlo methods
- Adaptive multidimensional integration: VEGAS enhanced
- Recursive stratified sampling for multidimensional Monte Carlo integration
- VBFNLO
- Slides at AQGC; Release Note, VBFNLO
- b-initiated processes at the LHC: a reappraisal
- QED and QCD helicity amplitudes in parton-shower gauge
- QCD matrix elements + parton showers
- Automated event generation for loop-induced processes
- QCD radiation in the production of heavy colored particles at the LHC
- A new approach to multijet calculations in hadron collisions
- Data Parallel C++
- Automated predictions from polarized matrix elements
- Simulating spin-3/2 particles at colliders

Acknowledgements: The authors would like to thank Fabio Maltoni, Mike Seymour, Richard Ruiz, Luca Mantani, Andrew Lifson, Andrea Valassi, Stefan Roiser and all MG5aMC authors (past and present) for useful discussions. We would also like to thank the Université catholique de Louvain staff for working around the limitations imposed by the Covid-19 pandemic. This work has received funding from the European Union's Horizon 2020 research and innovation programme as part of the Marie Skłodowska-Curie Innovative Training Network