Five loop renormalization of $\phi^3$ theory with applications to the Lee-Yang edge singularity and percolation theory

We apply the method of graphical functions, recently extended to six dimensions for scalar theories, to $\phi^3$ theory and compute the $\beta$ function, the wave function anomalous dimension and the mass anomalous dimension in the $\overline{\mbox{MS}}$ scheme to five loops. From the results we derive the corresponding renormalization group functions for the Lee-Yang edge singularity problem and percolation theory. After determining the $\varepsilon$ expansions of the respective critical exponents to $\mathcal{O}(\varepsilon^5)$ we apply recent resummation technology to obtain improved exponent estimates in 3, 4 and 5 dimensions. These compare favourably with estimates from fixed dimension numerical techniques and refine the four loop results. To assist with this comparison we have collated a substantial amount of data from numerical techniques, which we include in tables for each exponent.


Introduction
One of the core quantum field theories with applications to various areas of physics is that of a scalar field with a cubic self-interaction. It has many interesting properties. For instance, it is known to be asymptotically free in six dimensions [1,2], which is also its critical dimension. As such it offered a much simpler forum in which to study this property than the more complicated non-abelian gauge field theory underlying the strong interactions, which shares it. Another major area where scalar φ³ theory has important consequences is condensed matter physics. For example, the non-unitary version of the model [3] describes the phase transitions of the Lee-Yang edge singularity problem. In particular the critical exponents computed in the cubic theory produce estimates which are not out of line with those of other methods [4,5]. Having an accurate estimate for the exponent σ of the Lee-Yang edge singularity is important in lattice gauge theory studies of Quantum Chromodynamics (QCD) [6,7]. Specifically it governs the analytic behaviour of the partition function in the smooth chiral symmetry crossover when there is a non-zero chemical potential. The latter is included in QCD as a purely imaginary parameter, which leads to a loss of unitarity, although for the region of application in [6,7] physically meaningful results can still be extracted. In addition, endowing the cubic model with specific symmetries means it can also describe phase transitions in percolation problems. This follows from taking the replica limit of the (N + 1)-state Potts model [8]. The critical dynamics of the phase transition in percolation has been widely studied using Monte Carlo or series methods on discrete or spin systems. References for this comprehensive body of work will be provided in later sections.
Indeed such analyses proceed over a range of integer dimensions from 2 to 6 inclusive, where the latter corresponds to the mean field approximation given its relation to the critical dimension. The relation of the discrete percolation theory models to a continuum field theory resides in the fact that at the phase transition both scalar φ³ theory and the spin models lie in the same universality class. What differs in the two approaches are the techniques used to estimate the physically measurable critical exponents. On the continuum quantum field theory side these are renormalization group invariants that are determined from high perturbative loop order renormalization group functions. While these functions are scheme dependent, the critical exponents at the d dimensional Wilson-Fisher fixed point [9] are scheme independent. In more recent years other continuum field theory techniques have been developed. Two of the main ones are the functional renormalization group and the modern manifestation of the conformal bootstrap, which have been applied to the Lee-Yang and percolation problems, for example, in [10,11,12] and [13,14,15] respectively.
In terms of basic scalar φ³ theory, the multiloop renormalization of the model has proceeded in stages over the last half century or so. The one and two loop renormalization group functions were determined in [1]. This was extended to the O(N) group in [2], where the leading order value of N for the existence of the conformal window was determined. The extension to three loops for both the Lee-Yang edge singularity and percolation problems was carried out in [4,5]. With hindsight, that computation was well ahead of its time given the difficulty of several of the three loop vertex graphs that needed to be evaluated. Moreover, estimates for the exponents in the percolation and Lee-Yang problems were extracted in dimensions d in the range 2 < d < 6 that were competitive with other results available then. To achieve a high degree of accuracy the analysis benefited from improved resummation techniques such as Padé approximants and Borel transformations, where in the latter conformal mappings were applied and the asymptotic behaviour of the ε expansion was incorporated. Thereafter progress in systematically renormalizing theories to higher loop order was hindered in general by a lack of technology to push to four loops. Indeed three loop calculations were only viable due to the integration by parts (IBP) method introduced in [16]. However, with the development of Laporta's integration by parts algorithm [17] and its implementation within a variety of publicly available packages, the four loop renormalization of φ³ theory was carried out in [18]. That article covered a range of applications to various problems that had emerged in the interim. For instance, in recent years it has been shown that there is a connection of φ³ theory with dualities in higher spin AdS/CFTs, [19,20]. This generated an interest in understanding the conformal window of φ³ theories for various symmetry groups such as O(N) and Sp(N), [21,22,23].
One highlight of the four loop result of [18] was the improvement in estimates for the Lee-Yang and percolation theory exponents. This analysis benefited immensely from the progress in classifying two dimensional conformal field theories in the years after [4,5]. Specifically the values of the exponents of each problem in two dimensions were found exactly. For instance, the exponent σ in one and two dimensions was determined exactly in [24]. For percolation theory the two dimensional values can be derived from the unitary minimal conformal field theories of [25] when m = 2 and the central charge is c = 0. Therefore it was possible to use that data, together with hyperscaling relations, to construct constrained Padé approximants motivated by the application of this idea given in [26]. The upshot was that exponent estimates based on four loop results for three dimensions were within one standard deviation of Monte Carlo and series results. This is all the more remarkable when one recalls that this is the resummation, down to three dimensions, of a series that is most reliable near six dimensions. We note that for brevity we will refer to results from non-continuum field theories as Monte Carlo, although this will cover those from series methods too. In this respect we have compiled exponent estimates from as many sources as we could, including strong coupling methods, functional renormalization group techniques and several specialized approaches. An excellent live source for percolation exponents is [27].
While the Laporta approach has revolutionized our ability to extend results for many theories to loop orders that would have been impossible to conceive of a decade ago, one is always interested in going beyond even those orders. In the short term any such developments have to proceed in simpler theories. Indeed this has been the case for scalar φ⁴ theory, which was renormalized at five loops in the 90's in [28,29] but not extended to six loops until around a quarter of a century later, [30,31]. Moreover the latter article [31] contained a most comprehensive analysis of the resummation of exponents using the asymptotic properties of the series. Such an analysis was much needed after the huge jump in precision. Indeed it revisited the assumptions made in earlier approaches in the literature. Subsequent to [30,31] a novel method, termed graphical functions, was applied to scalar φ⁴ theory in [32]. This extended the renormalization group functions to the staggeringly high seven loop order. One highlight was the appearance of a new period in the β function which is conjectured not to be expressible in terms of multiple zeta values (MZVs), [34,32]. En route, expectations concerning the non-MZV content predicted in [35] at high loop order were confirmed.
One lesson from [32] was the potential usefulness of the graphical function technique for extending renormalization group functions. An obstruction that limited the technique to four dimensional problems was overcome by the first and fourth author, who extended the method to arbitrary even dimensions. The details of this extension will be given elsewhere [36]. The availability of graphical functions in arbitrary even dimensions immediately opened up new possibilities for the computation of the renormalization group functions of φ³ theory in six dimensions. In this article we make use of this new tool and provide the renormalization group functions of φ³ theory up to five loops. We emphasise that we make no use at all of integration by parts in our computation. Consequently we find the next term in the ε expansion of the critical exponents for the Lee-Yang singularity and percolation problems. This forms the first part of the article. The second part is devoted to a comprehensive resummation analysis of the respective critical exponents from the ε expansion in the region between 2 and 6 dimensions. This resummation analysis is based on technology from [31] combined with constraints from two dimensional theories as in [18]. Thanks to the additional perturbative order and the more advanced resummation technology we are able to improve on the estimates obtained in [18]. In broad terms our estimates of the critical exponents are consistent with Monte Carlo and series results, which have been similarly refined in recent years. In order to compare our five loop estimates with numerical data we have carried out an analysis of all exponent estimates in the literature that had error bars and produced a global average. The paper is organized as follows. In Section 2 we introduce the scalar cubic theory, both for the case of a single scalar field and with the Lagrangian relevant for the percolation problem.
The core machinery of the graphical function formalism used to compute the five loop renormalization group functions is discussed in depth in Section 3. The outcome of this mammoth task is provided via the explicit five loop expressions for the β function, the field anomalous dimension and the mass anomalous dimension in Section 4, for both the Lee-Yang and percolation problems. As a corollary the ε expansions of the three corresponding critical exponents are determined to O(ε⁵) too. In the subsequent section we introduce and discuss aspects of the different resummation methods we use to extract exponent estimates. Then Section 6 provides the full consequences of that analysis for both the Lee-Yang problem and percolation theory. A central element of this section is the compilation of tables for each exponent for both problems. Each table provides comprehensive data on fixed dimension exponent estimates as well as the outcome of each of our resummations. An overall average is provided for each dimension for both the Monte Carlo data and our five loop results in order to have a common comparison point. Concluding remarks are provided in Section 7. Throughout the article we will in general follow the notation of [37].

Background
We begin by outlining the essential background to the problem of extracting critical exponents for both the Lee-Yang and percolation problems. First, the action for the basic cubic theory is
$$ S \,=\, \int d^d x \left[ \frac{1}{2} \left( \partial_\mu \phi_0 \right)^2 \,+\, \frac{1}{2} m_0^2 \phi_0^2 \,+\, \frac{g_0}{3!} \phi_0^3 \right] \qquad (1) $$
where $\phi_0$, $m_0$ and $g_0$ are the bare field, mass and coupling constant. It is this version of the theory that was shown to be asymptotically free in six dimensions, [1]. The connection to the action underlying the Lee-Yang edge singularity problem is given by the coupling constant mapping $g_0 \to i g_0$, which yields a non-unitary theory, [3]. The analogous action for percolation theory requires N fields $\phi^i$ prior to taking the replica limit defined formally by $N \to 0$. First we note that the most general renormalizable φ³ theory type action in six dimensions is
$$ S \,=\, \int d^d x \left[ \frac{1}{2} \left( \partial_\mu \phi_0^i \right)^2 \,+\, \frac{1}{2} m_0^2 \phi_0^i \phi_0^i \,+\, \frac{g_0}{3!} \, d^{ijk} \phi_0^i \phi_0^j \phi_0^k \right] \qquad (2) $$
where the indices run from 1 to N. Specific values of the fully symmetric tensor $d^{ijk}$ correspond to different problems and eventually lead to different prefactors which multiply the individual scalar Feynman integrals in the respective perturbative expansions. For percolation theory the $d^{ijk}$ tensor is related to the vertices of an N-dimensional tetrahedron, [38], and corresponds to the (N + 1)-state Potts model, [39]. In particular $d^{ijk}$ is given by
$$ d^{ijk} \,=\, \sum_{\alpha=1}^{N+1} e^i_\alpha e^j_\alpha e^k_\alpha \qquad (3) $$
where the N-dimensional vectors $e^i_1, \ldots, e^i_{N+1}$ satisfy the algebra, [38],
$$ \sum_{\alpha=1}^{N+1} e^i_\alpha \,=\, 0 ~,~~ \sum_{i=1}^{N} e^i_\alpha e^i_\beta \,=\, (N+1) \delta_{\alpha\beta} \,-\, 1 ~,~~ \sum_{\alpha=1}^{N+1} e^i_\alpha e^j_\alpha \,=\, (N+1) \delta^{ij} ~. \qquad (4) $$
In previous calculations [5,18] these algebraic rules were used to compute the N dependence of each individual graph. For instance in [5] the renormalization group functions were written in terms of invariants which corresponded to the primitively divergent Feynman integrals. Therefore the algebra was used to determine the N dependence of the invariants as well as their subsequent value in the replica limit. Here we use a new, to our knowledge, diagrammatic approach, based on a similar one given in [5], to calculate the necessary prefactors for each individual graph. The details of this approach will be given in Section 3.3.
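As a numerical cross-check of these algebraic rules, the following sketch realizes the $e_\alpha$ as the vertex vectors of a regular N-simplex embedded in $\mathbb{R}^{N+1}$ (a standard construction; the embedding and normalization are our choice, and in this ambient realization the vectors span the hyperplane orthogonal to $(1,\ldots,1)$) and builds $d^{ijk}$ directly:

```python
import math

def simplex_vectors(N):
    """Vertex vectors e_alpha (alpha = 1..N+1) of a regular N-simplex,
    realized in R^{N+1} and normalized so that e_a . e_b = (N+1)*delta_ab - 1."""
    s = math.sqrt(N + 1)
    # e_alpha = sqrt(N+1) * (alpha-th unit vector minus the centroid of all unit vectors)
    return [[s * ((1.0 if i == a else 0.0) - 1.0 / (N + 1)) for i in range(N + 1)]
            for a in range(N + 1)]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

N = 3
e = simplex_vectors(N)

# sum_alpha e_alpha^i = 0 in every component
assert all(abs(sum(e[a][i] for a in range(N + 1))) < 1e-12 for i in range(N + 1))

# e_alpha . e_beta = (N+1) delta_{alpha beta} - 1
for a in range(N + 1):
    for b in range(N + 1):
        expect = (N + 1) * (1 if a == b else 0) - 1
        assert abs(dot(e[a], e[b]) - expect) < 1e-12

# fully symmetric vertex tensor d^{ijk} = sum_alpha e_alpha^i e_alpha^j e_alpha^k
d = [[[sum(e[a][i] * e[a][j] * e[a][k] for a in range(N + 1))
       for k in range(N + 1)] for j in range(N + 1)] for i in range(N + 1)]
assert abs(d[0][1][2] - d[2][0][1]) < 1e-12  # invariance under index permutation
```

In this way the N dependence of a graph's prefactor can be spot-checked numerically for small N before taking the replica limit analytically.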
For both the Lee-Yang and the percolation problem we renormalize in the modified minimal subtraction ($\overline{\mbox{MS}}$) scheme in the dimensionally regularized theory in d = 6 − ε, which retains multiplicative renormalizability. This means that the number of independent renormalization constants equates to the number of terms in each action. The renormalized action for the general theory takes the same form, with the bare entities replaced by their renormalized counterparts together with the associated renormalization constants, where we define the field renormalization constant in the way that is more common in statistical physics. We highlight this explicitly in contradistinction to the high energy physics convention where it is usually defined as $\phi_{0i} = \phi_i Z_\phi^{1/2}$. As we use the $\overline{\mbox{MS}}$ scheme, each renormalization constant $Z_n$ depends on the coupling constant and contains only pole terms in ε. These are determined perturbatively by requiring the finiteness of the renormalized 1-PI Green's functions $\Gamma^R_n(g, m, \mu) = (Z_\phi)^n \Gamma_n(g_0, m_0)$. The conventional method to ensure finiteness is to use the Bogoliubov-Parasiuk R-operation [40,41], as well as the $R^*$-operation [42,43,44,45]. These allow for the use of infrared regularization or infrared rearrangement for Feynman diagrams on an individual basis, which significantly simplifies the calculations. In this article, however, we will use the much more powerful technique of graphical functions to calculate the divergences of all the diagrams. This will be discussed in the next section. One major advantage of that approach is that it enables us to compute all diagrams in a straightforward way without any infrared rearrangement or R/R*-operations. The only simplification we use is to consider purely massless graphs throughout. To extract the mass renormalization constant we evaluate a 2-point Green's function with the mass operator inserted at zero momentum.
Once we have extracted the renormalization constants to five loops, the next stage is to produce the corresponding renormalization group functions β(g), γ_φ(g) and γ_{m²}(g), where the β function is defined by $\beta(g) := \mu \frac{\partial g}{\partial \mu}$ and γ_φ(g) and γ_{m²}(g) are the analogous logarithmic µ derivatives of the respective renormalization constants. These ensure that after renormalization all finite renormalized n-point 1-PI Green's functions $\Gamma^R_n$ satisfy the renormalization group equation, where we abbreviate $\partial_g = \frac{\partial}{\partial g}$ for instance. Once the renormalization group functions have been established, they lay the foundation for our application to critical exponents. In general, if there is a nontrivial infrared fixed point of the β function at g* with β(g*) = 0, then in the limit g → g* eq. (9) transforms into an equation describing critical scaling with exponents $\gamma^*_\phi = \gamma_\phi(g^*)$ and $\gamma^*_{m^2} = \gamma_{m^2}(g^*)$. In addition the correction to scaling exponent $\beta^* = \partial_g \beta(g)|_{g=g^*}$, corresponding to the slope of the β function at criticality, will be of interest. In the notation used in statistical physics these critical point renormalization group functions determine the exponents η and 1/ν, where $\eta_O$ is the anomalous dimension derived from $Z_2$ evaluated at g* and ν corresponds to the correlation length exponent. Knowledge of the two basic exponents η and ν means that other critical exponents can be accessed via the hyperscaling relations
$$ \alpha \,=\, 2 \,-\, d \nu ~,~~ \beta \,=\, \frac{\nu}{2} \left( d \,-\, 2 \,+\, \eta \right) ~,~~ \gamma \,=\, \left( 2 \,-\, \eta \right) \nu ~,~~ \delta \,=\, \frac{d \,+\, 2 \,-\, \eta}{d \,-\, 2 \,+\, \eta} ~. $$
These will be our main focus for the percolation problem. For the Lee-Yang problem we will concentrate on η, ν, σ and ω specifically.
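The hyperscaling relations referred to above, together with the standard Lee-Yang scaling relation $\sigma = (d-2+\eta)/(d+2-\eta)$, are simple enough to encode directly. A minimal sketch (function names are ours); the mean-field input η = 0, ν = 1/2 at the critical dimension d = 6 provides a textbook consistency check:

```python
def percolation_exponents(eta, nu, d):
    """Derive alpha, beta, gamma, delta from eta and nu via the standard
    hyperscaling relations in d dimensions."""
    alpha = 2.0 - d * nu
    beta = nu * (d - 2.0 + eta) / 2.0
    gamma = (2.0 - eta) * nu
    delta = (d + 2.0 - eta) / (d - 2.0 + eta)
    return {"alpha": alpha, "beta": beta, "gamma": gamma, "delta": delta}

def lee_yang_sigma(eta, d):
    """Lee-Yang edge exponent via sigma = (d - 2 + eta) / (d + 2 - eta)."""
    return (d - 2.0 + eta) / (d + 2.0 - eta)

# Mean-field check at the critical dimension d = 6 (eta = 0, nu = 1/2):
mf = percolation_exponents(0.0, 0.5, 6)
assert mf == {"alpha": -1.0, "beta": 1.0, "gamma": 1.0, "delta": 2.0}
assert lee_yang_sigma(0.0, 6) == 0.5
```

The same two functions, fed with resummed values of η and ν in 3, 4 or 5 dimensions, reproduce the derived exponents quoted in the tables of Section 6.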

Graphical Functions
For the evaluation of the required Feynman integrals, we made heavy use of the graphical function technique that has been introduced in d = 4 by the fourth author in [46], extended to d = 4 − ε in [32] and recently generalized to all even dimensions ≥ 4 by the first and the fourth author [36].
Recall that the massless scalar propagator in d-dimensional Euclidean spacetime is the Green's function of the respective massless Klein-Gordon equation,
$$ G(x, y) \,=\, \frac{\Gamma(d/2 - 1)}{4 \pi^{d/2}} \, \frac{1}{\| x - y \|^{d-2}} \qquad (12) $$
where $\Gamma(x) = \int_0^\infty t^{x-1} \exp(-t)\, dt$ is the gamma function. Up to a rescaling and a reparametrization which will be specified later, a graphical function is a Euclidean massless position space three-point correlation function which can be written as an integral over a product of such propagators,
$$ G_\Gamma(x_a, x_b, x_c) \,=\, \left( \prod_{v \in V^{\rm int}_\Gamma} \int_{\mathbb{R}^d} \frac{d^d x_v}{\pi^{d/2}} \right) \prod_{e \in E_\Gamma} \frac{1}{\| x_{e_1} \,-\, x_{e_2} \|^{2 \lambda \nu_e}} ~,~~ \lambda \,=\, \frac{d}{2} \,-\, 1 ~. \qquad (13) $$
It is determined by a graph Γ with edges $E_\Gamma$ of weights $\nu_e$, internal vertices in $V^{\rm int}_\Gamma$ and external vertices $x_a$, $x_b$, $x_c$. Note that in our position space setting, external vertices can have any number of incident edges. Such a three-point function $G_\Gamma$ has translation, rotation and scaling symmetries, such that
$$ G_\Gamma(t x_a, t x_b, t x_c) \,=\, t^{\Delta_\Gamma} \, G_\Gamma(x_a, x_b, x_c) ~,~~ t > 0 ~, \qquad (14) $$
for all $x_a$, $x_b$, $x_c$, where the weight $\Delta_\Gamma = d\, |V^{\rm int}_\Gamma| - 2 \lambda \sum_{e \in E_\Gamma} \nu_e$ follows from power counting and fixes the superficial degree of divergence. It follows that two degrees of freedom together with $\Delta_\Gamma$ are sufficient to parameterize the three-point function $G_\Gamma$. A convenient parameterization is given by a single complex variable z ∈ C,
$$ z \overline{z} \,=\, \frac{x_{ac}^2}{x_{ab}^2} ~,~~ (1 - z)(1 - \overline{z}) \,=\, \frac{x_{bc}^2}{x_{ab}^2} ~, \qquad (15) $$
where $x_{ij} = x_j - x_i$. Using this parameterization, we can write $G_\Gamma$ as
$$ G_\Gamma(x_a, x_b, x_c) \,=\, | x_{ab} |^{\Delta_\Gamma} \, f_\Gamma(z) \qquad (16) $$
where $f_\Gamma : \mathbb{C} \to \mathbb{R}$, the graphical function, is independent of the overall scale. An important feature is that $f_\Gamma(z)$ is a single-valued real analytic function on C\{0, 1}, [47]. Furthermore, a graphical function admits expansions of log-Laurent type at the singular points 0, 1 and ∞, eqs. (17) and (18) of [46,36]. Due to the existence of the expansion at infinity in eq. (18) the graphical function naturally lives on the Riemann sphere C ∪ {∞}. These fundamental structures of graphical functions are vital for the efficiency of the graphical function method. After going from the coordinates $x_a, x_b, x_c \in \mathbb{R}^d$ to z, $\overline{z}$ via the parameterization in eq. (15), it is convenient to rename the external vertices a, b and c to 0, 1 and z. The reason for this is that eq. (15) effectively identifies the plane spanned in d-dimensional space by the points $x_a$, $x_b$, $x_c$ with the complex plane C, such that 0 is mapped to $x_a$, 1 is mapped to $x_b$ and z to $x_c$.
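As a small numerical illustration of this parameterization (a sketch assuming the conventions $z\overline{z} = x_{ac}^2/x_{ab}^2$ and $(1-z)(1-\overline{z}) = x_{bc}^2/x_{ab}^2$; the function name is ours), z and $\overline{z}$ can be recovered from three points as the roots of a quadratic, and for points lying in a common plane z reduces to the ordinary complex coordinate of $x_c$ in the frame where $x_a \mapsto 0$ and $x_b \mapsto 1$:

```python
import cmath

def z_zbar(xa, xb, xc):
    """Solve z*zbar = |x_ac|^2/|x_ab|^2 and (1-z)(1-zbar) = |x_bc|^2/|x_ab|^2
    for the complex parameters (z, zbar) of a graphical function."""
    d2 = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v))
    s1 = d2(xa, xc) / d2(xa, xb)          # z * zbar
    s2 = d2(xb, xc) / d2(xa, xb)          # (1 - z) * (1 - zbar)
    tr = 1.0 + s1 - s2                    # z + zbar
    disc = cmath.sqrt(tr * tr - 4.0 * s1)
    return (tr + disc) / 2.0, (tr - disc) / 2.0

# Planar configuration: after mapping xa -> 0, xb -> 1 the third point sits
# at 0.3 + 0.4i, which the quadratic reproduces.
xa, xb, xc = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.3, 0.4, 0.0)
z, zb = z_zbar(xa, xb, xc)
assert abs(z - complex(0.3, 0.4)) < 1e-12 or abs(zb - complex(0.3, 0.4)) < 1e-12
```

For non-planar configurations in higher dimensions the same two invariants still determine z and $\overline{z}$, which is why the graphical function lives on the complex plane independently of the ambient dimension.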
Graphical functions fulfill a large number of combinatorial identities. We can depict a general graph Γ with the three labeled external vertices 0, 1 and z, and identify the graph with its associated graphical function as long as no confusion is possible. In this notation there are identities, eq. (19), which can be used to add and remove edges between external vertices. A permutation of the external vertices 0, 1 and z corresponds to a specific Möbius transformation of the variable z together with an overall conformal rescaling, eq. (20). Two such identities generate the full permutation group of the external vertices. The factorization rules in eq. (19) and the permutation identities in eq. (20) follow immediately from eq. (16), the definition of a position space three-point function in eq. (13) and the parametrization in eq. (15). Up to this point, all statements on graphical functions are valid in even dimensions ≥ 4. To handle QFTs with subdivergences we need to use a regulator. It is convenient to use dimensional regularization because the concept of graphical functions is based on the complex plane, which is independent of the ambient space. Hence, it is stable under the deformation of the dimension to real numbers. In fact, all the previous identities work for general dimensions. We use the notation
$$ d \,=\, 4 \,+\, 2n \,-\, \varepsilon ~,~~ n \,=\, 0, 1, 2, \ldots $$
The general idea is to consider graphical functions in d dimensions as Laurent series in ε. The coefficient of every power in ε conjecturally reflects the structure of graphical functions in fixed even dimensions. In particular, every coefficient is a single-valued real analytic function on C\{0, 1} and admits log-Laurent expansions (eqs. (17) and (18)) at 0, 1 and ∞.
The most important identity of the graphical function technique follows from the definition of the propagator, eq. (12), combined with eqs. (13) and (15). The intuition behind it is that, according to eq. (12), the Laplace operator can be used to 'amputate' single external edges of a position space Feynman diagram: two position space three-point functions $G_\Gamma$ and $G_{\tilde\Gamma}$ whose underlying Feynman graphs Γ and $\tilde\Gamma$ differ only by an edge appended to the external vertex c fulfill a partial differential equation in which the Laplacian $\Delta_{x_c}$ applied to $G_{\tilde\Gamma}$ reproduces $G_\Gamma$ up to a constant. In fact, such combinatorial differential equations hold for arbitrary n-point functions. In our case of the three-point function we can translate the Laplacian $\Delta_{x_c}$ via eq. (15) into an operator on the space of complex functions in z and $\overline{z}$ to get an effective Laplacian, eq. (22), which operates on graphical functions. To obtain a graphical function of higher complexity from a graphical function of lower complexity, we would like to solve this partial differential equation for the graphical function with the appended edge. To do this, we need to invert the effective Laplacian. This inversion can be roughly separated into two related problems: first, we have to find a general solution of the differential equation; second, we need to specify the solution that gives the wanted graphical function.
The first problem is reasonably easy in four dimensions, i.e. n = 0. For ε = 0 the partial differential operator $\Delta_0$ factorizes into $\partial_z$ and $\partial_{\overline{z}}$. Naively integrating with respect to z and $\overline{z}$ results in undetermined integration constants, which may be arbitrary functions of $\overline{z}$ and z respectively, but these integration constants are restricted if we require the result to be single-valued. Single-valued integration needs to be performed on a suitable class of functions. Because of the denominator $z - \overline{z}$ the class of single-valued multiple polylogarithms as studied in [48,49] is not enough. However, there exists a generalization by the fourth author to generalized single-valued hyperlogarithms (GSVHs) which exactly accommodates the situation [50]. This theory comes with general and conveniently fast algorithms for single-valued integration in z and $\overline{z}$. For ε > 0 the generalization is straightforward: we are not interested in a full result but rather in a Laurent series in ε. This allows us to treat the ε term in eq. (22) as a perturbation and to solve the differential equation by iteratively inverting $\Delta_0$.
The situation is much more complicated for n ≥ 1 (i.e. d ≥ 6). Note that the effective Laplacian $\Delta_n$ is a partial differential operator in z and $\overline{z}$ which in general does not admit a simple solution by integration. However, somewhat surprisingly, a general solution of eq. (22) for all n = 0, 1, 2, 3, . . . in the case ε = 0 was found by the first and the fourth author [36]. The function space of GSVHs is perfectly suitable for general n. The case ε > 0 is again treated as a perturbation.
The second problem, specifying the exact solution, is solved for ε = 0 by a theorem [46,36]: the structures of graphical functions, namely single-valuedness and the log-Laurent expansions, are so restrictive that they fully specify the solution. That means there is always exactly one solution among the family of general solutions that behaves like a graphical function. Finding this special solution is achieved by an algorithm given in [36]. For ε > 0, handling subdivergences in this context is a bit subtle but always possible. Altogether we obtain a general and surprisingly efficient algorithm for appending an edge to the external label z of a graphical function.
Using the three basic operations of adding edges between external vertices, permuting external vertices and appending an edge allows one to construct a wide class of graphical functions from the empty graphical function (the graphical function with neither edges nor internal vertices), which is the constant 1. See Fig. 1 for a non-trivial example of a two-point function in φ⁴ theory that can be constructed from these basic operations.
In practice, there exist a few more elementary operations (products, factors, completion) that provide combinatorial relations between different graphical functions. A graphical function that can be expressed as a sequence of these elementary operations applied to the trivial graphical function is called constructible and can be calculated to any reasonable order in ε (limited by time and memory demands). We want to emphasize that this concept of constructible graphical functions can easily be applied to any massless QFT in even dimensions.
Still, starting at some loop order, there exist graphical functions which cannot be constructed. Some of these may be amenable to reductions by elementary operations, but eventually a non-empty irreducible graphical function will be reached that has to be calculated by other means. Beyond constructibility there exists a toolbox of additional identities which has some resemblance to standard momentum space techniques (e.g. momentum space IBPs). Explicitly we have special identities, approximations in ε and position space IBPs that can be used to further simplify the calculations. However, this toolbox is still in its infancy and small additions may lead to dramatic increases in the applicability of the overall graphical function technique.
Eventually, some (typically small) set of (typically small) graphical functions remains that has to be integrated by different means. A brute force approach is to write the respective graphical functions as parametric integrals [47], which in many cases can be integrated using an algorithm by F. Brown [51] implemented as HyperInt by E. Panzer [52]. In four dimensions this parametric integration often works quite well, even though it is always very slow compared to operations on graphical functions. In six or higher dimensions the use of parametric integration is limited by the existence of squares and higher powers in the denominators of the parametric integrals, as this causes problems in the implementation of HyperInt and results in large time and memory consumption. For the present fifth order computation in φ³ theory the use of HyperInt was not necessary, as all contributing graphical functions can be reduced to the trivial one via identities.
For six loops approximately 80% of the Feynman integrals have been calculated. Because of the vast number and the higher complexity of graphs at six loops it is necessary to make better use of some techniques in the theory of graphical functions. Most prominently, making full use of a generalized Delta-Y identity and position space integration by parts (IBP) requires the (notoriously tedious) solution of large linear systems. In the primitive case these identities (with some others) were implemented by the first author. This solved all primitive six loop integrals in φ³ theory [53]. Even at 7 loops 92% of the primitive integrals could be calculated [53]. The implementation for subdivergent graphs is straightforward but still lacking. We expect that the implementation of position space IBP will solve φ³ theory up to six loops in perturbation theory. It might be necessary to use some minor additions from other techniques such as R/R* [44,43,42] or its Hopf algebraic version [54] that make it possible to work exclusively with finite expressions. A hypothetical extension to 7 loops might already require the addition of numerical methods such as tropical parametric integration [55] to evaluate a small number of primitive graphs that cannot be evaluated via GSVHs. Such graphs are known to appear in φ⁴ theory at 8 loops [56].

Periods and renormalization group functions
To determine renormalization group functions it is sufficient to calculate (ε-dependent) periods, i.e. the Feynman integrals of graphs with two external vertices. If a graph has two external vertices $x_a$, $x_b$ then translation and scale invariance (eq. (14)) force the respective position space Feynman integral to evaluate to $|x_{ab}|^{\Delta_\Gamma} P_\Gamma(\varepsilon)$, where the period $P_\Gamma(\varepsilon)$ is a function of ε only.
The calculation of periods is much simpler than the calculation of graphical functions: one can complete a period by adding an edge between $x_a$ and $x_b$ with a specific weight, after which one is free to choose an arbitrary set of three external vertices 0, 1, z to obtain a graphical function. If this graphical function can be calculated it is always possible to efficiently integrate over the third vertex z to obtain the period. This freedom is of great benefit for calculating periods: often there exists a particularly convenient choice which facilitates the calculation. In a certain sense graphical functions were originally invented for exactly this approach to the calculation of periods. Fig. 1 shows an example of a full reduction chain of a period in φ⁴ theory.
The graphical function technique is powerful enough to take a naive approach to renormalization. We calculate each amplitude to the necessary order in ε. Instead of calculating the three-point function directly in terms of graphical functions, we set one external vertex to infinity and calculate the simpler period of the resulting truncated three-point function as an effective two-point function. We add all amplitudes to get regularized (effective) two-point functions from which we read off the Z factors by the condition that renormalization renders the (effective) two-point functions finite, see eq. (5). From the Z factors we read off the renormalization group functions, see eq. (6). A detailed explanation of the graphical function method will be given in [57]. All algorithms are implemented as Maple procedures in HyperlogProcedures by the fourth author [33]. The calculation of the fifth order result in six dimensional φ³ theory is fully automated in HyperlogProcedures and takes about two days on a single core, consuming 38 GB of memory. It can easily be parallelized.

Percolation theory prefactors
Our diagrammatic approach to compute the combinatorial prefactors C Γ for percolation theory follows from eqs. (3) and (4) via the diagrammatic discussion in [5, eq. (3.21) and the paragraph that follows]. For a graph Γ, we write the associated completed graph as Γ. A graph is completed by attaching all external legs to an additional vertex [58]. Completion utilizes the fact that the tensor structure of two-and three-point graphs is fixed up to a constant. Adapting the notation of [31], for the percolation theory problem we find the following relations of the combinatorial factors C Γ , which follow from the algebraic rules above where α1, α2 and α3 are associated with the two or three external tensor indices of the respective n-point functions. The combinatorial factors of the completed graphs are readily computed by contraction-deletion.
where Γ/e and Γ \ e denote the contraction and deletion of the edge e in the graph Γ. The contraction of a self-loop is the deletion of the loop in this context. The percolation theory problem is described by subsequently taking the N → 0 limit for each prefactor.
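The actual algebraic rules for C_Γ involve the Potts tensor structures and the N → 0 limit and are specific to [31], but the generic shape of a contraction-deletion recursion can be illustrated on a well-known stand-in, the chromatic polynomial, which satisfies P(Γ) = P(Γ \ e) − P(Γ/e), with a self-loop giving zero. A minimal sketch (function names are ours, not from the paper's implementation):

```python
def chromatic(n_vertices, edges, k):
    """Evaluate the chromatic polynomial P(G, k) by contraction-deletion:
    P(G) = P(G \\ e) - P(G/e); a graph containing a self-loop gives 0;
    the edgeless graph on n vertices gives k**n."""
    if any(u == v for u, v in edges):
        return 0                       # self-loop: contribution vanishes
    if not edges:
        return k ** n_vertices         # base case: no edges left
    (u, v), rest = edges[0], edges[1:]
    deleted = chromatic(n_vertices, rest, k)
    # Contract e = (u, v): merge v into u, then relabel vertices above v.
    contracted = []
    for a, b in rest:
        a, b = (u if a == v else a), (u if b == v else b)
        contracted.append((a - 1 if a > v else a, b - 1 if b > v else b))
    return deleted - chromatic(n_vertices - 1, contracted, k)
```

The recursion terminates because each step removes an edge; parallel edges are handled automatically since contracting one member of a pair produces a self-loop.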

Five loop renormalization group functions
With the major task of calculating all the integrals and carrying out the renormalization complete, we devote this section solely to recording the results of the full five loop renormalization for the two critical systems that we are interested in.

Lee-Yang edge singularity
For the Lee-Yang (LY) edge singularity problem the five loop MS renormalization group functions are where ζ_z denotes the Riemann zeta value ζ(z). Each expression agrees with the earlier expressions given in [1,4,5,18] up to four loops once the coupling constant mapping is undone. We note that the five loop expression also agrees with the recent calculation of [59]. Equipped with these it is straightforward to derive the respective critical exponents, which are where we have also recorded the numerical value of each coefficient in the ε expansion.

Percolation theory
For the percolation problem, denoted by P, the analogous five loop renormalization group functions after taking the replica limit as described in Section 3.3 are Again the four loop expressions are in agreement with previous MS computations [1,4,5,18]. Consequently the critical exponents are Examining the numerical values one can easily see that the series for the critical exponents have growing, sign-alternating coefficients, which is typical for an asymptotic series. Therefore, in order to obtain reliable estimates for the exponents, one needs to apply resummation methods.

Resummation strategy
While our focus so far has been on renormalizing φ 3 theory in six dimensions, the physical problems of interest are in lower dimensions. As noted, we require a strategy to resum the asymptotic series in ε for the critical exponents in order to extract meaningful estimates in, for example, three dimensions, where ε itself takes the value 3, which is clearly not small since d = 6 − ε. Therefore we devote this section to the various resummation techniques that we will apply to both the Lee-Yang and percolation problems. First we recall aspects of the resummation formalism that is well-established in this area of quantum field theory. In general an ε-expansion series f(ε) for a critical exponent is asymptotic with factorially growing coefficients, [60,61,62], where the constant a > 0 is related to the position −1/a of the closest singularity in the Borel plane. This number is the same for all exponents of the underlying model. By contrast the parameter b is related to the type of singularity and may take a different value for different exponents. In the two cases considered here we will only use the parameter a.
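For orientation we restate the generic large-order behaviour that eq. (40) refers to, in the form standard in this literature (our reconstruction, not a quotation of eq. (40); c is the implicit proportionality constant mentioned in the text):

```latex
f(\varepsilon)=\sum_{n\ge 0} f_n\,\varepsilon^{n},
\qquad
f_n \;\sim\; c\,(-a)^{n}\, n!\; n^{b}
\qquad (n\to\infty),
```

so that the Borel transform of f has its singularity closest to the origin at −1/a.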
For the Lee-Yang problem we take a = 5/18 [63,64,65]. For percolation theory we use a = 15/28 [66]. In [67] the slightly lower value of 10/21 was obtained, but it was later shown in [68] that the discrepancy was caused by an incorrect integration contour in [67,69]; once the contour is corrected, the approach suggested in [67] leads to the same result as [66]. Given that the precise value of a is not yet fully resolved, in our subsequent resummation for percolation we carried out the analysis for both values. It transpires that either choice of a produces almost identical estimates: the resulting error bars are consistently of similar size and the difference between the two central values is completely negligible within these error bars. For this reason we only provide the results based on the choice a = 15/28. Finally we note that even though the value of the implicit proportionality constant in eq. (40) remains controversial [62,70], its precise value is not required for the analysis we have performed for either problem.
In order to obtain reliable estimates for critical exponents we have applied a number of different resummation techniques. These are Padé, Padé-Borel-Leroy (PBL), Borel resummation with conformal mapping (KP17) [31], double-sided Padé, and constrained versions of Padé, PBL and KP17. The main purpose of implementing double-sided Padé and constrained resummations is to produce more accurate estimates, especially in lower dimensions such as d = 3. For both problems the constraint arises from the known exact values of exponents in two dimensions. Those for percolation derive from a minimal conformal field theory with m = 2 and central charge c = 0; these conformal field theories have been classified [25]. It turns out that taking the known exact value at d = 2 into account significantly improves the estimates. For the Lee-Yang problem exact values for some exponents are also known in d = 1. We will now discuss the technical aspects of each of the resummation methods we used separately.
Padé, double-sided Padé. The method of Padé approximants is regarded as one of the simplest resummation methods; it requires no knowledge of the properties of the series considered. Specifically, the approximant is constructed as a rational function, where PL(ε) and PM(ε) are polynomials of order L and M respectively. The polynomials are chosen in such a way that the expansion of the approximant coincides precisely with the initial series through its N = L + M + 1 known coefficients. For a double-sided Padé approximant, information from both ends of the expansion interval is used. At the lower end, which is d = 2 in our case, we only know the first term of the expansion from conformal field theory, so to find an estimate for a series known up to ε^N one needs to consider approximants with L + M + 1 = N + 1. Given that d = 1 data is also available for the Lee-Yang problem, we consider approximants with L + M + 1 = N + 2 in that case.
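As a concrete illustration of the [L/M] construction, the following minimal sketch builds the approximant from the series coefficients in exact rational arithmetic (function names are ours; this is not the paper's code):

```python
from fractions import Fraction

def gauss_solve(A, b):
    """Exact Gaussian elimination over the rationals."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def pade(coeffs, L, M):
    """[L/M] Pade approximant of a series with coefficients c_0..c_{L+M}.
    Returns (p, q), numerator and denominator coefficient lists with
    q[0] = 1, so that P_L(x)/Q_M(x) reproduces the series to order L+M."""
    c = [Fraction(x) for x in coeffs]
    # Denominator: solve sum_{j=1..M} q_j c_{k-j} = -c_k for k = L+1..L+M.
    A = [[c[k - j] if k >= j else Fraction(0) for j in range(1, M + 1)]
         for k in range(L + 1, L + M + 1)]
    b = [-c[k] for k in range(L + 1, L + M + 1)]
    q = [Fraction(1)] + gauss_solve(A, b)
    # Numerator: match the coefficients of x^0..x^L.
    p = [sum(q[j] * c[i - j] for j in range(min(i, M) + 1)) for i in range(L + 1)]
    return p, q
```

For example, the [1/1] approximant of the truncated series 1 + x + x²/2 of exp(x) comes out as (1 + x/2)/(1 − x/2), the classic result.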
One of the problems with Padé approximants is that different L/M approximants can produce significantly different estimates, and one has to choose a "proper" approximant. Such subjective choices may be ill-conceived and hence give an incorrect estimate. In this article we follow the strategy suggested in [71], which is as follows. In order to obtain an estimate and error bar (apparent accuracy) at order N we consider all non-marginal approximants of orders N and N − 1 and view them as "independent measurements", where ε_phys is the value of the expansion parameter at which the estimate is computed. This gives the estimate and error bar, where t_{0.95,n} is the critical value of Student's t-distribution at the p = 0.95 confidence level. When fewer than 3 approximants survive we do not provide an error bar, since such an error bar is usually unreliable. We consider a Padé approximant marginal if the denominator in eq. (41) has a root in the interval [0, 2ε_phys].
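One plausible reading of this prescription (our sketch, since the displayed formulas are not reproduced here) treats the surviving approximant values as a small sample and quotes a t-based confidence interval; the hard-coded constants are the standard tabulated two-sided 95% critical values of Student's t-distribution:

```python
import math

# Two-sided 95% critical values of Student's t-distribution,
# indexed by degrees of freedom (standard tabulated values).
T95 = {1: 12.706, 2: 4.303, 3: 3.182, 4: 2.776, 5: 2.571,
       6: 2.447, 7: 2.365, 8: 2.306, 9: 2.262, 10: 2.228}

def combine_approximants(values):
    """Treat surviving approximant values as independent measurements:
    return (mean, half-width of the 95% confidence interval), or
    (mean, None) when fewer than 3 approximants survive."""
    n = len(values)
    mean = sum(values) / n
    if n < 3:
        return mean, None
    var = sum((v - mean) ** 2 for v in values) / (n - 1)   # sample variance
    return mean, T95[n - 1] * math.sqrt(var / n)
```

The paper's exact estimator may differ in detail; the point is the mechanism of turning a scatter of approximants into a central value with an apparent accuracy.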
The standard Padé technique also suffers from problems when the argument becomes large, because the power law asymptotics ε^{L−M} become dominant. In this case the estimates have little predictive value. Thus we limit ourselves to approximants with |L − M| < 3, for which the asymptotics are weaker and become dominant only at much larger values. The double-sided Padé is free of this problem because the asymptotic growth at physical values of the expansion parameter is limited by the constraint imposed at d = 2 or d = 1.
Padé-Borel-Leroy. By contrast, the Padé-Borel-Leroy method is one of the simplest methods of Borel resummation. The series under consideration is assumed to be Borel summable with factorially growing and sign-alternating coefficients as in eq. (40). In this case details of the large order behaviour, such as the parameters a and b, are not required. To obtain an estimate and error bar we again follow the technique suggested in [71], which closely parallels the Padé strategy above.
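The mechanics of Padé-Borel(-Leroy) summation can be seen on the textbook toy series Σ (−1)^n n! ε^n, whose Borel transform Σ (−t)^n is resummed exactly by the [0/1] Padé approximant 1/(1 + t). This is an illustration only, not the production procedure used in the paper:

```python
import math

def pbl_toy(eps, b=0.0, tmax=50.0, n=5000):
    """Pade-Borel(-Leroy) estimate of the divergent toy series
    sum_n (-1)^n n! eps^n: evaluate the Borel integral
        f(eps) = int_0^inf e^{-t} t^b / (1 + eps*t) dt
    by composite Simpson quadrature on [0, tmax] (n even)."""
    h = tmax / n
    def g(t):
        tb = t ** b if t > 0 else (1.0 if b == 0 else 0.0)
        return math.exp(-t) * tb / (1.0 + eps * t)
    s = g(0.0) + g(tmax)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(i * h)
    return s * h / 3.0
```

At ε = 1 this reproduces the known Borel sum e·E₁(1) ≈ 0.596347 even though the original series diverges for every ε ≠ 0, which is precisely the point of the method.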
Borel resummation with conformal mapping. An extension of the previous approach is Borel resummation with conformal mapping, which has proved to be one of the most reliable and precise resummation methods for the ε expansion [72,73,74,75,31]. It allows one to utilize information about the large order asymptotics and other properties of the series under consideration. There are many realizations of this approach; in this paper we use the KP17 procedure, which was developed in [31] for studying the ε expansion of φ 4 theory critical exponents at six loops. It is very reliable and well-documented, with the technical details provided in Section V of [31].
Constrained resummations. The idea of constrained Borel resummation was proposed by R. Guida and J. Zinn-Justin in [75], where it was called resummation with boundary condition. We have applied this idea not only to the KP17 procedure, which is very similar to the method used in [75], but also to the Padé and PBL methods above. It should be noted that Padé approximants are not Borel resummable, and it is more natural to apply constraints in the way described for the double-sided Padé, but we retain constrained Padé here as it provides an additional view on constrained resummation. Results from constrained and double-sided Padé do in fact differ, because the approximants are applied to different series. Constrained resummation is based on the following series transformation, where f(ε_bc) is the known value of the expanded exponent at ε_bc, with bc standing for boundary condition. The resummation is then applied to the series h(ε) = Σ_{n=0}^∞ h_n ε^n before being substituted into (44) to obtain an estimate for f(ε). Uncertainties are computed for h(ε) as described earlier for each method, before being transformed into an uncertainty for f(ε) by standard error propagation.
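Since eq. (44) is not reproduced here, we illustrate one common convention for such a boundary-condition transformation, f(ε) = f(ε_bc) + (ε − ε_bc) h(ε), together with the Taylor coefficients of h. This is a sketch under that assumed convention; the paper's actual eq. (44) may differ in form:

```python
def constrained_series(f_coeffs, f_bc, eps_bc):
    """Taylor coefficients of h(eps) = (f(eps) - f_bc) / (eps - eps_bc)
    about eps = 0, given the truncated coefficients f_0..f_N of f and the
    exact boundary value f_bc = f(eps_bc). Uses the geometric expansion
    1/(eps - eps_bc) = -(1/eps_bc) * sum_k (eps/eps_bc)^k."""
    g = list(f_coeffs)
    g[0] -= f_bc                      # g(eps) = f(eps) - f_bc
    N = len(g) - 1
    return [-sum(g[m] / eps_bc ** (n - m + 1) for m in range(n + 1))
            for n in range(N + 1)]
```

After resumming h at ε_phys, the exponent estimate is recovered as f_bc + (ε_phys − ε_bc)·h(ε_phys), so the known boundary value is satisfied by construction.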
It should be noted that this approach allows one to apply only one constraint. So for the Lee-Yang problem we will provide two constrained resummations for each method. There will be one using the value of the exponent in d = 1 and another for d = 2. We will distinguish these two constrained procedures as «c{Method}-{d bc }». So for instance cPadé-1 is the constrained Padé with the d = 1 constraint.
Overall estimate. Given that we have a large number of estimates for each exponent from different methods, we must present an overall final estimate that summarizes all our results. One problem is that no single method provides the smallest uncertainty for every exponent. We therefore introduce an automatic algorithm that gives higher weight to results from methods with smaller uncertainty: we compute final estimates as a weighted average of all estimates, with weights w_i proportional to the inverse uncertainties. Specifically we define the weights so that almost all estimates with very large errors are discarded; a simple average would instead significantly shift the final estimate towards an incorrect value. The error estimate is computed from two parts, the weighted standard deviation and the weighted average of the uncertainties. This strategy has the following consequences: (1) methods with very large error bars almost never contribute to an estimate and only slightly increase the error bar; (2) if methods provide significantly different results with comparable error bars, the overall error bar increases; (3) if all methods provide almost the same value, the error decreases.
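A minimal sketch of such a combination rule follows; the exact weight normalization and the way the two error components are combined are our assumptions, since the displayed formulas are not reproduced here:

```python
import math

def overall_estimate(values, errors):
    """Weighted average with weights proportional to inverse uncertainties.
    The quoted error combines the weighted standard deviation of the central
    values with the weighted mean of the individual error bars (one plausible
    combination; the paper's exact formula may differ)."""
    w = [1.0 / e for e in errors]
    W = sum(w)
    mean = sum(wi * x for wi, x in zip(w, values)) / W
    var = sum(wi * (x - mean) ** 2 for wi, x in zip(w, values)) / W
    avg_err = sum(wi * e for wi, e in zip(w, errors)) / W
    return mean, math.sqrt(var + avg_err ** 2)
```

With these weights a method quoting a 100 times smaller error bar dominates the mean, which realizes property (1) above: noisy methods barely move the central value.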
In parallel to analysing the estimates from the five loop exponents we have also repeated this exercise for Monte Carlo and series data in each integer dimension. In this way we can compare data for exponents from perturbation theory and simulations on the same level.

Resummation analysis
Having outlined the technical description of each of the resummation methods we have used to extract critical exponents as well as how we arrive at our error estimates, we devote this section to recording the actual values for a wide range of exponents. For both Lee-Yang edge singularity and percolation theory a table is provided for each exponent. In addition we illustrate that data with an associated figure. Each figure shows the exponent estimates with errors for dimensions 2, 3, 4 and 5 together with our overall Monte Carlo summary values where these are available. The horizontal axis in each figure corresponds to dimension d.

Lee-Yang edge singularity
For our Lee-Yang singularity analysis we focus on a small set of exponents. As the main work of others has centred primarily on the exponent σ, we have compiled as comprehensive a set of independent results as possible in order to draw comparisons. For η and ν, aside from the three loop work of [5], we were only able to find one other study of these exponents, [11], which used the functional renormalization group approach and determined η and σ directly. An additional exponent termed νc was also computed; from the hyperscaling relations given in [11] it can be related to our η and ν via ν = νc/σ. We have used the values for η and νc of [11] to produce an independent estimate of ν against which to benchmark our results. Each of the tables follows a common layout. The top part is a compilation of data from numerical methods with sources. This is followed by our resummation analysis at both four and five loops, the former provided to gauge convergence. In each case three lines summarize the unconstrained and constrained estimates as well as the combination of both; these can be compared with a similar summary estimate from Monte Carlo (MC) and series (srs) data. For the exponent η our results are in Table 1. We see that the estimates for the three key dimensions are stable from four to five loops. For the only independent study we could locate, [11], there is some overlap with our 3 dimensional estimate, in contrast to those for higher dimensions. This exponent is ordinarily regarded as one of the more difficult ones to reconcile between different methods since it is very close to zero, although that is not the case here.
Our direct estimate of ν suffered from singularity issues. To illustrate the situation we have included Table 2, where data from the few techniques that did render an estimate are presented; the resultant large error bars mean that these values cannot be regarded as reliable. Instead we took a different tack and applied our resummation to the series for 1/ν before inverting the final numerical value. This gave problem-free data for all our approaches, recorded in Table 3. The behaviour of ν over the dimensions indicates that there is a maximum: ν is positive and increases to a large value as d decreases towards 3 dimensions, before becoming negative in two dimensions where the exact values are known. The functional renormalization group results have a similar behaviour, so we are in at least qualitative accord.
For σ a similar picture emerges in Table 4, where hdsc and hdbcc indicate results in d dimensions from separate simulations, FRG denotes the functional renormalization group and LPA indicates the use of the local potential approximation. In compiling the overall estimate and error for the first bank of the table we have excluded the results of [5] and the LPA data of [12] due to the absence of errors. We provide two estimates in 5 dimensions: the first includes the FRG result of [12] while the second omits it. Clearly, with the increase in the order of ε, the individual constrained estimates are in good agreement for 4 and 5 dimensions but less so for 3 dimensions. This is primarily because σ decreases monotonically from 6 down to 2 dimensions, being positive in the critical dimension but negative in 1 and 2 dimensions, so that it must vanish at some non-integer dimension. This zero appears to lie in the neighbourhood of 3 dimensions, as indicated by the relatively small value there, in the hundredths. Indeed this is similar to the situation alluded to for η in certain problems, and the same feature is present in the 3 dimensional estimates from the other methods provided in Table 4. This is ultimately reflected in our final five loop estimate and in particular in its error, though it should not overshadow the very good agreement in 4 and 5 dimensions. One way of gauging the accuracy of our Lee-Yang exponents is to use them to provide exponent estimates in a related model. One such example is the lattice animal problem, reviewed in [78], which at criticality is related to the Lee-Yang edge singularity problem [79]. In particular it was shown there that the Lee-Yang critical exponents in d dimensions are the same as those of the lattice animal problem in (d + 2) dimensions. In [80] estimates were given for the exponents denoted by γH and νH by calculating general dimension series up to 15th order.
The relation these two exponents have to the Lee-Yang σ exponent studied here is given in [79]. With these relations we have compiled the corresponding comparison. Finally, the situation with the exponent ω is less clear, primarily due to the lack of an exact value for this exponent in two dimensions. Therefore we are only able to record the results of unconstrained resummations, which are given in Table 6. Equally, we were unable to compare the estimates with Monte Carlo studies, so no definite conclusions should be drawn for ω in the Lee-Yang study.

Percolation theory
For percolation theory we recall in Table 7 the exact two dimensional values of the various exponents that were used when constraints were implemented, as indicated in Section 5. We will use the 2 dimensional estimate from the unconstrained resummations as a benchmark to gauge whether our extrapolation is consistent with the exact value. In several instances there was either a significant undershoot or overshoot for both the unconstrained and constrained cases, giving a strong indication that the higher dimensional estimates could be unreliable. Consequently we examined the ε expansion of various functions of the exponent, such as its reciprocal. Some of these gave improved estimates and a projection to 2 dimensions more in keeping with the exact values. This is illustrated in Table 7, which also summarizes our five loop two dimensional estimates from the various resummation methods, together with the summary of [27]; see also [81, Table 2]. While the exact values derive from two dimensional conformal field theory, we note that a Monte Carlo study [82] which predated [25] gave estimates in remarkably good agreement: for instance, the values α = −0.708 ± 0.030, β = 0.138(+0.006, −0.005), γ = 2.432 ± 0.035, δ = 18.6 ± 0.6, ν = 1.354 ± 0.015 and η = 0.204 ± 0.006 were determined in [82].
We note that here we use the value of 3/2 for ω and 72/91 for Ω. This is in contrast with the value of 2 for ω used in [18] based on [83] which also gave an argument that Ω was 96/91. The discrepancy between the exact 2 dimensional values for ω and Ω was discussed at length in [84]. In both cases the ratio ω/Ω is the same and corresponds to the fractal dimension of the system. Further a summary is given in [84] of independent calculations of both exponents. These appear to be more consistent with the respective exact values of 3/2 and 72/91. One test is the value of the exponent ∆1 = Ω/σ which is 2 for Ω = 72/91 but 8/3 for [83]. We note that naively taking our central values gives 2.04545 for ∆1.
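For the record, the arithmetic behind these two values of ∆1 uses the exact two dimensional percolation exponent σ = 36/91:

```latex
\Delta_1=\frac{\Omega}{\sigma}
=\frac{72/91}{36/91}=2
\qquad\text{or}\qquad
\Delta_1=\frac{96/91}{36/91}=\frac{8}{3},
```

consistent with the two possibilities quoted in the text.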
We now discuss the results for each exponent in alphabetical order. In Tables 8-19 we have summarized the estimates for the various exponents in the literature; one excellent source we benefited from in this respect is the table of [27], which is regularly updated. Also included in these tables are our results from the various resummations. In several of our tables we additionally have a line designated ε^4, which records the results of the constrained Padé method at four loops given in [18].
To open with, our estimates for α are presented in Table 8. One aspect that is evident is the close agreement of the estimates from both the constrained and unconstrained approaches for both 4 and 5 dimensions and to a lesser extent for 3 dimensions. The latter is in essence due to the overshoot of the two dimensional exact value for the unconstrained methods. Indeed the five loop value has a more pronounced discrepancy than at four loops. Given the slower decrease in the constrained estimate as d decreases, we would take the position that those estimates are probably more reliable. Independent Monte Carlo data for each dimension would be useful to compare with.
For β the situation is much clearer in that we have several independent results, provided in Table 9. Again there is a large degree of stability within each of the comparable methods and loop orders, although the unconstrained Padé clearly overshoots the two dimensional exact result. From four to five loops the final estimates are remarkably stable, and they can be compared with [85,86]. In the case of γ the independent Monte Carlo data was only available from the same articles as our comparison for β. There was a similar picture for our estimates in that there was remarkable stability between four and five loops for the three dimensions of interest, as is clear from Table 10. However the check estimate in two dimensions was much closer to the exact value than that for β. This may be one of the reasons why our final five loop values are in remarkably good agreement with the Monte Carlo results of [85]. There was no error bar for the 3 dimensional result of [86], so it is not clear how to interpret a central value that is 12% lower than both [85] and our five loop estimate.
The situation with δ was much less straightforward and we include two tables of data. In Table 11 we have applied our various resummation techniques directly to the ε expansion of δ, which illustrates one immediate difficulty in obtaining reliable values for low d. The value of δ in strictly 6 dimensions is 2, whereas it is exactly 91/5 = 18.2 in two dimensions. The latter places a big restriction on the ability of the ε expansion to manufacture large corrections as d decreases. This is clear in both our four and five loop estimates from both the unconstrained and constrained resummations. Moreover, the skew needed to meet the two dimensional constraint clearly makes convergence between loop orders in 3 dimensions impossible. Equally there is no agreement with the few independent estimates, both of which have more precise values. Although the direct ε expansion approach is problematic, we have included it here partly to be transparent in the application of our methods but also to highlight the hidden pitfalls in naively applying various resummation methods. To address the problem we instead tried a second approach, which was to sum the ε expansion of 1/δ, determine a numerical value and then invert it. The results of this exercise are given in Table 12. Many of the failings of the direct evaluation are now absent. There is a degree of stability between the various four and five loop results, especially in 3 dimensions. Equally, and more importantly, we find that this and the 4 dimensional results are certainly not out of line with the results of [88,86], although our 3 dimensional result has large errors. While the extrapolation to 2 dimensions has larger errors, it does accommodate the exact value. In some sense this much more consistent agreement across Table 12 is an a posteriori justification of the second strategy.
We now turn to one of the more widely measured exponents, η, with data recorded in Table 13. As is the case with η in other problems, its value in percolation is relatively small and close to zero, being associated with the wave function anomalous dimension. Consequently it is difficult to obtain an accurate value that is in consistent agreement with other estimates. This is evident in the numerical values of [85,88,90,91,89] in 3 and 4 dimensions, although those of [91,89] are the most accurate and do overlap. On the ε expansion side the Padé resummations suffered from the presence of at least one singularity in 2 < d < 6, or were not monotonic. In this case we estimated η indirectly using the respective scaling and hyperscaling relations. The numerical values for each of these are bracketed to indicate that they were not derived directly. A more difficult issue to circumvent for η in the constrained approach is the relatively large and positive exact value in 2 dimensions, which means that η must vanish somewhere between 3 and 2 dimensions as d decreases. So while our final 3 dimensional estimate of η has improved from four to five loops, this has to be balanced against a necessarily large error. The situation for ν is similar, as can be seen in Tables 14 and 15; we note that there has been a significant number of independent measurements of ν, especially in 3 dimensions, which provides a very reliable benchmark. Again we had to apply our resummation methods to both ν and 1/ν, since the estimate from the former undershot the exact 2 dimensional value. This was not the case for the latter, which we regard as our results for ν. From Table 15 it is clear that our 4 and 5 dimensional five loop estimates are in very good accord with the Monte Carlo values and are an improvement on the four loop ones. The situation in 3 dimensions is also good and closer to the numerical values.
Examining Table 16, which summarizes our investigation of σ, we note that the only independent estimates are in 3 dimensions. We have obtained excellent agreement with these, exceeding the situation for other exponents. This outcome has to be qualified by noting that in strictly 6 dimensions σ is 0.5, with the exact result in 2 dimensions being just under 0.4. In other words the behaviour of σ in the intervening dimensions is a shallow decrease as d decreases, which underpins the very good overlap. Consequently, even though there have been no measurements of σ in 4 and 5 dimensions that we can compare with, we would expect our five loop estimates to be in line with any future Monte Carlo studies in those dimensions.
There is a similar picture for τ which is apparent in Table 17. Again the behaviour of τ from 2 to 6 dimensions is relatively flat due to the exact values in these dimensions being of a similar order. In this case there are independent data in 4 and 5 dimensions. In the absence of these we could have repeated our final comment for σ. So it is reassuring that our five loop 4 and 5 dimensional estimates are indeed in excellent agreement with the independent studies. The situation with our 3 dimensional result is in similar accord.
For the exponent Ω there are again a large number of independent measurements, which we have collected in Table 18. By contrast to σ and τ there is a much larger variation in Ω across the dimensions, rising from zero in strictly 6 dimensions as d decreases. Despite this, both our constrained and unconstrained resummations appear to have a degree of stability from four to five loops. Although our check estimate against the exact two dimensional value is not as good as for other exponents, it is encouraging that our 3 dimensional five loop estimate overlaps with independent data; this is clearly more pronounced for 4 and 5 dimensions. For instance, our final five loop 4 dimensional estimate of 0.41(2) for Ω is consistent with the recent 4 dimensional value of 0.40(4), [103].
Finally we arrive at the correction to scaling exponent ω, with the situation summarized in Table 19. As mentioned earlier, for our constrained resummations we have taken the exact value in 2 dimensions to be 3/2. The independent data is mostly available in 3 dimensions and, like Ω, covers a wide range, from around 1.0 to 1.62. Again similar to Ω, ω rises from zero to a large 2 dimensional value. The overall picture is that in 3 dimensions our five loop estimates are close to one standard deviation away from the global value we calculated from the Monte Carlo data, though the constrained data in 3 dimensions is in better accord. However the 2 dimensional projected value from our unconstrained resummations is not close to the exact value. What would be useful are independent numerical studies in 4 and 5 dimensions to compare our results with.
For the most part our focus in this section has been on providing estimates for various exponents. To do this we used the ε expansion expressions for η and ν and then derived the O(ε^5) expansions of the other exponents through the scaling and hyperscaling relations of (11). We then applied the various resummation methods to each exponent to arrive at estimates in the three dimensions of interest. One question that arises as a consequence is whether the individual estimates are consistent with the scaling and hyperscaling relations. Another is which pair of exponents could reasonably be regarded as the ones from which all the other estimates can be accurately derived and thereby serve as our final exponent estimates. To this end we have presented our analysis in Table 20. In particular we have chosen σ and τ as our two independent exponents and note that the results using the cKP17-2 technique are the most reasonable to use for final predictions.
This is because the two dimensional constraints have been implemented and these two exponents are accurate with a tight error bar in each dimension. Using these cKP17-2 values we have computed the exponents for α, β, γ, δ, η and ν using (11). In the Table the independent values for σ and τ are given above the rule with the values derived from scaling relations recorded below. For the remaining two exponents ω and Ω, their cKP17-2 values have not been included as they are not accessible by scaling or hyperscaling relations and those estimates are already in their respective tables.
Comparing these scaling relation values of Table 20 with those from the direct summations given in the earlier tables, none look far out of line. One possible exception is the 3 dimensional value of α, which appears more consistent with the unconstrained analysis, although its extrapolation to 2 dimensions suggests it is probably on the better trajectory. The lack of Monte Carlo or series computations for this exponent means that we have no way of gauging our α estimates against an independent analysis. For the other exponents the view is that the individual resummations are consistent with the scaling relations, and also with independent data from other methods where that is substantial. In the case of δ, where only a few such independent determinations exist, the estimates in Table 20 are not inconsistent with them. Our final 3 dimensional estimate for η appears to be on the low side, but this exponent is difficult to measure accurately. For ν our 4 and 5 dimensional values are remarkably close to the global average we compiled; while that for 3 dimensions is not as accurate, it does lie within the errors of the global estimate.
In presenting our final estimates in this Table we should be clear that they show our results are consistent with the scaling relations in two ways: by constructing the ε expansions directly and resumming them, and by comparing with the estimates obtained from inserting the two independently summed values into the (hyper)scaling relations. In essence this is a self-consistency check. By contrast, in the Monte Carlo and series approaches the focus is usually on one or two specific exponents, whose values are then used to generate the remaining exponents via (11) rather than making extra measurements. So until all techniques have achieved a large degree of computational accuracy, the overall picture obtained by comparing exponent estimates will probably remain imperfect.

Conclusions
The main result of this paper is clearly the provision of the five loop renormalization group functions of φ³ theory in six dimensions in the $\overline{\mbox{MS}}$ scheme. This level of precision could not have been achieved without the graphical function formalism developed in [46,32]. That method was originally pioneered in four dimensions to renormalize φ⁴ theory to seven loops in the same scheme [32,57]. Achieving five loop accuracy here required the extension of graphical functions to six dimensions that was provided in [36,53]. An indication of the advantage of such new and powerful techniques can be gained from the dates at which the previous loop order results became available for the cubic theory. The one and two loop renormalization group functions were recorded in the early 1970s [1], with the three loop extension appearing within seven years [4,5]. Similar to the extension of φ⁴ theory from five to six loops, there was a quarter of a century lull before φ³ theory was renormalized at four loops [18]. The relatively quick extension to five loops here suggests that, with suitable investment in the underlying mathematics of the graphical functions approach, higher loop orders are potentially within reach in this and other theories.
Such higher order computations are not purely academic exercises, since the second part of our study was to extract improved estimates for critical exponents in two important problems. The Lee-Yang edge singularity and percolation problems at criticality lie in separate universality classes, but both have a φ³ continuum field theory at their heart. The main difference is that the respective versions of the cubic theory are decorated with different symmetry properties. Given that we have renormalized the pure φ³ Lagrangian at five loops, it was a relatively simple exercise to include the respective symmetry decorations and determine the two sets of renormalization group functions. From these the ε expansions of the critical exponents were constructed to O(ε⁵) before a variety of resummation techniques were applied to extract numerical estimates. Moreover, a significant part of determining the estimates was devoted to a careful error analysis using the same formalism provided in [31] for φ⁴ theory. What ought to be recognized is that, on the whole, not only do the five loop results improve upon the four loop ones and show convergence, but there is also good agreement with other techniques in both problems for the dimensions of interest, including Monte Carlo and strong coupling methods to name but a few. In making this remark it should not be overlooked that this includes 3 dimensions, where we have summed from d = 6 − ε down to d = 3, corresponding to a large value of ε. In one sense this confirms that φ³ theory lies in the same universality class. What was useful in making this comparison between the exponents from discrete systems and the continuum field theory was collating the available data for the former to produce a global average. The associated error bars were produced with the same routine that we used for the various resummations of the five loop results.
In this respect we endeavoured to compare the discrete and continuum pictures on an equal footing.
The determination of the five loop renormalization group functions in the core cubic theory opens up the possibility of studying related six dimensional cubic theories to the same level of precision. For instance, the conformal bootstrap formalism represents a powerful tool to calculate exponents. It was used in [106] to determine exponents for φ³ theory in a variety of representations of the Lie group F4. While the three loop comparison in that article showed reasonable agreement with the various exponents calculated with the bootstrap, the four loop study of [107] improved the convergence. It would therefore be interesting, given the accuracy of the results in [106], to extend the F4 study of [107] to five loops. Aside from this particular symmetry of the cubic theory, an intriguing property of φ³ theory with a bi-adjoint symmetry was observed in [108]. For any Lie group it transpires that this theory is asymptotically free owing to the two loop coefficient of the β function; the one loop term is identically zero. One feature of the four loop result was the appearance of higher order group Casimirs at three loops, one order earlier than in four dimensional non-abelian gauge theories. So the bi-adjoint theory offers a window into the type of group Casimirs that could appear in six loop gauge theory β functions such as that of QCD. Although such problems are worth pursuing, the more interesting extension of our current work would clearly be to six loops. This would provide a higher level of precision with which to compare future numerical studies of the discrete spin models in the same universality class. The graphical functions method used here, which was successful in extending φ⁴ theory to seven loops [32,57], should be regarded as the starting point for achieving the six loop renormalization of φ³ theory.