Universal Renormalons in Principal Chiral Models

Perturbative expansions in many physical systems yield 'only' asymptotic series, which are not even Borel resummable. Interestingly, the corresponding ambiguities point to nonperturbative physics. We numerically verify this renormalon mechanism for the first time in two-dimensional sigma models that, like four-dimensional gauge theories, are asymptotically free and generate a strong scale through dimensional transmutation. We perturbatively expand the energy through a numerical version of stochastic quantization. In contrast to the first energy coefficients, the high-order coefficients are independent of the rank of the model. Technically, they require a sophisticated analysis of finite-volume effects and of the continuum limit of the discretized model. Although the individual coefficients do not (yet) grow factorially, but rather decrease strongly, the ratios of consecutive coefficients clearly obey the renormalon asymptotics.

Perturbation theory, the expansion in a small parameter, is a straightforward approach to many physical systems, both classical and quantum. The high-order behaviour of the perturbative expansion, however, can be very intricate: the series may not be convergent, but merely asymptotic. This situation already occurs for relatively simple quantum mechanical systems, such as the anharmonic oscillator expanded around zero anharmonicity, and even for ordinary integrals (as toy models for path integrals) [1]. In quantum field theories (QFTs) the asymptotic nature of an expansion in powers of the coupling constant α hints at a physical instability, as pointed out by Dyson for Quantum Electrodynamics [2].
The typical dependence of an observable E on α contains factorially growing coefficients,

E = Σ_{n=0}^∞ c_n α^n (α → 0) with c_n ∼ γ^n n^κ n! for n → ∞. (1)

Straightforward minimization and the application of Stirling's formula reveal that the summand with the smallest magnitude comes at n* ≈ 1/(γα), and its value exp(−1/(γα)) is a proxy for the limited accuracy of the asymptotic expansion. If the coefficients c_n have alternating signs, the series (1) can be Borel resummed, and in this way a unique value can be assigned to the observable E. For sign-coherent series, on the other hand, Borel resummation comes with an (imaginary) ambiguity proportional, at leading order, to exp(−1/(γα)). In practical calculations this is often not an issue, thanks to the smallness of the expansion parameter α. However, in asymptotically free systems, such as four-dimensional non-abelian gauge theories and two-dimensional nonlinear sigma models, the long-range regime is always strongly coupled and the ambiguity can severely limit the ability to make physically meaningful statements about E. These effects go under the name of renormalons [3]. The operator product expansion (OPE) offers another view on them, as it involves nonperturbative condensates and strong scales with a similar dependence on α (see, e.g., [3] and Eq. (7) below).
From experience with various models, the emergence of sign-coherent asymptotic expansions is connected to vacuum degeneracy, as labelled by topological quantum numbers. What makes this subject so fascinating is that the nonperturbative nature of the perturbative ambiguity seems to be connected to the system's nonperturbative classical tunnelling solutions with their typical factors of exp(−1/α). This concerns instantons as stable topological configurations and their superpositions, but also unstable saddles, as is the case in the models at hand [4]. There is mounting evidence from resurgence theory calculations that, taken together, information from asymptotic perturbative series and from topology leads to a cancellation of nonperturbative ambiguities and could potentially be used to construct a nonperturbative continuum formulation of QFTs [5][6][7][8][9]. So far, a rigorous mathematical proof of the resurgence picture does not exist for most theories of interest, and the calculations rely on certain assumptions (or special features of supersymmetric theories). While these assumptions are physically and mathematically well motivated, it is nevertheless important to check their validity. Moreover, most studies of the renormalon mechanism are based on the summation of a special class of Feynman diagrams. The only known ab-initio determination of high-order perturbative coefficients is provided through numerical simulations on space-time lattices. The method of choice is not the common Monte Carlo framework, but rather stochastic quantization (Langevin dynamics) [10] in its numerical version, combined with numerical perturbation theory into Numerical Stochastic Perturbation Theory (NSPT) [11,12]. It has the great advantage that its computational effort grows only quadratically with the expansion order, not factorially as in diagrammatic perturbation theory.
With this tool renormalons in four-dimensional SU(3) Yang-Mills theory have been clearly demonstrated in two observables: the lattice action (plaquette) [13,14] and the energy of static sources (Polyakov loop) [15]. High expansion orders (up to 35 in 1/β), extrapolations in volume and Langevin time, and other sophisticated lattice methods had to be used to improve on previous studies that could not find renormalons.
Very recently, quarks have been included in that framework [16].
We are not aware of any such calculations for theories other than Quantum Chromodynamics. To provide cross-checks of the universality of the renormalon picture and of resurgence it is important to have reliable first-principles results for a large variety of different theories. In this letter, we present first numerical results for the perturbative coefficients of the energy density of 1+1 dimensional principal chiral models PC(N), with special emphasis on the universality in the rank N. In these models, the degrees of freedom are SU(N) group valued fields, and the Euclidean action is nothing but the obvious kinetic term, S ∝ (1/g²) ∫ d²x Tr[∂_μ U†(x) ∂_μ U(x)]. Superficially this looks like the action of a free theory, but we emphasize that the constraint U ∈ SU(N) introduces couplings between the field components and makes the PC(N) models highly non-trivial. On lattices with spacing a the derivatives translate into nearest-neighbor interactions, and the lattice action reads

S = −βN Σ_{x,μ} Re Tr[U(x) U†(x + aμ̂)]. (2)

Like their O(N) [17] and CP(N−1) cousins, these sigma models are asymptotically free [18] and generate a mass and a strong scale through quantum fluctuations. In a statistical physics analogy, the energy density E is related to the β-derivative of the partition function, in our convention E ∝ (1/V) ∂ ln Z/∂β, where V stands for the number of lattice sites. Note that the energy density has mass dimension d = 2.
As βN is related to the inverse of g² (see Eq. (2)), both the fields and the observable are expanded in powers of 1/√β; for the energy density we write E = Σ_{n≥1} E_n β^{−n}, where we have immediately used that E only contains integer powers of 1/β [19]. The first few terms of this weak-coupling expansion are known analytically [20] and will be used as benchmarks for our numerical results. For more details on the expansion within NSPT we refer to the Supplemental Material.
To develop an expectation for the renormalon behavior of the energy expansion, we first of all notice that, although their homotopy groups are trivial, PC(N) models contain nonperturbative saddles [21] (unitons), which may cure the ambiguity of a sign-coherent perturbative expansion [4,7]. Secondly, we invoke from the large-N expansion [22] the two-loop relation between the lattice spacing (the inverse cut-off scale), the generated strong scale Λ_L (proportional to the mass) and the bare lattice coupling β [23],

a Λ_L ∝ β^{β_1/β_0²} exp(−β/β_0) [1 + O(1/β)].

The OPE relates the constant γ in Eq. (1) to the running of the coupling β = 1/α and the energy dimension of the observable E: since our observable has mass dimension two, its perturbative part should, at leading order, receive nonperturbative corrections of the form (aΛ_L)² ∝ exp(−2β/β_0). We therefore anticipate the energy coefficients to behave like

E_n ∼ (β_0/2)^n n^κ n! for n → ∞, (10)

and, applying the general arguments from above, the expansion coefficients E_n should start to grow around order

n* ≈ 2/β_0. (11)

Their ratios divided by the order,

r_n/(n+1) with r_n = E_{n+1}/E_n, (12)

should approach the constant β_0/2. Note that these leading-order statements should hold independently of the rank N. Three-loop corrections are related to the (regularization-dependent) beta function coefficient β_2, which contains O(1/N²) terms in the lattice scheme [24]. We apply NSPT to calculate the expansion coefficients E_n of the energy density on symmetric two-dimensional lattices with the same number L of sites in the spatial and the Euclidean time direction, V = L × L, and periodic boundary conditions in both of them. To study finite-size effects and the assumed universality of Eq. (12), we consider different lattice geometries and a variety of different ranks N. In particular, we calculate the expansion coefficients up to E_10 for N = 12 and even up to E_20 for N = 3, 4, 5, 6; see the Supplemental Material for details. The simulations for different N are completely independent.
In Langevin simulations the finite stochastic time step ε introduces a systematic error and it is necessary to perform the extrapolation ε → 0. For the numerical integration of the Langevin equation we utilize the Runge-Kutta algorithm from [15], which is exact up to terms proportional to ε². We use up to five different values of ε in our simulations and perform the ε → 0 limit by fitting a function linear in ε² to our data.
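Schematically, such an ε → 0 extrapolation is an ordinary least-squares fit linear in ε². The following sketch uses synthetic, noiseless data; the values 0.25 and 1.3 are invented for illustration:

```python
import numpy as np

# Hypothetical measurements E(eps) at several stochastic time steps eps,
# assumed to follow E(eps) = E0 + c*eps^2 (the integrator is exact to O(eps^2));
# the numbers 0.25 and 1.3 are invented for illustration.
eps = np.array([0.01, 0.02, 0.03, 0.05, 0.07])
E = 0.25 + 1.3 * eps**2

# fit a function linear in eps^2 and read off the eps -> 0 intercept
c_fit, E0_fit = np.polyfit(eps**2, E, 1)
```

With real (noisy) data the intercept E0_fit is the extrapolated ε = 0 value and carries the fit uncertainty.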
A remarkable feature of NSPT simulations is that neither the lattice spacing a nor the coupling β enter the calculations explicitly. All computations are done directly with the expansion coefficients. Therefore, it is not possible to assign a physical volume to our lattice and it is not straightforward to go to the infinite volume limit. Indeed, from OPE arguments very large finite size effects are expected, even on the largest lattices that are achievable in present day simulations [14,15]. Fortunately, the OPE enables us to infer the functional form of the finite volume dependence, which makes it possible to extrapolate our results to infinite volume.
For the results presented in this work we used the following equation to take finite-volume effects into account:

E_n(L) = E_n^∞ + F_n(L)/L², (13)

where the coefficients F_k are polynomials of order k in ln(L). The details of the infinite-volume extrapolation are somewhat lengthy and we defer a more detailed discussion of the extrapolation and its systematics to the Supplemental Material. Fig. 1 shows a synopsis of our numerical coefficients E_n for the PC(6) model, i.e., at fixed N. They perfectly match the analytically known formulas for E_{1,2,3} and have been determined very precisely over many orders of magnitude. This figure also demonstrates the importance of performing the extrapolations to ε = 0 and to infinite volume: at high orders the corrections to the expansion coefficients are extremely large and the finite-volume results are off by orders of magnitude. It is interesting to note that the coefficients E_n seem to fall off exponentially with n. Up to the expansion orders we consider, the asymptotic nature of the expansion is completely hidden by the large value of n* ≈ 50, only beyond which the coefficients grow factorially, see Eq. (11). With our current numerical setup it is not feasible to calculate expansion coefficients up to such high orders and thus to observe this growth of the E_n directly. This is at variance with four-dimensional gauge theories, where β_0 = 11 and for the plaquette one obtains a much smaller n* ≈ 4/11, such that those coefficients grow from the start [14].
However, the derivation of n* assumes that the asymptotic behavior of Eq. (10) sets in before the coefficients start to grow; thus the ratios r_n = E_{n+1}/E_n of consecutive coefficients should approach the constant in Eq. (12) already for n < n*. These ratios are therefore a more sensitive signal for renormalons.
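This point is easily illustrated with the model coefficients of Eq. (1) for κ = 0: the normalized ratios sit at their asymptotic value already at low orders, long before the coefficients themselves start to grow (γ = 0.04 below is an invented stand-in of roughly the relevant size, so that n* ≈ 1/γ is large):

```python
import math

gamma = 0.04   # invented stand-in; the coefficients only grow beyond n ~ 1/gamma
c = [gamma ** n * math.factorial(n) for n in range(30)]

# ratios of consecutive coefficients, divided by the order
ratios = [c[n + 1] / c[n] / (n + 1) for n in range(1, 25)]
# for this model the normalized ratios equal gamma although the c_n still decrease
```
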
To calculate the ratios r_n we first perform the extrapolation to ε = 0 and then use Eq. (13) to take finite-size effects into account. The results are plotted in Fig. 2. We find that the asymptotic behavior sets in around order n = 15, independently of the rank N, in accordance with the renormalon picture.
While our data clearly show the expected asymptotic behavior, the error bars on the ratios are relatively large. It is tempting to look for a plateau in the ratios r_n, perform a fit to a constant, and thereby obtain a better estimate of the asymptotic behavior of the expansion coefficients. The problem with this approach is that subsequent ratios r_n and r_{n+1} are strongly correlated, since they share a common coefficient. The simulations and fits for different rank N are, however, independent. In Fig. 3 we therefore combine the data for different ranks: for orders n ≳ 15, after the asymptotic behavior sets in, the ratios r_n should no longer depend on N, and combining data for different ranks is justified. The plots in Fig. 3 are in very good agreement with the prediction from Eq. (12), with relative errors smaller than 10%.
To summarize, we have determined the perturbative coefficients of the energy in particular two-dimensional sigma models by a suitable lattice technique, implicitly summing up factorially many diagrams, including dedicated continuum and infinite-volume extrapolations. Using the first few analytically known coefficients as benchmarks, we have determined up to 20 high-order coefficients which spread over many orders of magnitude. Their ratios (divided by the order) clearly approach a constant consistent with the renormalon picture. The latter is based mainly on the coupling dependence of the strong scale (and the mass dimension of the quantity), thus reflecting nonperturbative physics. Our ab-initio results validate the universality of the renormalon/resurgence picture, i.e., its independence of the rank of the sigma model. The method utilized in this study can easily be extended to twisted boundary conditions in one or even two directions, which ought to reduce the volume dependence. Likewise, asymptotic series for other observables and/or in other sigma models can be studied to shed more light on the resurgence picture.
Acknowledgments -We thank Gerald Dunne, Mithat Ünsal, and in particular Gunnar Bali for useful discussions. We are indebted to Jakob Simeth for his contributions to the NSPT code used for our simulations. The computations were performed on a local cluster at the University of Regensburg and on the Linux cluster of the LRZ in Garching. GNU parallel [26] was used to speed up the data analysis. We acknowledge support from DFG (Contract No. BR 2872/6-2, BR 2872/8-1). Open Access -The NSPT code used in this work [27] as well as the original data and the data analysis scripts [28] are available on GitHub.

NSPT for the Principal Chiral Model
The framework of NSPT applied in this work is not new, but it is not widely known outside the lattice community. For the convenience of the reader we therefore give a brief introduction to NSPT in this section. A more thorough exposition can be found in the referenced original works. NSPT was first developed in the context of QCD in [11,29] and a review can be found in [12].

Numerical Perturbation Theory
The idea of numerical perturbation theory is to formally perform a weak-coupling expansion of the lattice fields U in powers of β^{−1/2}, truncated at a fixed order M, where β is the lattice coupling. In this work we only consider expansions around the vacuum, where the leading term U_0 is given by the unit matrix on all lattice sites. Algebraic operations with these truncated series are straightforward: sums are taken coefficient-wise, (U + V)_k = U_k + V_k, and products follow the Cauchy rule truncated at order M, (UV)_k = Σ_{l=0}^{k} U_l V_{k−l}. Once addition and multiplication are defined, any analytic function of the fields U can be evaluated by inserting the field expansion into the power series of the function. The most expensive operation is the multiplication of fields, which requires O(M²) multiplications of coefficients [30]. The numerical cost of an NSPT simulation is consequently roughly proportional to M². Compared to diagrammatic perturbation theory, where the number of Feynman diagrams to be taken into account typically grows like O(M!), this is very efficient.
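A minimal sketch of this truncated-series algebra (with hypothetical 2×2 matrix coefficients and expansion around the unit matrix) could look as follows; the O(M²) cost is visible in the double loop of the Cauchy product:

```python
import numpy as np

M = 8   # truncation order

def series_add(U, V):
    """Coefficient-wise sum of truncated expansions U = sum_k U_k beta^{-k/2}."""
    return [u + v for u, v in zip(U, V)]

def series_mul(U, V):
    """Cauchy product truncated at order M: (UV)_k = sum_{l<=k} U_l V_{k-l}.
    This needs O(M^2) multiplications of coefficients."""
    return [sum(U[l] @ V[k - l] for l in range(k + 1)) for k in range(M + 1)]

# toy 2x2 matrix coefficients with unit leading term (expansion around the vacuum)
U = [np.eye(2)] + [0.1 * np.ones((2, 2)) for _ in range(M)]
W = series_mul(U, U)
```
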

Stochastic Quantization on the Lattice
An obvious precondition for the use of numerical perturbation theory is that all functions involved in the computation can be expanded in powers of β^{−1/2}. Most state-of-the-art lattice simulations use Monte Carlo methods based on the Metropolis-Hastings algorithm to sample the configuration space. Metropolis-Hastings has a big disadvantage from the point of view of numerical perturbation theory: it employs an accept-reject step, where a proposed configuration update is accepted with a certain probability depending on the change in the action. An accept-reject step cannot be formulated as a function of powers of β^{−1/2} and is therefore not suitable for numerical perturbation theory.
One alternative to Metropolis-Hastings type algorithms is Stochastic Quantization [31], which is based on the Langevin equation. Stochastic Quantization introduces a new dimension, usually called the stochastic time τ. The evolution of the fields in τ is governed by the Langevin equation; for a scalar field φ with action S[φ] it reads

∂φ(x, τ)/∂τ = −δS[φ]/δφ(x, τ) + η(x, τ), (17)

where η(x, τ) is a Gaussian noise term with the properties

⟨η(x, τ)⟩_η = 0, ⟨η(x, τ) η(x′, τ′)⟩_η = 2 δ(x − x′) δ(τ − τ′). (19)

In the equations above ⟨· · ·⟩_η denotes the average over the noise η. For numerical calculations the partial differential equation (17) has to be replaced by a finite-difference equation. Making τ discrete with a step size dτ = ε leads to

φ(x, τ_{n+1}) = φ(x, τ_n) − F_x[φ, η],

where −F_x[φ, η] is a discretization of the τ-integral of the right-hand side of Eq. (17). A choice with an error of order ε is the Euler method [32],

F_x[φ, η] = ε ∂S[φ]/∂φ(x) − √ε η(x, τ_n), (21)

with the Gaussian noise η, which has zero mean and obeys the discretized version of Eq. (19), ⟨η(x_i, τ_n) η(x_j, τ_m)⟩_η = 2 δ_{x_i,x_j} δ_{τ_n,τ_m}.
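As a toy example of the discretized Langevin equation (not the field-theory code used in this work), consider a single-site Gaussian action S = m²φ²/2, for which the exact distribution exp(−S) gives ⟨φ²⟩ = 1/m²; an Euler update with noise variance 2ε reproduces this up to the O(ε) bias discussed below:

```python
import numpy as np

def langevin_phi2(m2=1.0, eps=0.05, n_steps=100_000, seed=0):
    """Euler discretization of the Langevin equation for a single-site
    Gaussian action S = m2*phi^2/2: phi -> phi - eps*m2*phi + sqrt(2*eps)*eta,
    with eta a standard normal, so the discrete noise has variance 2*eps."""
    rng = np.random.default_rng(seed)
    noise = np.sqrt(2 * eps) * rng.standard_normal(n_steps)
    phi, phi2_sum, count = 0.0, 0.0, 0
    for n in range(n_steps):
        phi += -eps * m2 * phi + noise[n]
        if n >= 1000:                 # discard thermalization
            phi2_sum += phi * phi
            count += 1
    return phi2_sum / count

phi2 = langevin_phi2()   # the exact value would be <phi^2> = 1/m2 = 1
```

At finite ε the stationary distribution is slightly broader than exp(−S), which is the O(ε) systematic error that makes the ε → 0 extrapolation necessary.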
It can be shown that for sufficiently large τ_n the field configurations {φ(x, τ_n)} produced by the discretized Langevin equation with F_x given by Eq. (21) are distributed with probability density exp(−S̃[φ]) [33], where the equilibrium action S̃ differs from the continuum action S by terms of O(ε), S̃ = S + O(ε). Evidently, discrete Langevin simulations suffer from a systematic error, since they sample configurations with respect to a probability distribution that differs from the desired one. This is a disadvantage of the Langevin ansatz and makes it necessary to run simulations for several different values of ε and to extrapolate the results to ε = 0. On the other hand, the big advantage of stochastic quantization from the point of view of numerical perturbation theory is the absence of an accept-reject step. Stochastic quantization in combination with numerical perturbation theory goes by the name of NSPT and can be used to numerically compute expansion coefficients on the lattice.
For completeness we mention a recently developed alternative formulation of NSPT, where the Langevin equation is replaced by a stochastic molecular dynamics (SMD) algorithm [34,35]. SMD-based NSPT has potential advantages over the Langevin formulation, such as smaller autocorrelation times. In this work we stick to the established Langevin method, which has proven itself in large-scale NSPT simulations in the context of QCD.

Discrete Langevin for Constrained Systems
So far we have only discussed the simple case of an unconstrained scalar field φ. The generalization to fields which are elements of a Lie group is straightforward [33,36]. For U ∈ SU(N) the discretized Langevin equation reads

U(x, τ_{n+1}) = exp(−i Σ_a T^a F^a_x[U, η]) U(x, τ_n), (24)

where the T^a stand for the generators of SU(N) (normalized to Tr(T^a T^b) = δ^{ab}/2) and the generalization of the Euler term is given by

F^a_x[U, η] = ε ∇^a_x S[U] + √ε η^a(x, τ_n), (25)

with the noise η = Σ_a η^a T^a and the Lie derivative

∇^a_x f(U) = ∂/∂ω f(e^{iωT^a} U(x)) |_{ω=0}.

Note that ∇^a_x acts only on the field U(x) at site x. The noise η^a(x, τ_n) is Gaussian with zero mean and variance

⟨η^a(x_i, τ_n) η^b(x_j, τ_m)⟩_η = 2 δ^{ab} δ_{x_i,x_j} δ_{τ_n,τ_m}.

For the action (2) of the PC(N) model the Lie derivative can be evaluated explicitly in terms of the nearest neighbors of U(x). Exploiting the completeness property of the SU(N) generators, the sum over Lie derivatives in the exponent of Eq. (24) can be performed analytically: Σ_a ∇^a_x S[U] T^a is an anti-Hermitian traceless matrix, and consequently exp(i Σ_a ∇^a_x S[U] T^a) ∈ SU(N), as it should be to ensure that the updated field is also an element of the group.
The use of the Euler integrator (25) again leads to systematic errors of order O(ε). Runge-Kutta type algorithms for discrete Langevin updates of SU(N)-valued fields have been discussed in the literature [15,33,37]. In this work we employ the algorithm from [15], which is of order O(ε²). We find that the higher numerical cost in comparison to the Euler method is offset by the possibility of using larger values of ε in the simulations.
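The group-theoretical point, that exponentiating a traceless anti-Hermitian drift keeps the field in SU(N), can be checked directly. The sketch below (with a random matrix standing in for the actual drift ε∇S + √ε η) projects onto the tangent space and exponentiates:

```python
import numpy as np

def project_drift(M):
    """Project an arbitrary complex matrix onto the traceless
    anti-Hermitian matrices (the tangent space of SU(N) at 1)."""
    A = 0.5 * (M - M.conj().T)
    return A - np.trace(A) / M.shape[0] * np.eye(M.shape[0])

def expm_antiherm(A):
    """Matrix exponential of an anti-Hermitian A via the
    eigendecomposition of the Hermitian matrix H = -iA."""
    w, V = np.linalg.eigh(-1j * A)
    return (V * np.exp(1j * w)) @ V.conj().T

rng = np.random.default_rng(1)
N = 3
U = np.eye(N, dtype=complex)                    # cold start
F = project_drift(rng.standard_normal((N, N)) +
                  1j * rng.standard_normal((N, N)))
U_new = expm_antiherm(0.1 * F) @ U              # one schematic update step
```

Since F is anti-Hermitian, exp(0.1 F) is unitary, and since F is traceless, det(exp(0.1 F)) = exp(0.1 Tr F) = 1, so U_new stays in SU(N) exactly (up to round-off).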
Having defined a discrete Langevin equation for the fields U, it seems straightforward to apply numerical perturbation theory to this framework. There is, however, a subtle issue. Plugging the expansion (14) into Eq. (28), it becomes clear that the force starts with terms of order β^{1/2}, whereas the noise term in Eq. (21) is of order O(1) and the fields are expansions in β^{−1/2} [38]. Evidently, this leads to inconsistencies. The solution is to rescale the time step, ε → ε′ := βε. By absorbing β into the time step, the force term starts at order β^{−1/2}. Note that the noise is then also of order β^{−1/2} and that the leading-order term of the fields is not changed during a Langevin update. Since the leading-order term is the point around which the fields are expanded, in our case the vacuum, this is what we should expect.

Lattice Setups
For our simulations we use only symmetric lattices with V = L × L and a lattice spacing a that is the same in both directions. All simulations are performed with at least three different values of ε for every lattice geometry considered; see Table I for details of the lattice setups. As discussed in the previous section, the noise enters the NSPT Langevin equation only at order β^{−1/2}. It takes several Langevin sweeps before the noise can affect the higher-order terms of the field expansion. Moreover, from Eqs. (15) and (16) it is obvious that higher-order coefficients have no influence on lower-order terms in NSPT calculations. As long as the field coefficients of a given order l are not thermalized, it therefore makes no sense to update terms of order m > l. The initial condition of our simulations corresponds to a "cold start", where all fields are equal to 1. To initialize the higher orders we start with an expansion up to order two (in β^{−1/2}) and perform n_init = 500 Langevin sweeps. Then we increase the expansion order by one and run the simulation for another n_init time steps. This is repeated until we have reached the desired expansion order M, after which we perform a final n_init + 1/ε initialization sweeps. We emphasize that our lattice fields only fulfill the constraint U ∈ SU(N) up to the current expansion order. When we go to a higher order, the unitarization is no longer valid and we have to unitarize the fields up to the new order. It is not straightforward to enforce U ∈ SU(N) directly, because it is in general not easy to see how the constraints det(U) = 1 and U†U = 1 translate into conditions for the expansion coefficients U_k. A well-known trick is to work with fields A ∈ su(N) from the algebra instead: any element of SU(N) can be written as U = exp(iA) for a suitable A.
For U ∈ SU(N) the field A = Σ_k A_k β^{−k/2} has to fulfill A = A† and Tr(A) = 0, which can be achieved by making every A_k individually Hermitian and traceless. To go back and forth between the fields U and A one utilizes the series expansions of ln(1 + x) and exp(x), respectively. Taking advantage of the expansion around the vacuum, which fixes U_0 = 1 and A_0 = 0, considerably simplifies the calculations. By taking the route over the auxiliary fields A, normalizing U becomes straightforward, albeit numerically expensive. Once the lattice is initialized we perform n_up = 500/ε Langevin sweeps with measurements every n_m = 100 sweeps. Note that we chose the same stochastic time duration for all our simulations, since we expect the autocorrelation to depend on the stochastic time, not on the number of discrete Langevin time steps.
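A sketch of this unitarization strategy for truncated expansions (with toy values M = 4, N = 2, not the production code): starting from Hermitian traceless coefficients A_k with A_0 = 0, the truncated exponential of iA yields field coefficients that satisfy U†U = 1 order by order:

```python
import numpy as np

M, N = 4, 2   # toy truncation order and rank

def series_mul(A, B):
    """Cauchy product of truncated expansions, keeping orders <= M."""
    return [sum(A[l] @ B[k - l] for l in range(k + 1)) for k in range(M + 1)]

def series_exp(A):
    """exp of a truncated expansion with vanishing leading term A_0 = 0;
    since A starts at order one, the exponential series terminates at j = M."""
    U = [np.eye(N, dtype=complex)] + [np.zeros((N, N), complex)] * M
    P = list(U)                                 # running term A^j / j!
    for j in range(1, M + 1):
        P = [p / j for p in series_mul(P, A)]
        U = [u + p for u, p in zip(U, P)]
    return U

def herm_traceless(H):
    """Make a matrix Hermitian and traceless, as required for the A_k."""
    H = 0.5 * (H + H.conj().T)
    return H - np.trace(H) / N * np.eye(N)

rng = np.random.default_rng(2)
A = [np.zeros((N, N), complex)] + [
    herm_traceless(rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
    for _ in range(M)]
U = series_exp([1j * a for a in A])             # U = exp(iA), order by order
```

Because (exp(iA))† exp(iA) = 1 holds as a formal power series, the truncated coefficients of U†U reproduce the identity at every kept order.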

Constraint Violation in Langevin Runs
In practical calculations it is important to keep in mind that the NSPT algorithm described in the last section respects the constraint U ∈ SU(N) only in exact arithmetic. Round-off errors can lead to a violation of unitarity.
During our runs we periodically check whether the fields still fulfill the constraints. Calculating A every time would be very expensive and considerably slow down the simulations. We therefore check, order by order in the expansion, whether the fields U are still unitary. To this end we calculate

Δ_k = ⟨ ‖V_k‖_F ⟩,

with V_k the expansion coefficients of U†U − 1, where ‖·‖_F stands for the Frobenius matrix norm. The expectation value Δ_k (averaged over all lattice sites) is taken as an estimate of the violation of the constraint U ∈ SU(N). Monitoring the Langevin histories shows that the initialization is in general not sufficient to thermalize the highest-order coefficients of the energy density. We do not observe any significant violation of unitarity for the lattices with V ≥ 32 × 32, but the small lattices with V ≲ 20 × 20 show large Δ_k even for moderate Langevin times τ. The deviation from unitarity strongly depends on the lattice volume and on the rank N. In general, all other parameters being equal, Δ_k is larger for higher rank N and higher expansion order k. For the final computation of the expectation values E_n we make sure to choose only data from a stochastic time interval where the simulation is equilibrated and unitarity violations are small. An example for N = 6 is shown in Fig. 4. Remember that the E_n are coefficients of an expansion in β^{−1}, whereas the index on the Δ_k corresponds to an expansion in β^{−1/2}. To be consistent, for a given lattice size we choose the same τ-interval for all orders and all parameter sets. In Table II we summarize which data we used in the final fits. For convenience we call this data set setup I from now on.
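For a single matrix the analogous unitarity check is a one-liner; the example below fabricates round-off by casting an exact unitary matrix to single precision:

```python
import numpy as np

def unitarity_violation(U):
    """Frobenius norm ||U^dag U - 1||_F, a single-matrix analogue of Delta_k."""
    return np.linalg.norm(U.conj().T @ U - np.eye(U.shape[0]))

rng = np.random.default_rng(3)
# an exact unitary matrix from a QR decomposition
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
delta_exact   = unitarity_violation(Q)
delta_rounded = unitarity_violation(Q.astype(np.complex64))  # mimic round-off
```

The violation jumps from machine precision to the single-precision rounding level, illustrating how accumulated floating-point errors show up in Δ_k.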
The choice of which data to include is somewhat arbitrary and one should verify that it does not influence the final results. As a cross-check we consider two additional data sets: the first consists of all the setups in Table I, and for the second one we use the configurations of Table II. In both cases we take data from the stochastic time interval τ ∈ [250, 500]. For later use we will refer to these as setup II and setup III, respectively. Remarkably, as will be discussed in more detail in the next section, within uncertainties the results for the ratios r_n for all three data sets overlap. The reason is probably that for large lattices with V ≳ 24 × 24, which seem to carry a higher weight in the infinite-volume extrapolation, unitarity violations are very small.
The Runge-Kutta integrator employed in the simulations is of second order. The extrapolation to ε = 0 can therefore be performed by fitting a function linear in ε² to our data. The systematic error of neglecting higher-order terms in the fit function is estimated from the size of the first neglected term at ε = ε̄, where ε̄ is the largest value of ε used for the given lattice setup that is not larger than the median of the available ε values. Results for a run with N = 3 and L = 32 are shown in Fig. 5. In our estimates of the statistical uncertainties of the expansion coefficients we take the autocorrelation into account. The integrated autocorrelation time is estimated using the automatic windowing algorithm from [39, Appendix C] with c = 5. Where available, we compare the NSPT coefficients with results from analytic perturbation theory and find good agreement. The final uncertainty for the expansion coefficients extrapolated to ε = 0 is obtained by adding the uncertainties from the fit and the systematics in quadrature.

Finite Volume Effects
The extrapolation to infinite volume is based on OPE methods and we closely follow reference [14]. The energy density is symmetric under the exchange a ↔ −a, and the same is true for our lattice action. Additionally, we only consider symmetric lattices with equal extent in both directions. It follows that the finite-volume effects can only depend on even powers of 1/L and can be parametrized as

E_n(L) = E_n^∞ + F_n(L)/L² + O(1/L⁴),

where E_n^∞ is the infinite-volume result.

OPE of the Energy Density
The lattice regularization introduces two distinct scales: the inverse lattice spacing 1/a and the inverse linear lattice extent 1/(La). In the infinite-volume limit La ≫ a the two scales are well separated and it makes sense to apply an OPE. The Wilson coefficients then depend on the hard modes of scale 1/a, whereas the soft modes of scale 1/(La) can be described in terms of expectation values of local operators. Formally we can then expand the expectation value of the perturbative energy density as

E^pert(L) = E^pert(∞) + C(α(1/a)) ⟨O_2⟩ + …, (37)

where E^pert(∞) = Σ_n E_n^∞ α^n with the infinite-volume expansion coefficients E_n^∞. The expectation value ⟨O_2⟩ has to be proportional to 1/(La)² on dimensional grounds and we write it as

⟨O_2⟩ = 1/(La)² Σ_k f_k α^k(1/(La)).

Finally, absorbing a constant factor into the definition of the f_k, the Wilson coefficient is

C(α(1/a)) = 1 + Σ_{i≥1} c_i α^i(1/a).

With the help of the equations above we can write E^pert(L) in two different ways:

E^pert(L) = Σ_n [E_n^∞ + F_n(L)/L²] α^n(1/a) (41)

and

E^pert(L) = Σ_n E_n^∞ α^n(1/a) + C(α(1/a))/(La)² Σ_k f_k α^k(1/(La)). (42)

Note that we are using a different index convention than [14] here. We are interested in calculating the coefficients E_n^∞ from the finite-volume results E_n(L). To this end we can use the β-function of the PC(N) model [24,25] to expand α^k(1/(La)) around α^k(1/a) in Eq. (42). Comparing (41) and (42) order by order in α(1/a) finally yields expressions for the functions F_n(L) in terms of the coefficients f_k, c_i and the β-function. In any volume the leading order of the expansion, the expansion point, is fixed and therefore f_0 = F_0(L) = 0. The first few non-trivial functions read (in lattice units, a = 1)

F_1(L) = f_1 and F_2(L) = f_2 + c_1 f_1 + β_0 f_1 ln(L).

In general F_k(L) is a polynomial of degree (k−1) in ln(L) with coefficients depending on {f_l}_{l≤k}, {c_j}_{j≤k−1} and on the β-function (via the coefficients β_0, β_1, …). A simple closed expression to generate F_k(L) for given k is not known. We use SymPy [40] to explicitly compute the expansion of Eq. (42) up to order α^20, from which the F_k(L) can be read off.
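The bookkeeping behind the F_k(L) can be sketched with SymPy at one loop (the paper's actual computation uses the two-loop β-function and runs to order α^20): expanding the coupling at the soft scale, α(1/(La)) = α/(1 − β_0 α ln L), in powers of α(1/a) produces the logarithms that build up the polynomials in ln(L):

```python
import sympy as sp

alpha, b0, lnL = sp.symbols('alpha beta0 lnL', positive=True)

# one-loop running: alpha at the soft scale 1/(L*a) expressed through
# alpha at the hard scale 1/a (a simplified stand-in for the two-loop case)
alpha_soft = alpha / (1 - b0 * alpha * lnL)

# expand in powers of alpha(1/a); the coefficients are polynomials in ln L
series = sp.series(alpha_soft, alpha, 0, 4).removeO().expand()
coeff2 = series.coeff(alpha, 2)   # the first logarithm, beta0 * lnL
```

Multiplying such expansions by the f_k and the Wilson coefficient and collecting powers of α(1/a) reproduces, order by order, the logarithmic polynomials used in the fits.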

Fitting the Volume Dependence
Knowing the functional form of the finite-volume effects, we can perform fits to our data to extract the infinite-volume coefficients. In its most generic form the fit function we use is given by

E_n(L) = E_n^∞ + F_n(L)/L²,

where F_n depends on the unknown fit parameters {f_k} and {c_i}.
The f_k and c_i couple finite-size effects across different expansion orders. We follow the ansatz of [14] and perform a simultaneous fit to all expansion coefficients. In all our fits we neglect higher-order terms in the β-function and set β_{i>1} = 0. The leading coefficients are set to their known (and regularization-independent) values. The systematic error of truncating the β-function is estimated by performing separate fits where β_1 is also set to zero. In our fits we observe a behavior that has also been noticed in [14]: in the functions F_n the terms containing c_i f_{j−i} for fixed j and different i are hard to distinguish in the fitting procedure. The reason can be understood by looking at Eq. (42): the running of the terms c_i f_{j−i} α^i(1/a) α^{j−i}(1/(La)) is very similar for fixed j, especially if j is small. This introduces strong correlations between such terms in the fit and ultimately leads to large uncertainties in the infinite-volume expansion coefficients. Moreover, when we include systematic errors for the ε → 0 extrapolation in the uncertainties for E_n(L), the fits yield unrealistically small χ² values. Our estimate for the systematic errors is probably too conservative and does not put strong constraints on the fit parameters. As a cross-check we repeat the fits with only the statistical uncertainties included.
The vastly different scales of the E_n at different expansion orders make it hard to compare the coefficients directly. Instead, we consider the ratios r_n obtained from the infinite-volume coefficients E_n^∞ of the fits. Uncertainties for the ratios are calculated with Gaussian error propagation from the uncertainties in E_n^∞. The plots in Figure 6 clearly show that we get unacceptably large uncertainties if the coefficients c_i are included in our fits. More data, and in particular data for much larger lattice volumes, would be needed to resolve the differences between the f_k c_i terms.
Since we cannot resolve the c_i anyhow, we leave them out of the fitting procedure entirely. Setting c_i = 0 drastically reduces the number of parameters in our fit function, while keeping its general functional form, a polynomial in ln(L), intact. This works astonishingly well and the results are plotted in Figure 7. Again we find very small χ² values of order 10⁻¹ for the fits with the systematic errors included. With the exception of the fits for N = 6 and N = 12, the χ² values are of order 1 even if we ignore the systematic errors. All four fits yield, with overlapping error bars, the same final results for the ratios. Moreover, the independent fits for different N lie on top of each other after the asymptotic behavior sets in at n ≈ 15.
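With the c_i set to zero the fit model for a single expansion order is linear in its parameters, so the extraction of E_n^∞ reduces to a least-squares solve. A sketch with synthetic data (the values E_inf = 0.7, p = 2.0, q = −0.5 and the lattice sizes are invented for illustration):

```python
import numpy as np

# synthetic data for one expansion order, following the c_i = 0 ansatz
# E(L) = E_inf + (p + q*ln L)/L^2 with invented parameter values
E_inf, p, q = 0.7, 2.0, -0.5
L = np.array([12., 16., 20., 24., 32., 40., 48.])
E = E_inf + (p + q * np.log(L)) / L**2

# the model is linear in (E_inf, p, q), so a least-squares solve suffices
X = np.column_stack([np.ones_like(L), 1 / L**2, np.log(L) / L**2])
params, *_ = np.linalg.lstsq(X, E, rcond=None)
```

The first fitted parameter is the infinite-volume coefficient; in the paper's setting one such polynomial in ln(L) per order is fitted simultaneously across all orders, with parameters shared between them.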
The results plotted in Figs. 6 and 7 are obtained with the data from setup I. It is essential to check how strongly the choice of the τ-interval and of the lattice configurations used for the fits influences the final results. In Figs. 8 and 9 we show the results for the ratios obtained with setup II and setup III, respectively. Finally, in Figure 10 we show the final results after averaging over the rank N, by fitting a constant to r_n(N) at fixed n, for all the setups and fit variants we considered. There is very good agreement between all results. We can therefore conclude that our extrapolation to infinite volume is very robust and that the systematics from neglecting higher-order terms in the β-function are negligible. For the results in the main part of this work we use the fits obtained with finite β_1 and with systematic errors for the ε → 0 extrapolation. We do not include systematic error estimates for the truncation of the β-function, since setting β_1 = 0 in the infinite-volume extrapolation does not change the results within uncertainties.