Newtonian Binding from Lattice Quantum Gravity

We study scalar fields propagating on Euclidean dynamical triangulations (EDT). In this work we study the interaction of two scalar particles, and we show that in the appropriate limit we recover an interaction compatible with Newton's gravitational potential in four dimensions. Working in the quenched approximation, we calculate the binding energy of a two-particle bound state, and we study its dependence on the constituent particle mass in the non-relativistic limit. We find a binding energy compatible with what one expects for the ground state energy by solving the Schrödinger equation for Newton's potential. Agreement with this expectation is obtained in the infinite-volume, continuum limit of the lattice calculation, providing non-trivial evidence that EDT is in fact a theory of gravity in four dimensions. Furthermore, this result allows us to determine the lattice spacing within an EDT calculation for the first time, and we find that the various lattice spacings are smaller than the Planck length, suggesting that we can achieve a separation of scales and that there is no obstacle to taking a continuum limit. This lends further support to the asymptotic safety scenario for gravity.


I. INTRODUCTION
Quantum gravity is one of the great outstanding problems of theoretical physics. One possible approach to obtaining a consistent, predictive theory is the asymptotic safety scenario of Weinberg [1]. It is well known that general relativity is not perturbatively renormalizable [2,3]. Although a low-energy effective theory can be formulated [4,5], it comes with an infinite number of unknown couplings and is thus limited in terms of its predictive power. Asymptotic safety rests on the hope that there is a strongly coupled ultraviolet fixed point, making the theory effectively renormalizable nonperturbatively, with only a finite number of parameters needing to be specified from experiment. One would also hope that the number of relevant parameters needing to be specified from experiment is small, since a theory with few input parameters is more predictive than one with many. If asymptotic safety is realized for gravity, nonperturbative methods will be crucial to convincingly establish this and to explore the detailed properties of the theory.
Evidence that the asymptotic safety scenario is realized for gravity comes mainly from the lattice [6][7][8][9] and from the functional renormalization group [10][11][12], which was pioneered for gravity in [13] (see, e.g., [14][15][16][17][18][19][20][21][22] for introductions and reviews). This paper follows up on previous work [9] using Euclidean dynamical triangulations (EDT), which is one of the original approaches to quantum gravity on the lattice [23][24][25]. EDT is a discretization of Euclidean quantum gravity, where the path-integral is given by a sum over geometries, weighted by the Einstein-Hilbert action plus the action of the matter sector. In a lattice formulation of an asymptotically safe theory, the ultraviolet fixed point would appear as a continuous phase transition, the approach to which defines a continuum limit. This is the first test that EDT must pass. In addition to this, the theory must reproduce classical general relativity in the appropriate limit. If EDT could pass these two tests, it would be a candidate for an ultraviolet-complete theory of four dimensional quantum gravity.
We briefly review the evidence so far that EDT may realize the asymptotic safety scenario for gravity. Ref. [9] introduced a fine tuning of the exponent of a local measure term, showing that the tuning of this parameter is necessary in order to recover semiclassical physics. Although the local measure term was first introduced some time ago in Ref. [26], the nice features of this tuning have only been appreciated recently. Once the tuning is done, four-dimensional geometries are recovered that resemble Euclidean de Sitter space. There also appears to be no obstacle to taking the continuum limit by following the first order line to a possible critical endpoint, and ensembles following this procedure were generated at a number of different lattice spacings. The global Hausdorff dimension was measured using finite-volume scaling and shown to be close to four [9]. The spectral dimension, which is a fractal dimension defined by a diffusion process, varies with distance scale and approaches a value close to four at long distances. The variation of the spectral dimension with distance had also been found earlier in other approaches [27][28][29]. The average over geometries in Ref. [9] gives a result that is close to that of Euclidean de Sitter space, and the quantitative agreement with the classical solution gets better as the proposed continuum limit is approached. The agreement between the classical solution and the lattice data is actually the worst at long distances, but improves as the lattice spacing is reduced. This might seem surprising, since it is typically the short-distance behavior that is modified by discretization effects, but this type of effect on long-distance behavior is typical when a symmetry of a theory is broken by the regulator, for example by the finite lattice spacing in the case of lattice regularization. Then a fine-tuning is needed to approximately restore the symmetry at finite lattice spacing. 
Residual symmetry-breaking effects appear at finite lattice spacing and can modify the physics that depends on the symmetry. The correct long-distance physics is then only recovered when the symmetry is restored in the continuum limit. An example of this is the Wilson fermion formulation of lattice quantum chromodynamics, where the lattice regulator breaks chiral symmetry. There a fine-tuning is required to restore chiral symmetry, and even then, at finite lattice spacing the chiral symmetry breaking leads to distortions of the pion sector, which contains the lightest physical states of the theory. Ref. [9] argued by analogy to the Wilson fermion case that the symmetry that is broken by dynamical triangulations is continuum diffeomorphism invariance.
Further evidence that dynamical triangulations recovers the correct long-distance limit was given in Ref. [30], where it was shown how to incorporate Kähler-Dirac fermions [31]. This approach generalizes the staggered fermion formulation to the random lattices of dynamical triangulations without the need to introduce vielbeins or spin connections. It is well known that the Kähler-Dirac action reduces to four copies of Dirac fermions in the flat-space, continuum limit [32]. Ref. [30] found evidence that this is the case for Kähler-Dirac fermions on dynamical triangulations in the large-volume (small curvature) limit, but only if the continuum limit is also taken. This is seen in the approximate four-fold degeneracy in the low-lying eigenvalues of the Kähler-Dirac matrix, and in the degeneracy of scalar bound states in the continuum limit. The four-fold degeneracy is lifted by lattice effects in a similar manner to what is found with staggered fermions in lattice quantum chromodynamics (QCD) [33], but as in lattice QCD, the degeneracy appears to be restored in the continuum limit. The evidence that these discretization effects vanish suggests that continuum Kähler-Dirac fermions in four dimensions are recovered at that point. An additional advantage of Kähler-Dirac fermions is that they possess an exact U(1) symmetry, which is related to continuum chiral symmetry. A study of fermion bilinear condensates provides strong evidence that this U(1) symmetry is not spontaneously broken at the Planck scale, implying that fermion bound states do not acquire unacceptably large masses due to chiral symmetry breaking. These results for Kähler-Dirac fermions in EDT are highly non-trivial and provide further evidence for the asymptotic safety scenario for gravity and matter.
In this work we continue our investigation of matter fields living on dynamical triangulations, this time focusing on scalar fields. We follow the work of de Bakker and Smit [34] in our implementation of scalar fields. They looked at scalars propagating on EDT backgrounds (without the inclusion of a local measure term) and the binding energy of bound states of scalar particles. They found clear evidence of gravitational binding, such that there was an attractive force between the scalars, but they were not able to make contact with the Newtonian limit (see [35] for recent work attempting to understand the systematic errors associated with the original de Bakker, Smit analysis). In the original reference [34], these authors outlined a prescription for recovering the Newtonian limit. In this work we follow their prescription, using our approach to taking the continuum limit and our general strategy, which has been successful for other observables. We confirm the finding that there is an attractive force between scalar particles, and after taking the continuum, infinite volume limit, we find results compatible with Newtonian gravity in the non-relativistic limit. We can therefore use Newton's law to infer a value of Newton's constant G, which allows us to convert lattice units into units of the Planck length. We find that our lattice spacings are smaller than the Planck length and that for our finest lattice spacings we are starting to see a separation of scales, such that the lattice spacing is starting to become much smaller than the Planck scale. This provides evidence that not only does the formulation recover the correct long-distance physics, but also that there is no barrier to taking the continuum limit. This paper is organized as follows: Section II reviews the EDT formulation and the details of the lattice simulations. 
Section III reviews how to add scalar fields to EDT and how to compute the scalar propagator and the binding energy of a bound state of two scalar particles. Section IV presents our numerical results for the binding energy and the analysis needed to recover the form of the interaction and Newton's constant. We conclude in Section V. Finally, an appendix gives some details of the fitter used in the analysis.

II. EUCLIDEAN DYNAMICAL TRIANGULATIONS
A. The model

In the continuum, for fixed global topology, the four-dimensional Euclidean quantum gravity partition function is given by the path integral over all geometries, weighted by the Einstein-Hilbert action and the action for the matter sector,

  Z_E = ∫ Dg Dφ e^{−S_EH[g] − S_m[g,φ]},   (1)

where the Euclidean Einstein-Hilbert action is

  S_EH = −(1/16πG) ∫ d⁴x √g (R − 2Λ),   (2)

with R the curvature scalar, Λ the cosmological constant, and G Newton's constant. We adopt for the matter sector a real, non-interacting massive scalar field minimally coupled to gravity,

  S_m = (1/2) ∫ d⁴x √g (g^{μν} ∂_μφ ∂_νφ + m_0² φ²),   (3)

with bare mass m_0 for the scalar field. The partition function for lattice quantum gravity that we use in this work is that of Euclidean dynamical triangulations, where the path integral at finite lattice spacing is approximated by the sum over all four-geometries constructed by gluing together four-dimensional, equilateral simplices. It is given by [23,36]

  Z_E = Σ_T (1/C_T) [ Π_{j=1}^{N_2} O(t_j)^β ] e^{−S_ER},   (4)

where the factor C_T divides out equivalent ways of labeling the vertices in a given geometry, the term in brackets is a local measure term with the product over all triangles, and O(t_j) is the order of triangle j, i.e. the number of four-simplices to which it belongs. S_ER is the Einstein-Regge action [37] of discretized gravity,

  S_ER = −κ Σ_{j=1}^{N_2} V_2 δ_j + λ Σ_{j=1}^{N_4} V_4,   (5)

where κ = (8πG)^{−1}, λ = κΛ, and δ_j = 2π − O(t_j) arccos(1/4) is the deficit angle around a triangular hinge t_j, and where the volume of a d-simplex of equilateral edge length a is given by

  V_d = (a^d/d!) √[(d+1)/2^d].   (6)

It is standard to absorb the overall numerical factors into the couplings and to perform the sums in Eq. (5), so that the lattice action is given by the concise form

  S_ER = −κ_2 N_2 + κ_4 N_4,   (7)

with N_4 the number of four-simplices and N_2 the number of triangles. The parameters κ_2, κ_4, and β are inputs to the simulations whose values must be adjusted in order to approach the continuum limit. We do not include the matter action in the Boltzmann weight when generating our ensembles; this is known as the quenched approximation. 
Although this is an uncontrolled approximation, we still expect many features of the full theory to be preserved in the quenched theory. The main advantage of quenching is that it allows us to reuse the ensembles generated and analyzed in previous works [9].

B. Simulation details
The details of the generation of the lattice configurations used in this work are given in Ref. [9], and they are summarized briefly here. The lattice geometries are made by gluing together four-dimensional simplices along their three-dimensional faces. The four-simplices are equilateral, with constant edge length a, and the dynamics is encoded in the connectivity of the simplices. We sum over a class of triangulations known as degenerate triangulations [38]. This class of triangulations does not obey the combinatorial manifold constraints, though it does lead to a reduction of finite-size effects by approximately a factor of 10 over combinatorial triangulations, giving it some numerical advantages [38]. When the triangulations are degenerate, not all of the neighbors of a given four-simplex are necessarily distinct. Also, distinct four-simplices may share the same five vertex labels. Such configurations are not allowed when the lattices obey the combinatorial manifold constraints. Evidence for the existence of a continuum limit for degenerate triangulations with a non-trivial measure term was given in Ref. [9]. It is likely that this universality class would be shared by combinatorial triangulations if the continuum limit does in fact exist, as discussed in Ref. [9].
The algorithm for evaluating the partition function in Eq. (4) is now standard, and consists of a set of ergodic local moves, known as the Pachner moves, which are used to update the geometries [24,39,40]. The proposed local changes are then accepted or rejected using a Metropolis step. Most of the lattices used in this work were generated using a parallel variant of this standard algorithm known as parallel rejection [9]. Parallel rejection partially compensates for the low acceptance of the proposed local changes by the Metropolis accept/reject step. We define a sweep as a fixed number of attempted moves, though the precise number varies across ensembles. A sweep could be 10^8 or 10^9 attempted moves, with the acceptance rate depending strongly on the location in the phase diagram. Acceptance rates range from around 10^{-2} to 10^{-4}. Measurements are made after 10 to 50 sweeps, and our longer runs were for tens of thousands, or even one hundred thousand, sweeps.
The global topology of the geometries is fixed to S^4. The topology remains fixed since the local Pachner moves are topology preserving, and we choose S^4 by starting from the minimal four-sphere at the beginning of the Monte Carlo evolution. In order to take the infinite lattice volume limit it is necessary to fine-tune the bare parameter κ_4 to its critical value. This amounts to a tuning of the bare cosmological constant. Although this tuning is required to take the infinite lattice volume limit, it does not guarantee that we can take the infinite physical volume limit. The infinite lattice volume limit can be taken even in an unphysical phase where the extent of the lattice "universe" is only a few lattice spacings, so that the extent of the physical volume is on the order of the ultraviolet cut-off scale. As shown in Ref. [9], there is a region of the phase diagram with extended geometries, so that the large lattice-volume limit also corresponds to the limit of large physical volume.
The simulations are done at approximately fixed lattice four-volume. The Pachner moves require the volume to vary in order for them to be ergodic, though in practice the volume fluctuations about the target four-volume are constrained to be small. This is done by introducing a volume-preserving term in the action, δλ|N_4^f − N_4|, which keeps the four-volume close to a fiducial value N_4^f. This term does not alter the action when N_4 = N_4^f, but it does keep the volume fluctuations from becoming too large to be practical in numerical simulations. In principle one should take the limit in which δλ goes to zero, but in practice setting it small enough does not appear to lead to significant systematic errors. In Ref. [9] the parameter δλ was taken to be 0.04, though taking this value smaller did not lead to a noticeable change in the results.
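The role of the volume-preserving term in the Metropolis step can be illustrated with a minimal sketch. This is a toy, not the production parallel-rejection code; the function names and the way the Regge-action change is passed in are our own assumptions.

```python
import math
import random

def volume_term(n4, n4_target, delta_lambda=0.04):
    """Volume-preserving term delta_lambda * |N4^f - N4| added to the action."""
    return delta_lambda * abs(n4_target - n4)

def accept_move(delta_s_regge, n4_old, n4_new, n4_target, delta_lambda=0.04):
    """Metropolis accept/reject for a proposed Pachner move.

    delta_s_regge is the change in the Einstein-Regge action; the volume
    term penalizes configurations that drift away from the fiducial
    four-volume n4_target, but vanishes when N4 = N4^f."""
    delta_s = (delta_s_regge
               + volume_term(n4_new, n4_target, delta_lambda)
               - volume_term(n4_old, n4_target, delta_lambda))
    # Accept with probability min(1, exp(-delta_s)).
    return delta_s <= 0 or random.random() < math.exp(-delta_s)
```

A move that keeps N_4 at the fiducial value sees no extra penalty, which is the sense in which the term does not alter the action at N_4 = N_4^f.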
The phase diagram for the theory is shown in Figure 1. The solid line AB is a first order phase transition line that separates the branched polymer phase and the collapsed phase. Neither of these phases looks much like semiclassical gravity. The branched polymer phase has Hausdorff dimension 2, while the collapsed phase has a large, possibly infinite, dimension. The crinkled region does not appear to be distinct from the collapsed phase, but is connected to it by an analytic cross-over. The crinkled region requires larger volumes to see the characteristic behavior of the collapsed phase, suggesting that it is a part of the collapsed phase with especially large finite-size effects [41]. The results of Ref. [42] for combinatorial triangulations are in broad agreement with this picture of the phase diagram.
The exponent β associated with the local measure term must be fine-tuned in order to obtain ensembles with four-dimensional, semiclassical properties. It was shown in Ref. [9] that close to the transition line AB the lattice geometries have semiclassical properties, with global Hausdorff dimension close to four and a spectral dimension that approaches four at long distances. The prescription for recovering semiclassical geometries is to approach the line AB from the left. Since a first order transition is characterized by tunneling between meta-stable states, if we simulate too close to the transition line then some of the time a given run will be in the wrong state. In order to minimize contamination from the wrong phase, we only take those parts of a run that are in the correct phase. We find that this selection procedure becomes unnecessary at the finest lattice spacing considered in this work, since we can simulate somewhat farther from the transition line without losing the four-dimensional finite-volume scaling. This is fortunate, since future runs will be at ever finer lattice spacings, so will likely not suffer from this systematic error. Table I shows the parameters for the ensembles used in this work. Most of these ensembles were generated and discussed in Ref. [9], but some of them are new. We have added a larger lattice volume at our finest lattice spacing, as well as larger and smaller lattice volumes at β = 0 to aid with the infinite-volume extrapolation. The determination of the relative lattice spacing follows the procedure of Ref. [9], which looked at the return probability of a diffusion process on the lattice geometries. The return probability is dimensionless, but varies as a function of the diffusion time step, which is not. One can rescale the diffusion time step so that the return probability lies on a universal curve; the rescaling factor then tells us the relative lattice spacing. 
The relative lattice spacings quoted here differ slightly from those first reported in Ref. [9]. In that work the overlap of the return probability curves was obtained without regard for the lattice volumes used in the comparison. Here we match in a self-consistent way ensembles with lattices as close to the same physical volumes as possible. The shifts are within previously quoted errors, but the central values quoted here are likely more accurate. The errors quoted here are similar in size to those previously obtained and reflect the uncertainties in matching the curves in this procedure.
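The curve-matching that determines the relative lattice spacing can be illustrated on synthetic data. The pure-4D diffusion form used for `p_universal` below, the hidden ratio, and all variable names are our assumptions for the sketch; the measured lattice return probability is not this simple.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy universal return-probability curve: (4*pi*sigma)^(-d/2) with d = 4.
def p_universal(sigma):
    return (4.0 * np.pi * sigma) ** (-2.0)

sigma_grid = np.linspace(1.0, 100.0, 2000)  # diffusion time, lattice units
sigma_cmp = np.linspace(10.0, 20.0, 50)     # window used for the comparison
true_ratio = 1.5                            # hidden relative lattice spacing

p_ref = p_universal(sigma_cmp)                    # fiducial ensemble
p_new = p_universal(sigma_grid * true_ratio**2)   # second ensemble, own units

def mismatch(s):
    # Rescale the second ensemble's diffusion time by s^2 and compare
    # to the reference curve on the comparison window.
    rescaled = np.interp(sigma_cmp / s**2, sigma_grid, p_new)
    return float(np.sum((rescaled - p_ref) ** 2))

fit = minimize_scalar(mismatch, bounds=(0.5, 3.0), method="bounded")
relative_spacing = fit.x
```

The diffusion time scales like a length squared, which is why the rescaling factor enters as s^2 while the recovered relative lattice spacing is s itself.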

III. THEORETICAL BACKGROUND
In this section we give the necessary theoretical background for extracting and interpreting the numerical data associated with the binding energy of two scalar particles. We begin with a discussion of scalar fields on dynamical triangulations, then we show how to calculate the binding energy between scalar particles on our lattices, and finally we review the non-relativistic limit of the binding energy in the continuum limit.

TABLE I. The first column shows the relative lattice spacing, with the ensembles at β = 0 serving as the fiducial lattice spacing. The quoted error is a systematic error associated with finite-volume effects. The second column is the value of β, the third is the value of κ_2, the fourth is the number of four-simplices in the simulation, and the fifth is the number of configurations sampled.

A. Scalar fields on dynamical triangulations
In order to map Eq. (3) onto the lattice, we use a naive discretization in which each scalar field φ(x) is associated with a four-simplex (or, equivalently, a dual-lattice site). In this way each scalar field has only five neighbors, through the five surrounding tetrahedra (the dual edges). The lattice action on the dual lattice can then be written as

  S_m = (1/2) Σ_{⟨xy⟩} (φ_x − φ_y)² + (1/2) m_0² Σ_x φ_x²,   (8)

where ⟨xy⟩ is a nearest-neighbor pair of dual-lattice sites (two adjacent four-simplices), counted once, and m_0 is the bare mass. Note that the lattice scalar action has a shift symmetry in the absence of a mass term [30,43,44]. This is also true of the continuum action Eq. (3) for zero mass and in the absence of any additional scalar self-interaction terms. This shift symmetry ensures that the renormalized mass goes to zero as the bare mass approaches zero without any fine tuning. It is a non-trivial check of the calculations that this is the case. Expanding and collecting terms (noting that we work on a compact topology), and normalizing the coefficient of the kinetic term to one, we can write Eq. (8) as

  S_m = (1/2) Σ_{x,y} φ_x L_{xy} φ_y,   (9)

with L the dual-lattice Laplacian plus a mass term, and with the sums over x and y unrestricted over the whole lattice. In the continuum L is the Klein-Gordon operator, and the inverse of the L matrix is the scalar propagator.
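The shift symmetry of the massless lattice action can be checked numerically on a toy dual graph. The 5-cycle below is illustrative only (a real EDT dual lattice is 5-regular and dynamical), and the field values are invented.

```python
import numpy as np

def lattice_scalar_action(phi, edges, m0):
    """Lattice scalar action: a kinetic term over nearest-neighbor dual-lattice
    pairs, each counted once, plus a mass term."""
    kinetic = 0.5 * sum((phi[x] - phi[y]) ** 2 for x, y in edges)
    mass = 0.5 * m0**2 * float(np.sum(phi**2))
    return kinetic + mass

# A tiny toy dual graph: a 5-cycle with arbitrary field values.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
phi = np.array([0.3, -1.2, 0.7, 0.1, -0.5])

# With m0 = 0 the action only sees field differences, so phi -> phi + c
# leaves it unchanged; a mass term breaks this invariance.
s_massless = lattice_scalar_action(phi, edges, m0=0.0)
s_shifted = lattice_scalar_action(phi + 1.7, edges, m0=0.0)
```

This is the discrete analogue of the statement that the renormalized mass vanishes with the bare mass without fine tuning.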
In the next subsection we review the form of the binding energy between scalar particles in the presence of gravity on the lattice.

B. The binding energy
The binding energy between two particles is given by

  E_b = 2m − M,   (10)

where m is the mass of a single particle and M is the mass of the two-body bound state. If E_b is greater than zero, the two-body state has a lower energy than twice the energy of the single-particle state, and thus a bound state can form.
To compute the binding energy between two scalar particles, we must first calculate the propagator for a free scalar field living on a triangulation. Given the lattice scalar action of Eq. (9), we need only invert the matrix L_xy to obtain the propagator. For an arbitrary triangulation, we can write down L_xy explicitly,

  L_xy = (D_x + m_0²) δ_xy − A_xy,   (11)

where D_x is the number of four-simplices neighboring the four-simplex x (in our case always five), m_0 is the bare input mass, δ_xy is the Kronecker delta, and A_xy is the adjacency matrix, with matrix elements

  A_xy = 1 if x and y share a dual edge, and A_xy = 0 otherwise.
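The construction and inversion of the operator L_xy defined above can be sketched as follows. As a stand-in for a dual lattice we use the complete graph on six sites, chosen so that every site has exactly five neighbors, as on a true EDT dual lattice; this toy graph is our assumption, not an actual triangulation.

```python
import numpy as np

def laplacian_plus_mass(adj, m0):
    """Build L_xy = (D_x + m0^2) delta_xy - A_xy from an adjacency matrix,
    where D_x is the number of dual-lattice neighbors of site x."""
    deg = adj.sum(axis=1)
    return np.diag(deg + m0**2) - adj

# Toy stand-in for a dual lattice: the complete graph on six sites.
n = 6
adj = np.ones((n, n)) - np.eye(n)

L = laplacian_plus_mass(adj, m0=0.1)
prop = np.linalg.inv(L)   # one-particle propagators L^{-1}_xy
prop2 = prop**2           # element-wise squares: two-particle propagators
```

Each row of L sums to m_0², reflecting that the pure Laplacian annihilates constant fields; a nonzero bare mass is what makes the inversion well defined on a compact topology.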
The matrix elements of L^{−1} contain the propagators between two four-simplices. To compute the binding energy, we also need the squares of the propagators, (L^{−1}_xy)², which we interpret as the two-particle propagator between x and y.
With the one- and two-particle propagators, we can compute the average two-point correlation function as a function of geodesic distance on the lattice. The one- and two-particle two-point correlation functions are, respectively [34],

  G(r) = ⟨ (1/N(r)) Σ_{|x−y|=r} L^{−1}_{xy} ⟩   (12)

and

  G^{(2)}(r) = ⟨ (1/N(r)) Σ_{|x−y|=r} (L^{−1}_{xy})² ⟩,   (13)

as a function of r, the geodesic distance between dual lattice points, with N(r) the number of pairs at separation r. In Eqs. (12) and (13) there are two averages. The first is the average over all pairs of points at separation |x − y| = r. In practice this is not done in its entirety: a fixed number of source points are chosen, from which propagators and distances are calculated to all other points. The second average is indicated by the angled brackets, which denote an ensemble average, that is, an average over configurations. From G and G^{(2)} it is possible to compute the binding energy between two scalar masses. We can use the asymptotic forms of G and G^{(2)}, which are expected to fall off exponentially at large distances,

  G(r) ∼ e^{−mr}/r^p,   G^{(2)}(r) ∼ e^{−Mr}/r^q,   (14)

to extract the binding energy. Here m is the renormalized one-particle mass, and M is the two-particle renormalized mass. Consider the function

  F(r) ≡ G^{(2)}(r)/[G(r)]²,   (15)

which for sufficiently large source-sink separations becomes

  F(r) ∼ e^{(2m−M)r} r^{2p−q}.   (16)

We can identify the binding energy with the exponent,

  E_b = 2m − M,   (17)

and we can identify the power-law exponent, γ ≡ q − 2p. Thus E_b measures the difference in energy between a two-body system in its ground state and twice the mass of a single particle in its ground state, with gravitational effects taken into account. With these definitions, taking the logarithm of F, we find

  log F(r) = Z_E + E_b r − γ log r,   (18)

where Z_E is a constant. The single-particle propagator has the same form after taking the logarithm,

  log G(r) = Z − m r − p log r.   (19)

Besides the constant shifts present in both equations, the renormalized mass and the binding energy both appear as linear terms, while the power-law behavior appears as a logarithmic correction. 
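The extraction of the binding energy from the ratio F(r) can be sketched with a fit of the logarithmic form of Eq. (18) to synthetic, noiseless data; the parameter values below are invented for illustration, and the real analysis uses correlated fits.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_f_model(r, z, e_b, gamma):
    """Fit form of Eq. (18): log F(r) = Z_E + E_b * r - gamma * log(r)."""
    return z + e_b * r - gamma * np.log(r)

# Synthetic stand-in for the ensemble-averaged ratio F(r).
r = np.arange(1.0, 9.0)
log_f = log_f_model(r, 0.3, 0.05, 0.8)

popt, _ = curve_fit(log_f_model, r, log_f, p0=(0.0, 0.1, 1.0))
z_fit, e_b_fit, gamma_fit = popt
```

A positive fitted E_b signals that the two-particle state lies below twice the one-particle mass, i.e. that the state is bound.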
In the physically relevant case, and using our conventions, E b and m should be positive, indicating an attractive gravitational force and particles with positive energy density, respectively. In the next subsection we consider the non-relativistic limit of the binding energy between two scalar masses.
C. Binding energy in the non-relativistic limit

In this subsection we consider the energy levels associated with gravitational bound states. We follow Ref. [34] in assuming that the mass of the constituent particles is much lighter than the Planck mass such that the particles are weakly coupled. In this case the magnitude of the binding energy is well below the mass of the constituent particles and we can work in the non-relativistic limit. In that limit the effective description for the bound states of two scalar masses by gravity is simply given by Schrödinger's equation with Newton's gravitational potential. The bound state solution is then just the Schrödinger solution for positronium, but with the fundamental constants that appear in the Coulomb potential exchanged for those of Newton's potential.
Thus we consider the Schrödinger equation,

  [−(1/2μ)∇² + V(r)] ψ(r) = E ψ(r),   (20)

with potential

  V(r) = −G m²/r,   (21)

where m is the particle mass, μ is the reduced mass, equal to m/2 in the degenerate case considered here, and G is Newton's constant. The solution to this equation for a potential of this form is well known. The energy eigenvalues, labeled by the principal quantum number n, are

  E_n = −G² m⁵/(4n²),   (22)

in units where ℏ = c = 1. The ground state binding energy for two identical scalar masses held together by gravity is therefore E_1 = m⁵G²/4, giving us a clear prediction for the dependence of the binding energy on the constituent mass. However, in order to make contact with this power law using lattice methods we first need to take the continuum, infinite-volume limit of our calculation. To get an idea of what type of discretization and finite-volume effects we might expect, we recall that the fractal dimension of the geometries varies as a function of distance scale. Even at the largest distance scales that we can probe, the spectral dimension measured on the lattices does not get much above three. It is only in the continuum, infinite-volume limit that the spectral dimension at long distances extrapolates to four, as it must to reproduce our world. In a 2+1 dimensional world, the same derivation for the binding energy would lead to E_1 ∝ m², so we might expect a large extrapolation of the exponent in the power law, from ∼ 2 to ∼ 5, if the scalar fields on our lattices see a long distance effective dimension of around three at coarse couplings and small volumes. This expectation is borne out by the data, as we show in the following section.
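The hydrogen-like spectrum for Newton's potential, E_n = −G²m⁵/(4n²), can be encoded directly; the function name below is ours, and the numerical values are purely illustrative.

```python
def binding_energy_newtonian(m, G, n=1):
    """|E_n| for two equal masses m bound by V(r) = -G m^2 / r with reduced
    mass m/2, in units hbar = c = 1: |E_n| = G^2 m^5 / (4 n^2)."""
    return G**2 * m**5 / (4.0 * n**2)

# The predicted scaling: doubling the constituent mass multiplies the
# ground-state binding energy by 2^5 = 32.
e1 = binding_energy_newtonian(1.0, 1.0)
e1_doubled = binding_energy_newtonian(2.0, 1.0)
```

The steep m⁵ dependence is what makes the binding energy a sensitive probe of the effective dimension seen by the scalar fields.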
To further interpret our results, it is useful to model the binding energy as a function of the dimension d. By dimensional analysis, we have E_1 ∝ m in 1+1 dimensions. Taking a simple quadratic fit to the three values of the exponent quoted here for integer dimension leads to the scaling E_1 ∝ m^α, with

  α = d² − 4d + 5.   (23)

The values of d that determine this dependence interpolate over the range of effective long distance dimensions that were found for our lattices in Ref. [9], so this simple estimate of the relationship between α and d should be reliable enough to determine what numerical value of d is implied by the α obtained from our simulations. The other limit that must be taken in our calculation is the non-relativistic limit. Again following Ref. [34], we can get a rough estimate of the size of relativistic corrections to Eq. (22). Consider the Hamiltonian

  H = 2√(p² + m²) − G m²/r.   (24)

If we replace the momentum p by 1/r, expand for 1/r ≪ m, and minimize the energy, we find

  E_min = 2m − G²m⁵/4  at  r_min = 2/(G m³).   (25)

The momentum at the minimum is then p = 1/r_min = Gm³/2, so the non-relativistic condition p ≪ m implies that Gm²/2 ≪ 1 in the non-relativistic limit. The region where we see power law behavior for the binding energy as a function of scalar mass in our numerical data is compatible with this condition.
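The quadratic interpolation in the dimension can be checked directly: a quadratic through the three integer-dimension exponents (α = 1, 2, 5 at d = 2, 3, 4) is exact, and its coefficients follow from a standard polynomial fit.

```python
import numpy as np

# Exponents of E_1 ∝ m^alpha at integer spacetime dimension d:
# alpha = 1 in 1+1 (d = 2), alpha = 2 in 2+1 (d = 3), alpha = 5 in 3+1 (d = 4).
d = np.array([2.0, 3.0, 4.0])
alpha = np.array([1.0, 2.0, 5.0])

coeffs = np.polyfit(d, alpha, deg=2)  # a quadratic through three points is exact
alpha_of_d = np.poly1d(coeffs)        # alpha(d) = d^2 - 4 d + 5
```

Inverting alpha_of_d for a measured α then gives the effective long-distance dimension implied by the simulations.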

IV. NUMERICAL RESULTS
In this section we discuss the details of the numerical analysis of the correlators, which yields the binding energy and renormalized mass. We then explain the details of the power-law fits to the binding energy versus the renormalized mass. Finally, we discuss the infinitevolume, continuum extrapolation of the exponent in the power law as well as our determination of Newton's constant.

A. Correlation functions
The correlators defined in Eqs. (12) and (13) are obtained from exact inversions of the matrix L_xy calculated on a given lattice configuration for a given bare mass value. We do not consider every simplex on the lattice as a possible source. Instead, we vary the number of sources on each configuration among one, five, 20, and 60 in order to assess the effect that the number of source simplices has on the statistical error. We find that for a large number of sources, say 60, the systematic errors associated with modeling the deviations of the data from the model fit function are dominant. These deviations could be due to excited states, finite-size effects, and discretization effects. In order to avoid this difficulty of modeling the systematics, we use a single source per configuration in our main analysis. All source simplices are selected randomly from the largest three-volume cross-section of the lattice. This is done by first shelling a configuration starting from a source chosen at random, and then selecting sources for the propagator only from the largest slice in the shelling. We find that restricting our sources to come from the largest three-slice minimizes finite-lattice-spacing effects, and it is the same procedure that we used in previous work on the spectral dimension [9] and in our studies of Kähler-Dirac fermions [30].
An example of correlator data is shown in Fig. 2, and an example of the ratio F defined in Eq. (15) is shown in Fig. 3. Both plots use log-linear coordinates and show results for several masses. These figures display a feature that appears across all ensembles, to varying degrees. Around r ≈ 10 lattice spacings we see a bend in the correlator, and a peak where the derivative of log[F] changes sign. This feature is also a function of the bare mass: as the bare mass is increased, the bend is pushed out to larger distances. The same thing happens as the volume of the system is increased. In Fig. 4 we see the peak is pushed to larger distances as the system volume is increased. This indicates that the turn-over in the data is most likely due to long-distance lattice artifacts. As noted in Ref. [9], there are baby universes that branch off of the mother universe, and the baby universes can be quite long, although their cross-section is of order the lattice spacing. This effect is most pronounced on our coarsest lattices, where it can significantly modify long-distance physics, although the effect appears to vanish in the continuum limit. It is useful to keep this in mind when choosing a fit window to extract masses from our correlation functions, since it sets an upper bound on how far in the Euclidean time extent we can fit and still expect our model fit function to describe the data. The bend in the data is most likely due to baby-universe effects at long distances. At short distance scales we expect the usual discretization effects, as well as excited-state contamination. Thus, the fit window for the correlation function is rather constrained in our current approach.

FIG. 3. The logarithm of the ratio between the two-particle, two-point correlator and the square of the one-particle, two-point correlator, F(r). Five different bare masses are shown on the N_4 = 8000, β = 0 ensemble. We see a peak in the data, separating a positively sloped region and a negatively sloped region, which is pushed to larger r values for larger bare mass values. The distances displayed on the horizontal axis are in units of the distance between the centers of adjacent four-simplices, i.e. a dual edge length.

FIG. 4. The logarithm of the ratio of the two-particle correlator to the square of the one-particle correlator as a function of distance for multiple volumes at β = 0 and m_0 = 0.02. As the volume is increased the peak is pushed to larger values of r, indicating that the turn-over in the data is most likely a long-distance lattice artifact. The distances displayed on the horizontal axis are in units of the distance between the centers of adjacent four-simplices, i.e. a dual edge length.
The peak in log[F] is one of the first clues for how to extract a physically motivated answer for the binding energy. From the definition of E_b in Eq. (17), the coefficient of r should be positive if the two-particle state is bound. We see this is only possible between r = 0 and the peak around r ≈ 10 lattice spacings (for the specific ensemble of Figs. 2 and 3). In fact, the existence of such a region is already encouraging, since it implies that there exists an attractive force between scalar masses within the dynamical-triangulations framework. This was noticed already in Ref. [34].
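As a toy illustration of why a positive slope of log F signals binding, suppose the correlators take the idealized exponential forms suggested by Eqs. (15)-(17), C1(r) ~ e^(−mr) and C2(r) ~ e^(−(2m−E_b)r); the mass and binding-energy values below are hypothetical, not from the lattice data:

```python
import numpy as np

m_ren, E_b = 0.10, 0.004     # hypothetical renormalized mass and binding energy
r = np.arange(1, 10)         # window below the artifact peak near r ~ 10
C1 = np.exp(-m_ren * r)                  # one-particle correlator (idealized)
C2 = np.exp(-(2 * m_ren - E_b) * r)      # two-particle correlator (idealized)
F = C2 / C1**2                           # ratio of Eq. (15): ~ exp(+E_b r)

slope = np.polyfit(r, np.log(F), 1)[0]   # positive slope => bound state
print(slope)   # ≈ E_b = 0.004
```

In this idealized limit the slope of log F in the fit window is exactly the binding energy; on real data the power-law prefactors and lattice artifacts discussed above modify this.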
Additionally, looking at Fig. 3 we can see a change in concavity for the larger masses around a value of r ≈ 5. This inflection point marks the change from a region that is concave up to the region where the data turns over, i.e., the peak. This inflection point denotes the end of the valid fitting region according to Eq. (18), since beyond this point the long-distance effects begin to dominate the shape of the function. Across all bare masses and ensembles, we fit to a region that begins at r = 1 and ends around the inflection point of log[F]. We fit this same range in the one-particle correlator and in the F function. The choice of fit function is dictated by Eqs. (18) and (19). Thus, using the fit function of Eq. (26), we can extract the binding energy, the renormalized mass, and the exponents β and γ as a function of the bare mass. The fits are done with non-linear least squares fitting, including the correlations of the dependent data. The errors are estimated using single-elimination jackknife resampling, including the off-diagonal terms in the correlation matrix. The size of autocorrelation errors is estimated using a blocking procedure; the data is blocked until the errors no longer increase. In order to retain enough information to resolve the correlation matrix when performing fits, the data is not blocked, but the errors are inflated to reflect the increased error due to autocorrelations. The fits are performed under a jackknife, and the correlation matrix is reconstructed for each individual fit from the data on each jackknife subensemble. By including correlations in the fit, the χ² per degree of freedom is expected to be a reliable measure of goodness of fit. From the χ² and the number of degrees of freedom we compute a confidence interval (a p-value) for the fit, correcting for finite sample size.
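The single-elimination jackknife used for the error estimates can be sketched as follows; the data and the estimator (a log of a mean, standing in for a fit parameter) are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(1.0, 0.1, size=200)    # toy per-configuration measurements

def estimator(x):
    # stand-in for a fit parameter extracted from the full sample
    return np.log(np.mean(x))

n = len(data)
theta = estimator(data)
# single elimination: recompute the estimator dropping one configuration at a time
jack = np.array([estimator(np.delete(data, i)) for i in range(n)])
# jackknife error: sqrt((n-1)/n * sum_i (theta_i - theta_bar)^2)
err = np.sqrt((n - 1) / n * np.sum((jack - jack.mean())**2))
print(theta, err)
```

The same resampling pattern propagates through the correlated fits, with the correlation matrix rebuilt on each jackknife subensemble as described above.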
We make a histogram of the p-values from the fits whose parameters are propagated through to the rest of the analysis; this includes fits from all ensembles. The resulting histogram of p-values is relatively uniform, and is shown in Figs. 5 and 6 for the correlator fits and the F fits, respectively. Only the lowest bin possesses a small spike. Since the fits in this bin are scattered throughout the parameter values of the analysis more or less at random, we do not ascribe an additional error to this slight excess.

Given the results for the binding energy and the renormalized mass for a wide range of bare mass values on many different ensembles, we are able to test the theory presented in Sect. III C. This is done in the following subsections.

B. Mass dependence of the binding energy
The dependence of the renormalized mass on the bare mass is shown in Fig. 9 for four different volumes at fixed lattice spacing (β = 0). It is clear from this plot that the renormalized mass goes to zero as the bare mass approaches zero, which is a consequence of the shift symmetry of the lattice action. This provides a useful check of our calculation. The dependence of the binding energy on the renormalized mass is shown in Figs. 10, 11, 12, and 13 for four different ensembles. In order to make contact with Newtonian gravity, we must look for a power-law dependence of the binding energy as a function of the renormalized mass, as given in Eq. (22). As a first step, in order to eventually be able to compare results across lattice spacings, we put the results in the same lattice-spacing units. Before performing the fits we rescale all the binding energies and renormalized masses to the fiducial lattice spacing at β = 0, using the relative lattice spacings given in Table I.
To study the power-law behavior, we must also determine a window of masses m over which to fit the data. To find the beginning of the fit range, we search for the smallest bare mass for which the expected physical inflection exists in the quantity log[F] (e.g., m_0 ≥ 0.02 in Fig. 3). This identifies the minimal bare mass at which physical behavior appears in the correlation function. The renormalized mass m corresponding to this bare mass is the beginning of the fit window. The end of the fit window is determined by a change of inflection in the plots of binding energy versus renormalized mass. This point is where the nonrelativistic power-law behavior has been overtaken by effects due to strong coupling at larger mass values.
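The inflection criterion can be sketched numerically with a discrete second difference; the cubic toy shape below, with its inflection placed near r ≈ 8.5, is invented purely for illustration and is not the authors' actual procedure:

```python
import numpy as np

# Toy log F(r): concave up at small r, turning over at larger r,
# with an inflection point between r = 8 and r = 9.
r = np.arange(1, 16, dtype=float)
logF = 0.02 * r - 0.0001 * (r - 8.5)**3

d2 = np.diff(logF, n=2)     # discrete second difference, centered at r[1:-1]
# the first point with negative curvature marks the end of the fit window
end = r[1:-1][np.argmax(d2 < 0)] if np.any(d2 < 0) else r[-1]
print(end)   # first r with negative curvature: 9.0
```

On real, noisy data one would of course apply such a criterion to smoothed or fitted curves rather than raw second differences.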
For our power-law fits, we assume the functional form

E_b = A m^α,   (27)

where A and α are fit parameters, which in the continuum, nonrelativistic limit are expected to be A = G²/4 and α = 5, as given in Eq. (22). We find that this simple fit function is a good description of the data on all of our ensembles, with two exceptions: our two coarsest ensembles (β = 1.5 and β = 0.8). For these ensembles we notice a negative (in our convention) binding energy at small masses, indicating the absence of an attractive force, which can be seen in Figs. 13 and 14. We do not have a good model for how discretization effects modify the expected behavior of the binding energy at very coarse lattice spacings, but it is at least encouraging that this unphysical behavior is absent on our three finest lattice spacings. We have the option of dropping these coarse lattices in our continuum extrapolation, and we do this as a cross-check, since we have to model the unphysical behavior of the binding energy on these two ensembles. In order to describe this data, we choose a model with two additional fit parameters beyond the simple power law of Eq. (27). The motivation for the fit function on the coarser ensembles is data driven; it is the simplest ansatz that describes the data and that reduces to the expected fit form when the new parameters are taken to zero. Thus, for our two coarsest ensembles we use the fit function of Eq. (28), with A, B, α, and C the fit parameters. As before, A and α can be identified with their continuum, infinite-volume counterparts in Eq. (22). For these two ensembles the criterion for selecting the starting mass value of the fit is never satisfied, i.e., the correct inflection in log[F] is not observed for any bare mass. This is most likely due to large discretization errors on these coarse lattices masking the physical behavior. Therefore the start of the fit window is somewhat arbitrary on these ensembles.
We choose a fit range that begins in the region where the binding energy trends negative and ends before the inflection in the E_b versus m plot at larger masses. We vary this fit range to include a systematic error due to this choice. The form of Eq. (27), even at finite lattice spacing, is reinforced by the existence of the shift symmetry, which ensures that the bare mass is only multiplicatively renormalized, and hence that the binding energy is strictly proportional to some power of the renormalized mass. In Figs. 10, 11, and 12 we show examples of the binding energy plotted against the renormalized mass, along with a best-fit line and the fit range used (in black), for three different lattice spacings. These are the finer lattices at a_rel = 0.7 and a_rel = 0.8, and one of the coarser ensembles at a_rel = 1, respectively. In Figs. 13 and 14 we show the same quantities for the extra-coarse a_rel = 1.59 ensemble and the a_rel = 1.28 ensemble. We see good agreement between the fit functions of Eqs. (27) and (28) and the data.
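A minimal version of the power-law fit of Eq. (27) can be written as a linear fit in log-log coordinates; the data points here are synthetic, generated from the continuum expectation A = G²/4, α = 5 with a hypothetical G, and the actual analysis instead uses the correlated fitting procedure described in the text:

```python
import numpy as np

G_true = 15.0
m = np.linspace(0.08, 0.14, 6)             # renormalized masses in the fit window
E_b = (G_true**2 / 4.0) * m**5             # synthetic binding energies, Eq. (27)

# log E_b = log A + alpha * log m  =>  ordinary linear fit in log-log space
alpha, logA = np.polyfit(np.log(m), np.log(E_b), 1)
G = np.sqrt(4.0 * np.exp(logA))            # identify G = sqrt(4A)
print(alpha, G)   # recovers 5.0 and 15.0 up to round-off
```

This illustrates why the coefficient A and the exponent α are the natural fit parameters, and how G is read off from A.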
These fits are done by taking the correlations in the data into account. We use weighted orthogonal distance regression [45,46] to incorporate correlations in the renormalized mass and in the binding energy data sets simultaneously. For the weights we use the inverse covariances of both data sets to obtain the best fit to the data points using χ² minimization. A detailed discussion of this procedure can be found in Appendix A.
To assess a systematic error associated with the choice of fit range, we vary the start and end points of the fit range over a reasonable set of values guided by the quality of fit and tabulate the results. We then calculate the standard deviation of those results and include it as a systematic error, adding it in quadrature to the statistical error of the central fit to give a total error. We perform this power-law fit across all of our ensembles, extracting a power α and a coefficient A. From that coefficient A we calculate √(4A), which we associate with a value for G at fixed volume and lattice spacing. While this association with G at finite lattice spacing or volume may suffer from systematic errors, in the continuum, infinite-volume limit the quantity √(4A) should extrapolate to G. With these results for α and G across ensembles, we are able to obtain their continuum, infinite-volume values.
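The quadrature combination of the fit-range systematic with the statistical error amounts to the following; all numbers are invented for illustration:

```python
import numpy as np

central, stat_err = 4.8, 0.6                     # hypothetical central fit result
alt_fits = np.array([4.8, 4.9, 4.6, 5.0, 4.7])   # results from varied fit ranges
sys_err = np.std(alt_fits)                       # spread = fit-range systematic
total_err = np.hypot(stat_err, sys_err)          # add in quadrature
print(central, total_err)
```

Because the errors add in quadrature, a systematic much smaller than the statistical error (as here) barely changes the total.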

C. Continuum, infinite volume extrapolation
To perform the extrapolation to the continuum, infinite-volume limit we take the simplest ansatz suggested by finite-size scaling, together with the discretization dependence suggested by the symmetries of the theory. This approach is similar to what has been used in our previous analyses on dynamical triangulations, with the choice of fit functions given by

α(V, a_rel) = H_α + I_α/V + J_α/V² + K_α a²_rel + L_α a⁴_rel,   (29)

and

G(V, a_rel) = H_G + I_G/V + J_G/V² + K_G a²_rel + L_G a⁴_rel,   (30)

where H_i, I_i, J_i, K_i, and L_i are fit parameters for their respective quantities. Here V is the physical volume, and a_rel is the relative lattice spacing. We include quadratic corrections in the inverse physical volume and in the squared lattice spacing, since we find curvature in our data when we include small volumes and coarse lattice spacings. In addition to the fit ansatzes in Eqs. (29) and (30), we also perform fits dropping the a⁴_rel term, which we are able to do when we also drop the two coarsest lattice spacings. The results from this fit are consistent within one sigma with the results of the fit to the full data set including the a⁴_rel term. The extrapolations for α and for G are shown in Figs. 15 and 16, respectively, where they are plotted against the inverse physical volume. There, lines of constant lattice spacing are drawn, and the zero-lattice-spacing limit is shown in black. In Figs. 17 and 18 we show the same extrapolations to the infinite-volume, continuum limit as the fits shown in Figs. 15 and 16, but with a different cross-section through the two-dimensional parameter space. In these plots we show the values for α and G plotted versus the squared lattice spacing. There, lines of constant physical volume are drawn, with the infinite-volume limit shown as a solid black line. Note that the continuum, infinite-volume limit is taken separately for α and G.

[Fig. 18 caption: Same as Fig. 16, but now plotted as a function of the squared lattice spacing. Here example lines of constant physical volume are plotted along with the infinite-volume limit as a solid black line, and the data are represented in the same manner as Fig. 16.]
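Such an extrapolation can be sketched as an ordinary linear least-squares problem, assuming an ansatz with the corrections described in the text (terms up to 1/V² in the inverse volume and up to the fourth power of the relative lattice spacing). The data points and coefficients below are synthetic, and the real analysis fits each quantity with its own parameters and errors:

```python
import numpy as np

# Volumes (in units of 1000 four-simplices, illustrative) and relative
# lattice spacings loosely modeled on the ensembles discussed above.
V = np.array([4.0, 8.0, 16.0, 32.0, 8.0, 16.0])
a = np.array([1.59, 1.28, 1.00, 1.00, 0.80, 0.70])

# Synthetic "measurements" generated from the ansatz itself, with
# invented coefficients; H is the continuum, infinite-volume value.
H, I, J, K, L = 5.0, -0.8, 1.0, -1.2, 0.3
Q = H + I / V + J / V**2 + K * a**2 + L * a**4

# design matrix: constant, 1/V, 1/V^2, a^2, a^4
X = np.column_stack([np.ones_like(V), 1 / V, 1 / V**2, a**2, a**4])
coef, *_ = np.linalg.lstsq(X, Q, rcond=None)
print(coef[0])   # recovers H = 5.0 (up to round-off)
```

The continuum, infinite-volume value is the constant term: all correction terms vanish as V → ∞ and a_rel → 0.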
These fits are performed assuming that the data points are uncorrelated, which is reasonable given that the points are all from different ensembles. For the α extrapolation we find χ²/d.o.f. = 0.56, corresponding to a p-value of 0.73. For the G extrapolation we find χ²/d.o.f. = 0.37, corresponding to a p-value of 0.87. The infinite-volume, continuum-extrapolated values are α = 4.6(9) and G = 15(5). The errors on these quantities are fairly large, around 20%-30%, but the value of α is consistent with what is needed to recover the Newtonian limit. In fact, this result is somewhat better than it at first appears because of the sensitive dependence of α on the effective dimension. Given Eq. (23) for the dependence of α on dimension, one finds that our 1σ error band on α implies a 1σ range for the dimension of between 3.6 and 4.1. Thus our result implies Newtonian binding in a dimension quite close to 4 at long distance scales in the continuum limit. Note that at coarser coupling and smaller lattice volumes, where the effective dimension of the lattice geometries is around 3 or lower, the value of α drops to 2 or lower, which is what we expect given the dependence of α on dimension. Thus our results are consistent with expectations.
The value of G that we should expect is not known a priori, since it sets the lattice spacing in physical units, but it should satisfy certain consistency checks, as discussed in Sect. III C. The constraint that the binding be nonrelativistic implies that Gm²/2 ≪ 1, which translates into m² ≪ 0.13 in our fiducial lattice units, given our value of G in those units. The upper range of masses in our fit window is at m² ≈ 0.02, so the nonrelativistic condition is satisfied.
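Spelled out, with the central extrapolated value of G in fiducial lattice units:

```latex
% Nonrelativistic consistency check, using G = 15(5) in fiducial
% lattice units:
\frac{G m^2}{2} \ll 1
\quad\Longrightarrow\quad
m^2 \ll \frac{2}{G} \approx \frac{2}{15} \approx 0.13 ,
```

and the largest mass in the fit window, m² ≈ 0.02, sits an order of magnitude below this bound.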
One particularly nice feature of this calculation is that our value of G allows us to determine the lattice spacing in units of the Planck length for the first time. Because our extrapolation procedure recovers the correct Newtonian limit, we can have some confidence that the value of G computed via this method is reliable. We find that √G = ℓ_Pl = (3.9 ± 0.7) a_fid. Thus, our fiducial lattice spacing is around 1/4 the Planck length. Our finest lattice spacing is around 1/6 the Planck length, so we see that the lattice spacing can indeed be made smaller than the Planck length.
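The arithmetic behind these ratios, assuming G is quoted in units of the squared fiducial lattice spacing and using the finest relative lattice spacing a_rel ≈ 0.7 quoted earlier in this section:

```latex
% Planck length in units of the fiducial lattice spacing:
\ell_{\rm Pl} = \sqrt{G}\, a_{\rm fid} = \sqrt{15(5)}\, a_{\rm fid}
             = (3.9 \pm 0.7)\, a_{\rm fid},
\qquad
\frac{a_{\rm fid}}{\ell_{\rm Pl}} \approx \frac{1}{3.9} \approx \frac{1}{4},
\qquad
\frac{a_{\rm fine}}{\ell_{\rm Pl}} \approx \frac{0.7}{3.9} \approx \frac{1}{6}.
```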

V. DISCUSSION & CONCLUSION
One of the tests that any formulation of lattice gravity must pass is that it must have the correct classical limit. In this work we have shown that the interaction of scalar particles is well-described by Newton's potential in the appropriate non-relativistic, classical limit. This conclusion comes from our study of the binding energy of the two-particle bound state as a function of the constituent scalar particle mass. The analysis makes use of a number of ensembles with multiple volumes and lattice spacings, allowing us to extrapolate our results to the continuum, infinite-volume limit.
Our calculation passes a number of non-trivial cross-checks. We show numerically that the renormalized scalar mass approaches zero as the bare mass approaches zero, which is expected given the shift symmetry of the lattice action. We study the two-particle binding energy as a function of the constituent mass, and we find that it is well-described by a power law in the non-relativistic limit. At finite lattice spacing and finite volume, the exponent of the power law is close to what one expects given the measured values of the effective dimension on these same lattices in Ref. [9]. Only in the continuum, infinite-volume limit does the power law correspond to that of the Newtonian potential in four dimensions. The consistency of the binding with Newtonian gravity allows us to extract a value of G from the calculation, and knowing G allows us to determine the Planck scale for the first time within EDT. We find that the Planck length is ℓ_Pl = (3.9 ± 0.7) a_fid, so that our fiducial lattice spacing is about 1/4 the Planck length, and our finest lattice spacing is around 1/6 the Planck length. This shows that as the continuum limit is approached the lattice spacing becomes a decreasing fraction of the Planck length, suggesting that there is no barrier to taking a continuum limit.
Although the calculation is done in the quenched approximation, such that the back-reaction of the scalar field on geometry is neglected, we still expect the binding of particles to be governed by tree-level graviton exchange. The effects of quenching only appear at one-loop in the low-energy effective theory, so they should not affect the recovery of the classical limit, as seen here.
Our recovery of the Newtonian potential within EDT is a strong cross-check of all of the ingredients that go into the formulation of lattice gravity used here, from our determination of the relative lattice spacing to our approach to taking the continuum, infinite-volume limit. This non-trivial result provides powerful evidence that EDT is in fact a theory of gravity, and that investigations of the formulation as a possible realization of the asymptotic safety scenario should continue.

Appendix A: ORTHOGONAL DISTANCE REGRESSION

In the fits of the binding energy versus the renormalized mass, the different data points are actually correlated, both in the x and y data-sets. This is because the fits for the renormalized masses and binding energies, while at different bare input masses, used the same configurations for each fit, thus introducing correlations between fits. In order to account for this, we did simultaneous correlated fits in both the x and y data sets using orthogonal distance regression. Orthogonal distance regression attempts to minimize the radial distance between the fit function and the input data, as opposed to the vertical distance between the fit function and the y data that is minimized in ordinary non-linear least squares fitting.
The idea is the following: one seeks to minimize the function

Q = Σ_ij [y_i − f(x̄_i; β_k)] w⁻¹_ij [y_j − f(x̄_j; β_k)] + Σ_ij (x̄_i − x_i) Ω⁻¹_ij (x̄_j − x_j),

where x̄_i is the collection of ideal x values that minimizes the expression, f is the fitting function, β_k is the collection of fit parameters, y_i is the collection of y data, x_i is the collection of x data, w⁻¹ is the inverse of the covariance matrix for the y data, and Ω⁻¹ is the inverse of the covariance matrix for the x data. The problem can be recast, with a change of variables, into a different expression to be minimized,

Q = Σ_ij [f(x_i + δ_i; β_k) − y_i] w⁻¹_ij [f(x_j + δ_j; β_k) − y_j] + Σ_ij δ_i Ω⁻¹_ij δ_j.

Now there are two sets of parameters to minimize over, the δ_i and the β_k, where δ_i = x̄_i − x_i is the residual in the x data.
The above expression can be recast into a very simple form for numerical purposes. The two inverse covariance matrices can be combined into a single block-diagonal matrix, and the residuals can be combined into a single vector, v = (f(x + δ; β_k) − y) ⊕ δ.
This allows the above function to be written as

Q = vᵀ W v,

where W is the combined block-diagonal weight matrix. For this work, the covariance matrices are positive definite, so we can take the above expression one step further and perform a Cholesky decomposition, W = L Lᵀ, which in turn allows us to express Q as a simple dot product between vectors, Q = Σ_i r_i r_i, with r_i = Σ_k v_k L_ki. To minimize Q we use the Levenberg-Marquardt algorithm, minimizing with respect to the δ_i and the β_k. The resulting β values are the desired fit parameters for the input function, f.
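A minimal numerical sketch of this construction follows; the model, the data, and the diagonal covariances are toy choices, and the Levenberg-Marquardt minimization over the δ_i and β_k is omitted:

```python
import numpy as np

# Combine y- and x-residuals into one vector v, the two inverse
# covariance matrices into one block-diagonal weight W, and evaluate
# Q = v^T W v via a Cholesky factor: Q = sum_i r_i^2 with r = L^T v.
def f(x, beta):                         # toy fit function: a power law
    A, alpha = beta
    return A * x**alpha

x = np.array([0.08, 0.10, 0.12])
y = f(x, (56.25, 5.0)) * 1.02           # slightly offset toy "data"
delta = np.array([0.001, -0.002, 0.0005])   # x residuals (to be minimized over)
beta = (56.0, 5.0)                      # current trial fit parameters

w_inv = np.diag([1e4, 1e4, 1e4])        # inverse covariance of the y data (toy)
d_inv = np.diag([1e6, 1e6, 1e6])        # inverse covariance of the x data (toy)

v = np.concatenate([f(x + delta, beta) - y, delta])
W = np.block([[w_inv, np.zeros((3, 3))], [np.zeros((3, 3)), d_inv]])
L = np.linalg.cholesky(W)               # W = L L^T
r = L.T @ v
Q = r @ r
assert np.isclose(Q, v @ W @ v)         # identical to the direct quadratic form
```

In a full implementation, Q is then minimized over (δ, β) by Levenberg-Marquardt; the residual vector r is exactly the quantity such an optimizer consumes.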