Deconfinement transition line with the Complex Langevin equation up to $\mu /T \sim 5$

We study the deconfinement transition line in QCD for quark chemical potentials up to $\mu_q \sim 5 T$ ($\mu_B \sim 15 T$). To circumvent the sign problem we use the complex Langevin equation with gauge cooling. The plaquette gauge action is used with two flavors of naive Wilson fermions at a relatively heavy pion mass of roughly 1.3 GeV. A quadratic dependence describes the transition line well over the whole chemical potential range.


I. INTRODUCTION
The study of the phase diagram of QCD on the temperature (T)-quark chemical potential (µ) plane using first-principles methods is hampered by the sign problem at µ > 0, which invalidates naive Monte Carlo simulations based on importance sampling. The µ = 0 axis is well known [1][2][3]: for physical quark masses it features a crossover transition around T ≈ 150 MeV. The common lore suggests that this transition should get stronger as the chemical potential is increased, eventually reaching a critical point and turning into a first order phase transition line at higher µ.
In the literature the small chemical potential behavior of the transition line is parametrized by a polynomial (using the baryon chemical potential µ_B = 3µ_q):

T_c(µ_B)/T_c(0) = 1 − κ₂ (µ_B/T_c(0))² − κ₄ (µ_B/T_c(0))⁴ + … .   (1)

(In the following µ denotes the quark chemical potential.) The value of the curvature κ₂ turns out to be quite small [10,11], just as κ₄, which is consistent with zero within the statistical error bars of the state of the art [16].
In this paper we employ the Complex Langevin (CL) equation [17,18], which complexifies the field manifold using analytical continuation of the variables (not to be confused with the analytical continuation in µ mentioned above) to circumvent the sign problem and allows for direct simulations at µ > 0. The aim of this study is to show that CL simulations allow following the transition line to previously inaccessible chemical potential values. Here we use the plaquette gauge action with naive Wilson fermions at relatively heavy quark masses to study the transition line for µ/T values up to 5. In Sec. II we describe the setup of our simulations and discuss their behavior as we get closer to the continuum limit, as well as other known issues of CL simulations. In Sec. III we present in detail our methods for mapping out the phase transition line, and also present the numerical results. Finally we conclude in Sec. IV.

II. SETUP OF THE SIMULATIONS

A. Complex Langevin for QCD
For the link variables U_{x,ν} of gauge theories on the SU(N) manifold the discretised Langevin update with stepsize ε reads [19]

U_{x,ν}(τ+ε) = exp[ iλ_a ( ε K_{axν} + √ε η_{axν} ) ] U_{x,ν}(τ) ,   (2)

with λ_a the generators of the gauge group, i.e. the Gell-Mann matrices for SU(3), and a Gaussian white noise η_{axν} satisfying ⟨η_{axα} η_{byβ}⟩ = 2δ_{ab} δ_{xy} δ_{αβ}. The drift force K_{axν} = −D_{axν} S[U] is calculated from the action using the left derivative

D_{axν} f(U) = ∂_α f( e^{iαλ_a} U_{x,ν} ) |_{α=0} .   (3)

For complex actions the drift force K_{axν} is in general complex, thus the manifold of the link variables has to be complexified to SL(N,C). In the case of lattice QCD with fermions the measure of the theory is written as

dρ(U) = det M(µ) e^{−S_YM[U]} DU ,   (4)

with the determinant of the fermionic Dirac matrix M(µ), resulting in the non-holomorphic effective action S_eff = S_YM − ln det M(µ) and meromorphic drift terms. Simulating such a theory is not guaranteed to give correct results if the zeroes of the measure are visited by the process [20][21][22][23].
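As a concrete illustration, the discretised update above can be sketched for a single link. This is a minimal sketch under our own conventions (the helper names are ours, not from any production code), using SciPy's matrix exponential; a real simulation would of course update all links of the lattice and compute the drift from the action.

```python
import numpy as np
from scipy.linalg import expm

def gell_mann():
    """Return the 8 Gell-Mann matrices as an (8, 3, 3) complex array."""
    l = np.zeros((8, 3, 3), dtype=complex)
    l[0][0, 1] = l[0][1, 0] = 1
    l[1][0, 1] = -1j; l[1][1, 0] = 1j
    l[2][0, 0] = 1; l[2][1, 1] = -1
    l[3][0, 2] = l[3][2, 0] = 1
    l[4][0, 2] = -1j; l[4][2, 0] = 1j
    l[5][1, 2] = l[5][2, 1] = 1
    l[6][1, 2] = -1j; l[6][2, 1] = 1j
    l[7][0, 0] = l[7][1, 1] = 1 / np.sqrt(3); l[7][2, 2] = -2 / np.sqrt(3)
    return l

LAMBDA = gell_mann()

def langevin_step(U, K, eps, rng):
    """One Euler-discretised CL step for a single link.

    U   : (3,3) complex matrix in SL(3,C)
    K   : length-8 (possibly complex) drift components K_a = -D_a S
    eps : Langevin stepsize
    """
    # noise with variance 2 per component: <eta_a eta_b> = 2 delta_ab
    eta = rng.normal(0.0, np.sqrt(2.0), 8)
    A = np.tensordot(eps * K + np.sqrt(eps) * eta, LAMBDA, axes=(0, 0))
    # the generators are traceless, so det(expm(iA)) = 1 and the
    # updated link stays in SL(3,C)
    return expm(1j * A) @ U
```

Since the Gell-Mann matrices are traceless, the update preserves det U = 1 exactly, i.e. the process stays on SL(3,C) even when the complex drift pushes it away from SU(3).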
To monitor the process on the complex manifold SL(3,C), we use the unitarity norm (UN)

N_U = 1/(4Ω) Σ_{x,ν} Tr[ U_{x,ν} U_{x,ν}† − 1 ] ,   (5)

where Ω = N_s³N_t is the space-time volume of the lattice. To avoid an uncontrolled growth of the unitarity norm, and thus a quick breakdown of the simulation, we use the gauge cooling procedure [24,25]. The system then shows a quick thermalization of the physical quantities such as the plaquette or the Polyakov loop (well before Langevin time τ = 10 is reached), while the unitarity norm tends to grow slowly and saturates for large Langevin time τ around N_U ∼ O(1). As observed earlier [26], in this state, in spite of the gauge cooling, the process has distributions with slow decay in the non-compact directions, where the boundary terms can no longer be neglected and spoil the correctness proof of the method [27]. We therefore use the first part of the Langevin time evolution, after the physical quantities have equilibrated but while the unitarity norm is still small. This is motivated by the observation that oftentimes when CL simulations yield wrong results they do so only after a certain time, i.e. they first thermalize to the correct solution and only after some time start deviating from it. This has been observed in a simple U(1) plaquette model in [27]. We show the behavior of the unitarity norm in this model, the observable e^{iϕ} as well as the boundary term as a function of Langevin time in Fig. 1. Here the observable first thermalizes to the correct value and stays there up to t ≈ 20, where the process develops a non-negligible boundary term; this happens when the unitarity norm reaches a value above N_U = 0.2. Hence, in this model we would retain correct expectation values if we stopped the simulation at an average unitarity norm of that value. Similar behavior has been observed in real-time simulations of SU(2) gauge theory in [28,29] for the spatial plaquette. The generalization from those simple examples to QCD is not straightforward.
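The monitoring quantity can be sketched as follows; this is a minimal sketch using the per-link average quoted above, with an array layout (all links stacked into one array) that is our own choice for illustration. The norm vanishes exactly on SU(3) and grows as links drift into the non-compact directions of SL(3,C).

```python
import numpy as np

def unitarity_norm(links):
    """Average of Tr(U U^dagger - 1) over a stack of links.

    links : (n_links, 3, 3) complex array of SL(3,C) matrices.
    Returns 0 when every link is unitary; for det U = 1 the trace
    inequality Tr(U U^dagger) >= 3 makes the result non-negative.
    """
    UUdag = links @ links.conj().transpose(0, 2, 1)
    dev = UUdag - np.eye(3)
    # 'kii->' sums the diagonal of every (3,3) block
    return np.einsum('kii->', dev).real / links.shape[0]
```

In a running simulation one would evaluate this after every (cooled) update and stop sampling once it exceeds the chosen cutoff.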
Hence, we first look at HDQCD [30,31], where the spatial hopping terms in the Dirac matrix are dropped; it is a good approximation to full QCD for heavy quarks at high density. In HDQCD CL works very well in certain parameter regions and it was possible to map out the full phase diagram [26]. We investigated the occurrence of boundary terms in HDQCD in [32] and found that as β increases at fixed κ and µ the boundary terms tend to become smaller and smaller, moving CL so close to the correct value obtained from reweighting that it becomes indistinguishable from the correct result within error bars. However, even at β = 6.0, if one waits long enough CL becomes slightly wrong. We show what happens in HDQCD for V = 6⁴, β = 6.0, κ = 0.12, N_f = 1, µ = 0.85 in Fig. 2. The top plot shows that initially the observable fluctuates around a plateau at the correct value, which is computed by reweighting. After t ≈ 40 the fluctuations start to become larger and another equilibrium is reached. This coincides with the average unitarity norm reaching a value of N_U ≈ 0.2 at t = 40. Hence, if we cut off the unitarity norm at this value, the observable yields a result consistent with the reweighting value. Note that for this plot some blocking was done, so the points shown are averaged over a small time window. The idea of using only short times is also supported by the distribution of the observable itself and by the criterion from [33], as seen in the center and bottom plots of Fig. 2. There one can see that for long times the distributions clearly develop tails, indicating a failure of CL. However, when taking only short times, where the observable has thermalized to the intermediate plateau, there are no tails or only very small tails, indicating that CL gives the correct result here.
The second issue concerns meromorphy of the drift. The fermionic determinant has zeroes on the complexified SL(3,C) manifold, which in turn lead to singularities in the drift terms. These singularities can in certain cases lead to the breakdown of the Complex Langevin method. In [23] it was shown that a sufficient condition for the correctness of the results is that the zeroes are outside the distribution on the complex manifold.
We investigate the eigenvalue distribution of the Dirac matrix to gain an insight into this question. Calculating the whole spectrum of the Dirac matrix is very costly, and we are interested only in the small eigenvalues which can potentially cause problems, therefore we use the Krylov-Schur algorithm to calculate the smallest eigenvalues of the Dirac matrix. We find in the interesting region close to the transition temperature that for our rather large masses used in the present investigation the spectrum of the Dirac matrix seems to show a very fast decay at small eigenvalues. In Fig. 3 we show typical histograms of the eigenvalues with smallest absolute values, for various µ values, on a lattice which is close to the transition temperature.
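The eigenvalue computation can be illustrated with a small sketch. SciPy's `eigs` wraps ARPACK (implicitly restarted Arnoldi) rather than a Krylov-Schur solver, but the calling pattern for a shift-and-invert search around zero is analogous; the sparse matrix below is a random stand-in, not a lattice Dirac operator.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Random sparse non-Hermitian stand-in, shifted so its spectrum
# stays away from the origin (a singular matrix cannot be factorized
# in the shift-and-invert step).
n = 400
M = (sp.random(n, n, density=0.02, random_state=1) + 2.0 * sp.eye(n)).tocsc()

# sigma=0.0 with which='LM' targets the eigenvalues closest to zero:
# ARPACK works with (M - sigma I)^{-1}, whose largest-magnitude
# eigenvalues correspond to the smallest-magnitude ones of M.
vals = spla.eigs(M, k=6, sigma=0.0, which='LM', return_eigenvectors=False)
smallest = np.sort(np.abs(vals))
```

Only a handful of the smallest eigenvalues are needed, so this is far cheaper than a full dense diagonalization of the Dirac matrix.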

B. Towards the continuum limit
In this subsection we describe the properties of Complex Langevin simulations of full QCD as the continuum limit is approached. One observes that the behavior of the unitarity norm improves closer to the continuum limit. In Fig. 4 we show typical simulations where the lattice spacing is decreased while keeping every other quantity fixed in physical units. The initial thermalization rate, as measured using e.g. plaquettes or Polyakov loops, remains approximately constant; see the Polyakov loop in Fig. 5. The autocorrelation times scale approximately with the lattice spacing, and the rise of the unitarity norm is slower for the smaller lattice spacing, such that closer to the continuum more statistically independent samples are generated before the unitarity norm grows too high. Our strategy thus relies on the system having a short thermalization time for the physical observables, and a longer thermalization time for the complexified process, providing a plateau region where the physical observables of interest can be sampled. This strategy potentially breaks down if the plateau region is not reached before the fluctuations in the imaginary directions grow large, e.g. around a second order phase transition or on large lattices. Closer to the continuum limit, however, the situation improves: the time window for the plateau region increases. Moreover, in the state where the process has thermalized and there is a discrepancy between complex Langevin and correct results (caused by boundary terms), the discrepancy quickly diminishes as one decreases the lattice spacing [32].
Our goal is to scan the transition region of QCD up to large µ/T. We have seen that for HDQCD the complex Langevin method does not converge to the correct results for small lattice coupling β. We expect a similar behavior for full QCD (see [34] for a similar behavior with staggered quarks). Therefore we stay at a safe value of β and scan the temperature by varying the temporal lattice extent N_t. This allows us to start deep in the confining phase and increase the temperature into the deconfined phase (where CL simulations have already produced results on the thermodynamics of QCD [35]). This does not keep the aspect ratio N_s/N_t fixed, which does have an effect on some observables, as we will see.
We use the plaquette gauge action and unimproved Wilson fermions with N_F = 2. To convert to physical units we have measured the lattice spacing and pion masses, see Table I. To calculate the drift force for the fermions we use a noisy estimator [36]. We use an adaptive algorithm [37] to control the Langevin stepsize ε in the update such that ε max(|K_{axν}|) < d, with the control parameter d typically set to d ∼ 0.001 − 0.004.
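The adaptive stepsize logic can be sketched as follows. The function name, the shrink/grow factors and the cap are our own illustrative choices (the paper's algorithm is from [37]); we read the condition as ε·max|K| < d, with d in the range quoted above.

```python
def adapt_stepsize(eps, max_drift, d=0.002, shrink=0.8, grow=1.05,
                   eps_max=1e-4):
    """Return an updated Langevin stepsize given the current maximal drift.

    If the step times the drift exceeds the control parameter d, shrink
    the step; otherwise relax it slowly back towards the cap eps_max.
    """
    if eps * max_drift > d:
        return eps * shrink
    return min(eps * grow, eps_max)
```

Shrinking aggressively but growing slowly keeps the process stable through rare large-drift configurations without permanently sacrificing the average stepsize.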
It has been noted several times [39,40] that CL simulations can converge to wrong results even at vanishing chemical potential µ = 0, where the difference to the correct results quickly decreases with decreasing lattice spacing. This happens due to the boundary terms at infinity growing large as the process thermalizes on the complexified manifold [32]. However, within our setup we stop the run once the unitarity norm reaches a value of N_U ≈ 0.1, in accordance with the plateau region discussed above. With this procedure we expect to get correct results as long as there are no physical effects requiring extremely long thermalization, such as a second order phase transition. In Fig. 6 we compare the plaquette expectation value for an N_s = 12, β = 5.9, µ = 0, κ = 0.15 simulation from CL and HMC. One can see that the visible deviations are small and agree with zero within statistical errors. The same effect for stout-smeared staggered quarks with N_f = 4 was observed in [35]. Hence, we conclude that within our setup there is practically no deviation at µ = 0.

TABLE I. Measured lattice spacings and pion masses for the simulation parameters used in this work.

κ      β     a (fm)               mπ a              mπ (GeV)
0.14   5.9   0.09152 ± 0.00045    1.01 ± 0.0065     2.18 ± 0.025
0.15   5.9   0.0655 ± 0.001       0.4209 ± 0.017    1.27 ± 0.072
0.14   6.0   0.0736 ± 0.00024     0.9173 ± 0.0095   2.46 ± 0.033
0.15   6.0   0.05819 ± 0.00055    0.2994 ± 0.013    1.02 ± 0.054

III. METHODS AND RESULTS FOR THE PHASE TRANSITION
We wish to extract the transition line from different observables. Natural choices are based on the Polyakov loop, the chiral condensate and the quark density, and their corresponding susceptibilities or higher derivatives. Since the chiral condensate has proved to be quite noisy, and it has both additive and multiplicative renormalization, here we concentrate on the Polyakov loop. The Polyakov loop renormalizes multiplicatively,

P_ren = e^{−c(a)N_t} P_bare ,   (6)

hence, if we consider ratios of Polyakov loop observables, the renormalization drops out entirely. In the following we will not look at the Polyakov loop itself but at the product of the Polyakov loop and the inverse Polyakov loop,

P = P_bare P̄_bare ,   (7)

where P̄_bare denotes the loop winding in the opposite temporal direction.
Our observables are related to the third order Binder cumulant

B₃(O) = ⟨O³⟩ / ⟨O²⟩^{3/2} .   (8)

We will look at the following observables built from the Polyakov loop:

1. B₃(P − ⟨P⟩), the third order cumulant of the Polyakov loop observable with its mean subtracted, which properly measures the fluctuations.

2. B₃(P), the third order cumulant of the unsubtracted Polyakov loop. Note that this does not take fluctuations into account properly, but we will show that the resulting phase transition line is similar to the first one.
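The cumulant can be estimated directly from Monte Carlo samples of the observable. This is a minimal sketch (the helper name and array layout are ours), assuming the third order Binder cumulant B₃(O) = ⟨O³⟩/⟨O²⟩^{3/2}; with the mean subtracted it reduces to a skewness-like measure of the distribution's asymmetry.

```python
import numpy as np

def binder_b3(samples, subtract_mean=True):
    """Third order Binder cumulant <x^3> / <x^2>^(3/2) from samples.

    With subtract_mean=True this gives B3(P - <P>); with False it
    gives B3(P) for the unsubtracted observable.
    """
    x = np.asarray(samples, dtype=float)
    if subtract_mean:
        x = x - x.mean()
    return np.mean(x**3) / np.mean(x**2)**1.5
```

A perfectly symmetric distribution gives zero, so scanning the temperature and locating the sign change of this quantity is what the analysis below relies on.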
A. The phase transition from B₃(P − ⟨P⟩)

A standard way to investigate phase transitions is a volume scaling analysis of cumulants like B₃(P − ⟨P⟩) [41,42]. This analysis works well in the case of an actual phase transition. In QCD we have a crossover instead, so a priori this analysis does not work. However, there are many possibilities to define the phase transition temperature in the case of a crossover, see e.g. [12]. In order to find a criterion, we first look at µ = 0 for κ = 0.15 in different volumes; this is visualized in Fig. 7. The top plot shows that the usual volume scaling seems to work: the cumulants for different volumes cross at the point where B₃(P − ⟨P⟩) = 0. We use this to define T_c via

B₃(P − ⟨P⟩)|_{T=T_c} = 0 .   (9)

For a qualitative understanding of the behavior of B₃(P − ⟨P⟩) as a function of T, note that it essentially measures the asymmetry of the distribution of P. At large temperatures, where ⟨P⟩ is large, a symmetric distribution is expected. In contrast, at low temperatures ⟨P⟩ should be small, while the observable P (defined in (7)) is positive definite, so the distribution of P should be asymmetric. Note that the bottom plot in Fig. 7 suggests that this scaling analysis seems not to work when looking at the cumulant as a function of 1/N_t; however, the crossing points get closer as the volume increases. Thus, the criterion can still be used and should be valid in the thermodynamic limit.
We use this method to extract the phase transition temperature at different µ. In practice we apply a linear fit to B₃(P − ⟨P⟩) as a function of 1/N_t close to the zero crossing and define T_c from the zero of the fit function. Statistical errors are calculated using the bootstrap method. Results are shown in Sec. III C.
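The extraction and its error estimate can be sketched as follows; the function names are ours, and the estimator passed to the bootstrap stands in for the full analysis chain applied to each resampled data set.

```python
import numpy as np

def tc_from_zero_crossing(inv_nt, b3):
    """Linear fit b3(1/N_t) = a + b/N_t near the crossing; return the zero."""
    slope, intercept = np.polyfit(inv_nt, b3, 1)
    return -intercept / slope

def bootstrap_error(samples, estimator, n_boot=1000, seed=0):
    """Bootstrap: resample with replacement, re-estimate, take the spread."""
    rng = np.random.default_rng(seed)
    n = len(samples)
    vals = [estimator(samples[rng.integers(0, n, n)]) for _ in range(n_boot)]
    return np.std(vals, ddof=1)
```

For the mean of n independent samples the bootstrap error reproduces the familiar σ/√n scaling, which is a convenient sanity check of the resampling machinery.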
B. The phase transition from B₃(P)

One disadvantage of fluctuation quantities such as B₃(P − ⟨P⟩) is that they are quite noisy and require high-statistics runs. While the method presented in the previous section (III A) is close to the standard treatment of phase transitions, there are other ways to proceed. Here we are interested in the method used in [10,11]. The idea is the following: we have an observable O(T, µ) which is not constant around the critical temperature (typically it has an inflection point). We note that O(T, µ > 0) is well approximated (for small µ and close to T_c(0)) by O(T − T_shift, µ = 0). We then identify T_shift as the shift of the critical temperature. We formalize this with the following definition: provided we know T_c(µ = 0) (using an independent definition, e.g. from the peak of a susceptibility), we define T_shift(µ) by requiring

O(T, µ) = O(T − T_shift(µ), µ = 0) ,   (10)

and define the phase transition temperature via

T_c(µ) = T_c(0) + T_shift(µ) .   (11)

Note that this procedure relies very much on a precise determination of T_c(µ = 0), which can be performed using HMC simulations; these are typically faster than CL simulations and thus allow for higher statistics.
To define T_c in practice we have used spline interpolation to obtain a continuous function for B₃(P) as a function of 1/N_t. As for the first definition, statistical errors are calculated using the bootstrap method. Again, we will show the results of this procedure in Sec. III C.
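The shift extraction can be sketched with a spline as follows. This is an illustration under our own naming, with linear toy data standing in for the measured cumulant; the real analysis interpolates B₃(P) versus 1/N_t and bootstraps the whole procedure.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import brentq

def t_shift(t_mu0, obs_mu0, t_probe, obs_probe):
    """Solve obs_mu0(t_probe - shift) = obs_probe for the shift.

    t_mu0, obs_mu0 : mu = 0 scan of the observable (monotonic near Tc)
    t_probe        : temperature of the mu > 0 measurement
    obs_probe      : value of the observable measured there
    """
    spline = CubicSpline(t_mu0, obs_mu0)
    # temperature at which the mu = 0 curve takes the measured value
    t_match = brentq(lambda t: spline(t) - obs_probe, t_mu0[0], t_mu0[-1])
    return t_probe - t_match
```

Given the shift, the transition temperature follows as T_c(µ) = T_c(0) + T_shift(µ), with T_c(0) taken from an independent (e.g. HMC) determination.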

C. Results
In this section we show the numerical results of our simulations. We have used the plaquette gauge action with two flavors of naive Wilson fermions, at β = 5.9 and κ = 0.15. This corresponds to a relatively heavy pion mass of ≈ 1.3 GeV, as seen in Table I. We use the temporal extent of the lattice to vary the temperature, and we show results for two different spatial volumes, N_s = 12 and 16, as smaller volumes proved to be too noisy.

The cumulants
We show B₃(P − ⟨P⟩) for N_s = 12, 16 in Fig. 8 and B₃(P) for the same volumes in Fig. 9.
We find that the simulations are much noisier for smaller µ, while the curves become much smoother and better behaved for larger µ. Due to the low statistics it is hard to extract a transition temperature at µ = 0 from the CL data; instead, in the following we use T_c from the HMC analysis, see Fig. 7. Note, however, that there is no discrepancy between HMC and CL, as can be seen in Fig. 6.

Phase transition temperatures and curvature of the transition line
Once we have extracted the transition temperatures in all cases, we can compute the curvature of the transition line. This is a standard quantity regularly computed using lattice simulations; in conventional lattice simulations, however, it is only accessible around µ = 0 using Taylor expansion, imaginary chemical potentials or reweighting. We fit our results with a quadratic function of the form

T_c(µ) = T_c(0) [ 1 − κ₂ (3µ/T_c(0))² ] ,   (12)

and after the fit normalize to T_c(µ)/T_c(0). We show the resulting transition lines in Fig. 10 for two different volumes; note that both axes have been normalized with T_c(0), i.e. the parameter that comes from the fit of Eq. (12) to the data.
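The curvature fit can be sketched as below, assuming the quadratic form T_c(µ) = T_c(0)[1 − κ₂(3µ/T_c(0))²] with µ_B = 3µ; the data points are synthetic placeholders, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def tc_of_mu(mu, tc0, kappa2):
    """Quadratic transition line in the quark chemical potential mu."""
    return tc0 * (1.0 - kappa2 * (3.0 * mu / tc0) ** 2)

# synthetic placeholder data generated with Tc(0) = 0.2, kappa2 = 0.01
mu = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
tc_data = tc_of_mu(mu, 0.2, 0.01)

popt, pcov = curve_fit(tc_of_mu, mu, tc_data, p0=[0.25, 0.005])
tc0_fit, kappa2_fit = popt
```

With real, noisy data the covariance matrix `pcov` (or a bootstrap over the T_c(µ) points) supplies the error on κ₂.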

Comparison of the methods to define Tc(µ)
Results of the different methods are compared in Table II, where T_c(µ) is obtained from the zero crossing of B₃(P − ⟨P⟩) as described in Sec. III A and from the 'shift method' described in Sec. III B. We used the value a = 0.0655(10) fm measured for κ = 0.15 (see Table I). The curvature is given for µ_B = 3µ, see Eq. (1). The column T_c(0) is the parameter resulting from the quadratic fit of Eq. (12).
In Fig. 11 we show the curvature κ₂ as a function of the hopping parameter κ in order to ascertain its dependence on the pion mass. As expected, at very high fermionic mass (corresponding to small κ) we observe a small curvature κ₂, as seen also in an earlier study of HDQCD [26]. Curiously, we observe a non-monotonic behavior, with a maximum at intermediate masses.

IV. CONCLUSIONS
In this paper we have studied the deconfinement transition of QCD for a range of chemical potentials up to µ/T ∼ 5, using the plaquette gauge action with naive Wilson fermions at N F = 2. To circumvent the sign problem at µ > 0 we have used the Complex Langevin equation.
To stabilize the simulations we use an adaptive step size and gauge cooling. To limit the effects of the boundary terms spoiling the correctness proof, we monitor the unitarity norm (5). Following the results from simple models and from HDQCD, we read off the observables' averages from the thermalized plateau that develops while the unitarity norm is still small. We also made sure that our simulations do not approach the zeroes of the determinant.
We have defined the transition temperature using either the zero crossing of the third order Binder cumulant of the Polyakov loop, or a 'shift method' in which we assume that the temperature dependence of some observable B(T) changes to B(T − T_shift) at nonzero µ (in a range close to T_c), which then allows the definition of the shifted T_c(µ).
We have carried out most simulations at β = 5.9 and κ = 0.15, which corresponds to a relatively heavy pion mass m_π ≈ 1.3 GeV, at spatial volumes N_s = 12 and N_s = 16. The measured transition temperatures are well described by a quadratic dependence on µ up to µ ∼ 5T. We observe good agreement between the various definitions of the transition temperature.
The determination of the transition temperature with the Binder cumulant works relatively well, in spite of the transition being a smooth crossover. This might signal that a second order transition is nearby. A candidate for that is the transition line at the upper right corner of the Columbia plot, where the deconfinement transition is expected to turn first order at heavy quark masses. To investigate the applicability of the method further studies are needed at smaller quark masses and larger spatial volumes.