Applying constrained simulations for low temperature lattice QCD at finite baryon chemical potential

We study the density of states method as well as reweighting to explore the low temperature phase diagram of QCD at finite baryon chemical potential. We use four flavors of staggered quarks, a tree-level Symanzik improved gauge action and four stout smearing steps on lattices with $N_s=4,6,8$ and $N_t=6 - 16$. We compare our results to those of the phase quenched ensemble and also determine the pion and nucleon masses. In the density of states approach we applied pion condensate or gauge action density fixing. We found that the density of states method performs similarly to reweighting. At $T \approx 100$ MeV, we found an indication of the onset of the quark number density at around $\mu/m_N \sim 0.16 - 0.18$ on $6^4$ lattices at $\beta=2.9$.

The phase quenched theory is QCD at finite isospin chemical potential (QCD_I) [36,37], while the quenched theory is its zero-flavor limit. The onset at µ = m_π/2 can then be interpreted as a consequence of pion condensation in QCD_I. In other words, in a phase quenched simulation at µ > m_π/2, the ground state of the phase quenched theory is very different from the ground state of the full theory: in the former, pion condensation takes place above m_π/2, while in the full theory no pion condensation should occur. Thus, in this case one generates unimportant configurations when one uses the phase quenched ensemble as the "source" ensemble for reweighting to the "target" ensemble, QCD at baryon chemical potential. To overcome this difficulty, we mention a particular idea that was proposed in random matrix theory (RMT) [20], where the situation is very similar to the case of QCD. Ref. [20] states that performing constrained simulations that suppress pion condensation reduces the overlap problem and makes the sign problem milder in RMT. We investigate whether similar improvements can cure the problems in QCD, and we focus primarily on the low temperature region of the phase diagram.
The organization of the paper is as follows. In Section II, we first review the density of states method in general (Sec. II A), then discuss the method as applied to QCD and give the definitions of the lattice actions and observables we use (Secs. II B and II C). In Section III, we present our numerical results regarding the density of states method and compare them to the results of reweighting from the phase quenched theory. Section IV contains the conclusions.

II. THE DENSITY OF STATES METHOD
A. Formulation of the method

Suppose we have an arbitrary quantum field theory with quantum fields Φ and action S[Φ]. In the path integral formalism, the partition function is

Z = ∫ DΦ exp{−S[Φ]},   (1)

where all fields are symbolized with Φ in this compact notation; so in the case of QCD, Φ includes the gauge and fermion fields as well. We can insert a real Gaussian integral into the path integral, since it changes only the overall normalization,

1 = N ∫ dx exp{−(γΩ/2)(Q[Φ] − x)²},   (2)

where we parametrized the Gaussian with γ and x, Ω is the 4-volume of the system, and N ∝ √(γΩ) is an irrelevant normalization factor. The operator Q can be any operator of the theory. Interchanging the order of integrations, we can write

Z = N ∫ dx ∫ DΦ exp{−S[Φ] − (γΩ/2)(Q[Φ] − x)²}.   (3)

Then, if Q is chosen to be the energy and γ → ∞, the Gaussian approximates a Dirac-δ, and the second integral gives what is conventionally called in condensed matter physics the density of states. The partition function then naturally shows up as the integral of the density of states over all possible values of the energy. It is also common to call the second integral of Eq. (3) the density of states for a finite value of γ and any operator Q. Naturally, when doing Monte-Carlo simulations, we use a finite value of γ, and one can think of the exponent of the Gaussian as a potential term added to the action. This term constrains the value of Q close to the minimum of that potential, which is likely to be near x when γ is large enough.
For our purposes, we need to be even more general and include a reweighting factor in the method, so that it can be used for non-real actions. We can write exp{−S[Φ]} = w[Φ] g[Φ], where we isolate a positive part g[Φ] that can be used for Monte-Carlo simulations. Then the definition of the density of states is

ρ(x) = ∫ DΦ g[Φ] exp{−(γΩ/2)(Q[Φ] − x)²}.   (4)

Expectation values can then be written as

⟨A⟩ = ∫ dx ρ(x) ⟨A w⟩_x / ∫ dx ρ(x) ⟨w⟩_x,   (5)

where the expectation value with the subscript x refers to the expectation value in the constrained ensemble with that specific x value, according to

⟨A⟩_x = (1/ρ(x)) ∫ DΦ A[Φ] g[Φ] exp{−(γΩ/2)(Q[Φ] − x)²},   (6)

for an operator A. As one can observe, ρ(x) is the partition function of the constrained ensemble with weight g[Φ]. As was mentioned above, in the limit γ → ∞, ρ(x) measures the histogram of the operator Q. Direct measurement of the histogram would be very challenging, as rarely visited bins have a very bad signal to noise ratio (the relative statistical error of a bin grows as its count decreases). Using finite γ means a smearing of the histogram on the scale 1/γ. For a large enough value of γ, ρ(x) has a maximum (or several maxima) around the expectation value of Q without the fixing term, and decays quickly away from it. We can, however, also measure ρ(x) through its logarithmic derivative,

d ln ρ(x)/dx = γΩ (⟨Q⟩_x − x),   (7)

by carrying out simulations at various x values. Using this method we obtain ρ(x) with exponentially reduced errors compared to the direct measurement of the histogram.
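The reconstruction of ρ(x) from its logarithmic derivative can be sketched numerically. This is a minimal illustration, assuming the constrained averages ⟨Q⟩_x have already been measured on a grid of x values; the Gaussian constraint implies d ln ρ/dx = γΩ(⟨Q⟩_x − x), which is integrated with the trapezoidal rule (all names below are ours, not from the paper):

```python
import numpy as np

def log_density_of_states(xs, q_means, gamma_omega):
    """Reconstruct ln rho(x) (up to a constant) from the logarithmic
    derivative d ln rho / dx = gamma*Omega*(<Q>_x - x), measured in
    constrained ensembles at the grid points xs."""
    dlog = gamma_omega * (q_means - xs)                # derivative at each x
    steps = 0.5 * (dlog[1:] + dlog[:-1]) * np.diff(xs) # trapezoidal segments
    lnrho = np.concatenate(([0.0], np.cumsum(steps)))
    return lnrho - lnrho.max()                         # normalize: max(ln rho) = 0

# toy input: <Q>_x = 1 independently of x, so rho(x) should peak at x = 1
xs = np.linspace(0.0, 2.0, 201)
q_means = np.full_like(xs, 1.0)
lnrho = log_density_of_states(xs, q_means, gamma_omega=50.0)
x_peak = xs[np.argmax(lnrho)]  # sits where <Q>_x = x
```

Because only the derivative is measured, the overall normalization of ρ(x) drops out of the ratio in Eq. (5), which is why this route avoids the exponentially bad errors of a direct histogram.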

B. Lattice actions and observables
The system we are interested in is QCD at finite chemical potential using N_f flavors of staggered fermions, defined by the partition function

Z_B = ∫ DU exp{−S_g[U]} [det M(µ)]^{N_f/4},   (8)

where µ is one-third of the baryon chemical potential µ_B, S_g[U] is the tree-level Symanzik improved gauge action, and four stout smearing steps with ρ = 0.125 are used. (For simplicity, the lattice spacing a was set to 1 in the notation of this and the next two sections. The subscript in Eq. (8) refers to the ensemble in which the partition function or the expectation value is calculated, i.e. here B refers to the fact that a non-zero µ_B is used.) The gauge observables we are interested in are the gauge action, the spatial and temporal plaquette averages, and the spatial average of the Polyakov loop,

P = (1/N_s³) Σ_{x⃗} (1/3) Tr ∏_{t=0}^{N_t−1} U_4(x⃗, t).   (9)

The Dirac matrix M(µ) = D̸(µ) + m satisfies γ_5-hermiticity,

M(µ)† = η_5 M(−µ) η_5,   (10)

where for the staggered operator the γ_5 matrix is represented by η_5 = (−1)^{n_x+n_y+n_z+n_t}, the n_i being the lattice site indices. From among the fermionic observables, we study the quark number density and the chiral condensate density, defined as

n = (T/V) ∂ ln Z_B/∂µ,   ψ̄ψ = (T/V) ∂ ln Z_B/∂m,   (11)

respectively. The quark number density needs no renormalization, while the chiral condensate should be renormalized, both multiplicatively and additively. However, in the present paper we do not take the continuum limit, thus we do not need to carry out the renormalization.
As was mentioned earlier, we cannot directly simulate the theory defined by Eq. (8) with the Hybrid Monte Carlo (HMC) algorithm. Thus, for generating configurations we need to change either the algorithm or the integration measure, and in the latter case use reweighting in the DoS formulation. We proceed with this latter option and choose the phase quenched ensemble for generating configurations. The phase quenched partition function can be written as

Z_PQ = ∫ DU exp{−S_g[U]} |det M(µ)|^{N_f/4} = ∫ DU exp{−S_g[U]} [det M(µ) det M(−µ)]^{N_f/8},   (12)

where, for the last equality, γ_5-hermiticity (Eq. (10)) was used. According to the last form in Eq. (12), the phase quenched theory is equivalent to the theory with N_f/2 flavors having +µ and N_f/2 flavors having −µ chemical potential, i.e. to QCD at isospin chemical potential. However, the above integration measure is not strictly positive: it can also be zero. [50] Therefore, in Monte-Carlo simulations, we use the following partition function [38]

Z_I(λ) = ∫ DU exp{−S_g[U]} [det(M(µ)† M(µ) + λ²)]^{N_f/8},   (13)

where we have included a "pion source" λ, which renders the determinant strictly positive. Here, the τ_i denote the Pauli matrices acting in flavor space and η_5(x) is the "staggered γ_5" defined earlier. The flavor off-diagonal term comes from the introduction of the λψ̄η_5τ_2ψ term in the action before integrating out the fermions, where ψ̄η_5τ_2ψ is proportional to the operator of the pion condensate. Due to the non-zero λ, this off-diagonal term explicitly breaks the subset of chiral symmetry that remains after the introduction of the isospin chemical potential. The expectation values calculated in the above ensemble are denoted as ⟨·⟩_{I,λ}. The probability density of Eq. (13) is used for the HMC simulations both when generating configurations for reweighting to QCD_B and in the density of states method, completed with a constraining factor in the latter case (see below).
In order to study the effect of explicit isospin symmetry breaking on the theory and to properly define the pion condensate in QCD with baryon chemical potential, we introduce

Z_{B,λ} = ∫ DU exp{−S_g[U]} [det(M(−µ)† M(µ) + λ²)]^{N_f/8},   (14)

which is the partition function of QCD at baryon chemical potential with explicit isospin symmetry breaking due to the finite λ. This theory is referred to as QCD_{B,λ}. With the help of Eqs. (13) and (14) the pion condensate operators of QCD_{I,λ} and QCD_{B,λ} are

π_{I,λ} = (N_f/4)(T/V) λ Tr (M(µ)† M(µ) + λ²)^{−1},   (15)

π_{B,λ} = (N_f/4)(T/V) λ Tr (M(−µ)† M(µ) + λ²)^{−1},   (16)

respectively. Somewhat surprisingly, these two operators differ from each other. This is a simple consequence of integrating out the fermions. Similarly, other observables that are derived with the help of the determinants differ from each other in QCD_{B,λ} and QCD_{I,λ}. Besides the pion condensate, we study here the behavior of the quark number density in QCD_{B,λ}, which is defined as

n_{B,λ} = (T/V) ∂ ln Z_{B,λ}/∂µ.   (17)

C. Choosing the operator to constrain

As was discussed in Sec. II, the physics of the system to be studied can give useful hints as to what operators could be useful to include in the fixing term. In particular, in QCD_B the Silver Blaze phenomenon indicates that at low temperatures the vacuum state should persist until the quark chemical potential reaches roughly one-third of the nucleon mass. In the phase quenched (or QCD_{I,λ}) simulation, however, configurations with large pion condensate will occur when µ exceeds half of the pion mass. Their contribution to the observables has to cancel out eventually, and this may require huge statistics. By naive reweighting from QCD_{I,λ}, one cannot really avoid these configurations. Fixing the pion condensate to values near zero, however, could help to suppress the occurrence of such undesirable configurations. Moreover, according to the results presented later, there is a non-zero correlation between the pion condensate and the gauge action density (see Fig. 5), which suggests testing the DoS method with fixing the latter alone as well.
Therefore, in this study, we have applied the DoS formulation fixing either the pion condensate or the gauge action density. The implementation of the fixing for the gauge action density is straightforward, so we turn to the pion condensate.
As was mentioned, the operators for measuring the pion condensate in QCD_{B,λ} and QCD_{I,λ} differ: π_{I,λ} is real, while π_{B,λ} is in general complex, which makes the latter more complicated to constrain. Here, we do not elaborate on this question further and continue with constraining the pion condensate of QCD_{I,λ}.
Usually, traces in lattice QCD are calculated stochastically, according to

Tr A ≈ (1/N_v) Σ_{i=1}^{N_v} η^{(i)*}_j A_{jk} η^{(i)}_k,   (18)

where j and k label the components of the noise vector η^{(i)}, and N_v denotes the number of noise vectors. Applying this formula to the pion condensate would be very expensive if one wanted to use it to reach reasonable precision when calculating the action for the accept/reject steps. We can overcome this problem with the help of the N_pf complex scalar fields (also called pseudofermion fields) that are used in the calculation of the determinant. The determinant of Eq. (13) is represented with these fields as

[det(M(µ)† M(µ) + λ²)]^{N_f/8} ∝ ∫ ∏_{j=1}^{N_pf} Dφ_j† Dφ_j exp{−Σ_{j=1}^{N_pf} φ_j† (M(µ)† M(µ) + λ²)^{−N_f/(8 N_pf)} φ_j}.   (19)

Usually, the φ_j scalar fields are refreshed at the beginning of each trajectory in the HMC algorithm, as they appear only quadratically and can thus be conveniently generated with the above distribution. We can use Eq. (19) to give another expression for the pion condensate, π_φ, obtained by differentiating the pseudofermion action with respect to λ (Eq. (20)). Note that the ensemble average of the operator of Eq. (20) agrees with the pion condensate of the ensemble of Eq. (13); the only difference is that the determinant is represented with pseudofermion fields in the former case. We can now include this operator in the constraining term of Eq. (3). In this case the pseudofermion fields no longer appear quadratically, thus a refreshment at the beginning of each trajectory using the heatbath is no longer possible. Instead, we use them as dynamical fields in the HMC evolution.
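For the simple quadratic case, S_pf = φ†(M†M)⁻¹φ (an integer power of the determinant), the heatbath refresh mentioned above amounts to drawing a Gaussian vector η and setting φ = M†η, so that φ follows exp{−φ†(M†M)⁻¹φ}. A minimal sketch, with a small random matrix standing in for the (smeared, λ-shifted) staggered operator; all names are illustrative:

```python
import numpy as np

def refresh_pseudofermion(M, rng):
    """Heatbath refresh for S_pf = phi^dag (M^dag M)^{-1} phi:
    draw eta with density exp(-eta^dag eta) and return phi = M^dag eta."""
    n = M.shape[0]
    eta = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2.0)
    return M.conj().T @ eta

# toy "Dirac operator": a well-conditioned random complex matrix
rng = np.random.default_rng(1)
M = np.eye(8) + 0.1 * (rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8)))
MdM = M.conj().T @ M

# sanity check: the pseudofermion action averages to dim(M) = 8,
# since phi^dag (M^dag M)^{-1} phi = eta^dag eta for every sample
samples = [refresh_pseudofermion(M, rng) for _ in range(4000)]
action_mean = np.mean([(p.conj() @ np.linalg.solve(MdM, p)).real for p in samples])
```

Fractional powers (as in Eq. (19) for N_f/(8 N_pf) not an integer) would instead require a rational approximation of the inverse power; this sketch only covers the quadratic case.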
We note that the operators of Eqs. (15) and (20) do not give exactly the same result on a given configuration; they agree only in the limit where the number of noise vectors and the number of pseudofermions go to infinity.
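The noise-vector estimate of Eq. (18) itself is easy to sketch; with Z₂ noise the estimator is unbiased for any matrix and exact for diagonal ones (since η_j² = 1). A toy illustration in our own notation, not the production code:

```python
import numpy as np

def stochastic_trace(apply_mat, dim, n_vec, rng):
    """Tr A ~ (1/N_v) sum_i eta_i^T A eta_i with Z2 noise vectors eta_i."""
    acc = 0.0
    for _ in range(n_vec):
        eta = rng.choice([-1.0, 1.0], size=dim)  # Z2 noise: eta_j^2 = 1
        acc += eta @ apply_mat(eta)
    return acc / n_vec

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 64))
est = stochastic_trace(lambda v: A @ v, 64, n_vec=500, rng=rng)
exact = np.trace(A)  # the estimator scatters around this value

# for a diagonal matrix the Z2 estimator is exact with any N_v
D = np.diag(np.arange(64.0))
est_diag = stochastic_trace(lambda v: D @ v, 64, n_vec=5, rng=np.random.default_rng(2))
```

The variance comes entirely from the off-diagonal elements, which is why a few noise vectors suffice for nearly diagonal operators but the pion condensate (a dense inverse) is expensive, as the text notes.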

D. Reweighting
Before presenting the results, we briefly review the reweighting formulas that we use both in the DoS method and for comparison. For calculating the expectation value of an operator O in QCD_B we use

⟨O⟩_B = ⟨O w_B⟩_{I,λ} / ⟨w_B⟩_{I,λ}.   (21)

Here Z_B and Z_I(λ) are given by Eqs. (8) and (13), respectively, and w_B denotes the weight. The logarithm of this weight is given by

ln w_B = (N_f/4) ln det M(µ) − (N_f/8) ln det(M(µ_0)† M(µ_0) + λ²),   (22)

where µ_0 denotes the chemical potential at which the simulations are carried out and µ is the chemical potential we reweight to.
Since det M(µ) is complex, its logarithm is defined only up to an additive 2kπi term, where k is an integer. When N_f/4 is not an integer, i.e. when rooting is involved, this means that the weight is not defined unambiguously. One possibility is to choose the appropriate k by demanding the weight to be a continuous function of µ along the positive real axis [39]; we note, however, that the correctness of this procedure and, more generally, of the rooting procedure with and without µ is still, to some extent, under investigation [40,41]. Nevertheless, the above ambiguity does not affect our reweighting in the aforementioned case, because we use N_f = 4 in this paper. Besides that, as was mentioned in Sec. II B, we also reweight to QCD_{B,λ} at finite λ (Eq. (14)). We have several reasons for doing this. First, we would like to calculate the pion condensate in QCD at finite baryon chemical potential (Eq. (16)), which can be nonzero on average on a finite lattice only when one includes the explicit breaking with λ. Second, we would like to see whether the effect of the explicit breaking in QCD_{B,λ} makes any difference when we carry out the λ → 0 extrapolation as compared to reweighting directly to λ = 0. When reweighting to QCD_{B,λ} at finite λ, the logarithm of the weight becomes

ln w_{B,λ} = (N_f/8) [ln det(M(−µ)† M(µ) + λ²) − ln det(M(µ_0)† M(µ_0) + λ²)].   (23)

Thus, in this case, the above mentioned ambiguity persists, hence we use the continuity of the weights as a function of λ and µ to choose the appropriate Riemann sheet. This can be done, however, only if one knows the analytical dependence of the determinant on µ and λ. The former is known in the λ = 0 case, due to the so-called reduction formula [39]. Regarding the λ-dependence, we calculated the eigenvalues of M(−µ)† M(µ), and used these with a λ² shift to obtain the determinant.
For determining the appropriate Riemann sheet of ln det(M(−µ)† M(µ) + λ²), we fixed the imaginary part of ln det(M(−µ)† M(µ)) by comparing it to 2 ln det M(µ) (the latter obtained via the reduction formula), and then used the same 2kπi term when calculating with finite λ via

ln det(M(−µ)† M(µ) + λ²) = Σ_i Ln(ξ_i + λ²) + 2kπi,   (24)

where the ξ_i are the eigenvalues of M(−µ)† M(µ) and 2kπi = 2 Im ln det M(µ) − Im Σ_i Ln ξ_i, with Ln the logarithm with imaginary part in (−π, π). One can then use formulas similar to Eq. (21), but with the weights of Eq. (23), namely

⟨O⟩_{B,λ} = ⟨O w_{B,λ}⟩_{I,λ} / ⟨w_{B,λ}⟩_{I,λ}.   (25)

This procedure is quite expensive (the computational cost is O((N_s³ N_t)³)), thus we carried it out only on our smallest lattices.
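The branch matching described above can be sketched as follows: fix the 2kπi offset at λ = 0 from the reference phase 2 Im ln det M(µ) (from the reduction formula), then reuse it for the λ²-shifted eigenvalue sum. A schematic stand-in, with the eigenvalues and the reference phase supplied from outside; names are ours:

```python
import numpy as np

def logdet_shifted(xi, lam2, im_logdet_ref):
    """ln det(M(-mu)^dag M(mu) + lam^2) on the Riemann sheet fixed at lam = 0.
    xi: eigenvalues of M(-mu)^dag M(mu); im_logdet_ref: 2*Im ln det M(mu)."""
    xi = np.asarray(xi, dtype=complex)
    # principal-branch sum at lambda = 0; it differs from the reference
    # imaginary part by an integer multiple of 2*pi, which fixes the sheet
    base_im = np.sum(np.log(xi)).imag
    k2pi = 2.0 * np.pi * np.round((im_logdet_ref - base_im) / (2.0 * np.pi))
    val = np.sum(np.log(xi + lam2))  # principal-branch sum at finite lambda^2
    return val + 1j * k2pi

# toy check with real positive eigenvalues (no phase, k = 0)
val = logdet_shifted([1.0, 2.0, 3.0], lam2=0.5, im_logdet_ref=0.0)
```

In practice the ξ_i come from a full diagonalization, which is what makes the procedure scale as the cube of the matrix dimension.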
In the DoS formulation, we used the weights of Eq. (22) as well. For comparison, one can also reweight directly,

⟨O⟩_B = ⟨O w_B⟩_I / ⟨w_B⟩_I,   (26)

where ⟨O⟩_I denotes the expectation value of the operator O in QCD_I, which is identical to the phase quenched ensemble. Using a leading order expansion for the weights of Eq. (26) (cf. Ref. [42]), one can rewrite ln w_B as

ln w_B ≈ i Im ln det M(µ_0) − (V/(2T)) λ_w π,   (27)

where π is the pion condensate operator in QCD_{I,λ} (cf. Eq. (15)). In Eq. (27) we introduced the parameter λ_w and performed a Taylor expansion in it. On the one hand, this formula shows, at least to leading order in λ_w, that when one reweights from QCD_{I,λ} to QCD_B, apart from the phase, a large pion condensate suppresses the weight. On the other hand, the above formula could be practically useful as well, because estimating the weight by measuring the pion condensate is computationally much cheaper than calculating det(M(µ_0)† M(µ_0) + λ²). Unfortunately, however, we found that when one reaches the pion condensation region, the formula is no longer reliable and overestimates the actual weights (cf. [43]). Whether the next term in the Taylor expansion improves the behavior is left for further study; in the following we therefore calculate det(M(µ_0)† M(µ_0) + λ²) directly by Gaussian elimination. We note here that one can take the fixing term into account with the help of reweighting as well, according to

Z_B/Z_{I,λ} ∝ ρ(x) ⟨w_B exp{(γΩ/2)(Q − x)²}⟩_x,   ⟨O⟩_B = ⟨O w_B exp{(γΩ/2)(Q − x)²}⟩_x / ⟨w_B exp{(γΩ/2)(Q − x)²}⟩_x,   (28)

where Q is the fixed operator, Ω is the lattice volume and γ is the parameter that controls the width of the Gaussian. ρ(x) is given by Eq. (4) applied to QCD, with the Φ fields corresponding to the link variables and g chosen to be the integrand of Z_I(λ) (see Eq. (13)). The identities of Eq. (28) are valid for all x. Although the exponential factor in the expectation values seems quite problematic, since there is a volume factor in the exponent, we investigate whether the distribution of Q − x can be narrow enough to compensate the large γΩ/2 factor. We refer to this approach as direct reweighting from the constrained ensemble in the following.
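All of the reweighting formulas above share the same ratio-of-means structure. A minimal sketch of the estimator for complex weights and a real observable (variable names are ours):

```python
import numpy as np

def reweighted_expectation(obs, weights):
    """<O>_target = <O w>_source / <w>_source; for a real target theory
    the physical value is the real part of the ratio."""
    o = np.asarray(obs, dtype=float)
    w = np.asarray(weights, dtype=complex)
    return (np.mean(o * w) / np.mean(w)).real

# toy check: weights {1,1,2,2} reweight obs {1,1,3,3} to 14/6
val = reweighted_expectation([1.0, 1.0, 3.0, 3.0], [1.0, 1.0, 2.0, 2.0])
```

The sign and overlap problems enter through the denominator: when ⟨w⟩ is small compared to its fluctuations, the ratio becomes uncontrolled, which is exactly what the reliability criterion below tests.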
If the fluctuation of the exponent could be made small, and Z_B as well as ⟨O⟩_B did not depend on x (at least over a wide enough interval), then the method might provide reasonable results without the integration over x.
Using this formalism (Eqs. (4)-(7)), we can perform importance sampling simulations in the constrained ensemble and, with the help of those results, recover the expectation values in the original ensemble defined by Eq. (1).
Since the partition sum can be written as Z_B = Tr(exp{−(H − µN)/T}) and the Hamiltonian H commutes with the particle number operator N, Z_B is a sum of positive numbers, Z_B = Σ_{n,N} exp{−(E_n − µN)/T}. As a consistency or reliability criterion, we demand that the DoS as well as reweighting provide a positive Z_B within at least 2 standard deviations. Every observable inherits the relative error of Z_B, and thus if the measured Z_B is not positive within a few standard deviations, then even the magnitude of Z_B is unclear and the results will be unreliable.
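Operationally, the 2-standard-deviation criterion is a simple test on the sample of measured weights. A sketch, assuming independent configurations (so that the naive standard error applies; correlated data would need binning first):

```python
import numpy as np

def zb_reliable(re_weights, n_sigma=2.0):
    """True if the sample mean of Re w_B is positive by at least
    n_sigma standard errors, i.e. the sign of Z_B is resolved."""
    w = np.asarray(re_weights, dtype=float)
    err = w.std(ddof=1) / np.sqrt(len(w))
    return w.mean() - n_sigma * err > 0.0

ok = zb_reliable([1.0, 1.1, 0.9, 1.0])    # clearly positive mean
bad = zb_reliable([1.0, -1.0] * 50)       # mean consistent with zero
```

The same test applies to the DoS case after combining the x-slices into the integral ∫ dx ⟨w_B⟩_x ρ(x), with the error propagated through the integration.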

III. NUMERICAL RESULTS

A. Simulation details
We performed simulations with N_f = 4 flavors of staggered fermions on 4³ × (8, 12, 16) and 6³ × (6, 8, 12) lattices. We used the following quark mass and β pairs in the simulations: for N_s = 4, (ma, β) = {(0.05, 2.9), (0.02, 2.74)}, and for N_s = 6, (ma, β) = {(0.05, 2.9), (0.02, 2.9)}. We used several λa and γ values at some simulation points to see their effect on the results. In order to determine the lattice spacing (using w_0 [44]) and the pion and nucleon masses, we used N_t = 24 and N_t = 32 lattices. The results and the simulation parameters for these runs are summarized in Table I. We also checked that the simulation points are in the confinement region: the Polyakov loop was small in our simulations also at finite µa. We estimated the pseudocritical β_c on a 16³ × 6 lattice by calculating the renormalized chiral susceptibility (subtracting the chiral susceptibility measured on a 16³ × 32 lattice and multiplying by the square of the quark mass). At m_π ≈ 335 MeV, we found β_c to be around 3.36, which corresponds to T_c ∼ 137 MeV. For the larger m_π, using the same lattice sizes, we found T_c to be in the range 130-160 MeV. Since the lattices are quite coarse, we applied four stout smearing steps with ρ = 0.125 to reduce finite lattice spacing effects. Finite volume effects are expected to be moderate on the lattices with quark mass ma = 0.02 (m_π a N_s ∼ 2.3) and somewhat smaller at ma = 0.05 (m_π a N_s ≳ 3).

Table I: Summary of zero temperature runs to determine the pion and nucleon masses and the lattice spacing. We note that the results and errors for the nucleon mass are quite uncertain: at small quark masses, the relative error of the staggered nucleon correlator grows exponentially with time measured in lattice units; therefore, with the accumulated statistics the effective mass plateau is only faintly recognizable.

B. Fixing π_φ
In this section we present the DoS results that were obtained by constraining the pion condensate π_φ (Eq. (20)). We achieve this as discussed at the end of Sec. II C, and we used N_pf = 1 pseudofermion field in most of our simulations. We note that with only one pseudofermion field, in the range x ∈ [−0.18, 0.18], where x refers to the value at which one would like to constrain π_φ, the simulations have a tendency to get 'stuck' and even break down because of very large HMC forces if the timestep is too large. With a sufficiently small timestep, for which the acceptance ratio is larger than ∼ 0.9, no such problems occur. No such problem was found when using more than one pseudofermion field, but in these cases the simulation is more expensive.
It is important to recall that π_φ does not give the same result as the pion condensate calculated with the help of noise vectors (Eqs. (15), (18)); they are equal only in the limit where the number of pseudofermions as well as the number of noise vectors goes to infinity. ⟨π_φ⟩_x and ⟨π⟩_{I,x} are shown in Fig. 1.

FIG. 1: The expectation values of the pion condensate in the constrained ensembles using noise vectors (⟨π⟩_{I,x}) and with the help of pseudofermions (⟨π_φ⟩_x), left and right panel, respectively. The simulations were carried out by constraining π_φ using N_pf = 1, 2, 3.
We emphasize again that both operators for the pion condensate in QCD_{I,λ} are real and positive definite at finite λ. Therefore, one cannot constrain π_φ to zero or negative values, but one can push it closer to zero, e.g. by choosing a negative x in the fixing term. We found that it is more efficient to proceed this way than to decrease the width of the Gaussian of the fixing term while using a small positive x or x = 0, because a smaller width (larger γ) results in larger forces and slows down the simulation at other values of x as well. The DoS setup is validated at µ = 0 by fixing π_φ, calculating the full DoS integrals, and checking that we obtain results consistent with simulations using no fixing. After this technical detour, we present results obtained by the method. In Fig. 2, we show the expectation value of the real part of the weights as a function of the pion condensate for various chemical potentials. (See also the left panel of Fig. 6 for a similar plot in the case of the gauge action fixed simulations.) As one can see, the decay of the weights as a function of the pion condensate is well described by an exponential.
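The exponential decay of the average weights can be quantified by a linear fit to the logarithm; a sketch with synthetic data, where the fitted slope plays the role of the µ-dependent decay rate read off from such plots (names and numbers are illustrative):

```python
import numpy as np

def fit_exponential_decay(x, w_mean):
    """Fit <Re w_B>_x ~ A * exp(-b * x) via a straight line in log space;
    returns (A, b)."""
    slope, intercept = np.polyfit(x, np.log(w_mean), 1)
    return np.exp(intercept), -slope

x = np.linspace(0.0, 0.2, 20)
w = 3.0 * np.exp(-12.0 * x)   # synthetic weights with known decay rate 12
A, b = fit_exponential_decay(x, w)
```

Log-space fitting implicitly weights small-w points heavily; with real, noisy weight averages one would fit with the propagated errors instead.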
In Fig. 3 we plot various quantities as a function of x for two chemical potential values, below and above m_π/2. Note that at the larger chemical potential the peak of the density of states ρ(x) shifts to larger x values, while the average weights are much smaller at large x, since the fall-off of the weights is steeper at larger µa (see also Fig. 2).
In the DoS integral, the position of the peak of ⟨w_B⟩_x ρ(x) determines which region of x gives the largest contribution to Z_B. The shift of the peak position is determined by the decay of the weights as well as by that of ρ(x). Since the decay of ρ(x) does not depend significantly on µ, the µ dependence of the weights (Fig. 2) suggests that the shift of the peak grows as µ is increased. This motivates trying to include this shift manually by cutting the DoS integrals of Eq. (5). In order to analyze the effect of omitting configurations with larger pion condensate, we cut the integrals in the numerator and in the denominator at the same x value, and define O_{B,cut}(x_c) as the ratio of the two cut integrals, i.e.

O_{B,cut}(x_c) = ∫^{x_c} dx ⟨O w_B⟩_x ρ(x) / ∫^{x_c} dx ⟨w_B⟩_x ρ(x).   (29)
The cut results, however, depend on the value at which one cuts the DoS integrals. Since no plateau is visible before the pion condensate starts rising sharply (see Fig. 3), one cannot really select a correct, unique value among the possibilities; the cut results are ambiguous. This can be understood by noting that in the range where ⟨π⟩_{I,x} is really small, the value of O_{B,cut}(x_c) (where O is an arbitrary operator) is predominantly determined by the value of the integrands at x_c, as ρ(x) is strongly rising in this region. In other words, in the range in question, for any observable one gets

O_{B,cut}(x_c) ≈ ⟨O w_B⟩_{x_c} / ⟨w_B⟩_{x_c},

which is an estimate of ⟨O⟩_{B,x_c}, the expectation value of the operator O in QCD_B with the constraint characterized by x_c. Of course, ⟨O⟩_{B,x_c} can depend on x_c, even though ⟨π⟩_{I,x_c} and ⟨π_φ⟩_{x_c} are small. To overcome the x_c dependence, one could try to carry out the other type of reweighting that was introduced in Eq. (28). We found that in the range of low x (x ≲ −0.2), where the pion condensate fluctuates less, reweighting with the modified weights of Eq. (28), including the exponential factor, is manageable. However, this does not eliminate the x dependence. The results at µ < m_π/2 (see the chiral condensate in the upper panel of Fig. 3) also suggest that in order to obtain an x-independent, correct expectation value, one has to include the configurations with large pion condensate.
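The x_c dependence of the cut integrals of Eq. (29) is easy to probe numerically. A sketch with trapezoidal integration on a grid of x values; the input arrays would come from constrained-ensemble measurements, here they are placeholders:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal rule on a (possibly non-uniform) grid."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def cut_dos_ratio(xs, ow_mean, w_mean, rho, x_cut):
    """O_{B,cut}(x_c): numerator and denominator DoS integrals,
    both truncated at the same x_cut."""
    m = xs <= x_cut
    return _trapz(ow_mean[m] * rho[m], xs[m]) / _trapz(w_mean[m] * rho[m], xs[m])

xs = np.linspace(-0.3, 0.3, 61)
rho = np.exp(-50.0 * (xs - 0.1) ** 2)   # placeholder density of states
w_mean = np.exp(-5.0 * xs)              # placeholder average weights
ow_mean = 2.0 * w_mean                  # if <O w>_x = 2 <w>_x, the ratio is 2 for any cut
val = cut_dos_ratio(xs, ow_mean, w_mean, rho, x_cut=0.0)
```

When ⟨O w⟩_x/⟨w⟩_x itself varies with x, the ratio inherits that variation near x_cut, which is the ambiguity described in the text.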
In the next section we show that since the gauge action and the pion condensate are slightly correlated, fixing the gauge action also fixes the pion condensate, which has a similar effect on the average weights. This allows much cheaper simulations (using gauge action fixing) to be carried out with similar results; we therefore concentrate on those in the following.

C. Fixing the gauge action density
We now turn to the case where the gauge action density is the fixed quantity. As an illustration, Fig. 4 shows the histogram of the gauge action density, the expectation value of the real part of the weights, and the pion condensate on the constrained ensembles characterized by x, the value at which s_g is constrained. Fig. 4 shows that by constraining the gauge action density to smaller values, the pion condensate also becomes small, and simultaneously the real part of the weights increases.
The imaginary part of ⟨w_B⟩_x fluctuates around zero at all x. The correlation between the gauge action density and the pion condensate in QCD_{I,λ} is also shown in Fig. 5. Although the correlation is weak in the interval of x in which ρ ∼ O(1), one can reach configurations with low pion condensate below x ∼ 13.5. Similarly to the pion condensate fixing, the real part of the expectation value of the weights as a function of the pion condensate can be described by an exponential with a strongly µ dependent slope, see the left panel of Fig. 6. On the right panel of Fig. 6 we show the x dependence of the average weights for several volumes. Analyzing the cut DoS integral results (defined in the previous section in Eq. (29)) in the case of fixing s_g confirms that when µ is large (µ > m_π/2), the x-dependence of the operator dominates the final results for the expectation values, and one cannot choose a unique cut value, because the results depend on x. Moreover, in this case, the cut results can be physically problematic. For example, the Polyakov loop gets enhanced at low x, which suggests that on those configurations the (approximate) Z(3) symmetry is broken. The results for the pion condensate of QCD_{B,λ} in Fig. 7 show that, as λ goes to zero, π_{B,cut}(x) also goes to zero at all x, even at the larger chemical potential, although with large errors. This indicates that π_{B,cut}(x) is dominated on these lattices by a contribution from the explicit breaking due to the finite λa.
As in the case of constraining the pion condensate, the findings discussed in the previous paragraph suggest that the configurations with well-behaving weights are not the appropriate configurations to reproduce the expected physics at low temperature. Thus we abandon the idea of cutting the integrals by hand, and in the following we analyze the results by calculating the full DoS integrals.

As was mentioned above, we demand that the DoS as well as reweighting provide a positive Z_B. Therefore we try to collect enough configurations to satisfy this criterion at least at the 2 sigma level, which we call our reliability condition. Since in the case of the DoS, Z_B = Z_{I,λ} ∫ dx ⟨w_B⟩_x ρ(x), while in the case of reweighting, Z_B = Z_{I,λ} ⟨w_B⟩_{I,λ}, we demand that ∫ dx ⟨Re w_B⟩_x ρ(x) > 0 and ⟨Re w_B⟩_{I,λ} > 0 hold at 2 sigma, for the DoS and for reweighting from QCD_{I,λ}, respectively. The positive constant factor Z_{I,λ} does not modify the reliability criterion. Furthermore, we expect ∫ dx ⟨Im w_B⟩_x ρ(x) = 0 and ⟨Im w_B⟩_{I,λ} = 0 to hold, respectively, for the DoS and for reweighting.
In Fig. 8, we show the results for the quark number density obtained by the DoS method as well as by reweighting from QCD_{I,λ} for the 4³ × 8 ensembles. Accumulating around O(10⁴) configurations at the points where ρ(x) is O(1), we found that the DoS method is reliable up to µa ∼ 0.40 (and indeed gives zero quark number density within errors) on 4³ × 8 at pion mass m_π ≈ 336 MeV. On the finer 4³ × 8 lattice with β = 2.9 and pion mass m_π ≈ 437 MeV, µa ∼ 0.45 can be reached with similar statistics. These correspond to µ/m_π ∼ 0.7 and 0.62, respectively, or µ/m_N ∼ 0.22-0.23. Thus, we can reach considerably higher µa values than m_π a/2 on these small lattices. Two comments are in order. First, the highest reliably reachable µa value certainly depends on the accumulated statistics; we will elaborate on this later. Second, strictly speaking, the pion condensation region of QCD_I starts at m_π/2 only at zero temperature, and it can bend toward higher chemical potentials as the temperature increases. Therefore, to have a reliable comparison, it is important to locate the value aµ_c^(π) where pion condensation sets in at the given temperature.
In order to determine this, we carried out simulations at different λa values at several chemical potentials and studied the λa → 0 limit. This extrapolation, however, is not sufficient to determine aµ_c^(π) precisely. Following Refs. [45,46], we also tried to fit the results with the appropriate formula of chiral perturbation theory [46], but these fits were rather unreliable, probably because the volume is not large enough. Alternatively, one can obtain aµ_c^(π)(T) directly from the lattice simulations with the help of the spectral representation of the pion condensate [42,47]. To obtain this, the singular values ξ_n of the Dirac operator have to be calculated, which are the eigenvalues of M†(µ)M(µ). Although this approach is again valid only if the volume is large enough, following Refs. [42,43,48], one can define π^(impr.) according to

π^(impr.) = lim_{λ→0} (N_f/8) ∫ dξ ρ(ξ) 2λ/(ξ² + λ²),   (30)

where the spectral density ρ(ξ) is defined as

ρ(ξ) = (T/V) ⟨Σ_n δ(ξ − ξ_n)⟩.   (31)

In the integral over ξ, ρ(ξ) is multiplied by a representation of the Dirac-δ distribution, thus by taking the λ → 0 limit one arrives at N_f πρ(0)/8, which is the improved operator. Therefore, it is enough to determine the lowest 200-300 singular values, build a histogram for the integrated spectral density, and take the ξ → 0 limit of a³⟨N(ξ)⟩_{I,λ}/ξ, which, multiplied by π/2, gives the improved pion condensate. Unfortunately, this approach again cannot be applied at N_s = 4, probably due to the small volume, but it seems to provide reasonable results at N_s = 6 (see Fig. 9, right panel). Fig. 9 confirms that we are in the pion condensed phase on 6⁴ at β = 2.9, ma = 0.05 above µ > m_π/2. In order to test how the limit of reliable applicability of the DoS or reweighting changes as we decrease the temperature, we performed simulations at larger temporal sizes, namely N_t = 12 and N_t = 16.
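The histogram-based extraction can be sketched as follows: count the singular values below a small cutoff ξ, form N(ξ)/ξ, and multiply by π/2 (the N_f = 4 prefactor). The normalization here is schematic, following the a³⟨N(ξ)⟩/ξ prescription in the text; names and numbers are ours:

```python
import numpy as np

def improved_pion_condensate(singular_values, volume, xi_cut):
    """pi^(impr.) ~ (pi/2) * N(xi_cut) / (V * xi_cut), where N counts
    singular values below xi_cut (schematic normalization, N_f = 4)."""
    sv = np.sort(np.asarray(singular_values, dtype=float))
    n_below = int(np.searchsorted(sv, xi_cut))
    return 0.5 * np.pi * n_below / (volume * xi_cut)

# toy spectrum with constant spectral density: 1000 values per unit of xi,
# so N(xi)/xi -> 1000 and the estimate approaches 0.5*pi*1000/V
sv = np.arange(1, 1001) * 1.0e-3
val = improved_pion_condensate(sv, volume=6 ** 4, xi_cut=0.0995)
```

In practice one checks that the estimate is stable as ξ_cut is lowered (i.e. that a genuine plateau forms), which is exactly what fails on the smallest volumes.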
We found that although at 4³ × 12, β = 2.74, ma = 0.02, µa = 0.34 (µ/m_π ≈ 0.6), Re Z_B > 0 can be satisfied at the ∼ 9 sigma level using around 1800 configurations at each x, we cannot reach a positive Z_B even at the 1 sigma level at µa = 0.38 (µ/m_π ≈ 0.67) using around 5000-6000 configurations. Estimating the number of needed configurations from the scaling of the absolute errors with the inverse square root of the number of configurations at different x suggests that more than 10⁵ further configurations are needed at each x where ρ(x) ∼ O(1). Note, however, that this estimate becomes increasingly unreliable as the relative error of Z_B grows: a one sigma shift in the average can change the estimate by several orders of magnitude.
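The statistics extrapolation uses the 1/√N scaling of the absolute error; a one-line sketch (the numbers are illustrative, not the paper's):

```python
import math

def configs_needed(n_current, err_current, err_target):
    """1/sqrt(N) scaling: configurations needed to shrink the absolute
    error of <Re w_B> from err_current to err_target."""
    return math.ceil(n_current * (err_current / err_target) ** 2)

# e.g. 5000 configurations with a 4x-too-large error -> 16x the statistics
n = configs_needed(5000, err_current=0.04, err_target=0.01)
```

As the text warns, when the mean itself is barely resolved, err_current carries a large uncertainty, and the quadratic dependence amplifies a one sigma shift into orders of magnitude in the estimate.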
It is worth emphasizing that the positivity of $\mathrm{Re}\,Z_B$ can be established with far fewer configurations at smaller chemical potentials. In this parameter range the fluctuations of the observable itself may dominate the statistical error of an expectation value (depending on the observable). This is especially relevant below $\mu = m_\pi/2$, where the phase quenched theory also exhibits Silver-Blaze behavior and reweighting therefore has very little effect.
In Fig. 10 we show an estimate of the number of configurations needed to measure $Z_B$ to one percent accuracy on a $4^3 \times 8$ lattice at $\beta = 2.9$ and $m = 0.05$. The number of configurations needed increases rapidly with increasing $\mu$: in the small-$x$ region, where the average weight is larger, it is approximately constant, but it grows at least exponentially with $x$ in the region close to the maximum of $\rho(x)$ (which lies in the range $15 \le x_{\max} \le 16$ for the parameters of the plot). A precise calculation of observables at large chemical potentials and small temperatures with this method is thus beyond the reach of current computational capabilities, even on very small lattices.
We also estimated the performance of direct reweighting (without introducing a fixing term in the action) on the $4^3 \times 8$ lattice at $\beta = 2.9$ and $m = 0.05$. We observe similar behavior: we fail to satisfy the $\mathrm{Re}\,Z_B > 0$ criterion at the 2 sigma level around $\mu/m_\pi \approx 0.76$, even after collecting $\sim 1.2 \times 10^5$ configurations. The density is consistent with zero for the chemical potentials at which the positivity of $Z_B$ is satisfied. Apart from the overlap and sign problems, we mention a further limitation to safely reaching $\mu \sim m_N/3$ on small lattices. As one can see in Fig. 9, the pion condensate starts to decrease around $\mu/m_\pi \sim 1.4$ (at $\lambda a = 0.01$). This is a saturation effect [38]: the isospin density is close to half-filling at these chemical potentials. Far from the continuum, $\mu = m_N/3$ may come close to this region where saturation effects dominate the physics, further complicating the issues of reweighting.
At $N_s = N_t = 6$, $\beta = 2.9$, $ma = 0.05$, the $\mathrm{Re}\,Z_B > 0$ criterion holds at more than 2 sigma up to the chemical potential range $\mu/m_N \sim 0.2 \ldots 0.22$, see Fig. 11. However, unlike the $N_s = 4$ case, at the last chemical potential at which $\mathrm{Re}\,Z_B > 0$ is satisfied at more than 2 sigma, i.e. at $\mu a = 0.34$, the quark number density $a^3 n_B$ becomes nonzero at the $\sim 5$ sigma level. This chemical potential corresponds to $\mu/m_\pi \sim 0.60$ ($\mu/m_N \sim 0.20$). On the same lattice size at $T \approx 100$ MeV, but at a smaller quark mass $ma = 0.02$, the reliability condition breaks down at $\mu/m_N \sim 0.17$ ($\mu/m_\pi \sim 0.76$), so the breakdown clearly does not follow a scaling related to $m_\pi$ (Fig. 11). If it did, then based on $(m_\pi a)^2 \propto ma$ one would expect the reliability condition to break down around $\mu/m_N \sim 0.14$ (corresponding to $\mu/m_\pi \sim 0.61$). For these simulations we accumulated 4000-5000 configurations at 20 values of $x$ at $ma = 0.05$, $\mu a = 0.34$, and around 3000-4000 configurations at $ma = 0.02$, $\mu a = 0.28$. Indications of the onset were found around $\mu/m_N \sim 0.18$ at $ma = 0.05$, and around $\mu/m_N \sim 0.16$ at $ma = 0.02$.
Although carrying out the reweighting at larger chemical potentials would require orders of magnitude higher statistics, our current results indicate that the transition line of the nuclear onset bends toward lower chemical potentials as the temperature increases.

IV. CONCLUSIONS
In this paper, we have studied the density of states (DoS) method and direct reweighting to explore QCD at non-zero baryon chemical potential. We have included a 'fixing term' in the action of QCD at finite isospin chemical potential, which constrains the values of a chosen operator. Our investigations focus on the cold and dense region of the phase diagram, where we sought to observe the Silver-Blaze phenomenon.
In the DoS method, the final results are obtained by calculating the appropriate integrals and normalizing them according to Eq. (5). When the method is applied at non-zero baryon chemical potential, complex weights $w_B$ must be included. In order to classify the results of reweighting as reliable, we have required the criterion $\mathrm{Re}\,Z_B > 0$ to hold at the 2 sigma level at least.
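Schematically, this combination step can be sketched as follows. This is a minimal illustration, not the paper's actual normalization in Eq. (5): the grid, the trapezoidal integration, and the function names are assumptions made for the sketch. The fixed-$x$ averages $\langle w_B\rangle_x$ and $\langle O\,w_B\rangle_x$ are integrated against $\rho(x)$, and their ratio gives the target-theory expectation value; the reliability check $\mathrm{Re}\,Z_B > 0$ is applied to the denominator.

```python
import numpy as np

def _trapz(f, x):
    # simple trapezoidal rule (works for complex integrands)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))

def dos_expectation(x, rho, w_mean, ow_mean):
    """Combine constrained (fixed-x) measurements into a full expectation
    value: <O>_B = integral dx rho(x) <O w_B>_x / integral dx rho(x) <w_B>_x.

    x      : grid of the constrained observable
    rho    : density of states on that grid
    w_mean : complex <w_B>_x at each x
    ow_mean: complex <O w_B>_x at each x
    """
    z_b = _trapz(rho * w_mean, x)  # Z_B up to an x-independent constant
    if z_b.real <= 0.0:
        raise ValueError("Re Z_B <= 0: result not classified as reliable")
    return (_trapz(rho * ow_mean, x) / z_b).real
```

The explicit check on the denominator mirrors the reliability criterion: once $\mathrm{Re}\,Z_B$ is not resolved to be positive, the ratio is meaningless and no result is quoted.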
We have tested fixing the gauge action as well as fixing the pion condensate ($\pi_\phi$ of QCD$_{I,\lambda}$) and have observed that the results behave similarly in the two cases. When the pion condensate is constrained to be small, the weight factors $w_B$ are larger. Constraining the gauge action to a small value lowers the pion condensate, so the weights become larger in this case as well. In practice, gauge action fixing is more economical, as the simulations are much cheaper.
One of the main motivations for this work was to investigate whether the DoS integrals receive their contributions mainly from configurations with a low pion condensate. The results revealed that, in the parameter range we used, regions of small as well as large pion condensate contribute to the final results at finite baryon chemical potential. Although we have indeed observed that the weight factor depends strongly on the pion condensate, the shift in the peak of $\rho(x)\langle w_B\rangle_x$ is moderate on small lattices. At larger volumes the weights decay faster, but $\rho(x)$ also becomes narrower, which results in a negligible shift of the peak position. We have also investigated whether one can improve the situation by cutting the integrals over $x$ by hand, allowing only configurations with a small pion condensate to contribute, as suggested in [20]. Although well-behaved weights can be obtained this way even at $\mu > m_\pi/2$, there is no region of the upper limit of the integrals in which the cut observables are constant. Therefore, we conclude that cutting the DoS integrals is not a viable procedure for determining the expectation values of the studied observables.
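The cutting test just described can be made concrete with a small sketch (hypothetical names, schematic trapezoidal normalization): for each trial upper limit $x_{\rm cut}$ one evaluates the truncated ratio of DoS integrals and looks for a plateau in $x_{\rm cut}$. It is the absence of such a plateau in our data that rules the procedure out.

```python
import numpy as np

def _trapz(f, x):
    # trapezoidal rule for possibly complex integrands
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))

def cut_observable(x, rho, w_mean, ow_mean):
    """Truncated DoS ratio as a function of the upper integration limit.
    Returns one value per x_cut = x[k], k >= 1; a trustworthy cutting
    prescription would produce a plateau in the returned array."""
    vals = []
    for k in range(2, len(x) + 1):
        num = _trapz((rho * ow_mean)[:k], x[:k])
        den = _trapz((rho * w_mean)[:k], x[:k])
        vals.append((num / den).real)
    return np.array(vals)
```

In the idealized case where $\langle O\,w_B\rangle_x$ is exactly proportional to $\langle w_B\rangle_x$, every truncation gives the same value; in our measurements the curve instead drifts with the cut.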
The sign problem becomes severe around $\mu \approx m_\pi/2$. We have estimated the number of configurations needed to overcome the sign problem slightly above this value. This number grows very quickly with the chemical potential, such that one cannot go deep into the $\mu > m_\pi/2$ region even on very small lattices. However, on $6^4$ lattices at $\beta = 2.9$, $T \approx 100$ MeV, we have managed to apply the DoS method with reweighting classified as reliable, and found indications of the baryonic onset at this temperature. This observation would imply that at higher temperatures the baryonic onset happens at lower chemical potentials than the zero temperature critical value $\mu \approx m_N/3$.
Finally, we note that both the DoS method and direct reweighting from QCD$_{I,\lambda}$ provide results consistent with our expectations as long as they are classified as reliable.