Finite temperature properties of QCD with two flavors and three, four and five colors

I present a numerical study of the crossover between the low temperature chirally broken phase and the high temperature chirally restored phase in $SU(N_c)$ gauge theory with $N_c=3-5$ colors and $N_f=2$ degenerate fermion flavors. Fermion masses span a range of intermediate values (represented by the squared ratio of pseudoscalar to vector meson masses $(m_{PS}/m_V)^2\sim 0.25$ to 0.63). Observables include the temperature dependent chiral condensate and screening masses. At each fermion mass these quantities show nearly identical temperature dependence across $N_c$.


I. INTRODUCTION AND MOTIVATION
The limit of QCD when the number of colors $N_c$ is taken large has a long history as a source of insight about $N_c = 3$ QCD itself [1][2][3]. Many of the applications of large $N_c$ ideas to phenomenology actually involve nonperturbative quantities (masses or decay constants), although the predictions are often based on counting the color weight of Feynman diagrams. To what extent are these predictions true? Checking them requires a lattice simulation, and there is a small literature of lattice calculations away from $N_c = 3$ to provide such tests. (See Refs. [4][5][6] for a selection of reviews.) At a qualitative level, lattice calculations confirm large $N_c$ intuition rather nicely: meson masses, baryon masses, and decay constants scale as $N_c^p f(m_q)$, where $p$ is a characteristic power and $m_q$ is the fermion mass. The interplay of small dynamical fermion mass and large $N_c$ is less well explored, but simple matrix elements (decay constants, the kaon weak matrix element calculations of Refs. [7,8]) mostly scale as expected. So do some chiral observables which are governed by the pseudoscalar decay constant (scaling as $m_{PS}^2/f_{PS}^2 \propto m_{PS}^2/N_c$; compare to Ref. [9]) or by the condensate (as in the topological susceptibility $\chi_T \propto m_q \Sigma \propto m_q N_c$ [10]).
The subject of this paper is the crossover temperature for the transition from the low temperature confining and chirally broken phase to the high temperature deconfined and chirally restored phase, for QCD with $N_c = 3$, 4, and 5 colors and $N_f = 2$ flavors of degenerate mass fermions in the fundamental representation. The reason this project might be interesting is that there is not a single large $N_c$ story for what happens; there are at least three possibilities.
The first possibility comes from the naive large $N_c$ expectation that gluonic degrees of freedom dominate fermionic ones as $N_c \to \infty$. There should then be a finite temperature confinement-deconfinement transition which converges to the pure glue one in the large $N_c$ limit; that is a first order transition with $T_c \sim 320$ MeV. For $N_c = 3$ the pure gauge transition is first order, and as the fermion mass falls from infinity the transition becomes a crossover. In this scenario the transition temperature would remain roughly constant across $N_c$ as the fermion mass fell from infinity, and the critical point where the first order region ends would move to smaller fermion mass as $N_c$ rises.
The second scenario assumes naive chiral symmetry breaking dominance. Even QCD at large $N_c$ has an $SU(N_f) \times SU(N_f)$ symmetry which (modulo issues with the eta-prime [11]) undergoes spontaneous symmetry breaking to vectorial $SU(N_f)$. The Pisarski-Wilczek analysis [12] approximates the Goldstone sector as a linear sigma model. For $N_f = 2$ the system is expected to have a second order transition at zero fermion mass, with O(4) critical exponents. Second order transitions are unstable under symmetry breaking perturbations, so the transition becomes a crossover away from $m_q = 0$. All QCDs with any $N_c$ should then share a common behavior at small fermion mass. This paper doesn't have data at the tiny (or zero) fermion masses needed to say anything about the properties of any transition at zero fermion mass, but it can ask a question which is related to the Pisarski-Wilczek analysis: how does the crossover temperature scale with $N_c$? Linear sigma models contain one dimensionful parameter, the vacuum expectation value of the scalar field, and all derived dimensionful quantities (the pseudoscalar decay constant, and the crossover temperature $T_c$ itself) are proportional to it. It is well known from previous large $N_c$ spectroscopy comparisons that $f_{PS} \propto \sqrt{N_c}$. Thus the naive prediction of the second scenario is $T_c \propto \sqrt{N_c}$ [13]. Is it so?
The final scenario predates QCD. Confining theories are expected to show an exponentially growing spectrum of resonances with mass, forming a Hagedorn spectrum [14]. The tower of resonances implies a limiting temperature $T_0$, and (as first stated explicitly by Cabibbo and Parisi [15], as far as I can tell) this implies a crossover temperature $T_c \sim T_0$. In QCD, the Hagedorn temperature is about 160 MeV. The extension of the story to large $N_c$ and nonzero $N_f$ is that the spectrum of resonances is basically identical across $N_c$: meson states dominate baryon ones up to a few GeV. If two theories have the same spectrum, then they will have the same critical properties. Notice that large $N_c$ with nonzero $N_f$ is different from quenched QCD: the latter case has a much sparser spectrum below 2 GeV, with only glueballs in contrast to all the excited states labeled (for example) by quark model counting. The prediction of the third scenario is that any $N_c$ with $N_f = 2$ will qualitatively resemble $N_c = 3$, $N_f = 2$. This argument has been used to justify the observation that the deconfinement transition in pure gauge $SU(N_c)$ is nearly independent of $N_c$ [16,17], since the glueball spectrum is also nearly $N_c$ independent.
The first scenario was already unlikely, given that the deconfinement critical point for $N_c = 3$ sits at a very high mass: Ref. [18] observes it at a pseudoscalar mass of about 4 GeV. And indeed, there is no evidence in any of the simulations reported here for anything other than a smooth crossover. (This makes the identification of a particular crossover temperature problematic.) As to the other scenarios, my results indicate that the temperature dependence of observables computed at common values of the fermion mass is essentially identical across $N_c$, including the location of inflection points at finite temperature.
To illustrate this statement, the temperature dependent, zero temperature subtracted condensate, defined below in Eq. 12, is shown in Fig. 1. The naive $N_c$ scaling of the condensate is divided out, and data are presented for three values of the ratio $(m_{PS}/m_V)^2$. This quantity is close to zero at low temperature and becomes negative at high temperature as (speaking loosely) chiral symmetry is restored and the finite temperature condensate falls to zero. The different plotting symbols label different numbers of colors. This picture illustrates the smooth crossover from broken to restored chiral symmetry, with nearly identical temperature variation across $N_c$.
A natural question to ask is, can one identify a feature in the data which serves as a marker for a crossover temperature? The short answer is no. The quantities I have studied have a smooth temperature dependence with no sharp features. I was able to identify two quantities which could serve as markers: there is a peak in the derivative $d\Sigma(T)/dT$, and there is a transition region where screening masses cross over from near independence of temperature to strong temperature dependence. Both features are broad.
The outline of the rest of the paper is as follows: technical aspects of the calculation (lattice action, data sets, a bit about data analysis methodology) are described in Sec. II. This is all completely conventional. Then the various observables are described and results are presented for them: the temperature dependent condensate in Sec. III and screening masses in Sec. IV. I mention the Polyakov line susceptibility in Sec. V. Conclusions are summarized in Sec. VI.

II. TECHNICAL ASPECTS OF THE CALCULATION
A. Simulation methodology, lattice actions, data sets

The simulations involved two degenerate flavors of Wilson-clover fermions. The gauge action is the usual Wilson plaquette action. The fermion action uses gauge connections defined as normalized hypercubic smeared links [19][20][21][22]. The bare gauge coupling $g_0$ is set by the simulation parameter $\beta = 2N_c/g_0^2$. The bare quark mass $m_q^0$ is introduced via the hopping parameter $\kappa = (2 m_q^0 a + 8)^{-1}$. The clover coefficient is fixed to its tree level value, $c_{SW} = 1$. Gauge field updates used the Hybrid Monte Carlo (HMC) algorithm [23][24][25] with a multi-level Omelyan integrator [26] and multiple integration time steps [27], with one level of mass preconditioning for the fermions [28]. Lattices used for analysis are spaced a minimum of 10 HMC time units apart. All data sets are based on a single stream for each set of bare parameters.
The only new feature to report in these simulations is a first order bulk transition at strong coupling for SU(5). Its location depends, of course, on the particular form of the bare action. Such transitions are well known features of the pure gauge systems, and the one I found has a small $\kappa$ limit which appears to approach the value of the gauge coupling where Ref. [16] observed a pure gauge transition. I mention this only in passing; the data sets used to do physics were selected to avoid this transition.
The choice of Wilson-clover fermions in a finite temperature study is not optimal, since the best signals for finite temperature behavior with dynamical fermions are ones which are sensitive to chiral symmetry. The reason I chose this discretization is that I already had a large collection of zero temperature simulations which could be used to find lines of constant physics in bare parameter space, and because I was only looking for gross finite temperature features.

B. Fixing the lattice spacing
The lattice spacing is set by the Wilson flow parameter t 0 [29,30], and quantities will be presented as dimensionless ones by a rescaling by an appropriate power of t 0 .
The determination of $t_0$ is done on zero temperature lattices, in the standard way, from the observable $E(t)$ extracted from the field strength tensor: $t_0$ is the flow time $t$ at which

$t^2 \langle E(t) \rangle = C(N_c) .$   (1)

$C(N_c)$ is chosen to match what most other large $N_c$ simulations take, with $C = 0.3$ the usual value used in SU(3). The extraction of $t_0$ from the data is identical to what was done in Ref. [10], and details may be found there. I use values of $t_0$ computed at each bare parameter value; that is, there is no extrapolation or interpolation in fermion mass.

Let's recall a few useful numbers which will place the results in context. The critical temperature for pure gauge systems was published in Refs. [16,17]. The authors of these papers quote $T_c/\sqrt{\sigma}$ where $\sigma$ is the string tension. I converted their numbers to $\sqrt{t_0} T_c$ using the Sommer parameter $r_0 \sim 0.49$ fm [31], $r_0\sqrt{\sigma} = 1.175$ for quenched SU(3) and SU(5) from Ref. [32], and the quenched $N_c = 3$ value $\sqrt{t_0} = 0.1638$ fm from Refs. [33,34] (as quoted in Ref. [35]), to give the pure gauge transition at $\sqrt{t_0} T_c = 0.265$, 0.261 and 0.278 for $N_c = 3$, 4, 5. The nominal "physical point" value (with 2+1 flavors of fermions), taking $T_c = 150$ MeV and the $\sqrt{t_0} = 0.147$ fm of Ref. [36] from chiral observables, is $\sqrt{t_0} T_c = 0.11$.
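The physical point conversion quoted above is simple arithmetic and can be checked directly. A minimal sketch (the function name and the constant $\hbar c = 197.327$ MeV fm are mine, not from the text):

```python
# Cross-check of the dimensionless combination sqrt(t0) * T_c quoted in the text.
HBARC = 197.327  # hbar*c in MeV * fm

def sqrt_t0_times_T(sqrt_t0_fm, T_MeV):
    """Dimensionless product sqrt(t0) * T from sqrt(t0) in fm and T in MeV."""
    return sqrt_t0_fm * T_MeV / HBARC

# 2+1 flavor "physical point": T_c = 150 MeV, sqrt(t0) = 0.147 fm
print(round(sqrt_t0_times_T(0.147, 150.0), 2))  # 0.11
```

The same one-liner, run with the pure gauge scales, reproduces the $\sqrt{t_0} T_c \sim 0.26$ ballpark quoted from Refs. [16,17].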

C. Data sets
At each $N_c$ the bare parameter space is (at least) three dimensional, involving a bare gauge coupling, a bare fermion mass (expressed through $\kappa$, or in units of the lattice spacing as $am_q$), and the temperature, $T = 1/(aN_t)$, where $N_t$ is the length of the lattice in the Euclidean temporal direction. The issue for a study like this is that no single simulation (at any one set of bare parameters) is interesting in itself: all features are broad. It is hard to avoid generating data sets at many parameters and easy to get lost wading through them.
The literature suggests at least two ways to proceed. The first approach is to map out a crossover line (or phase boundary, if it exists) as a function of bare parameters at fixed $N_t$ values. Then zero temperature simulations are done along the crossover lines to determine physical quantities such as the lattice spacing and $(m_{PS}/m_V)^2$. (Examples of this approach I found useful were Refs. [37][38][39].) The issue with this approach is that, since there is no true phase transition, the crossover region is broad, and presenting a result such as $\sqrt{t_0} T_c$ versus $(m_{PS}/m_V)^2$ involves collecting a large number of zero temperature data sets, followed by interpolation of their output onto a surface in bare coupling constant space. The surface's own bare parameters are poorly defined because the "transition" is just a crossover. This issue is compounded, of course, by the need to work at several values of $N_c$. The preliminary version of this project, Ref. [40], used this approach, but I found it to be unwieldy. Instead, I took an approach inspired by Refs. [41,42]. I started with zero temperature data at selected bare parameters and then varied the temperature by varying $N_t$, holding all other bare parameters (and hence the lattice spacing, $t_0$, and $(m_{PS}/m_V)^2$) fixed. A disadvantage of this approach is that it is unlikely that one will obtain data precisely at a crossover temperature. This can be compensated for, to some extent, by combining data sets from several values of the lattice spacing (but the same $(m_{PS}/m_V)^2$ values) to fill in the curves. This of course introduces another disadvantage: the data sets from different values of the lattice spacing have different lattice artifacts. The naive approach of just plotting them all together ignores these artifacts: they will appear in the scatter between points on a graph.
An advantage of the approach is that it allows one to use a "natural" observable, the temperature dependent condensate, to compare crossover behavior across $N_c$. At the end, I have eight bare parameter sets per $N_c$ [divided among three different $(m_{PS}/m_V)^2$ values] and five values of $N_t$ per bare parameter set, about 40 sets per $N_c$, 120 data sets in all.
For zero temperature data sets I took $16^3 \times 32$ volumes. Results for many of the parameter sets have been published before (see Refs. [10,22]), and additional sets were generated to give several lattice spacings at three matching points in the fermion mass. The squared ratio of the pseudoscalar to vector masses, $r = (m_{PS}/m_V)^2$, will be taken as a proxy for the fermion mass, and data were collected at $r \sim 0.63$, 0.5 and about 0.25 (in the latter case, ranging from 0.22 to 0.27). Useful results from the zero temperature data sets are summarized in Table I.
The calculation of a temperature dependent condensate described below in Sec. III requires data sets at fixed spatial volume (for each lattice spacing), again $16^3$ sites. I varied $N_t$ from 4 to 12 at $r = 0.63$ and $r = 0.5$, and from 6 to 16 at $r = 0.25$, since the crossover temperature appeared to fall with decreasing $r$. Almost all of these finite temperature data sets are 1600 trajectories long after equilibration, again saving lattices spaced 10 trajectories apart for further analysis. The results for screening masses and related quantities are based on subsets of these data sets, since these results were not used for any precise calculations.

D. Data analysis
Most results are global observables, a single quantity averaged over the simulation run. For most observables there are not really enough data for a reliable determination of an integrated autocorrelation time. Instead, autocorrelation times are estimated and errors are assigned from a jackknife analysis dropping successive measurements from the data stream. For an observable $Q$ I compute an average $\langle Q \rangle$ and a susceptibility $\chi_Q = \langle Q^2 \rangle - \langle Q \rangle^2$; the uncertainty of each comes from a jackknife. I varied the size of the jackknife (dropping $n_J$ successive measurements). Typically the error estimate $\Delta Q$ rises as $n_J$ rises, reflecting the effect of time autocorrelations, until it plateaus as $n_J$ approaches or exceeds the autocorrelation time. At the same time, the jackknife estimate of $\Delta Q$ itself has a fractional uncertainty from the loss of statistics,

$\frac{\Delta(\Delta Q)}{\Delta Q} \sim \frac{1}{\sqrt{2(n-1)}} ,$   (3)

where $n = N/n_J$ for $N$ measurements. The increasing uncertainty in $\Delta Q$ eventually exceeds the variation of $\Delta Q$ with $n_J$. The uncertainties quoted in the paper come from jackknife averages taking $n_J$ where the quantity $\Delta(\Delta Q)$ from Eq. 3 is larger than the apparent rise in the jackknife estimated $\Delta Q$. Generally, I observed that for observables related to the temperature dependent condensate, the errors taken from jackknifes dropping one or two successive lattices (spaced 10 molecular dynamics time units apart) were indistinguishable within the uncertainty of Eq. 3. Volume averaged Polyakov lines were measured every trajectory in the normal course of data collection. There, Eq. 3 indicated that jackknife errors typically saturated with $n_J = 5-20$ (the higher number coming closer to the crossover temperature). The errors quoted in tables and shown in graphs were taken from this procedure.
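The block jackknife procedure described above can be sketched in a few lines. This is an illustrative reimplementation, not the analysis code actually used; the autocorrelated stream below is a synthetic AR(1) process standing in for an HMC time history:

```python
import random

def block_jackknife(data, n_j):
    """Jackknife error of the mean, dropping n_j successive measurements per block."""
    n_blocks = len(data) // n_j
    data = data[: n_blocks * n_j]          # discard the ragged tail
    total = sum(data)
    full_mean = total / len(data)
    # leave-one-block-out means
    means = []
    for k in range(n_blocks):
        block_sum = sum(data[k * n_j : (k + 1) * n_j])
        means.append((total - block_sum) / (len(data) - n_j))
    var = (n_blocks - 1) / n_blocks * sum((m - full_mean) ** 2 for m in means)
    return full_mean, var ** 0.5

# Synthetic autocorrelated stream: the error estimate grows with n_j
# until the block length exceeds the autocorrelation time, then plateaus.
random.seed(1)
stream, x = [], 0.0
for _ in range(2000):
    x = 0.9 * x + random.gauss(0.0, 1.0)   # AR(1), autocorrelation time ~ 10
    stream.append(x)
for n_j in (1, 5, 20):
    mean, err = block_jackknife(stream, n_j)
    print(n_j, round(err, 4))
```

With uncorrelated data the three error estimates would agree; the rise with $n_J$ is the signature of autocorrelation that the text uses to choose the block size.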

III. THE TEMPERATURE DEPENDENT CONDENSATE AND RELATED QUANTITIES
A. Defining the condensate

With Wilson fermions the chiral condensate is a bit awkward to measure [41,43]. The bare condensate for Wilson fermions at bare mass $m_1$ has an expansion

$\langle\bar\psi\psi\rangle = c_0 + c_1 (m_1 - m_0) + c_2 (m_1 - m_0)^2 + \dots$   (4)

where $m_0$ is the bare mass at which the axial Ward identity fermion mass (defined below) vanishes, $m_1$ is the bare mass at the simulation point, and the $c_i$'s contain cutoff (divergent) behavior, $c_0 \sim 1/a^3$, $c_1 \sim 1/a^2$, $c_2 \sim 1/a$. The divergent pieces are independent of temperature, and so the difference of the condensate at temperature $T$ and at $T = 0$ is finite and sensible. Inspired by the Gell-Mann, Oakes, Renner relation, a potential definition for $\langle\bar\psi\psi\rangle_{sub}$ is

$\langle\bar\psi\psi\rangle_{sub}(T) = 2 m_q \Big[ \sum_{x,t} \langle P(x,t) P(0) \rangle_T - \sum_{x,t} \langle P(x,t) P(0) \rangle_{T=0} \Big]$   (6)

where $P(x,t) = \bar\psi(x,t)\gamma_5\psi(x,t)$ is the pseudoscalar current. The first term on the right hand side of Eq. 6 is evaluated on an $N_s^3 \times N_t$ lattice where $T = 1/(aN_t)$, and the second term is evaluated on an $N_s^3 \times N_{t0}$ lattice with $N_{t0}$ large enough that the temperature is effectively zero. With the "130 MeV" definition of $f_{PS}$, the Gell-Mann, Oakes, Renner relation between the condensate $\Sigma$ and other observables is

$m_{PS}^2 f_{PS}^2 = 2 m_q \Sigma .$   (7)

In this convention, with the axial current $A_\mu^a = \bar\psi\gamma_\mu\gamma_5(\tau^a/2)\psi$ and the pseudoscalar density $P^a = \bar\psi\gamma_5(\tau^a/2)\psi$, the vacuum to pseudoscalar matrix elements are $\langle 0|A_0|PS\rangle = m_{PS} f_{PS}$ and $\langle 0|P|PS\rangle = m_{PS}^2 f_{PS}/(2m_q)$. The partial conservation of the axial current relation is

$\partial_\mu A_\mu^a = 2 m_q P^a .$   (8)

Matrix elements of this relation define $m_q$ to be the Axial Ward Identity (AWI) fermion mass, through the two-point functions

$2 m_q = \frac{\sum_x \langle \partial_0 A_0^a(x,t)\, O^a(0)\rangle}{\sum_x \langle P^a(x,t)\, O^a(0)\rangle}$   (9)

where $O^a$ can be any convenient source. The massive correlator in finite volume with periodic temporal boundary conditions in temporal length $N_t$, saturated by a single state of mass $m_{PS}$, is

$C_{PP}(t) = \sum_x \langle P(x,t) P(0)\rangle = \frac{|\langle 0|P|PS\rangle|^2}{2 m_{PS}} \left[ e^{-m_{PS} t} + e^{-m_{PS}(N_t - t)} \right] .$   (10)

Thus its integral over the simulation volume is

$\sum_t C_{PP}(t) \simeq \frac{|\langle 0|P|PS\rangle|^2}{m_{PS}^2} = \frac{m_{PS}^2 f_{PS}^2}{4 m_q^2} .$   (11)

In the passage from lattice to continuum regularization there is a factor of $Z_A^2$ on the right hand side of Eq. 11, where $Z_A = (1 - 3\kappa/(4\kappa_c))\, z_A$ is the tadpole improved $Z$-factor; $z_A = 1 + c\,\alpha/(4\pi) \sim 1$. $z_A$ is close to unity for the lattice action used here, and so we omit it from further discussion.
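The saturation argument can be checked numerically: summing the single-state correlator over the time extent should reproduce the closed form of Eq. 11. A minimal sketch with made-up parameter values (not taken from the paper's tables):

```python
import math

def integrated_ps_correlator(m_ps, f_ps, m_q, n_t):
    """Sum over t of the single-state saturated pseudoscalar correlator,
    with <0|P|PS> = m_PS^2 f_PS / (2 m_q) in the 130 MeV convention."""
    amp = (m_ps ** 2 * f_ps / (2.0 * m_q)) ** 2 / (2.0 * m_ps)
    return sum(amp * (math.exp(-m_ps * t) + math.exp(-m_ps * (n_t - t)))
               for t in range(n_t))

# Compare against the closed form of Eq. 11: m_PS^2 f_PS^2 / (4 m_q^2).
# Parameter values (lattice units) are illustrative only.
m_ps, f_ps, m_q, n_t = 0.05, 0.10, 0.01, 256
direct = integrated_ps_correlator(m_ps, f_ps, m_q, n_t)
closed = m_ps ** 2 * f_ps ** 2 / (4.0 * m_q ** 2)
print(direct, closed)
```

The small residual difference between the two numbers is the discretization of the time sum versus the integral, which vanishes as $m_{PS} a \to 0$.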
Borsanyi et al. [41,42] wrote in the days before the use of $t_0$, and so they presented plots of $(m_q \langle\bar\psi\psi\rangle_{sub})/m_{PS}^4$ as a dimensionless observable. I instead will look at the quantity

$\bar\Sigma(T) = t_0^{3/2}\, \frac{3}{N_c}\, \Delta(T)$   (12)

where (explicitly showing the conversion from the lattice quantity computed with clover fermions to a continuum one)

$\Delta(T) = 2 m_q Z_A^2 \left[ \hat\Delta_{PP}(T) - \hat\Delta_{PP}(T \simeq 0) \right]$   (13)

and $\hat\Delta_{PP}$ is the integrated pseudoscalar correlator measured with the usual convention for the definition of lattice field variables. The factor of $t_0^{3/2}$ in Eq. 12 renders the observable dimensionless, and the overall factor of $3/N_c$ is included so that plots can show collapse to a common curve across $N_c$ when the condensate scales proportional to $N_c$, as expected by large $N_c$ counting.
Data for the integrated pseudoscalar correlator are recorded in Tables II, III and IV. Figure 1 shows the rescaled temperature dependent condensate from Eq. 12 at three values of $r = (m_{PS}/m_V)^2 = 0.63$, 0.5 and about 0.25. These plots are sufficient to show that the finite temperature behavior of the systems is reasonably independent of $N_c$. However, the curves are too featureless to identify an inflection point as a signal for a crossover temperature.
B. Checking for effects of finite volume and nonzero lattice spacing

Two potential issues with the calculation should be discussed before proceeding: the first is whether the simulation volume could affect the results; the second is a check of the lattice spacing dependence of the data presented in Fig. 1.
Finite volume effects were studied in an earlier paper, Ref. [22], involving some of the data sets used here. Here is a recapitulation of that analysis, which basically follows the treatment of Sharpe [44]. Simulation volume effects typically arise from tadpole contributions due to pseudoscalar meson emission and absorption, where the meson returns not to its original emission point but to an image point. The pseudoscalar correlator for a particle of mass $m$ in a box of length $L_\mu$ in direction $\mu$ can be written as a sum over images,

$\Delta(m,x) = \sum_{n_\mu} \bar\Delta(m, x_\mu + n_\mu L_\mu) .$   (15)

The infinite volume propagator, $\bar\Delta(m,x)$, is the $n = 0$ term in the sum. The finite volume tadpole is

$\Delta(m,0) = \bar\Delta(m,0) + \bar I_1(m,L)$   (16)

where $\bar I_1(m,L)$ is the sum over images. If a typical infinite volume observable has a chiral expansion

$X = X_0 \left[ 1 + C_0 \frac{\bar\Delta(m,0)}{f_{PS}^2} + \dots \right]$   (17)

then the finite volume correction is

$\frac{\delta X}{X_0} = C_0 \frac{\bar I_1(m,L)}{f_{PS}^2} .$   (18)

Sharpe [44] has shown that the nearest image contribution gives a useful lower bound on the finite volume correction. It is

$\bar I_1(m,L) \gtrsim 6\, \frac{m}{4\pi^2 L}\, K_1(mL) .$   (19)

The factor of 6 counts the closest neighboring points at positive and negative offsets. We can use Eq. 19, plus our tables of lattice masses and decay constants, to check which of our data sets might be compromised by volume. The result, $2\bar I_1(m,L)/f_{PS}^2$ (the 2 is needed to convert our 130 MeV definition of the decay constant to the chiral literature's standard 93 MeV), is shown in Fig. 2 for the low mass end of the data set.

Typical values of $C_0$ for chiral observables in Eq. 17 are order unity numbers, so finite volume does not seem to be an issue with this data set as long as one is not looking for precision at the few percent level. Note that the usual shorthand of requiring $m_{PS} L$ to be greater than some minimum value is not really applicable to $N_c > 3$, because of the $1/f_{PS}^2$ in Eq. 18.

Cutoff dependence can be shown by breaking out the data in Fig. 1 (and related quantities) into plots where the x axis is a measure of the lattice spacing. I will take this measure to be $a^2/t_0$, the inverse of the flow parameter. An issue with such tests is that the bare parameters are not carefully matched. A better way to proceed would be a combined fit of observables related to chiral symmetry breaking to a version of chiral perturbation theory which includes lattice artifacts. Doing this well requires many more zero temperature data sets than the ones used here, and is a topic for future work.
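The nearest-image bound needs nothing more than a modified Bessel function. The sketch below assumes the normalization $\bar I_1(m,L) \approx 6\, m K_1(mL)/(4\pi^2 L)$ for the six nearest images (my reconstruction of the standard finite volume tadpole sum, not copied from Ref. [44]), and evaluates $K_1$ from its integral representation using only the standard library:

```python
import math

def bessel_k1(x):
    """Modified Bessel function K1 via K1(x) = int_0^inf exp(-x cosh t) cosh t dt,
    evaluated with a simple rectangle rule (adequate for x >= ~0.5)."""
    dt, total, t = 1e-3, 0.0, 0.0
    while True:
        f = math.exp(-x * math.cosh(t)) * math.cosh(t)
        if t > 1.0 and f < 1e-14:
            break
        total += f * dt
        t += dt
    return total

def nearest_image_bound(m, L, f_ps):
    """Lower bound on the fractional finite volume correction, Eqs. 18-19:
    2 * I1_bar(m, L) / f_PS^2 (the 2 converts the 130 MeV f_PS convention)."""
    i1 = 6.0 * m / (4.0 * math.pi ** 2 * L) * bessel_k1(m * L)
    return 2.0 * i1 / f_ps ** 2

print(bessel_k1(1.0))  # tabulated value: K1(1) = 0.601907...
```

The exponential falloff $K_1(mL) \sim e^{-mL}$ makes the bound drop quickly with box size, but the $1/f_{PS}^2 \propto 1/N_c$ prefactor is why a fixed $m_{PS} L$ criterion is not directly transferable across $N_c$.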
A first indirect test is to look at the zero temperature condensate. This is useful for a check related to Fig. 1: One expectation would be that the curves in Fig. 1 would show a sigmoid behavior with Σ(T ) zero at low temperature since (loosely speaking) the condensate is unchanged as the temperature rises, then a fall as the finite temperature condensate goes to zero, followed by a plateau at a constant value, basically the negative of the zero temperature condensate. One could identify a crossover temperature as the midpoint on the sigmoid.
The issue with doing this is that $T = 1/(aN_t)$, so going to high temperature at fixed $a$ means going to smaller $N_t$, and at some point $N_t$ is so small that lattice artifacts must appear. An alternative is to do a direct measurement of the condensate at $T = 0$ and compare it to Fig. 1. There are modern ways to do this based on Ref. [43], but perhaps a quicker (though dirtier) way is to use the Gell-Mann, Oakes, Renner relation, Eq. 7, with a single elimination jackknife from separate fits to the AWI quark mass, the decay constant, and the pseudoscalar mass. With this estimate, the high temperature values in Fig. 1 seem to be plausible, in the sense that they fall to roughly the negative of the zero temperature condensate.
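The quick-and-dirty GMOR estimate is one line of arithmetic. A sketch, with hypothetical numbers in lattice units (not taken from Table I):

```python
def gmor_condensate(m_ps, f_ps, m_q):
    """Sigma from the GMOR relation, Eq. 7 (130 MeV convention):
    m_PS^2 f_PS^2 = 2 m_q Sigma  =>  Sigma = m_PS^2 f_PS^2 / (2 m_q)."""
    return m_ps ** 2 * f_ps ** 2 / (2.0 * m_q)

# Hypothetical inputs from fits: a*m_PS, a*f_PS, a*m_q (AWI mass)
a3_sigma = gmor_condensate(0.40, 0.10, 0.05)

# The dimensionless, N_c-rescaled combination of Eq. 12 would then be
# (t0/a^2)^(3/2) * (3/N_c) * a^3 Sigma; t0/a^2 = 2.0 and N_c = 4 are made up.
sigma_hat = 2.0 ** 1.5 * (3.0 / 4.0) * a3_sigma
```

In the paper's procedure the three inputs come from separate fits, with errors propagated through a single elimination jackknife rather than naively combined.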
A further check before returning to physics should involve the finite temperature data sets themselves. I will break up the data for the temperature-dependent condensate into bins in the temperature √ t 0 T and plot it versus a 2 /t 0 . This is shown in Fig. 4. The picture is a bit awkward to present; the data show temperature dependence within each bin in addition to scale dependence. I have color coded the various bins and used two sets of plotting symbols for each N c value to separate the different √ t 0 T values.
The highest temperature data sets are the ones with the largest lattice spacing dependence. These are uniformly taken with N t = 4. Their collection was an attempt to get high above the crossover temperature to try to see a flattening in Σ(T ) at high temperature, while still keeping to a large physical value of the simulation volume. Their only use is for the largest temperature bins in a calculation of dΣ(T )/dT in Sec. III C. They are not used in the pictures of screening masses in Sec. IV.

C. Looking for a peak
It would be better to have an observable with a peak, so I looked at two more quantities related to the condensate. One of them produced a signal. It is the temperature derivative of $\Sigma(T)$, just taken from the finite difference

$\frac{d\Sigma}{dT}\bigg|_{T_m} \simeq \frac{\Sigma(T_2) - \Sigma(T_1)}{T_2 - T_1}$   (20)

where $T_m = (T_1 + T_2)/2$. The difference in Eq. 20 can just be taken from the integrated pseudoscalar correlator at each value of $N_t$, without doing the $T = 0$ subtraction. Of course, only data sets at the same bare parameters can be used. Attempts to refine this statement came to nothing. They were mostly based on doing fits to an arbitrary peaked function, a Gaussian in $x = \sqrt{t_0} T$. Fits to all data sets with the same $N_c$ value (for each choice of $(m_{PS}/m_V)^2$) generally had poor chi-squared, probably due to lattice artifacts: the data span a wide range of $t_0$ values. Fits to a single set of bare parameters fared better, though the issue is that there are only four values of $N_t$. Some of the data sets (especially the ones at coarse lattice spacing) do not themselves include the peak, and then of course the fit fails immediately. I also attempted to compute a susceptibility from the time histories of $\Delta_{PP}(T)$. This was unsuccessful, so I do not report on it.
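The finite difference construction is simple enough to sketch. The sigmoid "data" below are synthetic and only illustrate that a smooth crossover in $\Sigma(T)$ produces a broad peak in the difference quotient of Eq. 20:

```python
import math

def sigma_derivative(nt_values, sigma, a=1.0):
    """Finite differences of Sigma between successive N_t values at fixed
    bare parameters (Eq. 20); T = 1/(a N_t).  Returns (T_m, slope) pairs."""
    pts = sorted((1.0 / (a * nt), s) for nt, s in zip(nt_values, sigma))
    return [((t1 + t2) / 2.0, (s2 - s1) / (t2 - t1))
            for (t1, s1), (t2, s2) in zip(pts, pts[1:])]

# Synthetic smooth crossover centered at T = 0.15 (arbitrary units)
nts = [12, 10, 8, 6, 4]
T = [1.0 / n for n in nts]
sig = [-1.0 / (1.0 + math.exp(-(t - 0.15) / 0.03)) for t in T]
for tm, slope in sigma_derivative(nts, sig):
    print(round(tm, 3), round(slope, 2))
```

With only four or five $N_t$ values per bare parameter set, the midpoints $T_m$ are sparse, which is exactly why the peak can only be located roughly, as described above.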

IV. SCREENING MASSES
Meson screening masses in the scalar, pseudoscalar, vector, and axial vector channels are taken from two-point correlation functions extending in a spatial lattice direction. The temperature dependent pseudoscalar decay constant is also extracted from these spatial correlators. Propagators are constructed with composite boundary conditions to double the effective length of the lattice [45][46][47].
In the low temperature, chirally broken phase, the spectroscopy of screening masses should qualitatively resemble ordinary $T = 0$ spectroscopy, with a light pion and no degeneracies in the spectrum. When chiral symmetry is restored, parity partners (the pion and the scalar mesons, the vector and axial vector mesons) should become degenerate, and all four states should become degenerate when $U(1)_A$ is restored. A naive expectation for a screening mass is that it behaves something like

$m_H \simeq 2 \sqrt{m_q^2 + (\pi T)^2}$   (22)

twice the energy of a free quark carrying $\pi T = \pi/(aN_t)$, the lowest nonzero Matsubara frequency associated with antiperiodic boundary conditions on a lattice of temporal length $N_t$. At high temperature (or small $m_q$) this gives $m_H = 2\pi T$. Results for screening masses (scaled by $\sqrt{t_0}$) are displayed in Fig. 6 for the pseudoscalar and scalar states and Fig. 7 for the vector and axial vector states. They show the expected behavior. (The diagonal lines are just $y = 2\pi T$, the small $m_q$ (or large $T$) limit of Eq. 22.) The very dirty signals for the $a_0$ (scalar) and $a_1$ (axial vector) mesons in the chirally broken phase are also expected. There, the pseudoscalar meson is light and the noise to signal ratio $\sigma(t)/C(t) \sim \exp((m_H - m_\pi)t)$ is exponentially bad [48,49]; in the chirally restored phase all states are massive and the noise to signal ratio improves. The pseudoscalar and vector screening masses make a clear transition from a temperature independent value at low $T$ to linear behavior at high $T$. This suggests that linear fits $O = c_1 + c_2 T$ should show better quality (smaller chi-squared) when only data in one phase are included, with $c_2$ much larger in the higher $T$ phase. The fit would deteriorate when points from both phases were included. This would determine a crossover temperature.
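The free-quark expectation can be coded directly. A sketch in arbitrary units, assuming the two-free-quark form of Eq. 22 (each quark carrying the lowest fermionic Matsubara frequency $\pi T$):

```python
import math

def free_quark_screening_mass(m_q, T):
    """Naive screening mass of a meson built from two free quarks,
    each with lowest Matsubara frequency pi*T: m_H = 2 sqrt(m_q^2 + (pi T)^2)."""
    return 2.0 * math.sqrt(m_q ** 2 + (math.pi * T) ** 2)

# At small m_q (or high T) the ratio to 2*pi*T approaches 1, the diagonal
# line drawn in the screening mass figures.
print(round(free_quark_screening_mass(0.0, 0.3) / (2.0 * math.pi * 0.3), 3))  # 1.0
```

The quark mass only matters when $m_q$ is comparable to $\pi T$, which is why the high temperature branches of the screening mass plots collapse onto $y = 2\pi T$.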
The transition can be seen in individual data sets, and no fit is needed to see it, just a table of the mass values. In most sets the change in slope occurs in the middle of the data set, with (typically) two or three masses with nearly the same value at low temperature and the remaining higher temperature masses rising linearly with temperature. Data set by data set, one can identify a $\sqrt{t_0} T_{low}$ and a $\sqrt{t_0} T_{high}$ such that masses remain unchanged for $T \le T_{low}$ and rise linearly for $T \ge T_{high}$. Generally, $\sqrt{t_0} T_{low}$ and $\sqrt{t_0} T_{high}$ are consistent between the pseudoscalar and vector mass sets. The individual data sets give a relatively wide interval between $\sqrt{t_0} T_{low}$ and $\sqrt{t_0} T_{high}$ simply because $T = 1/(aN_t)$ and the values of $N_t$ are small. Fits to all mass values at each $N_c$ and $(m_{PS}/m_V)^2$ value also show jumps in chi-squared when a fit including high temperature points extends too low, or a fit to low temperature points extends too high. The issue is that even with data in one phase, the chi-squared tended to be unacceptably large due to the $a$-dependent variation in the masses. The best one can say is that the crossover region is in the range $\sqrt{t_0} T_c \sim 0.15 - 0.24$, which is not inconsistent with the location of the broad peak in $d\Sigma/dT$.

The last screening quantity to present is the pseudoscalar decay constant. Here, the following alternative seems to show a sharper result than plots of $f_{PS}$ versus temperature at fixed $r$. In the chirally broken phase, $f_{PS}$ is nonvanishing at zero fermion mass (due to spontaneous breaking of chiral symmetry) and rises modestly with fermion mass (due to explicit breaking of chiral symmetry from the fermion mass). In the chirally restored phase there is only explicit symmetry breaking, and $f_{PS}$ should be proportional to $m_q$. A plot of $f_{PS}/m_q$ will show collapse to a common value when that situation occurs. Fig. 8 shows that behavior. I have scaled $f_{PS}$ by $3/N_c$.
Here the different plotting symbols in each panel represent different values of (m P S /m V ) 2 with the diamonds representing the lightest fermion mass.

V. POLYAKOV LINE AND ITS SUSCEPTIBILITY
It is well known that the Polyakov line is not a sensitive observable at small fermion mass, but I present one picture for completeness. Following Ref. [17], I define the volume averaged Polyakov line as

$P = \frac{1}{N_s^3} \sum_{\vec x} \frac{1}{N_c}\, \mathrm{Tr} \prod_{t=1}^{N_t} U_0(\vec x, t)$   (23)

and then the susceptibility is

$\chi_P = N_s^3 \left( \langle |P|^2 \rangle - \langle |P| \rangle^2 \right) .$   (24)

The susceptibility is shown in Fig. 9. The bare Polyakov line is not a scaling quantity, so the results for different $N_c$ values should not coincide. The only comment to make about these noisy figures is that the magnitude of the susceptibility falls with the fermion mass, as seen for $N_c = 3$ [50].
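The susceptibility estimator is a one-liner over the measurement stream. A sketch with made-up measurements; the convention $\chi_P = V(\langle|P|^2\rangle - \langle|P|\rangle^2)$ over $|P|$ is my reading of the standard definition (Eq. 24), not code from the paper:

```python
def polyakov_susceptibility(p_values, volume):
    """chi_P = V ( <|P|^2> - <|P|>^2 ) from a stream of volume averaged
    (generally complex) Polyakov line measurements."""
    n = len(p_values)
    mags = [abs(p) for p in p_values]
    avg = sum(mags) / n
    avg2 = sum(m * m for m in mags) / n
    return volume * (avg2 - avg * avg)

# Tiny illustrative stream of complex measurements on a 16^3 spatial volume
data = [0.10 + 0.02j, 0.12 - 0.01j, 0.09 + 0.00j, 0.11 + 0.03j]
chi = polyakov_susceptibility(data, volume=16 ** 3)
```

In practice the errors on $\chi_P$ come from the same block jackknife procedure described in Sec. II D, with $n_J = 5-20$.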
At small fermion masses, where one is closer to the m q = 0 critical point, the N c = 3 Polyakov line and its derivatives do show structure associated with the critical point [51]. Since my data were collected at heavier quark masses where everything is crossover, the Polyakov line was not a particularly useful observable. Note that the shoulder in the Polyakov line susceptibility appears at √ t 0 T ∼ 0.15 or so, about where the screening masses begin to take their high temperature functional form.

VI. CONCLUSIONS
The question this study was designed to address was whether the finite temperature crossover behavior of SU(N c ) gauge theories with N f = 2 flavors of fermions showed different behavior as the number of colors was varied. These simulations, with low statistics and carried out on small volumes, studied the temperature dependence of the condensate (as determined from the volume integral of the pseudoscalar correlator) and of screening masses and the pseudoscalar decay constant. A smooth crossover from a low temperature confining and chirally broken phase to a high temperature chirally restored and deconfined phase is observed. This crossover behavior is of course not surprising given our extensive knowledge of N c = 3: the one new result is that the temperature dependence of these quantities (when compared at fixed values of m P S /m V ) shows no observable dependence on N c . It was not possible to determine a crossover temperature (to the extent that such a quantity makes sense when the crossover is smooth) but, with some plausible assumptions, it appears to be someplace between the known SU(3) result with physical quark masses ( √ t 0 T ∼ 0.11) and the large N c pure gauge result ( √ t 0 T c ∼ 0.26).
Of course, one can also make comparisons with earlier work, such as Refs. [37,38]. Ref. [39] shows a plot of $r_0 T_c$ versus $r_0 m_{PS}$, where $r_0$ is the Sommer parameter [31], $r_0/\sqrt{t_0} \sim 3$. This conversion gives $r_0 T_c \sim 0.54$ at $r_0 m_{PS} = 2.2$, 1.6 and 1.1 (at $r = 0.63$, 0.5, 0.25), to be compared with $r_0 T_c = 0.48 - 0.6$ from Ref. [39] for the same range of $r_0 m_{PS}$ values. Of the three scenarios described in the Introduction, the first two seem to be disfavored. No evidence was seen for any first order behavior at large mass at any $N_c$ studied. Of course, it could be that $N_c = 5$ is still not "large $N_c$" from the point of view of thermodynamics. Scenario two suggests crossover behavior, which is seen, but its prediction that $T_c \propto f_{PS}$ is disfavored (over the range of fermion masses studied, of course) because the crossover temperature shows no obvious $N_c$ separation despite the known variation of the pseudoscalar decay constant with $N_c$. As for the third scenario, lattice simulations show that the lowest part of the meson spectrum shows little $N_c$ dependence. The third scenario assumes that the two results I have presented, a common spectrum and a common crossover behavior, are correlated. Of course, the scenario asks that the correlation persist high in the meson spectrum (that the density of states is as given by Hagedorn), and lattice simulations say nothing about that.
The results in this paper are rather poor quality, but at the mass values studied, everything is smooth. The situation at very low mass and very high mass remains open. I think that to do more work in this area, one should not continue to use Wilson-clover fermions, at least not in pilot tests. The importance of chiral symmetry, the need to subtract results from finite and zero temperature simulations, and the fierce scaling of the cost of thermodynamical observables with simulation volume argue in favor of using some modern version of staggered fermions for dedicated simulations.
The obvious next target in future studies of large N c systems at finite temperature could be a more direct attack on thermodynamic observables -the internal energy, pressure, speed of sound, and related quantities. The pure gauge calculations of Panero [52] are an inspiration, and they have been extensively cited in the phenomenological and gravitational duality literature of QCD thermodynamics. But we know that the critical behavior of N c = 3 with dynamical fermions is different from the behavior of pure gauge systems. Does this matter, for the questions which interest researchers in these areas? If so, numerical simulation which includes dynamical fermions might be a worthwhile project.