Microcanonical particlization with local conservation laws

We present a sampling method for the transition from relativistic hydrodynamics to particle transport, commonly referred to as particlization, which preserves the local conservation of energy, momentum, baryon number, strangeness, and electric charge microcanonically, i.e. in every sample. The proposed method is essential for studying fluctuations and correlations by means of stochastic hydrodynamics. It is also useful for studying small systems. The method is based on Metropolis sampling applied to particles within distinct patches of the switching space-time surface, where hydrodynamic and kinetic evolutions are matched.

One of the key goals of modern heavy ion collision experiments is to search for a phase transition between a hadron gas and a quark-gluon plasma (QGP), and to locate the corresponding critical point. The vicinity of the critical point is characterized by enhanced event-by-event fluctuations [1,2]. Therefore, considerable attention is devoted to correlation and fluctuation observables, such as proton, net-proton, net-charge, and kaon cumulants [3,4], fluctuations of various particle ratios [5], transverse momentum correlations [6], and charge balance functions [7]. Since a heavy ion reaction is a dynamical process, it is essential to study these observables within a dynamical framework. A very successful dynamical treatment is a hybrid approach [8] which combines the relativistic hydrodynamic evolution of the high (energy) density QGP phase with kinetic transport for the more dilute hadronic phase. This approach successfully reproduces bulk observables such as particle spectra and flow (see e.g. [9]). Switching from the continuous relativistic hydrodynamics to the discrete particle transport, often referred to as "particlization", is usually performed on a hypersurface characterized by constant energy density, temperature, or Knudsen number [10]. In the existing relativistic models the switching only occurs in one direction: from hydrodynamics to particles, but not vice versa. This is in contrast to non-relativistic hybrid approaches, where dynamical domain decomposition methods are routinely applied and the switching is performed in both directions (see e.g. [11]). To study correlations and fluctuations within the hybrid approach, hydrodynamics has to be either extended by stochastic terms directly [12][13][14][15] or coupled to a non-equilibrium field with a stochastic noise [16,17]. In both cases, it is essential that the particlization preserves the fluctuations generated by such models.
This is a non-trivial task, which so far has not been accomplished in the context of relativistic hydrodynamics. In the non-relativistic case, this problem is addressed in several ways [11]. One of them is to exactly match the fluxes at the interface, which in the relativistic case corresponds to local event-by-event conservation laws, or in other words, microcanonical sampling. The standard Cooper-Frye sampling used in relativistic models (for a detailed description see e.g. [20]), on the other hand, is grand-canonical. It combines the Cooper-Frye formula for the momentum distribution in a hypersurface cell [18] with Poissonian sampling of the multiplicity distributions. Together they result in total energy, momentum, and charges fluctuating around the correct means, as illustrated in Fig. 5 of [21]. Additionally, in the standard procedure particles in different cells are sampled independently, and thus are uncorrelated, although their velocities may still be correlated via a common flow velocity profile. The scaled variances of multiplicities are ω ≡ (⟨N²⟩ − ⟨N⟩²)/⟨N⟩ = 1 by construction. In contrast, in the microcanonical case ω < 1 and particles should be correlated due to the conservation laws. Moreover, event-by-event conservation laws are important not only for correlations and fluctuations. For example, in small systems they can also affect mean values. Therefore, constructing and realizing a microcanonical sampling algorithm is the purpose of this work.
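The distinction can be illustrated numerically (a toy Python sketch, not part of the analysis above): independent Poissonian multiplicities give ω = 1, while imposing an event-by-event constraint narrows the multiplicity distribution, ω < 1.

```python
import math
import random

random.seed(1)

def scaled_variance(samples):
    """omega = (<N^2> - <N>^2) / <N> for a list of multiplicities."""
    mean = sum(samples) / len(samples)
    var = sum((n - mean) ** 2 for n in samples) / len(samples)
    return var / mean

def poisson(lam):
    """Knuth's Poisson sampler, adequate for small lam."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

# Grand-canonical analogue: independent Poisson multiplicities -> omega = 1.
gc = [poisson(10.0) for _ in range(5000)]

# Toy event-by-event "conservation law": keep only events in which positive
# and negative particles exactly balance; the surviving multiplicity
# distribution is narrower than Poisson, so omega < 1 (about 0.5 here).
mc = []
while len(mc) < 5000:
    n_pos, n_neg = poisson(10.0), poisson(10.0)
    if n_pos == n_neg:
        mc.append(n_pos)

print(scaled_variance(gc))  # close to 1
print(scaled_variance(mc))  # noticeably below 1
```

The constrained ensemble here is only a caricature of a conservation law, but it already shows the qualitative suppression of ω that the microcanonical sampler produces.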
Attempts to introduce (micro)canonical particlization have been undertaken previously, but they rely on intuition-based ad hoc modifications to the standard sampling algorithm such as: introducing local charge conservation by sampling particle-antiparticle pairs [19], trying to satisfy conservation laws one by one in a "mode sampling" algorithm [20], and rejecting particles that increase the deviation from the desired conserved charges in the SPREW algorithm [21]. The multiplicity distribution sampled by these algorithms is not known precisely and does not correspond to a canonical or microcanonical ensemble. The SER algorithm [21,22] samples the correct canonical distribution, but extending it to the microcanonical ensemble is impossible. Note that energy and momentum conservation, which distinguish the microcanonical from the canonical ensemble, influence not only p T fluctuations, but also the fluctuations of multiplicities. This is why we propose a method to conserve all charges, energy, and momentum simultaneously.
First of all, we define regions over which conservation laws should be applied; we call these regions "patches". There are two requirements for the patch size b. First, it should be comparable to the hydrodynamic scale discussed in the context of fluctuating hydrodynamics [23]; therefore it should be much larger than the mean free path in a weakly-coupled system or a thermal length in a strongly coupled system: b ≫ 1/T, where T is the temperature. Second, one patch should contain many particles, therefore b³n ≫ 1, where n is the particle density. A patch does not necessarily have to be much smaller than the system size. If a system is small, then conservation laws should be applied over the whole system, and by definition the whole system is one patch. However, if a patch is comparable to the system size, then the applicability of hydrodynamics may be questionable. In the usual applications of hydrodynamics it is reasonable to use every computational grid cell as a patch. Indeed, even in simulations of micro- and nanofluids the number of particles per computational cell is of the order of 100 [11]. However, in typical simulations of relativistic ion collisions the average number of particles per computational cell is of order 10⁻³−10⁻¹, given a typical cell size between (0.2 fm)³ and (0.5 fm)³. While the strategy of sampling "fractional" particles is possible [24], it leads to complications in the treatment of the fluctuations in the subsequent transport evolution. Our strategy is to split the hypersurface into independent patches of similar size, and to require conservation laws in every patch. In practice this implies that a patch consists of roughly 50−1000 computational cells. A typical hypersurface consists of the order of 10⁶ cells, resulting in about 10³−10⁴ patches. The exact number of patches, and therefore the "localness" of the conservation laws, can be treated as a parameter.
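The patch construction can be sketched as follows (a hypothetical illustration; the function and variable names are ours, and a real implementation would group spatially adjacent cells rather than consecutive indices): one grows patches greedily until each carries a target expected particle number.

```python
def split_into_patches(cell_multiplicities, target_per_patch=100.0):
    """Greedily group consecutive cells so that each patch carries
    roughly `target_per_patch` expected particles.

    cell_multiplicities: expected particle number per cell.
    Returns a list of patches, each a list of cell indices.
    """
    patches, current, acc = [], [], 0.0
    for idx, n in enumerate(cell_multiplicities):
        current.append(idx)
        acc += n
        if acc >= target_per_patch:
            patches.append(current)
            current, acc = [], 0.0
    if current:  # leftover cells form a final, smaller patch
        patches.append(current)
    return patches

# 10^6 cells with ~0.1 expected particles each (0.125 keeps the float
# arithmetic exact) -> patches of 800 cells, 1250 patches in total,
# consistent with the 10^3-10^4 patches quoted in the text.
cells = [0.125] * 1_000_000
patches = split_into_patches(cells, target_per_patch=100.0)
print(len(patches), len(patches[0]))  # 1250 800
```

The target per patch plays the role of the "localness" parameter: a larger target means fewer, larger patches and less local conservation.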
For a given patch, the quantities to be conserved are the total energy, momentum, baryon number, strangeness, and electric charge,

{p^μ, B, S, Q}_tot = Σ_i Σ_cells ∫ [g_i d³p / ((2π)³ p⁰)] (p^ν dσ_ν) {p^μ, B_i, S_i, Q_i} exp[(μ_i − p^ν u_ν)/T],   (1)

where the sum runs over all hadron species i, including resonances, and over all cells of the patch. Here dσ_μ is the normal four-vector of the cell (see the definition in [20]), u^μ is the collective velocity of the cell, T is the temperature of the cell, and μ_B,S,Q(x) are the chemical potentials of the cell, responsible for the conservation of baryon number B, strangeness S, and electric charge Q. The chemical potential of species i is defined as

μ_i = B_i μ_B + S_i μ_S + Q_i μ_Q,

while g_i is the degeneracy of the species. It may seem surprising to consider a local temperature and chemical potentials in a microcanonical sampling. However, there is no contradiction here. Conservation laws are imposed only over the whole patch. Variations in energy density, quantum number densities, and collective velocities from cell to cell within a patch are allowed and characterized by the local values of T and μ. Preserving these local variations is important to ensure a faithful description of higher order azimuthal anisotropies [30], which otherwise would be smeared. The probability P of a given particle configuration in a patch is a product of the usual Cooper-Frye factors and global delta-functions which guarantee the conservation laws over the patch:

P({x, p}) ∝ [Π_{i=1}^{N} (g_i/(2π)³) (p_i^μ dσ_μ(x_i)/p_i⁰) exp((μ_i(x_i) − p_i^μ u_μ(x_i))/T(x_i))] δ⁽⁴⁾(Σ_i p_i^μ − p^μ_tot) δ_{ΣB_i, B_tot} δ_{ΣS_i, S_tot} δ_{ΣQ_i, Q_tot}.   (2)

Note that here the number of particles of each hadron species N_s is not fixed, and neither is the total number of particles N = Σ_s N_s. Instead, both are distributed according to Eq. (2). The quantities dσ_μ, u^μ, T, and μ_B,S,Q depend on the spatial position x_i of a particle. The charges B_tot, S_tot, and Q_tot are computed using Eq. (1). In practice these charges are real numbers, not integers. To address this problem, we suggest either rounding B_tot, S_tot, and Q_tot to the nearest integers or distributing the non-integer parts according to a multinomial distribution, which is guaranteed not to obfuscate the total charges on the hypersurface [33].
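The second option can be sketched as follows (a toy Python illustration with invented names, not the actual implementation of [32]): each patch keeps the integer part of its charge, and the leftover fractional parts are handed out as extra units with probability proportional to the fractions, so the hypersurface total is preserved.

```python
import math
import random

random.seed(3)

def distribute_fractions(patch_charges):
    """Turn real-valued patch charges into integers: each patch keeps
    floor(q), and the integer sum of the fractional parts is distributed
    as extra units, each drawn with probability proportional to a
    patch's fractional part (a multinomial draw over patches)."""
    floors = [math.floor(q) for q in patch_charges]
    fracs = [q - f for q, f in zip(patch_charges, floors)]
    total_frac = sum(fracs)
    extra = round(total_frac)  # integer if the hypersurface total is integer
    for _ in range(extra):
        r = random.random() * total_frac
        acc = 0.0
        for i, f in enumerate(fracs):
            acc += f
            if r <= acc:
                floors[i] += 1
                break
    return floors

# Four patches whose real-valued baryon numbers sum to 8: no matter how
# the two fractional units land, the total stays 8.
charges = [2.3, 0.7, 1.5, 3.5]
result = distribute_fractions(charges)
print(sum(result))  # 8
```

Because the draws are with replacement, a single patch may absorb more than one extra unit, exactly as in a multinomial distribution of the fractional parts.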
Direct sampling of the N-particle probability distribution expressed by Eq. (2) is difficult due to the unknown normalization factor and the δ-functions. To sample it we apply a Metropolis algorithm, a Markov chain Monte Carlo method, which in our case is closely related to solving the Boltzmann equation with the stochastic rate method [27]. The state of our Markov chain ξ comprises the multiplicities, coordinates, and momenta of all particles in the patch. The initial state is an arbitrary set of particles that satisfies the required conservation laws, Eq. (1). Charge conservation in the initial state is fulfilled by an ad hoc heuristic algorithm picking the lightest particles with the necessary charges, while energy-momentum conservation is achieved by rescaling momenta as in [21]. Given a state ξ we propose a state ξ′ with probability T(ξ → ξ′) and then decide whether this state should be accepted, with probability A(ξ → ξ′). Therefore, the probability to obtain a state ξ′ from ξ is w(ξ → ξ′) = T(ξ → ξ′) A(ξ → ξ′). The master equation connecting the probabilities to obtain the state ξ at steps t and t + 1 is

P_{t+1}(ξ) = P_t(ξ) + Σ_{ξ′} [P_t(ξ′) w(ξ′ → ξ) − P_t(ξ) w(ξ → ξ′)].   (3)

After many steps the probability P_{t→∞}(ξ) should converge to P(ξ) given by Eq. (2). A sufficient condition for this is known as the detailed balance condition:

P(ξ) w(ξ → ξ′) = P(ξ′) w(ξ′ → ξ).   (4)

This condition is satisfied if

A(ξ → ξ′) = min(1, [P(ξ′) T(ξ′ → ξ)] / [P(ξ) T(ξ → ξ′)]).   (5)

There is some freedom in selecting the proposal matrix T(ξ → ξ′). We choose it such that it conserves energy, momentum, and quantum numbers. Consequently, our Markov chain never leaves the desired subspace where the conservation laws are fulfilled. Our proposal matrix may be viewed as 2 → 3 and 3 → 2 stochastic "collisions" [27] on the hypersurface. However, we note that there is no real time involved and the "collisions" are not related to any physical process. They are simply a mathematical device to sample the distribution of Eq. (2). The proposal procedure is the following:
1. With 50% probability choose a 2 → 3 or a 3 → 2 transition.
2. Select the "incoming" particles by uniformly picking one of all possible pairs or triples.
3. Select the outgoing channel democratically with probability 1/N_ch, where N_ch is the number of possible channels satisfying both quantum-number and energy-momentum conservation.
4. For the selected channel sample the "collision" kinematics uniformly from the available phase space with probability dR_n/R_n, n = 2 or 3.
5. Choose a cell for each of the outgoing particles uniformly from all cells in the patch. Note that this choice matters for the acceptance probability, because the corresponding temperatures, chemical potentials, velocities u^μ, and normal 4-vectors dσ_μ in Eq. (9) are taken at the cells where the outgoing particles are placed.
Here R_n is the phase-space integral for n outgoing particles,

R_n(√s) = ∫ dR_n,   dR_n = (2π)⁴ δ⁽⁴⁾(P_tot − Σ_{j=1}^{n} p_j) Π_{j=1}^{n} d³p_j / ((2π)³ 2p_j⁰),   (6)

where √s = (P^μ_tot P_tot μ)^{1/2}. The integration of dR₂ and dR₃ is possible analytically [27,28]. Our proposal procedure generates the probabilities for the 2 → 3 and 3 → 2 proposals given in Eqs. (7) and (8), in which G₂ = N(N−1)/2! and G₃ = N(N−1)(N−2)/3! denote the total numbers of incoming pairs and triplets of any species, while G₂^ch and G₃^ch are the numbers of ways to select a given incoming particle species. Consequently, G₂^ch/G₂ and G₃^ch/G₃ represent the probabilities to obtain pairs and triplets of a given incoming species. The numbers of possible triplets and pairs of outgoing species with appropriate quantum numbers are denoted by N₃^ch and N₂^ch. Inserting the proposal probabilities, Eqs. (7) and (8), as well as the desired probability distribution, Eq. (2), into the expression for the acceptance probability, Eq. (5), we arrive, after some algebra, at the acceptance probability of Eq. (9), where we made use of the relation in Eq. (10). Here n = 2, 3 and m = 3, 2 are the numbers of incoming and outgoing particles, and N is the total number of particles in the entire patch before the proposed Markov chain step. The product in the numerator of Eq. (9) is taken over the outgoing particles and the one in the denominator over the incoming particles. The quantities dσ, u, T, and μ are evaluated in the cells where the particles are proposed to be, or are coming from. The factors k_m^id and k_n^id are the numbers of outgoing and incoming identical species in the reaction. Note that the sampling accounts for the variations in temperature and chemical potential within the patch. Also, and equally important, the distribution function f may contain viscous corrections. To summarize, the algorithm consists of multiple Markov chain steps, where a step is proposed according to Eqs. (7) and (8) and accepted with the probability given by Eq. (9).
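The chain described above follows the generic Metropolis-Hastings pattern, which can be sketched as follows (a toy Python illustration with an invented one-dimensional target distribution, not the actual sampler of [32]):

```python
import random

random.seed(2)

def metropolis_step(state, propose, accept_prob):
    """One Markov chain step: propose a new state, accept it with
    probability A(state -> proposal), otherwise keep the old state."""
    proposal = propose(state)
    if random.random() < accept_prob(state, proposal):
        return proposal
    return state

# Toy target distribution P(n) ~ 2^(-n) on n = 1, 2, 3, ... with a
# symmetric +-1 random-walk proposal, so that the detailed-balance
# acceptance reduces to A = min(1, P(n')/P(n)).
def propose(n):
    return max(1, n + random.choice((-1, 1)))

def accept_prob(n, m):
    return min(1.0, 2.0 ** (n - m))

state, counts = 1, {}
for _ in range(200_000):
    state = metropolis_step(state, propose, accept_prob)
    counts[state] = counts.get(state, 0) + 1

# Visit frequencies approach the target: counts[1]/counts[2] is close to 2.
```

In the particlization sampler the role of `propose` is played by the conservation-law-preserving 2 → 3 and 3 → 2 "collisions" of Eqs. (7) and (8), and `accept_prob` by Eq. (9); the structure of the loop is the same.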
We have tested the above sampling algorithm both in a simplified and in a somewhat more realistic setup. First, we consider a patch consisting of one cell with u^μ = (1, 0, 0, 0), dσ^μ = (V, 0, 0, 0), and f(p) = exp(−p⁰/T). In this case Eq. (2) represents a well-known microcanonical distribution, which has been sampled before [25,26]. Additionally, analytical expectations for the scaled variances of hadron multiplicities in the large volume limit are available in this case [29]. Our algorithm reproduces the analytical results for both means and scaled variances well. The resulting momentum distributions are very close to Boltzmann, as expected. This is a non-trivial result, because the multiplicity and momentum distributions are not a direct input to the sampling; the only inputs are the volume V, total energy, momentum, and conserved charges. Next, we demonstrate our sampling in a more realistic scenario, where we consider a patch consisting of three cells with non-trivial values of u^μ, dσ^μ, and T, which also vary from cell to cell [34]. Conservation laws are imposed over the entire patch, while the local energy density and charge densities vary from cell to cell. As shown in Fig. 1(a), we obtain the expected means in each cell, agreeing with the grand-canonical standard Cooper-Frye sampling. This demonstrates that we correctly reproduce the temperature variation from cell to cell. The scaled variances ω of multiplicities in the patch, shown in Fig. 1(b), are already drastically different from the standard grand-canonical Cooper-Frye sampling, where ω = 1 by construction. In our case the variances agree with the microcanonical analytical expectation from [29]. Finally, in Fig. 1(c) we demonstrate the non-trivial correlations emerging from conservation laws.
Unlike for variances, to our knowledge there is no analytical calculation of correlations in a microcanonical ensemble, although in principle such calculations are possible using techniques developed in [29]. Besides testing the sampler implementation, Figs. 1(b) and 1(c) also show the expected effect of conservation laws on fluctuations and correlations in heavy ion collisions.
In summary, we have proposed, implemented, and tested a particlization method which takes into account local event-by-event conservation laws in a systematic fashion. Localness is achieved by splitting the hypersurface into patches and enforcing conservation laws in every patch. Event-by-event conservation of total energy, momentum, baryon number, strangeness, and electric charge over the patch is guaranteed by the algorithm. At the same time, local cell-by-cell variations of energy and charge densities within a patch are preserved, ensuring that observables sensitive to these variations, such as higher order azimuthal asymmetries [30], are not smeared out. The proposed method is essential for studies of correlations and fluctuations, especially in combination with stochastic hydrodynamics, since it will not obfuscate their correlations and fluctuations. It may also be applied to exploring small systems, where the impact of event-by-event conservation laws is large, as well as charge-dependent correlations relevant for the chiral magnetic effect [31]. The code used for sampling is publicly available [32]. We have checked that its execution time for realistic hypersurfaces is not impractical [35]. As a next step, we will apply it to search for observable effects of critical fluctuations in heavy ion collisions. Through accounting for local conservation laws we might be able to detect critical fluctuations in coordinate space via correlations in momentum space which were previously not visible when using standard particlization.
We would like to thank members of the BEST collaboration for fruitful discussions, and A. Wergieluk for a critical reading of the manuscript. This work was supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under contract number DE-AC02-05CH11231 and received support within the framework of the Beam Energy Scan Theory (BEST) Topical Collaboration.