Effects of local event-by-event conservation laws in ultra-relativistic heavy ion collisions at particlization

Many simulations of relativistic heavy ion collisions involve the switching from relativistic hydrodynamics to kinetic particle transport. This switching entails the sampling of particles from the distribution of energy, momentum and conserved currents provided by hydrodynamics. Usually this sampling ensures the conservation of these quantities only on the average, i.e. the conserved quantities may actually fluctuate among the sampled particle configurations and only their averages over many such configurations agree with their values from hydrodynamics. Here we apply a recently invented method [Oliinychenko, Koch; PRL 123, 18, 182302 (2019)] to ensure conservation laws for each sampled configuration in spatially compact regions (patches) and study their effects: from the well-known (micro-)canonical suppression of means and variances to little studied (micro-)canonical correlations and higher order fluctuations. Most of these effects are sensitive to the patch size. Many of them do not disappear even in the thermodynamic limit, when the patch size goes to infinity. The developed method is essential for particlization of stochastic hydrodynamics. It is useful for studying the chiral magnetic effect, small systems, and in general for fluctuation and correlation observables.


I. INTRODUCTION
One of the major goals of relativistic heavy ion collision experiments is to study the transition from a hadron gas to the quark-gluon plasma. The RHIC Beam Energy Scan experimental program is devoted to searching for the critical point of this transition by lowering the collision energy from 200 GeV per nucleon pair down to 7 GeV. Future experiments at NICA and FAIR, which also have this search among their goals, are currently under construction. Multiple phenomenological models predict a critical point, but its location on the phase diagram varies considerably from model to model, and scenarios in which the critical point exists but is not experimentally accessible are not excluded. The most promising experimental signatures of the critical point appear to be enhanced fluctuations. Therefore, considerable attention is devoted to correlation and fluctuation observables, such as proton, net-proton, net-charge, and kaon cumulants [2-5] and correlations [6], fluctuations of various particle ratios [7], transverse momentum correlations [8], and charge balance functions [9] (for a recent review see [10]). Other promising observables include light nuclei production, which may be related to fluctuations through coalescence [11].
Understanding these observables and linking them to a possible critical point requires dynamical modelling of heavy ion collisions, including a treatment of fluctuations and correlations. Using a transport approach is one possible way (for a transport code including both partons and hadrons see e.g. PHSD [12]), but it inevitably results in large theoretical uncertainties related to hadronization, because the exact mechanism of hadronization is not known. Moreover, PHSD uses a test-particle method, which artificially reduces correlations, hence the recent effort to develop the PHQMD approach [13], which is free of this limitation. An alternative way is hydrodynamic and/or hybrid (hydrodynamic + transport) simulations, where hadronization is encoded in the equation of state (EoS). The latter is parameterized (it is not known from first principles at large baryon density), and the parameters can be adjusted to fit measured particle yields, spectra, flow, correlations, and other observables.
The effects of the critical point enter the EoS, but this alone is insufficient to model the vicinity of the critical point. In addition, the slow critical modes have to be explicitly taken into account in the hydrodynamic equations. This is done in fluctuating hydrodynamics, either by extending the hydrodynamic equations with stochastic terms directly [14-17] or by coupling them to a non-equilibrium field with stochastic noise [18,19]. A deterministic approach to treat second order correlations and fluctuations ("hydro+") is also available [20].
Hybrid approaches involving hydrodynamics (fluctuating or not) need a particle sampler to convert hydrodynamic fields into particles that subsequently evolve according to kinetic equations, including collisions and possibly mean-field dynamics. To study correlations or fluctuations, a key requirement for such a sampler is that it conserves energy-momentum and charges in every event. Otherwise correlations originating from conservation laws are lost, and fluctuations are uncontrollably enhanced [21]. In other words, the sampler should be a local microcanonical sampler. Let us explain this term in detail. Suppose that hydrodynamics provides an ensemble of events, each with a particlization hypersurface H_i, and that we construct sets of particles from each H_i (we refer to this as "performing particlization") multiple times. Thus we obtain sets of particles P_ij, where j ∈ {1, ..., N_samples} for each H_i. If the transformation H_i → P_ij is performed in such a way that the conservation of energy, momentum, and discrete charges is fulfilled only on average over j (meaning, for example, that (1/N_samples) Σ_j Q_j → Q_hydro as N_samples → ∞, but in general Q_j ≠ Q_hydro), we call this "grand-canonical sampling". If the conservation laws are fulfilled for every j, meaning Q_j = Q_hydro, then we call it "microcanonical sampling", or event-by-event conservation laws. If this is the case not only for the entire simulation region, but also for smaller space-time regions, we call it "local microcanonical sampling". In the case of one sample per hypersurface, by construction, local microcanonical sampling preserves the fluctuations of conserved quantities over H_i and transfers them to P_ij without changes. In contrast, grand-canonical sampling generates additional fluctuations by allowing the conserved quantities of the generated particles to differ from those of the hydrodynamic events [21]. In this paper we are concerned with preserving correlations and fluctuations and, therefore, wish to adopt local microcanonical sampling. In our recent work [1] we invented, described, and tested a method to do so.
Several previous attempts fulfilled only some conservation laws (but never all of them), and most of them were generating ad hoc distributions different from the actual microcanonical one, see [23] for an overview. In this work we apply the method of [1] accounting for energy-momentum, baryon number, strangeness, and charge conservation microcanonically. We do not account for angular momentum or parity conservation, although in principle they can be included too.
We have previously pointed out that the sampling should be "local". However, the required degree of localness is not immediately obvious. At first glance it may seem that the more local, the better. Because the numerical solution of the hydrodynamic equations is often obtained on a discrete space-time grid, it seems easy and practical to enforce conservation laws in every cell of this computational grid. However, in typical simulations of heavy ion collisions these cells are so small that the average number of particles per cell is below one. This problem is not typical for other fields, although one can always create a similar situation by choosing a sufficiently fine computational grid. Another case where a similar problem can emerge is the simulation of a very dilute solution, where the number of fluid molecules per cell is large, but the average number of dissolved molecules per cell is below one. Introducing fractional particles is a viable option to approach this problem [21], but the subsequent treatment of these fractional particles in the transport is challenging. Here we explore an alternative way: we define regions where the conservation laws will be fulfilled.
It follows from the considerations above that the scale b on which local conservation laws are enforced cannot be arbitrarily small. It has to be large enough to include the hydrodynamic scale; in other words, it should be larger than the mean free path. Also, a patch of size b should contain much more than one particle on average, so ρb³ ≫ 1, where ρ is the particle density. On the other hand, b should be smaller than the typical correlation length one wishes to study. We explore this further by dividing the hypersurface into "patches", where the conservation laws are enforced, and varying the patch size.
Our goal in this paper is to test a local microcanonical sampler, systematically trying different patch sizes and different ways to partition the hypersurface. The methodology is discussed in Sec. II, which comprises splitting the hypersurface into patches where conservation laws are enforced (Sec. II A), the sampling algorithm within a patch (Sec. II B), a discussion of its convergence and runtime (Sec. II C), and the issue of negative contributions, which is known to plague grand-canonical samplers (Sec. II D). After the extensive testing described in Appendices A and B, we apply the sampler to a realistic hypersurface in Sec. III, where we compute means, correlations, and higher order fluctuations of particle multiplicities and conserved quantities within a rapidity cut, and also study spectra and flow. All of these are studied as a function of the patch size. A summary, discussion, and outlook follow in Sec. V. We have made the code for partitioning the hypersurface and for microcanonical sampling publicly available at [45].

II. METHODOLOGY
In practice the particlization hypersurface is given as a list of cells with space-time coordinates x_j^μ, velocities u_j^μ, temperatures T_j, chemical potentials μ_Bj, μ_Sj, μ_Qj, and normal 4-vectors dσ_j^μ. Our task here is twofold: (i) partition the hypersurface into patches, where local conservation laws are enforced, and (ii) sample particles within every patch, while accounting for the variations of the above quantities within a patch. The latter is crucial if one wants to ensure that observables sensitive to these variations, such as higher order azimuthal asymmetries, are not smeared out. Tasks (i) and (ii) are independent, therefore we describe them separately.

A. Splitting the hypersurface into patches
Above we introduced a spatial scale b over which the conservation laws should be enforced. There are a number of questions to be addressed regarding this scale. What is its physical meaning? How big or small should it be? How should one partition a hypersurface into patches, given that such a partitioning is not unique even if the patch size is fixed? Which observables depend on the choice of patch size and the way of partitioning, and how significant are these dependencies? These are the questions we discuss in this section.
As already pointed out, the spatial extent of a patch should be neither too small nor too large. By Lorentz-boosting the hypersurface one can see that the same is true for the time extent of the patch; therefore we further call b a space-time size. Already from the condition ρb³ ≫ 1 it is clear that the space-time size of all patches cannot be the same, because the local particle density varies. If one chooses to have patches of the same space-time size, then for certain particlization hypersurfaces (to a larger extent for isochronous ones, to a lesser extent for iso-energy-density ones) some patches will contain many particles on average (here "average number of particles" is specifically the grand-canonical mean computed from the hydrodynamic variables, as given by Eq. (C2)), and some will contain less than one particle. Furthermore, particles within patches of the same spatial size but different density would have different mean free paths. In addition to breaking the ρb³ ≫ 1 condition, such a situation is undesirable for our study of microcanonical effects, because we prefer to have the patch size as an interpretable and uniform control parameter of microcanonical effects. Indeed, given the same b for all patches, patches with larger density are less sensitive to microcanonical effects, and patches with smaller density are more sensitive. Therefore we suggest controlling the patch size by its rest frame energy. This ensures that our requirements for patches are always fulfilled by construction. An alternative possibility is to use the average number of particles per patch for this purpose. We tried this and obtained results similar to those presented here. For the rest of the paper, our patches are formed by combining the nearest hydrodynamic computational cells in space-time until the required rest frame energy E_patch is reached. The energy E_patch is a parameter that uniformly controls the size of the microcanonical effects.
The parameter E_patch is not just a technical parameter; it has a clear physical meaning. Suppose that the quark-gluon fluid turns from a continuous stream into separate droplets. Then E_patch is the rest frame energy of one droplet, assuming of course that the droplets all have the same size. Here we purposefully adopt this assumption to obtain systematic results as a function of E_patch. Physically, however, the droplets can be of different sizes.
Note that scenarios with droplet formation are usually overlooked in hydrodynamic simulations, where the density at the boundaries drops to zero continuously, because a sharp surface and surface tension are not included. If these are accounted for, they may lead to the onset of the well-known Plateau-Rayleigh instability, where a continuous flow of fluid breaks into droplets if the ratio of the kinetic energy to the surface energy (the Weber number) is large enough. This phenomenon is ubiquitous and can be observed, for example, in the usual flow of water from a faucet. It is conceivable that a similar separation into droplets occurs in heavy ion collisions. Moreover, the Plateau-Rayleigh instability is not the only way to create droplets. They could also be formed as a consequence of cavitation or due to spinodal instabilities [22]. Regardless of the mechanism, we show that if droplets are formed, there are observable consequences, such as a suppressed number of particles at high p_T and an enhanced v_2 at high p_T, and these observables depend on E_patch.
While E_patch has a physical meaning and will be further studied as a physical parameter, it is not sufficient to uniquely define the partitioning into patches. The partitioning also depends on the algorithm, which we describe next. We start by choosing an unclustered cell (one not belonging to any patch) and add the closest unclustered cells until the total rest frame energy reaches E_patch. Then the selected cells form a patch, and the procedure is repeated until no unclustered cells remain. In this algorithm there are two choices to be made: (i) how to select the initial cell, and (ii) how to define the distance used to look for the closest cells. An additional uncertainty arises from the fact that in a given patch the conserved charges, such as baryon number, strangeness, and electric charge, are most likely non-integer.
For microcanonical sampling, however, they have to be integer numbers, as it generates particles with integer charges. Therefore there is an additional algorithmic choice (iii) of how to assign integer conserved charges to the patches. Next we discuss the choices (i-iii) and explore how much they influence the results. For the choices (i-ii) the following combinations have been tested:
1. starting from the cell with minimal time, clustering by the distance √(Δt² + Δr²);
2. starting from the cell with maximal spatial rapidity η = ½ ln[(t + z)/(t − z)], clustering by the distance √(Δt² + Δr²);
3. starting from the cell with maximal spatial rapidity η, clustering by the distance in spatial rapidity Δη; this choice has the advantage of Lorentz invariance, so that the partitioning remains the same for a boosted hypersurface;
4. starting from the cell with maximal energy, clustering by the distance √(Δr²/d₀² + (ΔT/σ_T)² + (Δμ_B/σ_μB)²), where σ_T and σ_μB are the scaled variances of the temperature and the baryochemical potential μ_B over the hypersurface, and d₀ = 2 fm; the idea of this choice is to form patches around hot spots and to reduce the variations of temperature and baryochemical potential within a patch.
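The greedy clustering described above can be sketched as follows. This is a simplified illustration, not the public code at [45]: the cell fields and the nearest-neighbor metric are placeholder assumptions, and for brevity the patch energy is approximated by a plain sum of cell energies rather than the invariant mass of the summed cell four-momenta.

```python
import numpy as np

def split_into_patches(cells, e_patch):
    """Greedy partitioning sketch: cluster hypersurface cells into
    patches of (approximately) rest-frame energy e_patch.

    `cells` is a list of dicts with hypothetical keys 'x' (space-time
    position as a 4-array) and 'energy' (the cell's energy
    contribution); the real implementation works with full
    hydrodynamic cell data instead.
    """
    unclustered = set(range(len(cells)))
    patches = []
    while unclustered:
        # choice (i): here, seed with the earliest-time unclustered cell
        seed = min(unclustered, key=lambda i: cells[i]['x'][0])
        # choice (ii): here, the Cartesian distance sqrt(dt^2 + dr^2)
        x0 = cells[seed]['x']
        order = sorted(unclustered,
                       key=lambda i: float(np.sum((cells[i]['x'] - x0) ** 2)))
        patch, energy = [], 0.0
        for i in order:
            patch.append(i)
            energy += cells[i]['energy']
            if energy >= e_patch:
                break
        unclustered -= set(patch)
        patches.append(patch)
    return patches
```

By construction every cell ends up in exactly one patch, and every patch except possibly the last one formed reaches the target energy.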
As already mentioned, after the patches are formed in this way, their net charges B_k, S_k, Q_k, k = 1, ..., N_patch, are not necessarily integer numbers. While this is not wrong by itself, microcanonical sampling requires the net charges of a patch to be integers, because sampled particles always carry integer quantum numbers. A simple approach of rounding the net charges to the nearest integer may violate global conservation laws. To illustrate this, imagine a hypersurface with net baryon number 50 split into 200 patches, each having baryon number 0.4; after the rounding procedure every patch will have baryon number 0, and therefore the whole hypersurface will have baryon number 0. Such a scenario is certainly undesirable. We would like to preserve the correct conserved quantities of the entire hypersurface: energy-momentum P_tot, baryon number B_tot, strangeness S_tot, and electric charge Q_tot. The total conserved charges of the entire system, B_tot, S_tot, and Q_tot, are integers and are conserved during the hydrodynamic evolution². Given a hypersurface where the particlization, i.e. the transition from hydrodynamic fields to particles, takes place, the total charges are related to the phase-space density obtained from the hydrodynamic fields by

B_tot = Σ_cells Σ_i B_i g_i ∫ d³p/[(2π)³ p⁰] p^μ dσ_μ f_i(p·u; T, μ_i),    (1)

and analogously for S_tot and Q_tot. Here the index i runs over all hadronic species with degeneracy g_i, μ_i = B_i μ_B + S_i μ_S + Q_i μ_Q is the chemical potential, f_i is the distribution function (in our case always a Jüttner distribution), dσ_ν is the normal 4-vector of the hypersurface cell, which is a relativistic analog of volume (see [30] for a detailed definition), and T is the temperature of this cell. In the above formula we sum over all cells of the hypersurface to obtain the total charges. The conserved charges in a given patch k are then given by the same expression, where we only sum over the cells of this patch.
Consequently, the total charges are given by the sum of the charges in all patches: Σ_k B_k = B_tot, Σ_k S_k = S_tot, and Σ_k Q_k = Q_tot. However, as discussed above, there is no reason for the charges in a given patch to be integer. To achieve this we need to make additional algorithmic choices. As we do it for every charge independently, let us discuss only the baryon number. The non-integer remainders in every patch are w_k = B_k − ⌊B_k⌋, where ⌊x⌋ denotes the floor of x; they satisfy Σ_k w_k = B_tot − Σ_k ⌊B_k⌋ ≡ B̃ and 0 ≤ w_k < 1. In every patch we need to turn the non-integer remainder w_k into either σ_k = 0 or σ_k = 1, while preserving the sum, Σ_k σ_k = B̃. We propose to do this stochastically, with the probability of having σ_k = 1 being proportional to w_k. A formal

² It may happen, due to numerics or to the construction of the initial state, that the total charges of the whole hypersurface are not integers. In this case we round them to the nearest integer.
FIG. 1. Panels (e) and (f) show how the total baryon number at midrapidity and its variance depend on the splitting algorithm. The error bars are systematic errors due to requiring integer charges within patches (see text). The hypersurface is the same realistic hypersurface from Au+Au collisions at √s_NN = 19.6 GeV that is used for the physics results.
expression for such a combined probability is

P(σ_1, ..., σ_N_patch) ∝ (Π_k w_k^σ_k) δ(Σ_k σ_k − B̃).    (2)

In other words, this is a weighted permutation of B̃ ones and N_patch − B̃ zeros, with weights proportional to w_k. This distribution is generated using a Metropolis walk (a general description of the Metropolis algorithm is given further below). One step of such a walk proposes to exchange a one at a random position k_1 with a zero at a random position k_2. This is accepted with probability min(1, w_k2/w_k1). After sufficiently many steps we arrive at a sample from the required distribution. This last step completes the separation of the hypersurface into patches: each patch has a set of cells that belong to it, it has integer total charges, and its total rest frame energy is close to E_patch. The remaining question is how much our algorithmic choices influence physical observables.
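This weighted-permutation Metropolis walk admits a very compact sketch. The function and variable names below are illustrative, not taken from the public code at [45]:

```python
import random

def assign_integer_charges(w, b_extra, n_steps=10000, seed=1):
    """Distribute b_extra extra units of charge among patches with
    fractional remainders w_k, keeping the total fixed. The stationary
    distribution is proportional to the product of w_k over the
    patches that received a one."""
    rng = random.Random(seed)
    n = len(w)
    sigma = [1] * b_extra + [0] * (n - b_extra)  # arbitrary valid start
    if b_extra == 0 or b_extra == n:
        return sigma
    ones = list(range(b_extra))    # positions currently holding a one
    zeros = list(range(b_extra, n))  # positions currently holding a zero
    for _ in range(n_steps):
        i = rng.randrange(len(ones))
        j = rng.randrange(len(zeros))
        k1, k2 = ones[i], zeros[j]
        # propose to move the one from k1 to k2; detailed balance
        # gives the acceptance min(1, w_k2 / w_k1)
        if rng.random() < min(1.0, w[k2] / w[k1]):
            sigma[k1], sigma[k2] = 0, 1
            ones[i], zeros[j] = k2, k1
    return sigma
```

The walk never changes the total number of ones, so the global charge is conserved by construction; only its distribution over the patches fluctuates.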
This question is addressed throughout the paper by showing all results for two ways of splitting: the maximal-η first cell with distance in η (panel (c) of Fig. 1) and the largest-energy first cell (panel (a) of Fig. 1). Here we additionally explore all the ways of splitting described above for variables that turned out to be among the most sensitive to the splitting algorithm: the baryon number at midrapidity and its fluctuations. We use the same hypersurface that is also used for the results discussed subsequently. For every way of splitting described above we produce 10³ samples 20 times.
For each of these 20 times there is a new assignment of integer quantum numbers to the patches. For each time we compute mean and scaled variance of the baryon number within midrapidity, |y| < 1. Then we show the means over these 20 times for the baryon number in panel (e) and for its scaled variance in panel (f) of Fig. 1. The variances of these quantities over the 20 times are shown as error bars. Therefore, the error bars represent a systematic uncertainty due to the assignment of integer quantum numbers to the patches. The difference between points in panels (e) and (f) from one splitting method to another is the systematic uncertainty due to the method of splitting hypersurface into patches.
As seen in panel (f), the assignment of integer baryon numbers within a patch matters less for the scaled variance of the baryon number at midrapidity, likely because it is an intensive quantity. However, the scaled variance exhibits a clear sensitivity to the method of splitting. This is understandable, because on our hypersurface the mean baryon number is mainly a function of rapidity η. Therefore, if one splits the hypersurface by η as shown in panel (c), the scaled variance of the baryon number at midrapidity is smaller. If one splits the hypersurface as shown in panel (a) of Fig. 1, one patch typically comprises a larger rapidity window, and the scaled variance of the baryon number at midrapidity is larger. In summary, the influence of the patch splitting algorithm on the physical observables is not negligible and should be controlled carefully. We subsequently do so by repeating all our analyses for two different splitting methods. The difference between the two should be understood as a systematic uncertainty of our method.
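The extraction of the points and error bars in panels (e) and (f) can be sketched as follows. This is a simplified illustration with placeholder names: `samples_per_rep` holds the midrapidity baryon number per event, one row per integer-charge assignment (repetition).

```python
import numpy as np

def point_with_systematics(samples_per_rep):
    """For each repetition (row) compute the mean and the scaled
    variance omega = Var(B)/<B> of the baryon number over events; the
    mean over repetitions gives the plotted point, and the spread over
    repetitions gives the systematic error bar."""
    data = np.asarray(samples_per_rep, dtype=float)
    means = data.mean(axis=1)           # <B> per repetition
    omegas = data.var(axis=1) / means   # Var(B)/<B> per repetition
    return (means.mean(), means.std(),   # point and error bar, panel (e)
            omegas.mean(), omegas.std())  # point and error bar, panel (f)
```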

B. Sampling particles in a patch with event-by-event conservation laws
After the hypersurface is partitioned into patches, we proceed to sampling particles from every patch independently. The sampling is already described in [1], but we repeat the description here for completeness. We impose conservation laws in each patch, but allow variations of local energy density, quantum number densities, and collective velocities from cell to cell within a patch. These variations are characterized by the values of temperature, T , chemical potentials µ B , µ S , µ Q , and collective fluid velocity u of the cells. For example, if a cell has a larger temperature or chemical potential it is more likely that a particle will be sampled from it. The local variations of collective velocity u are important for a faithful description of higher order azimuthal anisotropies [24], which otherwise would be smeared.
This becomes obvious if one imagines a small system, such as a pp or p-Pb collision, where the whole system may constitute a single patch. The following multi-particle probability P of a given particle configuration satisfies our requirements:

P({x_i, p_i}) ∝ [Π_s (1/N_s!)] Π_i [g_i/(2π)³] p_i^μ dσ_μ(x_i) f(p_i·u(x_i); T(x_i), μ(x_i)) × δ⁽⁴⁾(Σ_i p_i − P_tot) δ(Σ_i B_i − B_tot) δ(Σ_i S_i − S_tot) δ(Σ_i Q_i − Q_tot).    (3)

It is a product of the usual Cooper-Frye factors and global delta-functions which guarantee the conservation laws over the patch. The 1/N_s! factors ensure that Eq. (3) turns into the standard microcanonical distribution if our hypersurface is just one static cell. This property is crucial, because without it the sampling could not be called microcanonical. Note that here the number of particles N_s of each hadron species s is not fixed, and neither is the total number of particles N = Σ_s N_s. Instead, both are distributed according to Eq. (3).
The quantities dσ_μ, u_μ, T, and μ_B,S,Q depend on the spatial position x_i of a particle. The charges B_tot, S_tot, and Q_tot are computed using Eq. (1). It is important to underline that the resulting sampled particles are defined by the distribution (3), which should be the same regardless of which algorithm is used to generate it.
Sampling the N-particle probability distribution of Eq. (3) is generally difficult due to the unknown normalization factor and the δ-functions. We overcome this difficulty by applying a Metropolis algorithm, also known as a Markov chain Monte Carlo (MCMC) method, which in our case is closely related to solving the Boltzmann equation with the stochastic rate method [25]. The state of our Markov chain, ξ = ({N_s}, {x_i, p_i}), comprises the multiplicities, coordinates, and momenta of all particles. The initial state is an arbitrary set of particles that satisfies the required conservation laws (Eq. (1)). Quantum number conservation for the initial state is fulfilled by an ad hoc heuristic algorithm picking the lightest particles which can provide the required baryon number, strangeness, and electric charge. Energy-momentum conservation is achieved by rescaling the momenta as in [23]. This initial state selection does not influence the resulting samples, because it is "forgotten" by the Markov chain after a sufficient number of steps. Given a state ξ, we propose a state ξ′ with probability T(ξ → ξ′) and then decide if this state should be accepted, with probability A(ξ → ξ′). Therefore, the probability to obtain ξ′ from ξ in one step is T(ξ → ξ′) A(ξ → ξ′). The master equation connecting the probabilities to obtain the state ξ at steps t and t + 1 is

P_t+1(ξ) = P_t(ξ) + Σ_ξ′ [P_t(ξ′) T(ξ′ → ξ) A(ξ′ → ξ) − P_t(ξ) T(ξ → ξ′) A(ξ → ξ′)].    (4)

After many steps the probability P_t→∞(ξ) should converge to P(ξ) given by Eq. (3). A sufficient condition for this is known as the detailed balance condition:

P(ξ) T(ξ → ξ′) A(ξ → ξ′) = P(ξ′) T(ξ′ → ξ) A(ξ′ → ξ).    (5)

This condition is satisfied if

A(ξ → ξ′) = min(1, [P(ξ′) T(ξ′ → ξ)] / [P(ξ) T(ξ → ξ′)]).    (6)

There is some freedom in selecting the proposal matrix T(ξ → ξ′). We choose it such that it conserves energy, momentum, and quantum numbers. Consequently, our Markov chain never leaves the desired subspace where the conservation laws are fulfilled. Our proposal matrix may be viewed as 2 → 3 and 3 → 2 stochastic "collisions" [25] on the hypersurface. However, we note that there is no real time involved, and the "collisions" are not related to any physical process.
They are simply a mathematical method to sample the distribution of Eq. (3). The proposal procedure is the following: 1. With 50% probability choose a 2 → 3 or 3 → 2 transition.
2. Select the "incoming" particles by uniformly picking one of all possible pairs or triples.
3. Select the outgoing channel democratically with probability 1/N ch , where N ch is the number of possible channels, satisfying both quantum number and energy-momentum conservation.
4. For the selected channel, sample the "collision" kinematics uniformly from the available phase space dR_n, with probability dR_n/R_n, n = 2 or 3.
5. Choose a cell for each of the outgoing particles uniformly from all cells in the patch.
Note that this choice affects the acceptance probability, because the corresponding temperatures, chemical potentials, velocities u^μ, and normal 4-vectors dσ^μ in Eq. (10) will be taken at the cells where the outgoing particles are placed.
Here R_n is the phase-space integral for n outgoing particles, defined as the integral of dR_n over the momenta of the outgoing particles at fixed total momentum,

R_n(√s) = ∫ dR_n,    (7)

where √s = (P_tot^μ P_tot_μ)^1/2. The integrals of dR_2 and dR_3 can be performed analytically [25,26]. Our proposal procedure generates the probabilities T(ξ → ξ′) for the 2 → 3 and 3 → 2 proposals given in Eqs. (8) and (9). In these expressions, N(N−1)/2! and N(N−1)(N−2)/3! denote the total numbers of incoming pairs and triplets of any species, while G_2^ch and G_3^ch are the numbers of ways to select a given incoming particle species. Consequently, G_2^ch 2!/[N(N−1)] and G_3^ch 3!/[N(N−1)(N−2)] represent the probabilities to obtain pairs and triplets of a given incoming species. The numbers of possible triplets and pairs of outgoing species with the appropriate quantum numbers are denoted by N_3^ch and N_2^ch. Inserting the proposal probabilities, Eqs. (8) and (9), as well as the desired probability distribution, Eq. (3), into the expression for the acceptance probability, Eq. (6), we arrive, after some algebra, at the acceptance probability of Eq. (10), where we made use of the relation Eq. (11). Here n = 2, 3 and m = 3, 2 are the numbers of incoming and outgoing particles, and N is the total number of particles before the proposed Markov chain step. In Eq. (10) the product in the numerator is taken over the outgoing particles and the one in the denominator over the incoming particles.
The quantities dσ, u, T, μ should be evaluated in the cells where the particles are proposed to be, or are coming from. The total number of particles in the entire patch is given by N, and k_m^id and k_n^id are the numbers of identical outgoing and incoming particles in the reaction. Note that the sampling accounts for the variations of temperature and chemical potentials within the patch. Also, and equally important, the distribution function f may contain viscous corrections. To summarize, the algorithm consists of multiple Markov chain steps, where a step is proposed according to Eqs. (8) and (9) and accepted with the probability given by Eq. (10).
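The generic Metropolis construction with the acceptance probability of Eq. (6) can be illustrated on a toy discrete system. This is not the particlization sampler itself, just a minimal Metropolis-Hastings sketch with invented target weights and proposal:

```python
import random

def metropolis_hastings(p, propose, t_prob, x0, n_steps, seed=42):
    """Generic Metropolis-Hastings chain: accept a proposed state y
    with A(x -> y) = min(1, p(y) t_prob(y, x) / (p(x) t_prob(x, y))),
    which enforces detailed balance, so the visit frequencies of the
    chain converge to the target p."""
    rng = random.Random(seed)
    x = x0
    visits = {}
    for _ in range(n_steps):
        y = propose(x, rng)
        if rng.random() < min(1.0, p(y) * t_prob(y, x) / (p(x) * t_prob(x, y))):
            x = y
        visits[x] = visits.get(x, 0) + 1
    return visits

# toy target on three states with a symmetric nearest-neighbor proposal
weights = {0: 1.0, 1: 2.0, 2: 3.0}

def propose(x, rng):
    return (x + rng.choice([-1, 1])) % 3

def t_prob(x, y):
    return 0.5  # each neighbor proposed with probability 1/2
```

Run long enough, the visit fractions approach 1/6, 2/6, and 3/6, mirroring how the particle-level chain converges to the distribution of Eq. (3).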
Testing and validation of the sampling are performed in Appendices A and B, as well as in Ref. [1].

C. Convergence and runtime
Our goal is to generate N_samples samples from the distribution (3) as fast as possible, but in such a way that they are not correlated with each other. In addition, these samples should not depend on the ad hoc initial state of the Markov chain. The last two requirements imply a sufficient (the larger the better) number of Markov chain steps, while runtime minimization demands the smallest possible number of steps. Here we describe our approach to this trade-off, which favors robustness over runtime minimization.
After generating the initial state of our Markov chain, we perform a warm-up of N_warmup steps as described above to reach equilibration. Because the warm-up is performed only once per hydrodynamic hypersurface, we simply play it safe: we set a large N_warmup = 10⁶ and check that it produces distributions that do not change if N_warmup is increased.
Then the resulting particles are printed out.
The next sample should not be correlated with the previous one. This is achieved by performing N_decor steps between printing out samples. Even after this, it is not guaranteed that the required decorrelation is reached. Insufficient decorrelation mainly exhibits itself as spikes in momentum spectra, which typically occur in the high-momentum tail of the distribution. These spikes originate from one or two particles "stuck" in a corner of momentum space for many Markov chain steps. To get rid of these spikes, we perform additional N_decor "2 ↔ 2 elastic" steps, described further below. Then we check if any particles remained unchanged after these steps. If there are unchanged particles, we perform N_decor 2 ↔ 2 elastic steps again, and repeat these blocks of N_decor 2 ↔ 2 elastic steps until all particles have changed. Then we print out the resulting particles. The whole procedure is repeated as many times as the number of events we need. In our calculations we used N_decor = 200 and N_decor = 500 and did not observe any difference in the results. For the tests presented in the Appendices and the tests in [1], N_decor = 500 was used.
The 2 ↔ 2 elastic steps are Markov chain steps in which the same particle species are proposed, but their cells and momenta are allowed to change. They are used for decorrelation for a single reason: they cannot bias multiplicity distributions, because they do not change any multiplicities. In contrast, repeating 2 ↔ 3 decorrelation blocks until all particles are changed may bias multiplicity distributions. As an extreme example, consider a patch with only 2 particles, and set N_decor = 1. Forcing 2 ↔ 3 decorrelation steps until all particles are changed means that the next sample always contains 3 particles, even though the mean number of particles can be set arbitrarily close to 2. To avoid this type of bias we adopt 2 ↔ 2 elastic steps for decorrelation. The acceptance probability of the 2 ↔ 2 elastic steps is expressed by Eq. (6), with m = n = 2 and most of the factors cancelling, resulting in

A_2↔2 = min(1, [Π_out p_j^μ dσ_μ(x_j) f(p_j·u(x_j); T(x_j), μ(x_j))] / [Π_in p_i^μ dσ_μ(x_i) f(p_i·u(x_i); T(x_i), μ(x_i))]),    (12)

where the product in the numerator is over the outgoing particles and the product in the denominator is over the incoming ones. As in Eq. (10), the quantities dσ, u, T, μ should be evaluated in the cells where the particles are proposed to be, or are coming from.
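The decorrelation loop described above can be sketched as follows. This is purely illustrative: `elastic_step` stands in for one accepted or rejected 2 ↔ 2 elastic Markov chain step and is assumed to return the indices of the particles it actually changed.

```python
import random

def decorrelate(particles, elastic_step, n_decor, rng):
    """Repeat blocks of n_decor elastic steps until every particle has
    changed at least once, so no particle stays 'stuck' between two
    printed samples. Multiplicities are never touched, so this loop
    cannot bias multiplicity distributions."""
    changed = set()
    while len(changed) < len(particles):
        for _ in range(n_decor):
            changed |= set(elastic_step(particles, rng))
    return particles

# toy elastic step: swap the "momenta" (here plain numbers) of a random pair
def toy_elastic_step(particles, rng):
    i, j = rng.sample(range(len(particles)), 2)
    particles[i], particles[j] = particles[j], particles[i]
    return (i, j)
```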
Mainly due to the decorrelation, our sampling procedure turns out to be rather time-consuming: it takes about one minute per patch per 10^4 events for E_patch = 20 GeV.
As the sampling in every patch is performed independently, we have parallelized our code over the patches. The quadratic dependence on E_patch is due to decorrelation; therefore, one way to speed up the sampling dramatically is to relax the decorrelation requirements, which in our case are very strict. Another possibility is to use N ↔ N elastic steps for decorrelation instead of 2 ↔ 2.

D. Negative contributions
Our microcanonical sampler currently treats the long-standing problem of negative Cooper-Frye contributions [27][28][29] in a special way, different from a typical grand-canonical sampler. To set the stage, let us first explain the problem. Grand-canonical samplers use the Cooper-Frye formula to compute how many particles should be produced from a cell with four-volume dσ^μ at a given momentum:

$$ p^0 \frac{dN_i}{d^3p} = \frac{g_i}{(2\pi)^3}\, p_i^\mu d\sigma_\mu\, \exp\left(\frac{\mu_i - p_i^\nu u_\nu}{T}\right). \qquad (13) $$

The factor p_i^μ dσ_μ is negative for particles that cross the hypersurface inwards. These negative contributions are necessary to conserve energy, momentum, and charges across the hypersurface. However, they cannot be sampled, because they come with negative weights. Moreover, integrating Eq. (13) over momenta (as is done, for example, in Appendix C), one can see that the net particle flow is proportional to u^μ dσ_μ, which can also be negative. Therefore, the usual solution is: (1) ignore cells with negative u^μ dσ_μ, i.e. cells where the net particle flow is directed inwards; in this way both positive and negative contributions from these cells are neglected; and (2) for cells with positive u^μ dσ_μ, sample only particles with positive p_i^μ dσ_μ, which formally corresponds to multiplying the distribution by θ(p_i^μ dσ_μ). This cuts off the negative energy flow, i.e. the energy flow inwards through the hypersurface. To summarize, the usual Cooper-Frye sampler ignores the inward flow of energy, as well as the outward flow originating from cells with negative u^μ dσ_μ. The same holds for the flow of momentum and charges (B, S, Q). As a consequence, conservation laws are violated even on average over events, unless the sampler is intentionally modified to avoid this, as in [30]. However, existing modifications of this kind are ad hoc [23] and do not reproduce a canonical or microcanonical ensemble in a box.
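The two-step prescription of the usual grand-canonical sampler can be sketched as follows (a hypothetical helper, not the paper's code; only the sign logic described above is implemented):

```python
import math

def minkowski_dot(a, b):
    # 4-vector dot product with metric signature (+, -, -, -)
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

def usual_cooper_frye_weight(p, dsigma, u):
    """Cooper-Frye factor handled the 'usual' grand-canonical way:
    (1) cells with u^mu dsigma_mu < 0 (net inward particle flow) are
        skipped entirely;
    (2) in the remaining cells, particles with p^mu dsigma_mu < 0 are
        cut off, i.e. the distribution is multiplied by theta(p.dsigma)."""
    if minkowski_dot(u, dsigma) < 0.0:
        return 0.0                              # step (1)
    return max(0.0, minkowski_dot(p, dsigma))   # step (2)
```

Both discarded pieces (inward-going particles and net-inward cells) carry energy and charge, which is why this prescription violates conservation laws even on average.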
We encountered a practical example of the negative-contribution problem when we tried to use a hypersurface from MUSIC hydrodynamics with dynamical initialization [31] at 19.6 GeV and 30-40% centrality. This initial state results in a highly irregular particlization hypersurface, on which the ratio of the total inward particle flow to the net particle flow, u^μ dσ_μ θ(−u^μ dσ_μ)/u^μ dσ_μ, is around −23%. In this case the total energy of the particles generated by iSS is around 20% smaller than that of the hypersurface.
Our microcanonical sampler deals with the negative contributions in the following way.
The conserved quantities are computed first according to Eq. (1), where the negative contributions are present. Therefore, the conserved quantities of the hypersurface are equal to those of the sampled particles, unlike in the grand-canonical sampler. Then the particles, sampled according to Eq. (3), are allowed to fly inwards. In other words, the negative contributions are actually sampled. This is possible because the p^μ dσ_μ factors, which can be negative, enter the multi-particle probability distribution (Eq. 3) as a product. While the product must be positive, the sign of an individual p^μ dσ_μ is not restricted, and we are thus able to sample negative contributions.
At first glance this seems to be the solution to the long-standing negative-contributions problem, but unfortunately it is not. The approach is suitable for particlization of pure hydrodynamics, but not necessarily for a hybrid simulation. One can see this immediately by considering a Sun-like object, as proposed in [28]. The hypersurface there is a static sphere with u^μ = (1, 0, 0, 0); therefore, the numbers of particles crossing it inwards and outwards are equal. Our microcanonical sampler then correctly computes that the net energy flow across the hypersurface is 0, and does not sample any particles. In a simulation we need a different outcome: sampling particles going outwards, and absorbing inward-going particles from the transport as source terms. This cannot be achieved by sampling alone.

III. APPLICATION TO A REALISTIC HYPERSURFACE
In this section we apply the microcanonical sampler to its main intended use case: particlization of hydrodynamics in heavy ion collision simulations. This serves three purposes: first, it is a comprehensive test of both the sampler performance and the patch splitting procedure; second, it demonstrates the sensitivity of observables to the patch size; third, it allows us to study correlations due to conservation laws as a function of kinematic cuts, such as those measured recently in [6].
For these purposes we consider a typical particlization hypersurface from 30 − 40% mid-central Au+Au collisions at √s_NN = 19.6 GeV. It is computed by a 3 + 1 D MUSIC hydrodynamic simulation [32] with an event-averaged Monte-Carlo Glauber initial condition.
The hypersurface corresponds to a constant energy density of 0.4 GeV/fm^3. This is the same setup as in Ref. [33]. The idea behind choosing √s_NN = 19.6 GeV is to have a hypersurface large enough to demonstrate the capabilities of the sampler, but small enough to allow generating the statistics necessary for computing higher-order fluctuations. The partitioning of the hypersurface into patches was performed 10 times, and for each partitioning 2 · 10^4 samples were generated; the total number of samples is therefore N_ev = 2 · 10^5. The smoothed, event-averaged initial condition is particularly important, because it leads to a smooth particlization surface with negligible negative Cooper-Frye contributions. This allows for a fair comparison between micro- and grand-canonical samplers, which treat negative contributions in different ways, see Sec. II D.
First of all, we demonstrate that the conservation laws over the hypersurface are indeed fulfilled in our sampler. In Fig. 2 we compare the distributions of total energy, the x-component of momentum, net baryon number, net strangeness, and net electric charge from our microcanonical sampler and from the grand-canonical sampler iSS, described and tested in [34]. The effects of the local microcanonical sampler are most evident as one varies the patch energy. For a single patch, these effects can in principle be computed analytically [39].
However, if one studies particle distributions with a kinematic cut or acceptance window, then the particles originate from many patches. In this case it is not clear a priori how much of these effects is preserved. Indeed, if one chooses a small subsystem of a microcanonical system, the subsystem will be grand-canonical. Therefore, a particular question that we want to address here is to what extent microcanonical effects are preserved if a kinematic cut is imposed. For this purpose we impose a |y| < 1 rapidity cut and consider multiplicity distributions of pions, kaons, and protons. Even for the largest patch energy there is a small difference between the microcanonical and grand-canonical means. This difference originates in the same way as in Figs. 8 and 9: even in the thermodynamic limit, microcanonical means tend to be larger than grand-canonical means, even though their ratio approaches unity. Scaled variances in the microcanonical case are systematically smaller than for iSS. The effect is almost independent of the patch size, constitutes around 10%, and originates mainly from the conservation of quantum numbers. This is the known (micro)canonical suppression of fluctuations; a similar result is obtained analytically in the thermodynamic limit [40]. In panels (c) and (d) one can see a less studied effect: the microcanonical suppression of higher-order fluctuations. In the grand-canonical case the scaled skewness κ3/κ2 and kurtosis κ4/κ2 are always unity, but in the microcanonical case they turn out to be always below unity. As for the second-order fluctuations, the effect does not vanish even in the thermodynamic limit.
Next we consider correlations between various particle multiplicities, where the correlation between quantities A and B in Fig. 4 is defined as

$$ \rho_{AB} = \frac{\langle AB \rangle - \langle A \rangle \langle B \rangle}{\sigma_A\, \sigma_B}. $$

In a grand-canonical sampler like iSS, particles are sampled independently, hence multiplicity correlations always vanish. The microcanonical sampler, however, introduces non-vanishing correlations due to conservation laws. This is shown in Fig. 4. If conservation laws are more local (and the patch energy is smaller), the correlations are larger. However, the correlations do not vanish even in the thermodynamic limit. Although the correlations between multiplicities with a rapidity cut are weaker than without a cut, they remain significant, typically 5 to 10% in absolute value. The sign of the correlations in Fig. 4 is evident already from electric charge conservation alone, although baryon number and strangeness influence the magnitude significantly.
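The event-by-event evaluation of this correlation measure can be sketched as follows (the function name is ours; the inputs are per-event multiplicities of the two species):

```python
def correlation(a, b):
    """Pearson correlation rho_AB = (<AB> - <A><B>) / (sigma_A * sigma_B),
    computed from two equal-length lists of per-event multiplicities."""
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    cov = sum(x * y for x, y in zip(a, b)) / n - mean_a * mean_b
    var_a = sum(x * x for x in a) / n - mean_a ** 2
    var_b = sum(y * y for y in b) / n - mean_b ** 2
    return cov / (var_a * var_b) ** 0.5
```

Independently sampled multiplicities (the grand-canonical case) give a correlation compatible with zero, while conservation laws induce values of a few to ten percent in magnitude, as in Fig. 4.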

IV. CUMULANTS OF CONSERVED QUANTITIES WITHIN A RAPIDITY CUT
We next study the fluctuations of conserved quantities, i.e. energy, net baryon number, net electric charge, and net strangeness, over our realistic hypersurface at √s = 19.6 GeV from particles within a rapidity cut of |y| < 1. The mean value, standard deviation, and higher cumulant ratios are shown in Fig. 5. In general, the mean values exhibit a decreasing trend with increasing patch energy, and for large patch energies they approach the grand-canonical values. This is because for smaller patch energies particles prefer to be at midrapidity rather than at high rapidity. The jumps in Fig. 5 mainly come from the way the hypersurface is split into patches. This becomes evident from Fig. 5, where results for two ways of splitting into patches are shown; the difference between them is to be understood as a systematic uncertainty of our method. The microcanonical effects on momentum spectra and flow are qualitatively similar for pions, kaons, and protons. Therefore, in Fig. 6 we show only protons.
Transverse momentum distributions are, as expected, suppressed at high momenta due to energy conservation. Indeed, a patch with a total energy of 5 GeV should on average contain fewer protons with transverse momentum p_T = 3 GeV than a patch of 10 GeV. However, at momenta much smaller than the patch energy, the microcanonical distributions approach the grand-canonical ones. At high momenta microcanonical sampling always produces a cutoff due to the limited total energy in a patch. This is unlike the grand-canonical Boltzmann distribution, which has non-zero probability for arbitrarily high momenta, since it assumes the presence of a heat bath.
Reproduction of the grand-canonical elliptic flow in the limit of a large patch is a good test that our sampler properly takes into account the local velocities of the cells. The elliptic flow is defined as

$$ v_2 = \langle \cos 2\phi_i \rangle, $$

where φ_i is the angle of the particle momentum with respect to the reaction plane. Elliptic flow is sensitive to the local variations of hydrodynamic cell velocities, temperatures, and chemical potentials within a patch. Our sampling algorithm takes these local variations into account and is thus able to reproduce the flow, as demonstrated in Fig. 6. At smaller patch energies we observe an interesting effect: for smaller patches v2 is larger at high momenta. We conjecture that this is caused by momentum conservation. In the grand-canonical sampler the total momentum of a patch can fluctuate; therefore the momentum anisotropy, which reflects the anisotropy of the collective flow field u, is smeared out compared to the microcanonical sampler.
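The definition above translates directly into a one-line estimator (the function name is ours; the input is a list of azimuthal angles relative to the reaction plane):

```python
import math

def elliptic_flow(phis):
    """v2 = <cos(2*phi_i)> averaged over the particles in the sample;
    phi_i is the azimuthal angle w.r.t. the reaction plane."""
    return sum(math.cos(2.0 * phi) for phi in phis) / len(phis)
```

An isotropic sample gives v2 = 0, while particles aligned with the reaction plane give v2 = 1; p_T-differential flow v2(p_T) is obtained by applying the same average in bins of transverse momentum.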
The dependence of v2(p_T) is used to quantify the viscosity of the quark-gluon plasma, because a larger viscosity to entropy density ratio η/s leads to lower v2(p_T). Values of η/s ≈ 0.08 − 0.16 were obtained from fitting experimental data [46][47][48]. However, these works do not account for local microcanonical conservation laws. Our result in Fig. 6 demonstrates that a larger η/s may be necessary to reproduce experimental data if the local microcanonical conservation laws are included.

B. Correlations as a function of pseudorapidity
Correlations between particle multiplicities were already discussed above as a function of the patch size, mainly to understand the effect of the patch size. For patch rest-frame energies larger than 10 GeV we find that the correlations are almost unchanged, see Fig. 4.
Here we would like to explore a correlation observable, similar to the one measured by the STAR collaboration [6]. For this purpose we select a patch rest frame energy to be 10 GeV.
STAR measured correlations of net protons (p − p̄), net kaons (K⁺ − K⁻), and net charge as a function of the pseudorapidity gap, where the pseudorapidity is η = (1/2) log[(|p| + p_z)/(|p| − p_z)]. Because STAR published unnormalized correlations, we also do not normalize them here:

$$ C_{AB} = \langle AB \rangle - \langle A \rangle \langle B \rangle. $$

The rapidity range of the STAR measurement is limited to |η| < 0.5, while we can simulate the whole rapidity range. It is important to note, however, that we do not perform any afterburner simulation and do not include resonance decays. Also, our centrality selection is different from [6]. Therefore a direct comparison to STAR data is not meaningful; however, we are able to pinpoint the effect of conservation laws. The patch definition used here corresponds to Fig. 1 (d): starting from the largest-energy cell and clustering the cells around it.

Our comparison of the rapidity-dependent correlations between micro- and grand-canonical samplers is shown in Fig. 7. The most prominent feature is that at small rapidity the correlation between net protons and net kaons has a negative slope if the local conservation laws are included. This is similar to the results of [6] at all energies higher than 7.7 GeV, and this feature is not reproduced by the grand-canonical sampler. At 7.7 GeV the measured net-pK correlation has a positive slope. We conjecture that the positive slope may originate from the conservation laws when the total net baryon number and total net charge are sufficiently large. Another possibility is resonance decays. Since we are able to handle these effects separately, we are potentially able to identify the reason for the measured positive slope. This task, however, is left for future work.
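Both ingredients of this observable are straightforward to compute; here is a sketch (function names are ours, and the covariance form of the unnormalized correlation is our reading of the text above):

```python
import math

def pseudorapidity(px, py, pz):
    """eta = 0.5 * log((|p| + pz) / (|p| - pz)); diverges for particles
    exactly along the beam axis (px = py = 0)."""
    p = math.sqrt(px * px + py * py + pz * pz)
    return 0.5 * math.log((p + pz) / (p - pz))

def unnormalized_correlation(a, b):
    """C_AB = <AB> - <A><B> over events, with no sigma normalization,
    for two equal-length lists of per-event net quantities."""
    n = len(a)
    return sum(x * y for x, y in zip(a, b)) / n - (sum(a) / n) * (sum(b) / n)
```

For each pseudorapidity gap, the per-event net quantities (net p, net K, net Q) are accumulated within the corresponding |η| window and then fed into the covariance.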
At small η the net-pQ and net-KQ correlations have positive slopes as functions of |η| for both the micro- and grand-canonical samplers. At large η there is a large difference between the samplers, for the following reason: in the limit of large η, the charge within the |η| interval does not fluctuate in the microcanonical sampler by construction. Therefore, in each sample Q − ⟨Q⟩ = 0, and the net-pQ and net-KQ correlations approach zero.

V. SUMMARY, DISCUSSION, AND OUTLOOK
We have introduced local microcanonical sampling on the hydrodynamic hypersurface. Locality is achieved by splitting the hypersurface into patches, i.e. spatially compact regions in which conservation laws are enforced in every sample using a novel sampling algorithm [1]. This algorithm conserves energy, momentum, baryon number, strangeness, and electric charge in each patch. It also preserves local variations of velocity, temperature, and chemical potential within a patch.
The idea of patches combined with the sampling algorithm makes it possible to study a rich variety of microcanonical effects in heavy ion collisions. We have explored means, fluctuations, and correlations of the multiplicity distributions of pions, kaons, and protons within a rapidity cut; means and fluctuations of conserved charges within a rapidity cut; transverse momentum spectra and flow; and correlations of net protons, net kaons, and net charge as a function of the pseudorapidity gap. All these observables except the last one were studied as a function of the patch size. For the smallest patch size the microcanonical effects are strongest, but many of them, in particular for fluctuations and correlations, do not vanish even in the thermodynamic limit, as expected from analytical calculations [39].
While the microcanonical sampling is mathematically rigorous and well-defined, the patch splitting procedure and its parameter, the patch size, are to a large extent arbitrary. Which procedure and which patch size should one select to simulate heavy ion collisions? Varying the patch splitting algorithm changes our results, but not by much; we consider the changes shown in Figs. 1, 3, 4, and 5 a systematic uncertainty of our method. The patch size, in contrast, plays an important role for every observable studied, as long as it is not too large. Above a patch rest-frame energy of around 10 GeV, observables depend only very slightly on it. This makes it tempting to choose 10 GeV as a "reasonable" patch size. However, we would like to underline that the question of the correct patch size is not algorithmic but physical. The quark-gluon plasma created in heavy ion collisions has a surface tension (usually neglected in hydrodynamic simulations, except [36]) and, therefore, droplets may be formed. When these droplets hadronize, they form droplets of hadron gas. In this case the right patch size should be equal to the size of a droplet, and the droplet scenario can be identified by the microcanonical effects we have listed and explored: high-momentum suppression, v2(p_T) enhancement at high p_T, stronger suppression of fluctuations, and enhancement of correlations. The latter has already been pointed out in [49] in the context of droplet formation due to spinodal instabilities. Although in principle these effects are always present, they can be observed easily only if the droplet energy is of the order of 10 GeV or less. The high-p_T suppression should be the most accessible to experimental observation. Here we considered droplets of uniform size. This was done to study the microcanonical effects systematically, but in general there is no reason to assume that all droplets have the same size. Qualitatively, the microcanonical effects we have observed should also occur if the droplets have different sizes.
Our present results cannot be directly compared to experimental data. We have purposely studied the properties and effects of the local microcanonical particlization in isolation, without subsequent resonance decays or a hadronic afterburner. This allows us to understand the sampler and its systematics better before using it in a larger framework. Now that the sampler is explored and tested, it can be used as a part of a hybrid (hydrodynamics + transport) approach. For example, it would be interesting to see if it can reproduce the net-p, net-K, and net-charge correlations measured recently by the STAR collaboration [6]. Effects of conservation laws on observables related to the chiral magnetic effect, as well as on small systems, should also be studied and compared to data.
Furthermore, the presented sampling algorithm allows a consistent particlization of fluctuating hydrodynamics; therefore, it can be applied to study the physics of the critical point in heavy ion collisions. Certain improvements of the sampler may be of interest, such as introducing viscous corrections and quantum statistics. Finally, it turns out that our sampling algorithm makes it possible to sample negative Cooper-Frye contributions, which is impossible with a grand-canonical sampler. The consequences of this new possibility are yet to be explored.

Appendix A: Special case -microcanonical gas of relativistic massless particles
The simplest test of our sampler is against the analytically known case of a classical microcanonical gas of relativistic massless particles. Here by microcanonical we mean that all phase-space cells (x, p) can be occupied with equal probability, but the total energy E is conserved and the total momentum is zero. The number of particles N is allowed to vary.
Then the probability to have N particles with momenta p_1, ..., p_N is

$$ w_N(\{p_i\}) \propto \frac{1}{N!} \prod_{i=1}^{N} \frac{V\, d^3p_i}{(2\pi)^3}\; \delta\Big(E - \sum_i |\vec p_i|\Big)\, \delta^{(3)}\Big(\sum_i \vec p_i\Big), \qquad (A1) $$

where V is the volume of the system. In terms of our sampler, this is a special case in which a patch consists of just one static (u^μ = (1, 0, 0, 0)) cell with dσ^μ = (V, 0, 0, 0). The momenta in Eq. (A1) can be integrated out analytically, which provides the distribution (A2) of the total number of particles [37,38]. It is convenient to rewrite this distribution in terms of the grand-canonical mean for the same system. The grand-canonical probability is

$$ w^{\rm gce}_N(\{p_i\}) \propto \frac{1}{N!} \prod_{i=1}^{N} \frac{V\, d^3p_i}{(2\pi)^3}\, e^{-|\vec p_i|/T}. \qquad (A3) $$

Integrating out the momenta, one obtains the Poisson distribution w^gce_N ∝ N̄^N_gce/N!, with mean N̄_gce = V T^3/π^2. The mean energy per particle computed from Eq. (A3) is E/N̄_gce = 3T. Eliminating the temperature T, one obtains

$$ \bar N_{\rm gce} = \left(\frac{V E^3}{27 \pi^2}\right)^{1/4}. \qquad (A4) $$

Therefore, one can express the microcanonical particle number distribution (A5) via the grand-canonical average at the same average total energy and volume. The cumulants κ_i of the distribution (A5) can be computed analytically as κ_i = d^i F/dt^i at t = 0, where F(t) = log Σ_N w_N e^{tN} is the cumulant generating function (A6); in closed form it involves a generalized hypergeometric function pFq and a constant c. The exact expressions for the cumulants κ_1−4 are therefore available analytically, but we do not provide them here, because they are bulky and hardly informative. Instead, we show the expansions of certain combinations in the thermodynamic limit N̄_gce → ∞, which are more interesting and instructive. From these one can see that in the thermodynamic limit the microcanonical mean number of particles is larger by 1/2 than the grand-canonical one. This counter-intuitive result does not contradict the theorem on ensemble equivalence, because N̄/N̄_gce → 1 is still fulfilled as N̄_gce → ∞. A non-zero difference between microcanonical and grand-canonical yields was also reported when only energy conservation (but not momentum) is taken into account, see Eq. (9) of [39].
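The grand-canonical relations used in this derivation, n = N̄_gce/V = T^3/π^2 and E/N̄_gce = 3T for a classical massless gas, can be checked by direct numerical integration of the Boltzmann distribution (the helper below is our illustration, using a simple midpoint quadrature):

```python
import math

def thermal_integrals(T, pmax=None, steps=200_000):
    """Numerically integrate the classical massless-gas density
    n = int d^3p/(2pi)^3 e^{-p/T} and the energy density, using the
    midpoint rule on [0, pmax]; returns (n, energy_density)."""
    if pmax is None:
        pmax = 50.0 * T          # the e^{-p/T} tail beyond this is negligible
    dp = pmax / steps
    n = e = 0.0
    for i in range(steps):
        p = (i + 0.5) * dp       # midpoint of the i-th interval
        w = 4.0 * math.pi * p * p * math.exp(-p / T) / (2.0 * math.pi) ** 3
        n += w * dp              # particle density
        e += p * w * dp          # energy density (massless: E = |p|)
    return n, e
```

The quadrature reproduces n = T^3/π^2 and a mean energy per particle e/n = 3T, confirming the inputs to Eq. (A4).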
In this limit the scaled variance κ2/κ1 and the ratios κ3/κ2 and κ4/κ2 are not 1, as in the grand-canonical case, but rather 1/4, 1/4, and 1/16. We are interested in comparing the cumulants of the distribution (A5) to the cumulants of the distribution produced by our sampler. Since the distribution is sampled indirectly, using random 2 ↔ 3 Metropolis steps in momentum space, it is non-trivial that the distribution (A5) is reproduced, and we found this example a very useful testbed for our algorithm.
Any error in the proposal or acceptance probabilities (Eq. 10) dramatically changes all moments, including the mean. In Fig. 8 we demonstrate that the mean, scaled variance, and cumulant ratios κ3/κ2 and κ4/κ2 of the generated distribution agree with the analytical results computed from Eq. (A5).
Here we test that our sampler reproduces the non-trivial threshold effects on the π0 yield, which were shown in Fig. 12 of [42]. In Fig. 9 we demonstrate good agreement with the previous calculation, which used a dedicated microcanonical sampler. Minor discrepancies might be attributed to possible differences in the hadron tables, such as the mass of the f0 meson and the number of mesonic resonances between 1 and 2 GeV. In our calculation we use the default particle table of the SMASH transport code [50], with the π0 mass set to 135 MeV and the π± mass set to 138 MeV.
In Fig. 9 one can also see interesting physical effects, which were already studied in [42]. The minimal number of particles in the microcanonical case is two, because one or zero particles cannot fulfil energy and momentum conservation. Therefore, at the smallest energies only the 2π0 state, built from the lightest hadron, is accessible. At slightly higher energy the π⁺π⁻ state opens and the π0 yield drops down to 2/3. Then, with increasing energy, the π0 yield grows until the energy crosses a threshold to form a new state. Some of these thresholds are marked explicitly in Fig. 9.
At energies above 3 GeV the threshold effects are no longer pronounced, although in principle new many-particle thresholds continue to open at arbitrarily high energies. Nevertheless, even at E = 10 GeV the microcanonical average still differs from the grand-canonical average at the same average energy. Their ratio approaches 1 in the thermodynamic limit, but a finite difference remains. This is similar to the microcanonical massless case studied above (Appendix A); however, the finite difference is smaller than 1/2.
Here dσ⃗ ≡ (dσ^1, dσ^2, dσ^3). After computing these integrals as shown above, one has to boost the 4-momentum back to the computational frame. Formulas for quantum statistics can be obtained by inserting the series Σ_{k=1}^∞ (±1)^{k+1} e^{μk/T} in front of the expressions and substituting T → T/k and z → zk, where the + sign is for bosons and the − sign for fermions. The resulting expression can be numerically integrated to obtain R_3, but it turns out that an analytical formula for R_3 exists (see Eqs. 54-58 of [26]), which is faster and more reliable.
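As an illustration of this substitution trick, consider the particle density of a massless gas. Applying the series with T → T/k to the classical result n_cl(T) = T^3/π^2 (a sketch with our own function name; only the massless case is treated):

```python
import math

def massless_density_series(T, mu=0.0, sign=1, kmax=50):
    """Density of a massless ideal gas with quantum statistics, via the
    series sum_k (+-1)^(k+1) e^{mu*k/T} applied to the classical result
    n_cl(T) = T^3/pi^2 with the substitution T -> T/k.
    sign = +1 for bosons, -1 for fermions; kmax truncates the series."""
    n = 0.0
    for k in range(1, kmax + 1):
        n += sign ** (k + 1) * math.exp(mu * k / T) * (T / k) ** 3 / math.pi ** 2
    return n
```

For μ = 0 the series reduces to Σ k^{-3} = ζ(3) for bosons and Σ (−1)^{k+1} k^{-3} = (3/4)ζ(3) for fermions, reproducing the standard quantum-statistics factors relative to the Boltzmann density.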