Core-corona procedure and microcanonical hadronization to understand strangeness enhancement in proton-proton and heavy ion collisions in the EPOS4 framework

The multiplicity dependence of multistrange hadron yields in proton-proton and lead-lead collisions at several TeV allows one to study the transition from very big to very small systems, in particular concerning collective effects. I investigate this employing a core-corona approach based on new microcanonical hadronization procedures in the EPOS4 framework, as well as new methods allowing one to transform energy-momentum flow through freeze-out surfaces into invariant-mass elements. I try to disentangle effects due to "canonical suppression" and "core-corona separation", both of which lead to a reduction of the yields at low multiplicity.


I. INTRODUCTION
An enhancement of multistrange hadrons in relativistic heavy ion collisions compared with proton-proton scattering is one of the oldest signals proposed to detect the creation of a "quark-gluon plasma" [1]. It was observed first (as expected) in heavy ion collisions [2][3][4][5][6] but, unexpectedly, such "signals" have also been detected in proton-proton collisions, where the enhancement concerns high-multiplicity compared with low-multiplicity events [7]. Even more strikingly, when one plots ratios of multistrange hadrons to pions as a function of the multiplicity dn/dη at rapidity zero, for both proton-proton (at 7 TeV) and heavy ion (PbPb at 2.76 TeV) collisions, one observes a unique curve, increasing monotonically from low-multiplicity proton-proton up to central PbPb. Also shown in Ref. [7]: the standard Monte Carlo event generators do not really describe the data.
During the past five years, the EPOS4 project was developed (see the first publications [8][9][10]), being an attempt to construct a realistic model for describing relativistic collisions of different systems (from proton-proton to nucleus-nucleus) at energies from several GeV per nucleon up to several TeV. So the model should be able to deal with strangeness enhancement, but not only with that: it is a "general purpose" approach that is meant to describe any observable. The results shown in this paper are only a very small fraction of the simulation results, since as a first test of the new approach, simulations were performed at low energies, i.e., 7.7, 11.5, 14.5, 19.6, 27, 39, 62.4, 130, and 200 GeV per nucleon, as well as at high energies, i.e., 2.76, 5.02, 7, 8, and 13 TeV per nucleon, for different systems, looking at yields, spectra, and flow, for light and heavy flavor. All this is done with the same code version, namely EPOS4.0.0, which is at this moment being published via a dedicated web page. No attempt has been made to "fit" one particular curve for one particular system; the idea is rather to see to what extent one understands the enormous amount of data accumulated over the past two decades.
A fundamental ingredient of the EPOS4 approach is the observation that multiple (nucleonic or partonic) scatterings must happen in parallel, and not sequentially, based on very elementary considerations concerning time scales. To take this into account, EPOS4 brings together ancient knowledge about S-matrix theory (to deal with parallel scatterings) and modern concepts of perturbative QCD and saturation, going much beyond the usual factorization approach. The parallel scattering principle requires sophisticated Monte Carlo techniques, inspired by those used in statistical physics to investigate the Ising model.
In the EPOS4 approach, one distinguishes "primary scatterings" and "secondary scatterings". The former refer to the above-mentioned parallel scatterings involving the initial nucleons (and their partonic constituents), happening instantaneously at very high energies. The theoretical tool is S-matrix theory, using a particular form of the proton-proton scattering S-matrix (Gribov-Regge approach [11][12][13][14]), which can be straightforwardly generalized for nucleus-nucleus collisions. Although these basic assumptions are not only well motivated, but also very simple and transparent, the practical application is complicated. This is due to the fact that when developing matrix elements in terms of multiple-scattering diagrams, the large majority of the diagrams cancel when it comes to inclusive cross sections. But in EPOS4 one keeps all contributions, since one wants to go beyond inclusive cross sections (important when studying high-multiplicity events).
The challenge for EPOS4 is to use the full parallel scattering scenario, but in a smart way, such that for inclusive cross sections the cancellations actually work. This is the new part in EPOS4, strongly based on an interplay between parallel scatterings and saturation; it is discussed in detail in separate publications [8][9][10]. Our S-matrix approach has a modular structure: it is based on so-called "Pomerons" (representing elementary parton-parton scatterings), and these Pomerons are constructed based on "parton ladders", which makes the link to perturbative QCD. Also this part has been completely redone, with particular care concerning the role of heavy flavor [9].
In this paper, I want to focus on the secondary scatterings, and in particular on the hadronization of "plasma droplets". In Sec. II, I summarize briefly the "parallel scattering approach" for primary scatterings. In Sec. III, I discuss how the EPOS4 multiple-scattering diagrams translate into particle production, which means first the production of so-called prehadrons, which may originate from Pomerons or from remnants. Based on these prehadrons, a core-corona procedure will be employed, which allows one to identify the "core", which will then be treated as a fluid which evolves and eventually decays into hadrons. In Sec. IV, I discuss the main topic of this paper, namely the hadronization of the core.

II. PARALLEL SCATTERING SCENARIO IN EPOS4
Let me briefly summarize the EPOS4 approach, considering first high-energy collisions, where a parallel scattering approach for primary scatterings is mandatory, in AA collisions but also concerning multiple partonic scatterings in pp reactions. In Refs. [8][9][10], it is shown in detail how such a "parallel scattering scheme" can be constructed rigorously based on S-matrix theory, which will be sketched in the following.
The starting point is the elastic-scattering T-matrix T for pp scattering, expressed as a product of "elementary" T-matrices T_Pom representing parton-parton scattering via Pomeron exchange. The expression can easily be generalized to AA scattering. In both cases, pp or AA, this formalism allows a strict parallel scattering picture. The precise content of the Pomerons will be discussed later. The connection with inelastic scattering is provided by the optical theorem, using so-called cutting rules, which allows one to express the total cross section (which by definition adds up all inelastic processes) in terms of "cut Pomerons" G, as shown for the case of two inelastic scatterings in Fig. 1. The light-cone momentum fractions x+ and x− are shared between the two Pomerons and the remnants (W in the plot represents some vertex function). Each cut Pomeron G represents a squared amplitude of a single inelastic scattering.
The important new issue in Refs. [8][9][10] is the understanding of how energy conservation ruins factorization (which, strictly speaking, makes the model inapplicable), and how to solve the problem via an appropriate definition of G. The cut Pomeron, representing a single inelastic scattering, is the fundamental quantity in the EPOS formalism. For the moment, one considers the Pomeron as a "box" (the precise internal structure will be discussed later), with two external legs representing two incoming partons carrying light-cone momentum fractions x+ and x−, so G = G(x+, x−, s, b), with the squared energy s and the impact parameter b, see Fig. 2. Let me define the "Pomeron energy fraction" x_PE = M²_Pom/s, with M_Pom being the transverse mass of the Pomeron, which is the crucial variable characterizing cut Pomerons: the bigger x_PE, the bigger the Pomeron's invariant mass and the number of produced particles. Large invariant masses also favor high-p_t jet production.
Let me consider an AA collision (including pp as a special case). I define, for a given cut Pomeron connected to projectile nucleon i and target nucleon j, a "connection number" N_conn = (N_P + N_T)/2, with N_P being the number of Pomerons connected to i, and N_T the number of Pomerons connected to j. The case N_conn = 1 corresponds to an isolated Pomeron, which may take all the energy of the initial nucleons, whereas in the case N_conn > 1 the energy will be shared. To quantify the effect of the energy sharing, one defines f^(N_conn)(x_PE) to be the inclusive x_PE distribution, i.e., the probability that a single Pomeron carries an energy fraction x_PE, for Pomerons with given values of N_conn. The main problem which ruins factorization is the fact that the distribution for N_conn > 1 will be deformed with respect to the N_conn = 1 case, due to energy sharing. I define the corresponding "deformation function" R_deform(x_PE) as the ratio of f^(N_conn)(x_PE) over f^(1)(x_PE). I also use the notation R^(N_conn)_deform(x_PE) to underline its N_conn dependence. As shown in Refs. [8,10], this function can be calculated and tabulated. As discussed in very much detail in Ref.
[9], one also calculates and tabulates a function G_QCD(Q², x+, x−, s, b), which contains as a basic element a cut parton ladder based on Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) parton evolutions [13,15,16] from the projectile and target sides, with an elementary QCD cross section in the middle, Q² being the low-virtuality cutoff in the DGLAP evolution. The latter is usually taken to be constant and of the order of 1 GeV², whereas here one allows any value. With all this preparation, one is now able to connect G (used in the multi-Pomeron diagrams) and G_QCD [which contains all the perturbative QCD (pQCD) diagrams], as follows: For each cut Pomeron, for given x±, s, b, and N_conn, one postulates

G(x+, x−, s, b) = n × G_QCD(Q²_sat, x+, x−, s, b) / R^(N_conn)_deform(x_PE),   (1)

where Q²_sat does depend on N_conn (and also on x±), and the symbol n is a factor not depending on x±. This is the fundamental equation in the new EPOS4 approach. As shown in Refs. [8,10], the deformation in the denominator of Eq. (1) corrects perfectly the deformation of the x_PE distribution; the only effect of energy sharing is a modification of Q²_sat, and the latter will only affect low-p_t processes. In this way, one recovers factorization, and binary scaling in AA (with R_AA = 1) at large transverse momenta for inclusive cross sections (but only for them). In other words, at large p_t, the full multiple-scattering machinery produces exactly the same result as simply taking one single Pomeron (single scattering), which allows one to define and compute (in the EPOS framework) parton distribution functions, tabulate them, and then compute inclusive jet cross sections based on these (using standard procedures).
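The deformation function R_deform, defined as a ratio of inclusive x_PE distributions, can be illustrated with a toy Monte Carlo. All distributions below are invented for illustration only (they are not the EPOS4 parametrizations): an assumed distribution for isolated Pomerons, a crude "energy sharing" prescription for the shared case, and a bin-by-bin histogram ratio as the estimator.

```python
import random

def sample_xpe_isolated(u):
    # toy inclusive x_PE distribution for an isolated Pomeron (N_conn = 1):
    # pdf f(x) = 4 (1-x)^3 on [0,1], sampled by inverting the CDF
    return 1.0 - (1.0 - u) ** 0.25

def sample_xpe_shared():
    # toy "energy sharing": a second Pomeron takes away a fraction of the
    # energy, so the surviving x_PE is shifted toward smaller values
    x1 = sample_xpe_isolated(random.random())
    x2 = sample_xpe_isolated(random.random())
    return x1 * (1.0 - x2)

def deformation_ratio(nsample=200_000, nbins=10):
    """Estimate R_deform(x) = f^(shared)(x) / f^(1)(x) as a histogram ratio."""
    random.seed(42)
    h1 = [0] * nbins
    h2 = [0] * nbins
    for _ in range(nsample):
        b1 = min(int(sample_xpe_isolated(random.random()) * nbins), nbins - 1)
        b2 = min(int(sample_xpe_shared() * nbins), nbins - 1)
        h1[b1] += 1
        h2[b2] += 1
    return [h2[i] / h1[i] if h1[i] > 0 else 0.0 for i in range(nbins)]
```

Even in this toy, one sees the generic feature exploited above: sharing enhances the distribution at small x_PE (ratio above one) and depletes it at large x_PE (ratio below one).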
But such a "shortcut" (also referred to as "EPOS4 factorization mode") is only possible for inclusive cross sections, and only at high p_t. This represents only a very small fraction of all possible applications, and there are very interesting cases outside the applicability of that approach. A prominent example, one of the highlights of the past decade in our domain, concerns collective phenomena in small systems. It has been shown that high-multiplicity pp events show collective features very similar to those observed earlier in heavy ion collisions [17]. High multiplicity automatically means "multiple parton scattering". As discussed earlier, this means that one has to employ the full parallel scattering machinery developed earlier, based on S-matrix theory. One cannot use the usual parton distribution functions (representing the partonic structure of a fast nucleon); one has to treat the different scatterings (happening in parallel) individually, and for each scattering, one has a parton evolution according to some evolution function E (representing the partonic structure of a fast parton), as sketched in Fig. 3.
Figure 3. Rigorous parallel scattering scenario, for n = 3 parallel scatterings, including nonlinear effects via saturation scales. The red symbols should remind one that the parts of the diagram representing nonlinear effects are replaced by simply using saturation scales.
One still has DGLAP evolution for each of the scatterings, but one introduces saturation scales. Most importantly, these scales are not constants: they depend on the number of scatterings, and they depend as well on x+ and x−. An example of a multiple-scattering AA configuration is shown in Fig. 4. The diagrams shown in Figs. 3 and 4 are meant to be "symbolic"; the real structure is somewhat more complex. Also, for simplicity, one considers only gluons in the diagrams. I also do not consider (for simplicity) timelike parton emissions, but in the real EPOS4 simulations they are of course taken care of, based on angular ordering. For a complete description, see Ref. [9]. But all this "multiple-scattering discussion" is not the full story. The S-matrix part concerns "primary scatterings", happening instantaneously at t = 0. As a result, in the case of a large number of Pomerons, one has correspondingly a large number of "prehadrons", and based on these, "secondary interactions" occur: fluid formation and decay, and hadronic rescatterings. This will be discussed in the next section.

III. THE ROLE OF CORE, CORONA, AND REMNANTS
In this section, I will discuss briefly how the EPOS4 multiple-scattering diagrams translate into particle production, which means first the production of so-called prehadrons. As discussed below, our Pomerons are mapped into kinky relativistic strings, where string decay traditionally produces string segments which correspond to hadrons. But one considers the possibility of having a dense environment, where the string segments cannot "evolve" into hadrons. So one uses the term "prehadrons" for these segments; they either "fuse" to produce the core, or become hadrons if they escape the core. A similar argument is used for excited remnants, which may decay into hadrons, but again this may happen in a dense area. In Ref. [9], one discusses the different types of Pomerons and their relation with pQCD, including technical details for the computations, and one also discusses how the pQCD diagrams translate into kinky strings. String decay into segments (now called prehadrons) is discussed in detail in Ref. [18]. Based on these prehadrons, a core-corona procedure will be employed, which allows identifying the "core", which will then be treated as a fluid that evolves and eventually decays into hadrons, which still may collide with each other.
In Fig. 5, I consider an example of a multiple-cut-Pomeron (i.e., multiple-scattering) configuration of two colliding nuclei A and B at Large Hadron Collider (LHC) energies (several TeV), each nucleus composed of two nucleons (just for illustration). Dark blue lines mark active quarks, red dashed lines active antiquarks, and light blue thick lines projectile and target remnants [nucleons minus the active (anti)quarks]. I have chosen an example with two scatterings of "sea-sea" type and one of "val-sea" type (see Ref. [9]). One considers each incident nucleon as a reservoir of three valence quarks plus quark-antiquark pairs.
The "objects" which represent the "external legs" of the individual scatterings are colorless: quark-antiquark pairs in most cases, as shown in the figure, but one may as well have quark-diquark pairs or even antiquark-antidiquark pairs -- in any case, a 3 and an anti-3 color representation. In general, each of the horizontal gluon lines would develop a so-called timelike cascade, which is not considered here for the simplicity of the discussion. I also consider only gluons here; in general, there are as well g → q + qbar splittings. This multiple-scattering picture works perfectly also at lower energies [BNL Relativistic Heavy Ion Collider (RHIC)]. With decreasing energy, it simply becomes more and more likely that the Pomerons in Fig. 5 are replaced by purely soft ones, as indicated in Fig. 6. Also, the Pomerons get less energetic, producing fewer particles. For a given diagram, as for example Fig. 5, one constructs the corresponding color flow diagram (two arrows per gluon, forward and backward arrows for quarks and antiquarks). One then follows the color flow arrows, for example starting with a 3 being an external leg on the projectile side, until one finds an anti-3, which is in our example an external leg on the target side. This defines a chain of partons. Assuming that the external legs are simply quarks and antiquarks, one gets six chains of the type q − g − g − ... − g − qbar, mapped (in a unique fashion) to kinky strings, where each parton corresponds to a kink. This is explained in detail in Ref. [9]. The kinky strings are then decayed into prehadrons.
A second source of particle production are the remnants. In the case of multiple scattering as in Fig. 5, the projectile and target remnants remain colorless, but they may change their flavor content during the multiple collisions. The quark-antiquark pair "taken out" for a collision (the "external legs" of the individual collisions) may for instance be a u quark and an anti-s quark; then the remnant of an incident proton has flavor uds. In addition, the remnants get massive, much more than a simple resonance excitation. One may have remnants with masses of 10 GeV/c² or more, which contribute significantly to particle production (at large rapidities).
In the above discussion, I mentioned the production of "prehadrons" from strings and from remnant decays. Based on these prehadrons, one employs a so-called core-corona procedure (introduced in Ref. [19], updated in Ref. [20]), at some given (early) proper time τ0, see Fig. 7.
Figure 7. Sketch of the core-corona separation for a "big" (a) and a "small" (b) system. The dots are prehadrons in the transverse plane; red refers to core, blue to corona.
The method is used for all systems, big ones (as central AA) or small ones (peripheral AA, or even pp scattering). One considers all prehadrons at τ0, marked as dots in Fig. 7. For each prehadron, one computes a new transverse momentum p_t^new, obtained by reducing p_t by an energy loss accumulated along γ, the trajectory of the prehadron moving out of the system, where Z = (N_part − 2)/N_part^max is the "centrality" based on the number of participants, entering the parametrization of the loss. I define "cone prehadrons" as prehadrons within a jet-cone radius R with respect to a parton with p_t ≥ 1 GeV/c, considering all the partons which constitute the kinky string being the origin of the prehadron in question. All others are "noncone prehadrons". The jet-cone radius is defined via R² = (Δη)² + (Δφ)², with Δη and Δφ being, respectively, the difference in pseudorapidity and azimuthal angle with respect to the parton. I use jet-cone radii of 0.3 (pp) and 0.2 (PbPb). I use different sets of energy-loss parameters {f_EL1, f_EL2, g_EL, h_EL} for noncone and cone prehadrons, namely {0.1, −, 0.5, 0.375} (pp cone), {0.4, −, 0, 1} (pp noncone), {0.3, 0.3, 0.5, 0.375} (PbPb cone), {0.5, 0.7, 0, 1} (PbPb noncone). The prehadron in question is then considered to be of "core" or "corona" type, based on the following criteria: For p_t^new ≤ 0, the prehadron is considered to be a "core prehadron".
For p new t > 0, the prehadron escapes, it is called "corona prehadron".
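Schematically, the cone/noncone tagging and the core-corona criterion could look as follows. This is a minimal sketch: the data layout and the pre-computed energy loss (`pt_new`) are hypothetical placeholders; only the ΔR definition, the 1 GeV/c parton threshold, the radii, and the sign criterion are taken from the text.

```python
import math

# cone radii from the text: 0.3 for pp, 0.2 for PbPb
R_CONE = {"pp": 0.3, "PbPb": 0.2}

def delta_r(eta1, phi1, eta2, phi2):
    """Jet-cone distance R = sqrt(d_eta^2 + d_phi^2), phi wrapped to (-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def classify(prehadron, partons, system="pp"):
    """Return ('cone'|'noncone', 'core'|'corona') for one prehadron.

    prehadron: dict with 'eta', 'phi' and 'pt_new' (the transverse momentum
    after the energy loss accumulated along the escape trajectory, assumed
    to have been computed elsewhere); partons: (pt, eta, phi) tuples of the
    partons of the kinky string the prehadron originates from.
    """
    cone = any(pt >= 1.0 and
               delta_r(prehadron["eta"], prehadron["phi"], eta, phi) < R_CONE[system]
               for pt, eta, phi in partons)
    kind = "cone" if cone else "noncone"
    fate = "core" if prehadron["pt_new"] <= 0.0 else "corona"
    return kind, fate
```

The azimuthal wrapping in `delta_r` matters: a naive difference of angles near ±π would misclassify prehadrons that are actually inside the cone.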
The core prehadrons constitute "bulk matter" and will be treated via hydrodynamics. The corona prehadrons simply become hadrons and propagate with reduced energy (due to the energy loss). In Fig. 7, I distinguish between big and small systems. Actually, the method used is the same, but there are relatively more core prehadrons in the big systems. Corona particles are either very energetic (then they move out even from the center), or they are close to the surface, or one has a combination of both.
The core-corona procedure is a crucial element of the EPOS4 approach, which allows soft and hard contributions to emerge naturally from a unique formalism (rather than constructing a two-component approach). It is also important to understand that a hard process (in the usual terminology), which is in the EPOS4 framework a high-transverse-momentum pQCD process, will also contribute to low-p_t particle production, since only string breaks close to the high-p_t kink will produce high-p_t particles.
Let me investigate the core-corona effects in EPOS4 first for proton-proton collisions.
One wants to understand the relative importance of the core part, and one also wants to know which fraction comes from remnant decay. In Fig. 8, I show results for different event classes (defined via Pomeron numbers) in proton-proton collisions at 7 TeV. I show in each case four different curves: all prehadrons (red full), all core prehadrons (red dotted), prehadrons from remnant decay (blue full), and core prehadrons from remnant decay (blue dotted). Not too surprisingly, remnant contributions show up in general at large rapidities, but in all cases they do contribute to the core. Comparing the red full and dotted curves, one sees that the core fraction is in all cases substantial, most significantly for the event class "10-15 Pomerons", but even for small Pomeron numbers, the core contribution remains important. As a small side remark: it is often said that "collectivity" is seen in high-multiplicity pp scattering. In our EPOS4 analysis, it is even essential in minimum bias collisions (with an average Pomeron number of around 2), very strongly supported by data.
Let me now turn to AA scattering, to understand the relative importance of the core part and of the fraction coming from remnant decay. In Fig. 9, I show results for different centralities in PbPb collisions at 2.76 TeV, namely (from top to bottom) 0-10%, 30-40%, 70-80%, and 80-100% (based on the distribution of the impact parameter). Also here, the remnant contributions show up preferentially at large rapidities, and in all cases they do contribute to the core. Comparing the red full and dotted curves, one sees that the core fraction (the ratio of the core dn/dη to that of all prehadrons) is in all cases substantial: 0.97 for 0-10%, 0.95 for 30-40%, 0.85 for 70-80%, and 0.77 for 80-100%. One also sees that this "core dominance" extends over a wide rapidity range.
Having identified the core prehadrons, one computes the corresponding energy-momentum tensor and the flavor flow vector at some position x at initial proper time τ = τ0 as

T^{μν}(x) = Σ_i [p_i^μ p_i^ν / p_i^0] g(x − x_i)

and

N_q^μ(x) = Σ_i q_i [p_i^μ / p_i^0] g(x − x_i),

with q_i ∈ {u, d, s} being the net flavor content, p_i the four-momentum, and x_i the position of prehadron i. The function g is a normalized Gaussian smoothing kernel, given in Milne coordinates, with a longitudinal width d∥ (in space-time rapidity) and a transverse width d⊥. The parameter d∥ is given as min(1.0, r(s)), with r(s) = max(0.5, y(s)/y(s_ref)), where y(s) is the rapidity of the projectile nucleons in the nucleon-nucleon center-of-mass frame for a squared nucleon-nucleon energy s, and where s_ref = (2.76 TeV)² is some reference value. The parameter d⊥ is given as 0.3 + 0.3Z for PbPb and as 0.3 − 0.02 min(N_Pom − 1, 12) for pp, where Z = (N_part − 2)/N_part^max is the centrality based on the number of participants and N_Pom is the number of (cut) Pomerons (characterizing the event activity in pp).
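The smoothing step can be sketched as follows: the energies of the core prehadrons are deposited on a transverse grid with a normalized two-dimensional Gaussian kernel. This is a simplification for illustration (the actual kernel is three-dimensional in Milne coordinates, with separate widths d∥ and d⊥ as described above); the function name, the fixed width, and the grid parameters are invented.

```python
import math

def energy_density_grid(prehadrons, d_perp=0.45, L=6.0, n=60):
    """Transverse energy profile from core prehadrons (2D toy sketch).

    prehadrons: list of (E, x, y); each deposits its energy E smeared by a
    normalized 2D Gaussian of width d_perp (fm).  Returns an n x n grid of
    energy per unit area covering [-L/2, L/2]^2, plus the cell size dx.
    """
    dx = L / n
    norm = 1.0 / (2.0 * math.pi * d_perp ** 2)   # normalized kernel
    grid = [[0.0] * n for _ in range(n)]
    for E, x0, y0 in prehadrons:
        for i in range(n):
            x = -L / 2 + (i + 0.5) * dx
            for j in range(n):
                y = -L / 2 + (j + 0.5) * dx
                r2 = (x - x0) ** 2 + (y - y0) ** 2
                grid[i][j] += E * norm * math.exp(-r2 / (2.0 * d_perp ** 2))
    return grid, dx
```

Because the kernel is normalized, integrating the grid recovers the total deposited energy (up to Gaussian tails leaving the box), which is the consistency one wants before handing the profile to the hydrodynamic evolution.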
The Lorentz transformation of T^{μν} into the comoving frame provides the energy density ε and the flow velocity components v^i, which are used as the initial condition for a hydrodynamical evolution [20]. This is based on the hypothesis that equilibration happens rapidly and affects essentially the space components of the energy-momentum tensor. In Fig. 10 (upper plot), I show the energy density at the initial proper time τ0 as a function of the transverse coordinate r for different event classes (defined via Pomeron numbers) in pp collisions at 7 TeV. The value of τ0 is 0.4 fm/c in EPOS4.0.0. I also indicate in the figure the freeze-out energy density (blue dashed line). For each event, one determines (based on the energy density distribution) the event-plane angle ψ and rotates the system accordingly (to have, after rotation, event-plane angles of zero). The plots in Fig. 10 represent averages over such rotated events; the full lines correspond to azimuthal angle φ = 0, the dotted lines to φ = π/2. The difference between the two lines reflects the azimuthal asymmetry. As already seen in Fig. 8, even in the case of a single Pomeron there is some core production, and one gets actually an energy density of several GeV/fm³, but the radial extension is very small, and the lifetime (before freeze-out) is very short. For large Pomeron numbers, the energy densities and the radial extensions get bigger, but the latter remain of the order of 1 fm. In Fig. 10 (lower plot), I show the energy density at the initial proper time τ0 as a function of the transverse coordinate r for different centralities (defined via impact parameter) in PbPb collisions at 2.76 TeV. The values of τ0 are between 2 fm/c (central) and 1 fm/c (peripheral) in EPOS4.0.0. Also here, one performs the rotations according to the event-plane angles, as discussed above for pp scattering, before taking event averages. Based on these, the full lines correspond to azimuthal angle φ = 0, the dotted lines to φ = π/2. As already seen in Fig. 9, even in peripheral collisions there is some core production, and one gets actually an energy density of about 2 GeV/fm³ for 70-80% centrality, but the radial extension is small, and the lifetime as well.
A viscous hydrodynamic expansion follows: starting from the initial proper time τ0, the core part of the system evolves according to the equations of relativistic viscous hydrodynamics [20,24], where presently one uses η/s = 0.08. The "core matter" hadronizes on a hypersurface defined by a constant energy density ε_H (presently 0.57 GeV/fm³).
In earlier versions [25], one used a so-called Cooper-Frye procedure. This is problematic in particular for small systems: not only do energy and flavor conservation become important, but one also encounters problems due to the fact that one gets small "droplets" with huge baryon chemical potentials, leading to strange results for heavy baryons. In EPOS4, one systematically uses microcanonical hadronization, with major technical improvements compared to earlier versions, as I will discuss below.

IV. MICROCANONICAL HADRONIZATION OF THE CORE
In EPOS4, one systematically uses microcanonical hadronization. Due to several technical improvements, the procedures work for all kinds of systems (big and small ones), and they are used for proton-proton as well as heavy ion scattering. But the motivation is not only technical. The usual Cooper-Frye procedure amounts to a smooth transition from a fluid to a particle description. However, one has a system expanding violently into the vacuum, where around some critical energy density it goes very quickly from one state (plasma) to a very different one (hadrons), in a complicated fashion. So one assumes -- whatever the precise mechanism might be -- that the system decays randomly into multi-hadron states. And the most random way corresponds to maximizing the entropy, which amounts to the microcanonical ensemble. Being a crucial element of the new EPOS4 approach, this will be discussed in the following.
It is important to note that (i) there is no need to match the dynamical part of the hydro evolution, since one considers a sudden statistical decay; (ii) one has full energy and flavor conservation, which is important for small systems; and (iii) the procedure is extremely fast, and can be used for "big systems", in particular to study the limiting case of "very large plasma droplets" (infinite volume limit).

A. Infinite volume limit
Let me first look at the infinite volume limit, which corresponds to the grand canonical ensemble. Here, for single-particle spectra (particle species k), one has the distribution

dN_k = [g_k V/(2π)³] exp(−E_k/T) d³p,   (6)

with degeneracy g_k, volume V, energy E_k = √(m_k² + p²), and temperature T. The average energy is ⟨E⟩ = Σ_k ∫ E_k dN_k. Changing variables via E_k dE_k = p dp, and using the integral representations of the modified Bessel functions K_n, one gets

⟨E⟩ = Σ_k [g_k V/(2π²)] m_k² T² [3 K₂(m_k/T) + (m_k/T) K₁(m_k/T)].   (10)

This formula allows one to compare microcanonical and grand-canonical decay using the same average energy (this will be done later). Generating hadrons according to Eq. (6), one may compute the total energy E of the produced particles, and then compare with the average energy ⟨E⟩, Eq. (10). In Fig. 11, I plot the distribution of E/⟨E⟩, for a temperature of 130 MeV and two values of the volume V, namely V = 50 fm³ and V = 1000 fm³. This shows the amount of violation of energy conservation, which increases with decreasing volume.
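The grand-canonical comparison behind Fig. 11 can be sketched with a toy containing a single pion-like species obeying Boltzmann statistics: particle numbers are drawn Poissonian with the textbook mean multiplicity n̄ = g V m² T K₂(m/T) / (2π² (ħc)³), and momenta by rejection sampling from p² exp(−E/T). All parameter values and function names here are illustrative, not the EPOS4 implementation.

```python
import math, random

def poisson(lam):
    """Knuth's Poisson sampler (fine for the moderate means used here)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def sample_energy(m, T):
    """Draw E = sqrt(m^2 + p^2) with |p| from p^2 exp(-E/T): rejection from a
    Gamma(3, T) envelope, valid since E >= p implies exp(-E/T) <= exp(-p/T)."""
    while True:
        p = -T * math.log(random.random() * random.random() * random.random())
        E = math.sqrt(m * m + p * p)
        if random.random() < math.exp(-(E - p) / T):
            return E

def energy_ratios(V, T=0.130, m=0.140, g=3, nev=1000):
    """E_total / <E_total> for nev grand-canonical 'events' in a volume V
    (fm^3), single pion-like species; hbar*c = 0.197 GeV fm."""
    x = m / T
    # K_2(x) from its integral representation, by a simple rectangle rule
    K2 = sum(math.exp(-x * math.cosh(t)) * math.cosh(2.0 * t) * 0.01
             for t in (i * 0.01 for i in range(2000)))
    nbar = g * V * m * m * T * K2 / (2.0 * math.pi ** 2 * 0.197 ** 3)
    random.seed(7)
    energies = [sum(sample_energy(m, T) for _ in range(poisson(nbar)))
                for _ in range(nev)]
    mean = sum(energies) / len(energies)
    return [E / mean for E in energies]
```

Running this for V = 50 fm³ and V = 1000 fm³ reproduces the qualitative message of Fig. 11: the spread of E/⟨E⟩, i.e., the violation of energy conservation, grows as the volume shrinks.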

B. Microcanonical hadronization of plasma droplets
Let me now turn to microcanonical decay of a "plasma droplet" of volume V and energy E (also referred to as mass M). Here, all the states are equally probable, so the weight of a decay of the droplet in its center-of-mass system (CMS) into n hadrons with momenta p_i is given as

ω = C_vol C_deg C_ident δ_{Q_A, Σ q_i^A} dΦ_NRPS,

with the constants (the indices referring to volume, degeneracy, and identical particles)

C_vol = [V/(2π)³]^n,   C_deg = ∏_i g_i,   C_ident = ∏_α 1/n_α!,

and with

dΦ_NRPS = δ(E − Σ_i E_i) δ³(Σ_i p_i) ∏_i d³p_i,   (13)

with E_i = √(m_i² + p_i²) being the energy and p_i the 3-momentum of particle i. Here, C_deg accounts for degeneracies (g_i is the degeneracy of particle i), and C_ident accounts for the occurrence of identical particles (n_α is the number of particles of species α). The term δ_{Q_A, Σ q_i^A} reflects the conservation laws [baryon number (A = B), electric charge (A = C), and strangeness (A = S)]. It should be noted that one has to use the "nonrelativistic phase space (NRPS)" dΦ_NRPS rather than the Lorentz-invariant phase space (LIPS) dΦ_LIPS used for the decay of massive particles, where asymptotic states are defined over an infinitely large volume, whereas here one considers a finite volume (see the discussion in Ref. [26]). But of course, one uses the relativistic expressions E_i = √(m_i² + p_i²) for the hadron energies. Numerical procedures to deal with these n-body phase-space expressions are quite involved and have a long history. Cerulus and Hagedorn [27] provided in 1958 a way to evaluate the NRPS integrals Φ_NRPS = ∫ dΦ_NRPS, which they used to compute particle production in high-energy collisions. Shortly afterwards, it was proposed to rather use a covariant formulation, referred to as the Lorentz-invariant phase-space (LIPS) integral. The corresponding algorithms for Monte Carlo applications were discussed by James [28] in 1968 and heavily used for computing particle production. Being still relevant concerning the decay of a quark-gluon plasma, the NRPS integrals and the corresponding Hagedorn prescriptions were used for Monte Carlo realizations of particle production in heavy ion collisions (Werner and Aichelin [29] in 1994, Becattini and Ferroni [26] in 2004). In 2012 (Bignamini et al. [30]), it was proposed to use the fact that the NRPS element is, up to a factor, equal to the LIPS element, which is much easier to handle for n not too large.
The Cerulus-Hagedorn method employed in Ref. [29] is not very efficient and requires an enormous computing effort at intermediate n (around n = 10) and at very large n, whereas the LIPS method employed in Ref. [30] works well at small n but gets very time-consuming at large n.
I will employ a hybrid method, i.e. an improved (and very efficient) Cerulus-Hagedorn method for large n, and the LIPS method for small n, such that one has finally very efficient procedures for any n, even for very big values (for very big plasma droplets).
Let me first discuss the "improved Cerulus-Hagedorn method". Cerulus and Hagedorn [27] proposed to write the phase-space integral as

Φ_NRPS = ∫ ∏_i [4π p_i² dp_i] δ(E − Σ_i E_i) W(p_1, ..., p_n),

with p_i = |p_i|, and with the "random walk function" W (also called angular integral)

W(p_1, ..., p_n) = ∫ ∏_i [dΩ_i/4π] δ³(Σ_i p_i u_i),   (15)

with u_i = p_i/p_i and Ω_i referring to the corresponding solid angle. The crucial point is an efficient calculation of the angular integral W. It may be written as (omitting the arguments p_1, ..., p_n)

W = [1/(2π)³] ∫ d³λ ∏_j sin(p_j λ)/(p_j λ),

which leads to

W = ∫_0^∞ F(λ) dλ,

with the integrand being

F(λ) = [λ²/(2π²)] ∏_j sin(p_j λ)/(p_j λ).

One may use Euler's product formula sin(πa_j)/(πa_j) = ∏_{k=1}^∞ (1 − a_j²/k²), with a_j = p_j λ/π, to get

∏_j sin(p_j λ)/(p_j λ) = ∏_j ∏_k (1 − a_j²/k²),

which is for large n approximately equal to exp(−λ² Σ_j p_j²/6). This means that the expression ∏_j {sin(p_j λ)/(p_j λ)} is well approximated by exp(−P²λ²), with P² = (1/6) Σ_j p_j². The crucial point is the following: although this "approximation" is not precise enough to be used directly, one may use the fact that one has a rough estimate of the integrand F(λ) to make a variable transformation which allows one to make a very accurate and efficient numerical integration. Thanks to the approximation F(λ) ≈ [λ²/(2π²)] exp(−P²λ²), one knows that f(λ) = F(λ) exp(P²λ²) is a slowly varying function of λ (polynomial dependence). The angular integral may then be written as

W = ∫_0^∞ f(λ) exp(−P²λ²) dλ = (1/P) ∫_0^∞ f(x/P) exp(−x²) dx.   (23)

So one has an integral of the form ∫ f(x) e^{−x²} dx, with a well-behaved function f, which allows one to evaluate the integral with very high precision by using the Gauss-Hermite method, i.e., ∫ f(x) e^{−x²} dx ≈ Σ_j w_j^GH f(x_j^GH), with nodes and weights x_j^GH and w_j^GH found in textbooks. With only six nodes one gets excellent results.
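The procedure described above can be sketched as follows (the node values are the standard 6-point Gauss-Hermite ones; the brute-force integrator is only a cross-check for this sketch, not part of the method):

```python
import math

# positive half of the 6-point Gauss-Hermite rule (the rule is symmetric)
GH_NODES   = [0.4360774119, 1.3358490740, 2.3506049737]
GH_WEIGHTS = [0.7246295952, 0.1570673203, 0.0045300099]

def sinc_product(momenta, lam):
    """prod_j sin(p_j lam)/(p_j lam)."""
    prod = 1.0
    for p in momenta:
        a = p * lam
        prod *= math.sin(a) / a if a != 0.0 else 1.0
    return prod

def angular_integral_gh(momenta):
    """W = (1/2pi^2) Int_0^inf lam^2 prod_j sin(p_j lam)/(p_j lam) dlam,
    via Gauss-Hermite after factoring out exp(-P^2 lam^2), P^2 = sum p^2/6."""
    P = math.sqrt(sum(p * p for p in momenta) / 6.0)
    total = 0.0
    for x, w in zip(GH_NODES, GH_WEIGHTS):
        lam = x / P
        # f(lam) = F(lam) * exp(+P^2 lam^2) is the slowly varying part
        f = lam * lam / (2.0 * math.pi ** 2) * sinc_product(momenta, lam) * math.exp(x * x)
        total += w * f
    return total / P   # half-axis integral: positive nodes only

def angular_integral_brute(momenta, nstep=100000):
    """Direct midpoint-rule evaluation of the same integral, for cross-checking."""
    P = math.sqrt(sum(p * p for p in momenta) / 6.0)
    h = (12.0 / P) / nstep   # the Gaussian envelope kills the tail beyond ~12/P
    return sum((i * h + h / 2) ** 2 / (2.0 * math.pi ** 2)
               * sinc_product(momenta, i * h + h / 2) for i in range(nstep)) * h
```

For, e.g., ten momenta of 0.3 GeV each, the six-node result agrees with the brute-force integration at the percent level, while costing only six evaluations of the product of sines, which is what makes the method usable inside a Markov chain.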
Having solved the problem of computing W, the next step is to write dΦ_NRPS in terms of independent variables, which can be done (see Ref. [29] and also Appendix A) in terms of n − 1 independent variables r_i defined in [0, 1], leading to Eq. (24), where a configuration is defined by the number of hadrons, their species (only configurations respecting the conservation laws Σ q_i^A = Q^A are considered), and their momenta. In addition to the r_i, one needs 2n more variables to take care of the random orientations in space of the momentum vectors as (x, y, z), with x = √(1 − z²) cos(2πu), y = √(1 − z²) sin(2πu), and z = 2w − 1, characterized in terms of two independent variables u, w ∈ [0, 1] (for each of the n hadrons).
However, this does not work for small n (below 30), because in that case the angular integral W cannot be calculated reliably, as discussed earlier. Here, one uses the LIPS method [28,30]. The NRPS and LIPS integrands are very similar, the only difference being an additional 1/2E_i factor in the LIPS case. So one uses the representation of Eqs. (25,26) (see Appendix B), with n − 2 independent variables r_i defined in [0, 1], which determine the intermediate masses M_i. These formulas correspond to a sequence of successive two-body decays M_i → m_i + M_{i+1}, where the decays are considered in the CMS system of the decaying mass M_i (the only way to get simple formulas), with the decay products having random orientations (x, y, z) in space, with x = √(1 − z²) cos(2πu), y = √(1 − z²) sin(2πu), and z = 2w − 1, characterized in terms of two independent variables u, w ∈ [0, 1] (for each of the n − 1 decays). At the end of the procedure, starting from the last decay and going backward until the first decay, one has to perform a sequence of Lorentz boosts to finally have the momenta in the CMS frame of the decaying droplet. This becomes time consuming for large n (>50), whereas for smaller n the procedure is very fast.
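A single step of such a decay chain might look as follows (an illustrative sketch only, not the EPOS implementation; the function name is mine, and only one isotropic two-body decay with the (u, w) orientation variables from the text is shown):

```python
import math, random

def two_body_decay(M, m1, m2, rng=random.Random(1)):
    """One step of a sequential two-body decay chain: decay a mass M
    at rest into m1 + m2 with an isotropic orientation, using the
    (u, w) -> (x, y, z) parametrization from the text."""
    # magnitude of the decay momentum in the CMS of M (Kallen function)
    p = math.sqrt((M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)) / (2.0 * M)
    u, w = rng.random(), rng.random()
    z = 2.0 * w - 1.0
    x = math.sqrt(1.0 - z**2) * math.cos(2.0 * math.pi * u)
    y = math.sqrt(1.0 - z**2) * math.sin(2.0 * math.pi * u)
    # back-to-back four-momenta (E, px, py, pz) in the CMS of M
    p1 = [math.sqrt(p**2 + m1**2),  p * x,  p * y,  p * z]
    p2 = [math.sqrt(p**2 + m2**2), -p * x, -p * y, -p * z]
    return p1, p2

p1, p2 = two_body_decay(1.0, 0.14, 0.14)  # e.g. a 1 GeV mass into two pions
```

In the full chain, the second daughter would be the intermediate mass M_{i+1}, decayed in turn in its own rest frame, followed by the backward sequence of boosts described above.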
So one has excellent methods for computing dΦ_NRPS, for large n [based on Eq. (24)] and for small n [based on Eqs. (25,26)], and fortunately around n ≈ 30 − 50 both methods work, so finally one has very fast procedures for any size of droplets. So one is ready to employ Markov chain methods to generate configurations according to the weights dΦ_NRPS. A configuration of n hadrons is characterized by the particle species and momenta, the latter expressed in terms of k independent variables (all of them defined in [0, 1]). Based on the above discussion, in the case of the improved Cerulus-Hagedorn method [using Eq. (24)], one has k = n − 1 + 2n; in the case of the LIPS method [using Eqs. (25,26)], one has k = n − 2 + 2(n − 1). For details concerning the Markov chain methods, see Appendix C, where one discusses in particular the (new) algorithm to define the proposal matrix in the Markov chain.
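The Markov chain logic can be illustrated on a toy discrete weight set (this is a generic Metropolis sketch with a uniform symmetric proposal, not the actual EPOS proposal matrix of Appendix C; all names are mine):

```python
import random

def metropolis_sample(weights, n_steps=20000, rng=random.Random(7)):
    """Minimal Metropolis sketch: visit states k with probability
    proportional to weights[k], analogous to generating hadron
    configurations with probability proportional to dPhi_NRPS.
    A uniform (symmetric) proposal matrix is assumed for simplicity."""
    n = len(weights)
    state = 0
    counts = [0] * n
    for _ in range(n_steps):
        proposal = rng.randrange(n)                       # symmetric proposal
        if rng.random() < min(1.0, weights[proposal] / weights[state]):
            state = proposal                              # Metropolis acceptance
        counts[state] += 1
    return [c / n_steps for c in counts]

freqs = metropolis_sample([1.0, 2.0, 3.0])  # relative weights 1 : 2 : 3
```

After many steps the visit frequencies approach the normalized weights 1/6, 1/3, 1/2.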
Having a reliable method for very large n is in particular useful in order to investigate the convergence towards the infinite volume limit.
I will first, also as a check that the procedures work, compare the microcanonical results with grand canonical decay, knowing that the latter is the limit of the former for large droplets. I consider a "complete" set of hadrons, compatible with a recent PDG (Particle Data Group) list. I consider the decay of droplets of different invariant masses M (corresponding to the total energy E in the above formulas). The volume is chosen such that one always has the same energy density, namely M/V = ε_FO = 0.57 GeV/fm³, which corresponds to a temperature of T = 167 MeV.
In Figs. 12 and 13, I show momentum distributions of pions (red), kaons (blue), protons (green), lambdas (yellow), Ξ baryons (light green), and Ω baryons (grey), where one counts particles and antiparticles. I show results for different masses M, namely M = 200 GeV, M = 100 GeV, M = 50 GeV, M = 25 GeV, M = 12.5 GeV, and M = 6.25 GeV. I show results for microcanonical particle production (points) compared with grand canonical production (lines). The grand canonical results are all taken for the same volume, namely V_0 = M_0/ε_FO, with M_0 = 50 GeV, to always have the same reference curve; therefore one multiplies the microcanonical results by M_0/M, to have the correct normalization.
Looking at the M = 200 GeV results (upper plot in Fig. 12), one sees no difference between the microcanonical and the grand canonical results, which means that this system (M = 200 GeV and V = M/ε_FO) corresponds already to the "infinite volume limit". Considering M = 100 GeV and M = 50 GeV (middle and lower plots in Fig. 12), one sees slight deviations of the microcanonical curves from the grand canonical ones, for heavy particles (Ξ and Ω baryons) and at large p_t. For pions and kaons, the two scenarios are still identical.
Looking at the plots in Fig. 13, representing M = 25 GeV, M = 12.5 GeV, and M = 6.25 GeV, one sees increasing differences between the microcanonical and the grand canonical results, the biggest ones for heavy particles, in particular the Ω baryons. Finally, not only the shape of the distributions is affected, but even the absolute yield. In the case of M = 6.25 GeV, one observes a strong Ω baryon suppression in the microcanonical compared with the grand canonical case.
So far one has considered the decay of a static plasma droplet of given mass M and volume V (with M/V = ε_FO), according to the microcanonical probability distribution, whereas in reality one has an expanding fluid, where fluid cells eventually pass below the critical energy density ε_FO. I discuss in the following how to treat such a case.

C. Flow through hypersurface elements
Let me assume that one has an expanding fluid, characterized by an energy-momentum tensor T^µν and some vector J^µ_A representing the current of conserved quantities A (charge, baryon number, strangeness). In the case of a viscous fluid, the energy-momentum tensor can be expanded in terms of the energy density ε, the flow vector u^µ (four-velocity of a fluid cell), the shear stress tensor, and the bulk pressure. Given the space-time dependence of the energy density, one may define a "freeze-out (FO) hypersurface" via ε = ε_FO, such that a point on the FO hypersurface can be expressed as x^µ(τ, η, r, ϕ) with x⁰ = τ cosh η, x¹ = r cos ϕ, x² = r sin ϕ, x³ = τ sinh η, (30) with for example r = r(τ, η, ϕ). A hypersurface element [sketched in Fig. 14(a)] may then be defined as
dΣ_µ = ε_µνλδ (∂x^ν/∂τ)(∂x^λ/∂ϕ)(∂x^δ/∂η) dτ dϕ dη.
One uses Milne coordinates and the corresponding natural frame (see Appendix D) for all vector and tensor representations.
Cooper-Frye hadronization amounts to calculating the production of particles with four-momenta p^µ according to
E dn/d³p = ∫ f_T(E) p^µ dΣ_µ,
with f_T(E) being the "thermal" (grand canonical) distribution at a given temperature, as sketched in Fig. 14(b), considering particle flow through FO hypersurface elements. Here one is going to proceed differently. Rather than considering particle flow, one considers energy-momentum flow. As sketched in Fig. 14(c), one defines the flow of energy-momentum dP and the flow of conserved charges dQ_A through the surface element dΣ as
dP^µ = T^µν dΣ_ν, dQ_A = J^ν_A dΣ_ν,
with A ∈ {C, B, S}, corresponding to electric charge, baryon number, and strangeness. Momentum and charges are conserved, i.e., Σ dP^µ = 0 and Σ dQ_A = 0 for a closed surface Σ, and as a consequence, if one starts at some given proper time τ_ini, one recovers, when integrating dP^µ and dQ_A over the complete FO hypersurface, the initial energy-momentum and conserved charges. One therefore considers dP^µ and dQ_A as the "basic objects" which allow one to realize particle production respecting the conservation of energy-momentum and of the conserved charges.
In practice, one discretizes the hypersurface and considers small (but finite) elements ∆Σ^(K) which cover the whole hypersurface Σ_FO, as sketched in Fig. 16, to be explained in the following. Before defining the hypersurface and hypersurface elements, the hydrodynamic evolution is completed, and all relevant quantities (energy density, flow velocity, currents of conserved charges, ...) are known on a four-dimensional grid (τ_i, η_j, r_k, ϕ_n), with constant step sizes ∆a = a_{i+1} − a_i for the four variables a ∈ {τ, η, r, ϕ}.
The ranges for r and η are adapted to the system size (PbPb is obviously bigger than pp) and the energy (for LHC one needs larger η ranges than for RHIC energies). First the transverse size R is determined (based on the core distribution in space), event by event, such that the freeze-out surface will be covered, with R from 3 fm (peripheral) to 10 fm (central) for PbPb at 2.76 TeV, and with R = 2.4 fm for pp at 7 TeV. Then a longitudinal width (in space-time rapidity) W is determined as 21.2 r(s), with r(s) = y(s)/y(s_ref), where y(s) is the rapidity of the projectile nucleons in the nucleon-nucleon center-of-mass frame for a squared nucleon-nucleon energy s, and where s_ref = (2.76 TeV)² is some reference value. Then one defines: ∆r = R/74, c∆τ = 2R/50, ∆η = W/26, ∆ϕ = 2π/60 (EPOS4.0.0 default).
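These step-size definitions are simple arithmetic; a sketch follows (names are mine, and the beam-rapidity formula y = acosh(√s / 2m_N), with √s in GeV, is standard kinematics that I assume here for y(s)):

```python
import math

M_N = 0.938  # nucleon mass in GeV (assumed value)

def beam_rapidity(sqrt_s):
    """Rapidity of the projectile nucleon in the NN center-of-mass frame
    (standard kinematics, assumed here): y = acosh(sqrt(s) / (2 m_N))."""
    return math.acosh(sqrt_s / (2.0 * M_N))

def grid_steps(R, sqrt_s, sqrt_s_ref=2760.0):
    """EPOS4.0.0 default step sizes quoted in the text:
    dr = R/74, c*dtau = 2R/50, deta = W/26, dphi = 2*pi/60,
    with W = 21.2 * y(s) / y(s_ref) and s_ref = (2.76 TeV)^2."""
    W = 21.2 * beam_rapidity(sqrt_s) / beam_rapidity(sqrt_s_ref)
    return {"dr": R / 74.0, "dtau": 2.0 * R / 50.0,
            "deta": W / 26.0, "dphi": 2.0 * math.pi / 60.0}

steps = grid_steps(R=10.0, sqrt_s=2760.0)  # central PbPb at 2.76 TeV
```

At the reference energy one gets W = 21.2, i.e., ∆η ≈ 0.82, and for R = 10 fm a radial step of about 0.14 fm.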
The hypersurface defined by ε(τ, η, r, ϕ) = ε_FO is computed based on the knowledge of the energy density on the grid and is given in terms of four-vectors x = x^(i,j,n) depending on three indices i, j, n, as x⁰ = τ_i cosh η_j, x¹ = r_{i,j,n} cos ϕ_n, x² = r_{i,j,n} sin ϕ_n, x³ = τ_i sinh η_j, with r_{i,j,n} being the root of f(r) = ε(τ_i, η_j, r, ϕ_n) − ε_FO for fixed i, j, n, computed via linear interpolation from the known values of f(r_k) and f(r_{k+1}), where k has been determined such that f(r_k) and f(r_{k+1}) have opposite signs (and [r_k, r_{k+1}] contains the root). Knowing the hypersurface given as x = x^(i,j,n), one defines hypersurface elements ∆Σ^(i,j,n) as in the continuum case, with the partial derivatives obtained from Eq. (30) and the differentials replaced by the step sizes ∆τ, ∆ϕ, and ∆η of our grid. Having determined r_{i,j,n} via interpolation, one employs the same interpolation procedure to construct the energy-momentum tensor and the current of conserved charges corresponding to r_{i,j,n}, from the known values of these quantities at the grid points (τ_i, η_j, r_k, ϕ_n) and (τ_i, η_j, r_{k+1}, ϕ_n), and one then uses for the interpolated values the symbols T^(i,j,n)µν and J^(i,j,n)ν_A.
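The root finding via linear interpolation can be sketched as follows (an illustrative sketch with a toy energy-density profile; function and variable names are mine):

```python
def freezeout_radius(r_grid, eps_values, eps_fo):
    """Find the radius where eps(r) = eps_FO: locate a sign change of
    f(r) = eps(r) - eps_FO between neighbouring grid points and
    interpolate linearly, as described in the text."""
    f = [e - eps_fo for e in eps_values]
    for k in range(len(r_grid) - 1):
        if f[k] * f[k + 1] <= 0.0:            # [r_k, r_{k+1}] brackets the root
            # linear interpolation between (r_k, f_k) and (r_{k+1}, f_{k+1})
            return r_grid[k] - f[k] * (r_grid[k + 1] - r_grid[k]) / (f[k + 1] - f[k])
    return None                               # no crossing on this ray

# toy profile: eps falls linearly from 1.2 to 0 GeV/fm^3 over r in [0, 6] fm
r = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
eps = [1.2 - 0.2 * ri for ri in r]
r_fo = freezeout_radius(r, eps, 0.57)  # crossing of eps_FO = 0.57 GeV/fm^3
```

For this linear toy profile the interpolated root is exact, r_fo = 3.15 fm.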
To simplify the following discussion, one introduces an integer K = K(i, j, n), uniquely related to the triplet (i, j, n), so one uses the quantities ∆Σ^(K)_µ, T^(K)µν, and J^(K)ν_A. One then computes the energy-momentum flow vector ∆P^(K)µ = T^(K)µν ∆Σ^(K)_ν and the flow of conserved charges ∆Q^(K)_A = J^(K)ν_A ∆Σ^(K)_ν. For each hypersurface element ∆Σ^(K) and its associated flow vector ∆P^(K), one defines an invariant mass element
∆M^(K) = √(∆P^(K) • ∆P^(K)),
where "•" refers to the scalar product of four-vectors. One also defines an associated flow four-velocity U as
U^(K) = ∆P^(K) / ∆M^(K).
The four-velocity U^µ is NOT equal to the fluid velocity u^µ; this would only be true in the case of zero pressure.
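Computing ∆M and U from a given flow vector ∆P is elementary (a sketch with names of my own; the usual (+,−,−,−) metric signature is assumed):

```python
import math

def mass_and_velocity(dP):
    """Invariant-mass element dM = sqrt(dP . dP) and associated flow
    four-velocity U = dP / dM for a flow four-vector dP = (E, px, py, pz),
    using the (+,-,-,-) Minkowski product."""
    dM2 = dP[0]**2 - dP[1]**2 - dP[2]**2 - dP[3]**2   # dP . dP
    dM = math.sqrt(dM2)
    U = [c / dM for c in dP]                           # normalized: U . U = 1
    return dM, U

dM, U = mass_and_velocity([5.0, 1.0, 2.0, 0.0])  # toy flow vector
```

By construction U is a unit timelike four-vector, U • U = 1.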
If one were only interested in global particle yields, one might sum up all the mass elements to get the total effective invariant mass, M_eff = Σ_K ∆M^(K), sum up the charges as Q_A^eff = Σ_K ∆Q^(K)_A, and decay this object according to microcanonical hadronization, as discussed in the previous section. But in that case, one has no information on the transverse momentum and rapidity dependence of particle production.
But one can do better. One still computes M_eff and performs the microcanonical decay, as discussed above. But one actually has much more information than just the masses. One knows in addition, for each hypersurface element ∆Σ^(K), the associated four-velocity U^(K), representing the energy-momentum flow through the surface element, and one knows all its coordinates, see Fig. 17. Since one counts the hypersurface elements ∆Σ^(K) as K = 1, 2, 3, ..., K_max, one may define a probability law P on this set {1, 2, 3, ..., K_max} as
P(K) = ∆M^(K) / Σ_{K'} ∆M^(K'),
representing the relative weight of a particular hypersurface element.
After having done the microcanonical decay, one may consider P(K) as the weight of having produced a particle at a hypersurface position given by the coordinates τ_K, ϕ_K, r_K, η_K, corresponding to the hypersurface element ∆Σ^(K). In other words, one generates the coordinates τ_K, ϕ_K, r_K, η_K of the produced hadrons according to the law P(K). In addition, one may consider P(K) also as the weight of having a four-velocity U^(K), and so one generates not only the position according to P(K), but also the four-velocity U^(K), which one then uses to Lorentz boost the particles into the lab frame (remembering that the microcanonical decay is done in the center-of-mass frame of an effective invariant mass). In other words, one gives back the flow which has been taken out to construct the effective invariant mass.
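Sampling an element according to P(K) and giving back the flow via a boost might look like this (an illustrative sketch; the boost formula is the standard pure boost parametrized by the four-velocity U, and all names are my own):

```python
import math, random

def sample_surface_element(masses, rng=random.Random(3)):
    """Pick a hypersurface element K with probability
    P(K) = dM(K) / sum_K' dM(K'), i.e. its relative weight."""
    total = sum(masses)
    x = rng.random() * total
    for K, m in enumerate(masses):
        x -= m
        if x <= 0.0:
            return K
    return len(masses) - 1

def boost(p, U):
    """Lorentz boost a four-momentum p (given in the CMS of the effective
    mass) with a flow four-velocity U = (U0, Ux, Uy, Uz), U . U = 1."""
    U0, Uv = U[0], U[1:]
    pU = sum(a * b for a, b in zip(Uv, p[1:]))        # 3-vector part of U . p
    E = U0 * p[0] + pU                                # boosted energy
    coef = p[0] + pU / (U0 + 1.0)
    return [E] + [p[i + 1] + coef * Uv[i] for i in range(3)]

K = sample_surface_element([1.0, 2.0, 3.0])            # toy dM(K) weights
boosted = boost([1.0, 0.0, 0.0, 0.0], [math.sqrt(2.0), 1.0, 0.0, 0.0])
```

A particle at rest with mass m is boosted to m·U, and the boost preserves the invariant mass of any four-momentum.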
In the actual Monte Carlo realization of the generation of a set of particles (microcanonical decay, plus sampling based on P(K), plus the subsequent boosts of the generated particles according to U^(K)), one will not have 100% energy-momentum conservation. One therefore repeats the sampling based on P(K) and the subsequent boosts several times and takes the "best" selection, giving an accuracy on the 1% level.
To test the above procedure, one considers a realistic expanding fluid and the corresponding hypersurface created in a single AuAu simulation at 200 AGeV with an impact parameter of 8 fm, considering a limited τ range (two steps). Transverse flow velocities on the freeze-out surface reach up to 0.45c. The total effective mass is around 160 GeV, so this should correspond already to the infinite-volume limit. One therefore computes particle yields in two ways: using the above method of microcanonical decay plus Lorentz boosts according to U^(K), based on Monte Carlo simulations, and using the grand canonical method, doing a three-dimensional (3D) momentum integration of Eq. (6), including as well Lorentz boosts according to U^(K). The results are shown in Fig. 18, where I plot transverse momentum distributions for different hadron species, from top to bottom: π, K, Λ, Ξ, and Ω (since one is just interested in comparing two methods, one computes yields in p_t bins of width 0.2 GeV/c, not divided by the bin size). As expected, one observes indeed the same result (which is also a good check of the numerics, since the two methods are completely independent of each other). At very high energies, the above procedure may be problematic, since the FO hypersurface may extend over a wide η range, and it is not obvious whether it is correct to consider the whole surface as one object with one single effective invariant mass. One therefore has the possibility (and will use it) to split the surface into several pieces corresponding to intervals in η with width ∆η, as [η_i − ∆η/2, η_i + ∆η/2], with η_{i+1} − η_i = ∆η. One may then do the sum as in Eq. (42), but summing only the hypersurface elements within [η_i − ∆η/2, η_i + ∆η/2], giving ∆M_eff(η_i) [and correspondingly for Q_A^eff(η_i)], where η(K) is the η value associated with K.
Rather than a single effective mass, one now has a discrete set of masses ∆M_eff(η_i), which one may decay independently in their center-of-mass frames (in the same way as discussed for M_eff). One also notes the rapidities y_i corresponding to ∆M_eff(η_i), which allows one to finally boost the particles into the lab frame. It should be noted that the numerical values of y_i are very close to η_i. One has to do this splitting carefully. If the ∆η interval is chosen to be very small, one may create artificially small masses, with the corresponding suppression of heavy-mass hadrons. In this sense, the width ∆η is a physical parameter, measuring the width over which hypersurface elements are causally connected. I will come back to this point later.
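The η-splitting can be sketched as follows (one plausible reading, with all names mine: the slice mass ∆M_eff(η_i) is the sum of the ∆M(K) in the slice, and the slice rapidity y_i is taken from the summed flow vector; the precise definitions are in Eq. (42) and the appendices):

```python
import math

def slice_surface(dMs, dPs, etas, deta):
    """Group hypersurface elements into slices [eta_i - deta/2, eta_i + deta/2]
    of width deta.  Inputs per element: invariant-mass element dM(K),
    flow four-vector dP(K), and space-time rapidity eta(K).  Returns
    (eta_i, dM_eff(eta_i), y_i) per slice."""
    slices = {}
    for dM, dP, eta in zip(dMs, dPs, etas):
        i = round(eta / deta)                       # slice index
        s = slices.setdefault(i, {"M": 0.0, "P": [0.0] * 4})
        s["M"] += dM
        for mu in range(4):
            s["P"][mu] += dP[mu]
    out = []
    for i in sorted(slices):
        P = slices[i]["P"]
        y = 0.5 * math.log((P[0] + P[3]) / (P[0] - P[3]))  # slice rapidity y_i
        out.append((i * deta, slices[i]["M"], y))
    return out

# toy data: two elements near eta = 0 and one near eta = -1
dMs = [math.sqrt(8.0), math.sqrt(8.0), math.sqrt(3.0)]
dPs = [[3.0, 0.0, 0.0, 1.0], [3.0, 0.0, 0.0, 1.0], [2.0, 0.0, 0.0, -1.0]]
res = slice_surface(dMs, dPs, [0.1, 0.2, -1.0], deta=1.0)
```

Each slice would then be decayed microcanonically in its own rest frame and boosted back by y_i.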
Let me finally address a technical issue. When constructing objects for given η intervals, after summing the ∆Q^(K)_A elements within such an interval, one may end up with non-integer values, which one fixes by taking the closest integer instead. The resulting violation of the global conservation laws is small even at RHIC energies (less than 1%), so for the moment no correction is applied. This might be changed in the future, but in particular at low energies there are even more important issues to be considered (first of all, the breakdown of the parallel scattering scheme). It is planned to reconsider these issues in a future work dedicated to lower energies.

D. Core hadronization in pp scattering
I will apply the procedures discussed in the last section to investigate proton-proton scattering at LHC energies (I will show results for a center-of-mass energy of 7 TeV), and I will try to understand what kind of effective masses are produced, and where they are produced in space and time.
Let me consider (randomly chosen, but typical) proton-proton scattering events involving 2 Pomerons, 6 Pomerons, and 12 Pomerons. In Fig. 19, I plot for the three cases the energy density in the transverse plane (x, y). I consider in each case two snapshots, namely at the start time of the hydro evolution τ_0 = 0.40 fm/c (left column) and at a later time τ_1 (different values, 1.5 − 2.5 fm/c) close to final freeze-out (right column). In all cases, the initial distributions have elongated shapes (just by accident, due to the random positions of the interacting partons). One can clearly see that the final distributions are elongated as well, but perpendicular to the initial ones, as expected in a hydrodynamical expansion.
As explained in the last section, one computes effective mass elements representing the momentum flow through surface elements. It is in particular useful to split the η range into intervals [η_i − ∆η/2, η_i + ∆η/2], with η_{i+1} − η_i = ∆η, and sum up the corresponding hypersurface elements to get a set of effective masses ∆M_eff(η_i). In this way, one obtains an "effective mass distribution" ∆M/∆η, which one may plot as a function of η, as shown in Fig. 20 for the abovementioned events with 2 Pomerons (corresponding roughly to minimum bias), 6 Pomerons (roughly 3 times minimum bias), and 12 Pomerons (roughly 6 times minimum bias). As expected, the dM/dη values increase with increasing Pomeron number, somewhat less than linearly. The total energy (of the core part) may be estimated as E = Σ_i ∆M_eff(η_i) cosh y_i, using the fact that y_i is very close to η_i. These energies should be compared with the total available energy, which is 7000 GeV. It turns out that the total energies of the core part in all three cases represent only a relatively small fraction of the total energy, but the observable effects are big! One may now compute the total effective masses as M_eff = Σ_i ∆M_eff(η_i), and one gets for the three cases values between roughly 30 and 115 GeV. These masses are much smaller than the energies E, the latter containing the kinetic energy of the longitudinal expansion. So the masses M_eff are the relevant quantities concerning the production yields of the different particle species, and these masses will be used in the microcanonical decay procedures.
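The two estimates can be sketched with toy numbers (the relations E = Σ_i ∆M_eff(η_i) cosh y_i and M_eff = Σ_i ∆M_eff(η_i) follow the text; the function name and the example values are mine):

```python
import math

def core_energy_and_mass(dM_list, y_list):
    """Total core energy E = sum_i dM_eff(eta_i) * cosh(y_i) and total
    effective mass M_eff = sum_i dM_eff(eta_i).  E contains the kinetic
    energy of the longitudinal expansion, M_eff does not, hence M_eff < E
    whenever the slices move longitudinally."""
    E = sum(m * math.cosh(y) for m, y in zip(dM_list, y_list))
    M = sum(dM_list)
    return E, M

# toy example: three slices with equal masses at rapidities -2, 0, 2
E, M = core_energy_and_mass([10.0, 10.0, 10.0], [-2.0, 0.0, 2.0])
```

Here M = 30 GeV while E ≈ 85 GeV, illustrating how the longitudinal motion inflates E relative to M_eff.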
The effective masses ∆M_eff(η_i) are sums of small hypersurface elements ∆Σ^(K), each one corresponding to a set of coordinates τ, η, r, ϕ, which allows one to plot projections of these coordinates into the η − τ plane, as done in Fig. 21 for our 12-Pomeron example. This shows that the hypersurface is quite extended in space and time. In general, hadronization at large space-time rapidities (corresponding to large rapidities) happens early, the latest hadronization takes place around η = 0, the extension in time is around 2 fm/c, and the width in η is around 10 units.
In particular the large width in η brings up (again) the question of how to deal with the splitting of the complete hypersurface into pieces of width ∆η, as sketched in Fig. 22. For small systems (like pp scattering), the total effective masses are not very big, in our three examples between 30 and 115 GeV, and splitting these into small pieces will lead to very small effective masses, with big effects concerning the production of heavy baryons. As shown in Sec. IV B, for masses of 25 GeV, one already sees deviations of the microcanonical result compared with the grand canonical one. So the big question is: does the system decay as a single effective mass M_eff or as several independent objects of width ∆η?
In Refs. [31,32], Oliinychenko et al. presented a sampling method for the transition from relativistic hydrodynamics to particle transport which preserves the local event-by-event conservation of energy, momentum, baryon number, strangeness, and electric charge, for each sampled configuration in spatially compact regions (patches). In our method, using a finite ∆η amounts to cutting the hypersurface into distinct regions, defined by η ranges. In principle it would be possible to further cut these regions into smaller pieces, which indeed might be necessary to study local fluctuations. This is an option for future development, but I do not follow this possibility in this paper, since I mainly focus on the transition to small systems, where already the regions defined by η ranges are small.

E. The role of the splitting parameter ∆η in core hadronization
Let me consider ∆η as a free parameter, in order to try several choices in the following, to learn how it affects the results. One choice will be ∆η = ∞, which means one has just one single object, as in Fig. 22 (left). In addition, one also investigates ∆η = 4.5 and ∆η = 1.8, as in Fig. 22 (right).
One should keep in mind that the total width of the FO hypersurface is around 10 units in η at LHC energies, see Fig. 21, and the total effective masses in pp scattering range from several tens up to more than 100 GeV, depending on the Pomeron number.
One considers only the core contributions for the moment, decayed according to microcanonical hadronization, for pp and PbPb scattering. Particle ratios (always with respect to pions) are computed as a function of the charged multiplicity dn_ch/dη(0), which allows one to put pp and PbPb results on the same plot. One considers simulation results from EPOS4.0.0 for pp at 7 TeV and PbPb at 2.76 TeV, compared with data from ALICE [6,7,33-35]. In Fig. 23, I plot the corresponding results, and in Fig. 24 results for Ξ/π (upper plot) and Ω/π (lower plot). In all cases, I show simulation results for ∆η = ∞ (green dashed-dotted lines), ∆η = 4.5 (blue dotted lines), and ∆η = 1.8 (red dashed lines). For the simulation results, a thin line style is used for pp and thick lines for PbPb, whereas for the data, circles are used for pp and stars for PbPb. The short black horizontal lines on the right side of the plots correspond to predictions from the "thermal model" [36].
First of all, one sees that all the curves show a continuous behavior when passing from pp to PbPb. Concerning the PbPb results, there is no difference between the three ∆η choices, with the only exception of the Ω/π ratio, where ∆η = ∞ gives a flat ratio, whereas for ∆η = 4.5 and ∆η = 1.8 the curves drop slightly on the left end (peripheral collisions). The situation is quite different for the pp curves. In all cases, the curves drop when going to smaller multiplicities, and this drop is more and more pronounced with decreasing ∆η; comparing the curves at fixed ∆η, the drop increases with particle mass: the biggest effect is seen for the Ω/π ratio. One should also keep in mind that grand canonical particle production (which was used in earlier EPOS versions) leads to flat curves, the same for pp and PbPb. So one sees a big effect of microcanonical hadronization compared with the grand canonical one, and the deviation depends strongly on ∆η.
Concerning the comparison with experimental data, one sees that none of the choices of ∆η can explain the data: for big values of ∆η, one stays clearly above the data, and for small values, the ratios do drop, but too fast. So it is clear that even a "tuning" of ∆η will not help to reproduce the data, for a pure core approach. I will investigate the full core-corona approach in the next section.
As a side remark: it is of course possible to reduce baryon production by using a smaller value of the freeze-out energy density ε_FO, as shown in Fig. 25 (left), where I plot the ratio Ω/π for a pp simulation at 7 TeV, using ∆η = ∞ and ε_FO = 0.2 GeV/fm³. But as seen in Fig. 25 (right), the average p_t is then much too high. Similar results are obtained for other baryons. So it seems impossible to reproduce in this way the ratios and other important observables at the same time, as will be possible in the core-corona approach, where in addition one employs a unique value of ε_FO for all systems.

F. Results for the full core-corona picture in pp and PbPb scattering
In the following, results concerning the full core-corona approach will be discussed. Let me briefly recall the procedure: as explained in detail in Sec. III, primary interactions lead to the production of prehadrons, and a core-corona procedure separates the core and corona prehadrons. The former constitute the core, while the latter are considered to be hadrons. The core is meant to be matter, which evolves according to viscous hydrodynamics, which allows one to compute the space-time dependence of the energy-momentum tensor T^µν (expressed in terms of the energy density ε, the flow vector u^µ, the shear stress tensor, and the bulk pressure) and of the vector J^µ_A representing the current of conserved quantities A.
As shown in Sec. IV C, based on the energy density ε, one defines a freeze-out (FO) hypersurface via ε(τ, η, r, ϕ) = ε_FO, which allows defining (after discretization) small hypersurface elements ∆Σ_µ. Together with the knowledge of T^µν, this allows one to compute energy-momentum flow four-vectors ∆P and invariant mass elements ∆M = √(∆P • ∆P), which may be summed up to get effective invariant masses ∆M_eff(η_i), covering space-time rapidity intervals of width ∆η, which are decayed microcanonically (this is called "hadronization").
As discussed in the previous section, the outcome of core hadronization depends on ∆η. I use in the following ∆η = ∞, which seems to be "the best" value when comparing with data, in the sense of a comparison with a very large set of data concerning very different observables, for many different systems and energies, much beyond what is shown in this paper. The aim of this work is not to accommodate local fluctuations; in that case one would need, in particular for heavy ion collisions, to introduce "local patches" and not only a finite ∆η, which would be an easy extension of the present work. The main focus here is the transition from big to small systems, and for small systems the value of ∆η is important, and (from our findings) the best global fit is obtained for ∆η = ∞, which is simply an empirical fact, somewhat counterintuitive. Anyway, ∆η is one of the EPOS parameters the user may change.
After the hadronization of the fluid, the created hadrons as well as the corona prehadrons (having been promoted to hadrons) may still interact via hadronic scatterings, and here one uses UrQMD [37,38].
In the following, I want to study core and corona contributions to hadron production, for pp and PbPb collisions at LHC energies. I will distinguish: (A) the "core+corona" contribution: primary interactions (S-matrix approach for parallel scatterings), plus core-corona separation, hydrodynamic evolution, and microcanonical hadronization, but without hadronic rescattering; (B) the "core" contribution and (C) the "corona" contribution, defined as in (A) but keeping only the respective core or corona particles; and the "full" contribution, which includes in addition the hadronic rescattering.
In cases (A), (B), and (C), one needs to exclude the hadronic afterburner, because the latter affects both core and corona particles, so in the full approach the core and corona contributions are not visible anymore. In Fig. 26, I show the core fraction [core / (core+corona)] in pp at 7 TeV and PbPb at 2.76 TeV as a function of the multiplicity dn_ch/dη(η = 0), defined as the pion yield from the core over the core + corona contribution (since pions are by far the most frequent species).
I will take core+corona as a reference and plot ratios X / (core+corona) versus p_t, with X being the corona contribution, the core contribution, and the full contribution, for four event classes and four different particle species.
In Fig. 27, I show results for pp collisions at 7 TeV, for (from top to bottom) pions (π±), kaons (K±), protons (p and p̄), and lambdas (Λ and Λ̄), which correspond to hadrons with increasing masses. The four columns represent four different event classes defined in terms of the number N_Pom of Pomerons (from left to right): 1, 2-4, 5-9, 10-15, which may be compared with the average number of Pomerons in minimum bias pp at 7 TeV of around 2. Looking at the green (core) and blue (corona) curves, one observes that the core contribution increases with N_Pom, but it also increases with the hadron mass (from top to bottom). The biggest core contribution is observed for lambdas in events with 10-15 Pomerons. There is an interesting p_t dependence. In all cases, the maximum of the green core curves is around 1-2 GeV/c, with bigger values for the heavier particles. This is clearly a flow effect: the core particles are produced from a radially expanding fluid, which moves particles to higher p_t values, as compared with a decaying static droplet.
Whereas the core contribution goes down toward small N_Pom, one still observes a non-vanishing contribution even for N_Pom = 1, which means that even a very small number of strings may create a core and flow effects. And this is actually needed to describe experimental data, as I discuss later. It is often said that "collective effects" show up in high-multiplicity pp events, but it seems that flow effects are present everywhere.
The red curves represent full over core+corona; the difference between the two is the effect of the hadronic cascade in the full case. Here, one sees only a small effect, essentially some baryon-antibaryon annihilation, which suppresses the baryon yield at small p_t.
Looking at the corresponding green (core) and blue (corona) curves for PbPb, one observes that the core contribution increases with centrality, and it also increases with the hadron mass (from top to bottom). Concerning the p_t dependence, one also observes a maximum of the green core curves around 1-2 GeV/c, less pronounced compared with the pp results; but still, at very low p_t the core contribution goes down, so even at very small p_t values the corona contributes. The crossing of the green core and the blue corona curves (core = corona) occurs between around 2 GeV/c (mesons, peripheral) and 5 GeV/c (baryons, central).
The red curve, full over core+corona, represents the effect of the hadronic cascade in the full case. The pions are not much affected, but for kaons, and even more for protons and lambdas, rescattering makes the spectra harder. One should keep in mind that rescattering involves particles from fluid hadronization, but also corona particles from hard processes. Concerning the baryons, rescattering reduces (considerably) the low-p_t yields, due to baryon-antibaryon annihilation.
In the following, I show results of particle production in pp scattering at 7 TeV and PbPb collisions at 2.76 TeV.In Fig. 29, I show ratios of hadron yields over pion yields, at rapidity zero, for proton-proton scattering at 7 TeV and PbPb at 2.76 TeV, as a function of the average multiplicity dn ch /dη(η = 0) .Concerning the EPOS simulations, I show the different contributions: core (green dashed-dotted lines), corona (blue dotted lines), core+corona (co+co, yellow dashed lines), and full (red full lines).Thin lines are used for pp and thick ones for PbPb.I also show ALICE data [6,7,[33][34][35], using black symbols, stars for PbPb, and circles for pp.I consider charged kaons, protons, lambdas, as well as Ξ baryons and Ω baryons.
One sees almost flat lines for the corona contributions, similar for pp and PbPb, which is understandable, since corona means particle production from string fragmentation, which does not depend on the system.One observes also flat curves for the core at high multiplicity, which is again expected since the core hadronization is determined by the freeze-out energy density, which is system independent.However, when the system gets very small, one gets a reduction of heavy particle production due to the microcanonical procedure (with its energy and flavor conservation constraints), whereas a grand canonical treatment would give a flat curve down to small multiplicities.This effect increases with particle mass, it is biggest for Omega baryons, where the reduction is about a factor of 2.
The yellow core+corona curves simply interpolate between the corona and the core curves, with the core weight increasing continuously with multiplicity. The increase is biggest for the Ω. Here, the core curve is far above the corona one, which simply reflects the fact that Ω production is much more suppressed in string decay, compared with statistical ("thermal") production. This explains why the core+corona contribution increases by one order of magnitude from low to high multiplicity: the relative weight of the core fraction simply increases from zero to unity. Both the microcanonical decay and the core-corona procedure contribute to the decrease of the ratios toward small multiplicity, but it seems that the core-corona mechanism is more important.
Finally, there is some effect from hadronic rescattering (the difference between full and co+co), mainly a suppression due to baryon-antibaryon annihilation at large multiplicities. Whereas the particle ratios are essentially smooth curves, from pp to PbPb, the situation changes completely when looking at the average transverse momentum p_t versus multiplicity, as shown in Fig. 30, where I show simulation results for pp (thin curves) and PbPb (thick curves) for charged π mesons, charged K mesons, (anti)protons, Λ (anti)baryons, Ξ (anti)baryons, and Ω (anti)baryons.
Here, one sees (for all curves) a significant discontinuity when going from pp to PbPb. The corona contributions are not flat (unlike the ratios), but increase with multiplicity, even more pronounced for pp than for PbPb. This is a "saturation effect": the saturation scale increases with multiplicity, which means that with increasing multiplicity the events get harder, producing higher p_t. The situation is different for PbPb, where an increase in multiplicity is mainly due to an increase in the number of active nucleons, with a more modest increase of the saturation scale. The core curves also increase strongly with multiplicity, again more pronounced in the case of pp, due to the fact that high-multiplicity pp events reach high energy densities within a small volume, leading to strong radial flow. Again, the core+corona contribution is understood based on the continuous increase of the core fraction from low to high multiplicity.
It is very useful (and necessary) to consider at the same time the multiplicity dependence of the particle ratios and of the mean p_t, since their behavior is completely different (the former is continuous, the latter jumps). Despite these qualitative differences between the two observables, the physics behind these results is the same, namely saturation, core-corona effects which mix flow (being very strong) and non-flow, and microcanonical hadronization of the core.
Another very important and useful observable is the multiplicity dependence of D meson production, where "D" stands for the sum of D^0, D^+, and D^{*+}. This is much more than just "another particle", since the D meson contains a charm quark, the latter being created exclusively in the parton ladder and not during fragmentation or in the plasma. In Fig. 31, I show the corresponding results. The increase of the reference curves (green, EPOS without hydro) is due to the fact that with increasing multiplicity the saturation scale increases, and the events get harder, producing more easily both high-p_t and charmed particles. Considering EPOS with hydro (red curves), the increase compared with the green curves is much stronger, which is due to the fact that "turning on hydro" reduces the multiplicity (the available energy is partly transformed into flow). The red curves are close to the experimental data, both showing a much stronger increase compared with the reference curve, with the effect getting bigger with increasing p_t. To conclude this paragraph: to get these final results (the strong increase), two phenomena are crucial, namely saturation, which makes high-multiplicity events harder, and the "hydro effect", which reduces multiplicity and "compresses" the multiplicity axis.

V. SUMMARY
After briefly recalling the EPOS4 parallel scattering approach, as well as the core-corona method, I presented new developments concerning the microcanonical decay of plasma droplets, providing very efficient methods to decay droplets of all sizes, allowing the study of the transition towards the grand canonical limit (the method usually employed). I then discussed in detail new methods to construct effective (droplet) masses from the energy-momentum flow through freeze-out hypersurfaces of expanding fluids in pp or AA collisions, and I discussed results of the microcanonical decay of these droplets. Finally, I investigated the multiplicity dependence of multistrange hadron yields in proton-proton and lead-lead collisions at several TeV, which allows the study of the transition from very big to very small systems, in particular concerning collective effects. Here, the "full model" was employed: our core-corona approach, using the new microcanonical hadronization procedures, as well as the new methods allowing one to transform energy-momentum flow through freeze-out surfaces into invariant-mass elements. I tried to disentangle effects due to "canonical suppression" and "core-corona effects", which both lead to a reduction of the yields at low multiplicity.

Evaluating this expression in the center-of-mass system and integrating out the delta functions, one gets the two-body phase-space element in terms of the solid angle of the decay, where p is the momentum of the two-body decay in the rest frame, given (from integrating the mass-shell delta function) as

p = √{[M² − (m_1 + m_2)²][M² − (m_1 − m_2)²]} / (2M).

The n-body phase-space element can then be written recursively in terms of two-body phase-space elements and intermediate invariant masses M_2 ≤ M_3 ≤ ... ≤ M_n = M, which amounts to successive two-body decays (F. James, 1968), see Fig. 32. As a consequence, the integration limits are

m_1 + ... + m_i ≤ M_i ≤ M_{i+1} − m_{i+1}.

Defining variables x_i via

M_i = (m_1 + ... + m_i) + x_i (M − (m_1 + ... + m_n)),

these limits translate into the ordering 0 ≤ x_2 ≤ x_3 ≤ ... ≤ x_{n−1} ≤ 1, which can be generated from independent variables r_i ∈ [0, 1] via z_i = x_i / x_{i+1} and (z_i)^i = r_i. The successive two-body decays are done in the center-of-mass of the decaying object. In the Monte Carlo procedure, the particles therefore have to be boosted into the frame of the object they are originating from, in an iterative fashion, starting with the last decay.
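The successive two-body decay scheme can be sketched as follows. This is a simplified, unweighted illustration under stated assumptions: the phase-space weight (the product of the two-body momenta) is not computed, the ordered intermediate masses are generated by sorting independent uniforms (equivalent in distribution to the (z_i)^i = r_i power trick), and all function names are mine.

```python
import math, random

def two_body_p(M, m1, m2):
    # decay momentum in the rest frame of a mass-M object decaying into m1 + m2:
    # p = sqrt([M^2 - (m1+m2)^2][M^2 - (m1-m2)^2]) / (2M)
    return math.sqrt((M*M - (m1+m2)**2) * (M*M - (m1-m2)**2)) / (2.0*M)

def boost(p4, beta):
    # boost four-vector (E, px, py, pz) with frame velocity vector beta
    b2 = sum(b*b for b in beta)
    if b2 == 0.0:
        return list(p4)
    gamma = 1.0 / math.sqrt(1.0 - b2)
    bp = sum(b*p for b, p in zip(beta, p4[1:]))
    coef = (gamma - 1.0) * bp / b2 + gamma * p4[0]
    return [gamma * (p4[0] + bp)] + [p4[i+1] + coef * beta[i] for i in range(3)]

def decay(M, masses, rng=random):
    # successive two-body decays: M -> {M_{n-1}, m_n}, M_{n-1} -> {M_{n-2}, m_{n-1}}, ...
    n = len(masses)
    mu = [sum(masses[:i+1]) for i in range(n)]       # cumulative mass sums
    T = M - mu[-1]                                   # total kinetic energy
    xs = sorted(rng.random() for _ in range(n - 2))  # ordered uniforms x_2 <= ... <= x_{n-1}
    Mi = [masses[0]] + [mu[i] + x*T for i, x in zip(range(1, n-1), xs)] + [M]
    out, cur = [], [M, 0.0, 0.0, 0.0]                # start with the full system at rest
    for i in range(n - 1, 0, -1):
        p = two_body_p(Mi[i], Mi[i-1], masses[i])
        ct = 2.0*rng.random() - 1.0                  # isotropic decay direction
        st, phi = math.sqrt(1.0 - ct*ct), 2.0*math.pi*rng.random()
        pv = [p*st*math.cos(phi), p*st*math.sin(phi), p*ct]
        beta = [c/cur[0] for c in cur[1:]]           # velocity of the decaying object
        out.append(boost([math.sqrt(p*p + masses[i]**2)] + pv, beta))
        cur = boost([math.sqrt(p*p + Mi[i-1]**2)] + [-c for c in pv], beta)
    out.append(cur)                                  # the remaining object is particle 0
    return out
```

Each step decays the current object in its own rest frame and boosts both products back, starting from the heaviest intermediate mass, so total energy-momentum is conserved exactly by construction.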
By comparing with Eqs. (11)-(13) and (24)-(26), keeping in mind that there one only writes the nontrivial variables r_i, one sees that the variables u and w are simply uniform random variables in [0, 1]. So to have complete expressions, one needs to add terms like ∏ f(u_j) ∏ f(w_j), with f being the function defined as f = 1 in [0, 1] and zero elsewhere.
There is one technical point that needs to be discussed. Rather than considering configurations (like the two-hadron configuration "one pion and one kaon"), one considers ordered sequences of hadrons (like "first hadron = pion, second hadron = kaon"). In addition, one allows in the sequence {h_1, ..., h_L} also "holes", h_i = 0, and one uses a fixed length L. The number of hadrons is then the number of nonzero places in the sequence. Consequently, one adds a factor to our probability distribution W.
To simplify the notation, in the following I will write Ω_t for the law of K_t. Let A and B be possible configurations. One defines an operator T as TΩ_t(B) = ∑_A Ω_t(A) p(A → B), with p(A → B) being called the transition probability (or matrix). Normalization: ∑_B p(A → B) = 1. Considering homogeneous Markov chains, the law for K_{t+1} is by definition given as TΩ_t.
A law is called stationary if TΩ = Ω. According to the fundamental theorem of Markov chains, if a stationary law TΩ = Ω exists, then T^k Ω_0 converges in a unique fashion towards Ω, for any Ω_0. A sufficient condition for TΩ = Ω is detailed balance, defined as

Ω(A) p(A → B) = Ω(B) p(B → A), (C7)

together with ergodicity, which means that for any A, B there must exist some r such that the probability to get from A to B in r steps is nonzero. Henceforth, one uses the abbreviations Ω_A := Ω(A) and p_AB := p(A → B).
Following Metropolis-Hastings [40-42], one makes the ansatz p_AB = w_AB u_AB, (C8) with a so-called proposal matrix w and an acceptance matrix u. Detailed balance now reads

Ω_A w_AB u_AB = Ω_B w_BA u_BA, (C9)

which is obviously fulfilled for

u_AB = F(Ω_B w_BA / (Ω_A w_AB)), (C10)

with some function F fulfilling F(z) / F(z^{−1}) = z. One takes F(z) = min(z, 1). (C11) The power of the method is due to the fact that an arbitrary w may be chosen, in connection with u being given by Eq. (C10). So the task is twofold: one needs an efficient algorithm to compute, for a given K, the weight Ω(K), and one needs to find an appropriate proposal matrix w which leads to fast convergence (small I_eq), which is not trivial. The first task can be solved, as shown in section IV B.
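The Metropolis-Hastings logic, with acceptance u_AB = min(1, Ω_B w_BA / (Ω_A w_AB)), can be illustrated on a toy discrete target with an asymmetric proposal. All weights below are made-up illustration values, not the hadronic Ω; the proposal mimics the structure of the species weights e(h) used later, drawing the candidate independently of the current state.

```python
import random

def metropolis_hastings(omega, propose, x0, nsteps, rng):
    # generic Metropolis-Hastings chain over discrete states;
    # propose(x, rng) returns (y, w_xy, w_yx): candidate and proposal weights
    x, counts = x0, {}
    for _ in range(nsteps):
        y, w_xy, w_yx = propose(x, rng)
        z = (omega(y) * w_yx) / (omega(x) * w_xy)
        if rng.random() < min(z, 1.0):   # acceptance F(z) = min(z, 1), cf. Eq. (C11)
            x = y
        counts[x] = counts.get(x, 0) + 1  # count the state even on rejection
    return counts

W = [1.0, 2.0, 3.0, 4.0]   # unnormalized target weights (toy stand-in for Omega)
E = [0.4, 0.3, 0.2, 0.1]   # proposal weights, analogous to the species weights e(h)

def propose(x, rng):
    # propose y independently of x, with probability E[y];
    # then w_xy = E[y] and w_yx = E[x]
    u, c = rng.random(), 0.0
    for y, ey in enumerate(E):
        c += ey
        if u < c:
            return y, ey, E[x]
    return len(E) - 1, E[-1], E[x]

rng = random.Random(1)
N = 200000
counts = metropolis_hastings(lambda s: W[s], propose, 0, N, rng)
freqs = [counts.get(s, 0) / N for s in range(4)]
```

Despite the proposal favoring low-index states, the Hastings correction w_yx/w_xy makes the empirical frequencies converge to the target W/∑W = (0.1, 0.2, 0.3, 0.4).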
In the following, I discuss how to construct an appropriate proposal matrix w_AB. Let n_specs be the number of hadron species considered (the latter being the list of hadrons according to the particle data group PDG, without charm, bottom, top). The index n_specs + 1 is used for the "hole" (missing particle), which is formally considered a particle species. One defines weights for the hadron species h as

e(h) = f_h / (2 ∑_{h'} f_{h'}) for hadrons h, and e = 1/2 for the hole, (C12)

with f_h being the grand canonical yields, see Eq. (6). One defines the proposal matrix w_AB in terms of an algorithm which constructs B starting from some configuration A.
As discussed above, a configuration has the structure {{h_1, ..., h_n}, {q_1, ..., q_k}}, where h_i refers to the particle species of hadron i, and the q_i are independent variables (defined in [0, 1]) which define the momenta of the hadrons (the q_i are not associated to individual hadrons).
Starting from A = {{h_1, ..., h_n}, {q_1, ..., q_k}}:
A1: choose randomly two integer numbers i and j ≠ i between 1 and n, and replace h_i and h_j in A by h'_i and h'_j, the latter having been generated with weights e(h'_i) and e(h'_j). Let me name the new configuration A'.
A2: choose randomly two more integer numbers k and l ≠ k between 1 and n, different from i and j.
A3: establish a list of pairs of particle species h'_a and h''_a, a = 1, 2, 3, ..., N, considering those which conserve flavor if h_i, h_j, h_k, and h_l are replaced by h'_i, h'_j, h'_a, and h''_a. Associate a weight c_a = e(h'_a) e(h''_a) to pair a. Choose a pair index a with weight c_a / ∑_{a=1}^{N} c_a.
A4: replace h_k and h_l in A' by h'_a and h''_a, with the a chosen in the previous step, which gives the new configuration B.

A5: In case of a change "hadron to hole" or vice versa, replace one of the q_i by q'_i, chosen randomly in [0, 1] (note that q_i is not associated directly to hadron i).
The asymmetry w_AB/w_BA is given as

w_AB / w_BA = [e(h'_i) e(h'_j) e(h'_a) e(h''_a)] / [e(h_i) e(h_j) e(h_k) e(h_l)]. (C13)

Finally, one computes Ω_B, and with Ω_A already known (computed in the step before), this allows one to compute the acceptance probability u_AB = min(1, Ω_B w_BA / (Ω_A w_AB)). The proposal will be accepted with this probability; otherwise one continues with configuration A.
With this choice (algorithm A1-A5) of a proposal matrix, fast convergence can be achieved. Concerning the "list of pairs that conserve flavor" in step A3, one predefines, for all possible flavor numbers N_u, N_d, N_s (each one between −6 and 6), tables idpairs_i(N_u, N_d, N_s; K) and wgtpairs(N_u, N_d, N_s; K) with the hadron ids (i = 1 and i = 2) and the weights of the K-th pair, with K = 1, 2, ..., which allows a very fast generation of pairs (even with a "complete" set of hadron species, numbering close to 400).
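The idea of the precomputed idpairs/wgtpairs tables can be sketched as follows. The five-hadron species list and its weights are a made-up illustration (not the PDG table or EPOS4 values), and the table is keyed directly by the net flavor tuple rather than by array indices:

```python
from collections import defaultdict

# hypothetical mini species table: (name, net flavor (Nu, Nd, Ns), weight e)
SPECIES = [
    ("pi+",  ( 1, -1,  0), 0.15),
    ("pi-",  (-1,  1,  0), 0.15),
    ("pi0",  ( 0,  0,  0), 0.12),
    ("K+",   ( 1,  0, -1), 0.04),
    ("K-",   (-1,  0,  1), 0.04),
    ("hole", ( 0,  0,  0), 0.50),   # the "hole" is treated as a formal species
]

def build_pair_tables(species):
    # map each net flavor (Nu, Nd, Ns) to the list of ordered pairs (a, b)
    # carrying that flavor, each with weight e(a)*e(b)  (cf. idpairs/wgtpairs)
    tables = defaultdict(list)
    for a, (_, fa, ea) in enumerate(species):
        for b, (_, fb, eb) in enumerate(species):
            net = tuple(u + v for u, v in zip(fa, fb))
            tables[net].append(((a, b), ea * eb))
    return tables

tables = build_pair_tables(SPECIES)
# e.g. all pairs with zero net flavor, as needed when the removed pair
# (h_k, h_l) carried zero net flavor:
neutral = tables[(0, 0, 0)]
```

In step A3 one then simply looks up the list for the required net flavor and draws a pair with probability proportional to its stored weight, instead of scanning all species combinations at every Markov step.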

Appendix D: Hypersurfaces and Milne coordinates
One considers a hadronization hypersurface parametrized in Minkowski space as x^μ = x^μ(τ, ϕ, η), with

x^0 = τ cosh η, x^1 = r cos ϕ, x^2 = r sin ϕ, x^3 = τ sinh η, (D1)

where r = r(τ, ϕ, η) is some function of the three parameters τ, ϕ, η. One allows for several sheets, in the sense that for given τ, ϕ, η, there are several values of r satisfying the freeze-out condition. For each sheet, for given values of the three parameters τ, ϕ, η, the hypersurface element is

dΣ_μ = ε_{μαβγ} (∂x^α/∂τ) (∂x^β/∂ϕ) (∂x^γ/∂η) dτ dϕ dη,

with ε_{μαβγ} being the completely antisymmetric Levi-Civita symbol. The inverse transformation (Minkowski to Milne) reads

τ = √[(x^0)² − (x^3)²], η = (1/2) ln[(x^0 + x^3)/(x^0 − x^3)], r = √[(x^1)² + (x^2)²], ϕ = arctan(x^2/x^1).

For covariant vector components, one has correspondingly v_a = (∂x^μ/∂a) v_μ for a ∈ {τ, ϕ, η}.
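The coordinate part of these transformations can be sketched as follows, a minimal illustration of the Milne ↔ Minkowski mapping for the (t, z) sector (function names are mine):

```python
import math

def milne_to_minkowski(tau, eta):
    # t = tau cosh(eta), z = tau sinh(eta), cf. Eq. (D1)
    return tau * math.cosh(eta), tau * math.sinh(eta)

def minkowski_to_milne(t, z):
    # inverse mapping, valid inside the forward light cone (t > |z|):
    # tau = sqrt(t^2 - z^2), eta = 0.5 ln[(t+z)/(t-z)]
    return math.sqrt(t * t - z * z), 0.5 * math.log((t + z) / (t - z))

# round trip: Milne -> Minkowski -> Milne
t, z = milne_to_minkowski(3.0, 0.7)
tau, eta = minkowski_to_milne(t, z)
```

The proper time τ is invariant under longitudinal boosts, which merely shift η; this is why the hypersurface parametrization above is convenient for longitudinally expanding fluids.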

Figure 1. Double scattering, each box representing a cut Pomeron G (single inelastic scattering).

Figure 4. Rigorous parallel scattering scenario, for n = 3 parallel scatterings, for a collision of a nucleus A with a nucleus B, including non-linear effects via saturation scales.

Figure 5. Multiple scattering configuration of two colliding nuclei A and B at LHC energies, each nucleus being composed of two nucleons, with three scatterings (from three cut Pomerons). Dark blue lines mark active quarks, red dashed lines active antiquarks, and light blue thick lines projectile and target remnants. One of the target nucleons is just a spectator.

Figure 8. The prehadron yield as a function of space-time rapidity, for different Pomeron numbers in proton-proton collisions at 7 TeV. The curves refer to all prehadrons (red full), all core prehadrons (red dotted), prehadrons from remnant decay (blue full), and core prehadrons from remnant decay (blue dotted).

Figure 9. The prehadron yield as a function of space-time rapidity, for different centralities in PbPb collisions at 2.76 TeV. The curves refer to all prehadrons (red full), all core prehadrons (red dotted), prehadrons from remnant decay (blue full), and core prehadrons from remnant decay (blue dotted).

Figure 10. Energy density at the initial proper time τ_0 as a function of the transverse coordinate r. The full red lines correspond to an azimuthal angle φ = 0, and the dotted red lines to φ = π/2. The blue dashed lines represent the freeze-out energy density. I show results for different event classes (defined via Pomeron numbers) in pp collisions at 7 TeV (upper plot) and for different centralities (defined via impact parameter) in PbPb collisions at 2.76 TeV (lower plot).

Figure 11. The distribution of X = E/⟨E⟩, i.e., the ratio of final energy over initial energy, for a temperature of 130 MeV and volumes V = 50 fm³ (red curve) and V = 1000 fm³ (blue dashed curve).

Figure 14. (a) FO hypersurface element. (b) Cooper-Frye hadronization as particle flow through the FO hypersurface element. (c) The flow of energy-momentum dP^μ through the surface element dΣ_μ.

Figure 18. Particle yields in p_t bins of width 0.2 GeV/c, computed based on "microcanonical decay + boost" (dotted lines) and based on "grand canonical decay + boost" (full lines), for the same test fluid, see text.

Figure 19. Energy density in the transverse plane (x, y) for proton-proton scattering involving (from top to bottom) 2, 6, and 12 Pomerons. The left column represents the start time τ_0 (of the hydro evolution), and the right column a later time τ_1, close to final freeze-out.

Figure 20. Mass distribution ΔM/Δη as a function of the space-time rapidity η, for an event with 2 Pomerons, an event with 6 Pomerons, and an event with 12 Pomerons.

Figure 27. The X / (core+corona) ratio, with X being the corona contribution (blue), the core (green), and the full contribution (red), for four event classes and four different particle species, for pp at 7 TeV.

Figure 28. The X / (core+corona) ratio, with X being the corona contribution (blue), the core (green), and the full contribution (red), for four centrality classes and four different particle species, for PbPb at 2.76 TeV.

Figure 29. Ratio to π for (from top to bottom) K, p, Λ, Ξ, Ω for pp at 7 TeV and PbPb at 2.76 TeV as a function of the multiplicity dn_ch/dη (η = 0). I show the different contributions: core, corona, core+corona, and full (see text), compared with ALICE data.