Mapping properties of the quark gluon plasma in Pb-Pb and Xe-Xe collisions at energies available at the CERN Large Hadron Collider

A phenomenological analysis of the experimental measurements of transverse momentum spectra of identified charged hadrons and strange hyperons in Pb-Pb and Xe-Xe collisions at the LHC is presented. The analysis is based on the relativistic fluid dynamics description implemented in the numerically efficient FluiduM approach. Building on our previous work, we separate in our treatment the chemical and kinetic freeze-out, and incorporate the partial chemical equilibrium to describe the late stages of the collision evolution. This analysis makes use of Bayesian inference to determine key parameters of the QGP evolution and its properties including the shear and bulk viscosity to entropy ratios, the initialisation time, the initial entropy density, and the freeze-out temperatures. The physics parameters and their posterior probabilities are extracted using a global search in multidimensional space with modern machine learning tools, such as ensembles of neural networks. We employ our newly developed fast framework to assess systematic uncertainties in the extracted model parameters by systematically varying key components of our analysis.

Heavy-ion collisions at ultra-relativistic energies have been at the forefront of modern physics research for several decades. These collisions, studied at facilities such as the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC), create an extreme state of matter known as the quark-gluon plasma (QGP) [1][2][3][4]. This fluid is of great interest because it is described by a renormalisable and fundamental quantum field theory at the microscopic level, i.e. quantum chromodynamics (QCD). While the macroscopic fluid properties remain challenging to calculate from first principles and a limited number of computations exist to date, an increasing number of experimental results motivate phenomenological and theoretical studies. One of the key features of the QGP is its collective behaviour, which is thought to arise from the fluid-like properties of the system. However, recent experimental observations of collective behaviour in proton-nucleus and proton-proton collision systems [5][6][7][8] have challenged the uniqueness of the fluid-like response of the QGP. The resolution of the origins of collective behaviour in heavy-ion collisions will likely rely on a quantitative rather than just qualitative agreement between data and models.
Traditionally, physical properties of QCD matter have been determined by comparing experimental data with model calculations of event-averaged and predefined observables. However, recent developments have shown promise in extracting more information from the final-state particles produced in heavy-ion collisions. Two such methods are Bayesian analysis [9][10][11][12][13][14][15][16][17][18][19][20] and deep learning [21,22]. Bayesian analysis uses global fitting to simultaneously determine multiple model parameters, utilising all available experimental data. Deep learning, on the other hand, identifies observables that are sensitive to specific physical properties, enabling the extraction of the relevant information. This work improves upon our previous analysis of Ref. [23] by using Bayesian inference as the optimality criterion. The previous approach of minimising the χ²-value had several limitations, including the neglect of correlations in the experimental data, of interpolation uncertainties, and of parameter correlations. In addition, we improve our theoretical description with respect to Ref. [23] by separating the chemical and kinetic freeze-out and incorporating the partial chemical equilibrium to describe the later stages of the evolution, as well as by exploiting a parametrisation from Yang-Mills theory for the shear viscosity to entropy ratio of the QGP [24,25] instead of using a constant value.
Our model for simulating high-energy nuclear collisions combines three distinct components. The TrENTo model [26] is utilised for the initial conditions, while the FluiduM package [27], featuring a mode-splitting technique for very fast computations, is used for the relativistic fluid dynamic expansion with viscosity. Additionally, the FastReso code [28] is used to take resonance decays into account. Although FluiduM is significantly faster than event-by-event hydrodynamic codes, it is still beneficial to develop an emulator that can function as a quick substitute for the full model. Where previous Bayesian analyses [9][10][11][12][13][14][15][16][17][18] utilised Gaussian process regression, we for the first time exploit neural network emulation. It has been demonstrated in Ref. [29] that infinitely wide neural networks can approximate Gaussian process regression. Furthermore, the use of neural networks has several computational benefits, including significantly reduced training time, lower memory usage, and the ability to handle any number of inputs and outputs. This newly developed framework will find additional applications in the precise fitting of charm and beauty observables, helping to further constrain the heavy-quark spatial diffusion coefficient and widening our knowledge of the QGP transport properties [30,31].
In the present work, we determine the specific shear and bulk viscosity to entropy ratios of the QGP, as well as the freeze-out temperatures T_kin and T_chem, the starting time of a fluid description τ_0, and the normalisation of the initial entropy profile. We employ our newly developed fast framework to comprehensively explore systematic uncertainties in the extracted model parameters by systematically varying critical components of our analysis. We compare against experimental measurements of transverse momentum spectra of identified charged hadrons (π, K, p) and strange hyperons (Λ) with p_T < 2 GeV/c in Pb-Pb collisions at √s_NN = 2.76 TeV and √s_NN = 5.02 TeV, and Xe-Xe collisions at √s_NN = 5.44 TeV from the ALICE Collaboration [32][33][34][35]. Given that this paper primarily serves as a proof of concept for our new Bayesian inference framework, our study is restricted to the 0-5% centrality range. We specifically chose a very central bin because we expect the background-fluctuation splitting ansatz that underlies FluiduM to work best for central collisions, where the profiles tend to be the most symmetric. Furthermore, the restriction to only one centrality class was imposed such that we are able to keep the initial-state parameters, which play an important role in the centrality dependence of the observables, fixed. In a forthcoming publication, we will explore the use of anisotropic flow observables, as well as experimental data from different centrality classes, to gain further insight into the key parameters of the QGP. Nevertheless, it is important to underscore that through the analysis of transverse momentum spectra exclusively, we can derive significant constraints on essential aspects such as the bulk viscosity, the freeze-out temperatures, τ_0, and the normalisation. This work highlights the potential of such analyses to provide valuable insights into the properties of the QGP. This paper is organised as follows. We summarise the details of the
initial conditions, the hydrodynamic evolution, and the hadronisation procedures in Sec. II. The procedure of the Bayesian analysis is discussed in Sec. III. We then discuss the extraction of the model parameters in Sec. IV, and end with a summary and future directions in Sec. V.

II. MODELLING OF THE DIFFERENT STAGES OF A HEAVY-ION COLLISION
The following section provides a brief overview of the various components of our theoretical model. We first examine the initial conditions, which are determined using the TrENTo model [26]. Subsequently, we turn to the time evolution as implemented in FluiduM [27], which solves the equations of relativistic fluid dynamics with shear and bulk viscosity and the corresponding relaxation times. The newly introduced partial chemical equilibrium is discussed next, together with the kinetic freeze-out and the implementation of strong resonance decays performed with FastReso [28].

A. Initial conditions: TrENTo
As in our previous work [23], we use the TrENTo model parametrisation [26] for the initial conditions. This is an effective model, intended to generate realistic Monte Carlo initial transverse entropy (or energy) profiles without assuming specific physical mechanisms. It involves positioning nucleons with a Gaussian width w using a fluctuating Glauber model, while ensuring a minimum distance d between them. Each nucleon contains m randomly placed constituents with a Gaussian width v. TrENTo uses an entropy deposition parameter p that interpolates among qualitatively different physical mechanisms for entropy production [26]. Furthermore, additional multiplicity fluctuations are introduced by multiplying the density of each nucleon by random weights sampled from a gamma distribution with unit mean and shape parameter k.
For this study, the TrENTo parameters are not estimated via the Bayesian analysis. Instead, they are set based on the current state of knowledge in the literature. As reviewed extensively in Ref. [36], we set w = 0.5 fm, m = 4, v = 0.4 fm, p = 0, and use the TrENTo output as entropy density. In addition, we set k = 1 and d = 0.75 fm, based on the outcome of the Bayesian analysis of Ref. [13]. For the nucleon-nucleon cross section, the measurements by the ALICE Collaboration [37] are used, i.e. x = 61.8, 67.6, and 68.4 mb for Pb-Pb at √s_NN = 2.76 TeV, Pb-Pb at √s_NN = 5.02 TeV, and Xe-Xe at √s_NN = 5.44 TeV, respectively. The Pb ion is sampled from a spherically symmetric Woods-Saxon distribution with radius R = 6.65 fm and surface thickness a = 0.54 fm, while the Xe ion is sampled from a deformed spheroidal Woods-Saxon distribution with R = 5.60 fm, a = 0.49 fm, and deformation parameters β_2 = 0.21 and β_4 = 0.0 [38]. Using this set of parameters (which we call the "central" configuration in the remainder), we have generated the transverse density T_R(x, y) for 1.5 × 10⁶ minimum-bias events with the impact parameter sampled from the range b ∈ [0 fm, 20 fm].
To classify the TrENTo events into centrality classes, we utilise the integrated transverse density ∫d²x T_R(x⃗), which is expected to have a linear monotonic relationship with multiplicity [39]. This allows us to divide the events into narrow multiplicity classes of one percent, each of which can be treated as an ensemble of events with random orientation in the transverse plane, as in experiment. For each centrality class, we determine the averaged or expected entropy density profile as

s(τ_0, r) = (Norm/τ_0) ⟨T_R(r)⟩,

where the average ⟨···⟩ is taken over all the events in the class with a random reaction plane angle. Consequently, the average is independent of the azimuthal angle ϕ by construction. We have introduced a centrality class-dependent normalisation constant Norm to account for possible variation of the fits from the TrENTo multiplicity scaling, and we account for the longitudinal expansion effect (Bjorken flow) at early times by scaling Norm with the initialisation time τ_0. To enlarge the statistics of the average entropy profile ⟨T_R(r)⟩ (which, as discussed in the next section, can be taken independent of ϕ for this work), we perform the event average over all events in one centrality class. As in Ref.
[23], we produce the averaged entropy densities for the larger experimental centrality classes by averaging the corresponding distributions from the narrower classes and propagating those. Note that a free-streaming phase, which would evolve the energy density profile from τ = 0 for a short time scale to a finite τ_0, is currently not included in our framework. To mimic the effect of a free-streaming phase, which can be seen as a suppression/dilution of the "spikiness" of the initial entropy density profiles, we also test another set of TrENTo parameters (called the "free-streaming" configuration later on) in which the widths of the nucleons and constituents are increased (i.e. w = 1.0 fm, v = 0.9 fm, and d = 1.25 fm). A larger nucleon width in TrENTo is observed to suppress the spikiness of the initial entropy density. Systematic variations of the other TrENTo parameters are not considered.
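The centrality classification described above (ordering minimum-bias events by their integrated transverse density and slicing them into one-percent classes) can be sketched in a few lines. This is an illustrative stand-alone sketch, not the analysis code; the function name and the event representation as dictionaries are our own choices.

```python
import random

def centrality_classes(events, width=0.01):
    """Sort minimum-bias events by their integrated transverse density
    (a proxy for multiplicity) and split them into classes of fractional
    width `width` (one percent here), most central class first."""
    ordered = sorted(events, key=lambda ev: ev["integrated_density"],
                     reverse=True)
    n_per_class = max(1, int(len(ordered) * width))
    return [ordered[i:i + n_per_class]
            for i in range(0, len(ordered), n_per_class)]
```

Each returned class can then be treated as an ensemble of events with random orientation in the transverse plane, over which the average entropy profile is taken.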

B. Hydrodynamic evolution: FluiduM
The software package FluiduM [27], which utilises a theoretical framework based on relativistic fluid dynamics with mode expansion [40][41][42], is used to solve the equations of motion for relativistic fluids. This involves decomposing the fluid fields into background and fluctuation components. Specifically, the fluid fields are represented as Φ(τ, r, ϕ, η) = Φ_0(τ, r) + ϵ Φ_1(τ, r, ϕ, η), where Φ_0 is the background solution, Φ_1 is the perturbation around it, and ϵ is the formal expansion parameter (set to ϵ → 1 at the end). The background and perturbations can be solved accurately and efficiently using numerical algorithms [27]. The background-fluctuation splitting is justified by two statistical symmetries that are approximately realised in high-energy nuclear collisions. Firstly, the collision energy is sufficiently high to lead to an approximate boost invariance of the system at midrapidity. This implies that observables at midrapidity exhibit only a mild dependence on rapidity, and consequently, this dependence can be treated as a perturbation. Secondly, there is a statistical symmetry related to the random orientation of the reaction plane angle. In each collision event, the two ions collide with a particular relative orientation in the reaction plane, which is uncorrelated between events. When calculating observables averaged over many events, the dependence on the azimuthal angle is effectively removed, unless the reaction plane angle is explicitly reconstructed and corrected for in each event. The event average is therefore independent of the azimuthal angle, and any dependence on it can be treated as a perturbation.
For our study, we are interested in examining the azimuthally averaged transverse momentum spectra of identified particles at midrapidity. Therefore, we do not consider azimuthally and rapidity-dependent perturbations and only require the background solution of the fluid evolution equations, neglecting terms of quadratic or higher order in the perturbation amplitudes. The causal equations of motion are obtained from second-order Israel-Stewart hydrodynamics [43].
We introduce a novel aspect here with respect to Ref. [23] by assuming the shear viscosity to entropy ratio η/s to be temperature dependent. In particular, we exploit the calculation in Yang-Mills theory of Ref. [24] with recently updated parameters [25]. The calculation provides an analytic fit formula describing the temperature dependence of η/s as a direct sum of a glueball resonance gas contribution with an algebraic decay at small temperatures and a high-temperature contribution consistent with HTL-resummed perturbation theory. The first term of the fit function in SU(3) Landau gauge Yang-Mills theory has been changed with respect to Ref. [24] for simplicity, since the differences do not play a role for hydrodynamic applications [25]. The best fit to the full Yang-Mills results is given by the parameters a = 0.0613, b = 0.00588, d = −0.709, and δ = 40.3. In the low-temperature regime, the pure glueball resonance gas is not replaced by a hadron resonance gas; instead, the b and δ parameters are adjusted to 0.02 and 6.0 to better capture this regime. Finally, an overall correction factor of 4/3 is used to take the differences in scales and running couplings between Yang-Mills theory and QCD into account [24]. On top of this, we add a global scaling factor (η/s)_scale as a parameter to be estimated in the Bayesian analysis, i.e. (η/s)(T) = (η/s)_scale · (η/s)_YM(T). The Yang-Mills and QCD η/s distributions are presented in the left panel of Fig. 1. We emphasise that our η/s parametrisation is consistently applied throughout both the partonic and hadronic phases. In contrast, other Bayesian analyses [9][10][11][12][13][14][15][16][17][18][19][20] commonly adopt a different approach, switching to the UrQMD [44] or SMASH [45] transport models at a temperature T_switch (typically around 145-155 MeV), where (η/s)(T_switch) ≈ 1 is employed. The bulk viscosity to entropy ratio ζ/s is also considered temperature dependent. We assume it to be of the Lorentzian form (shown in the right panel of Fig.
1),

(ζ/s)(T) = (ζ/s)_max / [1 + ((T − T_peak)/T_width)²].   (5)

For now, the peak temperature T_peak and width T_width are fixed to 175 MeV and 24 MeV, respectively, based on Ref. [9]. The maximum value (ζ/s)_max is a free parameter of the Bayesian analysis. We would like to remark that we are aware of the calculations of Refs. [46,47] (and their parametrisation in Ref. [48]), where the bulk viscosity coefficient shows a maximum near the QCD phase transition temperature T_c and decreases almost exponentially when approaching the hadron gas regime, as in our assumption. On the other hand, the temperature dependence of the bulk viscosity in SU(3)-gluodynamics has also been studied within lattice QCD simulations in Ref. [49], which shows that ζ/s continues to rise below T_c. The same behaviour is observed for a pion gas in Ref. [50], where the bulk viscosity to entropy ratio rapidly increases at low temperatures. This rise might be attributed to the rapid decrease of the entropy density below the transition in SU(3)-gluodynamics, where the (de)confinement phase transition is of first order, while in QCD it is a crossover. A bulk viscosity that keeps increasing as the temperature drops leads to a large bulk viscous pressure on the freeze-out surface and might produce non-physical results during particlisation via the Cooper-Frye procedure. Since no consensus on the shape of ζ/s as a function of T is yet available in the literature, we keep the bulk parametrisation of Eq. 5 already used in our previous work [23]. On the freeze-out surface we take the particle distribution function to be given by the equilibrium Bose-Einstein or Fermi-Dirac distribution (depending on the species), modified by additional corrections due to bulk and shear viscous dissipation. For the viscous corrections we use the commonly employed parametrisations [23,51,52], where m is the mass of the primary resonance. The bulk and shear relaxation times are taken as derived in Ref.
[53], where ϵ is the energy density, p is the pressure, c_s is the (temperature-dependent) speed of sound, and a_offset = 0.1 fm/c is a small offset that ensures a causal evolution of the radial expansion [43]. For the same reason, we adjusted the relation between η and τ_shear for temperatures T < T_chem, incorporating an additional (T − T_chem) · a_slope term with respect to Ref. [53]. This was necessary to ensure that the relaxation time is much larger than the characteristic scale of the hadron resonance gas, for which we expect the scatterings to become sparser. A value of a_slope = 3 MeV⁻¹ was chosen for the central analysis after stability checks on the high-p_T region (p_T > 2 GeV/c) of the transverse momentum spectra.
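The Lorentzian ansatz for ζ/s with the quoted peak and width parameters is simple enough to sketch directly. The function below is an illustrative implementation of that standard Lorentzian form (temperatures in GeV; the function name is ours):

```python
def zeta_over_s(T, zeta_s_max, T_peak=0.175, T_width=0.024):
    """Lorentzian bulk viscosity to entropy ratio:
    zeta/s reaches (zeta/s)_max at T = T_peak and falls off
    symmetrically with characteristic width T_width."""
    return zeta_s_max / (1.0 + ((T - T_peak) / T_width) ** 2)
```

By construction the curve is symmetric about T_peak and drops to half its maximum one width away from the peak, which is the behaviour shown in the right panel of Fig. 1.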
C. Freeze-out and resonance decays: FastReso

As the system cools down and dilutes, it changes from a QGP to a hadronic gas. Because particle scatterings are no longer efficient in maintaining equilibrium, the fluid dynamic description breaks down, necessitating the conversion of the fluid fields to the distribution of hadronic degrees of freedom. The Cooper-Frye procedure is used to convert the fluid fields to the spectrum of hadron species on a freeze-out surface, which in our work is assumed to be a surface of constant temperature [54].
In our previous work [23], a single freeze-out was considered, where both the chemical and kinetic distributions freeze out simultaneously at temperature T_freeze. Here, we improve on this description by introducing a partial chemical equilibrium (PCE) after the chemical freeze-out and before the kinetic freeze-out, i.e. for T_kin < T < T_chem. During the PCE, the mean free time for elastic collisions is still smaller than the characteristic expansion time of the expanding fireball, thereby keeping the gas in a state of local kinetic equilibrium. Chemical equilibrium is not maintained, since the mean free time for inelastic collisions exceeds this threshold.
Our description follows the pioneering work of Refs. [55,56], in which the different particle species in the hadronic gas are treated as being in chemical equilibrium with each other, while the gas as a whole is not. The effective numbers of long-lived hadrons, taken in our case as those with a characteristic lifetime longer than 10 fm/c (the approximate lifetime of the system [57]), are fixed at the values they had at the chemical freeze-out. The term "effective" includes the hadrons that would be produced when all unstable resonances decay, i.e. N̄_i = N_i + Σ_j b_ij N_j. Here, N_i is the number of hadrons of species i, N_j the number of resonances of species j, and b_ij the number of hadrons i formed in the decay of resonance j (including the branching ratio). The corresponding effective chemical potential is given in a similar way. We then assume the isentropic evolution of ideal hydrodynamics (i.e. entropy is conserved), meaning that the ratio of the effective particle number density (n̄_i = N̄_i/V) to the entropy density is constant until the kinetic freeze-out. From the relation

n̄_i(T, μ_i)/s(T) = n̄_i(T_chem, 0)/s(T_chem),

one can then obtain μ_i(T).
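The solution of the PCE condition for μ_i(T) can be illustrated with a deliberately simplified toy model. We stress that the densities below are not the actual hadron resonance gas thermodynamics: we use a non-relativistic Boltzmann number density and a T³ entropy density purely to show how the constant n̄_i/s ratio fixes the chemical potential at each temperature via root finding (all names and numbers here are illustrative; temperatures and masses in GeV).

```python
import math

def n_boltz(T, mu, m=0.938):
    # Toy non-relativistic Boltzmann number density (arbitrary units):
    # n ∝ (m T)^{3/2} exp((mu - m) / T)
    return (m * T) ** 1.5 * math.exp((mu - m) / T)

def s_toy(T):
    # Toy entropy density of the gas, s ∝ T^3 (arbitrary units)
    return T ** 3

def mu_pce(T, T_chem=0.155, m=0.938):
    """Solve n(T, mu)/s(T) = n(T_chem, 0)/s(T_chem) for mu by bisection,
    i.e. enforce the conserved effective-density-to-entropy ratio."""
    target = n_boltz(T_chem, 0.0, m) / s_toy(T_chem)
    lo, hi = -2.0, 2.0  # n is monotonically increasing in mu
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if n_boltz(T, mid, m) / s_toy(T) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

As expected, μ vanishes at T_chem and grows as the system cools below it, compensating the dilution so that the effective particle number per unit entropy stays fixed.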
On the freeze-out surface we take the particle distribution function to be given by the equilibrium Bose-Einstein or Fermi-Dirac distribution (depending on the species), modified by additional corrections due to bulk and shear viscous dissipation and by decays of unstable resonances, as explained in detail in our previous work [23]. For the viscous corrections, we use commonly employed parametrisations [51,52], while the resonance decays are efficiently calculated with the FastReso package [28]. We use a list of approximately 700 resonances from Refs. [58][59][60].

III. PROCEDURE OF THE BAYESIAN ANALYSIS
As outlined in Sec. II, our central framework revolves around certain free parameters: the overall normalisation constant Norm, (η/s)_min and (ζ/s)_max in the shear and bulk viscosity to entropy ratio parametrisations, the initial fluid time τ_0, and the two freeze-out temperatures T_kin and T_chem. While Norm and τ_0 are considered system-dependent parameters, the others are assumed to converge to the same values for different collision systems and energies. The primary objective of our Bayesian analysis is to determine these six model parameters simultaneously, allowing them to vary within predefined intervals (see Tab. I). These intervals are based on physical considerations and on knowledge from previous studies [23,25,32,61,62]. It is worth mentioning that we have confirmed a posteriori that the optimal values fall within these intervals rather than on their boundaries, and in cases where no clear convergence was obtained, larger intervals were employed. In contrast to previous Bayesian analyses in the field of heavy-ion physics, which employed Gaussian process regression [9][10][11][12][13][14][15][16][17][18], we introduce a novel approach by utilising neural network emulation. This paper marks the first application of neural network emulation in this context. The subsequent two subsections provide a concise introduction to our newly developed framework, which combines neural networks and Markov-Chain Monte Carlo simulations. For a more comprehensive understanding of both components, we refer the reader to Ref. [63], where a detailed overview is presented.

A. Neural Network implementation
Although FluiduM is recognised for its very fast execution speed, the extensive parameter exploration involved in Bayesian analyses necessitates an approach to speed up the simulations further. One common strategy is to train machine learning models to emulate the complete model and to utilise these fast emulators within the Markov-Chain Monte Carlo simulations. In our study, we employ an ensemble of artificial neural networks (ANNs) to emulate the TrENTo + FluiduM + FastReso model. A neural network is a supervised learning algorithm inspired by the functioning of the brain. When appropriately constructed and trained, NNs can effectively identify and model complex relationships between inputs and outputs, making them highly promising tools for our research.
In addition to speed, the emulator also needs to exhibit a high level of accuracy. Achieving this requires careful consideration of the neural network architecture and of the training process. The training necessitates large datasets to achieve the accuracy required for replacing the simulation outputs. For each collision system, we use the outputs of ten thousand complete FluiduM + FastReso simulations, with parameters distributed within the ranges presented in Tab. I. The parameter values are generated using Latin hypercube sampling, which ensures a uniform density in an efficient way. In order to enhance the training performance of the neural networks, we apply a normalisation to both the input parameters and the output values, scaling them to the range [−1, 1]. This procedure leads to a significant increase in the convergence rate and speed of the training process. The emulator model incorporates this normalisation, automatically converting the input parameters to the appropriate scale before feeding them through the networks. Similarly, the resulting outputs are scaled back to their original ranges.
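The two preprocessing steps just described, Latin hypercube sampling of the training points and min-max scaling to [−1, 1], can be sketched as follows. This is a minimal dependency-free illustration (the function names are ours; a production analysis would typically use a library sampler such as scipy.stats.qmc.LatinHypercube):

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin hypercube sampling: per dimension, [0, 1) is split into
    n_samples strata, one point is drawn in each stratum, and the
    stratum order is shuffled independently for every dimension."""
    rng = random.Random(seed)
    dims = len(bounds)
    samples = [[0.0] * dims for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        strata = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(strata)
        for i in range(n_samples):
            samples[i][d] = lo + (hi - lo) * strata[i]
    return samples

def to_unit(x, lo, hi):
    """Min-max scale a value to [-1, 1] (applied to NN inputs/outputs)."""
    return 2.0 * (x - lo) / (hi - lo) - 1.0

def from_unit(u, lo, hi):
    """Invert the scaling back to the physical range."""
    return lo + (hi - lo) * (u + 1.0) / 2.0
```

The Latin hypercube guarantees that every one-dimensional projection of the parameter space is covered uniformly, which is what makes it more sample-efficient than plain uniform random draws.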
Simple neural networks provide point predictions without any measure of uncertainty or confidence, which can arise from, e.g., insufficient model complexity or missing information in the training data. Ensemble methods offer an alternative approach in which predictions are not reliant on a single model; instead, predictions from multiple diverse models within an ensemble are combined. The predictions f_i of the ensemble members i ∈ {1, 2, ..., M} are averaged for the model input x, while the spread among the different ensemble members and their mean correlation ρ are used to estimate the error on the ensemble prediction. The correlation among the different neural networks effectively introduces a correction factor c to the standard deviation of the neural network predictions σ_emu(x). This correction factor is determined by fitting a t-distribution to (f_model(x) − f_emu(x))/σ_emu(x) (here f_model represents the original FluiduM + FastReso output), which should ideally follow a standard normal distribution if the prediction uncertainty accurately captures the prediction error. We assume that the correlation is independent of the position in parameter space and of the considered p_T intervals, allowing the fit to be performed once, considering all configurations x and data points.
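The ensemble mean and its spread-based error estimate can be sketched as below. This is an illustrative simplification: it uses the naive standard error of the mean multiplied by a single correction factor c, standing in for the full treatment with the mean correlation ρ described above (the function name is ours).

```python
import math

def ensemble_predict(preds, c=1.0):
    """Combine M >= 2 member predictions f_i(x) for one output.

    Returns the ensemble mean and an error estimate built from the
    member spread, rescaled by the correction factor c that accounts
    for the residual correlation among the ensemble members."""
    M = len(preds)
    mean = sum(preds) / M
    var = sum((f - mean) ** 2 for f in preds) / (M - 1)  # sample variance
    sigma = c * math.sqrt(var / M)  # corrected error on the ensemble mean
    return mean, sigma
```

With c fitted once from the normalised residuals against the full FluiduM + FastReso output, the same factor is applied to every prediction and every p_T interval.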
For each collision system, we train 100 neural networks on the input data, splitting the data sample into 80% for training and 20% for testing and validation. The PyTorch Python library [64] is employed to construct the neural networks, while their hyperparameters are optimised using a grid search facilitated by the Tune Python library [65]. The nature of our problem favours shallow neural networks (1-3 hidden layers with O(100) nodes), a learning rate around 0.01, and a leaky rectified linear unit activation function. The neural network ensemble effectively captures the true output of the model simulations, as demonstrated in Fig. 2. The figure presents the correlation between the (1/2πp_T)(1/N_ev) d²N/(dy dp_T) values within experimental p_T intervals for pions, kaons, and protons, as estimated by the emulator model and as originally simulated, for ten different model parameter configurations x. It is evident that the emulator accurately reproduces the FluiduM + FastReso output, and the emulator uncertainties, which have been appropriately corrected for the leftover correlation among the individual neural networks (as discussed above), effectively capture the residual spread. Furthermore, our ensemble of 100 NNs exhibits a prediction error, measured in units of the experimental data uncertainty, of (f_emu(x) − f_model(x))/σ_exp = 2.5 × 10⁻³. This level of accuracy is sufficient for the purposes of this study. Additionally, our emulator reduces the computation time by a factor of 10⁴.
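The network layout favoured above, a shallow fully connected net with O(100)-node hidden layers and leaky-ReLU activations, can be illustrated with a dependency-free forward pass. The actual emulator is built with PyTorch; the class below is only a structural sketch (untrained random weights, names are ours):

```python
import math
import random

def leaky_relu(x, slope=0.01):
    """Leaky rectified linear unit: identity for x > 0, small slope below."""
    return x if x > 0.0 else slope * x

class ShallowMLP:
    """Tiny fully connected net mirroring the emulator layout,
    e.g. sizes = [6, 100, 100, 10]: six model parameters in,
    spectra values in the experimental p_T intervals out."""

    def __init__(self, sizes, seed=0):
        rng = random.Random(seed)
        self.layers = []
        for n_in, n_out in zip(sizes[:-1], sizes[1:]):
            W = [[rng.gauss(0.0, 1.0 / math.sqrt(n_in)) for _ in range(n_in)]
                 for _ in range(n_out)]
            b = [0.0] * n_out
            self.layers.append((W, b))

    def forward(self, x):
        for i, (W, b) in enumerate(self.layers):
            x = [sum(w * xi for w, xi in zip(row, x)) + bi
                 for row, bi in zip(W, b)]
            if i < len(self.layers) - 1:  # linear output layer
                x = [leaky_relu(v) for v in x]
        return x
```

In the real analysis, 100 such networks with independently initialised weights are trained on the normalised inputs and outputs and combined into the ensemble described above.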

B. Markov-Chain Monte Carlo
To obtain the posterior probability densities of the N model parameters, we employ Bayesian inference. The posterior density is inferred from a probabilistic model, and (N−1)-dimensional integrals are performed to obtain the marginal probability density of each parameter. Due to the complexity of our case, an analytic treatment is not feasible. Therefore, we employ the numerical Markov-Chain Monte Carlo (MCMC) method, an efficient approach for exploring the probability space. In this method, samples are drawn randomly but not independently by constructing a so-called Markov chain: each element of the chain is sampled in dependence on its preceding element, and only its preceding element.
For our MCMC sampling, we utilise the emcee Python library [66], which implements the affine-invariant ensemble sampler with the so-called "stretch move". The logarithmic probability is quantified as the log of the posterior probability,

ln P(x | f_exp) = −(1/2) (f_exp − f_emu(x))ᵀ Σ⁻¹ (f_exp − f_emu(x)) − (1/2) ln[(2π)^k det Σ] + ln p(x).

Here, x_i represents the i-th model input parameter, x_i^min and x_i^max denote its limits (as defined in Tab. I), f_exp represents the vector of all the experimental data, f_emu(x) is the corresponding vector of the emulator model for input x, and Σ represents the covariance matrix, which consists of the data and model covariance matrices (Σ = Σ_exp + Σ_emu). Since no information about the correlations between the experimental uncertainties exists, Σ_exp is diagonal, with its elements given by the quadratic sum of the statistical and systematic uncertainties. The model covariance matrix is computed from the ensemble output as

Σ_emu,jk(x) = c²/(M − 1) Σ_i [f_i,j(x) − f̄_j(x)][f_i,k(x) − f̄_k(x)],

where the indices j and k denote specific output values of the vectors; all entries are scaled with the correction factor c, as discussed in Sec. III A, to account for the correlation among the different neural networks in the ensemble. In our MCMC sampling, the prior probability distribution p(x) is uniform within the parameter limits and zero outside. To ensure that the Markov chain has converged sufficiently, we require the sampling error of the MCMC method to be smaller than 1%. The sampling error decreases as √(τ_f/N_samples), where τ_f is the integrated autocorrelation time [63]. This implies that the product of the number of walkers (chosen to be 300 in our case) and the length of the chains must be greater than 10000 τ_f. To prevent a too early stopping, we also require that the change of τ_f (calculated every 100 MCMC steps) is smaller than 1%.
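The log probability handed to the sampler can be sketched as below. For readability this sketch assumes a diagonal total covariance (var_tot holding the summed experimental and emulator variances per data point), whereas the full analysis inverts the dense matrix Σ = Σ_exp + Σ_emu; the function signature is ours.

```python
import math

def log_posterior(x, f_exp, f_emu, var_tot, bounds):
    """Log posterior for a uniform prior and a Gaussian likelihood.

    x       : model parameter vector
    f_exp   : vector of experimental data points
    f_emu   : callable returning the emulator prediction vector for x
    var_tot : per-point total variance, sigma_exp^2 + sigma_emu^2
    bounds  : list of (x_min, x_max) prior intervals per parameter
    """
    # Uniform prior: zero probability (log = -inf) outside the intervals
    if any(not (lo <= xi <= hi) for xi, (lo, hi) in zip(x, bounds)):
        return -math.inf
    logp = 0.0
    for fe, fm, v in zip(f_exp, f_emu(x), var_tot):
        logp += -0.5 * (fe - fm) ** 2 / v - 0.5 * math.log(2.0 * math.pi * v)
    return logp
```

A function of exactly this shape is what emcee's EnsembleSampler expects as its log-probability argument; the walkers then explore the parameter space by repeated stretch moves.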
In order to provide the reader with an initial demonstration of the power of our emulator-MCMC approach, Fig. 3 presents a comparison between the simulated and emulated model predictions and the corresponding experimental data for Pb-Pb collisions at √s_NN = 2.76 TeV [32]. The top row displays the output of the ten thousand FluiduM + FastReso simulations used for training the NNs, and the bottom row presents the predictions generated by the neural-network ensemble emulator using n = 100 random parameter configurations sampled from the Bayesian posterior distribution obtained through the MCMC chain. While the training points exhibit a significant spread, which is expected since the parameters are varied over wide ranges (see Tab. I), the emulator predictions sampled from the posterior distributions are significantly more constrained and closely follow the experimental data points. In this study, we focus on comparing against the experimental measurements of transverse momentum spectra of identified charged hadrons and strange hyperons in the 0-5% most central Pb-Pb and Xe-Xe collisions. In contrast to traditional Bayesian inference analyses, which often focus on deriving a single 'best-fit' scenario, our approach centres on using our framework to assess additional sources of systematic uncertainty, such as potential physics that is not included or parametrised in our model, the selection of experimental data, and the treatment of possible unknown correlated uncertainties in the experimental data. Through this extensive exploration of the parameter space, we aim to better understand the underlying dynamics of the system. We emphasise that, in our analysis, all configurations are equally valid, and we do not aim to single out one central result as the definitive outcome.
The soft-particle momentum range p_T < 2 GeV/c (with variations up to p_T < 3 GeV/c) always serves as our primary window of investigation, as it is believed to be governed by fluid-dynamic approximations of QCD dynamics. This momentum range is sensitive to important factors such as radial flow, viscous transport coefficients, and the initial conditions of the QGP [67][68][69][70]. Similar to our previous work [23], our default setup excludes pions in the p_T < 0.5 GeV/c range due to the well-known enhancement relative to hydrodynamic simulations [14,71]. This low-p_T pion excess is believed to arise from physics not fully accounted for in hydrodynamic simulations, such as: i) Bose-Einstein condensation [72,73], ii) an increased population of resonances [74], iii) the correct treatment of the finite width of the ρ meson [75], and iv) effects of critical chiral fluctuations [76].

A. Comparison of different collision system configurations
We start by exploring the impact of different collision systems on the constraints of our model parameters. The Bayesian posterior distributions are presented in Fig. 4. The diagonal panels display the marginalised distributions of each individual model parameter, while the off-diagonal panels illustrate the joint distributions for pairs of these parameters, marginalised over all others. The contours represent the (0.5, 1, 1.5, 2)-sigma-equivalent regions, encompassing 11.8%, 39.3%, 67.5%, and 86.4% of the samples. In addition, the median values and 68% confidence intervals of the individual model parameters are reported in Tab. II. Specifically, we present three configurations: i) Pb-Pb at √s_NN = 2.76 TeV alone; ii) its combination with Xe-Xe at √s_NN = 5.44 TeV; and iii) the combination of both of these with Pb-Pb at √s_NN = 5.02 TeV. Remarkably, even with a limited number of p_T-spectra observables, we find that most model parameters are well constrained, underscoring the potential of Bayesian analyses in providing valuable insights into the properties of the QGP. The one parameter that appears to prefer higher values, beyond the upper bound of the allowed interval, is the minimum value of the shear viscosity to entropy ratio parametrisation. We attribute this behaviour to the limited sensitivity of the current observables to the shear viscosity of the system. Future work incorporating experimental flow data is expected to further address this point. However, we emphasise that our η/s parametrisation is consistently applied throughout both the partonic and hadronic phases, possibly leading to higher minimum values compared to other Bayesian analyses, where an afterburner is used for the hadronic phase [9,10,[12][13][14][15][16][17][18][19][20].
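The quoted contour fractions follow from the standard cumulative probability of a two-dimensional Gaussian, P(k) = 1 - exp(-k²/2), evaluated at the k-sigma-equivalent levels; a quick numerical check:

```python
import numpy as np

# Probability mass of a 2D Gaussian enclosed within a k-sigma contour:
# P(k) = 1 - exp(-k^2 / 2).
k = np.array([0.5, 1.0, 1.5, 2.0])
fractions = 1.0 - np.exp(-k**2 / 2.0)
# → approximately [0.118, 0.393, 0.675, 0.865]
```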
When comparing the three scenarios, most parameters are compatible within their uncertainties (which naturally decrease when more experimental data are incorporated). However, noticeable differences emerge, particularly in the maximum temperature of the bulk viscosity to entropy ratio and the chemical freeze-out temperature, which appear to be influenced by the inclusion of the Pb-Pb data at √s_NN = 5.02 TeV; this is the data set that we observed to be the most difficult to fit. Since no immediate physical explanation is available, we recommend further investigation in a future analysis, especially as additional experimental observables are included. Another notable distinction is observed in the Norm and τ_0 parameters for the various collision systems, with a substantial increase between Pb-Pb at √s_NN = 2.76 TeV and the other two systems.

B. Effect of modified initial conditions and inclusion of strange hyperons
Shifting the focus to two other crucial aspects of our analysis, we transition to Fig. 5, where we explore the implications of the following scenarios: i) a modification of the initial TrENTo configuration to mimic a free-streaming phase, as discussed in Sec. II A; and ii) the inclusion of the experimental (1/2πp_T)(1/N_ev) d²N/(dy dp_T) data of the strange hyperon Λ (0.6 < p_T < 2 GeV/c). We compare the resulting posterior distributions with the single Pb-Pb √s_NN = 2.76 TeV case discussed earlier.
When employing the modified TrENTo settings to mimic the free-streaming phase, we observe a notable decrease in the values of the Norm and τ_0 parameters. This decrease can be attributed to the larger integrated transverse density ∫d²x T_R(x⃗) in this configuration. Because of the relationship expressed by Eq. 1, there exists a correlation between the magnitudes of the Norm and τ_0 parameters. Consequently, interpreting the decrease of τ_0 in a physical context becomes challenging; the smaller value only partly reflects the (artificial) smoothing of the initial entropy density profile, i.e. the reduction of its "spikiness".
Furthermore, the inclusion of the Λ hyperon data leads to an increase in the chemical freeze-out temperature. This finding is consistent with previous observations using hydrodynamic simulations [23,61], suggesting that strange and multi-strange baryons are more sensitive to changes in the transition temperature between the fluid evolution and the hadronic transport phase. This aligns with proposals in the literature indicating that strange hadrons may undergo chemical freeze-out earlier than non-strange particles [77][78][79][80].

C. Exploring variations in the analysis setup
Figure 6 summarises the previously discussed variations by reporting the median values and 68% confidence intervals of the marginalised Bayesian posterior distributions for each model parameter. Several smaller variations, applied exclusively to the Pb-Pb collision system at √s_NN = 2.76 TeV, are included as well. We assessed the effect of: i) using a constant shear viscosity to entropy ratio instead of the temperature-dependent version; ii) including pions with transverse momentum below 500 MeV/c; iii) increasing the transverse momentum limit to 3 GeV/c for all particle species; iv)-vi) including (ζ/s)_peak (150-200 MeV), (ζ/s)_width (10-100 MeV), or τ_shear (a_slope = 0-10 MeV⁻¹) as a seventh model variable in the MCMC procedure; and vii)-ix) considering only pions and kaons, pions and protons, or kaons and protons.
Overall, the values of the extracted model parameters demonstrate a reasonable stability across all configurations. When considering a constant shear viscosity to entropy ratio, the results remain consistent with those obtained from the temperature-dependent formulation (Eq. 3). The observation that the constant value for η/s closely resembles (η/s)_min in the temperature-dependent parametrisation suggests that, in the latter case, we are primarily sensitive to the region close to the phase transition. The inclusion of low-transverse-momentum pions impacts several model variables. Specifically, it leads to a substantially smaller τ_0, marginally lower freeze-out temperatures, and a slightly larger (ζ/s)_max. Given the expected non-hydrodynamic origin of the enhancement in the low-p_T pion spectra [72][73][74][75][76], this underscores the importance of restricting the analysis to the range where hydrodynamics is expected to be applicable. Increasing the upper p_T limit, instead, yields a notable impact on the (η/s)_min value. This parameter now tends to favour lower values, nearing the theoretical lower bound of 1/4π derived from AdS/CFT. This behaviour likely arises from the shear correction in the computation of the final particle spectrum, which scales with p_T² [28,81]: expanding the analysed p_T interval includes points that are much more sensitive to the shear corrections, and we would thus expect a change in the (η/s)_min parameter.
The introduction of (ζ/s)_peak, (ζ/s)_width, or a_slope as the seventh model variable in the MCMC procedure has a negligible impact on the values of the original six parameters. These remain consistent with those obtained from the central Pb-Pb √s_NN = 2.76 TeV configuration. However, it is important to note that none of the posterior distributions for these seventh model parameters converge; all three distributions touch the upper limit of their respective intervals. This emphasises the need for first-principles calculations of the bulk viscosity, which could replace the currently assumed Lorentzian form. Finally, excluding one of the three particle species (pions, kaons, or protons) results in small changes in the Norm and freeze-out temperatures, which remain compatible within uncertainties.

FIG. 7. The top panels display the transverse momentum spectra for pions, kaons, and protons in the 0-5% centrality class for the three collision systems. The spectra as simulated with the FluiduM + FastReso framework, using the extracted model parameters from our Bayesian analysis (see Fig. 6), are compared with experimental data from the ALICE Collaboration [32,34,35]. The bottom panels present the respective ratios between theory and experimental data for each hadron species. Note that the experimental uncertainties are not taken into account in the ratio.

TeV, as well as Xe-Xe collisions at √s_NN = 5.44 TeV, from the ALICE Collaboration. Our focus lies on the 0-5% centrality class, where the FluiduM background-fluctuation splitting ansatz works best. Furthermore, the restriction to only one centrality class was imposed so that we are able to keep the initial-state parameters, which play an important role in the centrality dependence of the observables, fixed. In future work, we will include anisotropic flow observables and explore additional centrality classes. Thanks to the computational efficiency of our framework, we could employ it to assess systematic uncertainties by varying key components of the analysis. In other words, we did not focus on a single 'best-fit' scenario but conducted an extensive exploration of the parameter space to better understand the underlying dynamics of the system. Multiple variations were performed, including data from various collision systems, modified initial conditions, inclusion and exclusion of hadron species, variations in the shear and bulk viscosity to entropy ratios, and the use of different p_T ranges. Overall, the extracted model parameters exhibit a reasonable stability across the configurations. Furthermore, our results are in agreement with previous Bayesian analyses, except for (η/s)_min, which we attributed to the limited sensitivity of the observables and to a different theoretical strategy concerning the hadronic phase.
Finally, we translated our extracted parameters into FluiduM + FastReso predictions for the pion, kaon, and proton (1/2πp_T)(1/N_ev) d²N/(dy dp_T) spectra in the 0-5% centrality class for the three collision systems. They demonstrate strong quantitative agreement with experimental measurements from the ALICE Collaboration.

FIG. 1. Temperature dependence of the shear (left) and bulk (right) viscosity to entropy ratio, as defined in Eqs. 4 and 5, respectively. Both the Yang-Mills and QCD η/s distributions are shown, as well as the Kovtun-Son-Starinets (KSS) bound from AdS/CFT calculations, η/s = 1/4π. The dashed blue lines indicate the minimum and maximum values used in the Bayesian analysis for (η/s)_scale and (ζ/s)_max.

FIG. 2. Correlation between the NN ensemble prediction and the original FluiduM + FastReso output for 10 different parameter configurations x. Each data point represents the d²N/(dy dp_T) value in one experimental p_T interval for either pions, kaons, or protons measured in the 0-5% most central Pb-Pb collisions at √s_NN = 2.76 TeV [32]. On the right, the residual distribution is shown in units of the corrected (see text for more details) neural-network ensemble uncertainty. A normal distribution with mean zero and standard deviation one is shown in orange to guide the eye.

FIG. 3. Comparison between simulated (top row; n = 10000) or emulated (bottom row; n = 100) FluiduM + FastReso predictions with experimental data for Pb-Pb collisions at √s_NN = 2.76 TeV in the 0-5% centrality interval from the ALICE Collaboration [32]. Note that the solid markers show the experimental data used in the MCMC procedure, while the open markers show the full experimental measurement. See text for more details.

FIG. 4. Bayesian posterior distribution of the model input parameters for three different combinations of the collision systems. The diagonal panels show the marginalised distributions of individual model parameters, while the off-diagonal panels present the correlations among pairs of model parameters. See text for more details.

FIG. 5. Bayesian posterior distribution of the model input parameters for three different configurations exploiting the √s_NN = 2.76 TeV Pb-Pb experimental data. The diagonal panels show the marginalised distributions of individual model parameters, while the off-diagonal panels present the correlations among pairs of model parameters. See text for more details.

TABLE II. Posterior parameter estimates corresponding to Fig. 4. The reported values correspond to the median values and 68% confidence intervals.
FIG. 6. The median values and 68% confidence intervals of the marginalised Bayesian posterior distributions for each model parameter across all analysed configurations. Note that, due to the collision-system dependence, the Norm and τ_0 parameters for Xe-Xe and Pb-Pb at √s_NN = 5.02 TeV are reported only in Tab. II. The configuration involving only pions and kaons is depicted in a lighter colour, reflecting the lack of convergence to a single minimum in the MCMC procedure for this specific setup.