Discovering partonic rescattering in light nucleus collisions

We demonstrate that oxygen-oxygen (OO) collisions at the LHC provide unprecedented sensitivity to parton energy loss in a system whose size is comparable to those created in very peripheral heavy-ion collisions. With leading and next-to-leading order calculations of nuclear modification factors, we show that the baseline in the absence of partonic rescattering is known to within 2% theoretical accuracy in inclusive OO collisions. Surprisingly, a $Z$-boson normalized nuclear modification factor does not lead to higher theoretical accuracy within current uncertainties of nuclear parton distribution functions. We study a broad range of parton energy loss models and find that the expected signal of partonic rescattering can be disentangled from the baseline by measuring charged hadron spectra in the range $20\,\text{GeV} < p_T < 100\,\text{GeV}$.

Introduction. Evidence for the formation of deconfined QCD matter, the quark-gluon plasma (QGP), in nucleus-nucleus (AA) collisions at the LHC and at RHIC comes from several classes of experimental signatures: the suppression of high-momentum hadronic yields (parton energy loss), the momentum anisotropy seen in multi-particle correlations (collective flow), the increased fraction of strange hadron yields (strangeness enhancement), the exponential spectra of electromagnetic probes (thermal radiation), and others [1-10]. Several of these findings signal the presence of partonic rescattering in the QCD medium produced in AA collisions. Even in smaller collision systems, in which interactions may be so feeble that the systems evolve close to free streaming, a smaller but nonvanishing strength of these signatures is expected.
Much experimental effort at the LHC has recently gone into characterizing emergent QCD medium properties as a function of the size of the collision system. Strangeness enhancement and collective flow have been observed in the most peripheral AA collisions, as well as in proton-nucleus (pA) and in proton-proton (pp) collisions [11-14]. In marked contrast, no sign of parton energy loss has been observed within current measurement uncertainties in pA collisions, and measurements in peripheral AA collisions remain inconclusive because of large systematic uncertainties (see Fig. 1). However, all parton energy loss models predict some (possibly small) signal in small collision systems. The experimental testing of this robust prediction is arguably one of the most important challenges of future experimental heavy-ion programs [15,16].
In this Letter, we show how oxygen-oxygen (OO) collisions at the LHC provide a unique opportunity to discover (small) medium-induced parton energy loss in small systems.
Nuclear modification factor. The main signal for parton energy loss is the observed suppression of energetic particles in AA collisions. It is typically quantified by the nuclear modification factor [17-20]

$$R^{h,j}_{AA}(p_T, y) = \frac{1}{T_{AA}}\, \frac{(1/N_{\rm ev})\, dN^{h,j}_{AA}/dp_T\, dy}{d\sigma^{h,j}_{pp}/dp_T\, dy}\,, \qquad (1)$$

which compares the differential yield in AA collisions to the yield in an equivalent number $N_{\rm coll} = \sigma^{\rm inel}_{pp}\, T_{AA}$ of pp collisions. Here, $\sigma^{\rm inel}_{pp}$ is the total inelastic pp cross section, $T_{AA}$ is the nuclear overlap function within a given centrality interval, and $N_{\rm ev}$ is the number of collision events in this centrality interval. $dN^{h,j}_{AA}/dp_T\, dy$ is the differential yield of charged hadrons ($h$) or calorimetrically defined jets ($j$) produced in AA collisions at transverse momentum $p_T$ and longitudinal rapidity $y$, and $d\sigma^{h,j}_{pp}/dp_T\, dy$ is the corresponding differential pp cross section.

[Fig. 1 caption: Error bars are statistical, while boxes are the combined systematic, luminosity, and $T_{AA}$ uncertainties; the $T_{AA}$ uncertainty dominates in peripheral AA collisions.]

arXiv:2007.13754v2 [hep-ph] 20 May 2021
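For concreteness, Eq. (1) amounts to the following per-bin arithmetic (a minimal sketch with made-up numbers; in practice the yields, $T_{AA}$, and the pp cross section come from data, with consistent units so that they cancel in the ratio):

```python
def r_aa(dn_aa, n_ev, t_aa, dsigma_pp):
    """Nuclear modification factor, Eq. (1): the per-event AA yield in a
    (pT, y) bin divided by T_AA times the pp cross section in that bin."""
    return (dn_aa / n_ev) / (t_aa * dsigma_pp)

# Illustrative numbers only; R_AA = 1 would signal no nuclear modification,
# R_AA < 1 a suppression of the AA yield relative to scaled pp collisions.
example = r_aa(dn_aa=2.0e5, n_ev=1.0e6, t_aa=0.4, dsigma_pp=1.0)
```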
The system size dependence of parton energy loss is typically studied in terms of the centrality dependence of $R_{AA}(p_T, y)$. Experimentally, centrality is defined as the selected percentage of the highest multiplicity events of the total inelastic AA cross section. Theoretically, it is related by Glauber-type models to $T_{AA}$, to the mean number of participating nucleons $N_{\rm part}$, and to the mean number of nucleon-nucleon collisions $N_{\rm coll}$ [21-24]. As seen from the top panel of Fig. 1, inclusive (i.e., centrality averaged) OO collisions probe the system size corresponding to highly peripheral lead-lead (PbPb) and xenon-xenon (XeXe) collisions.
The differential cross section $d\sigma^{h,j}_{pp}$ entering Eq. (1) can be measured precisely, and it can be calculated at sufficiently high $p_T$ with controlled accuracy in QCD perturbation theory. However, the nuclear overlap function $T_{AA}$ depends on the soft physics of the total inelastic pp cross section and on the model-dependent estimation of the number of binary nucleon-nucleon collisions. Estimates of the uncertainties associated with $T_{AA}$ range from 3% in central to 15% in peripheral PbPb collisions [17]. In addition, there are known event selection and geometry biases that complicate the model comparison of nuclear modification factors in peripheral AA collisions [24]. In this way, the characterization of a high-momentum transfer process becomes dependent on the modeling of low-energy physics whose uncertainties are difficult to estimate and to improve. This limits the use of Eq. (1) for characterizing numerically small medium modifications in very peripheral heavy-ion and pA collisions. A centrality averaged measurement of Eq. (1) in OO collisions would have a $T_{AA}$ uncertainty smaller than 15%, but soft physics assumptions remain [25].
It is of interest to characterize parton energy loss in the range of $N_{\rm part} \sim 10$ with measurements independent of soft physics assumptions. The study of the inclusive, minimum bias nuclear modification factor in collisions of light nuclei allows for this, since

$$R^{h,j}_{AA,\,\rm min\,bias}(p_T, y) = \frac{1}{A^2}\, \frac{d\sigma^{h,j}_{AA}/dp_T\, dy}{d\sigma^{h,j}_{pp}/dp_T\, dy} \qquad (2)$$

is independent of $T_{AA}$. The system size is controlled by selecting nuclei with different nucleon number $A$. Proposed light-ion collisions with oxygen ($A = 16$) and argon ($A = 40$) at the LHC provide a system size scan in the physically interesting region, see Fig. 1.
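The $T_{AA}$ independence of the minimum bias ratio can be made explicit in a short derivation (a sketch; normalizations follow the usual optical Glauber conventions):

```latex
% Minimum bias average of the nuclear overlap function: the Glauber
% normalization \int d^2b \, T_{AA}(b) = A^2 fixes
\langle T_{AA} \rangle_{\rm min\,bias}
  = \frac{\int d^2b \, T_{AA}(b)}{\sigma^{\rm inel}_{AA}}
  = \frac{A^2}{\sigma^{\rm inel}_{AA}} \, .
% With N_{\rm ev} = \mathcal{L}\,\sigma^{\rm inel}_{AA} and
% dN^{h,j}_{AA} = \mathcal{L}\, d\sigma^{h,j}_{AA}, Eq. (1) becomes
R^{h,j}_{AA,\,\rm min\,bias}
  = \frac{1}{\langle T_{AA} \rangle_{\rm min\,bias}}\,
    \frac{(1/N_{\rm ev})\, dN^{h,j}_{AA}/dp_T\,dy}{d\sigma^{h,j}_{pp}/dp_T\,dy}
  = \frac{1}{A^2}\,
    \frac{d\sigma^{h,j}_{AA}/dp_T\,dy}{d\sigma^{h,j}_{pp}/dp_T\,dy} \, ,
% so all soft-physics input (\sigma^{\rm inel}_{AA}, Glauber modeling)
% cancels between N_{\rm ev} and \langle T_{AA} \rangle.
```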
Perturbative benchmark calculations. The ability to discover a small signal of high-$p_T$ partonic rescattering via Eq. (2) is now free from soft physics assumptions. It depends solely on the experimental precision of the measurement and on the accuracy with which theory can calculate the null hypothesis, i.e., the value of $R^{h,j}_{AA,\,\rm min\,bias}$ in the absence of partonic rescattering. This null hypothesis depends only on high-momentum transfer processes that can be computed with systematically improvable accuracy in collinearly factorized perturbative QCD. To determine the null hypothesis, we calculate the inclusive jet cross section in pp and OO collisions at $\sqrt{s_{NN}} = 7$ TeV as the convolution of incoming parton distribution functions (PDFs) with hard matrix elements, using the NNLOJET framework [26,27] and APPLfast interpolation tables [28]. For pp collisions, such calculations provide quantitatively reliable predictions at next-to-leading order (NLO) and have been pushed to NNLO accuracy or even beyond for many important processes. For nuclei, the nuclear modifications of the PDFs (nPDFs) are currently available only up to NLO accuracy, so we restrict calculations of Eq. (2) to this order.
Results for the minimum bias nuclear modification factor of jets are shown in Fig. 2. The uncertainties in the proton PDFs and in the fixed-order perturbative calculation were estimated using the free proton PDF sets provided by CT14 [29] and by independently varying the factorization and renormalization scales by factors $\frac{1}{2}$ and $2$ while imposing $\frac{1}{2} \le \mu_R/\mu_F \le 2$. For leading order (LO) and NLO calculations, these theoretical uncertainties enter the numerator and denominator of Eq. (2) and are found to cancel to a large extent in the ratio. We checked that parton-shower (PS) and hadronization effects also largely cancel using the NLO+PS implementation of POWHEG+Pythia8 [30].
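The scale variation just described is the standard 7-point prescription: nine $(\mu_R, \mu_F)$ combinations minus the two with ratio $4$ or $1/4$. A minimal sketch (the function `toy_xsec` is an illustrative stand-in with mild logarithmic scale dependence, not an NNLOJET output):

```python
import math
from itertools import product

def seven_point_envelope(xsec, mu0):
    """Envelope of independent muR, muF variations by factors 1/2 and 2,
    keeping only points with 1/2 <= muR/muF <= 2 (7 of 9 combinations)."""
    values = []
    for fr, ff in product((0.5, 1.0, 2.0), repeat=2):
        if not 0.5 <= fr / ff <= 2.0:  # drops (1/2, 2) and (2, 1/2)
            continue
        values.append(xsec(fr * mu0, ff * mu0))
    return min(values), max(values)

# Toy stand-in for a cross section with weak scale dependence.
def toy_xsec(mur, muf):
    return 1.0 - 0.05 * math.log(mur / 100.0) + 0.02 * math.log(muf / 100.0)

lo, hi = seven_point_envelope(toy_xsec, mu0=100.0)
```

The envelope brackets the central-scale prediction; in the ratio of Eq. (2) these correlated variations largely cancel.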
Uncertainties in the nuclear modification of the free proton PDFs, however, enter only the numerator of Eq. (2). They were calculated using nPDF sets from the EPPS16 global fit, which includes a subset of LHC data on electroweak boson and dijet production in pPb collisions [31]. The nPDFs constitute the largest theoretical uncertainty, increasing from $\sim 2\%$ at $p_T = 50$ GeV to $\sim 7\%$ for $p_T > 200$ GeV. Compared to a conservative 15% uncertainty estimate on the modeling of $T_{AA}$ for very peripheral heavy-ion collisions, they are approximately 4 times smaller for $p_T < 100$ GeV. Moreover, nPDF uncertainties can be reduced by including additional LHC data. We show this by reweighting nPDF uncertainties with CMS dijet data [32] (following the work of Refs. [33,34]; see the Supplemental Material). The nPDF 90% confidence level band in Fig. 2 then shrinks to 1% (4%) at low (high) $p_T$, respectively. This demonstrates that the null hypothesis in the absence of parton energy loss is known with much higher accuracy from Eq. (2) than from the centrality dependent measurements of Eq. (1).
To gain insight into whether this higher theoretical accuracy can be exploited in an upcoming OO run, we have overlaid in Fig. 2 the statistical uncertainties of OO mock data for an integrated luminosity of $L_{AA} = 0.5\,{\rm nb}^{-1}$, corresponding to a few hours of stable beam in the "moderately optimistic" running scenario of Ref. [15]. The errors displayed on the mock data do not account for several sources of experimental uncertainty that can only be determined with detailed knowledge of the detectors and the machine. There are indications that the systematic experimental uncertainties entering Eq. (2) can be brought down to less than 4% in the measurement of the jet nuclear modification factor [19]. In addition, a precise determination of Eq. (2) requires controlling the OO and pp beam luminosities with comparable accuracy [35,36]. In this case, both the experimental precision and the theoretical accuracy of the no-parton-energy-loss baseline of Eq. (2) in OO collisions would be high enough to provide unprecedented sensitivity in the search for parton energy loss in systems with $N_{\rm part} \sim 10$.
In close analogy, we have also calculated the nuclear modification factor, Eq. (2), for single inclusive charged hadron spectra at LO and NLO. We convoluted the parton spectra with Binnewies-Kniehl-Kramer (BKK) [37] and Kniehl-Kramer-Potter (KKP) [38] fragmentation functions (FFs) using the INCNLO program [39,40] modified to use LHAPDF grids [41]. We obtained hadronic FFs by summing pion and kaon FFs for BKK, and pion, kaon and proton FFs for KKP. We checked that the BKK FFs (our default choice) provide a reasonable description of the measured charged hadron spectra in pp collisions at $\sqrt{s} = 7$ TeV. In the absence of final state rescattering in the QCD medium, the same FFs enter the numerator and the denominator of Eq. (2), such that the ratio is largely insensitive to the specific choice of FFs; as shown in Fig. 3, the residual uncertainty of the baseline is instead dominated by our current knowledge of nPDFs. As parton fragmentation softens the hadron distributions, the region of small $\sim 2\%$ uncertainty lies at a $p_T$ that is shifted compared to the $p_T$ dependence in Fig. 2.
Predictions of parton energy loss. The sizable azimuthal momentum anisotropies $v_n$ observed in systems of $N_{\rm part} \sim 10$ are interpreted in terms of interactions in the QCD medium. Therefore, qualitatively, some parton energy loss in OO collisions is expected. However, quantitative theoretical expectations for $R^h_{AA,\,\rm min\,bias}$ are model dependent, and there is no a priori reason that the effect is large. The medium modifications of the multiparticle final states giving rise to jets are more complicated to model than single inclusive hadron spectra, and none of the Monte Carlo tools developed to this end (see, e.g., [43-45]) have been tuned to very small collision systems. For these reasons, we restrict the following discussion of quantitative model expectations for parton energy loss in OO collisions to single inclusive hadron spectra.
In general, models of parton energy loss supplement the framework of collinearly factorized QCD with assumptions about the rescattering and the ensuing modifications of the final state parton shower in the QCD medium. For leading hadron spectra, the hard matrix elements are typically convoluted with quenching weights that characterize the energy loss of the leading parton in the QCD medium prior to hadronization in the vacuum. First perturbative calculations of this parton rescattering within QCD go back to the works of Baier-Dokshitzer-Mueller-Peigne-Schiff and Zakharov [46-49] and many others [50-52]. Within this framework, a large number of models were developed for the description of $R^h_{AA}$ over the last two decades [53]. These models differ in their assumptions about the strength of the rescattering (typically parameterized in terms of the quenching parameter $\hat{q}$ or an equivalent quantity), the time evolution of the medium, the path length dependence, and other details. To the best of our knowledge, none of these models have been used to make predictions for $R^h_{AA,\,\rm min\,bias}$ in OO collisions.
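The quenching-weight convolution described above can be illustrated with a toy calculation (our own minimal sketch, not any published model: the power-law exponent `n` and the exponential energy-loss distribution with mean `eps_mean` are illustrative assumptions):

```python
import math

def quenched_spectrum(pt, n=5.0, eps_mean=5.0, n_steps=400, eps_max=60.0):
    """Convolute a power-law parton spectrum dN/dpt ~ pt^-n with an
    exponential quenching weight P(eps) = exp(-eps/eps_mean)/eps_mean:
    a parton observed at pt was produced at pt + eps and lost energy eps."""
    d_eps = eps_max / n_steps
    total = 0.0
    for i in range(n_steps):  # midpoint rule over the energy loss eps
        eps = (i + 0.5) * d_eps
        weight = math.exp(-eps / eps_mean) / eps_mean
        total += weight * (pt + eps) ** (-n) * d_eps
    return total

def toy_raa(pt, **kw):
    """Ratio of the quenched to the unquenched spectrum at fixed pt."""
    return quenched_spectrum(pt, **kw) / pt ** (-kw.get("n", 5.0))
```

Because the spectrum falls steeply, even a few-GeV mean energy loss produces a sizable suppression, and the suppression weakens with increasing $p_T$, qualitatively like the measured $R^h_{AA}$.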
In a companion paper [42], we therefore derive predictions for $R^h_{AA,\,\rm min\,bias}$ in OO collisions. This is done by building a simple modular version of the factorized perturbative QCD framework supplemented with parton energy loss. We have systematically tested the resulting $R^h_{AA,\,\rm min\,bias}(p_T)$ for a wide set of model assumptions. All models were tuned to experimental data on $R^h_{AA,\,\rm min\,bias}(p_T)$ in $\sqrt{s_{NN}} = 5.02$ TeV PbPb collisions at $p_T \sim 50$ GeV [17]. We then predict the $p_T$ and system size dependence. Although our procedure is not the same as reproducing the various published parton energy loss models (the different model assumptions are all embedded in the same simple setup), we expect that it characterizes reasonably well the spread in model predictions for OO collisions. Referring for details to the companion paper [42], we show the final result in Fig. 3. The blue lines result from overlaying predictions for different modeling assumptions and thus represent a robust expectation for parton energy loss. The blue bands represent model and (reweighted) nPDF uncertainties added in quadrature. We conclude that a 15% uncertainty in the modeling of $T_{AA}$ in very peripheral PbPb collisions would prevent separating a large fraction of the model predictions from the null hypothesis. However, the much improved theoretical accuracy of Eq. (2) (error bands in Fig. 3) allows for this separation for the large majority of models in the range $20\ {\rm GeV} < p_T < 50\ {\rm GeV}$, and for some models up to 100 GeV.
Opportunities of Z-boson measurements. While our model studies indicate that the theoretical accuracy will be sufficient to discover partonic rescattering in small systems, the use of Eq. (2) could potentially be limited by beam luminosity uncertainties. Z-boson production has long been touted as a golden channel to measure the hard partonic luminosity precisely [54,55]. Therefore, we consider the Z-boson normalized nuclear modification factor

$$R^{h,j}_{AA,Z}(p_T, y) = \frac{\left( d\sigma^{h,j}_{AA}/dp_T\, dy \right) / \sigma^{Z}_{AA}}{\left( d\sigma^{h,j}_{pp}/dp_T\, dy \right) / \sigma^{Z}_{pp}}\,. \qquad (3)$$

In comparison to Eq. (2), this measurement has the additional advantage that the beam luminosity uncertainties cancel in the double ratio of cross sections. OO collisions at the LHC can reach an order of magnitude larger effective nucleon-nucleon luminosity than PbPb collisions [15]. A sample of $\mathcal{O}(10^5)$ Z bosons can be recorded with an integrated luminosity $L_{AA} = 0.5\,{\rm pb}^{-1}$ of OO collisions, which corresponds nominally to $\mathcal{O}(1\,{\rm day})$ of stable running at the LHC. This would bring the statistical uncertainties of the normalization in Eq. (3) below 1%.
As both jet and Z-boson yields are proportional to the incoming parton flux, we expected that the nPDF uncertainties would also largely cancel in the double ratio. In Fig. 4 we show the baseline calculation of Eq. (3) obtained in the same NNLOJET framework and displayed with the same breakdown of theoretical uncertainties as Fig. 2. The comparison of Figs. 2 and 4 makes it clear that our initial assumption was wrong and that the nPDF uncertainties in Eq. (3) are larger than those in Eq. (2). The reason is that the Z-boson and jet cross sections probe different Bjorken-$x$ ranges and that the nPDF uncertainties of these ranges turn out to be anti-correlated (see the Supplemental Material). We conclude that the theoretical accuracy of the Z-boson normalized nuclear modification factor, Eq. (3), relies on a precise knowledge of nPDFs. As more LHC data on AA and pA collisions are included in the nPDF fits, the nPDF uncertainties will be reduced. It would be interesting to study to what extent future pO and OO runs at the LHC can improve the current nPDF uncertainties.

Summary.
We have started from the observation that the current characterization of parton energy loss in small systems relies on centrality dependent measurements whose construction depends on assumptions about soft physics (in particular manifest in $T_{AA}$). The associated uncertainties are difficult to improve systematically and they constitute a significant limitation for high-precision measurements of small parton energy loss effects in small collision systems. We have demonstrated with LO and NLO calculations of the baseline of negligible parton energy loss that theoretical uncertainties for inclusive measurements of nuclear modification factors are much smaller and as low as 2% in the kinematically most favorable regions. Moreover, these uncertainties can be systematically improved with new data that constrain nPDFs.
We reemphasize that partonic rescattering is a prerequisite for quark-gluon plasma formation and that it is a direct logical consequence of the standard interpretation of azimuthal anisotropies $v_n$ in terms of final state interactions. The possibility that $v_n$ is observed while partonic rescattering is absent would contradict this phenomenological interpretation of heavy-ion data. The discovery of parton energy loss in small collision systems is therefore one of the most important challenges of the future experimental heavy-ion program. Here, we have shown that the improved theoretical accuracy of the baseline calculation of inclusive hadron spectra is needed to separate unambiguously model predictions of partonic rescattering from the null hypothesis in the small OO collision system. The integrated luminosity required to make this possible is $\mathcal{O}(1\,{\rm nb}^{-1})$. Measurements of the Z-boson normalized $R_{AA,Z}$ would provide an alternative characterization of parton energy loss in OO collisions that has comparable accuracy and the advantage that luminosity uncertainties cancel. Such a measurement requires an integrated luminosity of $\mathcal{O}(1\,{\rm pb}^{-1})$. We hope that our proposal helps to clarify one of the main outstanding questions in the LHC heavy-ion program and that it informs the ongoing discussions about the integrated luminosity required to exploit the unique opportunities of an OO run at the LHC.
Reweighting of Hessian nPDF sets. The process-independent nuclear parton distribution functions are extracted from global fits to a wide range of experimental data [31]. The impact of additional experimental data on nPDFs with Hessian error sets can be assessed via a reweighting procedure [33]. In Ref. [34] it was shown that including the LHC dijet data from pPb collisions significantly reduces the nPDF uncertainties of the EPPS16 nPDF sets. We have independently reproduced this calculation to determine the reweighting effect on jet and Z-boson production in OO collisions. In the following, we briefly summarize this procedure.
In the Hessian approach, the parton distribution functions $f(x, Q)$ are parametrized by internal parameters $z_j$ ($j = 1, \ldots, N$), and a fit is performed by determining the global minimum $z_{\rm min}$ of the $\chi^2(z)$ function. The $2N$ error sets $S_k^\pm$ represent $\pm$ displacements around the minimum along the eigendirections of the Hessian matrix. The allowed range of the displacement is determined by a suitably chosen tolerance $\Delta\chi^2$, e.g. $\Delta\chi^2 = 52$ for EPPS16 [31]. The theoretical uncertainty of a given observable $y_a$ around the central prediction $y_a[z_{\rm min}]$ is then given by

$$\delta y_a = \frac{1}{2} \sqrt{\sum_k D_{ak}^2}\,, \qquad (4)$$

where $D_{ak}$ is the difference of the observable evaluated on the $\pm$ error sets, i.e.,

$$D_{ak} = y_a[S_k^+] - y_a[S_k^-]\,. \qquad (5)$$

Note that for the discussion of the reweighting we consider the symmetric nPDF errors [31]. In order to assess the impact of new data points $y_a^{\rm data}$ on the nPDFs, we consider the $\chi^2$ after the inclusion of the new data,

$$\chi^2_{\rm new}(z) = \chi^2(z) + \sum_{a,b} \left( y_a[z] - y_a^{\rm data} \right) C^{-1}_{ab} \left( y_b[z] - y_b^{\rm data} \right), \qquad (6)$$

where $C_{ab}$ is the covariance matrix of the measurement and $y_a[z]$ are the theory predictions. Close to the initial global minimum, the original $\chi^2$ can be approximated by a quadratic function in the deviations from the minimum, while $y_a[z]$ is linearized using $D_{ak}$. The new data shift the location of the minimum and change the Hessian around it [33]. The reweighted theoretical uncertainties are given by

$$\delta y_a^{\rm new} = \frac{1}{2} \sqrt{\sum_s \frac{1}{\lambda_s} \Big( \sum_k D_{ak}\, v^s_k \Big)^2}\,, \qquad (7)$$

where $\lambda_s$ and $v^s_k$ are the $s$-th eigenvalue and normalized eigenvector of the matrix

$$M_{kl} = \delta_{kl} + \frac{1}{\Delta\chi^2} \sum_{a,b} \frac{D_{ak}}{2}\, C^{-1}_{ab}\, \frac{D_{bl}}{2}\,. \qquad (8)$$

Here $v^s_k$ represents the rotation of the Hessian matrix eigendirections, while $\lambda_s$ quantifies the reduction of nPDF uncertainties in that direction. The theory prediction at the new minimum is given by

$$y_a^{\rm new} = y_a[z_{\rm min}] - \sum_{k,l} \frac{D_{ak}}{2}\, \big(M^{-1}\big)_{kl}\, \frac{1}{\Delta\chi^2} \sum_{b,c} \frac{D_{bl}}{2}\, C^{-1}_{bc} \left( y_c[z_{\rm min}] - y_c^{\rm data} \right). \qquad (9)$$

We follow the analysis of Ref. [34] and apply a reweighting to CMS $\sqrt{s_{NN}} = 5.02$ TeV pPb dijet data to quadratic order (beyond-quadratic terms were found not to be important for this data set). Specifically, we consider the normalized dijet spectra ratio in a given $p_T$ range,

$$R^{\rm norm}_{\rm pPb} = \left. \frac{1}{d\sigma_{\rm pPb}/dp_T}\, \frac{d\sigma_{\rm pPb}}{dp_T\, d\eta} \,\right/ \frac{1}{d\sigma_{pp}/dp_T}\, \frac{d\sigma_{pp}}{dp_T\, d\eta}\,. \qquad (10)$$

The nPDFs consist of 40 EPPS16 error sets of the nuclear modification and 56 error sets of the proton baseline, which are fully correlated with the CT14 error sets. The proton baseline largely cancels in the ratio; therefore we perform the reweighting on the EPPS16 error sets only. We combine the data points $y_a^{\rm data} = R^{\rm norm}_{\rm pPb}(p_T^{\rm avg}, \eta_{\rm dijet})$ from five averaged dijet momentum bins $p_T^{\rm avg}/{\rm GeV} \in [55,75], [75,95], [95,115], [115,150], [150,400]$ and from averaged dijet rapidity bins in the range $-3 < \eta_{\rm dijet} < 3$. The reweighted uncertainties and the new central value are found from Eqs. (7) and (9). In Fig. 5 we show the result for the lowest momentum bin (other $p_T^{\rm avg}$ ranges are not shown). We observe a large reduction in nPDF uncertainties, as first reported in Ref. [34]. Importantly, the effect of this reweighting on other predictions can be obtained by replacing $D_{ak}$ and $y_a[z_{\rm min}]$ in Eqs. (7) and (9) with those of the new observable. LHC Run 3 high-statistics data on electroweak boson and jet observables in pPb collisions (where energy loss mechanisms are negligible) are expected to improve nPDF uncertainties [15]. OO and pO collisions could help to validate and improve nPDF fits at small nucleon number but high collision energies.

[Fig. 5 caption: Normalized dijet nuclear modification factor, Eq. (10). The open red band shows the initial nPDF uncertainties and the solid band those after reweighting via Eqs. (7) and (9). The orange and blue bands show the cancellation of the fully correlated proton PDF and scale uncertainties. The error bars are combined experimental statistical and systematic uncertainties [32]. Cf. Fig. 10 in Ref. [34].]
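The Hessian reweighting procedure summarized above is straightforward to implement numerically. A minimal sketch with NumPy (the inputs below are illustrative placeholders, not EPPS16 values or CMS data; conventions follow Eqs. (7) and (9) referenced in the text):

```python
import numpy as np

def hessian_reweight(D, y0, y_data, C, dchi2):
    """Hessian nPDF reweighting.
    D[a, k] = y_a[S_k^+] - y_a[S_k^-] over eigendirections k,
    y0 = central predictions, y_data = new data points,
    C = data covariance matrix, dchi2 = tolerance.
    Returns (shifted central values, reweighted symmetric uncertainties)."""
    Cinv = np.linalg.inv(C)
    T = D / 2.0                                            # dy_a / dw_k
    M = np.eye(D.shape[1]) + (T.T @ Cinv @ T) / dchi2      # rotation matrix
    lam, v = np.linalg.eigh(M)                             # lam_s >= 1
    # Shifted minimum in eigendirection coordinates, then new central value.
    w_min = -np.linalg.solve(M, T.T @ Cinv @ (y0 - y_data)) / dchi2
    y_new = y0 + T @ w_min
    # Reweighted uncertainties: eigenvalues shrink each direction's error.
    dy_new = 0.5 * np.sqrt(((D @ v) ** 2 / lam).sum(axis=1))
    return y_new, dy_new
```

Since $M$ is the identity plus a positive semi-definite matrix, all $\lambda_s \ge 1$, so the reweighted uncertainties can only shrink relative to the original Hessian errors.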
Z-boson production in OO collisions. Electroweak boson production in heavy-ion collisions has been used to access the initial state properties unobscured by the medium, e.g., to constrain the nPDFs. Z bosons provide particularly clean experimental observables, which can be inferred from the dilepton invariant mass spectrum. It is therefore natural to expect that Z bosons provide a high-precision hard-parton luminosity meter. Here we discuss the unexpected anti-correlation of nPDF uncertainties at different Bjorken-$x$ that makes this conclusion premature.
We use the NNLOJET framework to calculate the Z-boson cross section at NLO in pp and OO collisions at $\sqrt{s_{NN}} = 7$ TeV. In Fig. 6 we plot the Z-boson nuclear modification factor as a function of the absolute Z-boson rapidity. The theoretical uncertainties for the differential $R^Z_{AA}$ range from 5% to 9%. We estimate that the statistical uncertainties for the total sample of $\mathcal{O}(10^5)$ Z bosons in the range $-2.4 < y < 2.4$ would be $\mathcal{O}(1\%)$ for the $R^Z_{AA}(|y|)$ shown in Fig. 6 and $\mathcal{O}(0.3\%)$ for the total fiducial cross section. This does not take into account other experimental uncertainties, in particular the luminosity normalization.
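The quoted statistical precision follows from simple Poisson counting (a back-of-the-envelope sketch; the 12-bin rapidity binning is our illustrative assumption, not the experimental binning):

```python
import math

def poisson_rel_err(n_events, n_bins=1):
    """Relative statistical uncertainty per bin for n_events spread
    uniformly over n_bins (Poisson counting: sigma_N / N = 1 / sqrt(N))."""
    per_bin = n_events / n_bins
    return 1.0 / math.sqrt(per_bin)

total = poisson_rel_err(1e5)                # ~0.3% on the fiducial cross section
per_bin = poisson_rel_err(1e5, n_bins=12)   # ~1% per |y| bin
```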
The total fiducial Z-boson cross section is used to normalize the jet nuclear modification factor, with the result shown in Fig. 4 of the main text. Contrary to initial expectations, the nPDF uncertainties do not cancel between the Z-boson and jet cross sections. The origin of this can be traced back to the different Bjorken-$x$ regions of the nPDFs that are probed by the two processes.
Cross section predictions for hadron collisions, $\sigma_{AB}$, can be computed through a convolution of the parton-level cross section $\hat\sigma$ and the parton luminosities given by the PDFs,

$$\sigma_{AB} = \sum_{i,j} \int dx_A\, dx_B\, f_i^A(x_A, Q^2)\, f_j^B(x_B, Q^2)\, \hat\sigma_{ij}\,.$$

At leading order, the Bjorken-$x$ probed by Z bosons and by jets at the center-of-mass energy $\sqrt{s}$ are given by

$$x^Z_{A,B} = \frac{M_Z}{\sqrt{s}}\, e^{\pm y}\,, \qquad x^j_{A,B} = \frac{p_T}{\sqrt{s}} \left( e^{\pm y_1} + e^{\pm y_2} \right),$$

where $M_Z$ and $y$ are the mass and rapidity of the Z boson, and $p_T$, $y_1$ and $y_2$ are the transverse momentum and the rapidities of the leading and subleading jets. We note that if $y = y_1 = y_2$, then $x^Z_{A,B} = x^j_{A,B}$ for $p_T = M_Z/2 \approx 45$ GeV, which corresponds to the lowest momentum bin in Fig. 4 in the main text.
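The leading-order kinematics above can be checked numerically (a small sketch; $M_Z = 91.19$ GeV and $\sqrt{s} = 7$ TeV as in the text):

```python
import math

SQRT_S = 7000.0  # GeV
M_Z = 91.19      # GeV

def x_zboson(y):
    """LO Bjorken-x pair (x_A, x_B) probed by a Z boson at rapidity y."""
    return (M_Z / SQRT_S * math.exp(y), M_Z / SQRT_S * math.exp(-y))

def x_dijet(pt, y1, y2):
    """LO Bjorken-x pair probed by a dijet with rapidities y1, y2."""
    xa = pt / SQRT_S * (math.exp(y1) + math.exp(y2))
    xb = pt / SQRT_S * (math.exp(-y1) + math.exp(-y2))
    return (xa, xb)

# For y = y1 = y2 the two coincide exactly at pt = M_Z / 2 ~ 45.6 GeV,
# since e^y + e^y = 2 e^y compensates the factor pt / M_Z = 1/2.
```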
In general, $x^Z_{A,B} \neq x^j_{A,B}$, and Z bosons and jets are sensitive to partonic fluxes at different Bjorken-$x$. The uncertainties could still cancel if the nPDF error sets remained correlated over that $x$ range. We compute the Pearson correlation coefficient

$$\rho = \frac{{\rm cov}(X, Y)}{\sqrt{{\rm cov}(X, X)\, {\rm cov}(Y, Y)}}$$

between the total Z-boson cross section ($X$) and the inclusive jet cross section ($Y$) evaluated on the 40 EPPS16 error sets. The result is shown as the red line in Fig. 7 for the NLO prediction, which is very similar to the correlation obtained at LO (not shown). We observe a positive cross section correlation for $p_T < 50$ GeV, which however turns negative at higher jet momentum. For comparison, we plot the correlation between the gluon distribution functions $X = f_g(x_1, M_Z)$ and $Y = f_g(x_2, M_Z)$ for the same EPPS16 error sets. We find that uncertainties of partons in the small-$x$ shadowing region $x \lesssim 0.01$ are anti-correlated with those in the anti-shadowing region $x \approx 0.1$ [31]. The rapidity-integrated cross sections are convolutions of products of PDFs at different Bjorken-$x$, but the observed correlation in cross sections follows closely that of partons at $x_1 = 0.004$ and $x_2 = 2p_T/\sqrt{s}$. In summary, because of the anti-correlation between the parton fluxes probed by Z bosons and jets, the theoretical nPDF uncertainties in the ratio of these fluxes add up instead of canceling. One way forward is simply to expect that, with new data in the global fits, the overall uncertainties will be sufficiently reduced. However, it would also be interesting to see if the present anti-correlation between nPDF uncertainties could be exploited to increase the constraining power of such additional data.
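The correlation over Hessian error sets can be computed as follows (a sketch with synthetic numbers standing in for the evaluations on the 40 EPPS16 sets; for Hessian sets the covariance is built from the $\pm$ pair differences per eigendirection, a standard convention we assume here):

```python
import math

def hessian_cov(xp, xm, yp, ym):
    """Covariance of two observables over Hessian +/- error-set pairs:
    cov(X, Y) = (1/4) * sum_k (X_k^+ - X_k^-) * (Y_k^+ - Y_k^-)."""
    return 0.25 * sum((a - b) * (c - d) for a, b, c, d in zip(xp, xm, yp, ym))

def hessian_corr(xp, xm, yp, ym):
    """Pearson correlation coefficient built from Hessian covariances."""
    cxy = hessian_cov(xp, xm, yp, ym)
    cxx = hessian_cov(xp, xm, xp, xm)
    cyy = hessian_cov(yp, ym, yp, ym)
    return cxy / math.sqrt(cxx * cyy)

# Toy inputs where Y moves opposite to X on every eigendirection,
# mimicking the shadowing/anti-shadowing anti-correlation in the text.
xp, xm = [1.1, 1.2], [0.9, 0.8]
yp, ym = [1.9, 1.8], [2.1, 2.2]
```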