Reweighted nuclear PDFs using Heavy-Flavor Production Data at the LHC: nCTEQ15_rwHF&EPPS16_rwHF

We present the reweighting of two sets of nuclear PDFs, nCTEQ15 and EPPS16, using a selection of experimental data on heavy-flavor meson [D0, J/psi, J/psi from B and Upsilon(1S)] production in proton-lead collisions at the LHC which were not used in the original determination of these nuclear PDFs. The reweighted PDFs exhibit significantly smaller uncertainties thanks to these new heavy-flavor constraints. We present a comparison with another selection of data from the LHC and RHIC which were not included in our reweighting procedure. The comparison is overall very good and serves as a validation of these reweighted nuclear PDF sets, which we dub nCTEQ15_rwHF&EPPS16_rwHF. This indicates that the LHC and forward RHIC heavy-flavor data can be described within the standard collinear factorization framework with the same (universal) small-x gluon distribution. We discuss how we believe such reweighted PDFs should be used as well as the limitations of our procedure.


I. INTRODUCTION
In [1], we investigated how experimental data on the inclusive production of heavy-flavor (HF) mesons [D0, J/ψ, B → J/ψ and Υ(1S)] in proton-lead collisions at the LHC could advance our knowledge of the gluon longitudinal-momentum distribution inside heavy nuclei. Indeed, at LHC and RHIC energies, such HF reactions are usually initiated by gluons, and their production in proton-nucleus collisions should thus shed light on the nuclear gluon content. We observed that the nuclear effects encoded in two recent global fits of nuclear parton densities at next-to-leading order (NLO), nCTEQ15 [4] and EPPS16 [5], yielded a good description of the existing LHC HF production data, supporting the hypothesis that the modification of the partonic densities in heavy nuclei could be the dominant effect at play in such processes.
We then went further and performed a Bayesian-reweighting analysis of the data sample for each of these mesons, showing that the existing HF data clearly point to a depleted gluon distribution at small momentum fractions, x, in the lead nucleus, a phenomenon also known as shadowing. According to our reweighting analysis, the significance of this depletion is larger than 7σ for x smaller than 0.01. In addition, our analysis also supported the existence of gluon antishadowing, whereby the gluon content of nuclei is enhanced around x ∼ 0.1.
We concluded that the inclusion of such HF data in a nuclear PDF (nPDF) fit, such as those used for our analysis, would reduce the uncertainty on the gluon density in heavy nuclei down to x ≃ 7 × 10⁻⁶. At such low x, there are currently simply no other data. We stressed that the reweighted nPDFs would remain compatible with the other data of the global fits, in particular in the quark sector.
Yet, we noted that the large factorization-scale (µF) uncertainty in the computation of HF-production cross sections and nuclear modification factors (NMFs) still had to be addressed. We indeed found it to generate an uncertainty larger than that of the nPDFs reweighted using charm or charmonium data. Indeed, the magnitude of the nuclear effects encoded in the nPDFs depends on the scale via the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution equations, and this evolution is admittedly fast in the GeV range.
In the present work, we follow up on our previous study, published as a Letter. First, we provide more details about the reweighting analysis and new comparisons with data sets which became available after our study was performed; these serve as an a posteriori validation of our reweighting analysis. Second, we have converted our reweighted nPDFs, initially given as replicas with weights, into nPDFs with Hessian uncertainties. As such, they can be distributed as files to be included in, and easily used through, the LHAPDF library. A new feature of our analysis is that it includes the effect of the factorization-scale choice. We therefore also provide specific guidance on how to use these sets in future studies, in particular to predict the initial-state nuclear effects on hard probes of the quark-gluon plasma (QGP).

A. Framework and its justification
In collinear factorization, following Feynman's parton model, the longitudinal-momentum distributions of quarks and gluons inside hadrons are given by the PDFs. These connect the measurable hadronic cross sections to the partonic cross sections (induced by quarks and gluons), which can be calculated using perturbative methods.
The determination of the PDFs of free protons, denoted f_i^p, is actively pursued by many groups via global analyses of as much experimental data on hard processes as possible, provided these are believed to be perturbative enough. Such state-of-the-art global analyses [6-11] involve complex perturbative calculations and cutting-edge statistical methods to simultaneously extract the PDFs and their uncertainties.
Assuming the same collinear factorization applies in proton-nucleus (pA) and nucleus-nucleus (AA) collisions, one needs to introduce nPDFs, denoted f_i^A. In such a case, additional challenges arise along with additional motivations to study them. Indeed, nuclear data are significantly more complex to collect, and one needs to cope with two additional degrees of freedom to describe the nuclei: the numbers of protons (Z) and neutrons (N = A − Z) which they comprise. The nPDFs are also necessary inputs to employ hard probes of the QGP produced in ultra-relativistic heavy-ion collisions at RHIC and the LHC [12]. Constraining nPDFs is then not only part of the ambitious endeavor to understand the quark and gluon content of nuclei but also a means to understand the initial stages of the production of some QGP probes.
Since the early 1980s, we have known that the partonic description of nuclei cannot be reduced to a simple collection of partons in free nucleons. In other words, nPDFs deviate from a simple sum of nucleon PDFs. To study such deviations, it is customary to rely on NMFs, like that of the DIS structure function F2, and on parton-level NMFs, R_q,g^A, instead of the absolute nPDFs. Past studies of F2 [13-19] told us that, for the quarks, (i) R_q^A > 1 for x ≳ 0.8 (Fermi-motion region), (ii) R_q^A < 1 for 0.3 ≲ x ≲ 0.8 (EMC region), (iii) R_q^A > 1 for 0.1 ≲ x ≲ 0.3 (antishadowing region), and (iv) R_q^A < 1 for x ≲ 0.1 (shadowing region). These four different x regions are usually denoted by the names in parentheses. The EMC region still lacks a fully conclusive picture [20], although R_q^A at medium and large x is usually explained by nuclear-binding and medium effects and by the Fermi motion of the nucleons [21]. At small x, coherent scatterings inside the nucleus explain the observed suppression of F2, which is why it is referred to as shadowing. The physics underlying the antishadowing is, however, less firmly established.
Although the generic trend of the nPDFs could in principle be related to specific physics phenomena, they remain essentially determined by global fits of experimental data [4,5,[22][23][24][25][26][27] based on initial parameterizations only slightly driven by the above physics considerations.
The above discussion specifically relates to the nuclear quark content, which can be directly probed by lepton-nucleus (ℓA) DIS and pA Drell-Yan processes. Lacking corresponding direct probes, the nuclear gluon content is less well known, despite indirect constraints from scaling violations [4].
As such, the NLO nPDF fits nCTEQ15 [4] and EPPS16 [5] were performed using data from RHIC on single inclusive pion production; in the case of EPPS16, jet data from the LHC were also used. The objective was precisely to constrain the gluon densities down to x ∼ 10⁻³. However, owing to the absence of data at x ≲ 10⁻³, the nuclear gluon content remained completely undetermined at small x. The resulting gluon nPDFs in this region are thus mere extrapolations from the region of larger x and essentially follow from their x-dependent parameterizations at the scale µF,0 ∼ 1 GeV where the perturbative DGLAP evolution is initiated.
As pointed out several times [28,29], the uncertainties derived from these gluon nPDFs are not representative of the true uncertainty on this quantity. More flexible initial nPDF parameterizations naturally yield much larger uncertainties in this region. This explains why the EPPS16 set, despite accounting for more data constraints, shows much larger uncertainties than its predecessor EPS09 [30] or than nCTEQ15.
In this context, and motivated by the results of proton studies using HF production to improve the determination of the small-x gluon PDF [31-35], we studied in [1] the impact of HF production in constraining the small-x gluon density in lead down to x ≃ 7 × 10⁻⁶. In particular, we used heavy-quark and heavy-quarkonium data from LHC proton-lead (pPb) collisions.
Like the follow-up work presented here, our former study relied on the assumption that collinear factorization in terms of nPDFs holds in the nuclear environment. To date, the global usage of nPDFs is still a subject of debate. Such an assumption should thus be seen as a working hypothesis which has to be systematically questioned. This is the object of the present extension, with new data-theory comparisons and the release of reweighted nPDFs which can serve for future studies that will confirm or falsify this framework.

B. Connecting NMFs and nPDFs
Since the advent of RHIC and the LHC, thus for two decades now, the cross-section measurements performed in pA collisions have nearly systematically been normalized to the pp ones [12,53,54]. Indeed, the prime interest of such studies is to look for deviations from the free-nucleon case, up to isospin effects.
Hard perturbative reactions are rare, and each of the N_coll binary nucleon-nucleon (NN) collisions triggered by a pA or AA collision is meant to contribute independently and equally to the observed yields Y: Y_pA/AA ≃ N_coll × Y_NN. If one considers all the possible geometrical configurations for these collisions, this expected equality can be translated into a relation between the cross sections, namely dσ_pA ≃ A × dσ_pp. As such, it is natural to define the NMF, R_pA = dσ_pA / (A × dσ_pp), such that (up to isospin effects) it equates unity in the absence of nuclear effects. Just as R_q^A was defined above, one can define R_g^A, with the difference that gluon densities are a priori identical in protons and neutrons. If nPDF modifications were the only nuclear effect at play, R_pA would then be directly connected to R_g^A via an integration over the kinematic variables. Only for specific reactions at leading order (LO) can one write an equality; in general, the relation is necessarily more complex.
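To make the LO connection concrete: in the simplest 2→1-like kinematics, where the observed particle of transverse mass m_T produced at c.m.s. rapidity y carries the whole partonic final state, the momentum fractions are fixed as x_{1,2} = (m_T/√s_NN) exp(±y), and R_pA then tracks R_g^A(x_2) for a gg-initiated process. A minimal Python sketch, where the function names and the shadowing-like toy R_g are purely illustrative (not one of the fitted nPDFs):

```python
import math

def momentum_fractions(y_cms, m_T, sqrt_s_NN):
    """LO 2->1-like kinematics: x1 (proton side) and x2 (nucleus side)
    for a particle of transverse mass m_T at c.m.s. rapidity y_cms."""
    x1 = m_T / sqrt_s_NN * math.exp(+y_cms)
    x2 = m_T / sqrt_s_NN * math.exp(-y_cms)
    return x1, x2

def toy_R_g(x):
    """Purely illustrative shadowing-like gluon NMF: suppressed at small x,
    rising toward 1 at larger x.  NOT a fitted nuclear PDF."""
    return 1.0 - 0.3 / (1.0 + (x / 0.01) ** 0.5)

# J/psi (m_T ~ 3.1 GeV at low P_T) at forward rapidity in 5.02 TeV pPb:
x1, x2 = momentum_fractions(y_cms=3.0, m_T=3.1, sqrt_s_NN=5020.0)
R_pA_LO = toy_R_g(x2)   # at LO, R_pPb ~ R_g^Pb(x2) for gluon fusion
```

The numerical values show why forward LHC quarkonium data reach x_2 of a few 10⁻⁵, far below the reach of the DIS and Drell-Yan data in the global fits.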
In the case of DIS off a nucleus A, one historically used F2^A instead of σ, and the corresponding NMF R[F2] is then naturally found to probe the modification of the (anti)quark nPDF compared to its PDF, i.e. R_q^A as discussed above. The NMFs at the LHC and RHIC are so far differential in the transverse momentum (P_T,H) or the center-of-momentum (c.m.s.) rapidity y_c.m.s.,H of the observed hadron H, as well as a function of the collision centrality. The latter remains theoretically poorly understood and introduces many complications; as such, it was not considered in [1]. Since the current study builds on this first study, we leave the centrality dependence for future investigations and focus on centrality-integrated results, where the geometry of the collisions should not matter.
Focusing on R_pA is justified as follows. First, it essentially removes, on the theory side, the sensitivity to the proton PDF; indeed, at very small x, the proton-PDF uncertainties are not necessarily negligible. Second, R_pA is a priori less sensitive to the modification of the normalization of the cross section by QCD corrections to the hard parton scattering. Third, on the experimental side, R_pA is usually better determined than the pA yields because some systematic experimental uncertainties can be assumed to cancel.
As we have already alluded to, the connection between R_pA and R_q,g^A is not necessarily trivial, even assuming that the reaction is only initiated by gluon fusion, as expected at high energy for HF production. Since we follow the procedure of [1], we employ the data-driven approach of [55-57], where the matrix elements squared |A|² for the gluon-fusion processes are determined from pp data, restricting to a 2 → 2 kinematics. First and foremost, this approach is justified by our limited understanding of the quarkonium-production mechanisms (see e.g. [12,58,59]) while being sufficient to perform a sound evaluation of the nPDF effects via R_pA. Second, the same approach also applies to open-HF hadrons [55], for which full-fledged perturbative QCD computations exist.
Indeed, we have checked that FONLL yields equivalent results. In practice, the approach boils down to employing a specific empirical functional form for |A|², initially used in [69] to model single-quarkonium hadroproduction for double-parton-scattering studies [3,69-74]. The latter is flexible enough to provide a fair account of HF data on single-inclusive-particle production. In fact, this is not always the case for complete perturbative QCD (pQCD) computations, which however have the virtue of being derived from first principles of QCD.
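For orientation, such empirical |A|² ansätze are of a "crystal-ball" type: Gaussian-like at low P_T, smoothly matched to a power-law tail above some ⟨P_T⟩. The sketch below is only schematic; the functional details, parameter values and normalization are placeholders, not the fitted form of [69] or [55]:

```python
import math

def amp2(pT, M, lam2kappa=1.0, kappa=0.6, n=2.0, pT_avg=4.5):
    """Schematic crystal-ball-like |A|^2 ansatz: Gaussian-like fall-off at
    low pT, continuously matched to a power-law tail above pT_avg.
    All parameter values here are illustrative placeholders."""
    if pT <= pT_avg:
        return lam2kappa / M**2 * math.exp(-kappa * pT**2 / M**2)
    # continuous matching to a power-law tail at pT = pT_avg
    norm = lam2kappa / M**2 * math.exp(-kappa * pT_avg**2 / M**2)
    return norm * (1.0 + kappa / n * (pT**2 - pT_avg**2) / M**2) ** (-n)
```

Once the few free parameters are fitted to the pp P_T spectrum, the same |A|² is convolved with proton or nuclear gluon densities to predict the pPb spectrum and hence R_pPb.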
The case of D-meson production in pPb collisions has in fact recently been studied in detail [75] using the SACOT-m_T variant [76] of the GM-VFNS formalism. It has been found to yield the same qualitative features as what we obtained in [1] regarding the constraints on the nCTEQ15 and EPPS16 nPDFs and the rather large associated factorization-scale uncertainty.
The employed data-driven approach is also such that:
1. the event generation is much faster than using NLO QCD-based codes, which allows us to perform computations for several nPDFs (2 in our case) with 3 scale choices for 4 particles in an acceptable amount of computing time;
2. one can employ it directly for any single-inclusive-particle spectrum once we know the relative contribution of the different ij fusion channels, i.e. the parton luminosities times |A_ij|²; thus, in principle, quark channels can also be accounted for;
3. the normalization uncertainty in the pp cross section is controlled by the measured data, which also enter R_pA.
Significant theoretical uncertainties arise from the µ F uncertainty in these QCD processes. Different reweighted nPDF sets therefore naturally emerge from the scale variation and we will explain how they should be used in section IV.
We stick to the same pp baseline as in [1], which was slightly extended and improved compared to [55]; it includes the non-prompt J/ψ from the B → J/ψ data. For the D0, J/ψ and Υ(1S), a scale variation in the pp baseline itself was performed, besides that in R_g^Pb(x, µF). We had also checked that, for D0 and B → J/ψ production, the scale uncertainty on R_pPb nearly matches that of FONLL, whereas FONLL shows significantly larger scale uncertainties on the cross section. This discussion was given as supplemental material to [1].

C. The PDF reweighting method

PDF reweighting provides a means to estimate the impact of new experimental data on PDFs without performing a full-scale global analysis. Instead, Bayes' theorem is used to update the underlying probability distribution and obtain a new, updated set of PDFs which takes into account the information from the new data. The first use of reweighting in the context of a PDF analysis was presented in [77]. Further development of this technique for PDF analyses was carried out, resulting in a number of improvements and variants of the method [78-81].
For the purpose of the study in [1] and of the current analysis, we used the reweighting procedure outlined in [82], which we briefly summarize here. In the first step, a Hessian PDF set is converted into PDF replicas, which have a direct probabilistic interpretation. Since the nPDFs we are using (nCTEQ15 and EPPS16) provide symmetric errors, the conversion allows for an arbitrarily precise reproduction of the corresponding uncertainties, and the replicas can be obtained using the following formula:

f_k = f_0 + (1/2) Σ_{i=1}^{N} (f_i^(+) − f_i^(−)) R_ki ,

where f_k are the PDF replicas, f_0 is the central PDF of the Hessian set, f_i^(±) are the corresponding Hessian error PDFs, N is the number of eigenvector directions of the Hessian set, and R_ki are normally-distributed random numbers centered at 0 with a standard deviation of 1. For such a set of replicas, one can compute the average (corresponding to the Hessian central value) and variance of a PDF-dependent observable O as

⟨O⟩ = (1/N_rep) Σ_{k=1}^{N_rep} O(f_k) ,   δ⟨O⟩² = (1/N_rep) Σ_{k=1}^{N_rep} (O(f_k) − ⟨O⟩)² ,

where N_rep is the number of replicas. During the reweighting, each of the PDF replicas is supplemented with a weight which quantifies how well the replica describes the new data. In our analysis, the weight is defined as

w_k = exp(−χ²_k/(2T)) / [ (1/N_rep) Σ_{i=1}^{N_rep} exp(−χ²_i/(2T)) ] ,

where T is the tolerance criterion used when defining the Hessian error PDFs [83] and χ²_k quantifies how well replica k describes the new data. After calculating the weights, it is straightforward to compute any PDF-dependent observable as

⟨O⟩_new = (1/N_rep) Σ_{k=1}^{N_rep} w_k O(f_k) .

More details on the reweighting procedure can be found in [82] and in the other aforementioned references.
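The three steps above (Hessian → replicas, weights from χ², weighted moments) can be sketched numerically. In this toy example the "Hessian" values of the observable and the single pseudo-data point are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(12345)

# Toy "Hessian" observable: central value and symmetric error pairs,
# standing in for O(f_0) and O(f_i^+/-)  (illustrative numbers only).
O_central = 0.80
O_plus  = np.array([0.84, 0.81, 0.83])
O_minus = np.array([0.76, 0.79, 0.77])

# Step 1: convert to replicas, O_k = O_0 + sum_i (O_i^+ - O_i^-)/2 * R_ki
N_rep = 10000
R = rng.standard_normal((N_rep, len(O_plus)))
O_rep = O_central + R @ ((O_plus - O_minus) / 2.0)

# Step 2: Bayesian weights from the chi2 of each replica against the
# new data (a single pseudo-data point here), with tolerance T.
data, sigma, T = 0.75, 0.02, 1.0
chi2 = ((O_rep - data) / sigma) ** 2
w = np.exp(-chi2 / (2.0 * T))
w *= N_rep / w.sum()          # normalize so that sum_k w_k = N_rep

# Step 3: reweighted expectation value and uncertainty
O_new  = np.sum(w * O_rep) / N_rep
dO_new = np.sqrt(np.sum(w * (O_rep - O_new) ** 2) / N_rep)
```

As expected, the reweighted central value moves toward the pseudo-data and the reweighted uncertainty shrinks below the prior one, which is the mechanism behind the uncertainty reduction seen in our RnPDFs.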
In a global PDF fit, higher-twist (HT) corrections have to be included in the theory predictions, or kinematic cuts have to be imposed such that HT effects are expected to be small in the selected data. For our study, one also needs to select a kinematical region where gluon fusion is the dominant channel and where nPDFs represent the main nuclear effects. This is why we decided in [1] to focus on open- and hidden-HF production in pA collisions at LHC energies. Owing to the large Lorentz boost at these energies, as regards quarkonium production, the heavy-quark pair remains small while traversing the nuclear matter. As such, the break-up of the pair [84,85] is negligible at the LHC, whereas it is a potentially large effect at low(er) energies. Along the same lines, we have not considered the more fragile excited states (ψ(2S), Υ(2S), Υ(3S)) and instead focused on J/ψ and Υ(1S) data, for which comover effects [38-42] are likely limited.
Overall, we have used the following datasets: the ALICE [86] and LHCb [87] D0 data; the ALICE [88,89] and LHCb [90,91] J/ψ data; the LHCb [91] B → J/ψ data; and the ALICE [92], ATLAS [93] and LHCb [94] Υ(1S) data. The χ² values we obtained and the original values are given as supplemental material to [1]. Forward dAu J/ψ RHIC data could have been added; instead, we preferred to focus on the LHC data at 5 and 8 TeV and to use the RHIC [95,96] and the new LHC [97,98] data as a cross check, as we do here.

III. RESULTING REWEIGHTED nPDFS
In [1], we performed a PDF reweighting using two sets of initial nPDFs, nCTEQ15 and EPPS16, and, for each of these nPDFs, we used data on J/ψ, D-meson, B → J/ψ and Υ(1S) production from pPb collisions at the LHC to produce four reweighted nPDF (RnPDF) sets. The reweighting for each data set was done independently, and 3 sets were obtained from a scale variation using the scale choices µ = {µ0, 2µ0, 0.5µ0}; see Tab. I for the values of µ0.
Going further than in [1], we have created, for the current study, an additional RnPDF set for each set of data by combining the uncertainties obtained from the scale variations. The combined set is produced by simply taking all replicas from the reweightings with different scale choices and treating them as equally probable. This gives us 3+1 RnPDF sets for each combination of initial nPDF set (nCTEQ15 or EPPS16) and data type, which together gives 32 new RnPDF sets. Since we want to provide all these new RnPDFs to the public such that they can be used in other studies, we have converted them into corresponding Hessian sets, which are handier to use than the (many) PDF replicas initially obtained. In order to produce Hessian sets out of PDF replicas, we use the mc2hessian program [99,100]. For the reweighted Hessian sets, we use the same number of error sets as in the original nPDFs, which is 32 for nCTEQ15 and 40 for EPPS16. One should however note that the obtained errors are symmetric; as a result, it is sufficient to provide only 16 and 20 error PDFs, respectively. The RnPDF errors for the resulting sets should be computed using the following prescription:

∆O = sqrt( Σ_k (O_k − O_0)² ) ,

where O is a PDF-dependent observable, O_0 (O_k) its value computed with the central (k-th error) PDF, and k runs over the 16 or 20 error PDFs. 8 The comparison of the reweighted and original gluon distributions is presented in Fig. 1. We should highlight that the plots (and the corresponding LHAPDF files) feature PDF uncertainties at 90% CL. This is in accordance with the original nPDF sets we used (nCTEQ15 and EPPS16), but it differs from what we presented in [1], where we had used 68% CL. 9 Additional plots showing a detailed comparison of the gluon nuclear modifications for the different reweightings, both at 68% CL and 90% CL, are presented in Appendix A.
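For users of the released grids, this symmetric-Hessian prescription amounts to a one-liner; the numbers below are illustrative, not values from the grids:

```python
import math

def hessian_symmetric_error(O_central, O_errorsets):
    """Uncertainty of an observable O for a symmetric Hessian set:
    Delta O = sqrt( sum_k (O_k - O_0)^2 ), with k running over the error
    PDFs (16 for the nCTEQ15-based sets, 20 for the EPPS16-based ones)."""
    return math.sqrt(sum((Ok - O_central) ** 2 for Ok in O_errorsets))

# Illustrative numbers only: O evaluated with the central member and
# with four hypothetical error members of a set.
O0 = 0.85
Oks = [0.86, 0.83, 0.852, 0.845]
dO = hessian_symmetric_error(O0, Oks)
```

The analogous formula for the original nCTEQ15 and EPPS16 sets, which ship plus/minus pairs, carries an extra factor 1/2 on the pairwise differences, as noted in the footnote.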
In order to confront the resulting distributions with recent nPDFs, in Fig. 2 we compare the gluon NMFs obtained in the HF reweightings with the results from the nCTEQ15WZ [27] and nNNPDF20 [25] nPDF sets. Neither of these two nPDFs used in its fit the LHC HF data we employed in our analysis and, as such, they do not have stringent constraints on the gluon distribution, especially in the low-x region (since most of the constraints in these sets come from the LHC W/Z boson data, which have a kinematic reach down to around x ∼ 10⁻³). Nevertheless, the comparison is interesting: it clearly shows that the HF data are crucial for pinning down the low-x gluon distribution, confirming that the gluon is shadowed at small values of x.
We note that in Fig. 2 we show a single error band for each of the HF RnPDFs. This error band is an envelope of the error bands originating from the reweightings performed with the different factorization scales, see Fig. 1 and Tab. I. Furthermore, since the atomic masses, A, of the lead and gold nuclei are very close (208 vs. 197), the nuclear modifications for these two nuclei are also very similar. We used this fact and assumed that the results of the reweighting using the pPb LHC HF data can be directly transferred to the case of gold. To do so, we simply applied the weights obtained for the Pb nPDF replicas to the corresponding Au nPDF replicas. This amounts to assuming that the A dependence in our RnPDFs is the same as that of the original nPDFs. That way, one can see the impact of the HF pPb LHC data on the Au nPDFs and confront such new information with p/dAu data from RHIC. To allow others to use these results, we provide LHAPDF grid files for the obtained Hessian sets. We supply the sets for lead and gold nuclei originating from the nCTEQ15 and EPPS16 J/ψ, D, and B → J/ψ meson data. We refrain from providing the sets obtained from the reweightings using Υ(1S) data, as the impact of these data on the original nPDFs was marginal. The LHAPDF grid files will be available at http://nloaccess.in2p3.fr/HF-LHC-RW-2017.

8 The original nCTEQ15 and EPPS16 LHAPDF sets provided both "plus" and "minus" error PDFs; as a result, the appropriate formula for calculating the PDF uncertainty was ∆O = (1/2) sqrt( Σ_i (O_i^(+) − O_i^(−))² ) instead.
9 The reweighting itself was performed at 68% CL, but for the convenience of the users we have converted the resulting nPDFs to 90% CL, which is the standard used in the community.

[Fig. 2 caption: Comparison of the reweighted gluon nuclear ratio for Pb from nCTEQ15 and EPPS16 with nCTEQ15WZ [27] and nNNPDF20 [25]. The ratio is taken with respect to the CT14nlo proton PDF [101]. The uncertainties for the RnPDFs are computed as envelopes of the uncertainties for the sets obtained with different scales.]

IV. RESULTS
As explained above, a data selection is necessary to minimize HT effects [102]. In addition, we recall that our parametrization of the hard scattering neglects the (anti)quark contributions. These are irrelevant at the LHC; however, it is important to remain in a kinematical region where gluon fusion dominates in order to apply this parametrization. For charmonia, this remains the case down to the energy range of RHIC [103-105] and of the LHC in the fixed-target mode [106-109], but it is also important to keep away from those regions where nuclear effects other than those which can be encapsulated in the nPDFs are believed to be important. This explains why we originally restricted our reweighting study to HF production in pA collisions at LHC energies.
We could also have added the forward dAu J/ψ RHIC data. Instead, we preferred to focus on the LHC data and to use the RHIC [95,96] and recent LHC [97,98] data as cross checks. We also note that adding the RHIC data would in fact have constrained the Au nPDFs. Here, we rather assume that the relative A dependence of the Au and Pb shadowing is the same as in the original nPDFs.
We extend this validation by also showing comparisons with a number of data sets which appeared after our study [110,111] and which can be considered as predictions since the nPDFs were left unchanged. However, our objective is not to be exhaustive but rather to illustrate that our RnPDFs provide a good first estimate of the nuclear effects at work in the production of the corresponding particles and that our released LHAPDF grids can be used as such. Our examples have also been chosen to explain which LHAPDF grids to use and how, that is, with which µF scale choices.
We should indeed distinguish two cases. The first corresponds to the situation where an LHAPDF grid has been reweighted on data from the same process (or a very similar one) as that for which one wishes to provide NMF predictions. In such a case, the µF uncertainties are likely highly correlated, even if the x ranges differ. As such, we advocate that the NMF be computed for 3 scale choices [ξ = (0.5, 1.0, 2.0)] using the RnPDF sets corresponding to the same 3 scale values used in the reweighting, and that the envelope of the resulting NMF uncertainties then be taken. This is what we will show next for J/ψ and charm production.
On the contrary, if the process for which one wishes to compute the NMF is a different one, e.g. di-jet vs. charm production, we find it more reasonable to consider that the µF uncertainties are not correlated. In such a case, one should rather use the constraints from the reweighting without any prior and take all the eigensets obtained with the different scale choices on the same footing. Hence, we advocate the use of the new merged grid, the computation of the NMF for 3 scale choices [still ξ = (0.5, 1.0, 2.0)], and the consideration of the envelope of the resulting uncertainties. Since we did not provide LHAPDF grid files for the reweighting with the Υ data, the predictions for the Υ NMFs which we show next are obtained in this way, using the gluon nPDF reweighted with charm data. We have done the same for the B predictions; we could equally have used the sets from beauty or J/ψ in both cases.

[Fig. 3 caption (fragment): ... using our RnPDFs compared to PHENIX data [95].]
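In both prescriptions the final step is the same: evaluate the NMF band for ξ = 0.5, 1 and 2 (with the matching-scale RnPDF set in the correlated case, with the merged set otherwise) and take the envelope. Schematically, with invented band values:

```python
def envelope(bands):
    """Envelope of uncertainty bands: bands is a list of (low, high)
    pairs, e.g. the NMF band computed for each of xi = 0.5, 1, 2."""
    lows, highs = zip(*bands)
    return min(lows), max(highs)

# Correlated-scale prescription: compute R_pA with each scale's own
# RnPDF set, then take the envelope (illustrative numbers only).
band_xi_05 = (0.70, 0.82)
band_xi_10 = (0.74, 0.84)
band_xi_20 = (0.78, 0.88)
low, high = envelope([band_xi_05, band_xi_10, band_xi_20])
```

The envelope is deliberately conservative: it makes no assumption about how the scale uncertainty is distributed between the three choices.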
A. J/ψ data at RHIC at √s_NN = 200 GeV

Let us start with some comparisons with RHIC data at √s_NN = 200 GeV, both for dAu 11 and pAu collisions, as a function of y_c.m.s., since they illustrate that there is indeed a universality in the suppression of forward J/ψ data at high energies which can be reproduced by nPDF effects only.
Indeed, one can see in Figs. 3 and 4 that the magnitude of the suppression in the forward region is well accounted for by the nPDFs reweighted on the J/ψ LHC data. We recall that the shown uncertainty results from the envelope of the bands found by taking the 3 J/ψ-RnPDFs (obtained with the 3 scales) evaluated at the same 3 scale choices. As we just explained, this amounts to assuming that the factorization scale in the hard scattering is fully correlated between the LHC and RHIC energies. Nevertheless, it remains unknown and should be varied. On the other hand, one clearly sees that the backward data cannot be described, and to a lesser extent the central-rapidity ones. This is absolutely not surprising (and it is well known that the same happens with the original, unreweighted nPDFs [55]) since, at these energies and rapidities, the J/ψ has the time to fully form while traversing the nucleus. As such, additional (final-state) effects need to be considered; for instance, they can be encapsulated in an effective absorption. We refer the interested reader to a series of works treating these effects [49,84,113,114]. In what follows, we therefore naturally do not discuss further the P_T dependence of the NMFs in this rapidity region at RHIC. Figs. 5 and 6 respectively show the corresponding R_dAu and R_pAu vs. P_T, which are found to be fairly well reproduced by our RnPDFs.

11 Our results for dAu collisions are obtained by neglecting any nuclear effect in the deuteron.

[Fig. 4 caption (fragment): Inclusive J/ψ production at √s_NN = 200 GeV using our RnPDFs compared to PHENIX data [110].]

B. D-meson data at the LHC

We start our list of comparisons with LHC data with the D-meson ones, which are representative of the charm data. At this stage, since our examples are mainly illustrative, we have decided not to consider lepton-from-charm data, which are in any case much more complex to compute theory-wise.
We recall that our objective here is not to demonstrate that one can account for the entire set of existing data but to illustrate how and where to use our RnPDF sets and what conclusion to draw from NMFs which would then be obtained.
Our first example is in fact a consistency check, for which we show the NMF for D0 at √s_NN = 5.02 TeV obtained from the D-RnPDFs using precisely the 5.02 TeV data. These have been computed taking into account the scale correlation, as explained above. Fig. 7 shows the same agreement with R_pPb as a function of y_c.m.s. measured by LHCb as that obtained in Ref. [1]; the only difference indeed lies in the procedure used to derive the uncertainties with the scale correlation. A similar correspondence is found for the P_T dependence (see Fig. 8).
Regarding other D0 data, as of now, we are only able to compare with the central-rapidity data from ALICE, as shown in Fig. 9. These admittedly exhibit much larger experimental uncertainties and only hint at a possibly smaller suppression than what one would expect from our RnPDFs. More precise data at 8.16 TeV, for instance at backward and forward rapidities from LHCb, would be welcome, along the lines of their preliminary analysis of R_FB [116].

C. J/ψ data at the LHC

We now move on to the J/ψ case. Since we have J/ψ-RnPDFs, we have used them while taking into account the scale correlation. In this case, we only show comparisons with ALICE data at 8.16 TeV, which we did not include in our initial reweighting analysis [1]. Fig. 10 shows (without much surprise as well) that the magnitude of the NMF is well reproduced by our J/ψ-reweighted PDFs.
As for the P_T dependence shown in Fig. 11, it is particularly well accounted for in the backward region, less so in the forward region at large P_T. One could be tempted to attribute this to the growing impact of non-prompt J/ψ with increasing P_T: indeed, around P_T = 10 GeV, this non-prompt fraction has already tripled compared to the P_T-integrated value, reaching 30%. However, anticipating our B results, it is not obvious that this is the case. Another possible explanation is that the agreement in the backward region is coincidental and comes from the onset of the absorption; in such a case, it could be that, in general, the larger-P_T region is not well accounted for by our RnPDFs. Once again, we wish to keep this discussion rather descriptive, as final physical conclusions would require a full nPDF fit to see whether these data are reproducible by LT nPDF effects alone.
D. B-meson data at the LHC

We now come to the B-meson case. Because the B data were not precise enough when we performed our reweighting analysis [1], the results we obtained only showed marginal differences with the original nPDFs. As such, we find it to be a neat example to illustrate how to use our RnPDFs to compute NMFs for processes which are not connected to those used in the reweighting. In this case, we have used our D-RnPDFs with the combined 3-scale results to evaluate the NMFs for 3 scales and have taken the resulting envelope. Fig. 12 shows the resulting y_c.m.s. dependence, which agrees very well with both LHCb B+ data points at 8.16 TeV. We pushed the comparison further with the P_T dependence shown in Fig. 13, which is in good agreement within the experimental and theoretical uncertainties. We note that we could have performed more such comparisons with the D-RnPDFs, in particular with the J/ψ-from-B data from LHCb. However, we recall that our objective is certainly not to be exhaustive but to illustrate the usage of our reweighted nPDFs.

E. Υ(1S) data at the LHC

Our last set of comparisons concerns the Υ(1S) data. Like for the B mesons, we have used the D-RnPDFs. Fig. 14 shows the resulting y_c.m.s. dependence. We find a good agreement in the forward region, but not in the backward region: it seems that the peak generated by the antishadowing is simply absent. Either the D data tend to make it too strong, or the Υ(1S) data are suppressed by another mechanism. The NMF P_T dependence, shown in Fig. 15, confirms this observation.

F. Note on the di-jet data
After our initial reweighting study came out, the CMS collaboration claimed the first observation [119] of a depletion of gluons in Pb at large x, based on an analysis of the di-jet yield ratio in pPb and pp collisions as a function of the di-jet pseudorapidity, η_12. Such a suppression would correspond to a gluon EMC suppression, which was already hinted at by PHENIX Υ data [50].
It would be very insightful to compare such a ratio using our HF RnPDFs to see whether the data constraints point to the same effect. Yet, we would like to stress that the pp data are not well accounted for by fixed-order NLO computations at large (positive and negative) η_12. We refer to a neat discussion by Eskola et al. [120] for more details and for the possibility that the deviation may be accounted for by modifying the proton PDFs. The discrepancies can easily be of the order of the expected size of the nuclear effects, and this happens precisely in the kinematical regions where one would need to look for such a depletion at large x or for shadowing at low x.
If we were to bypass a fixed-order analysis by using the same method we proposed in [55], parametrizing the amplitude squared, |A|^2, without enough kinematical lever arm in its determination, we would probably hide such a disagreement in |A|^2 and our predictions could well be wrong. Yet, in principle, what matters most in our nPDF-based NMF predictions is the relevant (x_1, x_2, Q) region probed by the scattering. We agree that it is somewhat unlikely that the observed disagreement is the signal of a phenomenon significantly altering the kinematics of the scattering; this is however not completely excluded if it comes from kinematically enhanced next-to-next-to-leading-order (NNLO) QCD corrections, as it appears that the di-jet data are better described at NNLO [121]. Yet, until this issue is settled, we prefer to refrain from performing NMF predictions for di-jets. We leave this to the interested reader, since our RnPDFs are now usable by anyone thanks to our released LHAPDF grids. We however suggest a careful reading of [120,121].

V. CONCLUSIONS
We have presented a follow-up of our reweighting study [1] of two of the recent global fits of nuclear parton densities at NLO (nCTEQ15 [4] and EPPS16 [5]), which consists in the release of the corresponding LHAPDF grids with Hessian uncertainties. These will allow anyone to employ the constraints encoded in the HF experimental data set used for the reweighting in order to compute observables like cross sections or NMFs.
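For a reader unfamiliar with Hessian error sets, the uncertainty on any quantity computed from such grids follows from the standard symmetric Hessian master formula over paired eigenvector members. The sketch below illustrates the formula with placeholder member values; in practice the member values would be read from the released LHAPDF grids.

```python
# Minimal sketch of the symmetric Hessian master formula, assuming the
# standard member ordering [f0, f1+, f1-, f2+, f2-, ...]. The numbers
# below are placeholders, not values from the actual grids.
import math

def hessian_uncertainty(members):
    """Return (central, delta) where delta is the symmetric Hessian
    uncertainty: delta = 0.5 * sqrt( sum_k (f_k+ - f_k-)^2 )."""
    f0, errs = members[0], members[1:]
    s = 0.0
    for fp, fm in zip(errs[0::2], errs[1::2]):
        s += (fp - fm) ** 2
    return f0, 0.5 * math.sqrt(s)

# Placeholder values for, e.g., x*g(x, Q) at one (x, Q) point with
# two eigenvector directions:
central, delta = hessian_uncertainty([0.50, 0.55, 0.47, 0.52, 0.46])
```

The same formula applies unchanged to any observable computed member by member (cross sections, NMFs), not only to the PDF values themselves.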
We had indeed focused on HF LHC data, whose predicted yields are very sensitive to the factorization scale, µ_F. Since the magnitude of the nuclear effects is also strongly µ_F dependent, this resulted in a significant dependence of the reweighting on the chosen µ_F. For instance, if one takes a value smaller than the default one (the transverse mass of the produced particle), the resulting shadowing in the RnPDFs will be weaker. On the contrary, if one reweights with a larger µ_F, the resulting RnPDFs will exhibit a stronger shadowing compared to the former case.
This naturally induced a significant uncertainty, which we had found [1] to be as large as that of the resulting D- and J/ψ-RnPDFs themselves. As we noted, this is less problematic for the bottom-quark sector, but the corresponding data are not yet precise enough to yield valuable constraints. We have thus generated several RnPDF grids to be used with correlated µ_F choices when performing predictions for systems similar to those used in the reweighting. In addition, we release here new RnPDFs with a combined scale uncertainty, to be used when one assumes no correlation between these scales, e.g. to predict isolated-photon or Υ NMFs from the D-RnPDFs.
We have thus found it useful to show a selection of comparisons with experimental data to illustrate how to use our RnPDF grids and what to expect from them. In most cases, we find a very good agreement with the LHC data. This is of course expected for the 8 TeV J/ψ data, since one set by LHCb was already included in our reweighting. For the other 8 TeV data, the agreement indicates that the x dependence of the RnPDFs correctly captures the energy dependence of R_pPb and, to some extent, highlights the coherence between the different LHC data sets. In the case of the 200 GeV RHIC data, the agreement we have obtained is even more striking and shows that the x dependence of the RnPDFs provides a good description of the gluon distribution up to the upper end of the shadowing region.
Before a full NLO fit using these data is performed, we believe that our RnPDFs can safely be used when the conventional nPDFs show uncertainties too large to draw any physics conclusions. Yet, one should keep in mind that, if a strong disagreement is found, it will always be necessary to ask whether a new fitting procedure, which by construction has more freedom to describe different observables, would not yield a global description including these new data with an acceptable global χ^2. As such, our released RnPDFs should be considered as handy tools for observables which are known to be well accounted for by the effects encoded in nPDFs, as well as useful exploratory tools for new ones.
APPENDIX

In this appendix, we provide a selection of additional plots of the HF RnPDFs, which give more details on the obtained results and can also be compared with the results of Ref. [1]. We show results for the nCTEQ15 and EPPS16 D, B → J/ψ, and J/ψ RnPDFs. We refrain from showing the results with the Υ(1S) data since, due to their large uncertainties, the impact on the original nPDFs was very limited, see Ref. [1].

In Figs. 16, 17, and 18, we present the gluon NMFs obtained from the reweightings with the D, B → J/ψ, and J/ψ meson data, respectively. Figure (a) always corresponds to the reweighting of the nCTEQ15 nPDFs and Fig. (b) to that of the EPPS16 nPDFs. The upper rows of Figs. (a) and (b) always show the 68% CL results (which can be directly compared to the figures presented in Ref. [1]); the lower rows provide the same results but with PDF uncertainties calculated at 90% CL.

In Figs. 19 and 20, we present results for the gluon, up-quark, and anti-down-quark distributions (the other distributions exhibit analogous features). In these plots, we display the distributions obtained from the reweighting with the different scale choices {µ_0, 2µ_0, 0.5µ_0}, as well as the distribution with combined scale uncertainties. For better visibility, all PDFs are scaled by the central value of the combined distribution. As expected, the main impact is on the gluon distribution; the quark PDFs are mostly unchanged after including the HF data. We can also see that the uncertainty of the set with the combined scale uncertainty is smaller than the envelope of the error bands obtained with the individual scale choices, but larger than the uncertainty for the central scale choice. Generally, whenever possible, we recommend using the PDF sets obtained with a specific scale choice, correlated with the scale used in the theoretical calculation.
For observables where there is no reason to correlate the scales, one can either take the envelope of the three PDF sets or use the combined PDF set to restrict the number of necessary evaluations.
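The 68% and 90% CL bands shown in the appendix figures are related, under the usual Gaussian assumption for Hessian errors, by the ratio of the corresponding normal quantiles. A minimal sketch of this rescaling, assuming Gaussian-distributed uncertainties:

```python
# Sketch of the Gaussian rescaling between 68% CL (1-sigma) and 90% CL
# PDF uncertainties, as used when quoting Hessian errors at either level.
Z90_OVER_Z68 = 1.645  # ratio of the 90% CL to the 1-sigma Gaussian quantile

def to_90cl(delta_68):
    """Rescale a 68% CL (1-sigma) uncertainty to 90% CL."""
    return Z90_OVER_Z68 * delta_68

def to_68cl(delta_90):
    """Rescale a 90% CL uncertainty back to 68% CL (1-sigma)."""
    return delta_90 / Z90_OVER_Z68

# Placeholder: a 68% CL uncertainty of 0.05 on some PDF value
delta90 = to_90cl(0.05)
```

This is the same factor commonly used to convert between native 90% CL Hessian sets and 1-sigma bands when comparing different PDF families.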