Inclusive, prompt and non-prompt J/ψ identification in proton-proton collisions at the Large Hadron Collider using machine learning

I. INTRODUCTION
Over the last couple of decades, two of the world's most powerful particle accelerators, the Large Hadron Collider (LHC) at CERN and the Relativistic Heavy-Ion Collider (RHIC) at Brookhaven National Laboratory, USA, have studied the hot and dense state of deconfined partons, known as the quark-gluon plasma (QGP), by colliding heavy ions at ultra-relativistic speeds. These studies are crucial to understand the physics of the early Universe and the phase transition between partonic and hadronic matter. Due to the nature of the strong interaction, the QGP is extremely short-lived. Therefore, to study its properties, several indirect signatures are investigated. One such signature is the melting of heavy quarkonia (qq̄) in the QGP, also known as quarkonia suppression, where the color force responsible for binding the quarks into hadrons is screened in the presence of deconfined partons [1][2][3][4][5][6]. The production of heavy quark pairs (cc̄ and bb̄) follows perturbative QCD (pQCD) calculations, whereas the evolution to a bound colorless state is a nonperturbative process. Due to their high mass, heavy quarks are produced via partonic interactions in the early stages of the collision and experience the full evolution of the QGP. Thus, they are sensitive probes to study the properties of the QGP and the theory of the strong interaction [7].
The J/ψ is the lightest charm vector meson, a bound state of a charm and an anti-charm quark (cc̄). Studies of the J/ψ meson in heavy-ion collisions provide genuine testing grounds for QCD [8,9].
To better understand the underlying production mechanism, cold nuclear matter effects, and the influence of the quark-gluon plasma, baseline measurements are also performed in proton-proton (pp) and proton-nucleus (p-A) collisions [10,11]. Inclusive J/ψ production has contributions from three sources. The first is direct prompt production, in which the J/ψ is produced directly in the hadronic/nuclear collision; the second is indirect prompt production via feed-down from directly produced higher charmonium states (i.e., from χc and ψ(2S)); and the third is non-prompt production from the decay of beauty hadrons [12,13]. Figure 1 depicts the production topologies of the J/ψ, where ⃗L denotes the vector joining the J/ψ decay vertex to the primary vertex. As Fig. 1 shows, the prompt J/ψ is produced closer to the primary vertex than the non-prompt J/ψ, for which b-hadrons fly a finite distance before decaying to a J/ψ via the weak interaction. Since the rest mass of the J/ψ is larger than that of the other decay daughters of the beauty hadron, the momentum of the J/ψ is close to that of the decaying beauty hadron; thus, the non-prompt J/ψ gives a better handle to study the production of these beauty hadrons [14]. Another important motivation for separating non-prompt from prompt J/ψ is that their spin-state polarizations are conceptually and effectively different [15,16]. The measurement of non-prompt J/ψ can also provide a direct determination of the nuclear modification of beauty hadrons.
In experiments, the J/ψ is reconstructed through its electromagnetic decay to lepton pairs, in either the e+ + e− or µ+ + µ− decay channel. By reconstructing the invariant mass spectra of these lepton pairs (mee or mµµ), one can extract the inclusive J/ψ signal by fitting a suitable signal function and subtracting the background continuum. Usually, a Crystal Ball function [17] is used as the signal function. To further estimate the non-prompt contribution to the inclusive J/ψ signal, one has to rely on the non-prompt production topology. As beauty hadrons undergo weak decay, the resulting J/ψ originates from a decay vertex displaced from the primary interaction vertex (as shown in Fig. 1). For this, the pseudoproper decay length (cτ) of the candidate is estimated, which is given in Eq. 2. The cτ probability density functions (p.d.f.) for the prompt (Fprompt(cτ)) and non-prompt (FB(cτ)) production can be obtained separately from Monte Carlo simulations. Using an unbinned two-dimensional likelihood fit, as described in detail in Refs. [8,12], the ratio of non-prompt to inclusive J/ψ production (fB) can be estimated, which can then be used to calculate the non-prompt and prompt production cross sections (σJ/ψ), as given below.
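Since fB is defined as the ratio of non-prompt to inclusive J/ψ production, the prompt and non-prompt cross sections follow from the inclusive one through the standard decomposition (the exact equation of the cited analyses is not reproduced here):

```latex
\sigma_{J/\psi}^{\text{non-prompt}} = f_{B}\,\sigma_{J/\psi}^{\text{inclusive}},
\qquad
\sigma_{J/\psi}^{\text{prompt}} = \left(1 - f_{B}\right)\sigma_{J/\psi}^{\text{inclusive}}.
```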
Machine learning (ML) techniques have been in use in nuclear and particle physics for the last couple of decades [18,19]. Recently, with the advancement of superior hardware and smart algorithms, ML has gained well-deserved popularity in the big-data community. By construction, a machine learning model is trained to learn the mapping from the input features to the output class. The algorithm learns the correlations between input and output by optimizing the model parameters on the training data. This is practically useful when the mapping function is not trivial or cannot be defined. In such cases, machine learning performs the mapping in a faster and more efficient manner without compromising the quality of the result. The successful application of machine learning techniques in collider experiments is well proven by now. It has been used to tackle a wide variety of problems, including impact parameter estimation [20][21][22][23][24], particle identification and track reconstruction [25][26][27], jet tagging [28][29][30][31], and anisotropic flow measurements [32][33][34].
Interested readers may refer to some of the recent reviews on machine learning in high energy physics [35][36][37][38]. In this work, for the first time, machine learning techniques are implemented to separate prompt and non-prompt dimuon pairs from the background to obtain a better identification of the J/ψ signal for different production modes. The study has been performed in pp collisions at √s = 7 and 13 TeV simulated using PYTHIA8. Machine learning models such as XGBoost and LightGBM are explored. Some of the motivations for this work are as follows. This technique provides a faster and more efficient method to identify the inclusive, prompt, and non-prompt J/ψ signal than the conventional template-fitting method discussed above. It can be applied to identify the J/ψ meson in the entire range of transverse momentum (pT) and rapidity (y), thus allowing us to probe the production fraction (fB) of non-prompt J/ψ easily in very fine bins of pT and y. This method has another advantage: since it directly identifies the dimuon pairs, it can tag each of them to one of the three sources, prompt, non-prompt, or background. This dimuon-level tagging can help in studying many aspects of charmonia and bottomonia production that are almost impossible to access using conventional methods. One such application would be the effect of polarization on prompt and non-prompt J/ψ production. Apart from these motivations, the novelty of this work also lies in the fact that the separation of prompt from non-prompt J/ψ production has never before been attempted using a machine learning approach.
The paper is organized as follows. It begins with a brief introduction in Sec. I. The methodology, including the data generation using PYTHIA8 and the description of the machine learning models, is described in Sec. II. The training, evaluation, and quality assurance of the models are discussed in Sec. III, followed by the results and discussions in Sec. IV. Finally, the paper concludes by summarizing the findings in Sec. V.

II. METHODOLOGY
The descriptions of pQCD-based particle production, such as jets and charm and bottom hadrons, are well handled by the PYTHIA8 Monte Carlo model. In the current work, we use the PYTHIA8 event generator to simulate the data sets required to train the machine learning models to identify the prompt and non-prompt dimuon signals from the background dimuon pairs. This section provides a brief description of PYTHIA8, along with the different models used in the study.

FIG. 2. Transverse momentum spectra of inclusive, prompt, and non-prompt J/ψ from PYTHIA8 compared with LHCb measurements [43] in pp collisions at √s = 13 TeV. A constant scaling of 0.47, 0.47, and 1.0 is applied to the PYTHIA8 results for inclusive, prompt, and non-prompt production, respectively.
A. PYTHIA8

PYTHIA is a pQCD-based Monte Carlo event generator used to simulate ultra-relativistic pp collisions at RHIC and LHC energies. PYTHIA8 contains a library of soft and hard processes and models for initial- and final-state parton showers, multiple parton-parton interactions, beam remnants, string fragmentation, and particle decays [39,40]. PYTHIA8 is an improved version of PYTHIA6 in which 2 → 2 hard processes are implemented along with MPI-based scenarios to produce charm and beauty hadrons. In this study, we have used the 4C tune of PYTHIA8 (see Ref. [41] for details), version 8.308, to simulate 20 billion events with the inelastic and non-diffractive components (HardQCD:all = on) of the total collision cross section in pp collisions at √s = 13 TeV and 1 billion minimum bias events in pp collisions at √s = 7 TeV. The simulation involves a pT cut-off of pT > 0.5 GeV/c (using PhaseSpace:pTHatMinDiverge, available in PYTHIA) to avoid the divergence of QCD processes in the limit pT → 0. Since this study involves charm and beauty quark production, we have enabled all charmonia and bottomonia production processes (using "Charmonium:all = on" and "Bottomonium:all = on") in PYTHIA8. In addition, we have allowed the interaction vertex to spread according to a simple Gaussian distribution (Beams:allowVertexSpread = on), where the offset and sigma of the vertex spread along each of the Cartesian axes are taken from Ref.
[42], and are listed in Table I. Here, Vx, Vy, and Vz are the beam interaction vertex distances from the global origin (0,0,0) in the x, y, and z directions, respectively. We have applied an additional cut on the z-vertex, |Vz| < 10 cm, to be consistent with the experiments. The produced J/ψ are allowed to decay in the dimuon channel only, i.e., J/ψ → µ+ + µ−, and all other decay modes of the J/ψ are switched off.

TABLE I. Offset (mean) and width (sigma) of the interaction vertex spread.

     mean (mm)   sigma (mm)
Vx   -0.35       0.23
Vy    1.63       0.27
Vz   -4.0        40.24

Figure 2 shows the comparison of transverse momentum spectra for inclusive, prompt, and non-prompt J/ψ from PYTHIA8 with the corresponding measurements reported by LHCb [43]. All track cuts for muons and dimuon pairs are kept the same as reported in Ref. [43]. A factor of 0.47 is applied to the PYTHIA8 inclusive and prompt J/ψ yields, as PYTHIA8 overestimates the experimental data. PYTHIA8 follows the experimental trend of the pT spectra up to pT < 6 GeV/c and starts to deviate towards higher values of pT. One can note that the yield of J/ψ from b-hadron decays is almost ten times lower than the prompt production; however, this difference in production yield between prompt and non-prompt J/ψ becomes smaller towards high pT. The overall trend produced by PYTHIA8 with the tunes and settings mentioned above is reasonable when compared to the experiment. The scaling factors are applied only in this plot, to match the trend of the experimental data. For all other plots in this work, no such scaling is used and the results are taken directly from PYTHIA8.

B. Machine learning models
The realm of ultra-relativistic collisions at the LHC and RHIC produces complex and non-linear systems that demand powerful analysis techniques. These analysis techniques may sometimes require superlative computational facilities yet provide results with significant uncertainties. On the other hand, with the advent of machine learning tools, one can extract insightful results from a vast amount of experimental data with ease and smaller uncertainty by learning the correlation between the input and target variables. In collider physics experiments, ML models can be exploited in many ways. One of the complex problems in collider experiments is understanding the different underlying physical processes that contribute to particle production. However, the final-state particles sometimes carry distinct kinematic signatures that can help identify their production mechanism and parent particles. For example, in experiments, identifying prompt and non-prompt J/ψ mesons relies on the statistical separation method already described in Sec. I. Using machine learning, however, one can train a model on some of the kinematic features of the decay daughters to easily reject uncorrelated pairs and identify the signal and the source of the parent J/ψ. Popular ML models include gradient-boosting decision-tree-based regressions and classifications, owing to their simplicity, robustness, and efficiency in handling extensive data [44,45]. The name gradient boosting derives from the combination of the gradient descent algorithm and the boosting method [45]. In this study, we apply gradient-boosted decision-tree-based ML techniques to segregate prompt and non-prompt dimuon pairs from the uncorrelated background using the kinematics of all final-state dimuon (µ+ + µ−) pairs, as discussed below.

XGBoost
XGBoost (XGB) [46] stands for Extreme Gradient Boosting; it is one of the most popular and widely used ML algorithms due to its efficiency in handling large data sets and its outstanding performance in classification and regression problems. It is an upgraded version of gradient-boosting decision trees (GBDT). It has several enhancements, such as parallel computing and tree pruning, to speed up the training process, which lets it handle large datasets in a reasonable amount of time. XGB also provides a wide variety of hyperparameters that can be optimized for better model performance [47].

LightGBM
Light Gradient Boosting Machine (LightGBM or LGBM) [48] is another enhanced version of GBDT with improved speed and performance. Along with parallel computing, it uses leaf-wise rather than level-wise splitting of the tree to increase the model's speed and reduce memory usage. Traditional level-wise splitting of a tree leads to the formation of unnecessary nodes that carry very little information; these nodes use up memory but do not contribute to the overall learning process. In contrast, splitting a tree leaf-wise reaches the most informative split faster and thus reduces the number of nodes formed, making the training process faster [49].

III. TRAINING AND EVALUATION
In this section, we discuss our machine learning models in detail. We begin with the description of the inputs to the models and the preprocessing of the data set, and then discuss the model architecture. Finally, we discuss the training and evaluation process with the required quality assurance figures.

A. Input to the machine
The training of the ML models requires a data set with well-correlated input and target variables. Here, the invariant mass of the reconstructed dimuon pairs (mµµ) can significantly help in separating the uncorrelated background from the signal dimuons coming from the J/ψ meson. On the other hand, prompt and non-prompt production of the J/ψ have different production topologies. The prompt J/ψ is produced close to the primary vertex, whereas a J/ψ formed from the weak decay of a b-hadron has a displaced decay vertex with a finite decay length with respect to the primary interaction vertex. One quantity that differentiates the topological production of the J/ψ by taking the production vertex into account is the pseudoproper decay length, defined in Eq. 2 below [50].
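Assuming the conventional definition used in, e.g., Ref. [50], the pseudoproper decay length of a J/ψ candidate can be written as

```latex
c\tau \;=\; \frac{c\, m_{J/\psi}\; \vec{L}\cdot\vec{p}_{\mathrm{T}}}{\left|\vec{p}_{\mathrm{T}}\right|^{2}}, \tag{2}
```

where ⃗pT is the transverse momentum of the dimuon pair; the remaining symbols are defined below.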
Here, ⃗L is a vector pointing from the primary vertex to the J/ψ decay vertex, c is the speed of light, and mJ/ψ is the mass of the J/ψ meson, taken from the Particle Data Group (PDG) [51]. For each dimuon pair, we use its invariant mass (mµµ), transverse momentum (pT,µµ), pseudorapidity (ηµµ), and pseudoproper decay length (cτ) as the input to the models. All of these inputs are accessible in experiments as well. Following Eq. 2, we need the quantity ⃗L from PYTHIA8, which is obtained using the method described below. One can calculate the J/ψ decay vertex for the dimuon pairs using Eq. 3.
Here, Sx stands for the reconstructed J/ψ decay vertex in the x-direction, for two particles with masses m1 and m2 that fly off from the J/ψ decay vertex to distances x1 and x2 in times t1 and t2, with momenta px,1 and px,2. Similar expressions hold for Sy and Sz. After obtaining the coordinates of the J/ψ decay vertex, one can estimate ⃗L = ⃗S − ⃗V. Here, ⃗V = (Vx, Vy, Vz) are the primary vertex coordinates defined in Sec. II A, and ⃗S = (Sx, Sy, Sz) is the J/ψ decay vertex position for the reconstructed dimuon pairs, obtained using Eq. 3.
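As an illustration, once the primary vertex ⃗V and the reconstructed decay vertex ⃗S are known, ⃗L and cτ follow directly. Below is a minimal pure-Python sketch (hypothetical function and variable names, assuming the conventional cτ definition and working in the transverse plane):

```python
# Sketch: pseudoproper decay length of a dimuon candidate.
# Assumes vertices in cm, momenta in GeV/c, mass in GeV/c^2, and the
# conventional definition c*tau = m_Jpsi * (L . pT) / |pT|^2.

M_JPSI = 3.0969  # J/psi mass in GeV/c^2 (PDG)

def pseudoproper_decay_length(primary_vtx, decay_vtx, px, py):
    """Return c*tau in the same length units as the vertices."""
    # Transverse decay vector L = S - V (primary vertex V to decay vertex S)
    lx = decay_vtx[0] - primary_vtx[0]
    ly = decay_vtx[1] - primary_vtx[1]
    pt2 = px * px + py * py
    return M_JPSI * (lx * px + ly * py) / pt2

# A prompt candidate decays at the primary vertex -> c*tau = 0.0
print(pseudoproper_decay_length((0.0, 0.0), (0.0, 0.0), 2.0, 1.0))
# A displaced (non-prompt-like) candidate gives a positive c*tau
print(pseudoproper_decay_length((0.0, 0.0), (0.05, 0.025), 2.0, 1.0))
```

A candidate decaying exactly at the primary vertex yields cτ = 0, while a vertex displaced along the flight direction yields a positive cτ, which is the handle used to tag non-prompt production.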
The target labels for the prompt J/ψ, non-prompt J/ψ, and background dimuon pairs are represented with the numeric tags 0, 1, and 2, respectively. For the training of the model, the input features are obtained for opposite-sign dimuon pairs over the whole pseudorapidity and transverse momentum range in minimum bias pp collisions at √s = 13 TeV using PYTHIA8.

FIG. 5. Training importance scores (%) of the pseudoproper decay length (cτ), reconstructed dimuon mass (mµµ), transverse momentum (pT,µµ), and pseudorapidity (ηµµ) for LGBM (orange) and XGB (blue).

B. Preprocessing and training
Classification models need to be trained on a similar number of training instances for each of the output classes; these instances are called training examples. Any imbalance in the examples during training may bias the output towards the majority class. This is often called the "class imbalance problem", where the model shows high accuracy simply by predicting the majority class. In this study, the majority class is the background, followed by the prompt J/ψ; the ratio of background : prompt : non-prompt is ≈ 20:10:1. Thus, the models would be trained mainly on the background data and would mostly misclassify the prompt and non-prompt J/ψ. To overcome this imbalance, sampling techniques such as undersampling and oversampling are used. Undersampling removes some instances of the majority class, while oversampling adds instances to the minority classes to balance the number of data points in each class. A drawback of undersampling is that it leads to data loss, since instances from the majority class are discarded. Therefore, we prefer to balance the data sets by oversampling. A random oversampling technique from the imblearn library [52] is applied to the training set, wherein both minority classes (prompt and non-prompt) are resampled to match the majority class (background). We use 90% of the entire data for training and the remaining 10% for testing. The resampling is performed on the training set, which solves the class imbalance issue, and then 10% of the training sample is used as the validation set. We now proceed to define the model architecture and the training process. Model parameters such as the loss function, learning rate, subsample, number of trees, and maximum depth are tuned for each model. The best parameters are selected through a grid search and are listed in Table II. In Table II, the learning rate is a hyperparameter that governs the pace at which the model learns and
updates its weights. The subsample indicates the fraction of the data that the model samples before growing trees; this occurs in every boosting iteration and prevents overfitting. Increasing the maximum depth makes the model more complex. The objective is the function that guides the training process; it quantifies the model's performance and reduces the prediction error. In both models, we use the softmax objective for multiclass classification, available as 'multi:softmax' and 'multiclass' for XGB and LGBM, respectively [47,49]. The metric is the function that evaluates the model's performance in each training iteration. In both models, we use the log-loss metric for multiclass classification, the definition of which can be found in Refs. [47,49]. All other hyperparameters are kept at their default values for both models.
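The preprocessing step above can be sketched in pure Python: an illustrative re-implementation of random oversampling (mimicking imblearn's RandomOverSampler, not the library itself), followed by placeholder hyperparameter dictionaries. The numeric values in the dictionaries are assumptions; the actual tuned values are those of Table II.

```python
import random
from collections import Counter

def random_oversample(features, labels, seed=42):
    """Duplicate minority-class examples at random until every class
    matches the majority-class count (illustrative re-implementation
    of imblearn's RandomOverSampler)."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    out_x, out_y = list(features), list(labels)
    by_class = {}
    for x, y in zip(features, labels):
        by_class.setdefault(y, []).append(x)
    for cls, xs in by_class.items():
        for _ in range(target - counts[cls]):
            out_x.append(rng.choice(xs))  # sample with replacement
            out_y.append(cls)
    return out_x, out_y

# Toy labels with the ~20:10:1 background:prompt:non-prompt imbalance
# (tags: 0 = prompt, 1 = non-prompt, 2 = background, as in the text)
y = [2] * 20 + [0] * 10 + [1] * 1
x = [[float(i)] for i in range(len(y))]
bx, by = random_oversample(x, y)
print(Counter(by))  # all three classes now have 20 examples each

# Placeholder hyperparameters showing the parameters that were tuned;
# only the objective/metric names are confirmed by the text.
xgb_params = {"objective": "multi:softmax", "eval_metric": "mlogloss",
              "num_class": 3, "learning_rate": 0.1, "subsample": 0.8,
              "max_depth": 6}
lgbm_params = {"objective": "multiclass", "metric": "multi_logloss",
               "num_class": 3, "learning_rate": 0.1, "subsample": 0.8,
               "max_depth": 6}
```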

C. Quality assurance
Figure 3 shows the learning curves for XGB (top) and LGBM (bottom) for both training and validation, i.e., the evolution of the loss as a function of the number of decision trees. For good training, the loss decreases with the increase in the number of decision trees and saturates at a particular loss value, indicating that training can be stopped. Another essential training benchmark can be deduced by comparing the curves for training and validation. For reasonable training, the learning curves for training and validation should be close; a large difference between them can arise from overfitting or underfitting. One can infer from Fig. 3 that the loss values for validation and training decrease with the increase in the number of trees and saturate at around 25 trees for XGB and around 45 trees for LGBM. In addition, for both XGB and LGBM, the curves for validation and training lie on top of each other, indicating no overfitting by the models.
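The saturation criterion described above is what early stopping automates. A pure-Python sketch of the logic (illustrative only; both XGBoost and LightGBM provide built-in early-stopping options):

```python
def stopping_round(val_losses, patience=5, tol=1e-4):
    """Return the boosting round at which training would be stopped:
    the round of the best validation loss, once the loss has failed to
    improve by more than `tol` for `patience` consecutive rounds."""
    best, best_round, stale = float("inf"), 0, 0
    for i, loss in enumerate(val_losses):
        if loss < best - tol:
            best, best_round, stale = loss, i, 0
        else:
            stale += 1
            if stale >= patience:
                return best_round
    return best_round

# A loss curve that drops quickly and then saturates (as in Fig. 3)
curve = [1.0, 0.6, 0.4, 0.3, 0.25, 0.24, 0.24, 0.24, 0.24, 0.24, 0.24]
print(stopping_round(curve))  # -> 5, the round where the curve flattens
```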
Another essential benchmark of classification models can be inferred from the confusion matrix, sometimes called the error matrix. Each row of a typical confusion matrix represents the instances of a true class, while each column represents the instances of a predicted class. The confusion matrix as a whole represents the confusion of the model in predicting the different classes. In Fig. 4, the normalized confusion matrix is shown for XGB and LGBM with the three output classes, i.e., prompt, non-prompt, and background. Both XGB and LGBM have similar predictions; the background and the non-prompt dimuon pairs are identified correctly with 100% accuracy; however, the models misidentify 2% of the dimuons coming from the prompt J/ψ as non-prompt dimuons. As the ratio of prompt to non-prompt is around 10:1, this discrepancy in the identification has little effect on the prompt yield, but it may enhance the non-prompt production yield. Initially, this 2% misclassified yield, leaking from the prompt J/ψ to the non-prompt J/ψ, was suspected to come from the indirect prompt production, i.e., decays from higher excited charmonium states, since these might not be produced and decay exactly at the primary vertex and could therefore travel a finite pseudoproper decay length before decaying. This probable cause is discarded, as a similar prediction is obtained on a data set containing only indirectly produced prompt J/ψ. Hence, this misclassification error is inherent to the model itself.
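The row-normalized confusion matrix itself is straightforward to build. A small pure-Python sketch with a toy sample mimicking the 2% prompt-to-non-prompt leakage (the sample sizes are illustrative, not the paper's):

```python
def normalized_confusion(y_true, y_pred, n_classes=3):
    """Row-normalized confusion matrix: rows are true classes,
    columns are predicted classes (as in Fig. 4)."""
    m = [[0.0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1.0
    for row in m:
        total = sum(row)
        if total:
            for j in range(n_classes):
                row[j] /= total
    return m

# Toy example: 2% of prompt (class 0) candidates are tagged as
# non-prompt (class 1); classes 1 and 2 are tagged perfectly.
y_true = [0] * 100 + [1] * 10 + [2] * 200
y_pred = [0] * 98 + [1] * 2 + [1] * 10 + [2] * 200
cm = normalized_confusion(y_true, y_pred)
print(cm[0])  # first row is approximately [0.98, 0.02, 0.0]
```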
Figure 5 shows the percentage importance score of each feature during training for both the XGB and LGBM models. In the context of decision trees, the importance score of a feature is defined as the number of times the feature is used to split a node. The importance score shown in the figure indicates how useful each feature is during the construction of the boosted decision trees. As one can infer from the figure, the input features that carry the most information about the production species of the reconstructed dimuon pairs are mµµ and cτ; hence, these are the crucial features for this classification task. In the LGBM model, the order of relative importance for the classification task is mµµ > cτ > pT,µµ > ηµµ. In contrast, XGB requires only mµµ and cτ to make a prediction and discards the contribution of pT,µµ and ηµµ. Another lesson from this figure is that, for the same classification task, different models can learn from the same input features with different importance scores. However, for this classification task, mµµ and cτ hold the highest importance scores in both models.

IV. RESULTS
Figure 6 shows the transverse momentum (pT) spectra for inclusive, prompt, and non-prompt J/ψ in minimum bias pp collisions at √s = 13 TeV at midrapidity (|y| < 0.9) and forward rapidity (2.5 < y < 4). Additionally, the pT spectra for pp collisions at √s = 7 TeV at midrapidity (|y| < 0.9) are also shown. These results include PYTHIA8 (true) and the predictions from both trained models, i.e., XGB and LGBM, which are trained with minimum bias pp collision data at √s = 13 TeV. Here, the J/ψ → µ+ + µ− channel is used to reconstruct the pT spectra. At first glance, one notices that the J/ψ produced from b-hadron decays have a significantly lower yield in the low-pT region than the prompt J/ψ. However, this difference in their production yield tends to decrease as one moves towards high pT. These observations using PYTHIA8 are consistent with the experimental measurements [50,53,54]. Both machine learning models, XGB and LGBM, can accurately identify the inclusive and prompt dimuon pairs originating from the J/ψ, and thus their predictions for the pT spectra match well with the results obtained from PYTHIA8 (true). However, some discrepancy arises when the XGB and LGBM models identify the dimuon pairs coming from the non-prompt J/ψ: both models consistently overestimate the non-prompt J/ψ yield. The predictions from the LGBM model are slightly worse at low pT for the midrapidity case compared to the XGB model, whereas in the intermediate- to high-pT region both models are fairly comparable in accuracy. As discussed in the description of Fig. 4, this overestimation of the non-prompt J/ψ yield by both models is a direct consequence of the misidentification of dimuons from prompt J/ψ as non-prompt dimuons.
In addition, both the XGB and LGBM models are found to be robust for energy-dependent predictions of the inclusive, prompt, and non-prompt J/ψ pT spectra, as seen in Fig. 6 for pp collisions at √s = 7 TeV. It is important to note that the models are trained with √s = 13 TeV data, yet they can still make predictions for √s = 7 TeV. While XGB retains its accuracy over the entire pT range for the inclusive and prompt J/ψ in pp collisions at √s = 7 TeV, a discrepancy for the non-prompt case similar to that at √s = 13 TeV is observed. On the other hand, although LGBM retains its accuracy for the inclusive and prompt J/ψ, it deviates markedly from the true values towards the lower transverse momentum region. The success of the models in learning and predicting the energy dependence of inclusive, prompt, and non-prompt production demonstrates their robustness and accuracy. This can be attributed to the fact that most of their learning comes from the invariant mass and the pseudoproper decay length of the dimuon pairs, which are independent of the collision energy.
Figure 7 shows the fraction of J/ψ produced from b-hadron decays (fB) as a function of transverse momentum at midrapidity in minimum bias pp collisions at √s = 13 TeV using PYTHIA8. The results are compared with the predictions from XGB and LGBM, and the experimental data from ALICE [50] are included. The trend of fB as a function of pT is similar to the experimental observations: the value of fB increases with pT in the range 5.0 ≤ pT ≤ 20.0 GeV/c, while it remains almost flat and independent of pT for pT < 5.0 GeV/c and pT > 20.0 GeV/c. Because the machine learning models directly identify the source of each dimuon pair, it becomes easy to estimate fB in very fine bins of pT, which leads to this observation. As the production fraction of non-prompt J/ψ becomes larger at high pT, it is natural that the difference in the pT spectra between prompt and non-prompt J/ψ becomes smaller at high pT, as seen in Fig. 6.
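Because every dimuon pair carries a tag, estimating fB in arbitrarily fine pT bins reduces to simple counting. A pure-Python sketch (hypothetical names; tags follow the paper's 0 = prompt, 1 = non-prompt convention):

```python
def f_b_in_pt_bins(candidates, bin_edges):
    """candidates: list of (pT, tag) with tag 0 = prompt, 1 = non-prompt.
    Returns f_B = non-prompt / (prompt + non-prompt) per pT bin,
    or None for empty bins."""
    n_bins = len(bin_edges) - 1
    nonprompt = [0] * n_bins
    total = [0] * n_bins
    for pt, tag in candidates:
        for i in range(n_bins):
            if bin_edges[i] <= pt < bin_edges[i + 1]:
                total[i] += 1
                nonprompt[i] += (tag == 1)
                break
    return [n / t if t else None for n, t in zip(nonprompt, total)]

# Toy sample in which f_B grows with pT, as in Fig. 7
cands = ([(1.0, 0)] * 9 + [(1.0, 1)] * 1 +
         [(10.0, 0)] * 6 + [(10.0, 1)] * 4)
fb = f_b_in_pt_bins(cands, [0.0, 5.0, 20.0])
print(fb)  # [0.1, 0.4]
```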
Figure 8 shows the rapidity spectra for inclusive, prompt, and non-prompt J/ψ in minimum bias pp collisions at √s = 13 TeV and √s = 7 TeV using PYTHIA8, including the predictions from the XGB and LGBM models. The inclusive and prompt J/ψ are found to have a flat, rapidity-independent yield in the region |y| < 2.5, after which the yield starts to decrease. For the non-prompt case, the yield is independent of rapidity only over a smaller rapidity coverage, i.e., |y| < 1.0. These features of the rapidity spectra for the different production modes of the J/ψ in PYTHIA8 are consistent with the experimental measurements reported in Ref. [50]. Interestingly, the predictions from both XGB and LGBM agree with the PYTHIA8 values for the inclusive and prompt J/ψ, while the values for non-prompt J/ψ are slightly overestimated. This study over a broad range of rapidity demonstrates the usefulness and validity of the machine learning models used. However, an experimental measurement involving muons is not practical at midrapidity, where the experiment is a multi-purpose one that deals with particle identification, such as ALICE at the LHC and STAR at RHIC.

FIG. 9. Top panels: normalized pT-integrated inclusive, prompt, and non-prompt J/ψ yields as a function of the normalized charged-particle pseudorapidity density at mid-pseudorapidity, with multiplicity selection in the V0 region (V0M), for minimum bias pp collisions at √s = 13 TeV (left) and √s = 7 TeV (right) using PYTHIA8, including the predictions from the XGB and LGBM models and a comparison with experimental data measured by ALICE [55]. The middle panels show the ratio of XGB to PYTHIA8, and the bottom panels show the ratio of LGBM to PYTHIA8.
One can observe that the magnitude of disagreement between the XGB and LGBM predictions and the true non-prompt J/ψ yield from the simulation is similar to that in the pT spectra shown in Fig. 6, for both collision energies. For the non-prompt J/ψ, the yield ratio of XGB to PYTHIA8 is almost constant at a value of 1.3, whereas the yield ratio of LGBM to PYTHIA8 is slightly higher at midrapidity and decreases slowly towards forward rapidity. These observations hold for both collision energies.
We suspect that these discrepancies in the prediction for the non-prompt J/ψ are due to the same misidentification of prompt as non-prompt already discussed in Sec. III C. This discrepancy in the values for the prompt and non-prompt J/ψ can, however, be corrected by taking into account the magnitude of the mispredictions in Fig. 4. This is discussed in detail in the Appendix (Sec. VI).
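A sketch of the kind of correction meant here (a hypothetical 2x2 illustration restricted to the prompt and non-prompt classes, since the background is tagged perfectly; not necessarily the appendix's exact procedure):

```python
def correct_counts(pred_counts, conf):
    """Correct predicted class counts using the row-normalized confusion
    matrix `conf` (conf[i][j] = fraction of true class i tagged as j).
    Solves pred_j = sum_i conf[i][j] * true_i for the 2x2 case."""
    (a, b), (c, d) = conf  # rows: true prompt, true non-prompt
    # predicted = (a*t0 + c*t1, b*t0 + d*t1); invert the 2x2 system
    det = a * d - b * c
    p0, p1 = pred_counts
    t0 = (d * p0 - c * p1) / det
    t1 = (a * p1 - b * p0) / det
    return t0, t1

# Fig. 4 pattern: 2% of prompt tagged as non-prompt, non-prompt pure.
conf = [[0.98, 0.02], [0.0, 1.0]]
# 1000 true prompt and 100 true non-prompt would be tagged as 980 / 120;
# the inversion recovers the true counts:
print(correct_counts((980.0, 120.0), conf))  # approximately (1000.0, 100.0)
```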
Figure 9 depicts the normalized pT-integrated yields for inclusive, prompt, and non-prompt J/ψ as a function of the normalized charged-particle density at mid-pseudorapidity using PYTHIA8, including the predictions from the XGB and LGBM models for pp collisions at √s = 13 TeV and √s = 7 TeV. Figure 9 also includes a comparison with ALICE data for the inclusive J/ψ yield (measured in the dielectron channel at midrapidity) in pp collisions at √s = 13 TeV, with the multiplicity measured in the V0 region, i.e., −3.7 < η < −1.7 and 2.8 < η < 5.1 [55]. The normalized yields for inclusive, prompt, and non-prompt J/ψ from PYTHIA8 are found to increase with the normalized charged-particle density for both collision energies. The increase in yield is significantly enhanced for the non-prompt J/ψ, which is consistent with the values reported in Refs. [56,57]. While PYTHIA8 slightly overestimates the experimental data, it largely reproduces the overall trend of the normalized yield for the inclusive J/ψ. Towards higher final-state multiplicities, the J/ψ from b-decays show an increasing trend with non-linear behavior. The slopes of these multiplicity-dependent yields of inclusive, prompt, and non-prompt J/ψ show an energy dependence, with higher slopes at higher collision energies. The predictions from XGB and LGBM give an overall good description of PYTHIA8, while deviating by around 10% towards lower multiplicities for the non-prompt J/ψ for both collision energies.

V. SUMMARY
In this work, an effort is made to disentangle inclusive, prompt, and non-prompt J/ψ from the uncorrelated background dimuon pairs using machine learning tools. We use experimentally available inputs for the models. The J/ψ mesons are reconstructed in the µ + µ − decay channel. For each dimuon pair, we require its invariant mass (m µµ ), transverse momentum (p T,µµ ), pseudorapidity (η µµ ), and pseudoproper decay length (cτ ) as inputs to the models. We use the XGBoost and LightGBM models for this classification task. The training of the models is performed with minimum bias pp collisions at √ s = 13 TeV simulated with PYTHIA8. The predictions of both models are tested for pp collisions at √ s = 13 TeV and √ s = 7 TeV. Both models show an accuracy of up to 98%; however, they misidentify 2% of the prompt J/ψ as non-prompt. The transverse momentum (p T ) and pseudorapidity (η) differential measurements of inclusive, prompt, and non-prompt J/ψ, their multiplicity dependence, and the p T dependence of the non-prompt fraction (f B ) are shown. These results are compared to experimental findings wherever possible.
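The quoted accuracy and the 2% prompt-to-non-prompt misidentification are read off a confusion matrix like the one in Fig. 4. A minimal numpy sketch, using hypothetical label arrays (class ordering 0 = background, 1 = prompt, 2 = non-prompt is our illustrative choice):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes=3):
    """Rows: true class, columns: predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(cm, (y_true, y_pred), 1)
    return cm

# Toy predictions mimicking the reported behaviour: 2% of the true
# prompt pairs (class 1) are predicted as non-prompt (class 2).
rng = np.random.default_rng(42)
y_true = rng.integers(0, 3, size=50_000)
y_pred = y_true.copy()
prompt = np.flatnonzero(y_true == 1)
flip = rng.choice(prompt, size=int(0.02 * prompt.size), replace=False)
y_pred[flip] = 2

cm = confusion_matrix(y_true, y_pred)
accuracy = np.trace(cm) / cm.sum()                 # overall accuracy
prompt_as_nonprompt = cm[1, 2] / cm[1].sum()       # ~0.02 by construction
```

The off-diagonal element cm[1, 2] divided by the prompt row sum is exactly the misidentification rate discussed above.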
This study presents a unique method to separate the production of prompt and non-prompt J/ψ from the uncorrelated background dimuon pairs. As the models do not involve any fitting of the p T -differential spectra, the method can be applied to identify each dimuon pair individually, at any value of p T and in any rapidity range, thus allowing us to probe the non-prompt production fraction, f B , even in fine bins of p T , η, and y. The direct identification of dimuon pairs as prompt or non-prompt can help study many aspects of charmonium and bottomonium production that are almost impossible to access with conventional methods. One such application would be the effect of polarization on prompt and non-prompt J/ψ production.
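Given per-pair classifications, extracting f B in fine p T bins is a simple counting exercise. A minimal sketch, assuming hypothetical arrays of per-pair p T values and boolean non-prompt tags (the toy sample below is illustrative only):

```python
import numpy as np

def non_prompt_fraction(pt, is_non_prompt, pt_edges):
    """f_B in each pT bin: N_non-prompt / (N_prompt + N_non-prompt)."""
    idx = np.digitize(pt, pt_edges) - 1
    n_bins = len(pt_edges) - 1
    valid = (idx >= 0) & (idx < n_bins)
    n_np = np.bincount(idx[valid],
                       weights=is_non_prompt[valid].astype(float),
                       minlength=n_bins)
    n_all = np.bincount(idx[valid], minlength=n_bins)
    with np.errstate(invalid="ignore", divide="ignore"):
        return n_np / n_all

# Toy sample in which the non-prompt probability rises with pT,
# qualitatively like the measured f_B.
rng = np.random.default_rng(1)
pt = rng.exponential(3.0, size=20_000)
is_np = rng.random(20_000) < np.clip(0.05 + 0.03 * pt, 0.0, 0.6)
f_b = non_prompt_fraction(pt, is_np, np.linspace(0.0, 12.0, 7))
```

Because the classification is per pair, the same function works unchanged for bins in η or y, or for arbitrarily fine binning, limited only by statistics.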
In addition, ALICE has reported a non-linearity in the normalized J/ψ yield at midrapidity in the dielectron channel towards higher final-state normalized multiplicity [58]. As seen in the present study, such behavior is an outcome of the non-prompt J/ψ at both mid- and forward rapidities. The present method can be used in experiments to separate prompt from non-prompt J/ψ and hence study the related production dynamics.
The inconsistency between the true and the predicted values for the non-prompt J/ψ, shown in Figs. 6 and 8, can be removed by correcting the yield in each bin of the transverse momentum, rapidity, and pseudorapidity spectra. Assuming the correction factor is independent of transverse momentum, rapidity, and pseudorapidity, for a given bin i the corresponding corrected prompt and non-prompt J/ψ yields are given by

Y corr p,i = Y uncorr p,i + f Y p (Y uncorr p,i /Y p ), (4)

Y corr np,i = Y uncorr np,i − f Y p (Y uncorr np,i /Y np ). (5)

In Equations 4 and 5, f denotes the correction factor for the prompt yield, which is 0.02 in our case for both XGB and LGBM. Y p,i and Y np,i are the yields in the ith bin of the spectra for the prompt and non-prompt, respectively, whereas Y p and Y np denote the total yields. The superscripts 'corr' and 'uncorr' stand for the corrected and uncorrected yields, respectively. Figure 10 shows the corrected transverse momentum and rapidity spectra in minimum bias pp collisions at √ s = 13 TeV for the prompt, non-prompt, and inclusive J/ψ. The transverse momentum spectra are calculated in the midrapidity (|y| < 0.9) region. As one can see, the non-prompt J/ψ predictions after implementing the correction of Eq. 5 match the PYTHIA8 calculations quite well for all the spectra. The reason for not applying such a correction in the main study is to demonstrate the actual predictions given by the ML models.
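A minimal sketch of this bin-wise correction, under our reading of the appendix: the misidentified prompt yield f·Y p is added back to the prompt spectrum and subtracted from the non-prompt spectrum, distributed proportionally to each spectrum's shape. The toy spectra are illustrative, not the paper's values.

```python
import numpy as np

F_MISID = 0.02  # fraction of prompt J/psi predicted as non-prompt (Fig. 4)

def correct_spectra(y_prompt, y_nonprompt, f=F_MISID):
    """Bin-wise correction, assuming the misidentified prompt yield f*Y_p
    is redistributed proportionally to each spectrum's shape (our reading
    of the appendix's correction factor, taken independent of pT and y)."""
    y_p = np.asarray(y_prompt, dtype=float)
    y_np_ = np.asarray(y_nonprompt, dtype=float)
    total_p, total_np = y_p.sum(), y_np_.sum()
    y_p_corr = y_p + f * total_p * (y_p / total_p)        # prompt gains f*Y_p
    y_np_corr = y_np_ - f * total_p * (y_np_ / total_np)  # non-prompt loses it
    return y_p_corr, y_np_corr

# Toy binned spectra: totals shift by +/- f * (total prompt yield).
p = np.array([100.0, 60.0, 30.0, 10.0])
np_ = np.array([12.0, 8.0, 4.0, 1.0])
p_c, np_c = correct_spectra(p, np_)
```

Because the correction factor is taken as shape-independent, each corrected bin keeps the shape of the uncorrected spectrum while the integrated yields shift by ±f·Y p.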

FIG. 1. Schematic representation of the topological production of prompt and non-prompt J/ψ mesons in hadronic and nuclear collisions.

FIG. 4. Confusion matrices for XGB (top) and LGBM (bottom), showing the agreement and discrepancy between the true and predicted classes for prompt, non-prompt, and background dimuon pairs.

FIG. 6. The top panel shows the transverse momentum spectra for inclusive, prompt, and non-prompt J/ψ in pp collisions at √ s = 13 TeV measured at midrapidity (|y| < 0.9) and forward rapidity (2.5 < y < 4), and in pp collisions at √ s = 7 TeV at midrapidity (|y| < 0.9), using PYTHIA8 along with the predictions from the XGB and LGBM models. The middle panel shows the ratio of XGB to PYTHIA8, and the bottom panel shows the ratio of LGBM to PYTHIA8.

FIG. 8. The top panel shows the rapidity spectra for inclusive, prompt, and non-prompt production of J/ψ in minimum bias pp collisions at √ s = 13 TeV (left) and √ s = 7 TeV (right) using PYTHIA8, including the predictions from the XGB and LGBM models. The middle panel shows the ratio of XGB to PYTHIA8, and the bottom panel shows the ratio of LGBM to PYTHIA8.

FIG. 10. Transverse momentum spectra (left) and rapidity spectra (right) in minimum bias pp collisions at √ s = 13 TeV for inclusive, prompt, and non-prompt J/ψ using PYTHIA8, compared with the corrected predictions from XGB and LGBM.

TABLE I. Offset and sigma values of the primary interaction vertex from the origin.

TABLE II. Parameters used in XGB and LGBM, with the corresponding values obtained through the grid search method.