Measurement of charm and bottom production from semileptonic hadron decays in $p$$+$$p$ collisions at $\sqrt{s}=200$ GeV

Measurements of the differential production of electrons from open-heavy-flavor hadrons with charm- and bottom-quark content in $p$$+$$p$ collisions at $\sqrt{s}=200$ GeV are presented. The measurements proceed through displaced-vertex analyses of electron tracks from the semileptonic decay of charm and bottom hadrons using the PHENIX silicon-vertex detector. The relative contribution of electrons from bottom decays to inclusive heavy-flavor-electron production is found to be consistent with fixed-order-plus-next-to-leading-log perturbative-QCD calculations within experimental and theoretical uncertainties. These new measurements in $p$$+$$p$ collisions provide a precision baseline for comparable forthcoming measurements in A$+$A collisions.


I. INTRODUCTION
Charm and bottom quarks are collectively referred to as heavy-flavor quarks. Their production in elementary p+p collisions is of interest from a variety of vantage points, both in high-energy particle and nuclear physics.
From a fundamental standpoint, unlike light quarks, heavy-flavor quarks have masses (compare $m_c \approx 1280$ MeV/$c^2$ and $m_b \approx 4180$ MeV/$c^2$ with $m_u \approx 2.2$ MeV/$c^2$ and $m_d \approx 4.7$ MeV/$c^2$) [1] large enough that their production can be calculated using perturbative quantum chromodynamics (pQCD) even at low $p_T$. At leading order (LO), heavy-quark production proceeds via gluon fusion and quark-antiquark annihilation. At next-to-leading order (NLO), processes such as flavor excitation and gluon splitting contribute. In this regime, divergences are regulated by the mass of the heavy quarks, which acts as an infrared cutoff, except when the quark $p_T$ is greater than its mass [2]; in that case, logarithmic divergences appear. The most advanced analytic pQCD techniques currently available allow such divergences to be resummed, giving rise to the fixed-order-plus-next-to-leading-log (FONLL) approach [3]. However, FONLL calculations exhibit very large uncertainty bands, associated predominantly with uncertainties in the heavy-quark masses and the renormalization scales, motivating the need for comparisons with experimental data.
A wealth of heavy-flavor-production data exists both at the Relativistic Heavy Ion Collider (RHIC) [4] and the Large Hadron Collider (LHC) [2]. At the LHC, such measurements comprise cross sections of inclusive heavy-flavor leptons, as well as of individual D (containing charm) and B (containing bottom) meson states. At RHIC, there have only been measurements of inclusive heavy-flavor leptons and individual D mesons. Within uncertainties, all such measurements are consistent with FONLL calculations, yet systematically higher than the central value predicted by the theory. It is thus of interest to arrive at a simultaneous measurement of charm and bottom production at RHIC energies, leveraging the distinct masses of these quarks to provide constraints for pQCD calculations.

From the standpoint of high-energy nuclear physics, heavy-ion collisions at RHIC and the LHC produce deconfined nuclear matter, known as the quark-gluon plasma (QGP). The QGP produced in these colliders can be characterized as a strongly coupled fluid [5] exhibiting, among other properties, substantial color opacity. This refers to the ability of the medium to hinder the passage of color charges, resulting in the energy loss of such particles [6]. Charm and bottom quarks are excellent probes of color opacity because they originate primarily from early-stage hard-parton-scattering processes, and thus transit through the entire evolution of the QGP medium [4]. The yield of heavy-flavor electrons at RHIC scales with the number of binary nucleon-nucleon collisions [7,8] as a consequence of charm and bottom conservation by the strong interaction. Nevertheless, their spectrum is modified in central Au+Au collisions relative to the p+p baseline, as quantified by the nuclear modification factor R_AA [8].

* PHENIX Spokesperson: akiba@rcf.rhic.bnl.gov
† Deceased
Heavy quarks are redistributed in momentum space, such that a strong suppression of heavy-flavor electrons is observed for p T > 5 GeV/c, comparable in magnitude to that observed for light quarks [9,10].
This constitutes a puzzling observation, as it challenges traditional interpretations of energy loss as proceeding exclusively through gluon radiation, requiring the inclusion of additional collisional mechanisms. To shed light on the interplay of radiative and collisional energy loss by leveraging the mass difference between charm and bottom, the PHENIX collaboration has measured separated heavy-flavor-quark yields from semileptonic decay electrons in Au+Au collisions using the silicon-vertex-detector upgrade [11]. Nuclear modification factors R_AA were calculated using a p+p baseline measurement by the STAR collaboration [12], obtained via electron-hadron correlations with limited kinematic reach and large uncertainties.
In this paper, we present a new baseline measurement of heavy-flavor separation in p+p collisions at √s = 200 GeV using the same displaced-vertex analysis technique employed in a previous PHENIX measurement in Au+Au collisions at √s_NN = 200 GeV [11]. Our new results, with smaller uncertainties and extended kinematic range, provide a valuable update for future measurements of heavy-flavor modification.

II. EXPERIMENTAL SETUP

Figure 1 shows a transverse (beam) view of the PHENIX detector and its subsystems. Two midrapidity spectrometers, called the central arms, are shown on either side of the central magnet. With an acceptance of |η| < 0.35 and ∆φ = π/2, each arm provides tracking and particle-identification capabilities. The magnetic field is generated by two pairs of coils in the pole faces of the central magnet such that, when electric current runs in the same direction in both coils, a maximum field strength of 0.9 T is achieved at the beam location. A detailed description of the PHENIX detector is given in Refs. [13-15]. The drift chamber (DC) and three layers of multiwire proportional pad chambers (PC) are the subsystems used for charged-particle tracking [16]. The Ring-Imaging Čerenkov (RICH) detector and the electromagnetic calorimeter (EMCal) are the subsystems used for electron identification. The RICH [17] comprises two independent volumes, one in each detector arm, filled with CO2. The gas acts as a dielectric medium in which electrons emit Čerenkov radiation for p_T > 20 MeV/c; pions can also emit light in the RICH above p_T ≈ 5 GeV/c. The EMCal [18], which comprises lead-glass (PbGl) and lead-scintillator (PbSc) modules, is used to identify electrons based on the transverse shape of an electromagnetic shower and the ratio of the particle's energy deposit in the EMCal to the momentum of the reconstructed track.
Figure 2 shows the finely segmented silicon-vertex detector (VTX) [19,20], which was installed as an upgrade in 2011 to provide tracking close to the interaction region, capable of reconstructing the primary vertex with a resolution of approximately 100 µm in p+p collisions. The VTX comprises two arms with four independent layers arranged around the beam pipe at nominal radii of r = 2.6, 5.1, 11.8, and 16.7 cm. The material budget of each layer, expressed as a percentage of a radiation length, is X_0(%) = 1.28, 1.28, 5.43, and 5.43, respectively. Simultaneously with the VTX, a new, thinner, beryllium beam pipe was installed in 2011, with a material budget of X_0(%) = 0.22. Each layer comprises a series of ladders extending longitudinally. The VTX has an acceptance of |η| < 1 and ∆φ ≈ 0.8π per arm. Going from smallest to largest radius, the individual layers are named B0, B1, B2, and B3. The innermost layers, B0 and B1, were constructed using silicon-pixel technology developed at CERN [21]. Pixels in these layers have dimensions of 50 µm × 425 µm, and are arranged into lattices of 256 × 32 pixels, which are read out by a single ALICE1LHCb sensor-readout chip [22].

III. METHODS
The goal of this analysis is to measure the invariant yield of heavy-flavor electrons, independently for charm and bottom decays. This is accomplished by exploiting the fact that hadrons with bottom content have a longer lifetime than those with charm, as shown in Table I for B and D mesons [1]. As will be described in the following subsection, the provenance of heavy-flavor-electron tracks is determined statistically based on the distance of closest approach in the transverse plane (DCA_T) between the tracks and the beam center, which is the point relative to which they are reconstructed. Thus, the longer lifetime of the B, and its decay kinematics, will result in a broader DCA_T distribution than for electrons from the shorter-lived D mesons. However, the measured electron-candidate sample contains not only heavy-flavor electrons, but also abundant background from a variety of sources (i.e., decays of π0, η, ρ, ω, J/ψ, K±, K0s, and Υ mesons and the Drell-Yan process, as well as conversions of direct and decay photons), each with its own characteristic DCA_T shape. Once this background has been determined, the DCA_T distribution of inclusive heavy-flavor electrons can be isolated. The individual contributions from charm and bottom can then be obtained through an inversion procedure often referred to as unfolding [24]. We outline the steps involved in the analysis as follows:

1. Measure the DCA_T distribution of hadron and electron candidate tracks in data, as a function of track p_T.
2. Model the DCA_T distributions of non-heavy-flavor background in the candidate electron sample by simulating the following electron sources: π0, η, direct photons, J/ψ, K0s, K±, and hadron contamination.
3. Determine the fraction of electrons attributable to each of the background sources considered, thus normalizing the background DCA_T distributions relative to those of electron candidates in data.

4. Separate the contributions of charm and bottom decays to the electron sample using Bayesian inference techniques. This step is constrained by the measured electron DCA_T distributions, as well as by the invariant yield of inclusive heavy-flavor electrons, previously published by the PHENIX collaboration [25].
This analysis used 110 pb−1 of integrated luminosity collected during the 2015 p+p RHIC running period. A family of EMCal-RICH triggers was used to maximize the number of electron tracks available for analysis. These triggers segment the calorimeter and RICH detector into a series of tiles, triggering on events in which a certain energy threshold is exceeded in a calorimeter tile, and, for some triggers, requiring that a spatial match be found in the RICH.

A. Measuring Track DCAT
Track reconstruction is carried out using the central arm spectrometers, as detailed in Ref. [8]. Electron candidates within 1.5 < p T [GeV/c] < 6.0 are identified by matching reconstructed tracks with hits in the RICH, and energy deposits in the EMCal.
Electrons traversing the RICH emit Čerenkov light, which is detected by photomultiplier tubes (PMTs). A maximum displacement of 5 cm is allowed between a track projection and the centroid of the hit PMTs. For tracks with p_T < 5 GeV/c, at least one PMT hit is required in the RICH, whereas at higher p_T at least three hits are required, given that pions in this kinematic region begin to radiate in the RICH.
Additionally, the energy E deposited by a track in the EMCal is required to match its momentum p, since, unlike hadrons, electrons deposit the majority of their energy in the calorimeter. This is quantified through the variable dep = (E/p − µ_{E/p})/σ_{E/p}, where µ_{E/p} and σ_{E/p} correspond to the mean and width of a Gaussian fit to the distribution of the energy-momentum ratio E/p around E/p = 1, respectively. A cut of |dep| < 2 is then used to select electrons.
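As an illustration, the dep selection can be sketched as follows (a minimal sketch; the Gaussian parameters are illustrative placeholders, since in the analysis µ_{E/p} and σ_{E/p} are fitted per momentum bin):

```python
def dep(E, p, mu_ep=1.0, sigma_ep=0.08):
    """Normalized energy-momentum matching variable, in Gaussian sigmas.
    mu_ep and sigma_ep are placeholder fit values, not the measured ones."""
    return (E / p - mu_ep) / sigma_ep

def is_electron_candidate(E, p, n_sigma=2.0):
    """Apply the |dep| < 2 electron-selection cut described in the text."""
    return abs(dep(E, p)) < n_sigma

# An electron-like track (E close to p) passes; a hadron-like track
# (E well below p) fails.
```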
Additional cuts involving the EMCal include restricting the displacement in ∆z and ∆φ between the track projection and the calorimeter shower to within three standard deviations. Finally, a cut on the probability that a given EMCal cluster originates from an electromagnetic shower-as determined from the shower shape-is used to reject hadrons.
Once identified in the central arms, reconstructed tracks are projected back to the VTX detector, where an iterative algorithm described in Ref. [11] is used to associate the track with VTX hits to create a VTX-associated track. Such tracks are required to have a hit in each of the two innermost layers of the VTX, and at least one hit in either of the outer layers, and to satisfy χ²_vtx/ndf < 2 to ensure the quality of the fit.

Figure 3 shows a schematic diagram defining the DCA_T of a VTX-associated track. The circular track projection is shown in the transverse plane, where a constant magnetic field exists over the region covered by the VTX detector. The DCA_T is then defined as DCA_T = L − R, where L is the distance between the beam center and the center of the projection, and R is the projection radius. The beam center is defined as the geometric center of the transverse region over which beam collisions occur; it is constant over a given run period, and exhibits a Gaussian spread of width σ_x^(beam) ≈ 130 µm and σ_y^(beam) ≈ 100 µm. Notice that DCA_T is a signed quantity which is not generally symmetric around zero, because electrons from some background sources exhibit asymmetric DCA_T distributions depending on the decay kinematics of their parent particles.
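The geometric definition DCA_T = L − R translates directly into code (a sketch only; coordinates are in cm, and the circle center and radius would come from the reconstructed track projection):

```python
import math

def dca_t(circle_center, radius, beam_center=(0.0, 0.0)):
    """Signed transverse DCA of a circular track projection:
    L (distance from beam center to circle center) minus the radius R."""
    dx = circle_center[0] - beam_center[0]
    dy = circle_center[1] - beam_center[1]
    L = math.hypot(dx, dy)
    return L - radius
```

The sign convention follows from the definition: DCA_T is positive when the beam center lies outside the track circle and negative when it lies inside, which is why the distribution need not be symmetric around zero.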
In a previous PHENIX analysis [11], the DCA_T was defined relative to the primary vertex of the collision, rather than the beam center. The primary vertex is determined using tracks reconstructed from VTX hits alone, with no reliance on the central-arm tracking subsystems. However, given the low multiplicity of p+p collisions, such a procedure does not converge to a vertex for approximately 50% of all p+p events. Furthermore, when it does converge, and particularly in events with electron tracks, the low number of reconstructed tracks makes it likely that the primary vertex is biased towards a displaced vertex and thus unsuitable for analysis. The choice of using the beam center for DCA_T determination is further justified because the primary-vertex resolution is quite similar to the beam-spot spread. Table II shows the resolution of the primary vertex in the transverse plane, as a function of the number of reconstructed VTX tracks used in its determination. The resolution improves significantly with an increasing number of tracks. However, due to the limited coverage of the VTX, the average number of tracks used to calculate the primary vertex in p+p collisions is 3.2, such that the corresponding resolution is broader than the beam spread.

Figure 4 shows the DCA_T distribution of hadron tracks within 1.8 < p_T [GeV/c] < 2.1, where the histogram corresponds to counts of tracks passing the analysis cuts, with no correction for acceptance or efficiency effects. Hadron tracks are subject to the same quality requirements as electron tracks, but are identified as hadrons by requiring no RICH PMT hits. The prominent Gaussian peak centered at DCA_T = 0 is attributed to particles originating from the primary collision point, with its width reflecting the beam spread convolved with the track-pointing resolution. On the other hand, the broad tails can be attributed to the decays of long-lived light hadrons, as well as background.
It is observed that the DCA T resolution improves with increasing track p T . For this analysis, DCA T distributions of electron candidate tracks were measured in 10 p T bins between 1.5 < p T [GeV/c] < 6.0.

B. Modeling Electron Background Sources
In addition to heavy-flavor-decay electrons, the electron sample determined by applying the track cuts described in the previous section contains contributions from a variety of background sources. Namely, we consider (i) photonic electrons from the Dalitz decay of π 0 and η mesons, as well as photon conversions; (ii) nonphotonic electrons from the decay of J/ψ and the three-body decay of K ± and K 0 s (collectively called Ke3 electrons); and (iii) hadrons which are misidentified as electrons.
To isolate the heavy-flavor signal of interest, it is necessary to properly account for the background. Conversion electrons constitute the single largest source of background in this analysis, owing to the material budget of the VTX, which totals X_0(%) = 13.42 of a radiation length. In the following subsection we describe a strategy that uses the fine segmentation of the VTX itself to reject the vast majority of conversions based on the narrow opening-angle topology of conversion-electron pairs. The remaining background, both photonic and nonphotonic, is accounted for by constructing an electron cocktail normalized relative to the measured electron sample.
Of the background electron sources, all but misidentified hadrons can be modeled using previous measurements of primary (i.e., π 0 , η, J/ψ, K ± ,K 0 s ) particle production combined with a knowledge of their decay modes and geant3 simulations of the PHENIX detector. Contributions from other sources of electrons, like the decay of vector mesons such as the Υ, φ, ω and ρ, as well as the Drell-Yan process, were found to contribute negligibly to the total electron background in the kinematic region of interest.

Photonic Electron Background
Photonic electrons originate from the Dalitz decay (X → e + e − γ) of π 0 and η mesons, and from the conversion of photons (γ → e + e − ) interacting with the beam pipe or the VTX detector itself, where the photons are either direct photons or a hadronic decay product. To model this background, we start with the published cross section of π 0 , η and direct photons in p+p at √ s = 200 GeV [26][27][28][29]. Single particles are then generated between 0 < p T [GeV/c] < 20 according to the published spectrum. For π 0 and η, accounting for branching ratios, the decay is forced to proceed exclusively through channels involving photons or electrons in the final state. The decay photons and electrons are fed through a geant3 simulation of the PHENIX detector, where the same reconstruction code and track cuts used in data are applied. The resulting reconstructed electron yield is normalized by the number of simulated primary particles, thus correctly describing the relative contribution of each primary source to the total photonic electron yield.
As previously mentioned, conversion electrons constitute the most significant source of background in this analysis, originating from the beam pipe as well as all four layers of the VTX. We can eliminate 80% of conversion electrons by imposing the requirement that tracks used in analysis have a hit in each of the innermost two layers of the VTX, thereby discarding electron tracks originating in the outer layers.
Given the narrow opening angle between the e + e − pair from photon conversions, a veto cut is defined to minimize the remaining conversions from the beam pipe and innermost two layers. In this approach, tracks with a VTX hit in close proximity, within a certain window in ∆φ and ∆z, are rejected. As illustration, if a conversion occurs in the beam pipe, or in B0, then nearby pairs of hits will be found in subsequent detector layers. If at least one of these electrons is reconstructed as a track, the conversion veto cut will reject it based on the presence of at least one nearby hit within the window in any layer.
The size of the conversion veto window in chrg × ∆φ, where chrg is the charge of the track and ∆φ is the azimuthal distance between track and cluster on the surface of a VTX chip, is shown in Fig. 5. It depends on the track p T , as well as the layer where the nearby cluster is found. In general, because the bend of conversion electron pairs in the magnetic field decreases with photon momentum, the windows become narrower with increasing electron p T . Furthermore, due to multiple scattering as well as the separation of electron pairs in the magnetic field, windows in the outer layers are larger than in the inner layers. The windows are asymmetric because the quantity chrg × ∆φ is positive by construction; the negative side of the window is populated by mismeasured tracks which do not yield a positive chrg × ∆φ. In the longitudinal direction, the conversion veto window is |∆z| < 0.05 cm in the innermost two VTX layers, and |∆z| < 0.1 cm in the outermost two.
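A single window check from the veto logic described above can be sketched as follows (the φ window bounds are illustrative placeholders; in the analysis they depend on track p_T and on the layer of the nearby cluster, while the |∆z| windows of 0.05 cm and 0.1 cm are those quoted in the text):

```python
def passes_conversion_veto(chrg, dphi, dz, layer_inner=True,
                           window=(-0.002, 0.02)):
    """Return True if the track SURVIVES the veto, i.e. the nearby VTX hit
    does not fall inside the asymmetric chrg*dphi window and |dz| window.
    The window bounds (radians) are hypothetical illustrative values."""
    dz_max = 0.05 if layer_inner else 0.1   # cm, inner vs. outer layers
    q_dphi = chrg * dphi
    inside = (window[0] < q_dphi < window[1]) and (abs(dz) < dz_max)
    return not inside
```

In the analysis a track is vetoed if any nearby hit in any layer falls inside its window; this sketch shows the test for one hit.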
The survival rate ε of electrons from a given source is defined as the probability that they will not be rejected by the conversion veto cut. Figure 6 shows the survival rate of electrons from photonic and nonphotonic sources as a function of electron track p T , where the nonphotonic survival rate has been estimated using hadrons in data as a proxy. The survival rate of photonic electrons has been further broken down by background source, namely π 0 and η decays, as well as photon conversions.
Conversion electrons, such as those from direct photons, have the lowest survival rate of all. Electrons from π0 and η mesons have a higher survival probability because they include, in addition to photon conversions, Dalitz electron pairs, which have a wider opening angle than conversion pairs. The survival rate of all photonic electrons combined, shown in blue in Fig. 6, is approximately 20%. This demonstrates the ability of the conversion veto cut to reject a substantial fraction of the photonic background. In contrast, the survival rate of nonphotonic electrons is very high, at approximately 90%. The conversion veto cut rejects a small fraction of nonphotonic electrons due to the presence of uncorrelated random hits in the window, which affect all tracks regardless of their provenance. The particular size of the conversion veto windows used in this analysis represents a compromise between maintaining a large window for background rejection, and limiting its size to minimize the inclusion of uncorrelated hits.
After applying the conversion veto cut on electrons in the photonic cocktail, the contribution of each primary particle source to the total photonic background can be calculated, as shown in Fig. 7(a). Electrons from π 0 (both Dalitz and from the conversion of decay photons) dominate the background for all p T , with direct photon external conversions becoming more significant at higher p T .

Non-photonic Electron Background
Non-photonic electrons in this analysis correspond to those from the decay of J/ψ mesons, as well as the three-body decays of K ± and K 0 s , collectively known as Ke3 electrons (K→eνπ). Other background electron sources, namely the decays of vector mesons such as the Υ, ρ, ω, and φ were considered in the background cocktail, but were found to contribute negligibly.
The approach to modeling the nonphotonic background is similar to that used for photonic sources. Namely, single particles are generated according to their respective published cross sections, as measured by the PHENIX collaboration [30-32], forced to decay, and the resulting particles are fed through a geant3 simulation of the detector. Applying the full set of analysis track cuts, including the conversion veto cut, completes the background electron cocktail. The fraction of nonphotonic electrons from each source, relative to the total photonic background, is shown in Fig. 7(b). Notice that the J/ψ contributes more to the background cocktail than any other background source above p_T ≈ 3.5 GeV/c.

Hadron Contamination
Despite the electron identification cuts described in the previous section, some hadron tracks will incorrectly be tagged as electrons. This contribution to the electron sample is modest and is estimated in two independent ways, making use of EMCal and RICH signals.
Unlike hadrons, electron tracks deposit the majority of their energy in the EMCal, as quantified by the ratio E/p, where E is the calorimeter energy and p is the track momentum. The variable dep, as previously defined, takes the shape of a Gaussian of zero mean and unit width for true electron tracks. In contrast, the dep distribution of hadron tracks exhibits a very different shape. Thus, a template is constructed, as a function of p_T, by fitting the dep distribution of hadron tracks in data. The dep distribution of electron candidates is then fit with a combination of the hadron template plus a Gaussian, with a single free parameter corresponding to their relative contribution. The value of this parameter provides an estimate of the fraction of hadron contamination in the sample.
An independent way of estimating the hadron contamination is to exploit the fact that imposing a cut requiring a minimum number of PMTs fired in the RICH provides greater rejection power for hadron tracks than for electrons. The fraction of hadrons rejected by such a cut can be estimated from hadron tracks in data, while the fraction of rejected electrons can be determined through geant3 simulation of single electrons. With these two pieces of information it is possible to isolate the number of hadrons and electrons in the candidate electron sample, thus determining the contamination fraction.
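The second estimate amounts to solving a two-component counting system: with the survival fractions s_e (electrons, from simulation) and s_h (hadrons, from data) under the tighter RICH cut, the candidate counts before and after the cut determine the hadron yield. A minimal sketch, with all numerical values illustrative:

```python
def contamination_fraction(n_before, n_after, s_e, s_h):
    """Solve  n_before = n_h + n_e  and  n_after = s_h*n_h + s_e*n_e
    for the hadron count n_h, and return n_h / n_before."""
    n_h = (s_e * n_before - n_after) / (s_e - s_h)
    return n_h / n_before

# Synthetic example: 1000 candidates, 875 surviving the tighter cut,
# with s_e = 0.95 and s_h = 0.2, corresponds to 10% hadron contamination.
```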
The weighted average of the two independent estimates of hadron contamination is taken as the nominal value, with their difference as a systematic uncertainty, as shown in Fig. 8. The systematic uncertainties are assigned to encompass both estimates and are asymmetric.
Another source of contamination comprises electron tracks identified in the central arms which are associated with uncorrelated random hits in the VTX detector, leading to the creation of a spurious VTX-associated track. The degree of contamination arising in this manner was quantified by rotating all hits in the VTX in azimuth and polar angles by a small amount and attempting to reassociate central arm tracks with the rotated hits. Given the low multiplicity of p+p collisions, the contribution of misassociated central-arm tracks was found to be negligible, unlike in Au+Au collisions where it is significant.

C. Normalizing Electron Background DCAT
In 2015, the year in which the p+p data were collected, the VTX detector exhibited a time-varying acceptance owing to a changing number of dead, cold, and hot channels across the surface of each detector layer. This precluded the measurement of an electron-candidate sample fully corrected for acceptance and efficiency effects. As a result, the simulated electrons in the background cocktail are not corrected for acceptance and efficiency, but are instead constructed such that the same reconstruction code and analysis cuts used in data are applied.
The cocktail can then be used to calculate the fraction of electrons from each background source relative to the total photonic background. However, to use this information to determine their normalization relative to the total sample of electron candidates, it is necessary to determine the fraction of electron candidates attributable to photonic background. This is accomplished via a data-driven method relying on the conversion veto cut.
Let N_P and N_NP be the number of photonic and nonphotonic electrons in the electron-candidate sample obtained without applying the conversion veto cut. Also, let ε_P and ε_UC be the veto-cut survival rates due to correlated effects (applicable to photonic electrons) and due to uncorrelated effects (applicable to all electrons), respectively. In this nomenclature, heavy-flavor electrons are part of the nonphotonic sample. The number of electrons measured without the conversion veto cut is then simply

N = N_P + N_NP, (1)

while the number of measured electrons that pass the veto cut is given by

Ñ = ε_UC (ε_P N_P + N_NP), (2)

where N_P is modified by both ε_P and ε_UC because photonic electrons are also susceptible to rejection from uncorrelated hits in the window. Taken together, Eqs. (1) and (2) form a system of equations with N_P and N_NP as the only unknowns, yielding

N_P = (ε_UC N − Ñ) / [ε_UC (1 − ε_P)], (3)

and

N_NP = (Ñ − ε_P ε_UC N) / [ε_UC (1 − ε_P)]. (4)

The fraction of photonic electrons in the sample with the conversion veto cut applied is then

F_P = ε_P ε_UC N_P / Ñ, (5)

and is shown in Fig. 9 as a function of electron p_T. Using F_P, the fraction of candidate electrons attributable to each photonic source in data is

F_i = (1 − F_contam) F_P Ñ_i / Σ_k Ñ_k, (6)

where i is an index referring to a primary particle species (i.e., π0, η, γ), Ñ_i is the number of electrons from the i-th source that pass the conversion veto cut in the electron cocktail, and 1 − F_contam is the purity of the electron sample with respect to hadron contamination.
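The system above can be checked numerically. The sketch below solves for N_P and returns the photonic fraction F_P for a synthetic sample (all input values are illustrative, not measured ones):

```python
def photonic_fraction(n_all, n_veto, eps_p, eps_uc):
    """Solve  N = N_P + N_NP  and  N~ = eps_uc*(eps_p*N_P + N_NP)
    for N_P, then return F_P = eps_p * eps_uc * N_P / N~."""
    n_p = (eps_uc * n_all - n_veto) / (eps_uc * (1.0 - eps_p))
    return eps_p * eps_uc * n_p / n_veto

# Synthetic check: a sample of 800 photonic + 200 nonphotonic electrons
# with eps_p = 0.25 and eps_uc = 0.9 leaves 360 electrons after the veto,
# half of which are photonic (F_P = 0.5).
```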
In the case of the nonphotonic background, it is impossible to construct a similar expression because the electron cocktail does not include contributions from heavy-flavor mesons. Therefore, the nonphotonic background is normalized relative to the simulated π0 electron yield, whose absolute normalization has been previously determined, as follows:

F_j = F_{π0} Ñ_j / Ñ_{π0}, (7)

with j indexing the primary particles giving rise to nonphotonic electrons (i.e., J/ψ and Ke3). These factors, shown in Fig. 10, are used to normalize the DCA_T distribution of each background electron source relative to the electron-candidate sample. Figure 11 shows the DCA_T distribution of electrons from each background species within 1.8 < p_T [GeV/c] < 2.1, normalized relative to the total number of electron candidates in that p_T bin.
In general, the resolution of the DCA_T is a consequence of the width of the beam spot convolved with the track-pointing resolution. However, the simulations used to create the background electron cocktail were run using a single reference value for the beam-spot size, which in reality fluctuates over time during data-taking. Therefore, it is necessary to correct for the difference in resolution between simulations and data. This was accomplished by comparing, as a function of p_T, the DCA_T resolution of charged pions in data and simulation, as quoted in Table III, and deriving a p_T-dependent factor by which the simulated DCA_T distributions could be broadened to match the resolution in data. This broadening factor was applied to electrons from every species in the background electron cocktail.

D. Heavy-Flavor Separation via Unfolding
Having normalized the electron-background DCA_T distributions, it is possible to isolate the corresponding distributions of electrons from heavy-flavor decays in data. If the shapes of the parent hadron spectra were known a priori, it would be a straightforward matter to use the knowledge of heavy-flavor-decay kinematics to determine the shapes of the DCA_T distributions of charm and bottom electrons separately, whose relative normalization could then be constrained by the measured inclusive DCA_T.
However, because such spectral shapes are not known for D and B mesons, it becomes necessary to solve an inverse problem where the model parameters (i.e., the spectrum of hadrons containing open charm and bottom, as a function of p T ) are inferred from data observations, namely the DCA T and spectrum of inclusive heavy-flavor electrons. For the spectrum, an earlier PHENIX measurement [25] was used, shown in Fig. 12.
To solve the inverse problem, it is necessary to construct a mapping from the model parameters to the data. Given that the heavy-flavor-decay kinematics are known, it is possible to assign a probability for a heavy-flavor hadron at a given p_T to produce a decay electron at a given p_T and DCA_T. Such a mapping makes it possible to quantify the likelihood that a given set of trial hadron spectra is consistent with the measured electron spectrum and DCA_T distributions.
We use a probabilistic approach [24] to the unfolding problem based on Bayesian inference, identical to that used by PHENIX to separate charm and bottom electron yields in Au+Au collisions [11]. Let Y data be a vector whose individual elements correspond to the yield of inclusive heavy-flavor electrons, as shown in Fig. 12.
Similarly, D_j^data is a vector of the binned DCA_T distribution of electrons in data for tracks in the j-th p_T bin, out of nine bins within 1.5 < p_T [GeV/c] < 6.0. The two observables are combined in a "data" vector

x = (Y^data, D_1^data, ..., D_9^data).

The model parameters are also represented as a vector

θ = (θ_c, θ_b),

where θ_c and θ_b correspond to the charm and bottom hadron yields, respectively, in 17 p_T bins each, within 0 < p_T [GeV/c] < 20. Bayes' theorem,

P(θ | x) = P(x | θ) π(θ) / P(x),

relates the probability that a given set of model parameters θ is true given the data x to the probability that the data follow from an assumed set of model parameters. While the former probability, known as the posterior, is a difficult quantity to estimate, the latter, known as the likelihood, is straightforward to compute given the knowledge of heavy-flavor decays. The quantity π(θ), known as the prior, encodes the knowledge of the model parameters before the data are analyzed. The denominator P(x), sometimes known as the evidence, provides the normalization for the posterior. Thus, Bayes' theorem allows us to take a first guess at the model parameters, as encoded in the prior, and refine it through the inclusion of data in the likelihood.

The 17 bins for each of the charm and bottom hadron spectra within 0 < p_T^(h) [GeV/c] < 20, as represented by θ, define a 34-dimensional space of model parameters. Starting from an initial set of values given by the prior π(θ), corresponding to the charm and bottom yields as calculated with pythia, the unfolding proceeds by drawing trial sets of hadron yields, corresponding to individual points in the multidimensional parameter space. Because sampling such a high-dimensional space uniformly is computationally prohibitive, we use a Markov-chain Monte Carlo (MCMC) algorithm [33], which proceeds iteratively until convergence of the final solution is achieved.
In this analysis, three iterations suffice for convergence, with 500 parallel "walkers" and 1000 burn-in steps, as described in Ref. [33].
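The MCMC idea can be sketched with a minimal Metropolis sampler. This is an illustration only: the analysis uses the ensemble sampler of Ref. [33], and the two-dimensional Gaussian posterior below is a toy stand-in for the 34-dimensional problem:

```python
import numpy as np

def metropolis(log_post, theta0, n_steps, step_size, rng):
    """Minimal Metropolis sampler: propose Gaussian steps and accept
    with probability min(1, exp(delta log-posterior))."""
    theta = np.array(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_steps, theta.size))
    for i in range(n_steps):
        prop = theta + rng.normal(0.0, step_size, size=theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:  # Metropolis acceptance
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# toy 2D standard-normal posterior standing in for the real one
log_post = lambda t: -0.5 * np.sum(t**2)
rng = np.random.default_rng(0)
chain = metropolis(log_post, [5.0, -5.0], 20_000, 0.8, rng)
samples = chain[1000:]        # discard burn-in steps
mean = samples.mean(axis=0)   # converges toward the true mean (0, 0)
```

After burn-in, the retained samples approximate the posterior regardless of the (deliberately poor) starting point, which is the property exploited when testing sensitivity to the prior.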
For each trial θ, we predict an electron p_T spectrum and DCA_T distributions as

Y(θ) = M^(Y) θ,    D_j(θ) = M^(D)_j θ,

where M^(Y) is a matrix encoding the probability of a hadron at a given p_T yielding an electron at a given electron p_T, and M^(D)_j encodes the probability of yielding an electron at a given DCA_T value in the j-th electron p_T bin. The construction of these matrices using the pythia generator is described in detail in Ref. [11], and includes the decays of charm hadrons (D±, D0, D_s, and Λ_c) and bottom hadrons (B±, B0, B_s, and Λ_b). Additional Monte Carlo generators could be used to construct the matrices, but this would be computationally prohibitive. For the purposes of this analysis, an additional matrix was introduced to model the detector response, mapping the truth p_T and DCA_T values in M^(Y) and M^(D)_j to their reconstructed counterparts, allowing for a direct comparison between the data and the distributions predicted from a given set of trial parameters θ.
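The folding step is a simple matrix-vector product. In this sketch the matrix entries are random stand-ins; in the analysis they are populated from pythia decay kinematics:

```python
import numpy as np

rng = np.random.default_rng(1)
n_hadron, n_electron = 17, 9   # hadron pT bins -> electron pT bins

# hypothetical decay matrix: column h gives the probability that a
# hadron in pT bin h yields an electron in each electron pT bin
M_Y = rng.random((n_electron, n_hadron))
M_Y /= M_Y.sum(axis=0)         # normalize each column to unit probability

theta_c = rng.random(n_hadron)   # trial charm-hadron yields
Y_pred = M_Y @ theta_c           # predicted electron spectrum Y(theta)
```

With columns normalized to unit probability, the folding conserves the total yield; the real matrices additionally encode branching ratios and acceptance, so their columns need not sum to one.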
The predicted spectrum and DCA_T distributions are then used to compute the log-likelihood

ln P(x | θ) = ln P(Y^data | Y(θ)) + Σ_j ln P(D_j^data | D_j(θ)),

where the log-likelihood for the Y^data term is modeled as a multivariate Gaussian with diagonal covariance, while the log-likelihood for the D_j terms is modeled by a multivariate Poisson distribution, with full details provided in Ref. [11]. To constrain the shape of the unfolded charm and bottom spectra, ensuring their smoothness, a regularization term is added to the log-likelihood function:

ln P(x | θ) → ln P(x | θ) − α² ( |L R_c|² + |L R_b|² ),

where R_c and R_b are the ratios of the trial vectors of charm and bottom spectra to the prior. The matrix L is a 17 × 17 discretized second-order finite-difference matrix, effectively corresponding to the second-derivative operator. The addition of this term reduces the log-likelihood for solutions with large curvature, effectively penalizing deviations from smoothness. The optimal regularization strength α is determined by scanning possible parameter values and calculating the total log-likelihood of the unfolding solution in each case, comparing it to the case with no regularization. The optimal value is that which maximizes the log-likelihood, and is found to be α = 1.
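The squared-curvature penalty can be sketched as follows. For simplicity this builds only the interior rows of the second-difference operator (how the boundary rows of the quoted 17 × 17 matrix are handled is not specified in the text):

```python
import numpy as np

def second_difference_matrix(n):
    """Discretized second-derivative operator (interior rows only),
    with the [1, -2, 1] stencil on each row."""
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    return L

def regularization_term(alpha, R_c, R_b, L):
    """Penalty added to the log-likelihood: large curvature in the
    ratio-to-prior vectors lowers the posterior probability."""
    return -alpha**2 * (np.sum((L @ R_c)**2) + np.sum((L @ R_b)**2))

n = 17
L = second_difference_matrix(n)
R_smooth = np.linspace(1.0, 2.0, n)              # linear: zero curvature
R_wiggly = 1.0 + 0.5 * (-1.0)**np.arange(n)       # alternating: large curvature
```

A linear ratio-to-prior incurs no penalty, while a bin-to-bin oscillation is strongly disfavored, which is exactly the smoothness behavior described above.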
The end result of the Monte Carlo exploration of the parameter space is a set of probability distributions for each of the 34 model parameters, corresponding to the value of each bin of the charm and bottom hadron spectra integrated over all rapidities, including the correlations among them, as depicted in Fig. 13(a). The diagonal shows the marginal probability distribution for each of the 17 bins of the charm and bottom hadron spectra. Correlations among bins are shown in the upper triangular [green] area for charm-charm (p_T^c vs p_T^c), the far-right triangular [blue] area for bottom-bottom (p_T^b vs p_T^b), and the lower-left square [orange] area for bottom-charm (p_T^b vs p_T^c). Panels Fig. 13(b) and (d) show the marginal distributions in detail for charm and bottom hadrons in two selected p_T bins. We select the parameter value that maximizes the marginal distribution as the point estimate of the spectrum in each bin; the 16th and 84th quantiles of the distribution are taken as the 1σ uncertainty associated with the point estimate, as indicated by the dotted lines.
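Extracting the point estimate and 1σ band from the posterior samples of one bin can be sketched directly (the lognormal samples here are a hypothetical stand-in for a marginal distribution):

```python
import numpy as np

rng = np.random.default_rng(7)
# hypothetical posterior samples for one bin of the charm spectrum
samples = rng.lognormal(mean=0.0, sigma=0.3, size=50_000)

# point estimate: mode of the marginal distribution, via a histogram
counts, edges = np.histogram(samples, bins=200)
peak = np.argmax(counts)
mode = 0.5 * (edges[peak] + edges[peak + 1])

# 1-sigma band from the 16th and 84th quantiles
lo, hi = np.percentile(samples, [16, 84])
```

For an asymmetric marginal such as this one, the mode and the quantile band capture the skewness that a mean-and-RMS summary would miss.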
Panel Fig. 13(c) shows the joint probability distribution of the charm and bottom hadron yields for the p_T bins in Fig. 13(b) and (d). The shape of the distribution indicates a strong negative correlation between the yields in these bins. Overall, the charm and bottom bins are largely uncorrelated, except at intermediate p_T^b and p_T^c, where a strong negative correlation exists, the two sources contributing similarly in this kinematic region. For correlations among bins within the same hadron spectrum, a very strong positive correlation is seen among neighboring p_T bins, owing to the smoothness requirement imposed on the unfolded spectra via regularization.
The unfolded yields of charm and bottom hadrons can be tested for consistency with the inputs provided, namely the spectrum and DCA_T distributions of inclusive heavy-flavor electrons, by applying the decay matrices to the unfolded result. Figure 14 shows the so-called "refolded" spectra of charm (bottom) electrons in green (blue), along with their sum in red. The refolded inclusive spectrum compares very well with the published spectrum, as shown by the ratio in the bottom panel. Similarly, Fig. 15 shows, for every electron p_T bin, the refolded inclusive electron DCA_T distribution, its charm and bottom components, and the total background DCA_T, obtained as discussed in Sec. III C. The ratios in the bottom panels demonstrate excellent agreement with the DCA_T of measured electrons. The shaded gray region indicates the range over which the DCA_T is used in the unfolding procedure. Note that electrons from bottom decays have a broader DCA_T distribution than those from charm, as expected.

IV. SYSTEMATIC UNCERTAINTIES
The unfolding procedure, as described in Sec. III D, takes the measured electron DCA_T distributions and the published electron spectrum as inputs, along with their corresponding statistical uncertainties, which are propagated to the final result. However, additional sources of systematic uncertainty must be taken into account; we identify the following as the most significant.

The uncertainty associated with the normalization of the individual electron-background components originates from the parameterization of the associated primary-particle spectrum. Each spectrum is repeatedly deformed at random within its own statistical and systematic uncertainties, with a new parameterization obtained at every iteration. The RMS of all parameterizations is then taken as the associated systematic uncertainty. In this manner, a systematic uncertainty exists for every background electron source in the cocktail. Their combined effect on the unfolded result is estimated by running the unfolding procedure for every combination of the individual background normalizations raised and lowered by their associated parameterization uncertainty.

FIG. 15. DCA_T distribution of electron candidates in various p_T bins, along with the contribution from total background electrons (brown) and the refolded electrons from charm (green) and bottom (blue) hadron decays. The sum of these three components is shown in red, and the ratio with the data is shown in the bottom panels.
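The RMS-of-parameterizations idea can be sketched with a toy spectrum. The power-law shape, the 10% point uncertainties, and the log-log linear fit are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
pt = np.linspace(1.0, 6.0, 10)
yield_nom = 100.0 * pt**-5.0      # hypothetical primary-particle spectrum
sigma = 0.1 * yield_nom           # hypothetical 10% point-by-point uncertainty

fits = []
for _ in range(500):
    # deform each point randomly within its uncertainty, then re-fit
    deformed = yield_nom + rng.normal(0.0, sigma)
    # power-law re-parameterization: log y = log A + n log pt
    n, logA = np.polyfit(np.log(pt), np.log(np.abs(deformed)), 1)
    fits.append(np.exp(logA) * pt**n)

# RMS spread of all parameterizations = the systematic uncertainty
rms = np.std(fits, axis=0)
```

The spread of the refitted curves is smaller than the raw point uncertainties because the fit correlates the points, which is the reason for propagating the parameterization rather than the individual points.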
To estimate the p_T-correlated systematic uncertainties associated with the inclusive heavy-flavor-electron spectrum, we deform the shape of the spectrum by tilting and kinking the curve about two pivot points, at p_T = 2.5 GeV/c and p_T = 5.0 GeV/c. The choice of these points is motivated by specific features of the previous analysis that produced the inclusive heavy-flavor-electron spectrum [25], related to the method of background subtraction. Tilting refers to a rotation of the spectrum about one of the two pivots, such that the first and last points move up and down, respectively, by a fraction of their systematic uncertainty. Kinking introduces a deformation whereby the spectrum takes on a "v" shape at the pivot. This procedure resulted in eight variations of the spectrum; those resulting in the largest deviation from the nominal unfolded result were taken as the associated systematic uncertainty.
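The tilt and kink deformations can be sketched as simple linear weights about a pivot. The lever-arm form and the 50% fraction are simplified assumptions for illustration:

```python
import numpy as np

def tilt(spectrum, frac, pivot_idx, direction=+1):
    """Rotate the spectrum about a pivot: the deformation grows
    linearly with signed distance from the pivot (simplified lever)."""
    n = len(spectrum)
    lever = (np.arange(n) - pivot_idx) / (n - 1)
    return spectrum * (1.0 + direction * frac * lever)

def kink(spectrum, frac, pivot_idx, direction=+1):
    """Introduce a 'v' shape: the deformation grows with absolute
    distance from the pivot on both sides."""
    n = len(spectrum)
    lever = np.abs(np.arange(n) - pivot_idx) / (n - 1)
    return spectrum * (1.0 + direction * frac * lever)

spec = 100.0 * np.linspace(1.0, 6.0, 10)**-5.0   # hypothetical spectrum
variations = [f(spec, 0.5, p, d)                  # 2 shapes x 2 pivots x 2 signs
              for f in (tilt, kink)
              for p in (2, 6)                      # two pivot indices
              for d in (+1, -1)]                   # -> 8 variations
```

Both deformations leave the pivot point itself unchanged, so each variation probes a correlated shape change rather than an overall normalization shift.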
Section III D described how the optimal value of the regularization strength α maximizes the total log-likelihood of the unfolded solution. An uncertainty on this value is determined by finding the values of α around the maximum that lead to a decrease of the log-likelihood by half a unit, effectively corresponding to a 1σ uncertainty. The deviations of the unfolded result obtained with these values (α = 0.71 and α = 1.55), relative to the nominal result with the optimal parameter, define the extent of the associated systematic uncertainty.
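The half-unit interval can be sketched with a toy likelihood curve. The parabola-in-log(α) form below is a stand-in chosen for illustration; the paper's interval (0.71, 1.55) comes from the real likelihood:

```python
import numpy as np

def loglike_vs_alpha(alpha):
    """Toy total log-likelihood versus regularization strength,
    peaked at alpha = 1 by construction."""
    return -2.0 * np.log(alpha) ** 2

alphas = np.linspace(0.2, 3.0, 2801)
ll = loglike_vs_alpha(alphas)
best = alphas[np.argmax(ll)]

# 1-sigma interval: values of alpha where the log-likelihood
# has dropped by no more than half a unit from its maximum
inside = ll >= ll.max() - 0.5
alpha_lo, alpha_hi = alphas[inside][0], alphas[inside][-1]
```

For this toy curve the half-unit crossings fall at exp(∓0.5), illustrating how the interval is generally asymmetric about the optimum.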
Finally, a systematic uncertainty is associated with the choice of the prior π(θ). The magnitude of this uncertainty is estimated by selecting a different prior and evaluating the change in the unfolded result. In particular, the heavy-flavor-hadron yields obtained with pythia were scaled by a modified-blast-wave calculation, as described in Ref. [34]. Because a feature of Markov chains, such as the one used in this analysis, is that the probability of reaching a given state is independent of the starting point, the sensitivity to the initial choice of prior is expected to be minimal after a sufficient number of iterations.

Figure 16 shows the relative contribution of each source of uncertainty, as a function of p_T, to the unfolded fraction of electrons from bottom decays. The most significant contribution comes from the unfolding uncertainty, which originates from the statistical uncertainties of the inclusive heavy-flavor-electron spectrum and DCA_T distributions as they are propagated through the unfolding procedure. The next most significant contribution comes from the background-electron cocktail and its normalization, contributing an approximately 10% uncertainty at low p_T. The total systematic uncertainty is obtained by adding the contributions of every source in quadrature.

V. RESULTS

Figure 17 shows the unfolded charm and bottom hadron cross sections, corresponding to the marginal probability distributions along the diagonal of Fig. 13. The uncertainties incorporate both the unfolding uncertainty (which includes the propagation of the total uncertainty in the inclusive heavy-flavor-electron measurements provided as input) and the systematic uncertainties discussed in Sec. IV. The uncertainty band is narrowest in the region where the electron DCA_T measurements constrain the unfolding procedure, namely 1.5 < p_T [GeV/c] < 6.0. As presented, the hadron cross section is integrated over rapidity by construction, following directly from the procedure used to populate the decay matrices used in the unfolding.
Namely, hadrons are simulated in pythia at all rapidities and allowed to decay, recording only the probability of producing an electron within |η| < 0.35. It thus follows that the cross sections in Fig. 17 depend on the hadron rapidity distribution implemented in the pythia generator. This model dependence implies an associated uncertainty, which has not been evaluated because, as previously mentioned, doing so would be computationally prohibitive. Furthermore, the model dependence is reduced when applying the decay matrix to arrive at results in electron space.
To compare the unfolded differential hadron cross sections to existing measurements, we use the pythia generator to calculate the ratio of open-heavy-flavor hadrons of a given species at midrapidity to inclusive-hadron production as a function of p_T. In this manner, the unfolded yield of D0 mesons within |y| < 1 can be compared to a measurement by the STAR collaboration [35] obtained by fully reconstructing the hadron decays, as shown in Fig. 18. The unfolded D0 yield is fit with a modified Hagedorn function, with the ratio of the data to the fit shown in the bottom panel. Good agreement with the results published by STAR is observed within uncertainties.

The model dependence of the unfolded charm and bottom cross sections can be reduced by applying the decay model (that is, multiplying the hadron cross sections by the decay matrices) to obtain the refolded cross section of heavy-flavor-decay electrons in the PHENIX central-arm acceptance. The result can be normalized to obtain the fully invariant differential cross sections shown in Fig. 19, where the b → e curve has been scaled down by a factor of 100 for ease of visualization. Also shown are FONLL pQCD calculations [3], which are in reasonable agreement with both the charm and bottom cross sections within uncertainties. The large uncertainties in FONLL are driven by variations in the factorization and renormalization scales, as well as by uncertainties in the heavy-quark masses and the parton distribution functions used (CTEQ6.6 in this case, the default in [3]). Like other heavy-flavor measurements at RHIC, the results presented here are higher than the central FONLL calculation. However, the agreement with the central FONLL prediction improves at high p_T, where the effects of the quark mass in the calculation become less significant.
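A fit of this kind can be sketched as follows. The functional form below is one common "modified Hagedorn" parameterization; the exact form and parameters used in the analysis are not quoted in the text, so both are assumptions here, and the data points are pseudo-data:

```python
import numpy as np

def modified_hagedorn(pt, A, a, b, p0, n):
    """One common modified-Hagedorn form (assumed):
    A / (exp(-a*pt - b*pt**2) + pt/p0)**n"""
    return A / (np.exp(-a * pt - b * pt**2) + pt / p0) ** n

# hypothetical best-fit parameters and pseudo-data with 5% scatter
pars = (10.0, 0.5, 0.1, 1.0, 8.0)
pt = np.linspace(0.5, 6.0, 12)
rng = np.random.default_rng(5)
data = modified_hagedorn(pt, *pars) * rng.normal(1.0, 0.05, size=pt.size)

ratio = data / modified_hagedorn(pt, *pars)   # the data-to-fit ratio panel
```

The ratio-to-fit presentation, as in the bottom panel of Fig. 18, makes percent-level deviations visible on a spectrum that itself spans several orders of magnitude.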
The ratio of electrons from bottom to inclusive heavy-flavor decays, b → e/(c → e + b → e), can be constructed from the electron cross sections, and is shown in Fig. 20. In this measurement, electrons from the feed-down decay b → c → e are considered part of the bottom-electron sample. The contribution of bottom decays to the inclusive electron sample increases sharply with p_T, coming to dominate over that of charm quarks above p_T ≈ 4 GeV/c. The solid gray line corresponds to the FONLL calculation, with its uncertainty depicted by dashed gray lines. The measured bottom-electron fraction is consistent with the FONLL calculation within uncertainties. In particular, good agreement with the central FONLL value is seen below p_T ≈ 3 GeV/c, with the measured fraction rising slightly above it at higher p_T.

FIG. 18. Unfolded differential cross section of D0 mesons at midrapidity, |y| < 1, compared to a corresponding measurement by the STAR experiment [36] obtained by direct reconstruction of the hadron decays.

Figure 21 shows a comparison of the unfolded bottom-electron fraction with earlier measurements made by the STAR [12] and PHENIX [37] collaborations using electron-hadron and electron-D correlations. It is apparent that the size of the dataset, combined with the unfolding method used in the present analysis, provides a result with smaller total uncertainty and significantly extended kinematic reach at low p_T. Furthermore, the unfolded result provides a more direct determination of the bottom-electron fraction since, unlike the earlier measurements, it does not rely on model-dependent pythia templates of the event kinematics.

FIG. 21. Fraction of bottom electrons obtained with the unfolding procedure (red), compared to previous measurements by PHENIX [37] and STAR [12] using electron-hadron correlations. Also shown is a comparison to a pQCD FONLL calculation [3].
Given the uncertainties, the hypothesis that the two measurements agree could not be rejected. In short, both the unfolded and the STAR bottom-fraction measurements are consistent within uncertainties.
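The construction of the bottom-electron fraction from the two refolded cross sections can be sketched with toy power-law spectra. The shapes below are hypothetical, chosen only so that the harder bottom spectrum overtakes charm within the measured range (here near p_T ≈ 3 GeV/c in this toy, versus ≈ 4 GeV/c in the measurement):

```python
import numpy as np

pt = np.linspace(1.5, 6.0, 9)      # electron pT bins used in the unfolding
# hypothetical refolded electron cross sections (arbitrary units):
c_to_e = 50.0 * pt**-5.0           # softer charm spectrum
b_to_e = 5.0 * pt**-3.0            # harder bottom spectrum

# bottom-electron fraction b->e / (c->e + b->e)
frac_b = b_to_e / (c_to_e + b_to_e)
```

Because charm falls more steeply than bottom, the fraction rises monotonically with p_T and crosses 0.5 where the two contributions are equal, mirroring the behavior seen in Fig. 20.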

VI. SUMMARY
We have reported a new measurement of the invariant differential production cross sections of electrons from charm and bottom hadron decays in p+p collisions at √s = 200 GeV. The measurement proceeds via an unfolding analysis in which the yield of open-heavy-flavor hadrons is inferred from the inclusive heavy-flavor-electron spectrum and the electron DCA_T distributions measured with the PHENIX silicon-vertex detector. The individual yields of charm and bottom electrons, as well as the bottom-electron fraction, are found to be consistent with FONLL calculations. This measurement provides a precision baseline for future heavy-flavor-separation analyses. In particular, forthcoming PHENIX results using a high-statistics Au+Au dataset promise to reduce current uncertainties and shed light on the centrality dependence of charm and bottom suppression.