Long Lived Particle Searches in Heavy Ion Collisions at the LHC

We show that heavy ion collisions at the LHC provide a promising environment to search for new long lived particles. A main advantage lies in the possibility to operate the main detectors with lower trigger thresholds, which can increase the number of observable events by orders of magnitude if the long lived particles are produced with low transverse momentum. If the LHC is operated with Pb nuclei, this gain is insufficient to overcome the suppression due to the lower instantaneous luminosity compared to proton runs, but for lighter nuclei a higher sensitivity per running time can be achieved than in proton collisions. We illustrate this explicitly for heavy neutrino searches in the Neutrino Minimal Standard Model. In less minimal models with a more complicated event topology, the absence of pile-up provides another key advantage of heavy ion collisions, because it avoids the problem of vertex mis-identification. This provides strong motivation to further explore the possibility of searching for New Physics in heavy ion collisions.


I. INTRODUCTION
The Large Hadron Collider (LHC) at CERN was built for three main reasons: 1) to unveil the mechanism that breaks the electroweak symmetry, 2) to search for new elementary particles that can help to resolve open problems in particle physics and cosmology and 3) to study the properties of the quark-gluon plasma (QGP) at high collision energies. The primary goal has been achieved with the discovery of a scalar boson in 2012 [1,2]. Its properties, as measured to date, are in good agreement with those of the Higgs boson in the standard model (SM), suggesting that the Brout-Englert-Higgs mechanism [3][4][5] is responsible for the spontaneous electroweak symmetry breaking in the SM. In parallel to that, heavy ion collisions at the LHC have helped to considerably improve the understanding of the QGP at high collision energies [6]. However, contrary to expectations based on the naturalness paradigm, no new elementary particle other than the Higgs boson has been found to date. This has given rise to the concern that conventional searches may have looked in the wrong place, and CERN is currently investing considerable effort into alternative pathways [7].
One possibility to explain the absence of New Physics signatures in conventional collider searches could be that the new elementary particles that address open problems such as Dark Matter, neutrino masses or the baryon asymmetry of the universe have escaped detection not because they are too heavy, but because they are only feebly coupled to the SM. The feeble coupling can suppress the decay rate of the new particles and give them a macroscopic lifetime. Such long lived particles (LLPs) appear in a wide range of models of physics beyond the SM [8]. They can owe their longevity to small coupling constants, a heavy mediator, a small mass gap to the daughter particle or a combination of all of these. LLPs can give rise to striking displaced vertex or displaced track signatures. Recently several proposals have been made to improve the sensitivity of the LHC to LLPs by adding new detectors, including the recently approved FASER experiment [9] and other proposed dedicated detectors, such as MATHUSLA [10][11][12], CODEX-b [13] and Al3X [14]. In reference [15] an alternative strategy was proposed, exploring the idea that searches for LLPs in the heavy ion collisions that are performed to study the QGP can help to fully exploit the discovery potential of the existing detectors. The displacement makes it possible to distinguish the signal from the many-track environment that is created in a heavy ion collision, because all tracks from primary SM interactions originate from within the microscopic volume of the two colliding nuclei. In the present article we provide details and updates of the analysis presented in reference [15].

II. HEAVY ION COLLISIONS AT THE LHC
For equal integrated luminosity and equal center-of-mass energy per nucleon, heavy ion collisions guarantee larger hard-scattering cross sections than pp collisions, thanks to the enhancement factor of ≈ A² in the number of parton level interactions, where A is the mass number of the isotope under consideration. In the case of lead isotopes (208Pb) accelerated in the LHC, A = 208 provides an enhancement of four orders of magnitude. There are, however, several drawbacks. 1) The center-of-mass energy per nucleon that can be reached in heavy ion collisions is lower than in pp collisions. The resulting reduction of the cross section depends on the particle masses in the final state of the hard process under consideration [17], and it is typically larger for gluon-initiated processes than for quark-antiquark collisions. For instance, for tt production there is a drop of an order of magnitude between 14 TeV and 5.52 TeV [18] due to the large mass of the top quark. However, the W-boson production cross section per nucleon is only reduced by a factor of around 2.5 (cf. Table I). The reduction factor is around 2.3 for B mesons [19][20][21], which are lighter but whose production at these LHC energies is mostly initiated by gluon-gluon fusion (as opposed to W bosons, which are mostly created by quark-antiquark annihilation). 2) Heavy ion collisions are characterized by a very large particle multiplicity, and in particular a very large multiplicity of charged-particle tracks, which poses challenges for data acquisition and data analysis at the multipurpose LHC experiments. While a large track multiplicity generates a strong background for prompt signatures, the decay of feebly interacting LLPs produces displaced tracks at macroscopic distances from the interaction point that can easily be distinguished from the tracks that originate from the primary interaction at the collision point.
3) The instantaneous luminosity in heavy ion runs is limited to considerably lower values compared to pp collisions, cf. Table II, which limits the amount of data available when searching for rare phenomena. We discuss these luminosity limitations in some detail in Section IV. 4) Heavy ion runs at the LHC are relatively short: not more than one month is allocated in the yearly schedule, as opposed to around six months in the pp case. This is not a fundamental restriction, and one can imagine that the sharing of time may change in the future depending on the priorities of the LHC experiments. In the following we compare the sensitivity for equal running time, given a realistic instantaneous luminosity, in order to remain independent of possible changes in the planning.
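The interplay of the A² enhancement and the cross-section reduction at lower collision energy can be illustrated with a quick estimate; the reduction factors below are the approximate values quoted above, and the function name is illustrative:

```python
# Back-of-the-envelope yield enhancement in AA relative to pp collisions at
# equal integrated luminosity, combining the A^2 factor with the approximate
# cross-section reduction factors quoted in the text.

def aa_over_pp(A: int, reduction: float = 1.0) -> float:
    """Naive event-count ratio N_AA / N_pp per unit integrated luminosity."""
    return A**2 / reduction

print(f"PbPb (A=208), W bosons: {aa_over_pp(208, 2.5):.1e}")   # ~1.7e4
print(f"PbPb (A=208), B mesons: {aa_over_pp(208, 2.3):.1e}")   # ~1.9e4
print(f"ArAr (A=40),  W bosons: {aa_over_pp(40, 2.5):.1e}")    # ~6.4e2
```

As the numbers show, the naive per-luminosity gain remains large even after the energy-dependent reduction; the limiting factor is instead the achievable luminosity, discussed in Section IV.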
On the other hand, there are key advantages in heavy ion collisions.
i) The number of parton level interactions per collision is larger. ii) The probability of mis-identifying the primary vertex is practically negligible in heavy ion collisions. This is in contrast to the pile-up that one has to face when colliding high intensity proton beams, which leads to tracks that originate from different points in the same bunch crossing and creates a considerable background for displaced signatures. Hence, heavy ion collisions provide a much cleaner environment to search for signatures stemming from the decay of LLPs, cf. Figure 1. iii) The lower instantaneous luminosity can enable ATLAS and CMS to significantly lower their trigger thresholds, in particular for clean analysis objects such as muons. This, e.g., allows searches for signatures with comparably low transverse momentum pT (see for example [23]), which is particularly interesting in scenarios involving light mediators. iv) Heavy ion collisions can offer entirely new production mechanisms that are absent in proton collisions. Ultraperipheral heavy ion collisions generate strong electromagnetic fields that can drastically increase the production cross section for some exotic states that couple to photons, as emphasised in recent publications on monopoles [24] and axion like particles [25]. It has also been suggested that thermal processes in the QGP can help to produce a sizeable number of exotic states. We do not explore this effect in the present work; a list of references can e.g. be found in reference [26].

Table II: Luminosity parameters for the pessimistic (p = 1), realistic (p = 1.5) and optimistic (p = 1.9) scaling assumptions (cf. equation (9)). L0 is the peak luminosity, τb the optimal beam lifetime, and Lave the optimized average luminosity. The last column contains the ratio between the number of events N = LσW in NN and pp production, where L is the integrated luminosity (cf. Section IV) and σW is given in Table I. Following [16] we use an optimistic turnaround time of 1.25 h, which we compensate in the case of heavy ion collisions by assuming that the useful run time is only half of the complete run time.
This article presents an illustrative study with an analysis strategy based entirely on aspects i) and iii). The effect of point ii) is model dependent and is explained in more detail in Section III. A detailed quantitative analysis of the effects deriving from aspect ii) goes beyond the scope of the present article, whose main purpose is to point out the potential of heavy ion collisions.

III. TRACK AND VERTEX MULTIPLICITIES
Historically, heavy ion collisions have been considered an overly complicated environment, unsuitable for precise measurements of particle properties or searches for rare phenomena, because of their large final-state particle multiplicity compared to pp collisions. However, due to the high pile-up expected during Run 4 in pp collisions, the track multiplicity is expected to become comparable for pp and PbPb collisions, and even smaller for lighter ion beams [27].
In PbPb collisions, hard-scattering signals are more likely to originate in the most central events, where up to around 2 000 charged particles are produced per unit of rapidity at √sNN = 5.52 TeV [28], meaning that around 10 000 tracks can be found in the tracking acceptance of the multi-purpose experiments ATLAS and CMS. In contrast, pp collisions during standard runs in 2017 were typically overlaid by about 30 pile-up events, each adding about 25 charged particles on average within the tracking acceptance of the multi-purpose detectors [29][30][31], meaning that ≈ 750 charged particles per event come from pile-up. This is not expected to increase by a large factor until the end of Run 3. The HL-LHC will bring a big jump: current projections assume that, in order to accumulate 3 000 fb⁻¹ as planned, each bunch crossing will be accompanied by about 200 pile-up events [32,33], meaning 5 000 additional charged particles per hard-scattering event.
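The multiplicity bookkeeping above amounts to the following arithmetic (all numbers are the approximate values quoted in the text):

```python
# Charged-track bookkeeping per bunch crossing, using the numbers quoted
# in the text (all approximate).
tracks_central_pbpb = 10_000          # central PbPb event, tracking acceptance

pileup_run2, pileup_hllhc = 30, 200   # pp pile-up events per bunch crossing
tracks_per_pileup = 25                # charged particles per pile-up event

print(pileup_run2 * tracks_per_pileup)    # 750  (2017 pp conditions)
print(pileup_hllhc * tracks_per_pileup)   # 5000 (HL-LHC projection)
print(tracks_central_pbpb / (pileup_hllhc * tracks_per_pileup))  # ~2x
```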
In conclusion, in the HL-LHC era the difference in track multiplicity between PbPb and pp collisions will reduce to a mere factor of two. A lot of ingenuity has been invested by the major LHC experiments in recent years to overcome the issues deriving from such a large track multiplicity [34,35]. In addition to the planned detector upgrades, all particle reconstruction and identification algorithms have been made more robust and optimized for a regime of very large multiplicities, and these efforts automatically benefit the analysis of heavy ion data as well. Although a very large track multiplicity is expected to degrade the reconstruction and identification of displaced vertices, the adverse effect of pile-up on vertex-finding performance stems more from the presence of additional primary-interaction vertices than from the sheer number of tracks. This is demonstrated for example by the comparison of b-tagging performance in pp and pPb collisions in tt studies [36]: using the same algorithm as the standard pp analysis, and an equal efficiency of correctly tagging b-quark-initiated jets, the misidentification rate of light jets is smaller in pPb events (0.1 % vs. 0.8 %) in spite of the larger track multiplicity. Although those algorithms will have to be retuned to recover a comparable efficiency in the more extreme conditions of high-centrality PbPb collisions, we take this as an indication that to first order pile-up affects displaced-particle performance more than track multiplicity does. A dedicated b-tagging retuning was for example performed for the conditions of the PbPb run of 2015, demonstrating an acceptable efficiency versus purity even for the most central collisions [37], but a new dedicated tuning is necessary after any tracking detector upgrade. Similar qualitative considerations apply to algorithms for the reconstruction of long-lived particles.

IV. AVERAGE INSTANTANEOUS LUMINOSITY
The maximum luminosity achievable in heavy ion collisions is constrained by multiple factors.

1) Technical limits are set by the injector performance. 2) The cross section per nucleon is increased compared to pp collisions. This results in a more rapid decline of the beam intensity. Moreover, most of the interactions are unwanted electromagnetic interactions caused by the stronger electromagnetic fields and soft hadronic processes, i.e., electromagnetic dissociation (EMD) and bound-free pair production (BFPP), cf. e.g. references [38][39][40] and references therein for details. The change of the mass-to-charge ratio caused by these processes leads to secondary beams that can potentially quench the LHC magnets. This problem was only recently mitigated for ATLAS and CMS by directing the secondary beams between magnets, while a special new collimator is required for ALICE [41,42]. 3) Collecting the maximum rate of events that the LHC can deliver is not necessarily ideal for all the experiments. For instance, the ALICE experiment is limited in the amount of data that it can acquire by the repetition time of its time projection chamber [43]; thus the instantaneous luminosity is levelled at its interaction point by adjusting the horizontal separation between the bunches. Similarly, the LHCb experiment only uses about 10 % of the available beam intensity [44].
The upper limit on the achievable instantaneous luminosity depends on the charge Z and mass number A of the accelerated nuclei in a complicated manner and is currently under investigation. For the purpose of the present article we use the numbers presented in Table II, which are computed based on estimates presented at a recent HL-LHC workshop [16], cf. also [22]. In the following we briefly summarise how we used these data. The instantaneous luminosity at one interaction point (IP) scales according to [45]

  L(t) ∝ n_b N_b(t)² ,  (1)

where n_b is the number of bunches per beam and N_b is the number of nucleons per bunch. The decay of the beam due to interactions follows

  dN_b(t)/dt = − n_IP σ_tot L(t)/n_b = − N_b(t)²/(N_0 τ_b) ,  (2)

where n_IP is the number of interaction points, σ_tot is the total cross section, N_0 = N_b(0) is the initial intensity and

  τ_b = n_b N_0/(n_IP σ_tot L_0)  (3)

is the beam lifetime. Here L_0 is the initial luminosity. Therefore, the number of nucleons per bunch decays according to

  N_b(t) = N_0/(1 + t/τ_b) .  (4)

The evolution of the instantaneous luminosity L(t) and integrated luminosity 𝓛(t) are then

  L(t) = L_0/(1 + t/τ_b)² ,  (5)
  𝓛(t) = ∫₀ᵗ L(t') dt' = L_0 τ_b t/(t + τ_b) .  (6)

The turnaround time t_ta is the average time between two physics runs. Therefore, the average luminosity is

  L_ave(t) = 𝓛(t)/(t + t_ta) ,  (7)

which is maximized for

  t_opt = √(τ_b t_ta) .  (8)

Finally the average luminosity for the optimal run time is

  L_ave(t_opt) = L_0/(1 + √(t_ta/τ_b))² .  (9)

Additionally, the initial bunch intensity follows roughly

  N_0 ≃ N_0^Pb (Z_Pb/Z)^p ,  (10)

where the exponent p characterises how the number of nucleons per bunch scales with the charge of the isotope. For a given isotope, it is limited by the heavy-ion injector chain, the bunch charges and intra-beam scattering. Simple estimates based on fixed target studies with Ar beams suggest that 1 ≲ p ≲ 1.9 is realistic [16].
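The beam-lifetime and run-optimization relations described above can be sketched numerically; the input values below are placeholders for illustration, not actual machine parameters:

```python
import math

# Numerical sketch of the luminosity run-optimization relations described
# in the text; all input values are illustrative placeholders.
def beam_lifetime(n_b, N0, n_IP, sigma_tot, L0):
    """tau_b = n_b * N0 / (n_IP * sigma_tot * L0)."""
    return n_b * N0 / (n_IP * sigma_tot * L0)

def optimal_run_time(tau_b, t_ta):
    """Run time that maximizes the average luminosity: sqrt(tau_b * t_ta)."""
    return math.sqrt(tau_b * t_ta)

def average_luminosity_opt(L0, tau_b, t_ta):
    """Average luminosity at the optimal run time: L0 / (1 + sqrt(t_ta/tau_b))^2."""
    return L0 / (1.0 + math.sqrt(t_ta / tau_b)) ** 2

# Example: a 10 h beam lifetime with the 1.25 h turnaround time quoted in the text.
tau_b, t_ta = 10.0, 1.25
print(f"optimal run time: {optimal_run_time(tau_b, t_ta):.2f} h")
print(f"average / peak luminosity: {average_luminosity_opt(1.0, tau_b, t_ta):.2f}")
```

For these illustrative numbers the optimal run lasts about 3.5 h and the average luminosity reaches roughly half of the peak value, showing how a short turnaround time pays off most for short beam lifetimes.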

V. AN EXAMPLE: HEAVY NEUTRINOS
In the following we use the example of heavy neutrinos with masses below the electroweak scale that interact with the SM exclusively through their mixing with ordinary neutrinos to illustrate the potential of New Physics searches in heavy ion collisions. This is an extremely conservative approach for two reasons. First, we do not take advantage of any of the new production mechanisms that the strong electromagnetic fields or the QGP offer in comparison to proton collisions, cf. point iv). Second, we do not take advantage of the lack of pile-up, aspect ii), which we do not expect to play a major role in the minimal seesaw model considered here. This point can, however, give heavy ion collisions a crucial advantage over proton runs in searches for signatures with a more complicated topology than the decays shown in Figure 2. In the context of heavy neutrinos this could e.g. be the case in left-right symmetric models [46][47][48] where decays mediated by Majorons can lead to pairs of displaced vertices [49].
Right handed neutrinos ν_R appear in many extensions of the SM. The implications of their existence strongly depend on their mass M, and they could explain several open puzzles in cosmology and particle physics, cf. e.g. [50]. Most notably they can explain the light neutrino masses via the type-I seesaw mechanism [51][52][53][54][55][56], which requires one flavour of ν_R for each non-zero neutrino mass in the SM. In addition they may explain the baryon asymmetry of the universe via leptogenesis [57], act as Dark Matter candidates [58] or address various anomalies observed in neutrino oscillation experiments [59]. The minimal extension of the SM with right handed neutrinos can be obtained by adding all renormalisable operators that only contain the ν_R and SM fields to the SM Lagrangian,

  ℒ = ℒ_SM + i ν̄_R γ_µ ∂^µ ν_R − ( F_a L̄_a ε φ* ν_R + (M/2) ν̄_R^c ν_R + h.c. ) .  (11)

Here φ is the SM Higgs doublet, the L_a are the SM lepton doublets, the F_a are Yukawa coupling constants and ε is the antisymmetric SU(2) tensor. Here we work in a simple toy model with only a single flavour of ν_R, which is sufficient because the displaced vertex signature does not rely on interference effects amongst different neutrinos or correlations between their parameters. The heavy neutrino interactions with the SM can be described by the mixing angles θ_a = v F_a/M, which characterise the relative suppression of their weak interactions compared to those of the light neutrinos. The Lagrangian that describes the interaction of the heavy neutrino mass eigenstate N ≃ ν_R + θ_a ν^c_{La} + c.c. with the SM reads

  ℒ_int = − (m_W/v) θ_a* N̄ γ^µ e_{La} W⁺_µ − (m_Z/(√2 v)) θ_a* N̄ γ^µ ν_{La} Z_µ − (M/(√2 v)) θ_a h ν̄_{La} N + h.c. ,

where v ≃ 174 GeV is the Higgs field expectation value in vacuum and h the physical Higgs field after spontaneous breaking of the electroweak symmetry. If the heavy neutrinos approximately respect a generalised B − L symmetry [68], then the U_a² ≡ |θ_a|² can be large enough [69,70] to produce sizeable numbers of heavy neutrinos in collider experiments.
The number of displaced vertex events with a lepton of flavour a at the first vertex and a lepton of flavour b from the second vertex which can be seen in a detector can then be estimated as

  N_d^{ab} = 𝓛 σ_ν U_a² (U_b²/U²) f_cut [ exp(−l_0/λ_N) − exp(−l_1/λ_N) ] ,  (12)

where σ_ν is the heavy neutrino production cross section in W decays for unit mixing. Here l_1 is the length of the effective detector volume in a simplified model of a spherical detector, l_0 the minimal displacement that is required by the trigger, λ_N = βγ/Γ_N is the particle decay length, where Γ_N is the heavy neutrino decay width, β is the heavy neutrino velocity and βγ = |p|/M the usual Lorentz factor, U² = Σ_a U_a² is the total mixing and f_cut ∈ [0, 1] is an overall efficiency factor that parameterises the effects of cuts due to triggers, deviations of the detector geometry from a sphere and detector efficiencies. The analytic formula (12) allows for an intuitive understanding of the sensitivity curves obtained from simulations, cf. Figure 4. As illustrated in Figure 5 it can reproduce the results of simulated data surprisingly well.
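The decay-probability factor in the simplified detector model (12) follows from the exponential decay law; it can be sketched as follows, with an illustrative function name and illustrative input values:

```python
import math

# Sketch of the simplified spherical-detector estimate: the factor
# exp(-l0/lam) - exp(-l1/lam) is the probability that the heavy neutrino
# decays between the minimal displacement l0 and the effective volume edge l1.
def n_displaced(lumi_times_sigma, U2_eff, f_cut, lam, l0, l1):
    """Expected number of displaced-vertex events (all inputs illustrative)."""
    p_decay = math.exp(-l0 / lam) - math.exp(-l1 / lam)
    return lumi_times_sigma * U2_eff * f_cut * p_decay

# For a decay length much longer than the detector, p_decay ~ (l1 - l0) / lam:
print(n_displaced(1e6, 1e-8, 0.1, lam=10.0, l0=0.02, l1=0.20))
```

The long-lifetime limit makes the U⁴-like scaling of the event rate transparent: one factor of U² from production and, via λ_N ∝ 1/U², another from the decay probability inside the detector.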
One may wonder whether the heavy neutrinos can leave the dense plasma that surrounds the collision point. Intuitively this should clearly be the case, because the scattering cross section of heavy neutrinos is suppressed by a factor ∼ U² compared to that of ordinary neutrinos. For a slightly more quantitative estimate, we can estimate the mean free path λ_T of the relativistic heavy neutrinos of energy ω_p ≃ m_W/2 that are produced in real gauge boson decays as λ_T ≃ Γ_T⁻¹, where Γ_T is the thermal damping rate in a plasma of temperature T. In this regime it is known that Γ_T ≲ U² T²/ω_p [109]. We can therefore estimate

  λ_T ≳ ω_p/(U² T²) ,  (13)

which is always much larger than a few tens of fm.
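Plugging representative numbers into the mean-free-path estimate above makes the hierarchy explicit; the mixing, temperature and energy values below are illustrative:

```python
# Numeric check of the mean-free-path estimate lam_T ~ omega_p / (U^2 T^2):
# even for a large mixing U^2 = 1e-8 and a QGP temperature of a few hundred
# MeV, lam_T exceeds the ~10 fm fireball by many orders of magnitude.
GEV_INV_TO_FM = 0.1973   # hbar*c in GeV*fm, converts GeV^-1 to fm

U2 = 1e-8                # heavy-neutrino mixing (illustrative)
T = 0.3                  # plasma temperature in GeV (illustrative)
omega_p = 40.0           # heavy-neutrino energy ~ m_W / 2 in GeV

lam_fm = omega_p / (U2 * T**2) * GEV_INV_TO_FM
print(f"lam_T ~ {lam_fm:.1e} fm")   # vastly larger than the fireball size
```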
Since f_cut is largest for muons, in the following we concentrate on the case b = µ. Moreover, we employ the simplified assumption U² = U_µ². The expression (12) then further reduces to

  N_d = 𝓛 σ_ν U² f_cut [ exp(−l_0/λ_N) − exp(−l_1/λ_N) ] .  (14)

A. Heavy neutrinos from W boson decay

Event generation
We first study the prospects to find heavy neutrinos produced in the decay of W bosons in a displaced vertex search. Our treatment of the detector closely follows that in reference [71], but we have adapted the simulation of the production for different colliding isotopes. We calculate the Feynman rules for the Lagrangian (11) with FeynRules 2.3 [110], using the implementation [111] that is based on the computations in references [64,112]. We then generate events for the processes shown in Figure 2 with MadGraph5_aMC@NLO 2.6.4 [113], which we have extended to be able to simulate heavy ion collisions. This allows us to use published PDFs [114] for the simulation of lead collisions. We use MadWidth [115] to calculate the N decay width and simulate the decays with MadSpin [116,117]. An estimate of the lifetime neglecting hadronic resonances is given in Figure 3. We finally shower and hadronize the colored particles with Pythia 8.2 [118]. We calculate the detector efficiencies of the CMS detector using our own code based on public information on the detector geometry. Most importantly, we use for the extension of the tracker 1.1 m and 2.8 m in the transverse and longitudinal directions, respectively. In [71] it has been shown that in pp collisions the expected performance of the ATLAS detector is comparable to that of the CMS detector for this search strategy. We expect the same to be true in heavy ion collisions.
We search in event samples that have either been triggered by a single muon or by a pair of muons. The minimal transverse momentum p_T of the muons used for the pair trigger can be softer than in the single muon trigger. For the single muon trigger we require p_T = 25 GeV. For the tagging and tracking efficiencies we use the values as found in the DELPHES 3.4.1 [119] detector cards. In order to reduce the background from long lived SM hadrons we require that the secondary vertices have a minimal displacement l_0 of 5 mm. In order to suppress further backgrounds, in particular from nuclear interactions of hadrons produced in the primary collisions with the detector material, we require at least 2 displaced tracks with an invariant mass of at least 5 GeV in the reconstruction of the displaced vertices. The reconstruction efficiency is near 100 % if the produced particles traverse the entire tracker. If a particle traverses only a fraction of the tracker the efficiency is reduced. We adapt a ray tracing [120,121] method to compute the particle's trajectory and use the length of the remaining path within the tracking system as the criterion to estimate the vertex reconstruction efficiency. It has recently been shown in reference [122] that the detection efficiency drops only linearly with the displacement if advanced algorithms are used. We adopt this functional dependence and assume that the maximal displacement that can still be detected can be improved by a factor 2 if optimized algorithms are used.

Figure 5: Ratio between the simplified detector model (12) and the number of events predicted by the simulations performed in Section V A, for PbPb (L = 5 nb⁻¹), ArAr (pessimistic and optimistic) and pp, respectively. The displayed region corresponds to parameter values where the model (12) predicts more than 0.1 events.
We fix the integrated luminosity in PbPb runs to 5 nb −1 , a realistic value for one month in the heavy ion program. We then use the relations presented in Section IV to estimate the integrated luminosity that could be achieved with Ar in the same period as 0.5 and 5 pb −1 for pessimistic and optimistic assumptions for the scaling behaviour, respectively. For protons we use 50 fb −1 .

Backgrounds
Following the approach in reference [71], we work under the assumption that the SM background can be efficiently suppressed by the cuts on the invariant mass and the displacement, cf. Figure 3. Quantifying the remaining backgrounds would require a very realistic simulation of the whole detector. These include cosmic rays and beam-halo muons, which occur only at a low rate in the experimental caverns and can mostly be recognised [123], as well as scattering of SM neutrinos from the collision point with the detector, which have a low cross section for charged-current interactions in the detector material. In summary, we assume that the number of background events is smaller than one and perform a (under this assumption) conservative statistical analysis with one background event, using the non-observation of four events and the observation of nine events for exclusion and discovery, respectively.
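The quoted event thresholds can be cross-checked with a quick Poisson computation under the stated one-background-event assumption; this is a sketch, not the statistical analysis actually performed:

```python
import math

# Poisson cross-check of the event thresholds quoted above, assuming a
# single expected background event.
def poisson_sf(n_obs: int, mean: float) -> float:
    """P(N >= n_obs) for a Poisson distribution with the given mean."""
    return 1.0 - sum(math.exp(-mean) * mean**k / math.factorial(k)
                     for k in range(n_obs))

# With b = 1 expected background event:
print(poisson_sf(4, 1.0))   # chance that background alone yields >= 4 events
print(poisson_sf(9, 1.0))   # ~1e-6, i.e. close to a 5 sigma fluctuation
```

Background alone fakes four or more events only about 2 % of the time, while nine or more events would be a roughly one-in-a-million fluctuation, consistent with using these counts for exclusion and discovery, respectively.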

Results
We present our results in Figure 6. It shows that the suppression of the number of events due to the reduced instantaneous luminosity of heavy ion runs in comparison to proton runs overcompensates the A 2 enhancement per collision, i.e. point i), so that Pb collisions are clearly not competitive. For lighter nuclei like Ar the perspectives are somewhat better, as the expected number of events per unit of running time is only about an order of magnitude smaller than in proton runs. If the heavy neutrinos have mixing angles slightly below the current experimental limits, then they would first be discovered in proton collisions, but heavy ion collisions would still offer a way to probe the interactions of the new particles in a very different environment. For heavy neutrinos that are produced in W boson decays, the sensitivity is only marginally increased when lowering trigger thresholds, i.e. point iii), because most µ ± from the primary vertex have p T > 25 GeV due to the mass of the W boson.

B. Heavy neutrinos from B meson decays
The situation is very different for heavy neutrinos produced in B meson decays. The cut-off in sensitivity along the M axis in this case is not determined by the fact that the N decays too quickly to give a displaced vertex signal, but by kinematics: the production cross section exhibits a sharp cut when M approaches the B meson mass m_B. Since this cut occurs in a mass range where the expression (12) suggests that the sensitivity should still improve when increasing M, cf. Figure 4, we expect that one can achieve maximal sensitivity just below the threshold. This means that the sensitivity is maximal in a region where the momenta in the B meson rest frame of both the N and the µ± that is produced along with it are much smaller than m_B. The p_T distribution of B mesons in the laboratory frame peaks around 3 GeV. As a result, the vast majority of µ± have p_T well below standard p_T cuts, cf. Figure 7. Hence, there is an enormous potential for improving the sensitivity if one can lower the triggers on the primary muon p_T. For B meson induced processes in heavy ion collisions we assume a trigger threshold of 3 GeV, which roughly corresponds to the kinematic limit dictated by the magnetic bending and the geometry of tracking detectors.

Figure 6: Sensitivity of the CMS detector for heavy neutrinos produced in W decays in PbPb (solid red), ArAr (green hashed band), and pp (dashed blue) collisions with the luminosities indicated in the plot. These roughly correspond to equal running time of a month. The left and right panels correspond to exclusion (9 events) and discovery (25 events), respectively. The results are based on a simulation of W induced processes using MadGraph5_aMC@NLO with the parameters described in Section V A. The green band reflects the current uncertainty in the beam intensity that can be achieved in ArAr collisions. The grey areas represent the exclusion limits of the former experiments NuTeV [124], CHARM [125], DELPHI [126] and CMS [127]. Light neutrino oscillation data and the baryon asymmetry of the universe can be explained in the entire white part of the plot if there are at least three flavours of heavy neutrinos [128].(a) We do not display a constraint on the mixing angle from the requirement to generate the light neutrino masses because light neutrino oscillation data only imposes a lower bound on the mixing of an individual heavy neutrino species if one makes additional model dependent assumptions [129]. We expect a comparable result for the ATLAS detector.

(a) If there are only two heavy neutrinos, then baryogenesis roughly speaking requires (U²/10⁻⁵)(M/GeV) ≲ 1 [92,130,131] and is only possible for specific flavour mixing patterns [92,132,133]. These restrictions are lifted for three or more right handed neutrino flavours [128].
The production of heavy neutrinos in B meson decays cannot be simulated in the same way as described in Section V A. A detailed simulation of N production from B mesons and their decay is technically challenging and goes beyond the scope of this work, the main purpose of which is to estimate the order of magnitude of the sensitivity that can be reached in heavy ion runs. We therefore resort to a modification of the simplified detector model (12) to determine the number of events,

  N_d = (1/9) 𝓛 σ_B U² f_cut [ exp(−l_0/λ_N) − exp(−l_1/λ_N) ] .  (15)

Here σ_B is the B meson production cross section and the factor 1/9 accounts for the branching ratio of the decay into final states including neutrinos.

Matching the simplified detector model to simulations
We determine the parameters l_0, l_1 and f_cut in the model (15) by fitting the simplified detector model (12) to the results of our simulations for N production in W decays shown in Figure 6. This corresponds to modelling the LHC detectors ATLAS or CMS as spherical, which turns out to be a good estimate up to factors of 2-3, cf. Figure 5. For the neutrino production cross section σ_ν in W boson decays we use the results from MadGraph5_aMC@NLO, i.e., 1.12 · 10⁴ pb for proton collisions and 40² · 4898 pb and 208² · 4228 pb for Ar and Pb, respectively.

Figure 7: Parton level differential cross section dσ/dpT (including theoretical uncertainties) for B mesons produced in central pp collisions with |η| < 4. The differential cross section is to first order independent of the ion used. Therefore, we show √s equal to 14 TeV (dotted purple) for pp, 7 TeV (dashed blue) as a proxy for ArAr and 5.5 TeV (solid red) as a proxy for PbPb. The predictions have been derived with the FONLL framework [19][20][21], using a value f(b → B+) = 0.403 for the b-quark fragmentation fraction [134] and the CTEQ 6.6 parton distribution functions [135].
In order to account for the Lorentz factor βγ for each choice of M we compute the N momentum in the laboratory frame as a function of the W boson momentum and the angle between the spatial W and N momenta. We then average equation (12) over W momenta, using a distribution which we have generated by simulating the process pp → W(jj) using MadGraph5_aMC@NLO with subsequent hadronization and matching with soft jets via Pythia. With l_0 = 2 cm and l_1 = 20 cm we can reproduce the results of our simulation shown in Figure 6 in good approximation if we set the overall effective efficiency to f_cut = 0.1.
The fitted parameter values can be understood in terms of physical arguments. The choice l_0 = 2 cm is qualitatively in good agreement with what one would expect from the geometrical cuts of 0.5 cm and 10 cm on the minimal displacement in the transverse and longitudinal directions that were used in the simulation. l_1 = 20 cm indicates a typical distance at which one can still reconstruct the displaced vertex. In the simulation we assumed that the vertex reconstruction efficiency drops linearly from 100 % to zero between a displacement of 5 mm and 55 cm, hence 20 cm is a reasonable average between these values. The fact that all of the parameter values can be understood physically provides a strong self-consistency check for our approach. In Figure 5 we show the ratio between the simplified detector model (12) and the results of the simulation described in Section V A 1 within the region where equation (12) predicts more than 0.1 events. Given the non-linear dependence of the function (12) on the parameters and the fact that N_d changes over six orders of magnitude within this region, it is non-trivial that the simplified model reproduces the simulation up to a factor 2-3 within that region.
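The linear efficiency model described above can be sketched as follows, with the 5 mm and 55 cm endpoints taken from the simulation; the function is a simplified stand-in for the actual reconstruction algorithms:

```python
# Sketch of the linear vertex-reconstruction-efficiency model used in the
# simulation: 100% at a displacement of 5 mm, dropping linearly to zero
# at 55 cm (distances in metres).
def vertex_efficiency(d, d_min=0.005, d_max=0.55):
    """Reconstruction efficiency as a function of the displacement d."""
    if d <= d_min:
        return 1.0
    if d >= d_max:
        return 0.0
    return (d_max - d) / (d_max - d_min)

print(vertex_efficiency(0.20))   # still sizeable at the fitted l1 = 20 cm
```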

Computing the number of events
In order to determine σ_B in the model (15) we first compute the differential cross section dσ/dpT for B mesons produced at different collision energies within the FONLL framework [19][20][21], in the range pT ∈ [0, 300] GeV, using a value f(b → B+) = 0.403 for the b-quark fragmentation fraction [134] and the CTEQ 6.6 parton distribution functions [135], accepting events with a pseudo-rapidity |η| < 4. The results are shown in Figure 7. We validate the predictions against experimental results [23], noticing that the data are mostly centred on the upper side of the theoretical uncertainty band. By using central value predictions we are thus underestimating the differential cross section, and the derived results can be interpreted as conservative. We then fix the value of σ_B by integrating over dσ/dpT, where the integration limits are fixed by the pT cuts. The B meson pT distribution is a good proxy for the pT distribution of the leading muon if the heavy neutrino has a mass comparable to the B meson mass, since in this case the muon is soft in the B meson rest frame. We can use this approximation because the sensitivity is maximal for M near the B meson mass. We can therefore incorporate the lower pT cut in heavy ion collisions compared to proton collisions, point iii), by computing σ_B as an integral over dσ/dpT with different lower integration limits that reflect the different pT cuts on the primary muon. For σ_B in proton collisions we use 25 GeV < pT < 300 GeV; in heavy ion collisions we use 3 GeV < pT < 300 GeV. pT values below 3 GeV are very hard to access even in heavy ion collisions because the CMS magnetic field prevents particles with such low momentum from reaching the detector in most of the solid angle range where it is sensitive.
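The gain from loosening the pT cut can be sketched numerically. The spectrum below is a hypothetical falling power law standing in for the FONLL output of Figure 7 (the shape and normalisation are purely illustrative); the point is that integrating a steeply falling dσ/dpT from 3 GeV instead of 25 GeV captures a far larger share of the cross section.

```python
import numpy as np

# Hypothetical B-meson pT spectrum dσ/dpT in pb/GeV, a stand-in for the
# FONLL prediction; the power-law shape is chosen purely for illustration.
pt = np.linspace(0.5, 300.0, 3000)            # GeV
dsigma_dpt = 1.0e6 * (1.0 + pt / 5.0) ** -4   # pb/GeV

def sigma_B(pt_min, pt_max=300.0):
    """sigma_B from integrating dσ/dpT between the pT cuts."""
    mask = (pt >= pt_min) & (pt <= pt_max)
    return np.trapz(dsigma_dpt[mask], pt[mask])

sigma_pp = sigma_B(25.0)   # proton collisions: 25 GeV < pT < 300 GeV
sigma_ion = sigma_B(3.0)   # heavy ion collisions: 3 GeV < pT < 300 GeV
enhancement = sigma_ion / sigma_pp
```

For any spectrum this steep the enhancement factor is large, which is the quantitative content of point iii).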
All other cuts and efficiencies are summarised in f_cut and should be similar for proton and heavy ion collisions, up to a sub-dominant change due to the slightly different momentum distributions in heavy ion collisions. We therefore adopt the value f_cut = 0.1 obtained from fitting the simplified detector model (12) to the simulation.
We finally take account of the Lorentz factor in the model (15) by computing the N momentum in the laboratory frame as a function of the B meson momentum and the angle between this momentum and the N momentum. We average equation (15) over both, using a flat prior for the angle in the B rest frame and B meson spectra that we determined by generating the process pp → bb(j) in MadGraph5_aMC@NLO, with subsequent hadronization and matching of soft jets via Pythia.
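The averaging step can be sketched as a small Monte Carlo. The B momentum spectrum below is a hypothetical exponential stand-in for the MadGraph+Pythia sample, and m_N = 3 GeV is an illustrative mass choice; the two-body kinematics (B → N + μ, with a flat prior for the rest-frame decay angle) follow the standard formulas.

```python
import numpy as np

rng = np.random.default_rng(1)

m_B, m_N, m_mu = 5.28, 3.0, 0.106  # GeV; m_N = 3 GeV is illustrative

def p_star(M, m1, m2):
    """Momentum of the daughters in a two-body decay M -> m1 + m2 (rest frame)."""
    return np.sqrt((M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)) / (2.0 * M)

# Hypothetical B momentum spectrum (stand-in for the simulated sample)
p_B = rng.exponential(scale=10.0, size=100_000)   # GeV
cos_t = rng.uniform(-1.0, 1.0, size=p_B.size)     # flat prior in rest-frame angle

ps = p_star(m_B, m_N, m_mu)
E_star = np.sqrt(ps**2 + m_N**2)
E_B = np.sqrt(p_B**2 + m_B**2)
gamma, beta = E_B / m_B, p_B / E_B

# Boost the rest-frame N energy to the lab frame
E_N = gamma * (E_star + beta * ps * cos_t)
beta_gamma_N = np.sqrt(E_N**2 - m_N**2) / m_N  # Lorentz factor entering the model
mean_beta_gamma = beta_gamma_N.mean()
```

In the full computation each sampled βγ feeds into the decay probability of the simplified detector model, and the average over the sample yields the expected event count.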

Results
We present the results of our computation in Figure 8, where we compare the sensitivity that can be achieved in proton and heavy ion collisions for equal running time, using the same luminosities as in Section V A. The results show that data from PbPb collisions could improve existing bounds on the properties of heavy neutrinos by more than an order of magnitude. Furthermore, for ArAr collisions, the combined enhancement due to the larger number of nucleons, point i), and the lower cut in pT, point iii), can overcompensate the suppression of the sensitivity due to the lower instantaneous luminosity compared to proton collisions, so that a better sensitivity per unit of running time can be achieved. Here we have not taken advantage of the absence of pile-up at all, i.e., point ii).

VI. DISCUSSION AND CONCLUSION
In reference [15] it was proposed to search for LLPs via displaced vertex searches in heavy ion collisions at the LHC. In the present work we provide details of the analysis. Heavy ion collisions offer three main advantages in the context of LLP searches: i) the number of parton level interactions per collision is larger; ii) there is no pile-up, which e.g. renders the probability of mis-identifying the primary vertex practically negligible; and iii) the lower luminosity makes it possible to considerably loosen the triggers used in the main detectors. The track multiplicity, traditionally considered an argument against New Physics searches in heavy ion collisions, is not considerably higher than in high pile-up pp collisions, leaving the lower instantaneous luminosity as the main disadvantage.
In the present work we focus on aspects i) and iii), using the specific case of heavy neutrinos with masses in the GeV range as an illustrative example. We consider two production mechanisms of heavy neutrinos: production in W boson decays and in B meson decays. If the same cuts are applied as in pp collisions, we find that the limitations on the instantaneous luminosity for PbPb suppress the observable number of events per unit of running time by almost two orders of magnitude. The suppression can be reduced to less than one order of magnitude for lighter nuclei, whose use is currently being explored by the heavy ion community for other reasons [16], such as the longer beam lifetime. For the production in W boson decays this means that heavy ion collisions in general do not offer a competitive alternative to searches in proton collisions, though the integrated luminosity of the HL-LHC in PbPb collisions would be sufficient to push the sensitivity far beyond current experimental limits, cf. Figure 6. Lowering the triggers in this case only leads to a marginal improvement.
The situation is much more promising when considering the production in B meson decays, which leads to a larger number of events, but signatures with much lower pT. The results shown in Figure 8 are remarkable in several ways. First, data from the complete PbPb run could improve the sensitivity of searches for heavy neutrinos by more than an order of magnitude. For a small range of masses above 4 GeV the improvement would amount to two orders of magnitude. If the LHC's heavy ion runs were performed with Ar instead, the improvement would be up to three orders of magnitude. Assuming the current schedule for the upcoming runs, this is still less than what can be achieved with all proton data, but it means that a comparably large number of heavy neutrinos can be produced in heavy ion collisions. This means that if heavy neutrinos or any other hidden particles are found in the currently allowed parameter region, then heavy ion collisions would make it possible to study their properties in a very different environment. In particular, this environment would resemble the primordial plasma that filled the early universe, which would make it possible to test thermal corrections to the properties of the particles. This would, e.g., be interesting in the context of the generation of lepton asymmetries in the νMSM at temperatures below the electroweak scale [102,103,136], which could affect the resonant production of Dark Matter [137,138].
Second, the sensitivity that could be achieved in a given unit of running time is actually larger in ArAr collisions than in proton collisions, due to the lower cuts on pT that can be imposed. This is not sufficient to entirely compensate for the longer scheduled running time for proton collisions. However, we did not take advantage of the absence of pile-up, point ii), in the present analysis. This suggests that for models where pile-up poses a serious problem for the extraction of signatures, cf. e.g. Figure 1, heavy ion collisions could actually be more sensitive than proton collisions.
In summary, we find that the possibility to operate the LHC main detectors with lower triggers makes heavy ion collisions a promising place to search for LLPs that decay into particles with low pT. This can help to explore regions of the parameter space of hidden sector models that are hard to study in proton collisions. We have shown this explicitly for heavy neutrino searches in the νMSM. The absence of pile-up in heavy ion collisions further entirely avoids the problem of vertex mis-identification, i.e., it eliminates a systematic limitation in LLP searches with non-trivial event topology. We postpone a more detailed study of this aspect to future work. In addition, it is well known that heavy ion collisions can offer entirely new production mechanisms that are absent in proton collisions. In combination, this provides strong motivation to include potential New Physics searches in the discussion of the future of the heavy ion program at CERN [26].