Detecting hidden sector dark matter at HL-LHC and HE-LHC via long-lived stau decays

We investigate a class of models in which a supergravity model with the standard model gauge group is extended by a hidden sector $U(1)_X$ gauge group and in which the lightest supersymmetric particle is a neutralino residing in the hidden sector. Specifically, we consider models where the stau is the lightest supersymmetric particle in the MSSM sector and the next-to-lightest supersymmetric particle of the $U(1)_X$-extended SUGRA model, so that the stau decays into the neutralino of the hidden sector. When the mass gap between the stau and the hidden sector neutralino is small and the mixing between $U(1)_Y$ and $U(1)_X$ is also small, the decay of the stau into the hidden sector neutralino and a tau may be reconstructed as a displaced track emerging from the high-$p_T$ track of the charged stau. Simulations for this possibility are carried out for the HL-LHC and HE-LHC. The discovery of such a displaced track from a stau would indicate the presence of hidden sector dark matter.


Introduction
Most searches for dark matter (DM) focus on dark matter as a particle interacting weakly with the standard model (SM) particles, with a cross section in a range accessible to direct and indirect detection experiments. For example, in the context of supersymmetry (SUSY) with R-parity conservation, if the lightest supersymmetric particle is neutral it is a candidate for dark matter. However, it is entirely possible that dark matter resides in hidden sectors, which are ubiquitous in supergravity (SUGRA) and string models (see, e.g., [1]). Further, extending a SUGRA model with a minimal supersymmetric standard model (MSSM) spectrum by a $U(1)_X$ gauge group brings in an additional vector superfield with particle content $B_\mu$, $\lambda_X$, where $B_\mu$ is the new gauge boson and $\lambda_X$ is its gaugino superpartner. The $U(1)_X$ can mix with the hypercharge $U(1)_Y$ via kinetic mixing [2,3]. Additionally, Stueckelberg mass mixing of $U(1)_X$ and $U(1)_Y$ brings in a chiral superfield which contains a Weyl fermion $\psi$ [4][5][6]. After electroweak symmetry breaking the above leads to a $6\times 6$ neutralino mass matrix, where the two additional neutralinos reside in the hidden sector with highly suppressed couplings to the visible sector. Let us suppose that one of the two neutralinos which lie in the hidden sector is the lightest supersymmetric particle (LSP) of the extended model and, further, that the next-to-lightest supersymmetric particle (NLSP) is a stau which lies close in mass to the hidden sector neutralino. In this case the stau will decay into the hidden sector neutralino with a long lifetime. Such a decay can leave a track in the inner detectors (ID) of the ATLAS and CMS experiments. In this work we explore this possibility within the framework of a supergravity grand unified model with an extended $U(1)_X$ sector including both gauge kinetic mixing and Stueckelberg mass mixing.
$U(1)$ extensions of supersymmetric models and their implications for dark matter and collider analyses have been studied extensively in the literature [7]. However, the setup of the present work is quite different from these.
The outline of the rest of the paper is as follows: In section 2 we discuss the $U(1)_X$-extended SUGRA model with gauge kinetic mixing and Stueckelberg mass mixing. In section 3, we discuss the implementation of this model and the mechanism that leads to a long-lived stau consistent with the current experimental constraints on the light Higgs boson mass as measured by the ATLAS and CMS Collaborations [8,9], and on the relic density as measured by the Planck Collaboration [10]. In section 4, further details of the generation of the relic density for the dark matter in the hidden sector are discussed. Currently, the LHC has completed its Run 2 and has shut down for the period 2019-2020 for an upgrade; the upgraded LHC will operate at 14 TeV in the period 2021-2023, during which it will collect about 300 fb$^{-1}$ of additional data for each detector. Thereafter there will be a major upgrade of the LHC to the high-luminosity LHC (HL-LHC) during the period 2023-2026. This final upgraded HL-LHC will resume operations in late 2026 and is expected to run for ten years, until 2036. It is projected that at the end of this period each detector will collect about 3000 fb$^{-1}$ of data. Future colliders beyond the HL-LHC are also being discussed. Among these are a 100 TeV $pp$ collider at CERN and a 100 TeV $pp$ collider in China [11,12], each of which requires a circular ring of about 100 km. Further, a third possibility, a 27 TeV $pp$ collider, the high-energy LHC (HE-LHC) at CERN, is also under study [13][14][15][16]. Such a collider can be built within the existing tunnel at CERN by installing 16 T superconducting magnets using FCC technology, enhancing the center-of-mass energy of the collider to 27 TeV. If built, the HE-LHC will operate at a luminosity of $2.5 \times 10^{35}$ cm$^{-2}$ s$^{-1}$ and collect 10-15 ab$^{-1}$ of data. In this work we will focus on the HL-LHC and HE-LHC.
Thus in section 5, we discuss the production cross section of the NLSP stau at the LHC at 14 TeV and at 27 TeV (for previous work on the HL-LHC and HE-LHC see [17][18][19][20]). In section 6, an analysis of signal and background simulation and event selection is carried out. In section 7, a cut-flow analysis and the results of this analysis are discussed; the analysis is done both with and without pile-up. The analysis also makes a comparative study of the discovery potential of the HL-LHC and HE-LHC for the detection of hidden sector dark matter. Conclusions are given in section 8.

The model
As discussed above, we consider an extension of the standard model gauge group by an additional abelian gauge group $U(1)_X$ of gauge coupling strength $g_X$. The particles in the visible sector, i.e., the quarks, leptons, Higgs bosons and their superpartners, are assumed neutral under $U(1)_X$. We focus first on the abelian gauge sector of the extended model, which contains two vector superfields, a vector superfield $B$ associated with the hypercharge gauge group $U(1)_Y$ and a vector superfield $C$ associated with the hidden sector gauge group $U(1)_X$, along with a chiral scalar superfield $S$. In the Wess-Zumino gauge the $B$ and $C$ superfields have the standard component expansions, as does the chiral scalar superfield $S$, and these components enter the gauge kinetic energy sector of the model. Next we allow gauge kinetic mixing between the $U(1)_X$ and $U(1)_Y$ sectors with terms of the form of Eq. (4), as a result of which the hidden $U(1)_X$ interacts with the MSSM fields via the small kinetic mixing parameter $\delta$. The kinetic terms in Eq. (4) and Eq. (5) can be diagonalized using the transformation of Eq. (6). Aside from gauge kinetic mixing, we assume a Stueckelberg mass mixing between the $U(1)_X$ and $U(1)_Y$ sectors, given in Eq. (7). We note that Eq. (7) is invariant under $U(1)_Y$ and $U(1)_X$ gauge transformations. In the unitary gauge the axion field $a$ is absorbed to generate a mass for the $U(1)_X$ gauge boson. It is convenient from this point on to introduce the Majorana spinors $\psi_S$, $\lambda_X$ and $\lambda_Y$. In addition to the above we add soft SUSY breaking terms to the Lagrangian, where $m_X$ is the mass of the $U(1)_X$ gaugino and $M_{XY}$ is the $U(1)_X$-$U(1)_Y$ mixing mass. We note that the mixing parameters $M_{XY}$ and $M_2$, even when set to zero at the grand unification scale, will assume non-vanishing values due to renormalization group evolution. Thus $M_{XY}$ has a beta-function evolution governed by the $U(1)_Y$ gauge coupling $g_Y$.
Similarly, the mixing parameter $M_2$ has its own one-loop beta-function. In the MSSM sector we will take the soft terms to consist of $m_0$, $A_0$, $m_1$, $m_2$, $m_3$, $\tan\beta$, and $\mathrm{sgn}(\mu)$.
Here $m_0$ is the universal scalar mass, $A_0$ is the universal trilinear coupling, $m_1$, $m_2$, $m_3$ are the masses of the $U(1)$, $SU(2)_L$, and $SU(3)_C$ gauginos, $\tan\beta = v_u/v_d$ is the ratio of the Higgs VEVs, and $\mathrm{sgn}(\mu)$ is the sign of the Higgs mixing parameter, which is chosen to be positive.
We focus first on the neutralino sector of the extended SUGRA model. We choose as basis $(\psi_S, \lambda_X, \lambda_Y, \lambda_3, \tilde h_1, \tilde h_2)$, where the first two fields arise from the extended sector and the last four, i.e., $\lambda_Y, \lambda_3, \tilde h_1, \tilde h_2$, are the gaugino and higgsino fields of the MSSM sector. Using Eq. (6) we rotate into a new basis in which the $6\times 6$ neutralino mass matrix takes its final form, where $s_\beta \equiv \sin\beta$, $c_\beta \equiv \cos\beta$, $s_W \equiv \sin\theta_W$, $c_W \equiv \cos\theta_W$, with $M_Z$ the $Z$ boson mass. We label the mass eigenstates as $\tilde\xi^0_1, \tilde\xi^0_2, \tilde\chi^0_1, \ldots, \tilde\chi^0_4$. Here the first two neutralinos, $\tilde\xi^0_1$ and $\tilde\xi^0_2$, reside mostly in the hidden sector while the remaining four, $\tilde\chi^0_i$ $(i = 1, \ldots, 4)$, reside mostly in the MSSM sector. We assume $\tilde\xi^0_1$ to be the LSP. In the limit of small mixings between the hidden and the MSSM sectors, the masses of the hidden sector neutralinos are approximately the eigenvalues of the $2\times 2$ hidden sector block. For the case when $\tilde\xi^0_1$ is the least massive of all sparticles in the $U(1)_X$-extended SUGRA model, dark matter will reside in the hidden sector. Such a possibility has been foreseen in previous works (see, e.g., [21][22][23]).
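The block structure described above can be illustrated numerically. The sketch below builds a $6\times 6$ neutralino mass matrix in the basis $(\psi_S, \lambda_X, \lambda_Y, \lambda_3, \tilde h_1, \tilde h_2)$ with the small kinetic-mixing ($\delta$) corrections neglected, using hypothetical parameter values (not the paper's benchmarks), and checks that the two lightest eigenstates are the hidden sector pair given by the $2\times 2$ block involving $M_1$ and $m_X$.

```python
import numpy as np

# Illustrative parameter values (GeV); hypothetical, not the paper's benchmarks
M1, M2m, mX, MXY = 450.0, 0.0, 400.0, 0.0   # Stueckelberg masses, U(1)_X gaugino mass, mixing
m1, m2, mu = 1000.0, 1200.0, 2000.0          # bino and wino masses, Higgs mixing parameter
MZ, tanb, sw2 = 91.19, 10.0, 0.23
b = np.arctan(tanb); sb, cb = np.sin(b), np.cos(b)
sw, cw = np.sqrt(sw2), np.sqrt(1.0 - sw2)

# 6x6 neutralino mass matrix in the basis (psi_S, lambda_X, lambda_Y, lambda_3, h1, h2),
# with the small kinetic-mixing (delta) entries dropped for illustration
M = np.array([
    [0.0,  M1,   M2m,        0.0,        0.0,        0.0],
    [M1,   mX,   MXY,        0.0,        0.0,        0.0],
    [M2m,  MXY,  m1,         0.0,  -MZ*sw*cb,   MZ*sw*sb],
    [0.0,  0.0,  0.0,        m2,    MZ*cw*cb,  -MZ*cw*sb],
    [0.0,  0.0, -MZ*sw*cb,   MZ*cw*cb,   0.0,       -mu],
    [0.0,  0.0,  MZ*sw*sb,  -MZ*cw*sb,  -mu,        0.0],
])
masses = np.sort(np.abs(np.linalg.eigvalsh(M)))
print(masses)  # the two lightest states are dominantly hidden-sector mixtures

# Small-mixing approximation: eigenvalues of the hidden 2x2 block [[0, M1], [M1, mX]]
approx = np.sort(np.abs([(mX + np.sqrt(mX**2 + 4*M1**2))/2,
                         (mX - np.sqrt(mX**2 + 4*M1**2))/2]))
print(approx)
```

With $M_{XY}$ and $M_2$ set to zero the hidden block decouples exactly, so the exact and approximate hidden sector masses coincide; RGE-induced mixing would shift them only slightly.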
We turn now to the charge-neutral gauge vector boson sector. Here the $2\times 2$ mass-squared matrix of the standard model is enlarged to a $3\times 3$ mass-squared matrix in the $U(1)_X$-extended SUGRA model. Thus, after spontaneous electroweak symmetry breaking and the Stueckelberg mass growth, the $3\times 3$ mass-squared matrix of the neutral vector bosons in the basis $(C_\mu, B_\mu, A^3_\mu)$ is given by Eq. (17), where $A^3_\mu$ is the third isospin component, $g_2$ is the $SU(2)_L$ gauge coupling, $\kappa = (c_\delta - s_\delta\,\epsilon)$, $\epsilon = M_2/M_1$ and $v^2 = v_u^2 + v_d^2$. The mass-squared matrix of Eq. (17) has one zero eigenvalue, corresponding to the photon, while of the other two eigenvalues $M_+$ is identified with the $Z'$ boson mass and $M_-$ with the $Z$ boson mass. The diagonalization of the mass-squared matrix of Eq. (17) can be done via two orthogonal transformations, where the first is given in [6]. The gauge eigenstates can then be rotated into the corresponding mass eigenstates $(Z', Z, \gamma)$ using a second transformation, with $c_W (c_\theta)(c_\phi) \equiv \cos\theta_W (\cos\theta)(\cos\phi)$ and $s_W (s_\theta)(s_\phi) \equiv \sin\theta_W (\sin\theta)(\sin\phi)$, where $\theta_W$ is the weak mixing angle and $\theta$, $\phi$ parametrize the mixing between the new gauge sector and the standard model gauge bosons.
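The spectrum of the neutral vector boson sector can be checked numerically. The sketch below assembles the $3\times 3$ mass-squared matrix in the basis $(C_\mu, B_\mu, A^3_\mu)$ for the simplified case $\delta = 0$, $M_2 \simeq 0$, with illustrative coupling and mass values (all hypothetical), and verifies the massless photon, a $Z$ near 91 GeV and a $Z'$ near $M_1$.

```python
import numpy as np

# Illustrative sketch of the neutral gauge-boson mass-squared matrix in the basis
# (C_mu, B_mu, A3_mu), with the kinetic mixing delta set to zero for simplicity.
# Parameter values are hypothetical, chosen only to exhibit the structure.
M1, M2m = 450.0, 0.0                 # Stueckelberg masses (GeV); M2 ~ 0 as in the text
v = 246.0                            # electroweak VEV (GeV)
gY, g2 = 0.357, 0.652                # U(1)_Y and SU(2)_L couplings (rough weak-scale values)

M2mat = np.array([
    [M1**2,   M1*M2m,                  0.0],
    [M1*M2m,  M2m**2 + gY**2*v**2/4,  -gY*g2*v**2/4],
    [0.0,    -gY*g2*v**2/4,            g2**2*v**2/4],
])
evals = np.sort(np.linalg.eigvalsh(M2mat))
print(np.sqrt(np.abs(evals)))  # ~ (0, M_Z, M_Z'): massless photon, Z near 91 GeV, Z' near M1
```

For $M_2 \simeq 0$ and $s_\delta \ll 1$ the $C_\mu$ state decouples, which is why the $Z'$ mass lands essentially at $M_1$, as stated for the benchmarks later in the text.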

Model implementation and long-lived stau
One of the by-products of models with a hidden sector coupling to the MSSM only via a small kinetic mixing is the presence of long-lived particles (LLPs) with late decays into hidden sector particles. The signature of the production of such particles at hadron colliders is distinctive, especially if the LLP is charged and leaves a track in the detector which can be easily identified. In this study we look for long-lived staus whose lifetimes are long enough that they travel a macroscopic distance before decaying inside the detector tracker.
The input parameters of the $U(1)_X$-extended MSSM/SUGRA [33] are those of the usual non-universal SUGRA model together with the additional hidden sector parameters, all given at the GUT scale, where $m_0$, $A_0$, $m_1$, $m_2$, $m_3$, $\tan\beta$ and $\mathrm{sgn}(\mu)$ are the soft parameters in the MSSM sector as defined earlier. The parameters $M_2$ and $M_{XY}$ are set to zero at the GUT scale. The input parameters must satisfy a number of experimental constraints: the computed Higgs boson mass must be consistent with the Higgs boson mass measurements by the ATLAS and CMS collaborations, the relic density of dark matter given by the model must be consistent with that measured by the Planck experiment, and the sparticle spectrum of the model must be consistent with the experimental lower limits on sparticle masses. Consistency of the computed Higgs boson mass with the experimental value $m_{h^0} \sim 125$ GeV requires the loop correction to the Higgs boson mass to be large, which in turn implies that the scale of weak-scale supersymmetry lies in the several-TeV region. Typically this leads to average squark masses also lying in the TeV region. Such a situation is realized on the hyperbolic branch of radiative breaking of the electroweak symmetry [34][35][36] (for related works see [37][38][39][40][41]). It turns out that there are at least two ways in which the squark masses may be large: either $m_0$ is large, or $m_3$ is large, lying in the several-TeV region, while $m_0$ can be relatively small. In the latter case renormalization group running generates squark masses in the several-TeV region while the slepton masses remain relatively much lighter [42]. In this analysis we follow the second possibility and choose $m_3$ in the several-TeV region with $m_0$ relatively much smaller.
With this set of input parameters, we scan the $U(1)_X$-extended MSSM/SUGRA parameter space to obtain a set of benchmark points satisfying the Higgs boson mass at $125 \pm 2$ GeV and the dark matter relic density at $\Omega h^2 \leq 0.123$. The benchmark points are shown in Table 1. We choose the parameters $m_1$, $m_2$, $M_1$ and $m_X$ so that the hidden sector neutralino $\tilde\xi^0_1$ is the LSP and thus the dark matter candidate. The small value of $m_0$ allows the stau to be the NLSP. However, the smallness of $m_0$ can be problematic for satisfying the Higgs boson mass constraint. This is compensated by requiring a large $m_3$ [42], as evident from Table 1. The RGE running of the stop mass is driven by $m_3$, which generates a stop mass large enough to lift the Higgs boson mass above its tree-level value and close to the experimentally observed one. In the process, the gluino also acquires a large mass. The resulting spectrum of some of the relevant particles is shown in Table 2. Table 2: Display of the Higgs boson ($h^0$) mass, the $\mu$ parameter, the stau mass, the relevant electroweak gaugino masses, and the relic density for the benchmarks of Table 1 computed at the electroweak scale. The track length, $c\tau_0$ (in mm), left by the long-lived stau is also shown. All masses are in GeV.
In Table 2, all the benchmarks satisfy the Higgs boson mass and the relic density constraints. The LSP mass, as well as the masses of the MSSM neutralino $\tilde\chi^0_1$ and of the chargino $\tilde\chi^\pm_1$, are shown. Also given are the masses of the stau and the tau sneutrino. Here the stau is the lighter of the two staus, which can be made lighter than the tau sneutrino by a large off-diagonal element in the stau mass-squared matrix. The mass gap between the NLSP and the hidden sector LSP ranges from $\sim 8$ GeV (for point (f)) to $\sim 20$ GeV (for point (e)). The only decay mode of the stau is to the hidden sector neutralino, i.e., $\tilde\tau \to \tau\,\tilde\xi^0_1$. The smallness of the available phase space suppresses the stau decay width. Another source of suppression comes from the fact that the MSSM particles communicate with the hidden sector particles only through the small kinetic mixing coefficient $\delta$ which, according to Table 1, is chosen to be very small, i.e., $O(10^{-6})$. The coupling between the stau and the LSP, given in Eq. (25), involves the matrix $\tilde D$ that diagonalizes the $6\times 6$ slepton mass-squared matrix and the matrix $N$ that diagonalizes the $6\times 6$ neutralino mass matrix. Here, $P_L$ ($P_R$) is the left (right) projection operator and $m_\tau$ the tau mass.
Since the hidden sector neutralinos interact with the visible sector only minimally, the bino, wino and higgsino contents of $\tilde\xi^0_1$ are negligible, i.e., $N_{13} \approx N_{14} \approx N_{15} \approx N_{16} \approx 0$. Further, since $s_\delta \ll 1$, $N_{12}\,s_\delta \ll 1$ and so the coupling given by Eq. (25) is very small. This leads to a further suppression of the stau decay width. In fact, the stau decay widths for the benchmark points of Table 1 are $O(10^{-16})$ GeV, which results in a large decay length, $c\tau_0$, as shown in Table 2.
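The connection between a width of $O(10^{-16})$ GeV and a macroscopic decay length follows directly from $c\tau_0 = \hbar c/\Gamma$. The sketch below converts the order-of-magnitude widths quoted in the text into decay lengths in mm.

```python
# Converting the stau decay width to a decay length: c*tau_0 = hbar*c / Gamma.
# The widths used are the order-of-magnitude values quoted in the text.
hbar_c = 1.973269804e-16   # GeV * m

widths = (3.5e-16, 1.3e-15)                 # GeV, range quoted for the benchmarks
ctaus_mm = [hbar_c / G * 1e3 for G in widths]
for G, ct in zip(widths, ctaus_mm):
    print(f"Gamma = {G:.2e} GeV  ->  c*tau0 = {ct:.0f} mm")
```

The resulting decay lengths of order a few hundred mm are consistent with a stau that decays inside the inner detector tracker.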
Other than through their direct production, staus can be produced in the decay of a tau sneutrino. For our benchmark points of Table 1, the tau sneutrino decays predominantly to a stau and a $W$ boson with branching ratios ranging from 70% to 98%. Thus, we will consider the production of sneutrinos, which are a source of staus, as well as the direct production of staus. We note that in Table 1 the kinetic mixing parameter $\delta$ is chosen in the range $\sim 10^{-5}-10^{-6}$ so that staus decay in the inner detector tracker. Theoretically, $\delta$ arises at the loop level from mixings between the hidden sector and the visible sector. The size of the mixing is model dependent and its value can range from $10^{-3}$ to orders of magnitude smaller [43]. The values of $\delta$ in Table 1 lie well within this range.
A comment regarding the $Z$ and $Z'$ bosons is in order. For the benchmarks of Table 1, the $Z'$ mass obtained from Eq. (18) is $\sim M_1$ since $M_2 \sim 0$ and $s_\delta \ll 1$. Thus the spectrum contains a $Z'$ with a mass in the range of $\sim 420$ GeV to $\sim 700$ GeV. However, due to the very small coupling between this $Z'$ boson and the SM particles, its production cross section at $pp$ colliders is extremely suppressed; such a mass range can easily escape detection, and the typical experimental bounds on the $Z'$ mass or on $m_{Z'}/g_X$ do not apply here [44]. According to Eq. (18), the $Z$ boson mass receives a correction due to the gauge kinetic and mass mixings; since $M_2 \ll M_1$ and $s_\delta \ll 1$, this correction to $M_-^2$ is highly suppressed.

Dark matter relic density
In the standard approach to calculating the dark matter relic density, the LSP is assumed to be in thermal equilibrium with the bath and to have efficient self-annihilation to SM particles, which depletes the relic abundance until freeze-out sets in. In SUSY models, a bino-like LSP (the lightest MSSM neutralino) is usually problematic for the dark matter relic density: its self-annihilation is suppressed and one needs coannihilation to deplete the relic density to its experimentally observed value [10]. In the analysis here, the LSP is not bino-like but is the hidden sector neutralino, $\tilde\xi^0_1$, which has very weak couplings to the MSSM particles, so its self-annihilation is extremely small. The next odd-sector particle, the NLSP, is the stau, $\tilde\tau$, with a mass close to that of the LSP [Eq. (31)], so that coannihilation is generally effective. Thus, three classes of processes can be responsible for the observed relic density of $\tilde\xi^0_1$ [Eq. (32)], namely $\tilde\xi^0_1\tilde\xi^0_1 \to \mathrm{SM\,SM}$, $\tilde\xi^0_1\tilde\tau \to \mathrm{SM\,SM'}$ and $\tilde\tau\tilde\tau \to \mathrm{SM\,SM''}$, where SM, SM$'$, SM$''$ stand for standard model particles. To get a feel for the size of the first process in Eq. (32), i.e., LSP self-annihilation, we consider the $\tilde\xi^0_1\tilde\xi^0_1 Z$ vertex, whose coupling is proportional to the higgsino contents $|N_{15}|^2$ and $|N_{16}|^2$ of $\tilde\xi^0_1$ [see Eq. (28)], which are negligible due to the very weak interaction of the LSP with the MSSM particles. Hence the annihilation cross section of two LSPs is extremely small. The second process of Eq. (32) is inefficient as well due to the smallness of the coupling of Eq. (25), which is $O(10^{-6})$. The only channel with efficient annihilation is the last one, which involves $\tilde\tau\tilde\tau$ with purely MSSM interactions and no dependence on $s_\delta$, but has a larger Boltzmann suppression $\sim e^{-2m_{\tilde\tau}/T}$, which is the reason for the condition of Eq. (31). For such very weak couplings of the dark matter particle, one should ask whether chemical equilibrium, $n \simeq n_{\mathrm{eq}}$, can be achieved, where $n$ is the number density and $n_{\mathrm{eq}}$ is the equilibrium number density.
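The balance between the Boltzmann suppression and the effectiveness of coannihilation can be made quantitative. The sketch below evaluates the rough equilibrium stau-to-LSP abundance ratio at freeze-out, $(m_{\tilde\tau}/m_{\tilde\xi^0_1})^{3/2} e^{-x\,\Delta m/m_{\tilde\xi^0_1}}$, for mass gaps spanning the $\sim 8$-$20$ GeV range quoted earlier; the LSP mass and $x_f$ are illustrative inputs, not taken from the tables.

```python
import numpy as np

# Rough coannihilation weight: the equilibrium stau-to-LSP number-density ratio at
# freeze-out scales like (m_stau/m_lsp)^(3/2) * exp(-x * dm/m_lsp), with x = m_lsp/T.
# The LSP mass below is an illustrative placeholder.
def coann_weight(m_lsp, dm, x=26.5):
    m_stau = m_lsp + dm
    return (m_stau / m_lsp)**1.5 * np.exp(-x * dm / m_lsp)

for dm in (8.0, 20.0):
    print(f"dm = {dm:4.1f} GeV -> n_stau/n_lsp ~ {coann_weight(450.0, dm):.2f}")
```

An O(0.3-0.6) ratio shows that for such small mass gaps the stau population at freeze-out is not strongly Boltzmann suppressed, which is what makes coannihilation effective.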
Chemical equilibrium is generally guaranteed if conversion-driven processes such as co-scattering, $\tilde\xi^0_1\,\mathrm{SM} \leftrightarrow \tilde\tau\,\mathrm{SM}$, and decay and inverse decay of the NLSP, $\tilde\xi^0_1\,\mathrm{SM} \leftrightarrow \tilde\tau$, are fast enough around the time of freeze-out. In this case one must solve the coupled Boltzmann equations [45,46] which include those conversion-driven processes. The full coupled set of Boltzmann equations pertaining to $\tilde\xi^0_1$ and $\tilde\tau$, given in Eqs. (35) and (36), takes into consideration all conversion-driven processes (LSP $\leftrightarrow$ NLSP), with $Y = n/s$, where $s$ is the entropy density and $x = m_{\tilde\xi^0_1}/T$. The first two terms in Eq. (35) are negligible and so is the second term in Eq. (36). The next three terms in each of those equations represent the conversion terms, which play an important role in establishing DM freeze-out. The last term in Eqs. (35) and (36) represents the scattering of DM particles into odd-sector particles, $\tilde\xi^0_1\tilde\xi^0_1 \to \tilde\tau^+\tilde\tau^-$, which is negligible due to the very weak coupling of $\tilde\xi^0_1$ and the thermal suppression by $n^{\mathrm{eq}}_{\tilde\xi^0_1}$.
Since $m_{\tilde\tau} > m_{\tilde\xi^0_1}$, the co-scattering of $\tilde\xi^0_1$ into $\tilde\tau$ requires that $\tilde\xi^0_1$ and the SM particle have enough momentum, so such a process is highly momentum dependent. However, for the benchmark points of Table 1, the stau can decay into an LSP and a tau since $m_{\tilde\tau} > m_{\tilde\xi^0_1} + m_\tau$, and the decay width of the stau, $\Gamma_{\tilde\tau}$, ranges from $\sim 3.5 \times 10^{-16}$ GeV to $\sim 1.3 \times 10^{-15}$ GeV. Knowing that the Hubble parameter at temperature $T$ in the radiation-dominated era is given by $H(T) = 1.66\sqrt{g_*}\,T^2/M_{\mathrm{Pl}}$, it is found that $H(T_f) < \Gamma_{\tilde\tau}$ at the freeze-out temperature $T_f = m_{\tilde\xi^0_1}/x_f$ for the benchmark points (a)-(f), where the average freeze-out occurs at $x_f \sim 26.5$. The forward-backward processes $\tilde\tau \leftrightarrow \tilde\xi^0_1\,\tau$ help equilibrate $\tilde\xi^0_1$ and $\tilde\tau$. Notice that the inverse decay plays the same role as co-scattering but has a larger rate. The conversion of $\tilde\xi^0_1$ into a $\tilde\tau$ is followed by stau self-annihilation into SM particles via the dominant processes $\tilde\tau^+\tilde\tau^- \to h^0 h^0$ and $\tilde\tau^+\tilde\tau^- \to W^+W^-$, which eventually deplete the relic abundance so as to satisfy the current limit, as shown in Table 2. So in principle, since the inverse decay channel is open, the relic density is determined by coannihilation because inverse decay processes decouple later. For points (g) and (h), $H(T_f) > \Gamma_{\tilde\tau}$ and so the process $\tilde\tau \leftrightarrow \tilde\xi^0_1\,\tau$ decouples, which is when co-scattering, $\tilde\xi^0_1\,\mathrm{SM} \leftrightarrow \tilde\tau\,\mathrm{SM}$, starts playing an important role in converting DM particles into $\tilde\tau$, followed by annihilation into SM particles, $\tilde\tau\tilde\tau \to \mathrm{SM\,SM}$.
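The comparison between $H(T_f)$ and $\Gamma_{\tilde\tau}$ that decides which regime applies can be sketched numerically. The values of $g_*$, the LSP mass and $x_f$ below are illustrative placeholders, not taken from the benchmark tables.

```python
import numpy as np

# Comparing the Hubble rate at freeze-out with the stau width decides whether
# decay/inverse decay keeps the LSP and the stau in chemical equilibrium.
# Radiation-dominated era: H(T) = 1.66 * sqrt(g_*) * T^2 / M_Pl.
M_Pl = 1.22e19          # Planck mass (GeV)
g_star = 90.0           # effective relativistic dof near T_f ~ 15-20 GeV (approximate)

def hubble(T):
    return 1.66 * np.sqrt(g_star) * T**2 / M_Pl

m_lsp, x_f = 450.0, 26.5            # illustrative LSP mass and freeze-out x
T_f = m_lsp / x_f
H_f = hubble(T_f)
print(f"T_f = {T_f:.1f} GeV, H(T_f) = {H_f:.2e} GeV")
# H(T_f) comes out of order 1e-16 GeV, i.e., comparable to the quoted stau widths
# (3.5e-16 to 1.3e-15 GeV), so whether decay/inverse decay stays efficient is a
# point-by-point question, as found for benchmarks (a)-(f) versus (g) and (h).
```

This makes explicit why the benchmark points straddle the boundary: a mild change in $\Gamma_{\tilde\tau}$ or in the LSP mass moves a point from the inverse-decay (coannihilation) regime into the co-scattering regime.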
To summarize, the hidden sector communicates with the MSSM via the kinetic mixing coefficient, and for $s_\delta \gtrsim 10^{-6}$ the dark sector is in kinetic equilibrium with the MSSM [47]. For the coupling strengths considered in this analysis, the DM particle annihilation and coannihilation via $\tilde\xi^0_1\tilde\xi^0_1 \to \mathrm{SM\,SM}$ and $\tilde\xi^0_1\tilde\tau \to \mathrm{SM\,SM}$ are negligible whereas $\tilde\tau\tilde\tau \to \mathrm{SM\,SM}$ is dominant. For fast decay and inverse decay of $\tilde\tau$ (which set the chemical equilibrium between $\tilde\xi^0_1$ and $\tilde\tau$), co-scattering processes do not contribute to the relic density and the latter is determined merely by coannihilation (i.e., by $\tilde\tau$ self-annihilation) [48]. When the decay width of $\tilde\tau$ falls below the Hubble parameter around freeze-out, coannihilation and co-scattering freeze-out determine the final relic abundance [49]. Here, two cases arise. If the freeze-out temperature of coannihilation is larger than that of co-scattering, then the former freezes out earlier and the number of $\tilde\xi^0_1$ and $\tilde\tau$ in a comoving volume is fixed; co-scattering processes only redistribute the two particles' number densities, and so the relic density is set by coannihilation. If the freeze-out temperature of co-scattering is greater than that of coannihilation, then co-scattering freezes out first, which means the LSP is no longer being converted to the NLSP; the remaining NLSPs are removed by coannihilation (or self-annihilation, to be precise), and therefore the relic density is set by co-scattering.
For even weaker couplings of the dark sector, the LSP may fall out of thermal equilibrium and decouple from the bath soon after being produced. Such a particle is known as a FIMP (feebly interacting massive particle). If in the early universe the LSPs had little initial abundance due to inflationary effects or other mechanisms, then even though the interactions with the bath are feeble, dark matter particles may still be produced over time until the interaction rate falls below the expansion rate of the universe and the relic abundance "freezes in". This is known as the freeze-in mechanism [50,51], which can be viewed as the opposite of the usual freeze-out mechanism, where one starts with a huge initial abundance of dark matter particles in thermal equilibrium with the bath. The production of such a feeble particle proceeds through the decays of heavier particles. For a certain range of couplings, the LSP relic density can even receive contributions from both freeze-out and freeze-in [52]. For the range of couplings we consider, freeze-in does not factor in and the relic abundance is purely due to the freeze-out of the LSP via the mechanisms described above.
Stau pair production and stau associated production with a sneutrino at the LHC

The main mechanisms for the production of a light stau at the LHC are pair production, $pp \to \tilde\tau^+\tilde\tau^-$, and associated production with a tau sneutrino, $pp \to \tilde\tau\tilde\nu_\tau$. In the $U(1)_X$-extended MSSM/SUGRA, stau pair production proceeds via $\gamma$, $Z$ and $Z'$ s-channel exchange, i.e., $q\bar q \to \gamma, Z, Z' \to \tilde\tau^+\tilde\tau^-$, whereas stau associated production with a tau sneutrino proceeds by the exchange of a $W^\pm$ boson. The coupling of the $Z'$ to fermions is small; in particular, the coupling to up-type quarks, given in Eq. (38), is suppressed since the mixing angle $\theta$ is very small and so is $s_\delta$. For this reason, the contribution to the cross section from the $Z'$ can be neglected, and the production cross section of the stau pair can be determined directly from the MSSM. We calculate the di-stau and stau-tau sneutrino LHC production cross sections using Prospino2 [53,54] at next-to-leading order (NLO) in QCD at 14 TeV and at 27 TeV using the CTEQ5 PDF set [55]. The results of the analysis are presented in Table 3. Note that at a $pp$ collider the production cross section of $\tilde\tau^+\tilde\nu_\tau$ is larger than that of the charge-conjugate channel, owing to the valence quark content of the proton.

Signal and background simulation and event selection
Our signal consists of a mixture of stau pair production and stau associated production with a tau sneutrino. The end products of the decay chain and the relevant final states are as in Eq. (39), where $\tau_h$ corresponds to a hadronically decaying $\tau$, $\ell$ represents a light lepton (electron or muon), and $E^{\mathrm{miss}}_T$ is the missing transverse energy due to neutrinos and the LSP. The event preselection criteria require at least one isolated light lepton and at most one hadronically decaying tau so as to retain as much signal as possible. No selection criterion is imposed on the missing transverse energy, $E^{\mathrm{miss}}_T$, since for most of the parameter points considered the final states involve little $E^{\mathrm{miss}}_T$, which in most situations is below the detectors' trigger level. Furthermore, since our stau is long-lived, it will leave a track in the inner detector (ID) tracker characterized by low speed and large invariant mass. We are interested in tracks left by charged particles (mostly leptons) originating from the decay of the long-lived stau. Some studies already exist in this direction; see, e.g., [56]. Since the lepton track is soft (of low $p_T$), the combination of the stau track and the lepton track constitutes what is known as a kinked track [57]. The lepton tracks are highly displaced and so are characterized by a large impact parameter, $d_0$, which is the shortest distance, in the $(x, y)$ plane perpendicular to the beam direction, between the track and the collision point. Such a signature is a combination of kinked and displaced tracks. The decay length of the long-lived stau can be determined from $d_{xy} = \sqrt{(x_m - x_p)^2 + (y_m - y_p)^2}$, where $(x_m, y_m)$ and $(x_p, y_p)$ are the vertex coordinates of the mother and daughter particles, respectively. For the long-lived stau, $(x_p, y_p)$ represents the vertex (or track initial point) of the tau (or the resulting leptons) with a large impact parameter, and $(x_m, y_m)$ is taken to be $(0, 0)$, i.e., at the primary vertex.
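The transverse decay length defined above is a simple two-dimensional distance. The sketch below implements it with hypothetical vertex coordinates (in mm) purely to illustrate the definition.

```python
import numpy as np

# Transverse decay length of the stau from mother and daughter vertex positions,
# as defined in the text; coordinates here are hypothetical (mm).
def d_xy(xm, ym, xp, yp):
    return np.hypot(xp - xm, yp - ym)

# Mother vertex at the primary vertex (0, 0); daughter (tau/lepton) vertex displaced
print(d_xy(0.0, 0.0, 300.0, 400.0))  # prints 500.0
```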
It is known that imposing cuts on the impact parameter and decay length such as $|d_0| > (2-4)$ mm and $d_{xy} > (4-8)$ mm greatly reduces the SM background [58][59][60][61][62]. To show the size of the impact parameter of the signal events in relation to the SM background, we present this distribution in Fig. 1. The benchmarks (a), (b), (c) and (d) are shown by the black histograms whereas the SM backgrounds are represented by the colored ones. Note that no preselection cuts have been imposed on the signal and background in this plot. One can clearly see that the SM background events fall to zero at $|d_0| \sim 200$ mm whereas signal events extend all the way up to $\sim 300$ mm as a result of the late decay products of the stau.
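A toy model makes the power of the $|d_0|$ cut apparent. The sketch below is not the paper's simulation: it simply draws the impact parameters of "signal" and "prompt background" tracks from exponential distributions with hypothetical scales (broad for late stau decay products, narrow for prompt tracks) and evaluates the selection efficiency of a cut in the quoted 2-4 mm range.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy illustration (hypothetical scales, not the paper's samples): lepton tracks from
# late stau decays have a broad |d0| distribution, prompt background a narrow one.
sig_d0 = rng.exponential(scale=50.0, size=100_000)   # mm, displaced signal tracks
bkg_d0 = rng.exponential(scale=0.05, size=100_000)   # mm, prompt tracks

cut = 3.0  # mm, within the 2-4 mm range quoted in the text
print("signal efficiency:    ", (sig_d0 > cut).mean())
print("background efficiency:", (bkg_d0 > cut).mean())
```

With these scales the cut keeps over 90% of the displaced tracks while removing essentially all prompt ones, which mirrors the qualitative separation seen in Fig. 1.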

Cut-flow analysis and results
We give a cut-and-count analysis of the discovery potential for a long-lived stau for the signal benchmark points of Table 1 at the HL-LHC and HE-LHC by comparing the number of signal events surviving the cuts at select integrated luminosities in zero and non-zero pile-up environments. Even though pile-up will be a significant player at high luminosities, an analysis with no pile-up gives an idea of how the performance is affected by adding pile-up, which is a measure of the effectiveness of the considered pile-up subtraction algorithm. As explained in section 6, we are looking for a light lepton (electron or muon) track with a high impact parameter, $|d_0|$, originating from a high-momentum track due to the long-lived charged stau. Hereafter, we list the kinematic variables used to discriminate the signal from the SM background:
1. $|d_0|$: the track impact parameter, which is chosen to be large enough to eliminate as many background events as possible.
5. $\Delta R(\tilde\tau, \mathrm{track})$: the minimum spatial separation between the lepton tracks and the stau track. A small cut on this variable ensures that the lepton track considered has originated from a long-lived stau.
6. $\beta = p/E$: the velocity of the long-lived particle. A cut on $\beta$ allows us to reject events with muons faking a stau track.
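The discriminating power of the $\beta$ variable follows from the relativistic relation $\beta = p/\sqrt{p^2 + m^2}$: a heavy stau is measurably slower than a muon of the same momentum. The masses and momentum below are illustrative, not taken from the benchmarks.

```python
import numpy as np

# Velocity of a charged track: beta = p/E with E = sqrt(p^2 + m^2).
# A heavy stau is noticeably slower than a muon at the same momentum,
# which is what a cut on beta exploits. Values here are illustrative.
def beta(p, m):
    return p / np.hypot(p, m)

p = 500.0  # GeV
print(f"stau (m = 450 GeV):   beta = {beta(p, 450.0):.3f}")
print(f"muon (m = 0.106 GeV): beta = {beta(p, 0.106):.6f}")
```

A cut such as $\beta$ below some threshold close to 1 therefore removes muon tracks while retaining the slow stau candidates.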
We present in the left panel of Fig. 2 the $\Delta R(\tilde\tau, \mathrm{track})$ distribution, which peaks at small spatial separation. Thus a cut of $\Delta R(\tilde\tau, \mathrm{track}) < 0.6$ is sufficient to ensure that the lepton tracks have actually originated from the corresponding stau track. The right panel of Fig. 2 displays the decay length, $d_{xy}$, of the stau, showing that the stau can travel up to 1 m in the ID, the typical tracker radius extending from 35 mm to 1200 mm.

Results with no pile-up
We start by showing results for the case of no pile-up. After applying the preselection cuts, cuts on the kinematic variables 1-6 of section 7 are applied to the signal and background samples. We give in Table 4 the cut-flow for three parameter points, (a), (c) and (f), and the SM backgrounds, where the samples are normalized to their cross-section values, in fb. The points are chosen to represent cases of maximal (c), moderate (a) and low (f) signal event yield. It is clear that no backgrounds survive the cuts, as one would expect from such a signal topology. Note that the two kinematic variables that have the most impact on the backgrounds (and partly on the signal) are $|d_0|$ and the track isolation condition. In an actual collider experiment, the backgrounds are not exactly zero but are mostly instrumental in nature [72]. Other background sources come from accidental crossings of tracks, especially in pile-up environments. We will consider this when discussing pile-up in the next section. For the eight benchmark points of Table 1 we give the projected number of signal events surviving the cuts at select integrated luminosities at the HL-LHC. We present the results in Table 5. One can see that four out of the eight points may be discovered at the HL-LHC with integrated luminosities up to 3000 fb$^{-1}$, where we assume that a signal event yield $> 5$ is enough to claim discovery over an almost zero background. Table 5: Projected number of signal events at select integrated luminosities for benchmark points of Table 1 at HL-LHC for the case of no pile-up.
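The projected yields scale simply as $N = \sigma \times \mathcal{L} \times \epsilon$, where $\epsilon$ is the net selection efficiency after all cuts. The sketch below uses hypothetical placeholder values for the cross section and efficiency (not the numbers of Tables 3-5) to illustrate how the discovery criterion of more than 5 events over a near-zero background is evaluated.

```python
# Projected signal yield: N = sigma * L * eff. The cross section and efficiency
# below are hypothetical placeholders, not the paper's Table 3/4 values.
def expected_events(sigma_fb, lumi_fb_inv, eff):
    return sigma_fb * lumi_fb_inv * eff

N = expected_events(sigma_fb=0.05, lumi_fb_inv=3000.0, eff=0.05)
print(round(N, 3))   # -> 7.5
print(N > 5)         # exceeds the >5-event discovery criterion over ~zero background
```

Doubling the luminosity or the efficiency doubles the yield, which is why points failing the criterion at 3000 fb$^{-1}$ at the HL-LHC can still be recovered with the larger cross sections and luminosities of the HE-LHC.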
Still for the case of no pile-up, we give the same cut-flow for the signal points (a), (c) and (f) and the SM backgrounds but for the HE-LHC. The effect of the kinematic variables on the signal and background is the same at the HE-LHC as at the HL-LHC. The results are given in Table 6. One can see that at the HE-LHC all eight benchmarks can be discovered with integrated luminosities up to 6000 fb$^{-1}$ (see Table 7).

Effect of pile-up
We study the effect of pile-up on the signal and background event yields by considering an average of 128 interactions per bunch crossing. The presence of pile-up increases the track multiplicity and jet activity, especially in the low-momentum regime. The PUPPI algorithm is used for pile-up subtraction; it is based on identifying charged particles from pile-up and assigning weights to neutral ones. The weights are then used to rescale the particles' four-momenta. Hence PUPPI improves the reconstruction of objects such as jets at the particle level before the clustering sequence is initiated; improvements have also been shown at the level of $E_T^{\rm miss}$. For this reason we use PUPPI jets and $E_T^{\rm miss}$ in our kinematic variables. Thus the same kinematic variables of section 7.1 are used here with slight modifications and additions, with $p_T^{\rm Hadronic}$ now calculated using PUPPI jets. As can be seen from Fig. 3, due to pile-up the lepton track multiplicity increases dramatically and the number of lepton tracks matching the number of isolated leptons (the isolated-lepton-tracks criterion) is now very small. Applying this criterion leads to the loss of almost the entire signal while keeping many background events. To mitigate this issue, we apply an additional cut on the lepton tracks, requiring $p_T^{\rm tracks} > 5$ GeV. This cut removes low-momentum lepton tracks and restores the discriminating power of the isolated-lepton-tracks criterion. Furthermore, the cut values on some of the kinematic variables need to be adjusted to the pile-up environment, i.e., harder cuts must be applied to further clean the effects of pile-up. Different combinations of cut values were tried, and the ones giving the optimal results are summarized in Table 8; these supersede the corresponding cuts of Tables 4 and 6, while the bottom cut of Table 8 is additional and is applied to signal and background after inclusion of pile-up. The other cuts are the same as in Tables 4 and 6.
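The pile-up mitigation step above can be sketched as a simple filter: keep only lepton tracks above 5 GeV before matching tracks to isolated leptons. A minimal illustration, assuming a hypothetical track representation (a list of dicts with a `pt` field):

```python
# Sketch of the track-cleaning cut described in the text: lepton tracks with
# pT <= 5 GeV (dominantly pile-up) are dropped before the isolated-lepton-
# tracks criterion is applied. The data layout here is hypothetical.

PT_TRACK_MIN = 5.0  # GeV

def clean_lepton_tracks(tracks, pt_min=PT_TRACK_MIN):
    """Return only lepton tracks with pT above pt_min (GeV)."""
    return [t for t in tracks if t["pt"] > pt_min]

# Example: four candidate lepton tracks, two of which survive the cut
tracks = [{"pt": 1.2}, {"pt": 7.5}, {"pt": 4.9}, {"pt": 30.0}]
surviving = clean_lepton_tracks(tracks)  # tracks at 7.5 and 30.0 GeV remain
```

After this cleaning, requiring the number of surviving lepton tracks to match the number of isolated leptons again discriminates signal from pile-up background.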
After applying the new cut values, the signal event yield drops, as one would expect in the presence of pile-up. The numbers of signal events surviving the cuts for the cases of pile-up and no pile-up are displayed in Fig. 4 for the benchmark points of Table 1.
Given the rate at which the HL-LHC is collecting data, points (a) and (c) may need a run time of ∼2 years to be discovered, while points (b) and (e) may take 5 to 6 years. This is shown in Fig. 5. On the other hand, the run time is greatly reduced at the HE-LHC, which is expected to collect data at the rate of 820 fb$^{-1}$/year. Thus points (a) and (c) will require ∼4 months of run time, while points (b) and (e) may take 5 months to a year. As for the other points, point (d) needs ∼3 years, point (f) ∼5 years, and points (g) and (h) ∼8 years.

Before concluding, we discuss the effect of the mass mixing coefficient $\epsilon$ and the kinetic mixing coefficient $\delta$ starting from the GUT scale. In this analysis we have set $\epsilon$ to zero (i.e., $M_2 = 0$) at the GUT scale and given $\delta$ a non-zero value. From Eq. (13), the RGE running of $M_2$ induces a tiny value for $\epsilon$ at the electroweak scale, so our analysis includes the effect of both mass and kinetic mixings. One can also reverse the situation and set $\delta$ to zero while giving $\epsilon$ a tiny value at the GUT scale. The coefficient $\delta$ does not run and remains zero at the electroweak scale. We have checked that with some tuning of $\epsilon$ one can reproduce the same effect that $\delta$ has on the stau decay width. For example, if we consider point (a) of Table 1 and set $\delta = 0$ and $\epsilon = 6.2 \times 10^{-7}$ at the GUT scale (so that $\epsilon = 4.2 \times 10^{-7}$ at the electroweak scale), we reproduce the same decay width and lifetime as given in Table 2 for that point.
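The run-time estimates quoted above follow by dividing the required integrated luminosity for each point by the expected data-collection rate. A minimal sketch, using the 820 fb$^{-1}$/year HE-LHC rate from the text (the required luminosity in the example is a hypothetical placeholder):

```python
# Run time from required integrated luminosity and collection rate:
# t [months] = 12 * L_required / rate. The 820 fb^-1/year rate is the
# HE-LHC figure quoted in the text; the example luminosity is illustrative.

HE_LHC_RATE = 820.0  # fb^-1 collected per year at the HE-LHC

def runtime_months(required_lumi_fb: float, rate_fb_per_year: float = HE_LHC_RATE) -> float:
    """Run time in months to accumulate required_lumi_fb at the given rate."""
    return 12.0 * required_lumi_fb / rate_fb_per_year

# A point needing ~280 fb^-1 would take roughly 4 months at the HE-LHC
t = runtime_months(280.0)
```

The same function with the (much lower) HL-LHC collection rate reproduces the multi-year time scales quoted for the HL-LHC points.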

Conclusions
In this analysis we presented an extension of the MSSM/SUGRA with an extra abelian gauge group $U(1)_X$. Under this extension, the MSSM/SUGRA is augmented by an additional $U(1)$ vector supermultiplet and a $U(1)$ chiral supermultiplet. The MSSM fields are not charged under $U(1)_X$, and the only communication between the MSSM and the hidden sector is through a gauge kinetic mixing coefficient $\delta$ and a mass mixing parameter $\epsilon$. As a result, the neutral gauge boson sector contains an additional boson: a $Z'$ boson with couplings to the MSSM suppressed by $s_\delta$, which can easily escape detection due to its very small production cross section at colliders. The gaugino sector is extended as well; in particular, the neutralino mass matrix becomes $6\times 6$, with two additional neutralinos. The lightest of the six neutralinos is the hidden sector neutralino $\tilde\xi_1^0$, which is a dark matter candidate with very weak interactions with the visible sector. The NLSP is the stau, which has a suppressed decay channel to $\tilde\xi_1^0$, making it a long-lived particle. The suppression is due to two sources: a small mixing coefficient $\delta$ and phase-space suppression, i.e., a small mass gap between the LSP and the NLSP. Even though the dark matter candidate has very weak interactions with the thermal bath, its relic abundance may still be produced via the freeze-out mechanism through conversion-driven processes. Since dark matter self-annihilation and stau-LSP annihilation are highly inefficient, the LSP relic density is mainly set by stau-stau annihilation to SM particles, with stau decay and inverse decay to the LSP responsible for maintaining chemical equilibrium. The rate of the latter process falls below the Hubble parameter at freeze-out for two of the considered benchmark points, (g) and (h), which makes coscattering the leading process in converting $\tilde\xi_1^0$ to $\tilde\tau$, followed by stau self-annihilation which eventually depletes the relic abundance.
Because of its very weak interactions, the LSP-proton scattering cross section is negligible, allowing such a dark matter candidate to easily escape direct detection in scattering experiments. However, because of the suppressed decay width of the stau, the NLSP, there is an opportunity to observe this particle through its long-lived decay at the LHC. The charged stau will leave a track in the ID before decaying into the hidden sector dark matter. Thus, for this class of models, observation of such a track will point to the existence of hidden sector dark matter.
In summary, we have given an analysis of the potential of the HL-LHC and HE-LHC for discovering hidden sector dark matter via a long-lived stau, through its pair production and its associated production with a tau sneutrino, followed by its decay. The characteristic signature of a charged long-lived stau is a high-$p_T$ track decaying to another charged track (resulting in a kinked track), with leptons having a large impact parameter. With proper cuts on select kinematic variables, all physical backgrounds can be rejected both without and with pile-up. We show that half of the eight benchmark points considered can be discovered at the HL-LHC, while all of them are within reach of the HE-LHC. It is also shown that a transition from the HL-LHC to the HE-LHC reduces the runtime for discovery of points (a), (b), (c) and (e) by ∼80 to ∼90%.