Search for long-lived particles using delayed photons in proton-proton collisions at √ s = 13 TeV

A search for long-lived particles decaying to photons and weakly interacting particles, using proton-proton collision data at √s = 13 TeV collected by the CMS experiment in 2016–2017, is presented. The data set corresponds to an integrated luminosity of 77.4 fb⁻¹. Results are interpreted in the context of supersymmetry with gauge-mediated supersymmetry breaking, where the neutralino is long-lived and decays to a photon and a gravitino. Limits are presented as a function of the neutralino proper decay length and mass. For neutralino proper decay lengths of 0.1, 1, 10, and 100 m, masses up to 320, 525, 360, and 215 GeV are excluded at 95% confidence level, respectively. We extend the previous best limits in the neutralino proper decay length by up to one order of magnitude, and in the neutralino mass by up to 100 GeV. Published in Physical Review D as doi:10.1103/PhysRevD.100.112003.


Introduction
The results of a search for long-lived particles (LLPs) decaying to a photon and a weakly interacting particle are presented. Neutral particles with long lifetimes are predicted in many models of physics beyond the standard model (SM). In this paper, a benchmark scenario of supersymmetry (SUSY) [1][2][3][4][5][6][7][8][9][10][11][12][13][14] with gauge-mediated SUSY breaking (GMSB) [15][16][17][18][19][20][21][22][23] is employed, commonly referred to as the "Snowmass Points and Slopes 8" (SPS8) benchmark model [24]. In this scenario, pair-produced squarks and gluinos undergo cascade decays as shown in Fig. 1, and eventually produce the lightest SUSY particle (LSP), the gravitino (G̃), which is stable and weakly interacting. The phenomenology of such decay chains is primarily determined by the nature of the next-to-lightest SUSY particle (NLSP). In the SPS8 benchmark, the NLSP is the lightest neutralino, χ 0 1 , and the mass of the NLSP is linearly related to the effective scale of SUSY breaking, Λ [15,25]. In the SPS8 model, Λ is a free parameter whose value determines the primary production mode and decay rate of SUSY particles. Depending on the value of Λ, the coupling of the NLSP to the gravitino could be very weak and lead to long NLSP lifetimes. The dominant decay mode of the NLSP is to a photon and a gravitino, resulting in a final state with one or two photons and missing transverse momentum (p miss T ). The dominant squark-pair and gluino-pair production modes also result in additional energetic jets. If the NLSP has a proper decay length that is a significant fraction of the radius of the CMS tracking volume (about 1.2 m), then the photons produced at the secondary vertex tend to exhibit distinctive features.
Because of their production at displaced vertices and their resulting trajectories, the photons have significantly delayed arrival times (order of ns) at the CMS electromagnetic calorimeter (ECAL) compared to particles produced at the primary vertex and traveling at the speed of light. They also enter the ECAL at non-normal impact angles.
The present search makes use of these features to identify potential signals of physics beyond the SM. We select events with one or two displaced or delayed photons, and three or more jets. Signal events are expected to produce large p miss T as the LSP escapes the detector volume without detection. In the case of very long-lived NLSPs, one of the NLSPs may completely escape the detector, further increasing the p miss T . Previously, similar searches for LLPs decaying to displaced or delayed photons have been performed by the CMS [26] and ATLAS [27] Collaborations. Simulated signal events are processed through a detailed simulation of the CMS detector based on GEANT4 [41] and are reconstructed with the same algorithms as used for data. Additional pp interactions in the same or adjacent bunch crossings, referred to as pileup, are also simulated.

Trigger and event selection
The unique signature of delayed photons is best exploited with specialized triggers and dedicated photon reconstruction and identification criteria. There is a difference between the search selections for the 2016 and 2017 data sets, primarily because of the introduction of a targeted HLT algorithm implemented for the 2017 data set, which superseded a general diphoton trigger used for the 2016 data set.

Trigger selection
For the 2016 data set, events are selected by the standard diphoton trigger, requiring transverse momenta (p T ) larger than 42 and 25 GeV for the leading and subleading photons, respectively. Loose identification criteria are imposed on the photon shower width in the ECAL and on the ratio of the energies recorded in the ECAL and HCAL to reduce the rate of background from jets misidentified as photons.
For the 2017 data set, a dedicated HLT algorithm was developed to select events with a single photon satisfying requirements consistent with production at a displaced vertex. Such photons tend to strike the front face of the barrel ECAL at a non-normal incidence angle, resulting in a more elliptical electromagnetic shower in the η-φ plane [26]. In addition to standard requirements on the shower width and electromagnetic to hadronic energy ratio, requirements on the major and minor axes of the shower are also imposed. This allows the identification of the elliptical shower shape, described in greater detail in Sec. 4.2. Loose requirements on the amount of energy around the direction of the photon in the CMS subdetectors (isolation) are also imposed on trigger photon candidates, and the photon p T is required to exceed 60 GeV. Electrons misidentified as photons are suppressed by requiring the candidate photon to be geometrically isolated from charged-particle tracks. Relaxing the trigger requirement from two photons to only one photon increases the background rate, and in order to reduce the trigger rate to a level acceptable for the operation of the HLT the scalar p T sum of all jets (H T ) is required to exceed 350 GeV. For signals with neutralino proper decay length larger than 10 m, the signal acceptance is improved by about a factor of two compared to the 2016 data set.
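The H T and single-photon requirements of the 2017 trigger described above can be emulated offline; this is an illustrative sketch only, and the function name and record fields (`pt`, `passes_id`) are assumptions, not part of the CMS software.

```python
# Sketch of the 2017 displaced-photon trigger logic described in the text:
# one identified photon with pT > 60 GeV and HT (scalar jet-pT sum) > 350 GeV.
# Field names are illustrative placeholders.

def passes_2017_trigger(photons, jets):
    """Return True if the event passes the emulated 2017 HLT selection."""
    ht = sum(j["pt"] for j in jets)
    has_photon = any(p["pt"] > 60.0 and p["passes_id"] for p in photons)
    return has_photon and ht > 350.0
```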

Object reconstruction and selection
A particle-flow (PF) algorithm [42] is used to reconstruct and identify each individual particle in an event using an optimized combination of information from the various elements of the CMS detector. The candidate vertex with the largest value of summed physics-object p 2 T is taken to be the primary pp interaction vertex. The physics objects are the jets, clustered using the jet finding algorithm [43,44] with the tracks assigned to candidate vertices as inputs, and the associated missing transverse momentum, taken as the negative vector sum of the p T of those jets.
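The primary-vertex choice described above (the candidate vertex with the largest summed physics-object p T ²) can be sketched as follows; representing each vertex as a list of associated object p T values is an assumption made for illustration.

```python
def select_primary_vertex(vertices):
    """Return the index of the vertex with the largest sum of object pT^2,
    mirroring the primary-vertex definition in the text.
    `vertices` is a list of vertices, each a list of object pT values (GeV)."""
    return max(range(len(vertices)),
               key=lambda i: sum(pt**2 for pt in vertices[i]))
```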
Photon candidates are reconstructed from energy clusters in the ECAL [45] and identified based on the transverse shower width, the hadronic to electromagnetic energy ratio, and the degree of isolation from charged particle tracks. Photons are required to satisfy |η| < 2.5 and to not fall in the transition region between the barrel and endcap of the ECAL (1.444 < |η| < 1.566), where the photon reconstruction is not optimal. For the 2016 data set, photon candidates that share the same energy cluster as an identified electron associated with the primary vertex are vetoed following the procedure detailed in Ref. [45]. To remain consistent with the HLT selection, photons matched geometrically to charged-particle tracks are vetoed for the 2017 data set as well.
Because of algorithms designed to reject noise and out-of-time pileup, the default photon reconstruction vetoes photons delayed by more than 3 ns. To evade this veto, a second set of out-of-time (OOT) photons is therefore defined, in which the clustering starts from ECAL deposits whose signals are delayed by more than 3 ns. The remainder of the reconstruction algorithm for OOT photons is identical to the standard photon reconstruction described in the previous paragraph. In addition to being delayed, signal photons tend to impact the front face of the barrel ECAL at a non-normal incidence angle, and yield electromagnetic showers that are more elliptical in the η-φ plane. To make use of this discriminating feature, we define the OOT photon identification criteria including selection requirements on the S major and S minor observables, defined as

S major,minor = [ (S φφ + S ηη) ± √( (S φφ − S ηη)² + 4 S ηφ ² ) ] / 2, (1)

where S φφ , S ηη , and S ηφ are the second central moments of the spatial distribution of the energy deposits in the ECAL in η-φ coordinates. The observables S major and S minor are proportional to the squared lengths of the semimajor and semiminor axes of the elliptical shower shape. The full set of criteria for the OOT photon selection additionally includes requirements on the transverse shower width and isolation, and was obtained through a separate optimization that maximizes the discrimination between displaced signal photons and background photons associated with the primary vertex.
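S major and S minor are the eigenvalues of the 2×2 matrix of second central moments of the shower. A minimal sketch of this computation (the function name is illustrative):

```python
import math

def shower_axes(s_etaeta, s_phiphi, s_etaphi):
    """Return (S_major, S_minor): the larger and smaller eigenvalues of the
    2x2 second-moment matrix [[S_etaeta, S_etaphi], [S_etaphi, S_phiphi]],
    proportional to the squared semimajor/semiminor axes of the shower."""
    trace = s_etaeta + s_phiphi
    disc = math.sqrt((s_etaeta - s_phiphi)**2 + 4.0 * s_etaphi**2)
    return 0.5 * (trace + disc), 0.5 * (trace - disc)
```

For a diagonal moment matrix the axes reduce to the two moments themselves; a nonzero off-diagonal term rotates and stretches the ellipse.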
Hadronic jets are reconstructed by clustering PF candidates using the anti-k T algorithm with a distance parameter of 0.4 [43,44]. Further details of the performance of the jet reconstruction can be found in Ref. [46]. Jets used in any selection of this analysis are required to have p T > 30 GeV and |η| < 3.0.
The negative vector p T sum of all the PF candidates in an event is defined as the missing transverse momentum, and its magnitude is denoted as p miss T [47]. The p miss T is modified to account for corrections to the energy scale of the reconstructed jets in the event. Because OOT photons are not part of the standard PF candidate reconstruction used to compute the p miss T , the p miss T is additionally corrected to account for any selected OOT photon. The arrival time of a photon at the ECAL, t ECAL , is measured as the inverse-variance-weighted mean of the timestamps of the crystals in its cluster,

t ECAL = Σ i (t i ECAL / σ i ²) / Σ i (1 / σ i ²), (2)

where t i ECAL is the timestamp of the signal pulse in crystal i [48]. The estimated time resolution of the signal pulse in crystal i is σ i and is parametrized as

σ i = N / (A i / σ N i ) ⊕ C, (3)

where A i is the amplitude of the signal detected by crystal i, σ N i is the pedestal noise for crystal i, ⊕ denotes addition in quadrature, and N and C are constants fitted from a dedicated measurement of the time resolution of the crystal sensors.
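A minimal sketch of the weighted timestamp of Eq. (2) and the resolution model of Eq. (3); the values of the constants N and C are placeholders for the fitted parameters reported in Table 1.

```python
import math

def crystal_sigma(amplitude, pedestal_noise, noise_term, constant_term):
    """Per-crystal time resolution, Eq. (3):
    sigma_i = N / (A_i / sigma_N_i) (+) C, added in quadrature.
    `noise_term` (N) and `constant_term` (C) are fitted constants."""
    return math.hypot(noise_term / (amplitude / pedestal_noise), constant_term)

def photon_time(times, sigmas):
    """Photon timestamp, Eq. (2): the 1/sigma^2-weighted mean of crystal times."""
    weights = [1.0 / s**2 for s in sigmas]
    return sum(w * t for w, t in zip(weights, times)) / sum(weights)
```

At large amplitude-over-noise the resolution approaches the constant term C, consistent with the plateau of the fitted curves in Fig. 2.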
To measure the crystal sensor time resolution, we follow a procedure similar to that described in Refs. [48,49]. We first apply a very loose selection on photons using S major and S minor in order to reject jets. Pairs of crystals from the same photon cluster are selected by requiring that their energies are within 20% of each other, that they are nearest neighbors in either the η or φ direction, and that they lie within the same 5×5 grid of crystals defining a trigger tower. The distributions of time differences measured in such crystal pairs are fitted with Gaussian functions in bins of the effective amplitude A eff /σ N , and the standard deviation of each fitted Gaussian function is plotted as a function of A eff /σ N . The effective amplitude combines the signals in the two crystals and is defined as

A eff = A 1 A 2 / √( A 1 ² + A 2 ² ). (4)

The results for the 2016 and 2017 data sets are shown in Fig. 2. These resolution measurements are fitted with the functional form given by Eq. (3), and the N and C parameters are extracted and summarized in Table 1. These parameters are then used to calculate the weights for the photon timestamp in Eq. (2). The observed worsening of the constant term of the time resolution in 2017 may be due to a progressive loss of transparency of the crystals from radiation damage. To calibrate the photon timestamp response, electrons from Z → e + e − decays with an invariant mass between 60 and 150 GeV are reconstructed as photons. For each such photon candidate, t ECAL is adjusted for the time-of-flight between the primary vertex and the location of the impact of the photon on the front face of the ECAL. The timestamp for each photon is recorded, and the mean and RMS parameters of the resulting distribution are extracted as a function of the photon energy.
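The effective amplitude used to bin the crystal-pair time differences can be computed directly from the two crystal amplitudes:

```python
import math

def effective_amplitude(a1, a2):
    """Effective amplitude of a crystal pair, Eq. (4):
    A_eff = A1 * A2 / sqrt(A1^2 + A2^2)."""
    return a1 * a2 / math.hypot(a1, a2)
```

For equal amplitudes A this reduces to A/√2, as expected for the combination of two equally precise measurements.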
The time response mean is adjusted to zero for both data and simulation, and the timestamps in the simulated events are smeared by an additional Gaussian-distributed random variable such that the resolution in simulation matches that measured in data. The calibrated photon arrival time is denoted as t γ . These calibrations are applied to simulated signal samples in order to accurately predict the signal response, and their uncertainties are propagated to the predicted shape of the t γ distribution for the signal as a systematic uncertainty. The time resolution of a single photon candidate is roughly 400 ps. The resolution is constant up to a photon timestamp of 25 ns, the upper boundary of t γ used during the signal extraction.
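The extra Gaussian smearing applied to the simulated timestamps can be sketched as below, assuming the data and simulation resolutions are both known; the function name and interface are illustrative.

```python
import random

def smear_time(t_sim, sigma_data, sigma_sim, rng=random):
    """Add a Gaussian-distributed random shift to a simulated timestamp so
    that the total resolution matches the one measured in data.
    The extra width is the quadrature difference of the two resolutions
    (requires sigma_data >= sigma_sim)."""
    extra = (sigma_data**2 - sigma_sim**2) ** 0.5
    return t_sim + rng.gauss(0.0, extra)
```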

Event selection
Events with at least one photon in the barrel region of the detector (|η| < 1.444) with p T larger than 70 GeV are selected. Standard photons [45] and OOT photons are required to pass the "tight" working points of their respective identification criteria, each tuned to an average efficiency of about 70%. Furthermore, a displaced photon identification requirement based on the S major and S minor variables is imposed. The calibrated arrival time of this tight photon, t γ , is used as one of the final discriminating observables to distinguish signal from background. For the dominant squark-pair and gluino-pair production modes shown in Fig. 1, the NLSP is generally produced in association with several jets, and therefore we also require events to have three or more jets with p T larger than 30 GeV.
In order to remain compatible with the respective HLT selection, slightly different event selection criteria are imposed on the 2016 and 2017 data sets. For the 2016 data set, which was triggered by a diphoton HLT, a second photon with p T larger than 40 GeV is required to match the analogous HLT requirement. For the 2017 data set, the first category, referred to as the 2017γ category, comprises events with no subleading photon, or events in which the subleading photon does not pass the photon identification criteria. The second category, referred to as the 2017γγ category, requires events to have a subleading photon satisfying the photon identification criteria. The second-photon requirement reduces the background by one to two orders of magnitude, while the signal yield remains high for low to intermediate lifetimes. Finally, for the 2017 data set, H T is required to be larger than 400 GeV in order to match the requirements of the HLT and to reach the plateau of the trigger efficiency.
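The 2017 category assignment described above can be sketched as a small helper; the boolean inputs are illustrative simplifications of the full photon identification, and the function name is an assumption.

```python
def category_2017(leading_passes_id, subleading_passes_id, ht):
    """Assign a 2017 event to the single-photon or diphoton category.
    Returns None if the leading-photon or HT > 400 GeV requirement fails."""
    if not (leading_passes_id and ht > 400.0):
        return None
    return "2017gammagamma" if subleading_passes_id else "2017gamma"
```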
For the 2016 and 2017γγ analyses, for a given neutralino proper decay length, the signal yield increases as a function of the SUSY breaking scale, Λ, by roughly a factor of two over the range considered for this analysis (Λ from 100 to 400 TeV). The product of signal efficiency and acceptance for the lowest Λ is roughly 10.0 ± 0.1% and 0.15 ± 0.01% for neutralino proper decay lengths of 0.1 and 100 m, respectively. For the 2017γ analysis, the product of signal efficiency and acceptance varies as a function of Λ from 5.5 ± 0.1 to 10.4 ± 0.2% for a neutralino proper decay length of 0.1 m, and from 0.22 ± 0.03 to 0.65 ± 0.05% for a neutralino proper decay length of 100 m. These trends can be explained by the harder photon spectrum and increase in jet activity that result from an increase in Λ, while an increase in the neutralino proper decay length results in either one or both of the NLSPs decaying outside the fiducial region of ECAL.

[Figure: the p miss T distribution for data is shown separated into events with t γ ≥ 1 ns and t γ < 1 ns, and the t γ distribution into events with p miss T ≥ 100 GeV and p miss T < 100 GeV, each scaled to match the total number of events in the higher-threshold sample; the signal overlay, bin-width normalization, bin boundaries, and overflow conventions are given in the caption.]

After the full selection is applied, the main background contribution is from pp collision processes with high p miss T , which have the same timing distribution as low-p miss T collider data, ensuring that the two discriminating variables are independent for background processes. This includes proton collisions from satellite bunches spaced ∼2.5 ns apart from the main bunches.
The noncollision backgrounds, which include cosmic ray muons, beam halo muons, and electronic noise deposits, are reduced to a negligible level by the jet multiplicity requirement and the photon selections.

Signal extraction and background estimation
As the p miss T and t γ observables are statistically independent for background processes, the background distribution can be factorized into the product of the distributions of these two observables. This permits the use of the so-called "ABCD" method to predict the background yield in the signal-enriched bin C as N C = (N D N B )/N A , where N X is the number of background events in bin X. In order to account for potential signal contamination in bins A, B, and D, a modified ABCD method is used where a binned maximum likelihood fit is performed simultaneously in the four bins, with the signal strength included as a floating parameter that scales the signal yield uniformly in each bin. The background component of the fit is constrained to obey the standard ABCD relationship, within the bounds of a small systematic uncertainty derived from a validation check of the method in a control region (CR). Systematic uncertainties that impact the signal and background yields are treated as nuisance parameters with log-normal probability density functions.
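The standard ABCD relation that the fit is constrained to obey can be illustrated with a small helper. The Poisson uncertainty propagation here is a simplified sketch, not the full likelihood treatment used in the analysis.

```python
import math

def abcd_prediction(n_a, n_b, n_d):
    """Standard ABCD background estimate in the signal-enriched bin C,
    N_C = N_B * N_D / N_A, with the Poisson statistical uncertainty of the
    three input counts propagated in quadrature (simplified sketch)."""
    n_c = n_b * n_d / n_a
    rel_unc = math.sqrt(1.0 / n_a + 1.0 / n_b + 1.0 / n_d)
    return n_c, n_c * rel_unc
```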
For each point in the signal model parameter space (Λ and cτ in Table 2), the boundaries in p miss T and t γ that define the A, B, C, and D bins are chosen to yield optimal expected sensitivity. For the optimization procedure, in order to remain unbiased by the observed data in the signal-enriched regions, we estimate the background yields using only the observed yield in data for bin A (N A ) as follows. Template shapes for the observable p miss T (t γ ) are derived from data requiring that |t γ | < 1 ns (p miss T < 100 GeV). These regions are defined to have negligible signal yield. We obtain the ratios r B/A (r D/A ) by dividing the number of events with p miss T (|t γ |) larger than the given bin boundary by the number of events with p miss T (|t γ |) smaller than the bin boundary. The background yields in bins B, D, and C are calculated as N A r B/A , N A r D/A , and N A r B/A r D/A , respectively. The resulting optimized bin boundaries in t γ and p miss T are obtained by choosing the bin boundaries that yield the best expected limit, and are summarized in Table 2 for all the SPS8 model parameter space points considered. To simplify the analysis, groups of similar signal model parameters share the same optimized bin boundaries.
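The background estimate used in the optimization, built from bin A and the template ratios r B/A and r D/A , can be sketched as follows; the interface (lists of sideband observable values) is an assumption for illustration.

```python
def abcd_from_bin_a(n_a, met_values, tgamma_values, met_cut, t_cut):
    """Estimate the yields in bins B, D, and C from the bin-A yield alone.
    `met_values`: p_T^miss values from the |t_gamma| < 1 ns sideband;
    `tgamma_values`: t_gamma values from the p_T^miss < 100 GeV sideband.
    Returns (N_B, N_D, N_C) = (N_A * r_B/A, N_A * r_D/A, N_A * r_B/A * r_D/A)."""
    r_ba = sum(m >= met_cut for m in met_values) / sum(m < met_cut for m in met_values)
    r_da = sum(abs(t) >= t_cut for t in tgamma_values) / sum(abs(t) < t_cut for t in tgamma_values)
    return n_a * r_ba, n_a * r_da, n_a * r_ba * r_da
```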
We set the lower and upper boundaries in t γ to −2 and 25 ns, respectively. The lower boundary is set at five times the single-photon candidate time resolution, while the upper boundary is set to avoid contamination from the next LHC bunch crossing. To verify that the p miss T and t γ observables are independent, we define CRs that isolate SM processes similar to the backgrounds expected in the signal region (SR). The γ+jets CR, dominated by the γ+jets process, is defined as events satisfying the same requirements as the SR, but having fewer than three jets. The multijet CR, dominated by QCD multijet production, comprises events satisfying the same requirements as the SR, but with an inverted isolation requirement on the leading photon. We measure the correlation coefficients between p miss T and t γ to be less than 1% for both the γ+jets CR and the multijet CR, supporting their independence. A closure test on the predicted background yield in these CRs is propagated as a systematic uncertainty, as discussed further in Sec. 6.
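The correlation check between p miss T and t γ can be reproduced with a plain Pearson coefficient; the paper does not specify the exact correlation estimator, so Pearson is an assumption here.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples,
    as a check that two observables are (linearly) uncorrelated."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5
```

A coefficient with magnitude below 0.01 in both CRs is what supports the factorization assumption of the ABCD method.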

Systematic uncertainties
The dominant uncertainty in the search is the statistical uncertainty in the background prediction of the modified ABCD method. There are several subdominant systematic uncertainties that affect the prediction of the signal yield in all four bins. These systematic uncertainties include the uncertainty in the integrated luminosity measurement [50,51], in the energy scale and resolution of the photons and jets, and in the trigger and photon identification efficiencies. For all these cases, dedicated measurements are performed that evaluate corrections and uncertainties in the efficiencies and energy scales in simulated signal events, and these uncertainties are propagated to the signal yield predictions as an uncertainty in the predicted shapes of the distributions of the discriminating observables p miss T and t γ . The calibration of the timestamp discussed in Sec. 4.3 has associated uncertainties that affect both the offset and the resolution in t γ , and are propagated in the shape prediction for the t γ distribution for the signal benchmarks. As we use Z → e + e − events to measure the photon identification efficiency, the corresponding systematic uncertainty includes the impact of the difference in detector response between an electron and a photon. Table 3 provides a summary of the systematic uncertainties in the analysis and their assigned values for each data set, as well as additional information about the correlations between the uncertainties.
As the modified ABCD method for estimating the background requires that the discriminating observables p miss T and t γ are independent, we propagate a systematic uncertainty for any potential interdependence of these observables. We select events in the γ+jets and multijet CRs and separate events into the same A, B, C, and D bins defined for the signal region. We compare the background yield in bin C predicted by the ABCD method with the observed yield, and propagate the difference as a systematic uncertainty. This systematic uncertainty is referred to as "the closure" in Table 3. For the cases with neutralino proper decay length smaller than 0.1 m, this systematic uncertainty is relatively small, at 4% or less. For the cases with neutralino proper decay length larger than 0.1 m, the data yields in bin C of the CRs are small and are limited by statistical uncertainty. As a result, a relatively large systematic uncertainty of 90% of the predicted background yield is propagated.

Table 3: Summary of systematic uncertainties in the analysis. Also included are notes on whether each source affects signal yields (Sig) or background (Bkg) estimates, to which bins each uncertainty applies, and how the correlations of the uncertainties between the different data sets are treated. We assign different values for the uncertainty in the closure of the background prediction for short and long lifetime signal models. The column labeled 2017 includes both the 2017γ and 2017γγ categories.

Results and interpretation
Tables 4 and 5 list the yields and postfit background predictions from the background-only fit in each of the four bins for the 2016, 2017γ, and 2017γγ categories, for all the t γ -p miss T bin boundaries used. No statistically significant deviation from the background expectation is observed. The search result is interpreted in terms of limits on the neutralino production cross section for scenarios in the GMSB SPS8 signal model set.
The modified frequentist criterion CL s [52][53][54] with the profile likelihood ratio test statistic determined by toy experiments is used to evaluate the observed and expected limits at 95% confidence level (CL) on the signal production cross sections. The limits are shown in Fig. 5 as functions of the mass of the neutralino NLSP χ 0 1 (linearly related to the SUSY breaking scale, Λ) and the proper decay length of the neutralino. The two-photon category (2016 and 2017γγ) and the one-photon category (2017γ) are complementary: the sensitivity at small proper decay lengths is better for the 2016 and 2017γγ categories because of the extra background suppression from requiring two photons, while the sensitivity at large proper decay lengths is better for the 2017γ analysis because of the significantly improved signal acceptance from the dedicated displaced single-photon trigger. As a result, the sensitivity to signal models with proper decay times greater than the ECAL timing resolution for a single photon candidate is improved.

Summary
A search for long-lived particles that decay to a photon and a weakly interacting particle has been presented. The search is based on proton-proton collisions at a center-of-mass energy of 13 TeV collected by the CMS experiment in 2016–2017. The photon from this particle's decay would enter the electromagnetic calorimeter at a non-normal impact angle and with a delayed arrival time, and this striking combination of features is exploited to suppress backgrounds. The search is performed using a combination of the 2016 and 2017 data sets, corresponding to a total integrated luminosity of 77.4 fb⁻¹. Both single-photon and diphoton event samples are used for the search, with each sample providing complementary sensitivity at larger and smaller long-lived particle proper decay lengths, respectively. The results are interpreted in the context of supersymmetry with gauge-mediated supersymmetry breaking, using the SPS8 benchmark model. For neutralino proper decay lengths of 0.1, 1, 10, and 100 m, masses up to about 320, 525, 360, and 215 GeV are excluded at 95% confidence level, respectively. The previous best limits are extended by one order of magnitude in the neutralino proper decay length and by 100 GeV in the mass reach.


Acknowledgments
We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses.