Search for trilepton resonances from chargino and neutralino pair production in $\sqrt{s}$ = 13 TeV $pp$ collisions with the ATLAS detector

A search is performed for the electroweak pair production of charginos and associated production of a chargino and neutralino, each of which decays through an $R$-parity-violating coupling into a lepton and a $W$, $Z$, or Higgs boson. The trilepton invariant-mass spectrum is constructed from events with three or more leptons, targeting chargino decays that include an electron or muon and a leptonically decaying $Z$ boson. The analyzed dataset corresponds to an integrated luminosity of 139 fb$^{-1}$ of proton-proton collision data produced by the Large Hadron Collider at a center-of-mass energy of $\sqrt{s}$ = 13 TeV and collected by the ATLAS experiment between 2015 and 2018. The data are found to be consistent with predictions from the Standard Model. The results are interpreted as limits at 95% confidence level on model-independent cross sections for processes beyond the Standard Model. Limits are also set on the production of charginos and neutralinos for a Minimal Supersymmetric Standard Model with an approximate $B$-$L$ symmetry. Charginos and neutralinos with masses between 100 GeV and 1100 GeV are excluded depending on the assumed decay branching fractions into a lepton (electron, muon, or $\tau$-lepton) plus a boson ($W$, $Z$, or Higgs).


Introduction
The extension of the Standard Model (SM) of particle physics with supersymmetry (SUSY) [1][2][3][4][5][6] can introduce processes that violate baryon number ($B$) and lepton number ($L$) conservation, for instance proton decay. As such processes have not been observed, it is common to introduce an ad hoc requirement to conserve $R$-parity [7], where the $R$-parity of a particle is defined as $P_R = (-1)^{3(B-L)+2S}$. Here $B$, $L$, and $S$ are the baryon number, lepton number, and spin of the particle, respectively. All SM particles have $P_R = +1$ and their SUSY partners have $P_R = -1$. $R$-parity conservation (RPC) therefore requires the lightest SUSY particle (LSP) to be stable. In RPC scenarios, a stable LSP must necessarily be neutral in electric and color charge to be compatible with astrophysical data [8,9].
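For concreteness, the $P_R$ assignment can be checked numerically; the quantum-number values below are the standard ones, but the function itself is only an illustrative sketch:

```python
# R-parity: P_R = (-1)^(3(B-L)+2S) from the baryon number B, lepton
# number L, and spin S of a particle (illustrative check).
def r_parity(B, L, S):
    # The exponent 3(B-L)+2S is an integer for physical states;
    # round() guards against floating-point values such as S = 0.5.
    return (-1) ** round(3 * (B - L) + 2 * S)

# SM particles carry P_R = +1 ...
assert r_parity(B=1/3, L=0, S=1/2) == 1   # quark
assert r_parity(B=0, L=1, S=1/2) == 1     # electron
assert r_parity(B=0, L=0, S=1) == 1       # W boson
# ... while their superpartners carry P_R = -1
assert r_parity(B=1/3, L=0, S=0) == -1    # squark
assert r_parity(B=0, L=1, S=0) == -1      # slepton
assert r_parity(B=0, L=0, S=1/2) == -1    # chargino/neutralino
```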
Theories predicting $R$-parity violation (RPV) [10,11] are viable if the interactions that violate $B-L$ conservation have small couplings and violate only one of $B$ or $L$ at tree level, thus preventing rapid proton decay. The benchmark model for this search is a Minimal Supersymmetric Standard Model (MSSM) [12,13] extension that adds a gauged $U(1)_{B-L}$ [14][15][16][17][18] to the $SU(3)_C \times SU(2)_L \times U(1)_Y$ of the SM and includes three generations of right-handed neutrino supermultiplets. Any one of the right-handed sneutrinos has the correct quantum numbers to spontaneously break the $B-L$ symmetry, and its vacuum expectation value (VEV) introduces $L$ violation only at tree level [17]. The size of the RPV coupling is directly related to the right-handed sneutrino VEV, and therefore to the neutrino sector. As a consequence, the RPV coupling is kept small by the small values of the neutrino masses. The LSP may decay into SM particles through the RPV coupling, which allows the LSP to have electric and color charges.
The $B-L$ RPV model predicts unique signatures [19,20] that are forbidden if $R$-parity conservation is assumed. In a set of simulations [21,22] the MSSM parameters were scanned and the exact physical sparticle spectrum was calculated for each simulated point. It was seen [23,24] that two likely LSP candidates with moderate production cross sections at the Large Hadron Collider (LHC) are the wino-type chargino ($\tilde{\chi}^\pm_1$) and wino-type neutralino ($\tilde{\chi}^0_1$), the SUSY partners of the electroweak gauge fields of the $W$ bosons. Both LSP candidates were found to be nearly mass degenerate with one another for all simulations, and therefore both decay primarily via RPV couplings [24]. The RPV coupling was also found by the simulations to be large enough that both the $\tilde{\chi}^\pm_1$ and $\tilde{\chi}^0_1$ decay promptly [24]. Therefore, this search targets prompt decays. In this model the chargino may decay into a $Z$ boson and a charged lepton ($Z\ell$), a Higgs boson and a charged lepton ($h\ell$), or a $W$ boson and a neutrino ($W\nu$), while the neutralino may decay into $W\ell$, $Z\nu$, or $h\nu$, as shown in Figure 1. The $\tilde{\chi}^\pm_1$ and $\tilde{\chi}^0_1$ branching fractions depend on $\tan\beta$, the ratio of the VEVs of the two Higgs fields, and the neutrino mass hierarchy. For example, the branching fractions to electrons are predicted to be small in the normal hierarchy. This paper presents a search for the electroweak pair production of two charginos ($\tilde{\chi}^\pm_1\tilde{\chi}^\mp_1$) or associated production of a chargino and neutralino ($\tilde{\chi}^\pm_1\tilde{\chi}^0_1$). In contrast to RPC searches, there is no significant missing transverse momentum from an invisible LSP in the event, and all decay products can leave visible energy deposits in the detector. A resonance search in the trilepton mass ($m_{\ell\ell\ell}$) is performed in three orthogonal signal regions, all of which target events where the decay of at least one $\tilde{\chi}^\pm_1$ forms a trilepton resonance. One signal region requires four or more leptons and targets events where the second $\tilde{\chi}^\pm_1$ or $\tilde{\chi}^0_1$ (denoted hereafter by $\tilde{\chi}^\pm_1/\tilde{\chi}^0_1$) decay can be fully reconstructed. A second signal region also requires four or more leptons but targets decays of the second $\tilde{\chi}^\pm_1/\tilde{\chi}^0_1$ that include one or more leptons and at least one neutrino. A third signal region requires exactly three leptons, targeting decays of the second $\tilde{\chi}^\pm_1/\tilde{\chi}^0_1$ that include no leptons.

Figure 1: Diagrams of (left) $\tilde{\chi}^\pm_1\tilde{\chi}^\mp_1$ and (right) $\tilde{\chi}^\pm_1\tilde{\chi}^0_1$ production with at least one $\tilde{\chi}^\pm_1 \to Z\ell \to \ell\ell\ell$ decay. The $R$-parity-violating coupling allows prompt $\tilde{\chi}^\pm_1$ decays into $Z\ell$, $h\ell$, or $W\nu$ and prompt $\tilde{\chi}^0_1$ decays into $W\ell$, $Z\nu$, or $h\nu$.
Several SM processes with similar final-state particles can contribute to the signal regions, with the largest contributions from the $WZ$, $ZZ$, and $t\bar{t}Z$ processes. The expected yields of these processes are estimated using Monte Carlo (MC) simulation that is normalized to data in three highly populated control regions. Additional event selections are applied to reject events from SM processes in the signal regions while maintaining a high selection efficiency for events from the target $\tilde{\chi}^\pm_1\tilde{\chi}^\mp_1$ and $\tilde{\chi}^\pm_1\tilde{\chi}^0_1$ models.
A scan over the possible $\tilde{\chi}^\pm_1$ and $\tilde{\chi}^0_1$ branching fractions to both bosons and leptons is performed when setting model-specific limits. Model-independent limits are also explored in narrow slices of the $m_{\ell\ell\ell}$ spectrum, with no assumptions made on the $\tilde{\chi}^\pm_1/\tilde{\chi}^0_1$ branching fractions or decay kinematics of a generic beyond-the-SM process.
Previous searches for the production of wino-type charginos and neutralinos in $R$-parity-conserving models have targeted final states with three or more leptons via $W$ and $Z$ boson decays and found no significant excess in data over background expectations, with the ATLAS [25,26] and CMS [27,28] collaborations setting limits on wino masses of up to 580 GeV and 650 GeV, respectively. Searches have also been performed for trilepton resonances from heavy leptons in type-III seesaw scenarios by the ATLAS [29,30] and CMS [31] collaborations, but none have attempted to fully reconstruct both decay chains of the charginos and neutralinos. A previous search by ATLAS [32] for events from the $B-L$ RPV model targeted by this analysis focused on the pair production of top squarks [33].
A brief overview of the ATLAS detector is given in Section 2, and a description of the dataset and the MC simulation is presented in Section 3. Details of the reconstruction of the events used in the search are presented in Section 4, and the design of signal regions sensitive to the $B-L$ RPV model is discussed in Section 5. The description of the SM backgrounds and the strategy for their estimation are given in Section 6, followed by an explanation of the systematic uncertainties in Section 7. The results of the search and their interpretation for various $B-L$ RPV model scenarios are presented in Section 8, and the conclusions are given in Section 9.

1 ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the center of the detector and the $z$-axis along the beam pipe. The $x$-axis points from the IP to the center of the LHC ring, and the $y$-axis points upwards. Cylindrical coordinates $(r, \phi)$ are used in the transverse plane, $\phi$ being the azimuthal angle around the $z$-axis. The pseudorapidity is defined in terms of the polar angle $\theta$ as $\eta = -\ln\tan(\theta/2)$, and the rapidity is defined as $y = (1/2)\ln[(E + p_z)/(E - p_z)]$, where $E$ is energy and $p_z$ is longitudinal momentum. Angular distance is measured in units of $\Delta R \equiv \sqrt{(\Delta y)^2 + (\Delta\phi)^2}$.

The $WZ$, $ZZ$, and $t\bar{t}Z$ backgrounds are estimated from MC simulation that is normalized to data in dedicated control regions, as described in Section 6.1. The contribution from events with one or more misidentified or nonprompt (fake) leptons is separately predicted using a data-driven method described in Section 6.2.
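As an illustrative sketch, the coordinate-system quantities defined in the footnote can be written directly in code (the function names are ours, not ATLAS software):

```python
import math

def pseudorapidity(theta):
    """eta = -ln tan(theta / 2), with theta the polar angle."""
    return -math.log(math.tan(theta / 2))

def rapidity(E, pz):
    """y = (1/2) ln[(E + pz) / (E - pz)] from energy and longitudinal momentum."""
    return 0.5 * math.log((E + pz) / (E - pz))

def delta_R(y1, phi1, y2, phi2):
    """Angular distance, with the azimuthal difference wrapped into [-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(y1 - y2, dphi)

# For a massless particle (E = |p|), rapidity equals pseudorapidity.
theta, p = 1.0, 40.0
assert abs(rapidity(p, p * math.cos(theta)) - pseudorapidity(theta)) < 1e-9
```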
Diboson, triboson, and $Z$+jets samples [40,41] were simulated using the Sherpa 2.2 [42] generator. Triboson and most diboson processes were simulated with Sherpa 2.2.2, while $Z$+jets and semileptonically decaying diboson processes were simulated with Sherpa 2.2.1. The matrix element calculations were matched to the parton shower (PS) simulation using Catani-Seymour dipole factorization [43,44]. The matching was performed separately for different jet multiplicities and merged into an inclusive sample using an improved Catani-Krauss-Kuhn-Webber (CKKW) matching procedure [45,46] extended to next-to-leading-order (NLO) accuracy in QCD using the MEPS@NLO prescription [45][46][47][48]. The virtual QCD corrections for matrix elements at NLO accuracy were provided by the OpenLoops library [49,50]. The NNPDF3.0NNLO [51] set of parton distribution functions (PDFs) was used together with a dedicated set of tuned PS parameters (tune) developed by the Sherpa authors [44].
The $Z$+jets (diboson) samples were calculated for up to two (one) additional partons at NLO and up to four (three) additional partons at leading order (LO) in QCD, and the triboson samples were calculated at NLO in QCD for the inclusive processes and at LO in QCD for up to two additional parton emissions. The diboson samples include loop-induced and electroweak production. The diboson and triboson samples do not include Higgs boson contributions. The cross sections calculated by the event generators were used for all samples except for $Z$+jets, which was normalized to a next-to-next-to-leading-order (NNLO) cross-section prediction [52].
The $t\bar{t}$ [53], $t\bar{t}H$ [54], and $tW$ [55] process samples were simulated at NLO in QCD using the Powheg-Box [56][57][58] v2 generator and the NNPDF3.0NLO PDF set. The matrix element calculations were interfaced with Pythia 8.230 [59] for the PS using the A14 tune [60] and the NNPDF2.3LO PDF set [61]. The $h_{\mathrm{damp}}$ parameter was set to be 1.5 times larger than the top-quark mass following optimization studies using data [62]. The $t\bar{t}$ inclusive production cross section was corrected to the theory prediction calculated at NNLO in QCD including the resummation of next-to-next-to-leading-logarithmic (NNLL) soft-gluon terms calculated with Top++ 2.0 [63]. The $tW$ inclusive production cross section was corrected to the theory prediction at NLO in QCD with NNLL corrections to the soft-gluon terms [64,65]. Both the $t\bar{t}$ and $tW$ samples were generated in the five-flavor scheme, setting all quark masses to zero except for the top quark.
The diagram-removal strategy [66] was employed in the $tW$ sample to remove the interference with $t\bar{t}$ production [62].
Other top-quark production processes were simulated with the MadGraph5_aMC@NLO v2 [67] generator at either NLO in QCD with the NNPDF3.0NLO PDF set or at LO in QCD using the NNPDF2.3LO PDF set. They were interfaced with Pythia 8 using the A14 tune and the NNPDF2.3LO PDF set. Generator versions MadGraph5_aMC@NLO v2.3 and Pythia 8.212 were used for part of these processes, while versions MadGraph5_aMC@NLO v2.2 and Pythia 8.186 were used for the remainder, including four-top production. Most of these top-quark processes were generated at LO in QCD, with a subset generated at NLO in QCD.
Higgs boson production via gluon-gluon fusion (ggF) was simulated at NNLO accuracy in QCD using the Powheg-Box v2 NNLOPS program [68] and interfaced with Pythia 8.212 using the AZNLO tune [69] and the PDF4LHC15 NNLO PDF set [70]. The MC prediction was normalized to the next-to-next-to-next-to-leading-order (N$^3$LO) cross section in QCD plus electroweak corrections at NLO [71,72].
Higgs boson production via vector-boson fusion (VBF) and Higgs boson production in association with a $W$ or $Z$ boson ($VH$) were generated using Powheg-Box v2 and interfaced with Pythia 8.212 using the AZNLO tune and the CTEQ6L1 [73] PDF set. The Powheg-Box predictions are accurate to NLO in QCD and were tuned to match calculations including effects due to finite heavy-quark masses and soft-gluon resummations up to NNLL. The MC predictions were normalized to NNLO QCD cross-section calculations with NLO electroweak corrections [74][75][76][77].
The $B-L$ RPV $\tilde{\chi}^\pm_1\tilde{\chi}^\mp_1$ and $\tilde{\chi}^\pm_1\tilde{\chi}^0_1$ signal samples were produced using MadGraph5_aMC@NLO v2.6 and the NNPDF2.3LO PDF set with up to two additional partons calculated at LO in QCD, and were interfaced with Pythia 8.230 using the A14 tune and the NNPDF2.3LO PDF set. The scale parameter for jet-parton CKKW-L matching was set to a quarter of the $\tilde{\chi}^\pm_1/\tilde{\chi}^0_1$ mass. Samples were generated at masses between 100 GeV and 1500 GeV in steps of 50 GeV. Signals with masses below 100 GeV were not explored as they have been excluded by previous three-lepton searches for charginos and neutralinos [25-31].
Signal events were generated with equal $\tilde{\chi}^\pm_1/\tilde{\chi}^0_1$ branching fractions to each boson ($W$, $Z$, or Higgs boson where kinematically accessible) plus charged-lepton ($e$, $\mu$, or $\tau$-lepton) channel. In order to explore different assumptions for the $\tilde{\chi}^\pm_1/\tilde{\chi}^0_1$ branching fractions in the analysis, simulated events are reweighted appropriately, assuming that the $\tilde{\chi}^\pm_1$ and $\tilde{\chi}^0_1$ branching fractions change in the same way. Generated signal events were filtered to have at least three leptons, two of which were associated with a $Z$ boson. Hadronically decaying $\tau$-leptons were not considered by this three-lepton filter for the $\tilde{\chi}^\pm_1\tilde{\chi}^0_1$ events, increasing the useful statistics of the MC sample. The $\tilde{\chi}^\pm_1$ were also required to decay via a $Z$ boson in the $\tilde{\chi}^\pm_1\tilde{\chi}^0_1$ events to increase the number of events with a trilepton resonance. The inclusive production cross sections were calculated assuming mass-degenerate, wino-like $\tilde{\chi}^\pm_1$ and $\tilde{\chi}^0_1$, as predicted by the $B-L$ RPV model [23], and were calculated at NLO in QCD with next-to-leading-logarithmic (NLL) corrections to the soft-gluon terms [78][79][80][81][82]. The cross sections and their uncertainties were derived from an envelope of cross-section predictions using different PDF sets and factorization and renormalization scales [83]. The inclusive cross sections for $\tilde{\chi}^\pm_1\tilde{\chi}^\mp_1$ ($\tilde{\chi}^\pm_1\tilde{\chi}^0_1$) production at a center-of-mass energy of $\sqrt{s}$ = 13 TeV range from 11.6 ± 0.5 (22.7 ± 1.0) pb for masses of 100 GeV to 0.040 ± 0.006 (0.080 ± 0.013) fb for masses of 1500 GeV.
The modeling of $b$- and $c$-hadron decays in samples generated with Powheg-Box or MadGraph5_aMC@NLO was performed with EvtGen 1.2.0 [84]. Events from all generators were propagated through a full simulation of the ATLAS detector [85] using Geant4 [86] to model the interactions of particles with the detector. A parameterized simulation of the ATLAS calorimeter [85] was used for faster detector simulation of the signal, $Z$+jets, and $t\bar{t}$ processes and was found to be in agreement with the full simulation. The effect of multiple interactions in the same and neighboring bunch crossings (pileup) was modeled by overlaying simulated minimum-bias collisions onto each hard-scattering event. The minimum-bias events were generated with Pythia 8.210 using the A3 tune [87] and the NNPDF2.3LO PDF set. For each simulated hard-scatter process a separate MC sample is generated to reflect the conditions of the 2015+2016, the 2017, and the 2018 datasets. The number of overlaid minimum-bias collisions is sampled for each event according to the distribution of the average number of interactions per bunch crossing measured in that dataset.

Event reconstruction
The data events used in the analysis were recorded during stable beam conditions at the LHC and were required to meet data-quality criteria. Events were collected with triggers requiring at least a single electron or a single muon reconstructed by the trigger system, with various lepton-$p_T$ thresholds depending upon the relative quality (including isolation) of the trigger-level leptons [37]. In the analysis, tighter quality and $p_T$ requirements are applied to the fully reconstructed signal leptons, as described below, to ensure the event selection is free from bias in the trigger reconstruction. Each event for which the trigger was activated is required to have at least one electron (muon) with a fully calibrated $p_T$ above 27, 61, or 141 GeV (27.3 or 52.5 GeV), with larger-$p_T$ requirements corresponding to reduced lepton-quality requirements of the trigger. For the 2015 data, the $p_T$ requirement of the analysis for the loosest-quality electron trigger is lowered to 121 GeV. The single-lepton triggers are found to be more than 90% efficient for the signal model with a mass of 100 GeV and more than 99% efficient for signal models with masses of 300 GeV or higher.
Both the data and MC events are required to have at least one reconstructed vertex that is associated with two or more tracks of transverse momentum $p_T$ > 500 MeV. The primary vertex of each event is selected as the vertex with the largest $\Sigma p_T^2$ of associated tracks [88]. The primary objects considered by this analysis are electrons, muons, and jets. Electron candidates are reconstructed from three-dimensional energy clusters in the electromagnetic calorimeter that are matched to an ID track and calibrated in situ using $Z \to ee$ decays [89]. Muon candidates are typically reconstructed from a combined fit of tracks formed in the MS and ID and calibrated in situ using $Z \to \mu\mu$ and $J/\psi \to \mu\mu$ decays [90]. Jet candidates are reconstructed from three-dimensional energy clusters formed using both the electromagnetic and hadronic calorimeters [91]. Clusters are grouped using the anti-$k_t$ algorithm [92,93] with a radius parameter $R$ = 0.4. The jet energy scale (JES) and resolution (JER) are first corrected to particle level using MC simulation and then calibrated in situ through $Z$+jets, $\gamma$+jets, and multijet measurements [94].
Two levels of selection criteria are defined for leptons and jets: the looser "baseline" criteria and the tighter "signal" criteria. Baseline objects are used for resolving ambiguities between overlapping objects, for calculating the missing transverse momentum ($\mathbf{p}_T^{\mathrm{miss}}$) of an event, and as inputs to the data-driven estimation of fake-lepton events. Baseline electrons are required to meet the "loose and B-layer likelihood" quality criteria [89], satisfy $p_T$ > 10 GeV, and be within the ID acceptance ($|\eta| < 2.47$) but outside the barrel/endcap transition region of the electromagnetic calorimeter (1.37 < $|\eta|$ < 1.52). Baseline muons are required to meet the "medium" quality criteria [90], satisfy $p_T$ > 10 GeV, and fall within the MS acceptance ($|\eta| < 2.7$). Each baseline electron or muon is also required to have a trajectory consistent with the primary vertex to suppress pileup. For this purpose, the transverse impact parameter ($d_0$) of a lepton is defined as the distance in the transverse plane between the beam-line and the closest point of the associated ID track. The longitudinal impact parameter ($z_0$) then corresponds to the $z$-coordinate distance between that point and the primary vertex. A selection of $|z_0 \sin\theta|$ < 0.5 mm, where $\theta$ is the polar angle of the track, is required for each lepton to ensure it is compatible with the primary vertex.
Baseline jets are required to satisfy $p_T$ > 20 GeV and fall within the full calorimeter acceptance ($|\eta| < 4.5$). The identification of baseline jets containing $b$-hadrons ($b$-jets) is performed using the MV2 multivariate discriminant built using information from track impact parameters, the presence of displaced secondary vertices, and the reconstructed flight paths of $b$- and $c$-hadrons inside the jet [95]. The identification criteria are tuned to an average identification efficiency of 85% as obtained for $b$-jets in simulated $t\bar{t}$ events, corresponding to rejection factors of 25, 2.7, and 6.1 for jets originating from light quarks and gluons, $c$-quarks, and $\tau$-leptons, respectively.
While photons are not used directly in the analysis, baseline photons are defined for use in the calculation of $\mathbf{p}_T^{\mathrm{miss}}$. Baseline photons are required to meet the "tight" quality criteria [89], satisfy $p_T$ > 25 GeV, and fall within the ID acceptance ($|\eta| < 2.37$) but outside the calorimeter's transition region (1.37 < $|\eta|$ < 1.52).
To aid in the correct reconstruction and identification of leptons and jets, an overlap-removal procedure is performed, preventing the reconstruction of a single particle as multiple objects. First, any electron that shares an ID track with a muon is removed, as the track is consistent with track segments in the MS. Next, jets are removed if they are within $\Delta R$ = 0.2 of a lepton and are either not $b$-tagged or satisfy $p_T$ > 100 GeV, as they are consistent with the energy deposited by an electron shower or muon bremsstrahlung. For the overlap of a jet with a nearby muon, the jet is discarded only if it is associated with fewer than three tracks of $p_T$ ≥ 500 MeV. Finally, electrons and muons within $\Delta R$ = 0.4 of any remaining jets are discarded to reject fake leptons originating from hadron decays. In the overlap-removal procedure the calculation of $\Delta R$ uses rapidity instead of $\eta$ to ensure the distance measurement is Lorentz invariant for jets with non-negligible masses.
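The sequence above can be sketched as follows. This is a simplified illustration with dict-based objects (the muon track-multiplicity condition is omitted for brevity), not the analysis implementation:

```python
import math

def delta_R(a, b):
    """Rapidity-based angular distance, as used in the overlap removal."""
    dphi = (a["phi"] - b["phi"] + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(a["y"] - b["y"], dphi)

def overlap_removal(electrons, muons, jets):
    # Step 1: drop electrons that share an ID track with a muon.
    mu_tracks = {m["track"] for m in muons}
    electrons = [e for e in electrons if e["track"] not in mu_tracks]
    leptons = electrons + muons
    # Step 2: drop jets within dR = 0.2 of a lepton if the jet is
    # either not b-tagged or has pT > 100 GeV.
    jets = [j for j in jets
            if not any(delta_R(j, l) < 0.2 and (not j["btag"] or j["pt"] > 100.0)
                       for l in leptons)]
    # Step 3: drop leptons within dR = 0.4 of any surviving jet.
    leptons = [l for l in leptons if all(delta_R(l, j) >= 0.4 for j in jets)]
    return leptons, jets

# A soft, untagged jet overlapping the electron is removed; the electron survives.
ele = {"y": 0.0, "phi": 0.0, "pt": 30.0, "track": 1}
jet = {"y": 0.1, "phi": 0.0, "pt": 50.0, "btag": False}
leps, jets = overlap_removal([ele], [], [jet])
assert jets == [] and leps == [ele]
```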
The $\mathbf{p}_T^{\mathrm{miss}}$ of each event, with magnitude $E_T^{\mathrm{miss}}$, is defined as the negative vector sum of the transverse momenta of all identified baseline objects (electrons, muons, jets, and photons) and an additional soft term [96]. The soft term is constructed from all tracks associated with the primary vertex that are not associated with any baseline object. The $\mathbf{p}_T^{\mathrm{miss}}$ therefore includes the full calibration of the reconstructed baseline objects while minimizing any pileup dependence in the soft term.
Tighter "signal" criteria are applied to the final leptons and jets considered by the analysis to ensure a high selection purity and an accurate $p_T$ measurement. Any event with a baseline lepton that fails to satisfy the signal criteria is rejected to reduce the contamination from fake-lepton events. Signal leptons are required to have $p_T$ > 12 GeV, and electrons must meet the "medium" quality criteria [89]. At least one of the signal leptons must be identified as having activated a trigger and must pass the larger $p_T$ requirement of that trigger. The track associated with each signal electron or muon must pass a requirement on $d_0$ and its uncertainty $\sigma_{d_0}$ such that $|d_0/\sigma_{d_0}|$ < 5 (3) for electrons (muons), ensuring the selection of leptons with prompt, well-reconstructed tracks. Finally, signal leptons must be sufficiently isolated from additional detector activity by passing a $p_T$-dependent "tight" requirement on both calorimeter-based and track-based isolation variables [89,90]. The calorimeter-based isolation is defined within a cone of size $\Delta R$ = 0.2 around the lepton, and the amount of nonassociated calorimeter transverse energy within the cone must be below 6% (15%) of the electron (muon) $p_T$. The track-based isolation cone size is $\Delta R$ = 0.2 for low-$p_T$ electrons and decreases with $p_T$ above 50 GeV as the electron's shower becomes more collimated. For muons, the size of the track-isolation cone is $\Delta R$ = 0.3 for $p_T$ ≤ 33 GeV and decreases with $p_T$ to $\Delta R$ = 0.2 at $p_T$ = 50 GeV, improving the selection efficiency for higher-$p_T$ muons.
The track-based isolation only considers nonassociated tracks that are consistent with the primary vertex, and the scalar sum of track $p_T$ ($p_T^{\mathrm{iso}}$) is required to be below 6% (4%) of the electron (muon) $p_T$. The lepton cone-$p_T$ is then defined as the scalar sum of the lepton $p_T$ and $p_T^{\mathrm{iso}}$, and is useful in parameterizing the behavior of fake leptons. The muon "tight" isolation requirement is roughly 96% efficient for all $p_T$ and $\eta$ [90], while for electrons it is 70% efficient at 20 GeV and becomes more than 98% efficient above 100 GeV [89].
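The quoted cone-size endpoints are consistent with a variable-radius cone of the form $\Delta R = \min(R_{\max}, 10\,\mathrm{GeV}/p_T)$; the following sketch assumes that functional form, which the text above only describes qualitatively:

```python
def track_iso_cone_size(pt_gev, is_muon):
    """Track-isolation cone size: dR = min(R_max, 10 GeV / pT), with
    R_max = 0.3 for muons and 0.2 for electrons (assumed functional form)."""
    r_max = 0.3 if is_muon else 0.2
    return min(r_max, 10.0 / pt_gev)

assert track_iso_cone_size(20.0, is_muon=False) == 0.2   # electron plateau
assert track_iso_cone_size(33.0, is_muon=True) == 0.3    # muon plateau ends near 33 GeV
assert abs(track_iso_cone_size(50.0, is_muon=True) - 0.2) < 1e-12  # shrunk to 0.2
```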
Signal jets are required to have $|\eta| < 2.8$, and events are rejected if they contain a jet that fails to meet the "loose" quality criteria [97], reducing contamination from electronic noise bursts and noncollision backgrounds. To suppress jets originating from pileup, jets with $p_T$ < 120 GeV and within the ID acceptance ($|\eta| < 2.5$) are required to pass the "medium" working point of the track-based jet vertex tagger [98,99]. All MC samples are corrected using per-event weights to account for small differences with respect to data in the signal-lepton identification, reconstruction, isolation, and triggering efficiencies [89,90], as well as in the signal-jet pileup rejection [98] and flavor-identification efficiencies [95].

Search strategy
The $B-L$ RPV model allows for many different decay modes of the $\tilde{\chi}^\pm_1/\tilde{\chi}^0_1$ and therefore many possible final states. A decay of particular interest is $\tilde{\chi}^\pm_1 \to Z\ell \to \ell\ell\ell$ because of the large number of leptons produced from a single resonance. The invariant-mass distribution of the trilepton resonance ($m_{\ell\ell\ell}$) is narrow due to the excellent momentum resolution of reconstructed electrons and muons. No SM process naturally produces a three-lepton resonance, leading to a smooth combinatorial background distribution in which a resonance would be distinguishable.
Three orthogonal signal regions (SRs) are developed in Section 5.1 to select $\tilde{\chi}^\pm_1\tilde{\chi}^\mp_1$ and $\tilde{\chi}^\pm_1\tilde{\chi}^0_1$ events with at least one $\tilde{\chi}^\pm_1 \to Z\ell \to \ell\ell\ell$ decay. Each SR targets different decay scenarios of the second $\tilde{\chi}^\pm_1/\tilde{\chi}^0_1$ through requirements on the number of leptons and reconstructed $W$, $Z$, or Higgs bosons. Matching procedures are developed for events with additional leptons or boson candidates to optimally assign the decay products to each $\tilde{\chi}^\pm_1/\tilde{\chi}^0_1$, as described in Section 5.2. The SRs utilize event-wide information to reduce combinatorial backgrounds, as described in Section 5.3.

Signal regions targeting trilepton decays
Each SR requires at least three signal leptons, two of which are identified as candidate $Z$ boson decay products if they have the same flavor and opposite-sign electric charges (SFOS) and an invariant mass $m_{\ell\ell}$ within 10 GeV of the $Z$ boson mass. For events with more than one SFOS pair, the pair with $m_{\ell\ell}$ closest to the $Z$ mass is chosen. The $m_{\ell\ell\ell}$ of the $\tilde{\chi}^\pm_1$ is then reconstructed from the chosen SFOS pair and a third lepton. Deviations of $m_{\ell\ell}$ from the expected $Z$ boson mass of 91.2 GeV can occur due to the imperfect energy reconstruction of leptons, particularly at high $p_T$. The $m_{\ell\ell\ell}$ resolution is therefore improved by shifting the value of $m_{\ell\ell\ell}$ by an amount equal to (91.2 − $m_{\ell\ell}$) GeV.
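The pair choice and mass shift described above can be sketched as follows. This is a simplified illustration; in particular, the third-lepton assignment (covered in Section 5.2) is replaced here by an arbitrary highest-energy choice:

```python
import itertools, math

MZ = 91.2  # GeV

def inv_mass(*p4s):
    """Invariant mass of summed four-momenta (E, px, py, pz)."""
    E, px, py, pz = (sum(p[i] for p in p4s) for i in range(4))
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

def trilepton_mass(leptons):
    """leptons: dicts with 'flavor', 'charge', and 'p4' = (E, px, py, pz).
    Returns the shifted m_lll, or None if no Z candidate is found."""
    # SFOS pairs with m_ll within 10 GeV of the Z mass; the closest wins.
    pairs = [(a, b) for a, b in itertools.combinations(leptons, 2)
             if a["flavor"] == b["flavor"] and a["charge"] == -b["charge"]
             and abs(inv_mass(a["p4"], b["p4"]) - MZ) < 10.0]
    if not pairs:
        return None
    a, b = min(pairs, key=lambda p: abs(inv_mass(p[0]["p4"], p[1]["p4"]) - MZ))
    third = max((l for l in leptons if l is not a and l is not b),
                key=lambda l: l["p4"][0])  # illustrative third-lepton choice
    m_ll = inv_mass(a["p4"], b["p4"])
    # Shift m_lll by (91.2 - m_ll) GeV to improve the resolution.
    return inv_mass(a["p4"], b["p4"], third["p4"]) + (MZ - m_ll)

# Z -> e+e- back-to-back in the transverse plane plus a 50 GeV muon.
evt = [{"flavor": "e", "charge": +1, "p4": (45.6, 45.6, 0.0, 0.0)},
       {"flavor": "e", "charge": -1, "p4": (45.6, -45.6, 0.0, 0.0)},
       {"flavor": "mu", "charge": +1, "p4": (50.0, 0.0, 50.0, 0.0)}]
assert abs(trilepton_mass(evt) - 132.05) < 0.1
```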
Events are separated into the three SRs according to the number of leptons and the presence of a second reconstructed $W$, $Z$, or Higgs boson from the second $\tilde{\chi}^\pm_1/\tilde{\chi}^0_1$ decay. The SRFR region targets events where all decay products are visible and "fully reconstructed". The SR4ℓ region targets events with four or more leptons and possible $E_T^{\mathrm{miss}}$, while the SR3ℓ region targets events with only three visible leptons and substantial $E_T^{\mathrm{miss}}$, with at least one neutrino coming from the decay of the second $\tilde{\chi}^\pm_1/\tilde{\chi}^0_1$. The assignment of an event to an SR is described below and summarized in Figure 2. Additional selections to reduce the SM background contributions are subsequently applied in each of the SRs separately, as described in Section 5.3.
To target fully visible events, SRFR requires a fourth lepton and a second reconstructed $W$, $Z$, or Higgs boson. Pairs of jets are considered for the second boson if their invariant mass $m_{jj}$ is consistent with that of a $W$ or $Z$ boson, with 71.2 GeV < $m_{jj}$ < 111.2 GeV. If at least one of the jets is a $b$-jet, the requirement is loosened to 71.2 GeV < $m_{jj}$ < 150 GeV to allow for Higgs boson decays. Additional SFOS lepton pairs are also considered for the second boson candidate in events with six or more leptons if their invariant mass is consistent with the $Z$ boson mass, such that 81.2 GeV < $m_{\ell\ell}$ < 101.2 GeV. If there are multiple candidates for the second boson, the pairing selected is the one with invariant mass closest to the $Z$ boson mass, or closest to the Higgs boson mass for pairs that include at least one $b$-jet.
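The window logic above can be summarized in a short sketch; the mass windows come from the text, while the 125 GeV Higgs target mass used in the tie-break is our assumption:

```python
MZ, MH = 91.2, 125.0  # GeV; MH as the Higgs tie-break target is assumed

def in_second_boson_window(mass, has_bjet):
    """W/Z window 71.2-111.2 GeV, widened to 150 GeV when a b-jet is present."""
    return 71.2 < mass < (150.0 if has_bjet else 111.2)

def pick_second_boson(candidates):
    """candidates: list of (mass, has_bjet) pairs for jet or lepton pairings.
    Returns the candidate closest to the Z mass (Higgs mass if b-tagged)."""
    inside = [c for c in candidates if in_second_boson_window(*c)]
    if not inside:
        return None
    return min(inside, key=lambda c: abs(c[0] - (MH if c[1] else MZ)))

# An untagged pair at 95 GeV beats a b-tagged pair at 130 GeV here.
assert pick_second_boson([(95.0, False), (130.0, True)]) == (95.0, False)
```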
The SR4ℓ region targets events in which the decay of the second $\tilde{\chi}^\pm_1/\tilde{\chi}^0_1$ includes one or more leptons but is not fully reconstructed due to the presence of neutrinos. Events with four or more leptons that fail all SRFR requirements are selected by SR4ℓ. The SR3ℓ region targets decays of the second $\tilde{\chi}^\pm_1/\tilde{\chi}^0_1$ that include no leptons, requiring exactly three leptons in the event. While each region targets specific $\tilde{\chi}^\pm_1/\tilde{\chi}^0_1$ decay chains, events in which one or more leptons fall outside the detector acceptance or are not reconstructed may still be selected by other regions. For the signal sample with a mass of 500 GeV and democratic $\tilde{\chi}^\pm_1/\tilde{\chi}^0_1$ branching fractions to bosons and leptons, the SRFR, SR4ℓ, and SR3ℓ regions have selection efficiencies of 3%, 4%, and 5%, respectively.
Within each SR the search is performed in the $m_{\ell\ell\ell}$ spectrum to maximize the discovery sensitivity to a resonance. The binning of the $m_{\ell\ell\ell}$ observable was optimized using simulated $\tilde{\chi}^\pm_1\tilde{\chi}^\mp_1$ and $\tilde{\chi}^\pm_1\tilde{\chi}^0_1$ signal samples, which have reconstructed invariant-mass resolutions of around 2%, as measured from the widths of Gaussian fits to the reconstructed invariant-mass distributions. The optimized binning also accounts for the predicted background expectation. The last bin has no upper edge and includes all events with $m_{\ell\ell\ell}$ > 580 GeV. The same binning is used for all three SRs, facilitating the discovery of a trilepton resonance that would contribute to all SRs.

Assignment of leptons and boson candidates to˜± 1 /˜0 1 decays
The presence of one or more additional leptons from the second $\tilde{\chi}^\pm_1/\tilde{\chi}^0_1$ decay introduces ambiguity in the assignment of a lepton and boson produced directly from a $\tilde{\chi}^\pm_1/\tilde{\chi}^0_1$ decay. A matching procedure is implemented to identify the "direct" leptons that come directly from the $\tilde{\chi}^\pm_1/\tilde{\chi}^0_1$ decays, rather than from the subsequent decay of a boson, and to assign them to each $\tilde{\chi}^\pm_1/\tilde{\chi}^0_1$. The procedure optimizes the sensitivity to signals of various masses by maintaining a high efficiency for the correct assignments while reducing the contamination from SM processes. In SRFR, both the trilepton decay and the fully visible decay of the second $\tilde{\chi}^\pm_1/\tilde{\chi}^0_1$, with reconstructed mass $m_{\tilde{\chi},2}$, are chosen as the groupings that minimize the mass asymmetry $m^{\mathrm{asym}}_{\ell\ell\ell}$ between the mass-degenerate $\tilde{\chi}^\pm_1\tilde{\chi}^\mp_1$ or $\tilde{\chi}^\pm_1\tilde{\chi}^0_1$ pair, where $m^{\mathrm{asym}}_{\ell\ell\ell}$ is defined as

$m^{\mathrm{asym}}_{\ell\ell\ell} = \dfrac{|m_{\ell\ell\ell} - m_{\tilde{\chi},2}|}{m_{\ell\ell\ell} + m_{\tilde{\chi},2}}. \qquad (2)$

The matching efficiency for the signal samples is 60% at 100 GeV and 80% or more for masses of 200 GeV and larger.
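A minimal sketch of the asymmetry-minimizing grouping choice, with the candidate mass pairs taken as precomputed inputs:

```python
def mass_asymmetry(m_lll, m_chi2):
    """m_asym = |m_lll - m_chi2| / (m_lll + m_chi2), per Eq. (2)."""
    return abs(m_lll - m_chi2) / (m_lll + m_chi2)

def best_grouping(groupings):
    """groupings: list of (m_lll, m_chi2) pairs, one per allowed assignment
    of leptons and boson candidates; the most mass-symmetric pair wins."""
    return min(groupings, key=lambda g: mass_asymmetry(*g))

# A near-degenerate pairing is preferred over a lopsided one.
assert best_grouping([(400.0, 410.0), (400.0, 250.0)]) == (400.0, 410.0)
```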
The matching procedure assigning a direct lepton to the $Z$ candidate in all other analysis regions with four or more leptons is developed to optimize the sensitivity of the SR4ℓ region. Two methods are implemented, and the choice of method exploits the correlation between the true mass of the $\tilde{\chi}^\pm_1/\tilde{\chi}^0_1$ and $H_T$, the scalar sum of the $p_T$ of all leptons in the event. A method targeting low-mass signals is used when $H_T$ < 550 GeV, and a method targeting high-mass signals is used when $H_T$ ≥ 550 GeV. For low-mass signals, the $\tilde{\chi}^\pm_1/\tilde{\chi}^0_1$ is often produced with a sufficiently large momentum that its decay products are near to one another, and the lepton that is closest in angular distance $\Delta R$ to the reconstructed $Z$ boson is chosen. For high-mass signals, the $\tilde{\chi}^\pm_1/\tilde{\chi}^0_1$ decay products are often produced at wide angles with respect to each other, and mispairings will produce an $m_{\ell\ell\ell}$ that is smaller than the $\tilde{\chi}^\pm_1/\tilde{\chi}^0_1$ mass. Therefore, the lepton that maximizes the reconstructed $m_{\ell\ell\ell}$ is chosen. The matching efficiency of this procedure for signal samples with various $\tilde{\chi}^\pm_1/\tilde{\chi}^0_1$ masses is 90% at 100 GeV, 30% at 300 GeV, and 70% at 700 GeV. While a low matching efficiency is seen at 300 GeV due to the use of $\Delta R$ matching where the $m_{\ell\ell\ell}$ maximization would be preferred, the overall analysis sensitivity is improved by avoiding the $m_{\ell\ell\ell}$ maximization for low-$H_T$ backgrounds.
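The two-method choice can be sketched as follows, with the per-candidate $\Delta R$ and $m_{\ell\ell\ell}$ values taken as precomputed inputs (an illustration, not the analysis code):

```python
def assign_direct_lepton(candidates, ht):
    """candidates: list of dicts with 'dR' (distance to the Z candidate) and
    'm_lll' (trilepton mass if this lepton is chosen); ht: scalar sum of
    lepton pT in GeV. Returns the chosen direct-lepton candidate."""
    if ht < 550.0:
        # Low-mass regime: a boosted chi keeps its decay products close,
        # so take the lepton nearest the reconstructed Z.
        return min(candidates, key=lambda c: c["dR"])
    # High-mass regime: mispairings bias m_lll low, so maximize it.
    return max(candidates, key=lambda c: c["m_lll"])

cands = [{"dR": 0.4, "m_lll": 280.0}, {"dR": 1.8, "m_lll": 640.0}]
assert assign_direct_lepton(cands, ht=300.0)["m_lll"] == 280.0
assert assign_direct_lepton(cands, ht=700.0)["m_lll"] == 640.0
```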
As noted in Section 1, the preferred flavor of the direct lepton(s) is related to the neutrino mass hierarchy. The sensitivity to $\tilde{\chi}^{\pm}_{1}\tilde{\chi}^{\mp}_{1}$ and $\tilde{\chi}^{\pm}_{1}\tilde{\chi}^{0}_{1}$ events may therefore be improved by imposing constraints on the flavor of the direct lepton(s), targeting the favored signal decays while rejecting additional SM backgrounds. Two additional sets of SRs are developed that are each identical to the nominal set of three SRs except that they require the direct lepton(s) to be either electrons (SRFR$_{e}$, SR4ℓ$_{e}$, SR3ℓ$_{e}$) or muons (SRFR$_{\mu}$, SR4ℓ$_{\mu}$, SR3ℓ$_{\mu}$). These additional "$e$" and "$\mu$" channels are used separately from the "inclusive" channel and from one another, and are only used when targeting signal models with high $\tilde{\chi}^{\pm}_{1}/\tilde{\chi}^{0}_{1}$ branching fractions to either electrons or muons, as discussed in Section 8.2.

Rejection of combinatorial Standard Model backgrounds
The composition and kinematics of the final-state particles produced in the decay chains of the $\tilde{\chi}^{\pm}_{1}\tilde{\chi}^{\mp}_{1}$ or $\tilde{\chi}^{\pm}_{1}\tilde{\chi}^{0}_{1}$ processes can be combinatorially reproduced by certain SM processes. The $ZZ$ process makes a significant contribution to SRFR and SR4ℓ when both $Z$ bosons decay leptonically. Events from the $ZZ$ process are rejected if they have exactly four leptons that form two SFOS pairs and the mass $m_{\ell\ell,2}$ of the second pair, the pair not selected for the primary $\tilde{\chi}^{\pm}_{1}$ candidate, is within 20 GeV of the $Z$ boson mass.
In SR4ℓ, which targets decay chains of the second $\tilde{\chi}^{\pm}_{1}/\tilde{\chi}^{0}_{1}$ with at least one neutrino, the $ZZ$ contribution is further reduced by requiring $E_{\mathrm{T}}^{\mathrm{miss}} > 80$ GeV in events with a second same-flavor lepton pair.
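The two rejection criteria above can be expressed as simple predicates; this is a hypothetical sketch (event quantities in GeV, names illustrative only):

```python
M_Z = 91.2  # GeV, Z boson mass

def reject_zz(n_leptons, two_sfos_pairs, m_ll2):
    """ZZ veto: exactly four leptons forming two SFOS pairs, with the second
    pair's mass within 20 GeV of the Z boson mass."""
    return n_leptons == 4 and two_sfos_pairs and abs(m_ll2 - M_Z) < 20.0

def passes_sr4l_met(met, has_second_sf_pair):
    """SR4l refinement: require ETmiss > 80 GeV only when a second
    same-flavor lepton pair is present."""
    return met > 80.0 if has_second_sf_pair else True
```
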
The SM $t\bar{t}Z$ process can also contribute significantly in the SRs, and is identifiable by the presence of two $b$-jets from the two top-quark decays. Signal events that include a Higgs boson decay may also contain two $b$-jets, with a 72% efficiency for identifying both $b$-jets using the flavor-tagging algorithm described in Section 4. These $b$-jets will often be collimated because of the boost of the Higgs boson. Therefore, an additional selection is applied in all SRs that requires the two highest-$p_{\mathrm{T}}$ $b$-jets, if they are found in the event, to satisfy $\Delta R(b_{1}, b_{2}) < 1.5$.

The $ZZ$, $t\bar{t}Z$, and other SM backgrounds can be further reduced in SRFR by taking advantage of the fully visible decay of the second $\tilde{\chi}^{\pm}_{1}/\tilde{\chi}^{0}_{1}$. As the $\tilde{\chi}^{\pm}_{1}$ and $\tilde{\chi}^{0}_{1}$ are expected to be mass-degenerate, the $m^{\mathrm{asym}}_{3\ell}$ (Eq. (2)) between the $\tilde{\chi}^{\pm}_{1}\tilde{\chi}^{\mp}_{1}$ or $\tilde{\chi}^{\pm}_{1}\tilde{\chi}^{0}_{1}$ pair is expected to be small. A requirement of $m^{\mathrm{asym}}_{3\ell} < 0.1$ in SRFR is effective in rejecting combinatorial backgrounds, for which $m^{\mathrm{asym}}_{3\ell}$ is more evenly distributed.

Events in SR3ℓ are expected to exhibit a significant $E_{\mathrm{T}}^{\mathrm{miss}}$ because the second $\tilde{\chi}^{\pm}_{1}/\tilde{\chi}^{0}_{1}$ decays directly into a neutrino and a boson, while the subsequent decay of the boson may also produce neutrinos. A requirement of $E_{\mathrm{T}}^{\mathrm{miss}} > 150$ GeV reduces contamination from SM processes with no neutrinos, particularly $Z$+jets events that include a fake lepton. The SM $WZ$ process with fully leptonic decays is also a significant contributor to SR3ℓ, and contains a single neutrino from the $W$ decay. The measured $E_{\mathrm{T}}^{\mathrm{miss}}$ is therefore representative of the $p_{\mathrm{T}}$ of the neutrino, and the transverse mass $m_{\mathrm{T}}$ of the $W$ boson can be reconstructed from the $p_{\mathrm{T}}$ of the lepton and the azimuthal separation $\Delta\phi$ between the lepton and $\mathbf{p}_{\mathrm{T}}^{\mathrm{miss}}$, with

$$m_{\mathrm{T}} = \sqrt{2\, p_{\mathrm{T}}^{\ell}\, E_{\mathrm{T}}^{\mathrm{miss}}\, (1 - \cos\Delta\phi)}. \quad (3)$$

The $m_{\mathrm{T}}$ of a $W$ boson has a kinematic edge at the $W$ mass, and signal events in SR3ℓ usually produce lepton-$E_{\mathrm{T}}^{\mathrm{miss}}$ pairings with a larger $m_{\mathrm{T}}$. The minimum $m_{\mathrm{T}}$ of all lepton-$E_{\mathrm{T}}^{\mathrm{miss}}$ pairings for which the other two leptons form a SFOS pair, defined as $m_{\mathrm{T}}^{\mathrm{min}}$, is required to satisfy $m_{\mathrm{T}}^{\mathrm{min}} > 125$ GeV in SR3ℓ. This definition allows events to be rejected even if the incorrect SFOS pair was selected for the $Z$ boson.
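The transverse-mass construction and the minimization over allowed lepton-$E_{\mathrm{T}}^{\mathrm{miss}}$ pairings can be sketched as follows (an illustrative, hypothetical implementation; quantities in GeV and radians):

```python
import math

def transverse_mass(pt_lep, met, dphi):
    """mT = sqrt(2 pT ETmiss (1 - cos Δφ)); for on-shell W → lν decays this
    distribution has a kinematic edge at the W mass."""
    return math.sqrt(2.0 * pt_lep * met * (1.0 - math.cos(dphi)))

def mt_min(pairings):
    """Minimum mT over the allowed lepton-ETmiss pairings; `pairings` holds
    (pt_lep, met, dphi) tuples for each lepton whose two partners form an
    SFOS pair."""
    return min(transverse_mass(*p) for p in pairings)
```

A back-to-back lepton and $\mathbf{p}_{\mathrm{T}}^{\mathrm{miss}}$ with $p_{\mathrm{T}}^{\ell} = E_{\mathrm{T}}^{\mathrm{miss}} = 40$ GeV gives $m_{\mathrm{T}} = 80$ GeV, near the $W$-mass edge, which such a selection would reject.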

Background estimation and validation
The MC samples described in Section 3 are used to predict the expected background yield from SM processes. To improve the accuracy of the MC prediction in the unique phase-space of this analysis and to constrain the systematic uncertainties discussed in Section 7, the MC predictions are normalized in control regions (CRs). Each CR is dedicated to the measurement of an important SM process; the CRs are discussed in Section 6.1. A dedicated data-driven estimate is used for the fake-lepton background and is discussed in Section 6.2. A fit based on a profile likelihood test statistic [100] is performed on all CRs and SRs simultaneously using the HistFitter package [101] to determine the final post-fit background prediction and uncertainty.
The CRs are developed to be kinematically similar to the SRs but with a small number of selections inverted, reducing possible signal contamination and ensuring orthogonality between regions. Validation regions (VRs) between the CRs and SRs are developed to ensure the validity of the extrapolation of the yield normalization across the inverted selections and into the SRs. The regions are developed so that any possible signal contamination from $\tilde{\chi}^{\pm}_{1}/\tilde{\chi}^{0}_{1}$ with democratic branching fractions to bosons and leptons is typically less than 1% in each CR and less than 5% in each VR. This ensures an accurate estimation of the SM backgrounds and an unbiased validation. Any contamination from the signal model in the CRs is accounted for in the fit for completeness. All regions are required to have at least three leptons and one SFOS pair with $m_{\ell\ell}$ within 10 GeV of the $Z$ boson mass. The CRs and VRs are inclusive in $m_{3\ell}$ as this variable is seen to be well modeled by the MC simulation. A requirement of $m_{3\ell} > 90$ GeV is made in all regions, corresponding to the lowest $m_{3\ell}$ probed by the SRs. The selections for the various regions are discussed below and summarized in Table 2.

Table 2: Selection criteria for the various signal, control, and validation regions used in the analysis. All regions require a pair of leptons with the same flavor and opposite sign of their electric charge and with an invariant mass between 81.2 GeV and 101.2 GeV. Additionally, they require a third lepton and a trilepton invariant mass above 90 GeV. The 2nd boson requirement indicates the presence of two additional jets or leptons consistent with a $W$, $Z$, or Higgs boson decay. The asterisk (*) in the SR4ℓ $E_{\mathrm{T}}^{\mathrm{miss}}$ requirement indicates that this selection is only considered for events with two pairs of same-flavor leptons. The $\Delta R(b_{1}, b_{2})$ selection is only considered for events with at least two $b$-jets.

Primary backgrounds
The major SM backgrounds that are fitted in dedicated CRs are the $WZ$, $ZZ$, and $t\bar{t}Z$ processes. The yields of other SM processes are small and are therefore not normalized by the fit but taken directly from the MC prediction. These include the triboson, Higgs boson, and "Other" background categories, where Other consists almost completely of the $tWZ$, $t\bar{t}W$, and $tZ$ processes.
The $WZ$ process is dominant in the three-lepton SR3ℓ, and the CR$WZ$ control region is developed by inverting the $E_{\mathrm{T}}^{\mathrm{miss}}$ requirement and selecting events with $m_{\mathrm{T}}^{\mathrm{min}}$ consistent with the presence of a $W$ boson. This removes possible signal contamination from $\tilde{\chi}^{\pm}_{1}$ and $\tilde{\chi}^{0}_{1}$, which typically produce high $E_{\mathrm{T}}^{\mathrm{miss}}$ and $m_{\mathrm{T}}^{\mathrm{min}}$ in SR3ℓ due to one or more boosted neutrinos. Two VRs, VR$E_{\mathrm{T}}^{\mathrm{miss}}$ and VR$m_{\mathrm{T}}^{\mathrm{min}}$, are designed to test the validity of the normalization in SR3ℓ using similar $E_{\mathrm{T}}^{\mathrm{miss}}$ and $m_{\mathrm{T}}^{\mathrm{min}}$ requirements, respectively. Good data-MC agreement is seen in both VRs, and the $E_{\mathrm{T}}^{\mathrm{miss}}$ and $m_{\mathrm{T}}^{\mathrm{min}}$ distributions are shown for CR$WZ$, VR$E_{\mathrm{T}}^{\mathrm{miss}}$, and VR$m_{\mathrm{T}}^{\mathrm{min}}$ in Figure 3. These distributions have all region selections applied except for the variable shown, where the CR or VR selections are indicated by arrows. The exception is the $E_{\mathrm{T}}^{\mathrm{miss}}$ distribution in VR$E_{\mathrm{T}}^{\mathrm{miss}}$, which is shown with all region selections applied. The underflow of the $m_{\mathrm{T}}^{\mathrm{min}}$ distribution does not consider events with $m_{\mathrm{T}}^{\mathrm{min}}$ below 30 GeV.

The CR$ZZ$ control region targets the $ZZ$ process by requiring a second SFOS lepton pair consistent with a $Z$ boson. The VR$ZZ$ validation region has a similar selection, but requires $m_{\ell\ell,2}$ to be between 5 and 20 GeV of the $Z$ mass, falling naturally between the CR$ZZ$ requirement and the 20 GeV $m_{\ell\ell,2}$ veto of SR4ℓ and SRFR. The $m_{\ell\ell,2}$ distribution that includes both CR$ZZ$ and VR$ZZ$ is shown in Figure 3, and good agreement is seen between data and the post-fit background estimates. Events in which one $Z$ boson decays into a pair of $\tau$-leptons that both subsequently decay leptonically are included in this validation region. Good modeling in the three-lepton regions is also expected for such events when only one $\tau$-lepton decays leptonically, although this process is strongly suppressed by the $E_{\mathrm{T}}^{\mathrm{miss}}$ and $m_{\mathrm{T}}^{\mathrm{min}}$ requirements.

The control region CR$t\bar{t}Z$ targets the $t\bar{t}Z$ process in the SRs, for which the $Z$ boson decays leptonically and one or both top quarks decay leptonically, and requires at least two $b$-jets in the event.
The $\tilde{\chi}^{\pm}_{1}/\tilde{\chi}^{0}_{1}$ may also produce two $b$-jets through the decay of a Higgs boson, but because of the boost of the Higgs boson these are less often produced back-to-back. Therefore, the $b$-jets in CR$t\bar{t}Z$ are required to satisfy $\Delta R(b_{1}, b_{2}) > 2.5$, while the SRs require events with at least two $b$-jets to satisfy $\Delta R(b_{1}, b_{2}) < 1.5$. A requirement of $E_{\mathrm{T}}^{\mathrm{miss}} > 40$ GeV is also imposed to reduce the contamination from the $Z$+jets process. To increase the number of events in CR$t\bar{t}Z$, the lepton multiplicity requirement is relaxed to $N_{\ell} \geq 3$, allowing one top quark to decay fully hadronically. The presence or absence of a fourth lepton does not bias the other selections, as the ratio of three-lepton to four-lepton events in the $t\bar{t}Z$ sample is well modeled. The VR$t\bar{t}Z$ validation region is defined with the same selections but requires $1.5 < \Delta R(b_{1}, b_{2}) < 2.5$, falling naturally between CR$t\bar{t}Z$ and the SRs. The $\Delta R(b_{1}, b_{2})$ distribution for both CR$t\bar{t}Z$ and VR$t\bar{t}Z$ is shown in Figure 3.
To maintain orthogonality between the $t\bar{t}Z$ regions and the other CRs used in the fit, a requirement of $\Delta R(b_{1}, b_{2}) < 1.5$ is applied to all other analysis regions.
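The resulting partition of events with two $b$-jets by $\Delta R(b_{1}, b_{2})$ can be sketched as a simple classifier (an illustrative sketch with hypothetical names, not ATLAS code):

```python
def ttz_region(dr_bb):
    """Classify an event with two b-jets by ΔR(b1, b2): the collimated
    signal-like selection (< 1.5), the intermediate validation window
    (1.5 to 2.5), or the back-to-back control selection (> 2.5)."""
    if dr_bb < 1.5:
        return 'SR-like'
    if dr_bb <= 2.5:
        return 'VR'
    return 'CR'
```

The three windows are mutually exclusive by construction, which is what guarantees the orthogonality of the $t\bar{t}Z$ regions.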
The $m_{3\ell}$ distributions for the CRs and VRs are given in Figure 4. No significant shape disagreement is seen between data and MC simulation, validating the modeling of the backgrounds in $m_{3\ell}$. The normalization in CR$WZ$, CR$ZZ$, and CR$t\bar{t}Z$ is therefore performed inclusively in $m_{3\ell}$ to improve the statistical precision.
The observed event yields in the CRs and VRs are compared with the background estimates and are shown in Figure 5.

Backgrounds from fake leptons
Processes that include one or more fake leptons are estimated with the data-driven fake-factor method [103, 104], avoiding a reliance on MC simulation to model the rate at which fake leptons satisfy the prompt-lepton quality criteria. The modeling is also made difficult by the many sources of fake leptons, each of which is kinematically different and contributes to the background estimate in a proportion that depends on the analysis phase-space. The most relevant sources for this analysis are the in-flight decays of heavy-flavor hadrons (HF) and misidentified light-flavor jets or in-flight decays of pions and kaons (LF). Fake muons in this analysis are predominantly from HF sources, while fake electrons are produced from both HF and LF sources, with their relative contribution varying from 2:1 to 1:5 depending upon the analysis region. The pair production of two electrons from the conversion of a prompt photon (Conv) is also considered a fake-lepton process but makes a minor contribution. In this analysis the relevant fake processes (and their sources) are $Z$+jets (LF, HF) and $t\bar{t}$ (HF) in the three-lepton regions and $WZ$ (LF) and $ZZ$ (LF, Conv) in the four-lepton regions, with SRFR also having a large contribution from $t\bar{t}$ (HF).
Pair-produced electrons are not considered as fake leptons if they are produced from the conversion of bremsstrahlung from a prompt electron, such as one from a leptonically decaying $Z$ boson. Events with such electrons are not targeted by the fake-factor method but are instead taken directly from MC simulation, which is considered to model such processes adequately. These events are included in the Other category; they are a minor contribution in CR$ZZ$ and in the fake measurement and validation regions described below, and are negligible in all other regions.
A fake measurement region CRFake is designed to target the $Z$+jets process, providing a selection of events enhanced with fake leptons from sources representative of those expected in the SRs. The CRFake region is not directly included in the fit, but is used to derive the fake-lepton estimate in each analysis region. Events are selected by requiring two signal leptons that form an SFOS pair with an invariant mass within 10 GeV of the $Z$ boson mass. One of the two signal leptons is required to have fired a single-lepton trigger, ensuring no selection bias from fake leptons. To enhance the $Z$+jets purity and reduce prompt-lepton contamination from the $WZ$ process, CRFake requires $E_{\mathrm{T}}^{\mathrm{miss}} < 30$ GeV and $m_{\mathrm{T}} < 30$ GeV. A third, unpaired baseline lepton is also required in the event and is designated as the fake candidate. A requirement on the trilepton invariant mass of $m_{3\ell} > 105$ GeV reduces contamination from the $Z \to 4\ell$ process.

Figure 5: The observed data and the SM background expectation in the CRs (pre-fit) and VRs (post-fit). The "Other" category consists mostly of the $tWZ$, $t\bar{t}W$, and $tZ$ processes. The hatched bands indicate the combined theoretical, experimental, and MC statistical uncertainties in the background prediction. The bottom panel shows the fractional difference between the observed data and expected yields for the CRs and the significance of the difference for the VRs, computed following the profile likelihood method described in Ref. [102].
For all regions, events are split into two populations according to whether the fake candidate meets the nominal signal-quality criteria (nom-ID) or fails to meet at least one of the signal-lepton identification, isolation, or impact-parameter criteria (anti-ID). The expected contamination by prompt-lepton events from the $WZ$ and $ZZ$ processes, as estimated from MC simulation, is subtracted from both populations so that they better represent the yields from fake-lepton sources. The fake factor is defined as the ratio of the yield of nom-ID to anti-ID events in CRFake and reflects the relative likelihood for a fake lepton that meets the baseline criteria to either meet or fail to meet the signal-lepton quality criteria. This ratio depends on the fake-lepton source but is fairly independent of the underlying physics process or any additional activity in the event. Therefore, in each analysis region the fake factor can be applied to a population of anti-ID events, defined with the same region selections but with one or more signal leptons replaced by anti-ID leptons, to predict the yield of fake-lepton events that pass the selection requirements.
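The arithmetic of the fake-factor method can be sketched as follows; this is a minimal illustration with hypothetical yields, not the ATLAS implementation (which bins the fake factor in lepton kinematics):

```python
def fake_factor(n_nom, n_anti, prompt_nom, prompt_anti):
    """Fake factor measured in the fake-enriched region: the ratio of the
    prompt-subtracted nom-ID yield to the prompt-subtracted anti-ID yield."""
    return (n_nom - prompt_nom) / (n_anti - prompt_anti)

def fake_estimate(ff, n_anti_region, prompt_anti_region):
    """Predicted fake-lepton yield in an analysis region, obtained by scaling
    its prompt-subtracted anti-ID population by the fake factor."""
    return ff * (n_anti_region - prompt_anti_region)
```

For example, 120 nom-ID and 420 anti-ID events with 20 prompt events subtracted from each give a fake factor of 0.25, so a region with 12 anti-ID events (2 of them prompt) is predicted to contain 2.5 fake-lepton events.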
The fake factors are derived separately for electron and muon fake candidates and are parameterized as a function of $p_{\mathrm{T}}^{\mathrm{cone}}$, which better reflects the $p_{\mathrm{T}}$ of the underlying particle that produced the fake lepton, such as a HF hadron. Additional parameterizations of the fake factor were considered, including lepton $\eta$, $E_{\mathrm{T}}^{\mathrm{miss}}$, and the $b$-jet multiplicity of the event, but a two-dimensional parameterization would significantly reduce the statistical precision of the fake factors. The alternative parameterizations are instead used to define a systematic uncertainty due to the choice of $p_{\mathrm{T}}^{\mathrm{cone}}$. The statistical uncertainty of each fake factor is propagated to an uncertainty in the yield. An uncertainty due to the prompt-lepton subtraction is estimated by varying the subtracted yields of the $WZ$ and $ZZ$ MC simulations up and down by 5%, corresponding to their cross-section uncertainties [105]. For any $m_{3\ell}$ bin of an SR that does not have an anti-ID event, and therefore has a prediction of zero fake-lepton events, an uncertainty is applied corresponding to a yield of 0.32 fake events. This represents the largest fake estimate possible given a $1\sigma$ upward fluctuation in the anti-ID event yield.
To validate the fake estimate, a dedicated validation region VRFake is developed closer to the SRs, using the same selections as CRFake but requiring $E_{\mathrm{T}}^{\mathrm{miss}} < 40$ GeV and $30 < m_{\mathrm{T}} < 50$ GeV. Good agreement is seen between data and the post-fit background estimate in VRFake, and in the other VRs, for all observables relevant to the fake factor, including the $m_{3\ell}$ distributions shown in Figure 4. A conservative closure uncertainty of 23% (27%) is applied to the yield of events with electron (muon) fake candidates, derived so as to cover the most discrepant $p_{\mathrm{T}}^{\mathrm{cone}}$ bin observed in VRFake.
The fake factor for electrons is sensitive to the relative composition of the fake sources, which varies primarily between LF and HF across the analysis regions. To derive an uncertainty due to the fake-source composition, fake factors are measured in MC simulation in CRFake for HF and LF sources separately. The inclusive MC fake factors are seen to be reproduced by reweighting the HF and LF MC fake factors according to the CRFake composition. Therefore, a composition systematic uncertainty is derived in each analysis region by comparing the inclusive CRFake MC fake factors with those calculated from a reweighting of the HF and LF MC fake factors according to the composition of that region. The systematic uncertainty is derived using only MC simulation, in order to provide clean sources of HF and LF fake electrons, but is applied to the nominal data-driven fake factors, and is measured to be at most 53% for the electron fake factors in SR4ℓ.
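The reweighting comparison can be sketched as below; the fractions and fake-factor values are hypothetical, and the function names are illustrative only:

```python
def reweighted_ff(f_hf, ff_hf, ff_lf):
    """Fake factor rebuilt from HF and LF components, weighted by the HF
    fraction f_hf of a given region's fake-lepton population."""
    return f_hf * ff_hf + (1.0 - f_hf) * ff_lf

def composition_uncertainty(ff_inclusive, f_hf_region, ff_hf, ff_lf):
    """Relative shift between the inclusive fake factor and the
    composition-reweighted one; taken as a systematic uncertainty."""
    return abs(reweighted_ff(f_hf_region, ff_hf, ff_lf) - ff_inclusive) / ff_inclusive
```

A region with a much larger HF fraction than CRFake (e.g. 0.8 vs. 0.5 in this toy example) picks up a correspondingly larger composition uncertainty.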

Systematic uncertainties
Uncertainties in the expected signal and background yields account for the statistical uncertainties of the MC samples, the experimental systematic uncertainties in the detector measurements, and the theoretical systematic uncertainties of the MC simulation modeling. The uncertainties of the major backgrounds normalized in the CRs reflect the limited statistical precision of the CRs and the systematic uncertainties in the extrapolation to the signal regions, and an additional uncertainty in the normalization factor from the combined fit is included. The uncertainties related to the data-driven fake background estimation are described in detail in Section 6.2.
Systematic uncertainties are treated as Gaussian nuisance parameters in the likelihood, while the statistical uncertainties of the MC samples are treated as Poisson nuisance parameters. Unless stated otherwise, each experimental uncertainty is treated as fully correlated across the analysis regions, while each theoretical uncertainty is derived as the relative yield between an analysis region and a control region and is treated as uncorrelated across analysis regions.

A summary of the background uncertainties is shown in Figure 6. Individual uncertainties can be correlated or anti-correlated, for example between an uncertainty on a major background and the uncertainty on the CR-to-SR normalization procedure for that background. Bin-to-bin fluctuations in the uncertainty of the fake background estimate reflect the small anti-ID population and the conservative uncertainties applied when no anti-ID events are seen in the data. The effect of localized fluctuations in one SR is limited, as all three SRs contribute to the overall sensitivity. A relative uncertainty of 2.9 is seen in the last $m_{3\ell}$ bin of SRFR, driven by a relative uncertainty of 2.8 in the fake estimate, reflecting the small post-fit background expectation.

Theoretical uncertainties in the shape of the major diboson, triboson, and $t\bar{t}Z$ backgrounds are derived using MC simulation with varied generator parameters. For the other, minor backgrounds a conservative 20% uncertainty is assumed. This value is larger than typically expected for the minor background processes, and the choice has a negligible effect on the final results because of the small contributions of these backgrounds. Uncertainties due to the choice of QCD renormalization and factorization scales [109] are assessed by varying the relevant generator parameters up and down by a factor of two around the nominal values, allowing for both independent and correlated variations of the two scales but prohibiting anti-correlated variations.
Each QCD variation is kept separate and is treated as correlated across analysis regions. An uncertainty of 1% due to the chosen value of the strong coupling constant $\alpha_{\mathrm{S}}$ is assessed by varying $\alpha_{\mathrm{S}}$ by ±0.001 in the generator parameter settings. Uncertainties related to the choice of PDF sets, CT14NNLO [110] or MMHT2014NNLO [111], are derived by taking the envelope of the variation in event yield from 100 propagated uncertainties [70].
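The scale-variation scheme can be sketched as below. This is a hypothetical illustration: the yields are invented, and the envelope here is a symmetrized maximum deviation over the allowed $(\mu_R, \mu_F)$ points, with the anti-correlated points $(0.5, 2)$ and $(2, 0.5)$ omitted as in the text:

```python
# Allowed (muR, muF) scale factors: independent and correlated variations
# of the two scales, excluding the anti-correlated combinations.
SCALE_POINTS = [(0.5, 0.5), (0.5, 1.0), (1.0, 0.5),
                (1.0, 2.0), (2.0, 1.0), (2.0, 2.0)]

def scale_envelope(varied_yields, nominal_yield):
    """Largest relative deviation of any scale-varied yield from nominal,
    one yield per entry in SCALE_POINTS."""
    return max(abs(y - nominal_yield) / nominal_yield for y in varied_yields)
```

With invented varied yields of 95, 103, 98, 107, 99, and 101 events around a nominal 100, the envelope is a 7% uncertainty.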
Additional theoretical uncertainties are assessed for the major backgrounds. These are related to assumptions made in the event generators and PS models, which can affect both the event kinematics and the cross section of the physics process. For the diboson backgrounds, the Sherpa parameters related to the PS matching scale and resummation scale are varied up and down by a factor of two around the nominal values, and an alternative recoil scheme is studied. For the $t\bar{t}Z$ background, the uncertainties in the hard scatter and in the PS are derived through comparisons with the Sherpa and MadGraph5_aMC@NLO+Herwig 7 predictions, respectively. Additional uncertainties in the amount of initial-state radiation (ISR) in the $t\bar{t}Z$ background are assessed by varying the related generator parameters.
For the signal samples, theoretical uncertainties in the cross section are applied, ranging from 4.5% at 100 GeV to 16% at 1500 GeV. Uncertainties related to the QCD scale, PS matching scale, and amount of ISR are derived by varying the related generator parameters of the A14 tune [60].

Results
The data are compared with the post-fit background expectations, derived from a background-only profile likelihood fit of all CRs and SRs simultaneously as described in Section 6, and no significant excess is observed. The VRs, shown previously in Figure 5, demonstrate good modeling of the post-fit background expectation in regions kinematically similar to the SRs and for a variety of observables, validating the background-estimation technique. The observed and expected numbers of events in SRFR, SR4ℓ, and SR3ℓ are given in Table 3 inclusively in $m_{3\ell}$ and for the inclusive, direct-electron, and direct-muon flavor channels. The background expectation and uncertainty are further split into contributions from each category of SM processes. Separate fits are performed for each flavor channel and for the inclusive channel, and therefore the predicted yields in the $e$ and $\mu$ channels may not necessarily add up to the inclusive yield. Additionally, the SRFR regions place the same flavor requirement on both direct leptons in an event, so the data and predicted yields in the $e$ and $\mu$ channels do not add up to the inclusive result.
The $m_{3\ell}$ distributions in each SR, with binning corresponding to that used in the fit, are shown in Figure 7. The SRs show good agreement in the shape of the $m_{3\ell}$ distribution between data and the SM expectation, with no significant localized excesses. Three example signals with masses of 200, 500, and 800 GeV are included in these figures and peak strongly in their target $m_{3\ell}$ bin for all three SRs, with the 800 GeV signal only visible in the last $m_{3\ell}$ bin. Other observables in the SRs relevant for the extrapolation of the yield normalization are shown in Figure 8 and also demonstrate good agreement.

Table 3: The observed yields and post-fit background expectations in SRFR, SR4ℓ, and SR3ℓ, shown inclusively and when the direct lepton from a $\tilde{\chi}^{\pm}_{1}/\tilde{\chi}^{0}_{1}$ decay is required to be an electron or muon. The "Other" category consists mostly of the $tWZ$, $t\bar{t}W$, and $tZ$ processes. Uncertainties in the background expectation include combined statistical and systematic uncertainties. The individual uncertainties may be correlated and do not necessarily combine in quadrature to give the total background uncertainty.

Figure 7: The $m_{3\ell}$ distributions in the SRs. The binning is the same as that used in the fit and the yield is normalized to the bin width, with the last bin normalized using a width of 200 GeV. The "Other" category consists mostly of the $tWZ$, $t\bar{t}W$, and $tZ$ processes. The hatched bands indicate the combined theoretical, experimental, and MC statistical uncertainties in the background prediction. The bottom panel shows the significance of the differences between the observed data and expected yields, computed following the profile likelihood method described in Ref. [102].

Model-independent limits on new physics in inclusive regions
Upper limits are set on the possible visible cross sections of generic beyond-the-SM (BSM) processes in each $m_{3\ell}$ bin of each SR. These model-independent limits are derived at 95% confidence level (CL) using the CL$_{\mathrm{s}}$ prescription [112], and results are evaluated using pseudo-experiments. A profile likelihood fit is performed on the numbers of observed and expected events in the target $m_{3\ell}$ bin of one SR and the three CRs, and a generic BSM process is assumed to contribute only to the target $m_{3\ell}$ bin. In this way no assumption is made concerning the $\tilde{\chi}^{\pm}_{1}/\tilde{\chi}^{0}_{1}$ branching fractions or the $m_{3\ell}$ shape of the BSM process. No uncertainties in the yield of the BSM process are considered, except for the luminosity uncertainty.
This procedure is repeated for each of the 16 $m_{3\ell}$ bins in each of the three SRs, with only one SR bin considered per fit. This differs from the nominal fit strategy, which uses the three CRs and the 48 $m_{3\ell}$ bins of the SRs simultaneously, so minor differences from the significances shown in the bottom panel of Figure 7 are seen.
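A stripped-down illustration of the CL$_{\mathrm{s}}$ prescription with pseudo-experiments is sketched below for a single counting bin. This is a deliberate simplification of the profile-likelihood fit used in the analysis (no nuisance parameters, event count as the test statistic), with hypothetical yields:

```python
import math
import random

def _poisson(rng, lam):
    # Knuth sampler; adequate for the small expected yields used here.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def cls_toys(n_obs, b, s, n_toys=20000, seed=1):
    """Toy-based CLs for one counting bin with test statistic N:
    CLs = P(N <= n_obs | s+b) / P(N <= n_obs | b); the signal is
    excluded at 95% CL if CLs < 0.05."""
    rng = random.Random(seed)
    p_sb = sum(_poisson(rng, s + b) <= n_obs for _ in range(n_toys)) / n_toys
    p_b = sum(_poisson(rng, b) <= n_obs for _ in range(n_toys)) / n_toys
    return p_sb / max(p_b, 1e-12)
```

With 3 observed events on a background of 5, a 10-event signal is comfortably excluded, while a 0.1-event signal is not; dividing by the background-only probability protects against spuriously excluding signals to which the measurement has no sensitivity.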
The model-independent limits are summarized in Table 4, which includes for each signal region:
• the number of observed events $N_{\mathrm{obs}}$,
• the expected number of SM events $N_{\mathrm{exp}}$ and the associated uncertainty from a fit to the CRs only,
• the observed limit on the visible cross section $\sigma^{95}_{\mathrm{obs}}$ of the potential BSM process,
• the corresponding observed upper limit on the number of BSM events $S^{95}_{\mathrm{obs}}$,
• the expected upper limit on the number of BSM events $S^{95}_{\mathrm{exp}}$ and the associated uncertainty,
• and the $p$-value (and associated significance $Z$) for the SM background alone to fluctuate to at least the number of observed events.
The observed limit $\sigma^{95}_{\mathrm{obs}}$ is defined as the ratio of $S^{95}_{\mathrm{obs}}$ to the integrated luminosity, and it incorporates the cross section, acceptance, and selection efficiency of the generic BSM signal. No $m_{3\ell}$ bin shows a significant excess in all three SRs, in contrast to what would be expected in the presence of a resonance that contributes to all SRs. The largest excess of data over the expected background is seen in SRFR for the $m_{3\ell}$ region between 150 and 170 GeV, with an associated significance of 2.1$\sigma$. This is consistent with the expectation from statistical fluctuations of the SM background when considering 48 independent signal regions.

Table 4: Model-independent results where each row targets one $m_{3\ell}$ bin of one SR and probes scenarios where a generic beyond-the-SM process is assumed to contribute only to that $m_{3\ell}$ bin. The first two columns refer to the signal region and $m_{3\ell}$ bin probed, while the third and fourth columns show the observed ($N_{\mathrm{obs}}$) and expected ($N_{\mathrm{exp}}$) event yields. The expected yields are obtained using a background-only fit of all the CRs, and the errors include statistical and systematic uncertainties. The fifth and sixth columns show the observed 95% CL upper limit on the visible cross section ($\sigma^{95}_{\mathrm{obs}}$) and on the number of signal events ($S^{95}_{\mathrm{obs}}$), while the seventh column shows the expected 95% CL upper limit on the number of signal events ($S^{95}_{\mathrm{exp}}$) with the associated $1\sigma$ uncertainties. The last column provides the discovery $p$-value and significance ($Z$) of any excess of data above the background expectation. Cases for which the observed yield is less than the expected yield are capped at a $p$-value of 0.5.
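The conversion from an event-count limit to a visible cross-section limit is a single division by the integrated luminosity; the $S^{95}_{\mathrm{obs}}$ value below is a hypothetical example, not one of the results in Table 4:

```python
LUMI_FB = 139.0  # integrated luminosity of the dataset, in fb^-1

def visible_xsec_limit(s95_obs, lumi_fb=LUMI_FB):
    """sigma95_obs = S95_obs / integrated luminosity: the visible cross
    section (in fb) of a generic BSM process, folding in its cross section,
    acceptance, and selection efficiency."""
    return s95_obs / lumi_fb
```

For instance, a hypothetical limit of 6.95 signal events corresponds to a visible cross-section limit of 0.05 fb.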


Mass limits on B − L RPV production
Hypothesis tests for the B − L signal models are performed using the same CL$_{\mathrm{s}}$ prescription [112], with exclusion lower limits set on the $\tilde{\chi}^{\pm}_{1}/\tilde{\chi}^{0}_{1}$ masses for various scenarios of the $\tilde{\chi}^{\pm}_{1}/\tilde{\chi}^{0}_{1}$ branching fractions, using asymptotic formulas [100]. A profile likelihood fit is performed simultaneously on the CRs and all $m_{3\ell}$ bins of the three SRs, benefiting from the coherent contribution of a signal model to a small number of $m_{3\ell}$ bins across SRFR, SR4ℓ, and SR3ℓ. The signal strength is represented by a single parameter of interest that coherently scales the signal yield across all regions.
The sensitivity to the signal models depends on the $\tilde{\chi}^{\pm}_{1}/\tilde{\chi}^{0}_{1}$ branching fractions to each lepton and boson type, and a scan is performed over various combinations. The contributions from the $\tilde{\chi}^{\pm}_{1}\tilde{\chi}^{\mp}_{1}$ and $\tilde{\chi}^{\pm}_{1}\tilde{\chi}^{0}_{1}$ processes are treated together, and the $\tilde{\chi}^{\pm}_{1}$ and $\tilde{\chi}^{0}_{1}$ branching fractions are treated as fully correlated. Four scenarios are considered for the $\tilde{\chi}^{\pm}_{1}/\tilde{\chi}^{0}_{1}$ branching fractions to leptons: a scenario with equal branching fractions to $e$, $\mu$, and $\tau$-leptons, and three scenarios with a 100% branching fraction to a single lepton flavor.
For each leptonic scenario, the $\tilde{\chi}^{\pm}_{1}/\tilde{\chi}^{0}_{1}$ branching fractions to $W$, $Z$, and Higgs bosons are scanned at 10% intervals. A 0% branching fraction to $Z$ bosons is not explored and is replaced by a 1% branching fraction in the scans. No significant difference in sensitivity is seen for the relative $\tilde{\chi}^{\pm}_{1}/\tilde{\chi}^{0}_{1}$ branching fractions to $W$ or Higgs bosons, with the sensitivity dominated by the branching fraction to $Z$ bosons, which produces the target trilepton resonances. The three SRs contribute roughly equally to the overall sensitivity of the search, with a minor increase in sensitivity to Higgs boson decays from SRFR offset by a similar increase in sensitivity to $W$ boson decays from SR4ℓ.
The expected and observed mass-exclusion contours as a function of the $\tilde{\chi}^{\pm}_{1}/\tilde{\chi}^{0}_{1}$ branching fraction to $Z$ bosons are shown in Figure 9 for each of the four lepton-flavor scenarios. The $\tilde{\chi}^{\pm}_{1}/\tilde{\chi}^{0}_{1}$ branching fractions to $W$ and Higgs bosons are set to be equal here. Limits are set for signal masses above 100 GeV, and agreement within the uncertainties is seen between the observed and expected limits. The observed limit is slightly weaker than the expected limit because of the minor excesses seen at low $m_{3\ell}$ in SR4ℓ and in some high $m_{3\ell}$ bins in SRFR and SR3ℓ.
The observed mass exclusions are strongest when the $\tilde{\chi}^{\pm}_{1}/\tilde{\chi}^{0}_{1}$ branching fraction to $Z$ bosons is largest, reaching 1100 GeV and 1050 GeV for the $e$ and $\mu$ channels, respectively. The limit is slightly reduced, to 975 GeV, when no assumption is made about the flavor of the directly produced lepton, and is weakest, at 625 GeV, when only $\tilde{\chi}^{\pm}_{1}/\tilde{\chi}^{0}_{1}$ decays into $\tau$-leptons are allowed. The observed mass limit is significantly reduced when the $\tilde{\chi}^{\pm}_{1}/\tilde{\chi}^{0}_{1}$ branching fraction to $Z$ bosons falls below 20%, reaching 375 GeV in the $e$ channel and 350 GeV in the $\mu$ channel when the branching fraction reaches 1%. No limits are set when requiring decays into $\tau$-leptons for branching fractions to $Z$ bosons below 11%.

Figure 9: Exclusion curves for the simplified model of $\tilde{\chi}^{\pm}_{1}\tilde{\chi}^{\mp}_{1}$ + $\tilde{\chi}^{\pm}_{1}\tilde{\chi}^{0}_{1}$ production as a function of the $\tilde{\chi}^{\pm}_{1}/\tilde{\chi}^{0}_{1}$ mass and branching fraction to $Z$ bosons. Curves are derived separately when requiring that the charged-lepton decays of the $\tilde{\chi}^{\pm}_{1}/\tilde{\chi}^{0}_{1}$ follow each of the four lepton-flavor scenarios.

Conclusions
This paper presents a search for wino-type $\tilde{\chi}^{\pm}_{1}\tilde{\chi}^{\mp}_{1}$ and $\tilde{\chi}^{\pm}_{1}\tilde{\chi}^{0}_{1}$ production where each $\tilde{\chi}^{\pm}_{1}/\tilde{\chi}^{0}_{1}$ decays via an RPV coupling into a $W$, $Z$, or Higgs boson and a lepton. The dataset corresponds to an integrated luminosity of 139 fb$^{-1}$ of proton-proton collision data produced at a center-of-mass energy of $\sqrt{s}$ = 13 TeV and collected by the ATLAS experiment at the LHC between 2015 and 2018. This search primarily targets the three-lepton decay of a $\tilde{\chi}^{\pm}_{1}$ and is the first ATLAS analysis using $\sqrt{s}$ = 13 TeV data to search for a resonance in the $m_{3\ell}$ spectrum. Three signal regions are defined that target events with three or more leptons and missing transverse momentum or with two fully reconstructed $\tilde{\chi}^{\pm}_{1}/\tilde{\chi}^{0}_{1}$ decays. The observed event yields are found to be in agreement with Standard Model expectations, with no significant excess seen in the $m_{3\ell}$ distributions of the signal regions.
Model-independent limits are set at 95% confidence level for each $m_{3\ell}$ bin in each signal region. The largest excess of data over the expectation in the 48 model-independent regions is found to be 2.1$\sigma$. No trend is seen in the distribution of data excesses across the $m_{3\ell}$ bins of the three signal regions. Model-specific lower limits are also set on the $\tilde{\chi}^{\pm}_{1}/\tilde{\chi}^{0}_{1}$ masses for various decay branching fractions into a lepton (electron, muon, or $\tau$-lepton) plus a boson ($W$, $Z$, or Higgs), reflecting sensitivity to the neutrino mass hierarchy and the MSSM parameters of the B − L RPV theory. For scenarios with large $\tilde{\chi}^{\pm}_{1}/\tilde{\chi}^{0}_{1}$ branching fractions to $Z$ bosons, lower limits on the $\tilde{\chi}^{\pm}_{1}/\tilde{\chi}^{0}_{1}$ masses are set at 625 GeV, 1050 GeV, and 1100 GeV for 100% branching fractions to a $Z$ boson plus a $\tau$-lepton, muon, or electron, respectively.

The ATLAS Collaboration