Search for dark matter and unparticles produced in association with a Z boson in proton-proton collisions at √s = 8 TeV

A search for evidence of particle dark matter (DM) and unparticle production at the LHC has been performed using events containing two charged leptons, consistent with the decay of a Z boson, and large missing transverse momentum. This study is based on data collected with the CMS detector corresponding to an integrated luminosity of 19.7 fb⁻¹ of pp collisions at the LHC at a center-of-mass energy of 8 TeV. No excess of events is observed above the number expected from the standard model contributions. The results are interpreted in terms of 90% confidence level limits on the DM-nucleon scattering cross section, as a function of the DM particle mass, for both spin-dependent and spin-independent scenarios. Limits are set on the effective cutoff scale Λ, and on the annihilation rate for DM particles, assuming that their branching fraction to quarks is 100%. Additionally, the most stringent 95% confidence level limits to date on the unparticle model parameters are obtained.


Introduction
Ample evidence from astrophysical measurements supports the existence of dark matter (DM), which is assumed to be responsible for galactic gravitation that cannot be attributed to baryonic matter [1][2][3]. Recent DM searches have exploited a number of methods, including direct detection [4][5][6][7][8][9][10][11], indirect detection [12,13], and particle production at colliders [14][15][16][17][18][19][20][21][22][23][24][25][26]. The currently favored possibility is that DM takes the form of weakly interacting massive particles (WIMPs). The study presented here considers a mechanism for producing such particles at the CERN LHC [27]. In this scenario, a Z boson produced in pp collisions recoils against a pair of DM particles, χχ̄. The Z boson subsequently decays into two charged leptons (ℓ⁺ℓ⁻, where ℓ = e or µ), producing a clean dilepton signature together with missing transverse momentum due to the undetected DM particles. In this analysis, the DM particle χ is assumed to be a Dirac fermion or a complex scalar particle whose coupling to standard model (SM) quarks q can be described by one of the effective interaction operators [28]:

Vector, spin-independent (D5): $(\bar{\chi}\gamma_{\mu}\chi)(\bar{q}\gamma^{\mu}q)/\Lambda^{2}$;
Axial-vector, spin-dependent (D8): $(\bar{\chi}\gamma_{\mu}\gamma_{5}\chi)(\bar{q}\gamma^{\mu}\gamma_{5}q)/\Lambda^{2}$;
Tensor, spin-dependent (D9): $(\bar{\chi}\sigma_{\mu\nu}\chi)(\bar{q}\sigma^{\mu\nu}q)/\Lambda^{2}$;
Vector, spin-independent (C3): $(\chi^{\dagger}\overleftrightarrow{\partial_{\mu}}\chi)(\bar{q}\gamma^{\mu}q)/\Lambda^{2}$;

where Λ parameterizes the effective cutoff scale for interactions between DM particles and quarks. The operators denoted by D5, D8, and D9 couple to Dirac fermions, while C3 couples to complex scalars. The corresponding Feynman diagrams for production of a DM pair with a Z boson and up to one jet are shown in Fig. 1. A search similar to the one presented here has been performed by the ATLAS Collaboration [26], in which the DM particle is assumed to be a Dirac fermion that couples to either vector bosons or quarks.
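Because the operators above are dimension six, their amplitudes scale as 1/Λ² and the production cross sections as 1/Λ⁴, so a cross-section limit maps onto a cutoff-scale limit by a fourth-root rescaling. A minimal sketch of that translation, assuming this scaling law; the function name and sample numbers are illustrative, not from the analysis:

```python
def lambda_lower_limit(lambda_gen, sigma_gen, sigma_upper_limit):
    """Translate an upper limit on the production cross section into a
    lower limit on the effective cutoff scale, assuming the cross section
    scales as 1/Lambda^4 (dimension-six contact operators, amplitude
    proportional to 1/Lambda^2)."""
    return lambda_gen * (sigma_gen / sigma_upper_limit) ** 0.25

# A sample generated at Lambda = 1000 GeV with sigma = 16 (arbitrary units)
# and an observed cross-section limit of 1 excludes Lambda below 2000 GeV.
excluded_below = lambda_lower_limit(1000.0, 16.0, 1.0)
```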
The unparticle physics concept [29][30][31][32] is particularly interesting because it is based on scale invariance, which is anticipated in many beyond-the-SM physics scenarios [33][34][35]. The "unparticle stuff" of the scale-invariant sector appears as a non-integer number of invisible massless particles. In this scenario, the SM is extended by introducing a scale-invariant Banks-Zaks (BZ) field, which has a non-trivial infrared fixed point [36]. This field can interact with SM particles through the exchange of heavy particles with a high mass scale M_U. Below this mass scale, the coupling is non-renormalizable and the interaction is suppressed by powers of M_U. The interaction Lagrangian density can be expressed as

$\mathcal{L}_{U} = \frac{C_{U}\,\Lambda_{U}^{d_{BZ}-d_{U}}}{M_{U}^{k}}\, O_{SM}\, O_{U} = \frac{\lambda}{\Lambda_{U}^{d_{U}}}\, O_{SM}\, O_{U}$, (1)

in which C_U is a normalization factor fixed by the matching, d_U represents the possible non-integer scaling dimension of the unparticle operator O_U, and the parameter λ = C_U Λ_U^{d_BZ}/M_U^k is a measure of the coupling between SM particles and unparticles. In general, an unparticle does not have a fixed invariant mass but instead has a continuous mass spectrum, and its real production in low-energy processes described by the effective field theory in Eq. (1) can give rise to unusual missing energy distributions because of the possible non-integer values of the scaling dimension d_U. In the past, a reinterpretation [37] of LEP single-photon data has been used to set unparticle limits. A recent search for unparticles at CMS [14] in monojet final states has shown no evidence for their existence. In this paper, a scalar unparticle with real emission is considered, and the scaling dimension is constrained to d_U > 1 by the unitarity condition. Figure 2 shows the two tree-level diagrams considered in this paper for the production of unparticles in association with a Z boson.
Both the DM and unparticle scenarios considered in this analysis produce a dilepton (e⁺e⁻ or µ⁺µ⁻) signature consistent with a Z boson, together with a large magnitude of missing transverse momentum. The analysis is based on the full data set recorded by the CMS detector in 2012, which corresponds to an integrated luminosity of 19.7 ± 0.5 fb⁻¹ [38] at a center-of-mass energy of 8 TeV.

The CMS detector
The CMS detector is a multipurpose apparatus well suited to the study of high transverse momentum (p_T) physics processes in pp collisions. The central feature of the CMS apparatus is a superconducting solenoid of 6 m internal diameter, providing a magnetic field of 3.8 T. Within the superconducting solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections. Forward calorimeters extend the pseudorapidity [39] coverage provided by the barrel and endcap detectors. The electromagnetic calorimeter consists of 75 848 lead tungstate crystals, which provide coverage in pseudorapidity |η| < 1.479 in the barrel region and 1.48 < |η| < 3.00 in the two endcap regions (EE). A preshower detector, consisting of two planes of silicon sensors interleaved with a total of three radiation lengths (3X0) of lead, is located in front of the EE. The electron momentum is estimated by combining the energy measurement in the ECAL with the momentum measurement in the tracker. The momentum resolution for electrons with p_T ≈ 45 GeV from Z → ee decays ranges from 1.7% for nonshowering electrons in the barrel region to 4.5% for showering electrons in the endcaps [40]. Muons are measured in the pseudorapidity range |η| < 2.4, with gas-ionization detectors embedded in the steel flux-return yoke outside the solenoid. The muon detection planes use three technologies: drift tubes, cathode strip chambers, and resistive plate chambers. Matching muons to tracks measured in the silicon tracker results in a relative transverse momentum resolution of 1.3-2.0% in the barrel and better than 6% in the endcaps for muons with 20 < p_T < 100 GeV. The p_T resolution in the barrel is better than 10% for muons with p_T up to 1 TeV [41].
The first level of the CMS trigger system, composed of custom hardware processors, uses information from the calorimeters and muon detectors to select the most interesting events, in a fixed time interval of less than 4 µs. The high-level trigger processor farm further decreases the event rate from around 100 kHz to less than 1 kHz, before data storage. A more detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, can be found in Ref. [39]. Variables of particular relevance to the present analysis are the missing transverse momentum vector p miss T and the magnitude of this quantity, E miss T . The quantity p miss T is defined as the projection on the plane perpendicular to the beams of the negative vector sum of the momenta of all reconstructed particles in an event.
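The definition of p_T^miss above amounts to a negative two-component vector sum over reconstructed particles; a toy illustration of that definition (the particle list and values are invented for the example):

```python
import math

def missing_transverse_momentum(particles):
    """p_T^miss: the projection on the plane perpendicular to the beams of
    the negative vector sum of the momenta of all reconstructed particles.
    `particles` is a toy list of (px, py) pairs in GeV."""
    met_x = -sum(px for px, _ in particles)
    met_y = -sum(py for _, py in particles)
    # E_T^miss is the magnitude of the p_T^miss vector.
    return (met_x, met_y), math.hypot(met_x, met_y)

# Two visible leptons and nothing else: E_T^miss balances their vector sum.
_, met = missing_transverse_momentum([(30.0, 0.0), (0.0, 40.0)])
```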

Simulation
Samples of simulated DM particle events are generated using MADGRAPH 5.2.1 [42] matched to PYTHIA 6.4.26 [43] using tune Z2* for parton showering and hadronization. The PYTHIA 6 Z2* tune uses the CTEQ6L [44] parton distribution set. This tune is derived from the Z1 tune [45], which is based on CTEQ5L. The effective cutoff scale Λ is set to 1 TeV. The events for the unparticle models are generated with PYTHIA 8.1 [46][47][48] assuming a renormalization scale Λ U = 15 TeV, using tune 4C [49] for parton showering and hadronization. We evaluate other values of Λ U by rescaling the cross sections as needed. Figure 3 shows the distribution of E miss T at the generator level for both DM and unparticle production. In the unparticle scenario, the events with larger scaling dimension d U tend to have a broader E miss T distribution. For DM production, the shape of the E miss T is similar for couplings D5, D8, and C3, where the vector or axial vector couplings tend to produce nearly back-to-back DM particles. This configuration is less strongly favored for the tensor couplings, and thus the D9 couplings show a much broader E miss T distribution.
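The rescaling of the unparticle cross sections to other values of Λ_U mentioned above can be sketched as follows, under the assumption that the emission amplitude carries one factor of λ/Λ_U^(d_U−1), so that σ scales as λ² Λ_U^(2−2d_U); this scaling law and the function are illustrative assumptions, not taken from the generator configuration:

```python
def rescale_unparticle_xsec(sigma_ref, d_U, lam, Lambda_U,
                            lam_ref=1.0, Lambda_ref=15000.0):
    """Rescale an unparticle cross section generated at (lam_ref, Lambda_ref)
    to new coupling and renormalization-scale values, assuming
    sigma ~ lam^2 * Lambda_U^(2 - 2*d_U).  Scales are in GeV; the default
    reference corresponds to the generated Lambda_U = 15 TeV, lam = 1."""
    coupling_factor = (lam / lam_ref) ** 2
    scale_factor = (Lambda_ref / Lambda_U) ** (2.0 * (d_U - 1.0))
    return sigma_ref * coupling_factor * scale_factor
```

For example, doubling Λ_U at d_U = 2 reduces the cross section by a factor of four under this assumption.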
The POWHEG 2.0 [50][51][52][53][54] event generator is used to produce samples of events for the tt and tW background processes. The ZZ, WZ, and Drell-Yan (DY, Z/γ* → ℓ⁺ℓ⁻) processes are generated using the MADGRAPH 5.1.3 [55] event generator. The default set of parton distribution functions (PDF) CTEQ6L [56] is used for leading-order (LO) generators, while the CT10 [57] set is used for next-to-leading-order (NLO) generators. The NLO calculations are used for background cross sections, whereas only LO calculations are available for the signal processes. For all Monte Carlo (MC) samples, the detector response is simulated using a detailed description of the CMS detector, based on the GEANT4 package [58]. Minimum bias events are superimposed on the simulated events to emulate the additional pp interactions per bunch crossing (pileup). All MC samples are corrected to reproduce the pileup distribution as measured in the data. The average number of pileup events per proton bunch crossing is about 20 for the 2012 data sample.

Figure 3: Generator-level E_T^miss distributions for DM and unparticle production. The unparticle curves have the scalar unparticle coupling λ between unparticles and SM fields set to 1, with the scaling dimension d_U ranging from 1.5 to 2.1. The SM background ZZ → ℓ⁺ℓ⁻νν̄ is shown as a red solid curve.

Event reconstruction
Events are collected by requiring dilepton (ee or µµ) triggers with thresholds of p_T > 17 and 8 GeV for the leading and sub-leading leptons, respectively. Single-lepton triggers with thresholds of p_T > 27 (24) GeV for electrons (muons) are also included to recover residual trigger inefficiencies. Prior to the selection of leptons, a primary vertex must be selected as the event vertex. The vertex with the largest value of ∑p_T² for the associated tracks is selected. Simulation studies show that this requirement correctly selects the event vertex in more than 99% of both signal and background events. The lepton candidate tracks are required to be compatible with the event vertex.
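The ∑p_T² vertex choice can be sketched as follows (a toy data structure with hypothetical names, not the actual reconstruction code):

```python
def select_event_vertex(vertices):
    """Return the id of the vertex with the largest sum of p_T^2 over its
    associated tracks.  `vertices` maps a vertex id to a list of track p_T
    values in GeV; a toy stand-in for the reconstructed vertex collection."""
    return max(vertices, key=lambda vid: sum(pt * pt for pt in vertices[vid]))

# A hard-scatter vertex with two high-p_T tracks beats many soft pileup tracks.
chosen = select_event_vertex({"pileup": [1.0] * 30, "hard_scatter": [45.0, 40.0]})
```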
A particle-flow (PF) event algorithm [59,60] reconstructs and identifies each individual particle with an optimized combination of information from the various elements of the CMS detector. The energy of photons is directly obtained from the ECAL measurement, corrected for zero-suppression effects. The energy of electrons is determined from a combination of the electron momentum at the event vertex as determined by the tracker, the energy of the corresponding ECAL cluster, and the energy sum of all bremsstrahlung photons spatially compatible with originating from the electron track. The energy of muons is obtained from the curvature of the corresponding track. The energy of charged hadrons is determined from a combination of the momentum measured in the tracker and the matching ECAL and HCAL energy deposits, corrected for zero-suppression effects and for the response function of the calorimeters to hadronic showers. Finally, the energy of neutral hadrons is obtained from the corresponding corrected ECAL and HCAL energies.
Electron candidates are reconstructed using two algorithms [40]: in the first, energy clusters in the ECAL are matched to signals in the silicon tracker, and in the second, tracks in the silicon tracker are matched to ECAL clusters. The electron candidates used in the analysis are required to be reconstructed by both algorithms. To reduce the electron misidentification rate, the candidates have to satisfy additional identification criteria that are based on the shape of the electromagnetic shower in the ECAL. In addition, the electron track is required to originate from the event vertex and to match the shower cluster in the ECAL. Electron candidates with an ECAL cluster in the transition region between ECAL barrel and endcap (1.44 < |η| < 1.57) are rejected because the reconstruction of an electron object in this region is not optimal. Candidates that are identified as coming from photon conversions [40] in the detector material are explicitly removed.
Muon candidate reconstruction is also based on two algorithms: in the first, tracks in the silicon tracker are matched with at least one muon segment in any detector plane of the muon system, and in the second algorithm a combined fit is performed to hits in both the silicon tracker and the muon system [41]. The muon candidates in this analysis are required to be reconstructed by both algorithms and to be further identified as muons by the PF algorithm. To reduce the muon misidentification rate, additional identification criteria are applied based on the number of space points measured in the tracker and in the muon system, the fit quality of the muon track, and its consistency with the event vertex location.
Leptons produced in the decay of Z bosons are expected to be isolated from hadronic activity in the event. Therefore, an isolation requirement is applied based on the sum of the momenta of the PF candidates found in a cone of radius R = √((∆η)² + (∆φ)²) = 0.4 around each lepton, where φ is the azimuthal angle. The isolation sum is required to be smaller than 15% (20%) of the p_T of the electron (muon). To correct for the contribution to the isolation sum from pileup interactions and the underlying event, a median energy density (ρ) is determined on an event-by-event basis using the method described in Ref. [61]. For each electron, the mean energy deposit in the isolation cone of the electron, coming from other pp collisions in the same bunch crossing, is estimated following the method described in Ref. [40], and subtracted from the isolation sum. For muon candidates, only charged tracks associated with the event vertex are included. The sum of the p_T of charged particles not associated with the event vertex in the cone of interest is rescaled by a factor corresponding to the average ratio of neutral to charged energy densities in jets, and subtracted from the isolation sum.
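The relative-isolation computation can be sketched as follows; the single `pileup_offset` parameter is a deliberate simplification of the ρ-based (electron) and charged-fraction (muon) corrections described above:

```python
import math

def relative_isolation(lep_pt, lep_eta, lep_phi, candidates,
                       cone=0.4, pileup_offset=0.0):
    """Sum the p_T of PF candidates within DeltaR = sqrt(deta^2 + dphi^2) < cone
    around the lepton and divide by the lepton p_T.  `candidates` is a toy
    list of (pt, eta, phi) tuples; `pileup_offset` stands in for the pileup
    subtraction described in the text."""
    iso = 0.0
    for pt, eta, phi in candidates:
        # Wrap the azimuthal difference into (-pi, pi].
        dphi = math.atan2(math.sin(phi - lep_phi), math.cos(phi - lep_phi))
        if math.hypot(eta - lep_eta, dphi) < cone:
            iso += pt
    return max(iso - pileup_offset, 0.0) / lep_pt

# A 3 GeV candidate inside the cone of a 40 GeV lepton: isolation 0.075,
# passing both the 15% (electron) and 20% (muon) requirements.
iso_value = relative_isolation(40.0, 0.0, 0.0, [(3.0, 0.1, 0.1), (5.0, 1.0, 2.0)])
```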
Jets are reconstructed from PF candidates by using the anti-k T clustering algorithm [62] with a distance parameter of 0.5, as implemented in the FASTJET package [63,64]. Jets are found over the full calorimeter acceptance, |η| < 5. The jet momentum is defined as the vector sum of all particle momenta assigned to the jet, and is found in the simulation to be within 5% to 10% of the true hadron-level momentum over the whole p T range and detector acceptance. An overall energy subtraction is applied to correct for the extra energy clustered in jets due to pileup, following the procedure described in Ref. [65]. In the subtraction, the charged particle candidates associated with secondary vertices reconstructed in the event are also included. Other jet energy scale corrections applied are derived from simulation, and are confirmed by measurements of the energy balance in dijet and γ+jets events.

Event selection
Selected events are required to have exactly two well-identified, isolated leptons with the same flavor and opposite charge (e⁺e⁻ or µ⁺µ⁻), each with p_T > 20 GeV. The invariant mass of the lepton pair is required to be within ±10 GeV of the nominal mass of the Z boson. Only leptons within the pseudorapidity range |η| < 2.4 (2.5) for muons (electrons) are considered. To reduce the background from the WZ process where the W boson decays leptonically, events are removed if an additional electron or muon is reconstructed with p_T > 10 GeV. As a very loose preselection requirement, the dilepton transverse momentum p_T^ℓℓ is required to be larger than 50 GeV to reject the bulk of DY background events.
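These preselection requirements can be sketched as a single predicate (a toy sketch; the lepton tuples and function name are illustrative):

```python
Z_MASS = 91.1876  # GeV, nominal Z boson mass

def passes_preselection(lep1, lep2, dilepton_mass, dilepton_pt, other_lepton_pts):
    """Preselection from the text: two same-flavor, opposite-charge leptons
    with p_T > 20 GeV, |m_ll - m_Z| < 10 GeV, dilepton p_T > 50 GeV, and a
    veto on any additional lepton with p_T > 10 GeV.  Leptons are toy
    (flavor, charge, pt) tuples."""
    f1, q1, pt1 = lep1
    f2, q2, pt2 = lep2
    return (f1 == f2 and q1 == -q2
            and min(pt1, pt2) > 20.0
            and abs(dilepton_mass - Z_MASS) < 10.0
            and dilepton_pt > 50.0
            and not any(pt > 10.0 for pt in other_lepton_pts))
```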
Since only a small amount of hadronic activity is expected in the final state of both DM and unparticle events, any event having two or more jets with p T > 30 GeV is rejected. Top quark decays, which always involve the emission of b quarks, are further suppressed with the use of techniques based on soft-muon and b-jet tagging. The rejection of events with soft muons having p T > 3 GeV reduces the background from semileptonic b decays. The b-jet tagging technique employed is based on the "combined secondary vertex" algorithm [66,67]. This algorithm selects a group of tracks forming a secondary vertex within a jet and generates a likelihood discriminant to distinguish between b jets and jets originating from light quarks, gluons, or charm quarks. The applied threshold provides, on average, 80% efficiency for tagging jets originating from b quarks, and 10% probability of light-flavor jet misidentification. The b-tagged jet is required to have p T > 20 GeV and to be reconstructed within the tracker acceptance volume (|η| < 2.5).
The final selection is optimized for both DM and unparticle signals to obtain the best expected cross section limit at 95% confidence level (CL) using four variables: E_T^miss; the azimuthal angle ∆φ(ℓℓ, p_T^miss) between the dilepton system and the missing transverse momentum; the balance variable |E_T^miss − p_T^ℓℓ|/p_T^ℓℓ; and |u_∥/p_T^ℓℓ|, where u_∥ is the component of the hadronic recoil parallel to the direction of p_T^ℓℓ. The last three variables effectively suppress reducible background processes such as DY and top-quark production. If the best expected significance is used in the optimization, instead of the best expected limit, very similar results are obtained. In both electron and muon channels, a mass-independent event selection followed by a fit to the shape of the transverse mass distribution, $m_{T} = \sqrt{2\, p_{T}^{\ell\ell}\, E_{T}^{miss}\, [1 - \cos\Delta\phi(\ell\ell, p_{T}^{miss})]}$, is used to discriminate between the signal and the backgrounds. For each set of selection requirements considered, the full analysis, including the estimation of backgrounds and the systematic uncertainties, is repeated. The final selection criteria obtained after optimization for both the electron and muon channels are: E_T^miss > 80 GeV, ∆φ(ℓℓ, p_T^miss) > 2.7, |u_∥/p_T^ℓℓ| < 1, and |E_T^miss − p_T^ℓℓ|/p_T^ℓℓ < 0.2. A summary of the preselection and final selection criteria is given in Table 1. Figure 4 shows the distributions of E_T^miss after preselection, in the ee and µµ channels. Good agreement is found between the observed distributions and the background prediction, which is described in the following section.

Table 1: Summary of selections used in the analysis.
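The transverse-mass fit variable and the optimized selection can be sketched as follows (a toy sketch assuming the standard dilepton-plus-E_T^miss transverse-mass definition; function names are illustrative):

```python
import math

def transverse_mass(pt_ll, met, dphi):
    """m_T = sqrt(2 * p_T^ll * E_T^miss * (1 - cos(dphi))), with dphi the
    azimuthal angle between the dilepton system and p_T^miss."""
    return math.sqrt(2.0 * pt_ll * met * (1.0 - math.cos(dphi)))

def passes_final_selection(met, dphi_ll_met, u_par_over_pt, pt_ll):
    """Final selection from the text: E_T^miss > 80 GeV,
    dphi(ll, p_T^miss) > 2.7, |u_par / p_T^ll| < 1, and
    |E_T^miss - p_T^ll| / p_T^ll < 0.2."""
    return (met > 80.0
            and dphi_ll_met > 2.7
            and abs(u_par_over_pt) < 1.0
            and abs(met - pt_ll) / pt_ll < 0.2)
```

A back-to-back topology (∆φ = π) with p_T^ℓℓ = E_T^miss = 100 GeV gives m_T = 200 GeV.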


Background estimation
The ZZ and WZ backgrounds are modeled using MC simulation, and normalized to their respective NLO cross sections computed with MCFM 6.8 [68]. Other backgrounds, including tt, tW, WW, Z → ττ, and DY, are estimated from data for the final selection. The background from W+jets is negligible in the muon channel but significant in the electron channel, where it is estimated using a method based on control samples in data.
The background processes that do not involve Z boson production are referred to as nonresonant backgrounds. Such backgrounds arise mainly from leptonic W boson decays in tt, tW, and WW events. There are also small contributions from s- and t-channel single top quark events and from Z → ττ events in which τ leptons produce electrons or muons and E_T^miss. We estimate these nonresonant backgrounds using a data control sample, consisting of events with an opposite-charge, different-flavor dilepton pair (e±µ∓) that otherwise pass the full selection. As the decay rates for Z → e⁺e⁻ and Z → µ⁺µ⁻ are equal, the backgrounds in the ee and µµ channels can be estimated as

$N_{ee}^{bkg} = \frac{1}{2}\, k_{ee}\, N_{e\mu}^{data,\,corr}$, $\quad N_{\mu\mu}^{bkg} = \frac{1}{2}\, k_{\mu\mu}\, N_{e\mu}^{data,\,corr}$, with $k_{ee} = \sqrt{N_{ee}^{data}/N_{\mu\mu}^{data}}$ and $k_{\mu\mu} = \sqrt{N_{\mu\mu}^{data}/N_{ee}^{data}}$,

in which the coefficient of 1/2 in the correction factors k_ee and k_µµ comes from the dilepton decay ratios for ee, µµ, and eµ in these nonresonant backgrounds, and N_ee^data and N_µµ^data are the numbers of selected ee and µµ events in data with masses inside the Z mass window. The ratio √(N_ee^data/N_µµ^data) and its reciprocal take into account the difference between the electron and muon selection efficiencies. The term N_eµ^{data,corr} is the number of eµ events observed in data, corrected by subtracting the ZZ, WZ, DY, and W+jets background contributions estimated using MC simulation. The validity of this procedure for predicting nonresonant backgrounds is checked with simulated events containing tt, tW, WW, and Z → ττ processes. We assign a systematic uncertainty of 17% (15%) to this background estimation in the electron (muon) channel, based on an observed discrepancy between simulated events and data.
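The eµ control-sample extrapolation can be sketched numerically as follows (toy yields chosen for illustration, not from the analysis):

```python
import math

def nonresonant_background(n_ee, n_mumu, n_emu_corr):
    """Estimate the nonresonant backgrounds in the ee and mumu channels from
    the corrected e-mu control sample:
        N_ee^bkg   = 0.5 * k_ee   * N_emu^corr,  k_ee   = sqrt(N_ee / N_mumu)
        N_mumu^bkg = 0.5 * k_mumu * N_emu^corr,  k_mumu = sqrt(N_mumu / N_ee)
    The square-root ratio absorbs the different electron and muon selection
    efficiencies; the 1/2 comes from the ee : mumu : emu decay ratios."""
    k_ee = math.sqrt(n_ee / n_mumu)
    k_mumu = math.sqrt(n_mumu / n_ee)
    return 0.5 * k_ee * n_emu_corr, 0.5 * k_mumu * n_emu_corr

# Toy yields: 400 ee and 900 mumu Z-window events, 60 corrected emu events.
bkg_ee, bkg_mumu = nonresonant_background(400.0, 900.0, 60.0)
```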
The DY process is dominant in the region of low E_T^miss. This process does not produce undetectable particles, so any measured E_T^miss arises mainly from mismeasurement; its contribution to the signal region is estimated from data.

A W+jets background event consists of a genuine prompt lepton from the W decay, and a nonisolated lepton resulting from the leptonic decay of heavy quarks, misidentified hadrons, or electrons from photon conversions. The rate at which jets are misidentified as leptons may not be accurately described in the MC simulation, so the rate of jets passing the lepton identification requirements is determined using a control data sample enriched in jets. The genuine lepton contamination from W/Z+jets events in the selected control sample is subtracted using simulation to avoid biasing the calculation of the misidentification rate. The final estimate is obtained by applying these weights to a sample selected with lepton identification requirements that are looser than for the signal sample. The main source of systematic uncertainty for this background estimation comes from the measurement of the misidentification rate. A systematic uncertainty of 15% is assigned, based on the dependence of the calculated misidentification rates on the selection criteria applied to the control sample.

Efficiencies and systematic uncertainties
The efficiencies for selecting, reconstructing, and identifying isolated leptons are determined from simulation, and then corrected with scale factors determined by applying a "tag-and-probe" technique [69] to Z → ℓ⁺ℓ⁻ events. The trigger efficiencies for the electron and muon channels are found to be above 90%, varying as a function of the p_T and |η| of the lepton. The identification efficiency for electrons (muons), when applying the criteria described in Section 4, is found to be 95% (94%). The corresponding data-to-MC scale factors are typically in the range 0.94-1.01 (0.98-1.02) for the electron (muon) channel, depending on the p_T and |η| of the lepton candidate. For both channels, the overall uncertainty in selecting and reconstructing leptons in an event is about 3%.
The systematic uncertainties include normalization uncertainties that affect the overall size of contributions, and shape uncertainties that alter the shapes of the distributions used in extracting the signal limits. The systematic uncertainties are summarized in Table 2.
The normalization uncertainties in the background estimates from data are described in Section 6. The overall approach for the estimation of the PDF and α_S uncertainties (referred to as PDF+α_S in the following) adopts the interim recommendations of the PDF4LHC group and is used for both the signal and the background [70][71][72][73][74]. This is the most important uncertainty for the signals.

Table 2: Summary of systematic uncertainties. Each background uncertainty represents the variation of the relative yields of the particular background components. The signal uncertainties represent the relative variations in the signal acceptance, and the ranges quoted cover both DM and unparticle signals with different DM masses or scaling dimensions. For shape uncertainties, the numbers correspond to the overall effect of the shape variation on yield or acceptance. A dash indicates that the systematic uncertainty is not applicable.

As the mass of the DM particles increases, the PDF+α_S uncertainty reaches 20%, which can be explained by the diminishing phase space for DM production and the rise of the corresponding uncertainty in the cross section. The efficiencies for the signal, ZZ, and WZ processes are estimated using simulation, and the uncertainties in the corresponding yields are derived from variations of the renormalization and factorization scales, α_S, and the choice of PDFs, where the factorization and renormalization scales are assessed by varying the original scales of the process by factors of 0.5 and 2. Typical values for the signal selection efficiency are found to be around 40%. The uncertainty related to the renormalization and factorization scales is 5% for the signal, and 7-8% for the ZZ and WZ processes. The effect of variations in α_S and the choice of PDFs is 5-6% for the ZZ and WZ backgrounds. The uncertainty assigned to the luminosity measurement is 2.6% [38].
The contributions to the shape uncertainties come from the lepton momentum scale, the jet energy scale and resolution, the unclustered E miss T scale, the b tagging efficiency, and the pileup modeling. Each corresponding uncertainty is calculated by varying the respective variable of interest within its own uncertainties, and propagating the variations to the variable m T using the final selection. In the case of the lepton momentum scale, the uncertainty is computed by varying the momentum of the leptons by their uncertainties. The uncertainty in the muon momentum scale is 1%. For electrons, uncertainties of 0.6% for the barrel and 1.5% for the endcaps are applied. For the ZZ background, a comparison of the acceptance from normalized yields between MADGRAPH, POWHEG [75], and SHERPA 2.1.1 [76] shows that these generators differ in their total event prediction for the signal region. Therefore, an additional uncertainty of 14% is assigned as a generator-related shape systematic uncertainty in the ZZ background. This is the dominant uncertainty in the total background prediction for the signal region. For the WZ background, this difference in acceptance is not observed and data and simulation agree in the selected three-lepton control region.
The uncertainties in the calibration of the jet energy scale and resolution directly affect the assignments of jets to jet categories, the E miss T computation, and all the selections related to jets. The effect of the jet energy scale uncertainty is estimated by varying the energy scale by ±1σ. A similar strategy is used to evaluate the systematic uncertainty related to the jet energy resolution. The uncertainties in the final yields are found to be 3-5% (5-7%) for signal (background). The effect of the uncertainty in the energy scale of the unclustered component of the E miss T measurement is estimated by subtracting the leptons and jets from the E miss T summation and by varying the residual recoil by ±10%. The clustered component is then added back in order to recalculate the value of E miss T . The resultant uncertainty in the final yields is found to be of order 1-2%. Since the b tagging efficiencies measured in data are somewhat different from those predicted by the simulation, an event-by-event reweighting using data-to-MC scale factors is applied to simulated events. The uncertainty associated with this procedure is obtained by varying the event-by-event weight by ±1σ. The total uncertainty in the final yields is 0.6-1% (0.4-1.4%) for signal (background). All simulated events are reweighted to reproduce the pileup conditions observed in data. To compute the uncertainty related to pileup modeling, we shift the mean of the distribution in simulation by 5%. The variation of the final yields induced by this procedure is less than 1%. For the processes estimated from simulation, the sizes of the MC samples limit the precision of the modeling, and the corresponding statistical uncertainty is incorporated into the shape uncertainty. A similar treatment is applied to the backgrounds estimated from control samples in data based on the statistical uncertainties in the corresponding control samples.

Results
For both the electron and the muon channels, a shape-based analysis is employed. The expected numbers of background and signal events scaled by a signal strength modifier are combined in a binned likelihood for each bin of the m T distribution. The signal strength modifier, defined as the signal cross section divided by the cross section suggested by theory, determines the strength of the signal process [77]. The numbers of observed and expected events are shown in Table 3, including the expectation for a selected mass point for each type of signal. Figure 5 shows the m T distributions after the final selection. The observed distributions agree with the SM background predictions and no excess of events is observed.
Upper limits on the contribution of events from new physics are computed by using the modified frequentist approach CL s [78,79] based on asymptotic formulas [77,80].
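The CLs criterion can be illustrated with a toy single-bin counting experiment; this sketch is a deliberate simplification of the binned-likelihood asymptotic procedure used in the paper, and the function names are invented:

```python
import math

def poisson_cdf(n, lam):
    """P(N <= n) for a Poisson distribution with mean lam."""
    return sum(math.exp(-lam) * lam ** k / math.factorial(k) for k in range(n + 1))

def cls_upper_limit(n_obs, b, s, alpha=0.05, step=0.001):
    """Upper limit on the signal-strength modifier mu at (1 - alpha) CL for a
    single-bin counting experiment, scanning mu upward until
        CLs(mu) = P(n <= n_obs | mu*s + b) / P(n <= n_obs | b) <= alpha."""
    cl_b = poisson_cdf(n_obs, b)
    mu = 0.0
    while poisson_cdf(n_obs, mu * s + b) / cl_b > alpha:
        mu += step
    return mu

# With zero observed events and no background, CLs(mu) = exp(-mu*s),
# which reproduces the familiar limit mu < -ln(0.05) ~ 3 for s = 1.
mu_95 = cls_upper_limit(0, 0.0, 1.0)
```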

DM interpretation
The observed limit on the cross section for DM production depends on the DM particle mass and the nature of DM interactions with SM particles. Within the framework of effective field theory, the upper limits on this cross section can be translated into 90% CL lower limits on the effective cutoff scale Λ as a function of the DM particle mass m_χ, as shown in Fig. 6. The choice of 90% CL is made in order to allow comparisons with direct detection experiments. The relic density of cold, non-baryonic DM has been measured by the Planck telescope [81] using the anisotropy of the cosmic microwave background and the spatial distribution of galaxies. They obtain a value Ωh² = 0.1198 ± 0.0026, where h is the reduced Hubble constant. The implications of this result in the plane of the effective cutoff scale Λ and the DM mass m_χ have been calculated with MadDM 1.0 [82], and are shown in Fig. 6. Results from a search for DM particles using monojet signatures in CMS [14] are also plotted for comparison. It has been emphasized by several authors [28,[83][84][85]] that the effective field theory approach is not valid over the full range of phase space that is accessible at the LHC, since the scales involved can be comparable to the collision energy. In the LHC regime, the assumption of a point-like interaction provides a reliable approximation of the underlying ultraviolet-complete theory only for appropriate choices of couplings and masses. To estimate the region of validity relevant to this analysis, we consider a simple tree-level ultraviolet-complete model that contains a massive mediator (M) exchanged in the s-channel, with the couplings to quarks and DM particles described by coupling constants g_q and g_χ. The effective cutoff scale can then be expressed as Λ ∼ M/√(g_q g_χ) when the momentum transfer is small (Q_tr < M).
Imposing a condition on the couplings √ g q g χ < 4π to ensure stability of the perturbative calculation, and a mass requirement M > 2m χ , a lower bound Λ > m χ /2π is obtained for the region of validity. The area below this boundary, where the effective theory of DM is not expected to provide a reliable prediction at the LHC, is shown as a pink shaded area in each of the panels of Fig. 6.
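The validity bound follows directly from combining the three conditions above; a one-line numerical check (function name is illustrative):

```python
import math

def lambda_validity_bound(m_chi, g_max=4.0 * math.pi):
    """Lower bound on the cutoff scale for the EFT to be self-consistent:
    with Lambda ~ M / sqrt(g_q g_chi), M > 2*m_chi, and the perturbativity
    condition sqrt(g_q g_chi) < 4*pi, one obtains
    Lambda > 2*m_chi / (4*pi) = m_chi / (2*pi).  Masses in GeV."""
    return 2.0 * m_chi / g_max
```

For m_χ = 100 GeV this gives Λ > 100/(2π) ≈ 16 GeV, so the bound only becomes restrictive at large DM masses.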
Figure 6: 90% CL lower limits on the effective cutoff scale Λ as a function of the DM mass m_χ. The curve calculated with MadDM 1.0 [82] reflects the relic density of cold, non-baryonic DM, Ωh² = 0.1198 ± 0.0026, measured by the Planck telescope [81]. Monojet results from CMS [14] are shown for comparison. Truncated limits with √(g_q g_χ) = 1 are presented with red dot long-dashed lines. The blue double-dot and triple-dot dashed lines indicate the contours of R_Λ = 80% for all operators with couplings √(g_q g_χ) = π and 4π.
However, according to some authors [83,85-94], the requirement Λ > m χ /(2π) is not sufficient, and the region of validity depends on the coupling values in the ultraviolet completion of the theory. Considering the more realistic minimum constraint Q tr < M ∼ √(g q g χ ) Λ, we can calculate the ratio

R Λ = N(Q tr < √(g q g χ ) Λ) / N total ,

of the number of events fulfilling the validity criterion to the total number of events produced in the accessible phase space. The values of R Λ can be used to check the accuracy of the effective description in regions of the (Λ, m χ ) parameter space. Figure 6 includes the corresponding contours of R Λ = 80% for all operators, with couplings √(g q g χ ) = π and 4π. Alternatively, truncated limits can be obtained by removing the events with Q tr > √(g q g χ ) Λ at the generator level. Figure 6 also shows these truncated limits for √(g q g χ ) = 1. Beyond a certain value of m χ , the truncated limit drops rapidly to zero because none of the events fulfill the requirement Q tr < √(g q g χ ) Λ. For the maximum coupling g χ = g q = 4π, 100% of the events pass this requirement; the truncated limits then coincide with the observed ones and are not shown.

Figure 7 shows the 90% CL upper limits on the DM-nucleon cross section as a function of the DM particle mass for both the spin-dependent and spin-independent cases [83,95]. The truncated limits for D5, D8, D9, and C3 with √(g q g χ ) = 1 are presented as dashed lines in the same shade as the untruncated ones. For comparison, direct-search results as well as collider results from the CMS monojet [14] and monophoton [16] studies are shown. Results are also shown from a search for invisible decays of the Higgs boson [96], interpreted in a Higgs-portal model [97,98], in which a Higgs boson with a mass of 125 GeV acts as a mediator between scalar DM and SM particles.
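A toy sketch of the R Λ computation; the exponential Q tr spectrum used here is purely illustrative (the analysis evaluates R Λ on generator-level events):

```python
import random

def truncation_ratio(q_tr_sample, lam, g_eff):
    """R_Lambda: fraction of events whose momentum transfer lies below
    the mediator-mass proxy sqrt(g_q*g_chi)*Lambda = g_eff*lam."""
    cut = g_eff * lam
    return sum(q < cut for q in q_tr_sample) / len(q_tr_sample)

# Toy Q_tr spectrum in GeV with a 400 GeV mean -- illustrative only.
random.seed(1)
sample = [random.expovariate(1.0 / 400.0) for _ in range(10000)]
print(truncation_ratio(sample, lam=1000.0, g_eff=1.0))
```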
The central (solid) line corresponds to the Higgs-nucleon coupling value (0.326) from a lattice calculation [99], and the upper (dot-dashed) and lower (dashed) lines correspond to the maximum (0.629) and minimum (0.260) values from the MILC Collaboration [100].
The expected and observed limits on the effective cutoff scale Λ as a function of the DM particle mass m χ are listed in Tables 4 and 5 for the operators D5 and D8, and in Tables 6 and 7 for the operators D9 and C3. The results are also expressed as limits on the DM-nucleon cross section σ χN , to allow comparison with the results from direct searches for DM particles.

Figure 8 shows the limits from the operators D5 and D8 translated into upper limits on the DM annihilation rate σv relevant to indirect astrophysical searches [86], where σ is the annihilation cross section, v is the relative velocity of the annihilating particles, and the quantity σv is averaged over the DM velocity distribution. In this paper, a particular astrophysical environment with v 2 = 0.24 is considered, corresponding to the epoch of the early universe in which DM froze out, producing the thermal relic abundance. A 100% branching fraction for DM annihilation to quarks is assumed.

Figure 7: The 90% CL upper limits on the DM-nucleon cross section as a function of the DM particle mass. Left: spin-dependent limits for axial-vector (D8) and tensor (D9) couplings of Dirac fermion DM candidates, together with direct-search results from the PICO [101], XENON100 [102], and IceCube [7] collaborations. Right: spin-independent limits for vector couplings of complex scalar (C3) and Dirac fermion (D5) DM candidates, together with CDMSlite [8] and LUX [11] results, as well as Higgs-portal scalar DM results from CMS [96] with central (solid), minimum (dashed), and maximum (dot-dashed) values of the Higgs-nucleon coupling. Collider results from the CMS monojet [14] and monophoton [16] searches, interpreted in both spin-dependent and spin-independent scenarios, are shown for comparison. The truncated limits for D5, D8, D9, and C3 with √(g q g χ ) = 1 are presented as dashed lines in the same shade as the untruncated ones.
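The translation from Λ to σ χN scales as μ 2 /Λ 4 , with μ the DM-nucleon reduced mass; the operator-dependent prefactors and unit conversions [83,95] are deliberately omitted in this illustrative sketch:

```python
def reduced_mass(m_chi, m_n=0.939):
    """DM-nucleon reduced mass in GeV (nucleon mass ~0.939 GeV)."""
    return m_chi * m_n / (m_chi + m_n)

def sigma_scaling(m_chi, lam):
    """Relative DM-nucleon cross section: sigma ~ mu^2 / Lambda^4.
    The operator-dependent prefactor (and the GeV^-2 -> cm^2
    conversion) is omitted; only the dependence on m_chi and
    Lambda is shown."""
    mu = reduced_mass(m_chi)
    return mu**2 / lam**4

# For heavy DM the reduced mass saturates at the nucleon mass, so at
# fixed Lambda the translated limit flattens as a function of m_chi:
print(sigma_scaling(10.0, 1000.0) / sigma_scaling(1000.0, 1000.0))
```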
The corresponding truncated limits for D5 and D8 with coupling √(g q g χ ) = 1 are also presented as dashed lines in the same shade as the untruncated ones. The value of σv required for DM particles to make up the relic abundance is labeled "Thermal relic value" and shown as a red dotted line. With this constraint on the annihilation rate, Dirac fermion DM is ruled out at 95% CL for m χ < 6 GeV in the case of vector coupling and for m χ < 30 GeV in the case of axial-vector coupling. Indirect-search results from H.E.S.S. [103] and Fermi-LAT [104] are shown for comparison; these results have been multiplied by a factor of two because they assume Majorana rather than Dirac fermions.
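The Majorana-to-Dirac comparison amounts to a simple rescaling of the published ⟨σv⟩ limits (a sketch; the function name is ours):

```python
def to_dirac_limit(sigma_v_majorana):
    """Rescale a <sigma v> limit quoted for Majorana DM for comparison
    with Dirac-fermion limits: for a Dirac population only
    particle-antiparticle pairs annihilate, weakening the limit by a
    factor of two."""
    return 2.0 * sigma_v_majorana
```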

Unparticle interpretation
In the unparticle scenario, the 95% CL upper limits on the coupling constant λ between the unparticle and the SM fields, for fixed effective cutoff scales Λ U = 10 TeV and 100 TeV, are shown as functions of the scaling dimension d U on the left of Fig. 9. The right-hand plot of Fig. 9 presents the 95% CL lower limits on the effective cutoff scale Λ U for a fixed coupling λ = 1, and compares the result with the limits obtained from the CMS monojet search [14] and from a reinterpretation of LEP searches [37]. The search presented in this paper (labeled "monoZ") gives the most stringent limits. Tables 8 and 9 show the 95% CL upper limits on the coupling λ between unparticles and the SM fields for values of the scaling dimension d U in the range from 1.01 to 2.2, with fixed effective cutoff scales of 10 TeV and 100 TeV. Lower limits at 95% CL on the effective cutoff scale Λ U are also given.

Figure 8: The 95% CL upper limits on the DM annihilation rate σv for χχ → qq as a function of the DM particle mass, for vector (D5) and axial-vector (D8) couplings of Dirac fermion DM. A 100% branching fraction for DM annihilation to quarks is assumed. Indirect-search results from H.E.S.S. [103] and Fermi-LAT [104] are also plotted. The value required for DM particles to account for the relic abundance is labeled "Thermal relic value" and is shown as a red dotted line. The truncated limits for D5 and D8 with √(g q g χ ) = 1 are presented as dashed lines in the same shade as the untruncated ones.

Figure 9: Left: the 95% CL upper limits on the coupling λ between the unparticle and SM fields for fixed effective cutoff scales Λ U = 10 and 100 TeV. The inset provides an expanded view of the limits at low scaling dimension. Right: the 95% CL lower limits on the unparticle effective cutoff scale Λ U for a fixed coupling λ = 1. The results from the CMS monojet search [14] and from a reinterpretation of LEP searches [37] are shown for comparison. The excluded region is indicated by the shading.

Model-independent limits
As an alternative to the interpretation of the results in specific models, a single-bin analysis is applied to obtain model-independent expected and observed 95% CL upper limits on the contribution to the visible Z+E T miss cross section from sources beyond the standard model.
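A minimal single-bin counting-experiment limit in the spirit of this model-independent result can be sketched as follows; this is a simplified construction with no background uncertainty, whereas the analysis itself uses a complete statistical treatment including systematic uncertainties:

```python
import math

def pois_cdf(n, mu):
    """P(N <= n) for a Poisson distribution with mean mu."""
    return math.exp(-mu) * sum(mu**k / math.factorial(k) for k in range(n + 1))

def upper_limit(n_obs, b, cl=0.95, tol=1e-6):
    """95% CL upper limit on the signal yield s in a single bin:
    the largest s with P(N <= n_obs | s + b) >= 1 - cl, found by
    bisection. Background b is assumed known exactly."""
    lo, hi = 0.0, 10.0 * (n_obs + b + 10.0)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if pois_cdf(n_obs, mid + b) >= 1.0 - cl:
            lo = mid
        else:
            hi = mid
    return lo

# With zero observed events and no background, the familiar s < 3.0 appears:
print(round(upper_limit(0, 0.0), 2))  # 3.0
```

Dividing the resulting limit on the signal yield by the integrated luminosity and the selection efficiency converts it into a limit on the visible cross section.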

Summary
A search for evidence of particle dark matter (DM) and unparticle production at the LHC has been performed in events containing two charged leptons, consistent with the decay of a Z boson, and large missing transverse momentum. The study is based on a data set corresponding to an integrated luminosity of 19.7 fb −1 of pp collisions collected with the CMS detector at a center-of-mass energy of 8 TeV. The results are consistent with the expected standard model contributions and are interpreted in two scenarios for physics beyond the standard model: dark matter and unparticles. Model-independent 95% confidence level upper limits are set on contributions to the visible Z+E T miss cross section from sources beyond the standard model. Upper limits at 90% confidence level are set on the DM-nucleon scattering cross section as a function of the DM particle mass for both spin-dependent and spin-independent cases. Limits are also set on the DM annihilation rate, assuming a 100% branching fraction for annihilation to quarks, and on the effective cutoff scale Λ. In addition, the most stringent limits to date at 95% confidence level are obtained on the coupling between unparticles and the standard model fields, as well as on the effective cutoff scale Λ U , as functions of the unparticle scaling dimension.

Acknowledgments
We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centres and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses.