W+W− boson pair production in proton-proton collisions

A measurement of the W+W− boson pair production cross section in proton-proton collisions at √s = 13 TeV is presented. The data used in this study are collected with the CMS detector at the CERN LHC and correspond to an integrated luminosity of 35.9 fb−1. The W+W− candidate events are selected by requiring two oppositely charged leptons (electrons or muons). Two methods for reducing background contributions are employed. In the first one, a sequence of requirements on kinematic quantities is applied, allowing a measurement of the total production cross section: 117.6 ± 6.8 pb, which agrees well with the theoretical prediction. Fiducial cross sections are also reported for events with zero or one jet, and the change in the zero-jet fiducial cross section with the jet transverse momentum threshold is measured. Normalized differential cross sections are reported within the fiducial region. A second method for suppressing background contributions employs two random forest classifiers. The analysis based on this method includes a measurement of the total production cross section and also a measurement of the normalized jet multiplicity distribution in W+W− events. Finally, a dilepton invariant mass distribution is used to probe for physics beyond the standard model in the context of an effective field theory, and constraints on the presence of dimension-6 operators are derived. Published in Physical Review D as doi:10.1103/PhysRevD.102.092001. © 2020 CERN for the benefit of the CMS Collaboration. CC-BY-4.0 license. *See Appendix A for the list of collaboration members. arXiv:2009.00119v2 [hep-ex] 10 Nov 2020


Introduction
The standard model (SM) description of electroweak and strong interactions can be tested through measurements of the W + W − boson pair production cross section at a hadron collider. Aside from tests of the SM, W + W − production represents an important background for new particle searches. The W + W − cross section has been measured in proton-antiproton collisions at √ s = 1.96 TeV [1, 2] and in proton-proton (pp) collisions at 7 and 8 TeV [3-6]. More recently, the ATLAS Collaboration published measurements with pp collision data at 13 TeV [7].
The SM production of W + W − pairs proceeds mainly through three processes: the dominant qq annihilation process; the gg → W + W − process, which occurs at higher order in perturbative quantum chromodynamics (QCD); and the Higgs boson process H → W + W − , which is roughly ten times smaller than the other processes and is considered a background in this analysis. A calculation of the W + W − production cross section in pp collisions at √ s = 13 TeV gives the value 118.7 +3.0 −2.6 pb [8]. This calculation includes the qq annihilation process calculated at next-to-next-to-leading order (NNLO) precision in perturbative QCD and a contribution of 4.0 pb from the gg → W + W − gluon fusion process calculated at leading order (LO). The uncertainties reflect the dependence of the calculation on the QCD factorization and renormalization scales. For the analysis presented in this paper, the gg → W + W − contribution is corrected by a factor of 1.4, which comes from the ratio of the gg → W + W − cross section at next-to-leading order (NLO) to the same cross section at LO [9]. A further adjustment of −1.2% for the qq annihilation process is applied to account for electroweak corrections [10]. Our evaluation of uncertainties from parton distribution functions (PDFs) and the strong coupling α S amounts to 2.0 pb. Taking all corrections and uncertainties together, the theoretical cross section used in this paper for the inclusive W + W − production at √ s = 13 TeV is σ NNLO tot = 118.8 ± 3.6 pb.
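The corrections described above can be combined in a short worked calculation (illustrative bookkeeping; the precise combination in Ref. [8] may differ in detail):

```latex
\begin{aligned}
\sigma_{q\bar q}^{\text{NNLO}} &= 118.7 - 4.0 = 114.7~\text{pb} \quad (\text{NNLO } q\bar q \text{ part}),\\
\sigma_{q\bar q}^{\text{NNLO+EW}} &\approx 114.7 \times (1 - 0.012) \approx 113.3~\text{pb},\\
\sigma_{gg}^{\text{NLO}} &\approx 1.4 \times 4.0 = 5.6~\text{pb},\\
\sigma_{\text{tot}}^{\text{NNLO}} &\approx 113.3 + 5.6 \approx 118.8~\text{pb},\\
\Delta\sigma &\approx \sqrt{3.0^{2} + 2.0^{2}} \approx 3.6~\text{pb},
\end{aligned}
```

where the last line combines the scale uncertainty (+3.0/−2.6 pb, symmetrized here to ±3.0 pb) in quadrature with the ±2.0 pb PDF and αS uncertainty.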
This paper reports studies of W + W − production in pp collisions at √ s = 13 TeV with the CMS detector at the CERN LHC. Two analyses are performed using events that contain a pair of oppositely charged leptons (electrons or muons); they differ in the way background contributions are reduced. The first method is based on techniques described in Refs. [4-6]; the analysis based on this method is referred to as the "sequential cut analysis." A second, newer approach makes use of random forest classifiers [11-13] trained with simulated data to differentiate signal events from Drell-Yan (DY) and top quark backgrounds; this analysis is referred to as the "random forest analysis." The two methods complement one another. The sequential cut analysis separates events with same-flavor (SF) or different-flavor (DF) lepton pairs, and also events with zero or one jet. As a consequence, background contributions from the Drell-Yan production of lepton pairs can be controlled. Furthermore, the impact of theoretical uncertainties due to missing higher-order QCD calculations is kept under control through access to both the zero- and one-jet final states. The random forest analysis does not separate SF and DF lepton pairs and does not separate events with different jet multiplicities. Instead, it combines kinematic and topological quantities to achieve a high sample purity. The contamination from top quark events, which is not negligible in the sequential cut analysis, is significantly smaller in the random forest analysis. The random forest technique allows for flexible control over the top quark background contamination, which is exploited to study the jet multiplicity in W + W − signal events. However, the sensitivity of the random forest to QCD uncertainties is significantly larger than that of the sequential cut analysis, as discussed in Section 9.1.
Total W + W − production cross sections are reported in Section 9.1 for both analyses based on fits to the observed yields. Cross sections in a specific fiducial region are reported in Section 9.2 for the sequential cut analysis; these cross sections are separately reported for W + W − → ℓ + ν ℓ − ν events with zero or one jet (ℓ refers to electrons and muons). Also, the change in the zero-jet W + W − cross section with variations in the jet transverse momentum (p T ) threshold is measured.
Normalized differential cross sections within the fiducial region are also reported in Section 10. The normalization reduces both theoretical and experimental uncertainties. The impact of experimental resolutions is removed using a fitting technique that builds templates of reconstructed quantities mapped onto generator-level quantities. Comparisons to NLO predictions are presented.
The distribution of exclusive jet multiplicities for W + W − production is interesting given the sensitivity of previous results to a "jet veto" in which events with one or more jets were rejected [2-4, 6]. In Section 11, this paper reports a measurement of the normalized jet multiplicity distribution based on the random forest analysis.
Finally, the possibility of anomalous production of W + W − events that can be modeled by higher-dimensional operators beyond the dimension-4 operators of the SM is probed using events with an electron-muon final state. Such operators arise in an effective field theory expansion of the Lagrangian and each appears with its own Wilson coefficient [14,15]. Distributions of the electron-muon invariant mass m eµ are used because they are robust against mismodeling of the W + W − transverse boost, and are sensitive to the value of the Wilson coefficients associated with the dimension-6 operators. The observed distributions provide no evidence for anomalous events. Limits are placed on the coefficients associated with dimension-6 operators in Section 12.

The CMS detector
The central feature of the CMS apparatus is a superconducting solenoid of 6 m internal diameter, providing a magnetic field of 3.8 T. Within the solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections. Forward calorimeters extend the pseudorapidity (η) coverage provided by the barrel and endcap detectors. Muons are detected in gas-ionization chambers embedded in the steel flux-return yoke outside the solenoid. The first level of the CMS trigger system [16], composed of custom hardware processors, is designed to select the most interesting events within a time interval of less than 4 µs, using information from the calorimeters and muon detectors, with an output rate of up to 100 kHz. The high-level trigger processor farm further reduces the event rate to about 1 kHz before data storage. A more detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, can be found in Ref. [17].

Data and simulated samples
A sample of pp collision data collected in 2016 with the CMS experiment at the LHC at √ s = 13 TeV is used for this analysis; the total integrated luminosity is 35.9 ± 0.9 fb −1 .
Events are stored for analysis if they satisfy the selection criteria of online triggers [16] requiring the presence of one or two isolated leptons (electrons or muons) with high p T . The lowest p T thresholds for the double-lepton triggers are 17 GeV for the leading lepton and 12 (8) GeV when the trailing lepton is an electron (muon). The single-lepton triggers have p T thresholds of 25 and 20 GeV for electrons and muons, respectively. The trigger efficiency is measured using Z → ℓ + ℓ − events and is larger than 98% for W + W − events, with an uncertainty of about 1%.
Several Monte Carlo (MC) event generators are used to simulate the signal and background processes. The simulated samples are used to optimize the event selection, evaluate selection efficiencies and systematic uncertainties, and compute expected yields. The production of W + W − events via qq annihilation (qq → W + W − ) is generated at NLO precision with POWHEG V2 [18][19][20][21][22][23], and W + W − production via gluon fusion (gg → W + W − ) is generated at LO using MCFM v7.0 [24]. The production of Higgs bosons is generated with POWHEG [23] and H → W + W − decays are generated with JHUGEN V5.2.5 [25]. Events for other diboson and triboson production processes are generated at NLO precision with MADGRAPH5 aMC@NLO 2.2.2 [26]. The same generator is used for simulating Z+jets, which includes Drell-Yan production, and Wγ * event samples. Finally, the top quark final states tt and tW are generated at NLO precision with POWHEG [27,28]. The PYTHIA 8.212 [29] package with the CUETP8M1 parameter set (tune) [30] and the NNPDF 2.3 [31] PDF set are used for hadronization, parton showering, and the underlying event simulation. For top quark processes, the NNPDF 3.0 PDF set [32] and the CUETP8M2T4 tune [33] are used.
The quality of the signal modeling is improved by applying weights to the W + W − POWHEG events such that the NNLO calculation [8] of the transverse momentum spectrum of the W + W − system, p WW T , is reproduced.
For all processes, the detector response is simulated using a detailed description of the CMS detector, based on the GEANT4 package [34]. Events are reconstructed with the same algorithms as for data. The simulated samples include additional interactions per bunch crossing (pileup) with a vertex multiplicity distribution that closely matches the observed one.

Event reconstruction
Events are reconstructed using the CMS particle-flow (PF) algorithm [35], which combines information from the tracker, calorimeters, and muon systems to create objects called PF candidates that are subsequently identified as charged and neutral hadrons, photons, muons, and electrons.
The primary pp interaction vertex is defined to be the one with the largest value of the sum of p 2 T for all physics objects associated with that vertex. These objects include jets clustered using the jet finding algorithm [36,37] with the tracks assigned to the primary vertex as inputs, and the associated missing transverse momentum vector. All neutral PF candidates and charged PF candidates associated with the primary vertex are clustered into jets using the anti-k T clustering algorithm [36] with a distance parameter of R = 0.4. The missing transverse momentum vector is the negative vector sum of the transverse momenta of all charged and neutral PF candidates; its magnitude is denoted by p miss T . The effects of pileup are mitigated as described in Refs. [38,39].
Jets originating from b quarks are identified by a multivariate algorithm called the combined secondary vertex algorithm CSV v2 [40,41], which combines information from tracks, secondary vertices, and low-momentum electrons and muons associated with the jet. Two working points are used in this analysis for jets with p T > 20 GeV. The "loose" working point has an efficiency of approximately 88% for jets originating from the hadronization of b quarks typical in tt events and a mistag rate of about 10% for jets originating from the hadronization of light-flavor quarks or gluons. The "medium" working point has a b tagging efficiency of about 64% for b jets in tt events and a mistag rate of about 1% for light-flavor quark and gluon jets.
Electron candidates are reconstructed from clusters in the ECAL that are matched to a track reconstructed with a Gaussian-sum filter algorithm [42]. The track is required to be consistent with originating from the primary vertex. The sum of the p T of PF candidates within a cone of size ∆R = √((∆η)^2 + (∆φ)^2) < 0.3 around the electron direction, excluding the electron itself, is required to be less than about 6% of the electron p T . Charged PF candidates are included in the isolation sum only if they are associated with the primary vertex. The average contribution from neutral PF candidates not associated with the primary vertex, estimated from simulation as a function of the energy density in the event and the η direction of the electron candidate, is subtracted before comparing to the electron momentum.
Muon candidates are reconstructed by combining signals from the muon subsystems together with those from the tracker [43,44]. The track reconstructed in the silicon pixel and strip detector must be consistent with originating from the primary vertex. The sum of the p T of the additional PF candidates within a cone of size ∆R < 0.4 around the muon direction is required to be less than 15% of the muon p T after applying a correction for neutral PF candidates not associated with the primary vertex, analogous to the electron case.
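The relative-isolation computation used for both lepton flavors can be sketched as follows. This is a minimal stand-alone illustration: the dictionary-based candidate model is hypothetical, and the pileup correction for neutral candidates described in the text is omitted.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular separation ΔR = sqrt(Δη² + Δφ²), with Δφ wrapped to [-π, π]."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def relative_isolation(lepton, pf_candidates, cone=0.3):
    """Sum the pT of PF candidates inside a cone around the lepton
    (excluding the lepton itself) and divide by the lepton pT.
    Use cone=0.3 for electrons and cone=0.4 for muons, as in the text."""
    iso_sum = sum(
        c["pt"] for c in pf_candidates
        if c is not lepton
        and delta_r(lepton["eta"], lepton["phi"], c["eta"], c["phi"]) < cone
    )
    return iso_sum / lepton["pt"]
```

An electron would then be kept if `relative_isolation(ele, candidates, cone=0.3) < 0.06`, and a muon if the corresponding value with `cone=0.4` is below 0.15.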

Event selection
The key feature of the W + W − channel is the presence of two oppositely charged leptons that are isolated from any jet activity and have relatively large p T . The two methods for isolating a W + W − signal, the sequential cut method and the random forest method, both require two oppositely charged, isolated electrons or muons that have sufficient p T to ensure good trigger efficiency. The lepton reconstruction, selection, and isolation criteria are the same for the two methods as are most of the kinematic requirements detailed below.
The largest background contributions come from the Drell-Yan production of lepton pairs and from tt events in which both top quarks decay leptonically. Drell-Yan events can be suppressed by selecting events with one electron and one muon (i.e., DF leptons) and, in events with SF leptons, by vetoing the region around the Z boson resonance peak. Contributions from tt events can be reduced by rejecting events with b-tagged jets.
Another important background contribution arises from events with one or more jets produced in association with a single W boson. A nonprompt lepton from a jet could be selected with charge opposite to that of the prompt lepton from the W boson decay. This background contribution is estimated with two techniques based on specially selected events. In the sequential cut analysis, the calculation hinges on the probability for a nonprompt lepton to be selected, whereas in the random forest selection, it depends on a sample of events with two leptons of equal charge.
Except where noted, W + W − events with τ leptons decaying to electrons or muons are included as signal.

Sequential cut selection
The sequential cut selection imposes a set of discrete requirements on kinematic and topological quantities, as well as on the output of a multivariate classifier used to suppress the Drell-Yan background in events with SF leptons.
The lepton p T requirements ensure a good reconstruction and identification efficiency: the leading lepton must have p max T > 25 GeV, and the trailing lepton must have p min T > 20 GeV. Pseudorapidity ranges are designed to cover regions of good reconstruction quality: for electrons, the ECAL supercluster must satisfy |η| < 1.479 or 1.566 < |η| < 2.5, and for muons, |η| < 2.4. To avoid low-mass resonances and leptons from decays of hadrons, the dilepton invariant mass must be large enough: m ℓℓ > 20 GeV. The transverse momentum of the lepton pair is required to satisfy p ℓℓ T > 30 GeV to reduce background contributions from nonprompt leptons. Events with a third, loosely identified lepton with p T > 10 GeV are rejected to reduce background contributions from WZ and ZZ (i.e., VZ) production.
The missing transverse momentum is required to be larger than 20 GeV. To make the analysis insensitive to instrumental p miss T caused by mismeasurements of the lepton momenta, a so-called "projected p miss T ", denoted p miss,proj T , is defined as follows. The lepton closest to the p miss T vector is identified, and the azimuthal angle ∆φ between the p T of this lepton and p miss T is computed. The quantity p miss,proj T is the component of p miss T perpendicular to the lepton p T . When |∆φ| < π/2, p miss,proj T is required to be larger than 20 GeV. The same requirement is imposed using the projected p miss T vector reconstructed from only the charged PF candidates associated with the primary vertex: p miss,track proj T > 20 GeV.
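The projected p miss T definition above can be sketched as a small stand-alone function; the function name and inputs are illustrative, not taken from the analysis code.

```python
import math

def projected_met(met_pt, met_phi, lepton_phis):
    """Projected missing transverse momentum, as described in the text:
    find the lepton closest in azimuth to the p_T^miss vector; if that
    angle Δφ is smaller than π/2, keep only the component of p_T^miss
    perpendicular to the lepton direction, otherwise keep the full
    magnitude."""
    def dphi(a, b):
        d = abs(a - b) % (2 * math.pi)
        return min(d, 2 * math.pi - d)

    min_dphi = min(dphi(met_phi, phi) for phi in lepton_phis)
    if min_dphi < math.pi / 2:
        return met_pt * math.sin(min_dphi)  # perpendicular component
    return met_pt
```

When the missing momentum points along a lepton (a typical signature of a mismeasured lepton rather than a neutrino), the projected value collapses toward zero and the event fails the 20 GeV requirement.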
The selection criteria are tightened for SF final states, where the contamination from Drell-Yan events is much larger. Events with m ℓℓ within 15 GeV of the Z boson mass m Z are discarded, and the minimum m ℓℓ is increased to 40 GeV. The p miss T requirement is raised to 55 GeV. Finally, a multivariate classifier called DYMVA [45,46], based on a boosted decision tree, is used to discriminate against the Drell-Yan background.
Only events with zero or one reconstructed jet with p J T > 30 GeV and |η J | < 4.7 are used in the analysis. Jets falling within ∆R < 0.4 of a selected lepton are discarded. To suppress top quark background contributions, events with one or more jets tagged as b jets using the CSV v2 loose working point and with p b T > 20 GeV are also rejected. Table 1 summarizes the event selection criteria and Table 2 lists the sample composition after the fits described in Section 7 have been executed. Example kinematic distributions are shown in Fig. 1 for events with no jets and in Fig. 2 for events with exactly one jet. The simulations reproduce the observed distributions well.

Random forest selection
A random forest (RF) classifier is an aggregate of binary decision trees that have been trained independently and in parallel [11]. Each individual tree uses a random subset of features, which mitigates overfitting, a problem that challenges other classifiers based on decision trees. The random forest classifier is effective when there are many trees, since the aggregation of many trees averages out potential overfitting by individual trees. A random forest classifier is expected to improve monotonically without overfitting [12], in contrast to other methods. Building a random forest classifier requires less tuning of hyperparameters than, for example, boosted decision trees, while its performance is just as good [13].
The random forest analysis begins with a preselection that is close to the first set of requirements in the sequential cut analysis. The selection of electrons and muons is identical. To avoid low-mass resonances and leptons from decays of hadrons, m ℓℓ > 30 GeV is required for both DF and SF events. To suppress the large background contribution from Z boson decays, events with SF leptons and with m ℓℓ within 15 GeV of the Z boson mass are rejected. Events with a third, loosely identified lepton with p T > 10 GeV are rejected to reduce backgrounds from VZ production. Finally, events with one or more b-tagged jets (p b T > 20 GeV and medium working point) are rejected, since the background from tt production is characterized by the presence of b jets whereas the signal is not. These requirements are known as the preselection requirements.
After the preselection, the largest background contamination comes from Drell-Yan production of lepton pairs and tt production with both top quarks producing prompt leptons. To reduce these backgrounds, two independent random forest classifiers are constructed: an anti-Drell-Yan classifier optimized to distinguish Drell-Yan events from W + W − signal events, and an anti-tt classifier optimized to distinguish tt events from W + W − events. The classifiers produce scores, S DY and S tt , arranged so that signal appears mainly at S DY ≈ 1 and S tt ≈ 1, while backgrounds appear mainly at S DY ≈ 0 and S tt ≈ 0. Figure 3 shows the distributions of the scores for the two random forest classifiers. The signal region is defined by the requirements S DY > S min DY and S tt > S min tt . For the cross section measurement, the specific values S min DY = 0.96 and S min tt = 0.6 are set by simultaneously minimizing the uncertainty in the cross section and maximizing the purity of the selected sample. For measuring the jet multiplicity, a lower value of S min tt = 0.2 is used, which increases the efficiency for W + W − events with jets. A Drell-Yan control region is defined by S DY < 0.6 and S tt > 0.6, and a tt control region is defined by S DY > 0.6 and S tt < 0.6. The event selection used in this measurement is summarized in Table 1.
The architecture of the two random forest classifiers is determined through an optimization of hyperparameters explored in a grid-like fashion. The optimal architecture for this problem has 50 trees with a maximum tree depth of 20; the minimum number of samples per split is 50, and the minimum number of samples for a leaf is one. The maximum number of features seen by any single tree is the square root of the total number of features (ten for the DY random forest and eight for the tt random forest).

Table 2: Sample composition for the sequential cut and random forest selections after the fits described in Section 7 have been executed; the uncertainties shown are based on the total uncertainty obtained from the fit. The purity is the fraction of selected events that are W + W − signal events. "Observed" refers to the number of events observed in the data.

The random forest classifiers take as input some of the kinematic quantities listed in Table 1 and several other event features listed in Table 3. These include the invariant mass of the two leptons and the missing momentum vector, m(ℓℓ, p miss T ); the azimuthal angle between the lepton pair and the missing momentum vector, ∆φ(ℓℓ, p miss T ); the smallest azimuthal angle between either lepton and any reconstructed jet, ∆φ(ℓ, J); and the smallest azimuthal angle between the missing momentum vector and any jet, ∆φ(p miss T , J). The random forest classifiers also make use of the scalar sum of jet transverse momenta, H T , and of the vector sum of the jet transverse momenta, referred to as the recoil in the event.
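A classifier with the quoted hyperparameters can be sketched with scikit-learn's RandomForestClassifier. The hyperparameter values are those given in the text; the toy Gaussian features below merely stand in for the kinematic inputs of Table 3 and are purely illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def make_classifier(seed=0):
    """Random forest with the architecture described in the text."""
    return RandomForestClassifier(
        n_estimators=50,       # 50 trees
        max_depth=20,          # maximum tree depth
        min_samples_split=50,  # minimum number of samples per split
        min_samples_leaf=1,    # minimum number of samples for a leaf
        max_features="sqrt",   # sqrt of the feature count per tree
        random_state=seed,
    )

# Toy training sample: two well-separated Gaussian blobs standing in for
# background (label 0, e.g. DY or ttbar) and WW signal (label 1).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (500, 4)),
               rng.normal(3.0, 1.0, (500, 4))])
y = np.concatenate([np.zeros(500), np.ones(500)])

clf = make_classifier().fit(X, y)
scores = clf.predict_proba(X)[:, 1]  # analogous to S_DY or S_tt, in [0, 1]
```

A signal region would then be selected by a threshold on the score, e.g. `scores > 0.96`, mirroring the S min DY requirement in the text.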
The sample composition for the signal region is summarized in Table 2. The signal efficiency and purity are higher than in the sequential cut analysis.

Background estimation
A combination of methods based on data control samples and simulations are used to estimate background contributions. The methods used in the sequential cut analysis and the random forest analysis are similar. The differences are described below.
The largest background contribution comes from tt and single top production which together are referred to as top quark production. This contribution arises when b jets are not tagged either because they fall outside the kinematic region where tagging is possible or because they receive low scores from the CSV v2 b tagging algorithm. The sequential cut analysis defines a control region by requiring at least one b-tagged jet. The normalization of the top quark background in the signal region is set according to the number of events in this control region.
Similarly, the random forest analysis defines a top quark control region on the basis of scores: S DY > 0.6 and S tt < 0.6. Many kinematic distributions are examined and all show good agreement between simulation and data in this control region. This control region is used to set the normalization of the top quark background in the signal region.
The next largest background contribution comes from the Drell-Yan process, which is larger in the SF channel than in the DF channel. The nature of these contributions is somewhat different. The SF contribution arises mainly from the portion of Drell-Yan production that falls below or above the Z resonance peak. The sequential cut analysis calibrates this contribution using the observed number of events in the Z peak and the ratio of the numbers of events inside and outside the peak, as estimated from simulations. The DF contribution arises from Z → τ + τ − production with both τ leptons decaying leptonically. The sequential cut analysis verifies the Z → τ + τ − background using a control region defined by m eµ < 80 GeV and inverted p T requirements. The random forest analysis defines a Drell-Yan control region by S DY < 0.6 and S tt > 0.6, which includes both SF and DF events. Simulations of kinematic distributions for events in this region match the data well, and the yield of events in this region is used to normalize the Drell-Yan background contribution in the signal region.
The next most important background contribution comes mainly from W boson events in which a nonprompt lepton from a jet is selected in addition to a lepton from the W boson decay. Monte Carlo simulation cannot be used for an accurate estimate of this contribution, but it can be used to devise and evaluate an estimate based on control samples. In the sequential cut analysis, a "pass-fail" control sample is defined by one lepton that passes the lepton selection criteria and another that fails the criteria but passes looser criteria. The misidentification rate f for a jet that satisfies the loose lepton requirements to also pass the standard lepton requirements is determined using an event sample dominated by multijet events with nonprompt leptons. This misidentification rate is parameterized as a function of lepton p T and η and is used to compute weights f /(1 − f ) in the pass-fail sample that determine the contribution of nonprompt leptons in the signal region [46,47]. The random forest analysis uses a different method, based on a control region in which the two leptons have the same charge. This control region is dominated by W+jets events, with contributions from diboson and other events. The transfer factor relating the number of same-sign events in the control region to the number of opposite-sign events in the signal region is determined with two methods relying on data control samples, both of which are validated using simulations. One method uses events with DF leptons and low p miss T , and the other uses events with an inverted isolation requirement. The two methods yield values for the transfer factor that are consistent at the 16% level.
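The pass-fail weighting can be sketched as follows; the event model and the misidentification-rate function are illustrative placeholders, with f taken as a function of the failing lepton's p T and η as described in the text.

```python
def nonprompt_estimate(pass_fail_events, misid_rate):
    """Estimate the nonprompt-lepton background in the signal region from
    the 'pass-fail' control sample: each event contributes a weight
    f / (1 - f), where f is the misidentification rate evaluated at the
    (pT, eta) of the lepton that failed the standard selection."""
    total = 0.0
    for event in pass_fail_events:
        f = misid_rate(event["fail_pt"], event["fail_eta"])
        total += f / (1.0 - f)
    return total
```

For example, with a flat misidentification rate of 0.2, each pass-fail event carries a weight of 0.25, so four such events predict one nonprompt-lepton event in the signal region.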
Background contamination from Wγ * events with low-mass γ * → ℓ + ℓ − can satisfy the signal event selection when the transverse momenta of the two leptons are very different [46]. The predicted contribution in the signal region is normalized to the number of events in a control region with three muons satisfying p T > 10, 5, and 3 GeV and with m γ * < 4 GeV. In this control region, the requirement p miss T < 25 GeV is imposed in order to suppress non-Wγ * events.
The remaining minor sources of background, including diboson and triboson final states and Higgs-mediated W + W − production, are evaluated using simulations normalized to the most precise theoretical cross sections available.

Signal extraction
The cross sections are obtained by simultaneously fitting the predicted yields to the observed yields in the signal and control regions. In this fit, a signal strength parameter modifies the predicted signal yield defined by the central value of the theoretical cross section, σ NNLO tot = 118.8 ± 3.6 pb. The fitted value of the signal strength is expected to be close to unity if the SM is valid, and the measured cross section is the product of the signal strength and the theoretical cross section. Information from control regions is incorporated in the analysis through additional parameters that are free in the fit; the predicted background in the signal region is thereby tied to the yields in the control regions. In the sequential cut analysis, there is one control region enriched in tt events; the yields in the signal and this one control region are fit simultaneously. Since the selected event sample is separated according to SF and DF, 0-and 1-jet selections, there are eight fitted yields. In the random forest analysis there are three control regions, one for Drell-Yan background, a second for tt background, and a third for events with nonprompt leptons (e.g., W+jets). Since SF and DF final states are analyzed together, and the selection does not explicitly distinguish the number of jets, there are four fitted yields in the random forest analysis. In both analyses, the yields in the control regions effectively constrain the predicted backgrounds in the signal regions.
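In this scheme the measured cross section is simply the product of the fitted signal strength and the theoretical prediction; for illustration, the total cross section quoted in the abstract corresponds to

```latex
\sigma_{\text{meas}} = \hat\mu \,\sigma_{\text{tot}}^{\text{NNLO}},
\qquad
\hat\mu \approx \frac{117.6~\text{pb}}{118.8~\text{pb}} \approx 0.99 .
```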
Additional nuisance parameters are introduced in the fit that encapsulate important sources of systematic uncertainty, including the electron and muon efficiencies, b tagging efficiencies, the jet energy scale, and the predicted individual contributions to the background. The total signal strength uncertainty, including all systematic uncertainties, is determined by the fit with all parameters free; the statistical uncertainty is determined by fixing all parameters except the signal strength to their optimal values.

Systematic uncertainties
Experimental and theoretical sources of systematic uncertainty are described in this section. A summary of all systematic uncertainties for the cross section measurement is given in Table 4. These sources of uncertainty impact the measurements of the cross section through the normalization of the signal. Many of them also impact kinematic distributions that ultimately can alter the shapes of distributions studied in this analysis. Both normalization and shape uncertainties are evaluated.

Experimental sources of uncertainty
There are several sources of experimental systematic uncertainties, including the lepton efficiencies, the b-tagging efficiency for b quark jets and the mistag rate for light-flavor quark and gluon jets, the lepton momentum and energy scales, the jet energy scale and resolution, the modeling of p miss T and of pileup in the simulation, the background contributions, and the integrated luminosity.
The sequential cut and the random forest analyses both use control regions to estimate the background contributions in the signal region. The uncertainties in the estimates are determined mainly by the statistical power of the control regions, though the uncertainty of the theoretical cross sections and the shape of the Z resonant peak also play a role. Sources of systematic uncertainty of the estimated Drell-Yan background include the Z resonance line shape and the performance of the DYMVA classifier for different p miss T thresholds. These uncertainties are propagated directly to the predicted SF and DF background estimates. The contribution from nonprompt leptons is entirely determined by the methods based on data control regions, described in Section 6; typically these contributions are uncertain at approximately the 30% level. The contribution from the Wγ * final state is checked using a sample of events with three well-identified leptons including a low-mass, opposite-sign pair of muons. The comparison of the MC prediction with the data has an uncertainty of about 20%. The other backgrounds are estimated using simulations and their uncertainties depend on the uncertainties of the theoretical cross sections, which are typically below 10%. Statistical uncertainties from the limited number of MC events are taken into account, and have a very small impact on the result.
Small differences in the lepton trigger, reconstruction, and identification efficiencies for data and simulation are corrected by applying scale factors to adjust the efficiencies in the simulation. These scale factors are obtained using events in the Z resonance peak region [42, 43] recorded with unbiased triggers. They vary with lepton p_T and η and are within 3% of unity. The uncertainties of these scale factors are mostly at the 1-2% level.
Differences between data and simulation in the probabilities for b jets and for light-flavor quark and gluon jets to be tagged by the CSV v2 algorithm are corrected by applying scale factors to the simulation. These scale factors are measured using tt events with two leptons [40]; they are uncertain at the percent level and have relatively little impact on the result because the signal includes mainly light-flavor quark and gluon jets, which have a low probability to be tagged, and the top quark background is assessed using appropriate control regions.
The jet energy scale is set using a variety of in situ calibration techniques [48]. The remaining uncertainty is assessed as a function of jet p_T and η. The jet energy resolution in simulated events differs slightly from that measured in data. These differences between simulation and data lead to uncertainties in the efficiency of the event selection because the number of selected jets, their transverse momenta, and p_T^miss all enter the event selection.
The lepton energy scales are set using the position of the Z resonance peak; the uncertainties are very small and have a negligible impact on the measurements reported here.
The modeling of pileup depends on the total inelastic pp cross section [49]. The pileup uncertainty is evaluated by varying this cross section up and down by 5%.
The statistical uncertainties from the limited number of events in the various control regions lead to a systematic uncertainty in the background predictions, which is listed as part of the experimental systematic uncertainty in Table 4.
The uncertainty in the integrated luminosity measurement is 2.5% [50]. It contributes directly to the cross section and also to the uncertainty in the minor backgrounds predicted from simulation.

Theoretical sources of uncertainty
The efficiency of the event selection is sensitive to the number of hadronic jets in the event. The sequential cut analysis explicitly singles out events with zero or one jet, and the random forest classifiers utilize quantities, such as H_T, that tend to correlate with the number of jets. As a consequence, the efficiency of the event selection is sensitive to higher-order QCD corrections that are adequately described neither by the matrix-element calculation of POWHEG nor by the parton shower simulation. The uncertainty reflecting these missing higher orders is evaluated by varying the QCD factorization and renormalization scales independently up and down by a factor of two, excluding cases in which one is increased and the other decreased simultaneously. The resulting change in the measured cross sections is evaluated by applying appropriate weights to the simulated events.
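As an illustration, the envelope of the allowed scale variations can be computed from per-variation yields. This is a minimal sketch of the standard 7-point prescription; the numerical factors below are hypothetical stand-ins, not values from this analysis:

```python
# Hypothetical yield factors for each (mu_R, mu_F) scale choice, expressed
# relative to the nominal. The 7-point convention varies each scale by
# 0.5x and 2x but drops the anti-correlated pairs (0.5, 2) and (2, 0.5).
scale_factors = {
    (1.0, 1.0): 1.000,   # nominal
    (0.5, 0.5): 0.932,
    (0.5, 1.0): 0.958,
    (1.0, 0.5): 0.971,
    (1.0, 2.0): 1.034,
    (2.0, 1.0): 1.041,
    (2.0, 2.0): 1.068,
}

nominal = scale_factors[(1.0, 1.0)]
# Envelope: largest up/down excursion of the selected yield over all
# allowed scale combinations.
variations = [v / nominal for v in scale_factors.values()]
scale_unc_up = max(variations) - 1.0
scale_unc_down = 1.0 - min(variations)
```

In practice the same event sample is reused for every variation by reweighting, so the variations are fully correlated statistically.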
Some of the higher-order QCD contributions to W+W− production have been calculated using the p_T-resummation [51, 52] and the jet-veto resummation [53] techniques. The results from these two approaches are compatible [54]. The transverse momentum p_T^WW of the W+W− pair is used as a proxy for these higher-order corrections; the p_T^WW spectrum from POWHEG is reweighted to match the analytical prediction obtained using p_T-resummation at next-to-next-to-leading logarithmic accuracy [51]. Uncertainties in the theoretical calculation of the p_T^WW spectrum lead to uncertainties in the event selection efficiency that are assessed for the qq → W+W− process by independently varying the resummation, factorization, and renormalization scales in the analytical calculation [52]. The uncertainty in the gg → W+W− component is determined by varying the renormalization and factorization scales in the theoretical calculation of this process [9].
Additional sources of theoretical uncertainty come from the PDFs and the assumed value of α_S. The PDF uncertainties are estimated, following the PDF4LHC recommendations [55], from the variance of the values obtained using the set of MC replicas of the NNPDF3.0 PDF set. The variation of both the signal and the backgrounds with each PDF set and with the value of α_S is taken into account.
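A minimal sketch of the PDF4LHC MC-replica prescription, using randomly generated stand-in acceptance values rather than real NNPDF3.0 reweighted yields:

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical acceptance values obtained by reweighting the simulation
# with each of 100 MC replicas (stand-ins, not values from this analysis).
acc_replicas = rng.normal(loc=0.40, scale=0.004, size=100)

# MC-replica prescription: the central value is the mean over replicas,
# and the PDF uncertainty is the standard deviation over replicas.
acc_central = acc_replicas.mean()
pdf_unc = acc_replicas.std(ddof=1)
rel_pdf_unc = pdf_unc / acc_central
```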
The uncertainty from the modeling of the underlying event is estimated by comparing the signal efficiency obtained with the qq → W + W − sample described in Section 3 to alternative samples that use different generator configurations.
The branching fraction for leptonic decays of W bosons is taken to be B(W → ℓν) = 0.1086 ± 0.0009 [56], and lepton universality is assumed to hold. The uncertainty coming from this branching fraction is not included in the total uncertainty; it would amount to 1.8% of the cross section value.

The W + W − cross section measurements
Two measurements of the total production cross section are reported in this section: the primary one coming from the sequential cut analysis and a secondary measurement coming from the random forest analysis. In addition, measurements of the fiducial cross section are reported, based on the sequential cut analysis, including the change of the zero-jet cross section with variations of the jet p_T threshold.

Total production cross section
Both the sequential cut and random forest analyses provide precise measurements of the total production cross section. Since the techniques for selecting signal events are rather different, both values are reported here. The measurement obtained with the sequential cut analysis is the primary measurement of the total production cross section because it is relatively insensitive to the uncertainties in the corrections applied to the p_T^WW spectrum. The overlap of the two sets of selected events is approximately 50%. A combination of the two measurements is not carried out because the reduction in the uncertainty would be minor.
The sequential cut (SC) analysis divides the data along two dichotomies: selected events are separated according to whether the leptons are DF or SF (DF is purer because of a smaller Drell-Yan contamination), and further subdivided according to whether there is zero or one jet (0-jet is purer because of a smaller top quark contamination). The comparison of the four signal strengths provides an important test of the consistency of the measurement; the cross section value is based on the simultaneous fit of the DF and SF, 0-jet and 1-jet channels. The result is σ_SC^tot = 117.6 ± 1.4 (stat) ± 5.5 (syst) ± 1.9 (theo) ± 3.2 (lumi) pb = 117.6 ± 6.8 pb, which is consistent with the theoretical prediction σ_tot^NNLO = 118.8 ± 3.6 pb. A summary of the measured signal strengths and the corresponding cross sections is given in Table 5.
The random forest analysis isolates a purer signal than the sequential cut analysis (see Table 2); however, its sensitivity is concentrated at relatively low p_T^WW, as shown in Fig. 4.

Table 5: Summary of the signal strengths and total production cross sections obtained in the sequential cut analysis, by category. The uncertainty listed is the total uncertainty obtained from the fit to the yields.

Figure 4: Signal selection efficiency as a function of p_T^WW. The sequential cut analysis includes 0- and 1-jet events from both DF and SF lepton combinations, for which the contributions from 0- and 1-jet are shown separately. The efficiency curve for S_tt^min = 0.2 is also shown; this value is used in measuring the jet multiplicity distribution.

This low-p_T^WW
region corresponds mainly to events with zero jets; the random forest classifier uses observables such as H_T that correlate with jet multiplicity and reduce top quark background contamination by favoring events with a low jet multiplicity. As a consequence, the random forest result is more sensitive than the sequential cut analysis to uncertainties in the theoretical corrections to the p_T^WW spectrum. The signal strength measured by the random forest analysis is 1.106 ± 0.073, which corresponds to a measured total production cross section of σ_RF^tot = 131.4 ± 1.3 (stat) ± 6.0 (syst) ± 5.1 (theo) ± 3.5 (lumi) pb = 131.4 ± 8.7 pb. The difference with respect to the sequential cut analysis reflects the sensitivity of the random forest analysis to low p_T^WW.
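As a numerical cross-check of the quoted values, the measured cross section is the product of the fitted signal strength and the reference theoretical cross section (assuming, as is conventional, that the signal strength is normalized to the NNLO prediction):

```python
# Cross-check: signal strength times the reference (NNLO) cross section
# reproduces the quoted measured cross section.
mu_rf = 1.106            # random forest signal strength (from the text)
sigma_nnlo = 118.8       # pb, NNLO theoretical prediction (from the text)
sigma_rf = mu_rf * sigma_nnlo   # ~131.4 pb, the quoted random forest result
```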

Fiducial cross sections
The sequential cut analysis is used to obtain fiducial cross sections. The definition of the fiducial region is similar to the requirements described in Section 5.1 above. The generated event record must contain two prompt leptons (electrons or muons) with p_T > 20 GeV and |η| < 2.5. Decay products of τ leptons are not considered part of the signal in this definition of the fiducial region. Other kinematic requirements are applied: m_ℓℓ > 20 GeV, p_T^ℓℓ > 30 GeV, and p_T^miss > 30 GeV (p_T^miss is calculated using the momenta of the neutrinos emitted in the W boson decays). When categorizing events by jet multiplicity, a jet is defined using stable particles, excluding neutrinos. For the baseline measurements, the jets must have p_T > 30 GeV and |η| < 4.5 and be separated from each of the two leptons by ΔR > 0.4.
The fiducial cross section is obtained by means of a simultaneous fit to the DF and SF, 0- and 1-jet final states. The measured value is σ_fid = 1.529 ± 0.020 (stat) ± 0.069 (syst) ± 0.028 (theo) ± 0.041 (lumi) pb = 1.529 ± 0.087 pb, which agrees well with the theoretical value σ_fid^NNLO = 1.531 ± 0.043 pb. These values correspond to the fiducial region summed over all jet multiplicities.
The fiducial cross sections for the production of W+W− boson pairs with zero or one jet are of interest because some of the earlier measurements were based on the 0-jet subset only, i.e., a jet veto was applied [2-4, 6]. The sequential cut analysis provides the following values based on the combination of the DF and SF categories: σ_fid(0-jet) = 1.61 ± 0.10 pb and σ_fid(1-jet) = 1.35 ± 0.11 pb for a jet p_T threshold of 30 GeV. These fiducial cross section values pertain to the definition given above; in particular, they pertain to all jet multiplicities.
The fiducial cross section for W+W− + 0-jets production is also measured as a function of the jet p_T threshold in the range 25-60 GeV, with the results listed in Table 6 and displayed in Fig. 5. The cross section is expected to increase with the jet p_T threshold because the phase space for zero jets increases.

Table 6: Fiducial cross section for the production of W+W− + 0-jets as the p_T threshold for jets is varied. The fiducial region is defined by two opposite-sign leptons with p_T > 20 GeV and |η| < 2.5, excluding the products of τ lepton decay, and m_ℓℓ > 20 GeV, p_T^ℓℓ > 30 GeV, and p_T^miss > 30 GeV. Jets must have p_T above the stated threshold, |η| < 4.5, and be separated from each of the two leptons by ΔR > 0.4. The total uncertainty is reported.

Normalized differential cross section measurements
Differential cross sections are measured for the fiducial region defined above using the sequential-cut, DF event selection. The random forest selection is unsuitable for measuring these differential cross sections because some of these kinematic quantities are used as inputs to the random forest classifiers. These differential cross sections are normalized to the measured integrated fiducial cross section, which for the DF final state (0- and 1-jet) is 0.782 ± 0.053 pb, corresponding to a signal strength of 1.022 ± 0.069.
For each differential cross section, a simultaneous fit to the reconstructed distribution is performed in the following manner. An independent signal strength parameter is assigned to each generator-level histogram bin. For the MC simulated events falling within a given generator-level bin, a template histogram of the reconstructed kinematic quantity is formed. The detector resolution is good for the quantities considered, so the template histogram has a peak corresponding to the given generator-level bin; the contents of all bins below and above the given generator-level bin are relatively low. When the fit is performed, the signal strengths are allowed to vary independently. The correlations among bins in the distribution of the reconstructed quantity are taken into account. The fitted values of the signal strength parameters are applied to the generator-level differential cross section to obtain the measured differential cross section.

Figure 5: The upper panel shows the fiducial cross sections for the production of W+W− + 0-jets as the p_T threshold for jets is varied. The fiducial region is defined by two opposite-sign leptons with p_T > 20 GeV and |η| < 2.5, excluding the products of τ lepton decay, and m_ℓℓ > 20 GeV, p_T^ℓℓ > 30 GeV, and p_T^miss > 30 GeV. Jets must have p_T above the stated threshold, |η| < 4.5, and be separated from each of the two leptons by ΔR > 0.4. The lower panel shows the ratio of the theoretical prediction to the measurement. In both panels, the error bars on the data points represent the total uncertainty of the measurement, and the shaded band depicts the uncertainty of the MC prediction.
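The per-bin signal-strength fit can be sketched as follows. This toy uses a small, hypothetical near-diagonal template matrix and a least-squares solve in place of the full likelihood fit with correlated uncertainties:

```python
import numpy as np

# Hypothetical reco-level templates: rows are reconstructed bins, columns
# are generator-level bins.  Near-diagonal because resolution is good.
T = np.array([[80.0,  8.0,  1.0],
              [ 6.0, 70.0,  7.0],
              [ 1.0,  9.0, 60.0]])

mu_true = np.array([1.1, 0.9, 1.0])   # per-bin signal strengths (toy)
data = T @ mu_true                     # toy "observed" reco-level yields

# Fit one free strength per generator-level bin (no regularization);
# least squares stands in for the Poisson likelihood fit.
mu_fit, *_ = np.linalg.lstsq(T, data, rcond=None)
# The fitted strengths then scale the generator-level differential cross
# section to give the measured one.
```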

Jet multiplicity measurement
A measurement of the jet multiplicity tests the accuracy of theoretical calculations and event generators. Signal W+W− events are characterized by a low jet multiplicity, in contrast to tt background events, which typically have two or three jets. The sequential event selection exploits this difference by eliminating events with more than one jet and by separating 0- and 1-jet event categories. The random forest selection, in contrast, places no explicit requirements on the number of jets (N_J) in an event, and the separation of signal W+W− events and tt background utilizes other event features listed in Table 3. As a consequence, a precise measurement of the fractions of events with N_J = 0, 1, or ≥ 2 jets can be made. For this measurement, jets have p_T > 30 GeV and |η| < 2.4, and must be separated from each of the selected leptons by ΔR > 0.4. The rejection of events with one or more b-tagged jets is still in effect; however, the impact on the signal is very small.

Figure 6: Normalized differential cross sections, including the dilepton azimuthal angular separation Δφ_ℓℓ, compared to POWHEG predictions. The lower panels show the ratio of the theoretical predictions to the measured values. The meaning of the error bars and the shaded bands is the same as in Fig. 5.
The anti-tt random forest produces a continuous score, S_tt, in the range 0 ≤ S_tt ≤ 1, as explained in Section 5.2. For the measurement of the jet multiplicity presented in this section, the criterion against tt background is loosened to S_tt^min = 0.2 while S_DY^min = 0.96 remains. This looser requirement leads to a signal efficiency for the random forest selection that varies relatively gently with N_J, as shown in Table 7, and also a more even variation of the efficiency as a function of p_T^WW, as shown in Fig. 4. These efficiencies are defined for the events passing the random forest selection with respect to those passing the preselection requirements. The efficiency for the preselection is essentially independent of N_J.

Table 7: Efficiency for the random forest selection with respect to preselected events as a function of jet multiplicity. The stated uncertainties are statistical only.
Number of jets    0                1                ≥ 2
Efficiency        0.555 ± 0.003    0.448 ± 0.004    0.290 ± 0.004

Background contributions are subtracted from the observed numbers of events as a function of N_J, and corrections are then applied for the random forest efficiencies shown in Table 7. The observed jet multiplicity suffers from the migration of events from one N_J bin to another due to two experimental effects: first, pileup can produce extra jets, and second, jet energy mismeasurement can lead to jets with true p_T below the 30 GeV threshold being accepted and others with true p_T above 30 GeV being rejected. Pileup jets only increase the number of jets in an event, while energy calibration and resolution effects lead to both increases and decreases in N_J. Because of the falling jet p_T distribution, the jet energy resolution increases N_J more often than it decreases it.
The two sources of event migration are corrected in two distinct steps. The signal MC event sample is used to build two response matrices: R_PU for pileup and R_det for detector effects, in particular jet energy resolution. The reconstructed jet multiplicity for the signal process is given by v = R_PU R_det t, where v and t are vectors representing the multiplicity distribution; t represents the MC "truth" as inferred from generator-level jets and v is the reconstructed distribution. Generator-level jets are reconstructed from generated stable particles, excluding neutrinos, with the clustering algorithm used to reconstruct jets in data. These jets must satisfy p_T > 30 GeV and |η| < 2.4 and must be separated by ΔR > 0.4 from both of the two leptons from W boson decays. Reconstructed and generator-level jets are said to match if they have ΔR < 0.4. On the basis of the simulated signal event sample, the two response matrices are found to be close to diagonal; the columns correspond to N_J = 0, 1, ≥ 2 for generator-level jets, and the rows to the same for reconstructed jets.
The response matrices are used to unfold the distribution of jet multiplicities according to u = R_det^(-1) R_PU^(-1) v. No regularization procedure is applied. The fractions of events with N_J = 0, 1, ≥ 2 jets are obtained by normalizing u to unit norm: the unfolded result is w = u/|u|.
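A minimal numerical sketch of the two-step unfolding, with hypothetical near-diagonal response matrices and toy reco-level counts (the real matrices are taken from the signal simulation):

```python
import numpy as np

# Hypothetical response matrices (columns: generator-level N_J = 0, 1, >=2;
# rows: the same for reconstructed jets).  Stand-ins, not the paper's values.
R_pu = np.array([[0.97, 0.00, 0.00],
                 [0.03, 0.97, 0.00],
                 [0.00, 0.03, 1.00]])   # pileup only adds jets
R_det = np.array([[0.94, 0.08, 0.01],
                  [0.05, 0.88, 0.10],
                  [0.01, 0.04, 0.89]])  # resolution migrates both ways

# Toy background-subtracted, efficiency-corrected reco-level counts.
v = np.array([5200.0, 1600.0, 600.0])

# Invert the two migrations in sequence, u = R_det^-1 R_PU^-1 v
# (solve is preferred to forming explicit inverses), then normalize
# to obtain the jet multiplicity fractions.
u = np.linalg.solve(R_det, np.linalg.solve(R_pu, v))
w = u / u.sum()
```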
All systematic uncertainties are reevaluated for the jet multiplicity measurement. Since the observables are essentially yields normalized to the total number of events, systematic uncertainties from the integrated luminosity and lepton efficiency are negligible. The statistical uncertainty in the response matrix is also negligible. Nonnegligible uncertainties are obtained for the jet energy scale and resolution, for pileup reweighting, and for reweighting of the p_T^WW spectrum. Although the relative uncertainty of the off-diagonal response matrix elements is large, those elements themselves are small, so a precise measurement is still achievable. Table 8 reports the measured fractions of events with N_J jets. The fractions before unfolding for pileup and jet energy resolution are listed, as well as the prediction based on POWHEG weighted to correct the p_T^WW spectrum. Figure 7 shows a comparison of the measured fractions and the prediction from POWHEG. For this prediction, the p_T^WW spectrum is reweighted as described in Section 8.2.

Table 8: Fractions of events with N_J = 0, 1, ≥ 2 jets. The first uncertainty is statistical and the second combines systematic uncertainties from the response matrix and from the background subtraction.

Limits on dimension-6 Wilson coefficients
In the framework of effective field theory, new physics can be described in terms of an infinite series of new interaction terms organized as an expansion in the mass dimension of the corresponding operators [57]. The dimension-4 operators of the SM comprise the zeroth term of the expansion. The series can be understood as arising from integrating out heavy fields of an ultraviolet-complete theory, which itself is renormalizable and unitary. When testing for the presence of these higher-dimensional operators, it is assumed that just one or two operators have nonvanishing coefficients in order to reduce the computational burden. A truncated series, e.g., a series including the SM and dimension-6 operators only, is not renormalizable and will violate tree-level unitarity at some energy scale. Consequently, the truncated series is useful only when the scale of new physics is large compared to the energies accessible in the given final state, in which case terms including higher-dimensional operators are suppressed.
In the electroweak sector of the SM, the first higher-dimensional operators containing only massive boson fields are dimension-6 [15, 58]:

O_WWW = Tr[W_μν W^νρ W_ρ^μ],
O_W = (D_μ Φ)† W^μν (D_ν Φ),
O_B = (D_μ Φ)† B^μν (D_ν Φ),
O_W̃WW = Tr[W̃_μν W^νρ W_ρ^μ],
O_W̃ = (D_μ Φ)† W̃^μν (D_ν Φ).

The gauge group indices are suppressed for clarity, and the mass scale Λ has been factored out from the Wilson coefficients c and c̃. The tensor W_μν is the SU(2) field strength, B_μν is the U(1) field strength, Φ is the Higgs doublet, and operators with a tilde are the magnetic duals of the field strengths. The first three operators are CP conserving, while the last two are not. In this analysis, only the CP conserving operators are considered.
These operators contribute to several multiboson scattering processes at tree level. The operator O_WWW modifies vertices with three to six vector bosons, while O_W and O_B modify both HVV vertices and vertices with three or four vector bosons. The focus in this analysis is on modifications to the HW+W−, γW+W−, and ZW+W− vertices, since they lead to deviations of the pp → W+W− cross section via diagrams of the kind shown in Fig. 8.

The analysis is based on the DF event sample selected in the sequential cut analysis. The SF event sample is not used because the contamination from Drell-Yan processes is larger and the selected event sample itself is smaller. The 0- and 1-jet categories are analyzed separately. The signal region and the top quark control region are both included in the analysis.
The invariant mass m_eμ distribution is used to test for dimension-6 operators. The quantity m_eμ is well measured and is not sensitive to higher-order QCD effects or jet energy calibration issues. Furthermore, the m_eμ distribution is more sensitive to higher-dimensional operators than other observables based on lepton kinematics. In order to suppress the Higgs boson contribution and enhance the sensitivity to higher-dimensional operators, the requirement m_eμ > 100 GeV is imposed. The remaining Higgs boson contributions are considered part of the signal. Variations of the relatively small VZ background processes due to dimension-6 operators are neglected.
The measurement of the Wilson coefficients uses templates of m_eμ with the following bin edges: [100, 200, 300, 400, 500, 600, 700, 750, 800, 850, 1000, ∞] GeV; the last bin contains all events with m_eμ > 1 TeV. This choice minimizes the expected 95% confidence level (CL) intervals for all coefficients (with fixed mass scale Λ) while populating each bin adequately. The highest bin has the greatest statistical power, largely because of the presence of multiple momentum factors in the Feynman diagrams associated with the higher-dimensional operators (Fig. 8).
In order to construct the m_eμ templates, the weights calculated for each event are used to build a parametrized model of the expected yield in each bin as a function of the coefficients (with fixed Λ). More precisely, for each bin, a second-order polynomial is fit to the ratios of the expected signal yield with nonzero coefficients to the SM expectation. When only one coefficient is taken to be nonzero, the fit is performed to five points; when two coefficients are taken to be nonzero, the fit is performed to a 5 × 5 grid. These fits are carried out for the 0- and 1-jet categories separately.
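The per-bin parametrization can be illustrated with a toy quadratic dependence; the coefficients below are hypothetical, not fitted values from this analysis. The linear term captures SM-EFT interference and the quadratic term the pure dimension-6 contribution:

```python
import numpy as np

# Hypothetical yield ratios r(c) = N(c)/N(SM) for one m_e mu bin,
# evaluated at five values of a single Wilson coefficient (TeV^-2).
c_points = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
r_points = 1.0 + 0.05 * c_points + 0.30 * c_points**2  # toy "truth"

# Second-order polynomial fit, as in the per-bin parametrization.
# np.polyfit returns coefficients from highest degree down.
b, a, r0 = np.polyfit(c_points, r_points, 2)

def yield_ratio(c):
    """Parametrized expected-yield ratio for coefficient value c."""
    return r0 + a * c + b * c**2
```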
A binned maximum likelihood fit of the m_eμ templates to the data is carried out. The likelihood is computed using the Poisson probability for each bin i with N_i^exp expected events and N_i^obs observed events. Each source of uncertainty is modeled with a log-normal distribution π_ij(θ_j), where θ_j is the nuisance parameter for a source of uncertainty j, as discussed in Section 8. The expected yields N_i^exp are functions of the nuisance parameters θ_j. The likelihood is computed from the product over all bins i:

L = ∏_i (N_i^exp)^(N_i^obs) e^(−N_i^exp) ∏_j π_ij(θ_j),

where the N! term has been neglected. The nuisance parameters for the systematic uncertainties are profiled for each dimension-6 operator hypothesis. Figure 9 shows the results of the template fits to the observed m_eμ distributions. The expected signal distributions for three values of the coefficients close to the 95% CL expected limits on those coefficients are also shown (not stacked); the largest impact of nonzero coefficients is seen for m_eμ > 850 GeV. Figure 10 (left) shows the curves of −2Δ ln L = −2(ln L − ln L_min) for the three dimension-6 operators considered here; the 0- and 1-jet categories have been combined. The corresponding 68 and 95% CL intervals are reported in Table 9. The observed limits are stronger than expected due to a deficit of events at high m_eμ. In all cases, they are within two standard deviations of the expected limits as determined by pseudo-experiments. The observed limits are about a factor of two more stringent than recent results reported by the ATLAS Collaboration. The sensitivity to c_WWW and c_W is similar to that of the CMS WZ analysis [59] and is much better for c_B. Finally, the sensitivity is slightly weaker than for the CMS analysis of W+W− and WZ production in lepton and jets events [60]. Figure 10 (right) shows the expected and observed 68 and 95% confidence level contours for pairs of Wilson coefficients.

Figure 9: Results of the template fits to the observed m_eμ distributions. In the plot on the right, the decrease in the non-SM contribution at low m_eμ is not statistically significant and results from limited precision in the subtraction of two large yields (SM and SM+non-SM). The last bin contains all events with reconstructed m_eμ > 1 TeV. The error bars on the data points represent the statistical uncertainties for the data, and the hatched areas represent the total uncertainty for the predicted yield in each bin.
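A simplified version of the likelihood scan is sketched below, with toy yields and a hypothetical per-bin quadratic EFT parametrization; nuisance-parameter profiling is omitted for brevity:

```python
import numpy as np

# Toy expected SM yields per m_e mu bin and a toy observed spectrum.
n_sm = np.array([400.0, 120.0, 30.0, 6.0])
n_obs = np.array([410.0, 115.0, 28.0, 5.0])

# Hypothetical per-bin quadratic parametrization of the yield ratio,
# r_i(c) = 1 + lin_i*c + quad_i*c^2 (high-mass bins most sensitive).
lin = np.array([0.00, 0.01, 0.05, 0.20])
quad = np.array([0.00, 0.02, 0.20, 1.50])

def nll(c):
    """Poisson negative log-likelihood with the N! term dropped."""
    n_exp = n_sm * (1.0 + lin * c + quad * c * c)
    return float(np.sum(n_exp - n_obs * np.log(n_exp)))

# -2 Delta lnL scan over the Wilson coefficient (units of TeV^-2).
grid = np.linspace(-2.0, 2.0, 401)
values = np.array([nll(c) for c in grid])
scan = 2.0 * (values - values.min())

# For one parameter, the 95% CL interval is where -2 Delta lnL < 3.84.
interval = grid[scan < 3.84]
```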

Summary
Measurements of W+W− boson pair production in proton-proton collisions at √s = 13 TeV are presented. The analysis is based on data collected with the CMS detector at the LHC corresponding to an integrated luminosity of 35.9 fb−1. Candidate events were selected that have two leptons (electrons or muons) with opposite charges. Two analysis methods were described. The first method imposes a sequence of requirements on kinematic quantities to suppress backgrounds, while the second uses a pair of random forest classifiers. The total production cross section measured with the first method is σ_SC^tot = 117.6 ± 1.4 (stat) ± 5.5 (syst) ± 1.9 (theo) ± 3.2 (lumi) pb = 117.6 ± 6.8 pb, where the last term reflects the uncertainty in the integrated luminosity; this measured value is consistent with the next-to-next-to-leading-order theoretical prediction 118.8 ± 3.6 pb. Fiducial cross sections are also measured, including the change in the 0-jet fiducial cross section with the jet transverse momentum threshold. Normalized differential cross sections are measured and compared with next-to-leading-order SM predictions; good agreement is observed. The normalized jet multiplicity distribution in W+W− events is measured. Finally, bounds on coefficients of dimension-6 operators in the context of an effective field theory are set using the electron-muon invariant mass distribution.

Figure 10: On the left, the expected and observed −2Δ ln L curves for c_WWW/Λ², c_W/Λ², and c_B/Λ², combining the 0- and 1-jet categories. On the right, the expected and observed 68 and 95% confidence level contours in the (c_WWW/Λ², c_W/Λ²), (c_WWW/Λ², c_B/Λ²), and (c_W/Λ², c_B/Λ²) planes, combining the 0- and 1-jet categories.

Acknowledgments
We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses.

References

[6] ATLAS Collaboration, "Measurement of total and differential W+W− production cross sections in proton-proton collisions at √s = 8 TeV with the ATLAS detector and limits on anomalous triple-gauge-boson couplings", JHEP 09 (2016) 029, doi:10.1007/JHEP09(2016)029, arXiv:1603.01702.