Measurement of the isolated diphoton cross-section in pp collisions at sqrt(s) = 7 TeV with the ATLAS detector

The ATLAS experiment has measured the production cross-section of events with two isolated photons in the final state, in proton-proton collisions at sqrt(s) = 7 TeV. The full data set acquired in 2010 is used, corresponding to an integrated luminosity of 37 pb^-1. The background, consisting of hadronic jets and isolated electrons, is estimated with fully data-driven techniques and subtracted. The differential cross-sections, as functions of the di-photon mass, total transverse momentum and azimuthal separation, are presented and compared to the predictions of next-to-leading-order QCD.


I. INTRODUCTION
The production of di-photon final states in proton-proton collisions may occur through quark-antiquark t-channel annihilation, qq̄ → γγ, or via gluon-gluon interactions, gg → γγ, mediated by a quark box diagram. Despite the higher order of the latter, the two contributions are comparable, due to the large gluon flux at the LHC. Photon-parton production with photon radiation also contributes, in processes such as qq̄, gg → gγγ and qg → qγγ. Additional photons may be produced during parton fragmentation. In this analysis, all such photons are considered as signal if they are isolated from other activity in the event. Photons produced after hadronization in neutral hadron decays, or coming from radiative decays of other particles, are considered part of the background.
The measurement of the di-photon production crosssection at the LHC is of great interest as a probe of QCD, especially in some particular kinematic regions. For instance, the distribution of the azimuthal separation, ∆φ γγ , is sensitive to the fragmentation model, especially when both photons originate from fragmentation. On the other hand, for balanced back-to-back di-photons (∆φ γγ ≃ π and small total transverse momentum, p T,γγ ) the production is sensitive to soft gluon emission, which is not accurately described by fixed-order perturbation theory.
Di-photon production is also an irreducible background for some new physics processes, such as the Higgs decay into photon pairs [1]: in this case, the spectrum of the invariant mass, m γγ , of the pair is analysed, searching for a resonance. Moreover, di-photon production is a characteristic signature of some exotic models beyond the Standard Model. For instance, Universal Extra-Dimensions (UED) predict non-resonant di-photon production associated with significant missing transverse energy [2,3]. * Full author list given at the end of the article.
Other extra-dimension models, such as Randall-Sundrum [4], predict the production of gravitons, which would decay into photon pairs with a narrow width.
Recent cross-section measurements of di-photon production at hadron colliders have been performed by the DØ [5] and CDF [6] collaborations, at the Tevatron proton-antiproton collider with a centre-of-mass energy √ s = 1.96 TeV.
In this document, di-photon production is studied in proton-proton collisions at the LHC, with a centre-of-mass energy √s = 7 TeV. After a short description of the ATLAS detector (Section II), the analysed collision data and the event selection are detailed in Section III, while the supporting simulation samples are listed in Section IV. The isolation properties of the signal and of the hadronic background are studied in Section V. The evaluation of the di-photon signal yield is obtained by subtracting the backgrounds from hadronic jets and from isolated electrons, estimated with data-driven methods as explained in Section VI. Section VII describes how the event selection efficiency is evaluated and how the final yield is obtained. Finally, in Section VIII, the differential cross-section of di-photon production is presented as a function of m γγ , p T,γγ and ∆φ γγ .

II. THE ATLAS DETECTOR
The ATLAS detector [7] is a multipurpose particle physics apparatus with a forward-backward symmetric cylindrical geometry and near 4π coverage in solid angle. ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the z-axis along the beam pipe. The x-axis points from the IP to the centre of the LHC ring, and the y-axis points upward. Cylindrical coordinates (r, φ) are used in the transverse plane, φ being the azimuthal angle around the beam pipe. The pseudorapidity is defined in terms of the polar angle θ as η = − ln tan(θ/2). The transverse momentum is defined as p T = p sin θ = p/ cosh η, and a similar definition holds for the transverse energy E T .
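The kinematic definitions above can be sketched in a few lines. This is a minimal illustration; the function names are ours, not part of any ATLAS software:

```python
import math

def pseudorapidity(theta):
    """eta = -ln tan(theta/2), with theta the polar angle in radians."""
    return -math.log(math.tan(theta / 2.0))

def transverse_momentum(p, theta):
    """p_T = p sin(theta); identical to p / cosh(eta) as quoted in the text."""
    eta = pseudorapidity(theta)
    # the two expressions agree identically: 1/cosh(eta) = sin(theta)
    assert abs(p * math.sin(theta) - p / math.cosh(eta)) < 1e-9 * p
    return p * math.sin(theta)
```

A particle emitted at θ = 90° has η = 0 and p_T = p.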
The inner tracking detector (ID) covers the pseudorapidity range |η| < 2.5, and consists of a silicon pixel detector, a silicon microstrip detector, and a transition radiation tracker in the range |η| < 2.0. The ID is surrounded by a superconducting solenoid providing a 2 T magnetic field. The inner detector allows an accurate reconstruction of tracks from the primary proton-proton collision region, and also identifies tracks from secondary vertices, permitting the efficient reconstruction of photon conversions in the inner detector up to a radius of ≈ 80 cm.
The electromagnetic calorimeter (ECAL) is a lead/liquid-argon (LAr) sampling calorimeter with an accordion geometry. It is divided into a barrel section, covering the pseudorapidity region |η| < 1.475, and two end-cap sections, covering the pseudorapidity regions 1.375 < |η| < 3.2. It consists of three longitudinal layers. The first one, in the ranges |η| < 1.4 and 1.5 < |η| < 2.4, is segmented into high-granularity "strips" in the η direction, sufficient to provide an event-by-event discrimination between single photon showers and two overlapping showers coming from a π0 decay. The second layer of the electromagnetic calorimeter, which collects most of the energy deposited in the calorimeter by the photon shower, has a thickness of about 17 radiation lengths and a granularity of 0.025 × 0.025 in η × φ (corresponding to one cell). A third layer is used to correct leakage beyond the ECAL for high-energy showers. In front of the accordion calorimeter a thin presampler layer, covering the pseudorapidity interval |η| < 1.8, is used to correct for energy loss before the calorimeter.
A three-level trigger system is used to select events containing two photon candidates. The first level trigger (level-1) is hardware based: using a coarser cell granularity (0.1 × 0.1 in η × φ), it searches for electromagnetic deposits with a transverse energy above a programmable threshold. The second and third level triggers (collectively referred to as the "high-level" trigger) are implemented in software and exploit the full granularity and energy calibration of the calorimeter.

III. COLLISION DATA AND SELECTIONS
The analysed data set consists of proton-proton collision data at √s = 7 TeV collected in 2010, corresponding to an integrated luminosity of 37.2 ± 1.3 pb −1 [8]. Events are considered only when the beam conditions are stable and the trigger system, the tracking devices and the calorimeters are operational.

A. Photon reconstruction
A photon is defined starting from a cluster in the ECAL. If no track points to the cluster, the object is classified as an unconverted photon. For converted photons, one or two tracks may be associated to the cluster, creating an ambiguity in the classification with respect to electrons; this is resolved as described in Ref. [9].
A fiducial acceptance is required in pseudorapidity, |η γ | < 2.37, with the exclusion of the barrel/endcap transition 1.37 < |η γ | < 1.52. This corresponds to the regions where the ECAL "strips" granularity is more effective for photon identification and jet background rejection [9]. Moreover, photons reconstructed near regions affected by read-out or high-voltage failures are not considered.
In the considered acceptance range, the uncertainty on the photon energy scale is estimated to be ∼ ±1%.
The energy resolution is parametrized as σ_E/E ≃ a/√(E [GeV]) ⊕ c, where the sampling term a varies between 10% and 20% depending on η γ , and the constant term c is estimated to be 1.1% in the barrel and 1.8% in the endcap. This performance has been measured in Z → e + e − events observed in proton-proton collision data in 2010.
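The symbol ⊕ denotes the sum in quadrature, which can be evaluated as follows (illustrative values only; a noise term is neglected, as in the text):

```python
import math

def relative_energy_resolution(e_gev, a, c):
    """sigma_E / E = a / sqrt(E) (+) c, where (+) is the sum in quadrature."""
    return math.hypot(a / math.sqrt(e_gev), c)

# barrel-like parameters from the text: a = 10%, c = 1.1%;
# at E = 100 GeV the relative resolution is about 1.5%
resolution = relative_energy_resolution(100.0, 0.10, 0.011)
```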

B. Photon selection
The photon sample suffers from a major background due to hadronic jets, which generally produce broader, less isolated calorimetric deposits than electromagnetic showers, with sizable energy leakage into the HCAL. Most of the background is reduced by applying requirements (referred to as the loose selection, L) on the energy fraction measured in the HCAL, and on the shower width measured by the second layer of the ECAL. The remaining background is mostly due to photon pairs from neutral hadron decays (mainly π0) with a small opening angle, reconstructed as single photons. This background is further reduced by applying a more stringent selection on the shower width in the second ECAL layer, together with additional requirements on the shower shape measured by the first ECAL layer: a narrow shower width and the absence of a second significant maximum in the energy deposited in contiguous strips. The combination of all these requirements is referred to as the tight selection (T). Since converted photons tend to have broader shower shapes than unconverted ones, the cuts of the tight selection are tuned differently for the two photon categories. More details on these selection criteria are given in Ref. [10].
To reduce the jet background further, an isolation requirement is applied: the isolation transverse energy E iso T , measured by the calorimeters in a cone of angular radius R = √((η − η γ)² + (φ − φ γ)²) < 0.4, is required to satisfy E iso T < 3 GeV (isolated photon, I). The calculation of E iso T is performed summing over ECAL and HCAL cells surrounding the photon candidate, after removing a central core that contains most of the photon energy. An out-of-core energy correction [10] is applied, to make E iso T essentially independent of E γ T , and an ambient energy correction, based on the measurement of soft jets [11,12], is applied on an event-by-event basis, to remove the contribution from the underlying event and from additional proton-proton interactions ("in-time pile-up").
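The cone sum can be illustrated schematically. In the real reconstruction the removed core is a fixed window of calorimeter cells and the out-of-core and ambient-energy corrections are applied; here we approximate the core by a small inner radius (a hypothetical value) and skip the corrections:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance, with the azimuthal difference wrapped into (-pi, pi]."""
    dphi = math.remainder(phi1 - phi2, 2.0 * math.pi)
    return math.hypot(eta1 - eta2, dphi)

def isolation_et(cells, eta_g, phi_g, cone=0.4, core=0.125):
    """Sum the cell transverse energies in an annulus around the photon.

    cells: iterable of (eta, phi, et) tuples; the 'core' radius is a
    simplification of the central cell window removed in the measurement."""
    return sum(et for eta, phi, et in cells
               if core <= delta_r(eta, phi, eta_g, phi_g) < cone)
```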

C. Event selection
The di-photon candidate events are selected according to the following steps:
• The events are selected by a di-photon trigger, in which both photon candidates must satisfy the trigger selection and have a transverse energy E γ T > 15 GeV. To select genuine collisions, at least one primary vertex with three or more tracks must be reconstructed.
• The event must contain at least two photon candidates, with E γ T > 16 GeV, in the acceptance defined in Section III A, and passing the loose selection. If more than two such photons exist, the two with highest E γ T are chosen.
• To avoid a too large overlap between the two isolation cones, an angular separation ∆R γγ = √((η γ1 − η γ2)² + (φ γ1 − φ γ2)²) > 0.4 is required.
• Both photons must satisfy the tight selection (TT sample).
• Both photons must satisfy the isolation requirement E iso T < 3 GeV (TITI sample).
In the analysed data set, there are 63673 events where both photons satisfy the loose selection and the ∆R γγ separation requirement. Among these, 5365 events belong to the TT sample, and 2022 to the TITI sample.

IV. SIMULATED EVENTS
The characteristics of the signal and background events are investigated with Monte Carlo samples, generated using Pythia 6.4.21 [13]. The simulated samples are generated with pile-up conditions similar to those under which most of the data were taken. Particle interactions with the detector materials are modelled with Geant4 [14] and the detector response is simulated. The events are reconstructed with the same algorithms used for collision data. More details on the event generation and simulation infrastructure are provided in Ref [15].
The di-photon signal is generated with Pythia, where photons from both hard scattering and quark bremsstrahlung are modelled. To study systematic effects due to the generator model, an alternative di-photon sample has been produced with Sherpa [16].
The background processes are generated including the main physical processes that produce (at least) two sizable calorimetric deposits: these include di-jet and photon-jet final states, but minor contributions, e.g. from W and Z bosons, are also present. Such a Monte Carlo sample, referred to as "di-jet-like", provides a realistic mixture of the main final states expected to contribute to the selected data sample. Moreover, dedicated samples of W → eν and Z → e + e − simulated events are used for the electron/photon comparison in isolation and background studies.

V. PROPERTIES OF THE ISOLATION TRANSVERSE ENERGY
The isolation transverse energy, E iso T , is a powerful discriminating variable to estimate the jet background contamination in the sample of photon candidates. The advantage of using this quantity is that its distribution can be extracted directly from the observed collision data, both for the signal and the background, without relying on simulations.
Section V A describes a method to extract the distribution of E iso T for background and signal, from observed photon candidates. An independent method to extract the signal E iso T distribution, based on observed electrons, is described in Section V B. Finally, the correlation between isolation energies in events with two photon candidates is discussed in Section V C.

A. Background and signal isolation from photon candidates
For the background study, a control sample is defined by reconstructed photons that fail the tight selection but pass a looser one, where some cuts are released on the shower shapes measured by the ECAL "strips". Such photons are referred to as non-tight. A study carried out on the "di-jet-like" Monte Carlo sample shows that the E iso T distribution in the non-tight sample reproduces that of the background, as shown in Figure 1(a).
The tight photon sample contains a mixture of signal and background. However, a comparison between the shapes of the E iso T distributions from the tight and non-tight samples (Figure 1(b)) shows that for E iso T > 7 GeV there is essentially no signal in the tight sample. Therefore, the background contamination in the tight sample can be subtracted by using the non-tight sample, normalized such that the integrals of the two distributions are equal for E iso T > 7 GeV. The E iso T distribution of the signal alone is thus extracted. Figure 1(c) shows the result, for photons in the "di-jet-like" Monte Carlo sample.
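The subtraction just described amounts to a simple histogram operation. A sketch with hypothetical binned distributions (lower bin edges in GeV):

```python
def subtract_nontight(tight, nontight, lower_edges, norm_min=7.0):
    """Normalize the non-tight E_iso distribution to the tight one in the
    background-dominated region (E_iso >= norm_min), then subtract bin by bin."""
    tail_t = sum(n for n, lo in zip(tight, lower_edges) if lo >= norm_min)
    tail_nt = sum(n for n, lo in zip(nontight, lower_edges) if lo >= norm_min)
    scale = tail_t / tail_nt
    return [t - scale * nt for t, nt in zip(tight, nontight)]

# hypothetical histograms: the signal sits at low E_iso in the tight sample
signal = subtract_nontight([100.0, 40.0, 8.0, 4.0],
                           [30.0, 25.0, 8.0, 4.0],
                           lower_edges=[0.0, 3.0, 7.0, 10.0])
```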
In collision data, events with two photon candidates are used to build the tight and non-tight samples, for the leading and subleading candidate separately. The points in Figure 2 display the distribution of E iso T for the leading and sub-leading photons. In each of the two distributions, one bin shows a higher content, reflecting opposite fluctuations of the subtracted input distributions in that bin. The effect on the di-photon cross-section measurement is negligible.
The main source of systematic error comes from the definition of the non-tight control sample. There are three sets of strips cuts that could be released: the first set concerns the shower width in the core, the second tests for the presence of two maxima in the cluster, and the third is a cut on the full shower width in the strips. The choice adopted is to release only the first two sets of cuts, as the best compromise between maximizing the statistics in the control sample and keeping the background E iso T distribution fairly unbiased. To test the effect of this choice, the sets of released cuts have been changed, either by releasing only the cuts on the shower core width in the strips, or by releasing all the strips cuts. A minor effect is also due to the choice of the region E iso T > 7 GeV used to normalize the non-tight control sample: the normalization threshold has therefore been varied to 6 and 8 GeV.
More studies with the "di-jet-like" Monte Carlo sample have been performed, to test the robustness of the E iso T extraction against model-dependent effects such as: (i) signal leakage into the non-tight sample; (ii) correlations between E iso T and strips cuts; (iii) different signal composition, i.e. fraction of photons produced by the hard scattering or by the fragmentation process; (iv) different background composition, i.e. fraction of photon pairs from π 0 decays. In all cases, the overall systematic error, computed as described above, covers the differences between the true and data-driven results as evaluated from these Monte Carlo tests.

B. Signal isolation from electron extrapolation
An independent method of extracting the E iso T distribution for the signal photons is provided by the "electron extrapolation". In contrast to photons, it is easy to select a pure electron sample from data, from W ± → e ± ν and Z → e + e − events [17]. The main differences between the electron and photon E iso T distributions are: (i) the electron E iso T in the bulk of the distribution is slightly larger, because of bremsstrahlung in the material upstream of the calorimeter; (ii) the photon E iso T distribution exhibits a larger tail because of the contribution of the photons from fragmentation, especially for the sub-leading photon. Such differences are quantified with W ± → e ± ν, Z → e + e − and γγ Monte Carlo samples by fitting the E iso T distributions with Crystal Ball functions [18] and comparing the parameters. Then, the electron/photon differences are propagated to the selected electrons from collision data. The result is shown by the continuous lines in Figure 2, agreeing well with the E iso T distributions obtained from the non-tight sample subtraction (circles).
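The Crystal Ball shape [18] used in these fits is a Gaussian core matched to a power-law tail. A minimal, unnormalized implementation (the parameter names follow the usual convention and are not tied to the fits in the text):

```python
import math

def crystal_ball(x, mu, sigma, alpha, n_tail):
    """Unnormalized Crystal Ball: Gaussian for t > -alpha, power-law tail below,
    with the two pieces matched in value at t = -alpha."""
    t = (x - mu) / sigma
    a = abs(alpha)
    if t > -a:
        return math.exp(-0.5 * t * t)
    big_a = (n_tail / a) ** n_tail * math.exp(-0.5 * a * a)
    big_b = n_tail / a - a
    return big_a * (big_b - t) ** (-n_tail)
```

The matching condition makes the function continuous at t = −α, which is easy to verify numerically.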

C. Signal and background isolation in events with two photon candidates
In events with two photon candidates, possible correlations between the two isolation energies have been investigated by studying the signal and background E iso T distributions of a candidate ("probe") under different isolation conditions of the other candidate ("tag"). The signal E iso T shows negligible dependence on the tag conditions. In contrast, the background E iso T exhibits a clear positive correlation with the isolation transverse energy of the tag: if the tag passes (or fails) the isolation requirement, the probe background candidate is more (or less) isolated. This effect is visible especially in di-jet final states, which can be directly studied in collision data by requiring both photon candidates to be non-tight, and is taken into account in the jet background estimation (Section VI A).
This correlation is also visible in the "di-jet-like" Monte Carlo sample.

VI. BACKGROUND SUBTRACTION AND SIGNAL YIELD DETERMINATION
The main background to selected photon candidates consists of hadronic jets. This is reduced by the photon tight selection described in Section III B. However a significant component is still present and must be subtracted. The techniques to achieve this are described in Section VI A.
Another sizable background component comes from isolated electrons, mainly originating from W and Z decays, which look similar to photons from the calorimetric point of view. The subtraction of such a contamination is addressed in Section VI B.
The background due to cosmic rays and to beam-gas collisions has been studied on dedicated data sets, selected by special triggers. Its impact is found to be negligible.

A. Jet background
The jet background is due to photon-jet and di-jet final states. This section describes three methods, all based on the isolation transverse energy E iso T , aiming to separate the TITI sample into its four components, γγ, γj, jγ and jj, according to the physical final states; γj and jγ differ by the jet faking respectively the sub-leading or the leading photon candidate. The signal yield N TITI γγ is evaluated in bins of the three observables m γγ , p T,γγ , ∆φ γγ , as in Figure 3. Due to the dominant back-to-back topology of di-photon events, the kinematic selection produces a turn-on in the distribution of the di-photon invariant mass at m γγ ≳ 2E cut T (E cut T = 16 GeV being the applied cut on the photon transverse energy), followed by the decrease typical of continuum processes. The region at lower m γγ is populated by di-photon events with low ∆φ γγ .
The excess in the mass bin 80 < m γγ < 100 GeV, due to a contamination of electrons from Z-decays, is addressed in Section VI B.
From the evaluation of the background yields (N TITI γj + N TITI jγ and N TITI jj ), the average fractions of photon-jet and di-jet events in the TITI sample are ∼ 26% and ∼ 9%, respectively.
The three results shown in Figure 3 are compatible, suggesting that no hidden biases are induced by the analyses. However, the three measurements cannot be combined: all make use of the same quantities (E iso T and the shower shapes) and of the non-tight background control region, so they may be correlated. None of the methods has striking advantages over the others, and the systematic uncertainties are comparable. The "event weighting" method (VI A 1) is used for the cross-section evaluation, since it provides event weights that are also useful in the event efficiency evaluation, and its sources of systematic uncertainty are independent of those related to the signal modelling and reconstruction.

Event weighting
Each event satisfying the tight selection on both photons (sample TT) is classified according to whether the photons pass or fail the isolation requirement, resulting in a PP, PF, FP, or FF classification. These are translated into four event weights W γγ , W γj , W jγ , W jj , which describe how likely the event is to belong to each of the four final states. A similar approach has already been used by the DØ [5] and CDF [6] collaborations.
[Figure 3. The signal yield N TITI γγ as a function of the three observables m γγ , p T,γγ , ∆φ γγ , obtained with the three methods. In each bin, the yield is divided by the bin width. The vertical error bars display the total errors, accounting for both the statistical uncertainties and the systematic effects. The points are artificially shifted horizontally, to better display the three results.]
The connection between the pass/fail outcome and the weights, for the k-th event, is:

  (S_PP^(k), S_PF^(k), S_FP^(k), S_FF^(k))ᵀ = E^(k) (W_γγ^(k), W_γj^(k), W_jγ^(k), W_jj^(k))ᵀ .  (1)

If applied to a large number of events, the quantities S_XY would be the fractions of events satisfying each pass/fail classification, and the weights would be the fractions of events belonging to the four different final states. In the event-by-event approach, the S_XY^(k) are boolean status variables (e.g. for an event where both candidates are isolated, S_PP^(k) = 1 and the other three vanish). The quantity E^(k) is a 4×4 matrix, whose coefficients give the probability that a given final state produces a certain pass/fail status. If there were no correlation between the isolation transverse energies of the two candidates, it would have the form:

  E = | ε1 ε2          ε1 f2          f1 ε2          f1 f2          |
      | ε1 (1−ε2)      ε1 (1−f2)      f1 (1−ε2)      f1 (1−f2)      |
      | (1−ε1) ε2      (1−ε1) f2      (1−f1) ε2      (1−f1) f2      |
      | (1−ε1)(1−ε2)   (1−ε1)(1−f2)   (1−f1)(1−ε2)   (1−f1)(1−f2)   |   (2)

where ε_i and f_i (i = 1, 2 for the leading/sub-leading candidate) are the probabilities that a signal or a fake photon, respectively, passes the isolation cut. These are obtained from the E iso T distributions extracted from collision data, as described in Section V A. The value of ε is essentially independent of E γ T and changes with η γ , ranging between 80% and 95%. The value of f depends on both E T and η and takes values between 20% and 40%. Given such dependence on the kinematics, the matrix E^(k) is evaluated for each event.
Due to the presence of correlation, the matrix coefficients in equation (2) actually involve conditional probabilities, depending on the pass/fail status of the other candidate (tag) of the pair. For instance, the first two coefficients in the last column become f1^P̄ f2 and f1^F̄ (1 − f2), where the superscripts P̄ and F̄ denote the pass/fail status of the tag. The ambiguity in the choice of the tag is resolved by taking both choices and averaging them. All the conditional probabilities (ε_i^P̄,F̄, f_i^P̄,F̄) are derived from collision data, as discussed in Section V C.
The signal yield in the TITI sample can be computed as a sum of weights running over all events in the TT sample:

  N TITI γγ = Σ_k w^(k) ,

where the weight w^(k) for the k-th event is obtained from the γγ component of the solution of equation (1), multiplied by the probability that both candidates satisfy the isolation requirement:

  w^(k) = ε1 ε2 W_γγ^(k) ,

and the sum over k is carried out on the events in a given bin of the variable of interest (m γγ , p T,γγ , ∆φ γγ ). The result is shown in Figure 3. The main sources of systematic uncertainty are: (i) the definition of the non-tight control sample; (ii) the normalization of the non-tight sample: +0/−2%; (iii) the statistics used to compute the E iso T distributions, and hence the precision of the matrix coefficients: ±9%. Effects (i) and (ii) are estimated as explained in Section V A. Effect (iii) is quantified by increasing and decreasing the ε, f parameters by their statistical errors, and recomputing the signal yield: the variations are then added in quadrature.
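Per event, the weighting reduces to inverting the 4×4 matrix on the observed status vector. A sketch assuming the uncorrelated form of E and hypothetical efficiencies (ε = 0.9, f = 0.3):

```python
def efficiency_matrix(e1, e2, f1, f2):
    """Rows: PP, PF, FP, FF status; columns: gamma-gamma, gamma-j, j-gamma, jj.
    e_i / f_i: probability that a signal / fake candidate passes isolation."""
    cols = [(e1, e2), (e1, f2), (f1, e2), (f1, f2)]
    return [[p1 * p2 for p1, p2 in cols],
            [p1 * (1.0 - p2) for p1, p2 in cols],
            [(1.0 - p1) * p2 for p1, p2 in cols],
            [(1.0 - p1) * (1.0 - p2) for p1, p2 in cols]]

def solve(matrix, rhs):
    """Solve matrix . w = rhs by Gaussian elimination with partial pivoting."""
    n = len(matrix)
    aug = [row[:] + [b] for row, b in zip(matrix, rhs)]
    for i in range(n):
        pivot = max(range(i, n), key=lambda r: abs(aug[r][i]))
        aug[i], aug[pivot] = aug[pivot], aug[i]
        for r in range(i + 1, n):
            k = aug[r][i] / aug[i][i]
            for c in range(i, n + 1):
                aug[r][c] -= k * aug[i][c]
    w = [0.0] * n
    for i in reversed(range(n)):
        w[i] = (aug[i][n] - sum(aug[i][c] * w[c] for c in range(i + 1, n))) / aug[i][i]
    return w

# a PP event (both candidates isolated): status vector (1, 0, 0, 0)
weights = solve(efficiency_matrix(0.9, 0.9, 0.3, 0.3), [1.0, 0.0, 0.0, 0.0])
```

Summed over many events, the weights reproduce the final-state fractions; since each column of E sums to one, the four weights of a single event also sum to one.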

Two-dimensional fit
From all the di-photon events satisfying the tight selection (sample TT), the observed 2-dimensional distribution F obs (E iso T,1 , E iso T,2 ) of the isolation energies of the leading and sub-leading photons is built. Then, a linear combination of four unbinned probability density functions (PDFs), F γγ , F γj , F jγ , F jj , describing the 2-dimensional distributions of the four final states, is fitted to the observed distribution. For the γγ, γj, jγ final states, the correlation between E iso T,1 and E iso T,2 is assumed to be negligible, therefore the 2-dimensional PDFs are factorized into the leading and sub-leading PDFs. The leading and sub-leading photon PDFs F γ1 , F γ2 are obtained from the electron extrapolation, as described in Section V B. The background PDF F j2 for γj events is obtained from the non-tight sample on the sub-leading candidate, for events where the leading candidate satisfies the tight selection. The background PDF F j1 for jγ events is obtained in a similar way. Both background PDFs are then smoothed with empirical parametric functions. The PDF for jj events cannot be factorized, due to the sizable correlation between the two candidates. Therefore, a 2-dimensional PDF is directly extracted from events where both candidates belong to the non-tight sample, then smoothed.
The four yields in the TT sample come from an extended maximum likelihood fit of the observed distribution to the combination

  F obs (E iso T,1 , E iso T,2 ) ≃ [ N TT γγ F γ1 F γ2 + N TT γj F γ1 F j2 + N TT jγ F j1 F γ2 + N TT jj F jj ] / N TT .

Figure 4 shows the fit result for the full TT data set. The yields in the TITI sample are evaluated by multiplying N TT γγ by the integral of the 2-dimensional signal PDF in the region defined by E iso T,1 < 3 GeV and E iso T,2 < 3 GeV. The procedure is applied to the events belonging to each bin of the observables m γγ , p T,γγ , ∆φ γγ . The result is displayed in Figure 3, by the open triangles.

The main sources of systematic uncertainty are: (i) the definition of the non-tight sample; (ii) the fraction of fragmentation photons in the signal model; (iii) the detector material upstream of the calorimeter, affecting the e → γ extrapolation; (iv, v) the smoothing of the F j1 , F j2 and F jj background PDFs; (vi) the limited statistics of the jj control sample; (vii) the signal contamination in the non-tight sample. Effect (i) is estimated by changing the number of released strips cuts, as explained in Section V A. Effect (ii) has been estimated by artificially setting the fraction of fragmentation photons to 0% or to 100%. Effect (iii) has been quantified by repeating the e → γ extrapolation based on Monte Carlo samples with a distorted geometry. Effects (iv, v) have been estimated by randomly varying the parameters of the smoothing functions, within their covariance ellipsoid, and repeating the 2-dimensional fit. Effect (vi) has been estimated by randomly extracting a set of (E iso T,1 , E iso T,2 ) pairs, comparable to the experimental statistics, from the smoothed F jj PDF, then re-smoothing the obtained distribution and repeating the 2-dimensional fit. Effect (vii) has been estimated by taking the signal contamination from simulation, neglected when computing the central value.
[Figure 5. Definition of the four regions from the isolation (x-axis) and tight identification (y-axis) criteria for the classification of the leading photon candidate. When the leading photon belongs to region A, the same classification is applied to the sub-leading photon, as described by the bottom plane.]

Isolation vs identification sideband counting (2D-sidebands)
This method has been used in ATLAS in the inclusive photon cross-section measurement [10] and in the background decomposition in the search for the Higgs boson decaying into two photons [19].
The base di-photon sample consists of events where both candidates fulfil the selection with the strips cuts released, defined by the union of the tight and non-tight samples and here referred to as loose' (L'). The leading photons in the L'L' sample are divided into four categories A, B, C, D, depending on whether they satisfy the tight selection and/or the isolation requirement; see Figure 5 (top). The signal region, defined by tight and isolated photons (TI), contains N A candidates, whereas the three control regions contain N B , N C , N D candidates. Under the hypothesis that regions B, C, D are largely dominated by background, and that the isolation energy of the background has little dependence on the tight selection (as discussed in Section V A), the number of genuine leading photons N sig A in region A, coming from γγ and γj final states, can be computed [10] by solving the equation:

  N sig A = N A − R bkg (N B − c1 N sig A)(N C − c2 N sig A) / (N D − c1 c2 N sig A) .  (5)

Here, c1 and c2 are the signal fractions failing respectively the isolation requirement and the tight selection. The former is computed from the isolation distributions, as extracted in Section V A; the latter is evaluated from Monte Carlo simulation, after applying the corrections to adapt it to the experimental shower shape distributions [10]. The parameter R bkg = (N bkg A N bkg D)/(N bkg B N bkg C) measures the degree of correlation between the isolation energy and the photon selection in the background: it is set to 1 to compute the central values, then varied according to the "di-jet-like" Monte Carlo prediction for systematic studies.
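Equation (5) is implicit in N sig A but is easily solved by fixed-point iteration, since the signal-leakage terms are small corrections. A sketch with hypothetical region counts and leakage fractions (assuming the convention that c1 multiplies the region failing isolation and c2 the region failing the tight selection):

```python
def sideband_signal_yield(n_a, n_b, n_c, n_d, c1, c2, r_bkg=1.0, iterations=200):
    """Iterate n_sig = N_A - R_bkg (N_B - c1 n_sig)(N_C - c2 n_sig)/(N_D - c1 c2 n_sig)."""
    n_sig = n_a  # starting guess: everything in the signal region is signal
    for _ in range(iterations):
        n_sig = n_a - r_bkg * (n_b - c1 * n_sig) * (n_c - c2 * n_sig) / (n_d - c1 * c2 * n_sig)
    return n_sig

# hypothetical counts: 1000 candidates in the signal region, background-rich sidebands
n_sig = sideband_signal_yield(1000.0, 200.0, 300.0, 150.0, c1=0.05, c2=0.02)
```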
When the leading candidate is in the TI region, the sub-leading one is tested, and four categories A′, B′, C′, D′ are defined, as in the case of the leading candidate; see Figure 5 (bottom). The number of genuine sub-leading photons N′ sig A, due to γγ and jγ final states, is computed by solving an equation analogous to (5).
N sig A and N′ sig A are related to the yields by:

  N sig A = N TITI γγ / ε′ + N TITI γj / f′ ,   N′ sig A = N TITI γγ + N TITI jγ ,  (6)

where ε′ is the probability that a sub-leading photon satisfies the tight selection and isolation requirement, while f′ is the analogous probability for a jet faking a sub-leading photon; f′ can be computed from the observed quantities. The parameter α is defined as the fraction of photon-jet events in which the jet fakes the leading photon, α = N TITI jγ / (N TITI γj + N TITI jγ ), whose value is taken from the Pythia photon-jet simulation. The di-photon yield N TITI γγ is then obtained by solving this system of equations.
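Assuming the relations N sig A = N TITI γγ /ε′ + N TITI γj /f′ and N′ sig A = N TITI γγ + N TITI jγ implied by the definitions above, together with the α constraint, the system has a closed-form solution (all input numbers below are hypothetical):

```python
def diphoton_yield(n_sig_a, n_sig_a_prime, eps_p, f_p, alpha):
    """Solve for N_gg given:
       n_sig_a       = N_gg / eps_p + N_gj / f_p   (genuine leading photons)
       n_sig_a_prime = N_gg + N_jg                 (genuine sub-leading photons)
       N_jg = alpha / (1 - alpha) * N_gj           (alpha from simulation)"""
    a = alpha / (1.0 - alpha)
    # substitute N_gg = n_sig_a_prime - a * N_gj into the first relation
    n_gj = (n_sig_a - n_sig_a_prime / eps_p) / (1.0 / f_p - a / eps_p)
    return n_sig_a_prime - a * n_gj

# hypothetical inputs: eps' = 0.85, f' = 0.3, alpha = 0.2
yield_gg = diphoton_yield(1607.84, 850.0, 0.85, 0.3, 0.2)
```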
The counts and hence the yield, can be computed for all events entering a given bin of m γγ , p T,γγ , ∆φ γγ . The result is displayed in Figure 3, by the open squares.
The main source of systematic error is the definition of the non-tight sample: it induces an error of +7/−10%. The other effects come from the uncertainties on the parameters entering equation (6).

B. Electron background
Background from isolated electrons contaminates mostly the selected converted photon sample. The contamination in the di-photon analysis comes from several physical channels: (i) e + e − final states from Drell-Yan processes, Z → e + e − decay, W + W − → e + e − νν; (ii) γe ± final states from di-boson production, e.g. γW ± → γe ± ν, γZ → γe + e − . The effect of the Z → e + e − contamination is visible in Figure 3 in the mass bin 80 < m γγ < 100 GeV.
Rather than quantifying each physical process separately, a global approach is chosen. The events reconstructed with γγ, γe and ee final states are counted, thus obtaining counts N γγ , N γe and N ee . Only photons and electrons satisfying a tight selection and the calorimetric isolation E iso T < 3 GeV are considered, and electrons are counted only if they are not reconstructed at the same time as photons. Such counts are related to the actual underlying yields N true γγ , N true γe , N true ee , defined as the numbers of reconstructed final states where both particles are correctly classified. Introducing the ratio f e→γ = N e→γ / N e→e between genuine electrons that are wrongly and correctly classified, and likewise f γ→e = N γ→e / N γ→γ for genuine photons, the relationship between the N and N true quantities is described, to first order in the misclassification ratios, by the following linear system:

  N γγ = N true γγ + f e→γ N true γe + f² e→γ N true ee
  N γe = 2 f γ→e N true γγ + N true γe + 2 f e→γ N true ee
  N ee = f² γ→e N true γγ + f γ→e N true γe + N true ee

which can be solved for the unknown N true γγ . The value of f e→γ is extracted from collision data, as f e→γ = N γe / (2 N ee ), from events with an invariant mass within ±5 GeV of the Z mass. The continuum background is removed using symmetric sidebands. The result is f e→γ = 0.112 ± 0.005 (stat) ± 0.003 (syst), where the systematic error comes from variations of the mass window and of the sidebands. This method has been tested on "di-jet-like" and Z → e + e − Monte Carlo samples and shown to be unbiased. The value of f γ→e is taken from the "di-jet-like" Monte Carlo: f γ→e = 0.0077. To account for imperfect modelling, this value has also been set to 0, or to three times the nominal value, and the resulting variations are considered as a source of systematic error.
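To first order in the misclassification ratios, the counts mix through a 3×3 linear system that can be inverted directly. A sketch; the matrix below encodes the assumption that either of the two particles may be misclassified independently:

```python
def mixing_matrix(f_eg, f_ge):
    """Map true (N_gg, N_ge, N_ee) to observed (N_gg, N_ge, N_ee), to first
    order: a genuine e is reconstructed as gamma with relative rate f_eg,
    and a genuine gamma as e with relative rate f_ge."""
    return [[1.0,         f_eg, f_eg ** 2],
            [2.0 * f_ge,  1.0,  2.0 * f_eg],
            [f_ge ** 2,   f_ge, 1.0]]

def solve(matrix, rhs):
    """Gaussian elimination with partial pivoting."""
    n = len(matrix)
    aug = [row[:] + [b] for row, b in zip(matrix, rhs)]
    for i in range(n):
        pivot = max(range(i, n), key=lambda r: abs(aug[r][i]))
        aug[i], aug[pivot] = aug[pivot], aug[i]
        for r in range(i + 1, n):
            k = aug[r][i] / aug[i][i]
            for c in range(i, n + 1):
                aug[r][c] -= k * aug[i][c]
    out = [0.0] * n
    for i in reversed(range(n)):
        out[i] = (aug[i][n] - sum(aug[i][c] * out[c] for c in range(i + 1, n))) / aug[i][i]
    return out

# measured rates from the text: f_e->gamma = 0.112, f_gamma->e = 0.0077
m = mixing_matrix(0.112, 0.0077)
```

Given observed counts, `solve(m, [N_gg, N_ge, N_ee])[0]` yields the unfolded N true γγ .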
The electron contamination is estimated for each bin of m γγ , p T,γγ and ∆φ γγ , and subtracted from the diphoton yield. The result, as a function of m γγ , is shown in Figure 6. The fractional contamination as a function of p T,γγ and ∆φ γγ is rather flat, amounting to ∼ 5%.

VII. EFFICIENCIES AND UNFOLDING
The signal is defined as a di-photon final state, which must satisfy precise kinematic cuts (referred to as "fiducial acceptance"): • both photons must have a transverse momentum p_T^γ > 16 GeV and must lie in the pseudorapidity acceptance |η^γ| < 2.37, with the exclusion of the region 1.37 < |η^γ| < 1.52; • the separation between the two photons must be ∆R > 0.4; • both photons must be isolated, i.e. the transverse energy flow E_T^iso(part) due to interacting particles in a cone of angular radius R < 0.4 must be E_T^iso(part) < 4 GeV.
These kinematic cuts define a phase space similar to the experimental selection described in Section III. In particular, the requirement on E_T^iso(part) has been introduced to match approximately the experimental cut on E_T^iso. The value of E_T^iso(part) is corrected for the ambient energy, similarly to what is done for E_T^iso. Studies on a Pythia di-photon Monte Carlo sample show a high correlation between the two variables, with E_T^iso = 3 GeV corresponding to E_T^iso(part) ≃ 4 GeV. A significant number of di-photon events lying outside the fiducial acceptance pass the experimental selection because of resolution effects: these are referred to as "below threshold" (BT) events.
The background subtraction provides the di-photon signal yields for events passing all selections (TITI). Such yields are called N_i^TITI, where the index i labels the bins of the reconstructed observable X^rec under consideration (X being mγγ, pT,γγ, ∆φγγ). The relationship between N_i^TITI and the true yields n_α (α being the bin index of the true value X^true) is:

  N_i^TITI = ε^trigger · ε_i^TT · N_i^II ,   N_i^II = (1 − f_i^BT)^−1 Σ_α M_iα ε_α^RA n_α ,

where N_i^II is the number of reconstructed isolated di-photon events in the i-th bin, and: • ε^trigger is the trigger efficiency, computed for events where both photons satisfy the tight identification and the calorimetric isolation; • ε_i^TT is the efficiency of the tight identification, for events where both photons satisfy the calorimetric isolation; • f_i^BT is the fraction of "below-threshold" events; • M_iα is a "migration probability", i.e. the probability that an event with X^true in bin α is reconstructed with X^rec in bin i; • ε_α^RA accounts for both the reconstruction efficiency and the acceptance of the experimental cuts (kinematics and calorimetric isolation).
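The per-bin relation can be written in code as a forward folding of the true yields. This is a sketch with hypothetical toy bins and efficiencies, following the definitions above:

```python
import numpy as np

def expected_titi(n_true, M, eps_ra, f_bt, eps_tt, eps_trigger):
    """Fold true per-bin yields n_true (index alpha) into the expected
    reconstructed TITI yields (index i):
      N_II[i]   = (1 - f_bt[i])^-1 * sum_a M[i, a] * eps_ra[a] * n_true[a]
      N_TITI[i] = eps_trigger * eps_tt[i] * N_II[i]"""
    M = np.asarray(M, dtype=float)
    n_ii = (M * np.asarray(eps_ra, dtype=float)) @ np.asarray(n_true, dtype=float)
    n_ii /= (1.0 - np.asarray(f_bt, dtype=float))
    return eps_trigger * np.asarray(eps_tt, dtype=float) * n_ii

# Toy example: two bins with mild migration and illustrative efficiencies.
titi = expected_titi(n_true=[100.0, 200.0],
                     M=[[0.9, 0.1], [0.1, 0.9]],
                     eps_ra=[0.55, 0.55],
                     f_bt=[0.05, 0.05],
                     eps_tt=[0.6, 0.6],
                     eps_trigger=0.985)
```

The measurement proceeds in the opposite direction, from N_i^TITI back to n_α; the forward model above is what the unfolding of Section VII C effectively inverts.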

A. Trigger efficiency
The trigger efficiency is computed from collision data, for events containing two reconstructed photons with transverse energy E γ T > 16 GeV, both satisfying the tight identification and the calorimetric isolation requirement (TITI). The computation is done in three steps.
First, a level-1 e/γ trigger with an energy threshold of 5 GeV is studied: its efficiency, for reconstructed TI photons, is measured on an inclusive set of minimum-bias events; for E_T^γ > 16 GeV it is ε_0 = 100.0 +0.0 −0.1 %, therefore such a trigger does not bias the sample. Next, a high-level photon trigger with a 15 GeV threshold is studied, for reconstructed TI photons selected by the level-1 trigger: its efficiency is ε_1 = 99.1 +0.3 −0.4 % for E_T^γ > 16 GeV. Finally, di-photon TITI events with the sub-leading photon selected by a high-level photon trigger are used to compute the efficiency of the di-photon 15 GeV-threshold high-level trigger, obtaining ε_2 = 99.4 +0.5 −1.0 %. The overall efficiency of the trigger is therefore ε_trigger = ε_0 ε_1 ε_2 = (98.5 +0.6 −1.0 ± 1.0)%. The first uncertainty is statistical; the second is systematic and accounts for the contamination of photon-jet and di-jet events in the selected sample.
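The factorized combination can be sketched as follows. A simple quadrature propagation of the asymmetric statistical errors, assuming the three measurements are independent, roughly reproduces the quoted total; the exact treatment in the paper may differ in detail:

```python
import math

def combine_trigger(*effs):
    """Multiply factorized efficiencies given as (value, +err, -err) tuples
    and propagate the asymmetric errors in quadrature, assuming the
    individual measurements are independent."""
    val = 1.0
    for v, _, _ in effs:
        val *= v
    up = val * math.sqrt(sum((eu / v) ** 2 for v, eu, _ in effs))
    dn = val * math.sqrt(sum((ed / v) ** 2 for v, _, ed in effs))
    return val, up, dn

# The three efficiencies quoted in the text, with (+up, -down) errors.
val, up, dn = combine_trigger((1.000, 0.000, 0.001),
                              (0.991, 0.003, 0.004),
                              (0.994, 0.005, 0.010))
```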

B. Identification efficiency
The photon tight identification efficiency ε_T|I, for photon candidates satisfying the isolation cut E_T^iso < 3 GeV, is computed as described in Ref. [10], as a function of η^γ and E_T^γ. The efficiency is determined by applying the tight selection to a Monte Carlo photon sample, where the shower shape variables have been shifted to better reproduce the observed distributions. The shift factors are obtained by comparing the shower shapes of photon candidates from a "di-jet-like" Monte Carlo sample to those observed in collision data. To enhance the photon component in the sample, which is otherwise overwhelmed by the jet background, only the photon candidates satisfying the tight selection are considered. This procedure does not appreciably bias the bulk of the distribution under test, since the cuts have been tuned to reject only the tails of the photon distributions. However, to check the systematic effect due to the selection, the shift factors are also recomputed applying the loose selection.
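The idea behind the shift factors can be illustrated with a toy example; the variable, cut value and numbers below are purely illustrative, not the actual ATLAS shower-shape definitions:

```python
import numpy as np

def shift_factor(data_vals, mc_vals):
    """Shift factor for one shower-shape variable: difference between the
    mean of the data distribution and the mean of the simulated one."""
    return float(np.mean(data_vals) - np.mean(mc_vals))

def tight_efficiency(mc_vals, cut, shift=0.0):
    """Tight-selection efficiency re-evaluated on the shifted simulation,
    modelled here as a simple upper cut on a single variable."""
    return float(np.mean(np.asarray(mc_vals, dtype=float) + shift < cut))
```

Shifting the simulated distribution toward the data before applying the cut changes the predicted efficiency, which is exactly the correction the text describes.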
Compared to Ref. [10], the photon identification cuts have been re-optimized to reduce the systematic errors, and converted and unconverted photons are treated separately. The photon identification efficiency is η^γ-dependent and increases with E_T^γ, ranging from ∼ 60% for 16 < E_T^γ < 20 GeV to ≳ 90% for E_T^γ > 100 GeV. The overall systematic error is between 2% and 10%, the higher values applying at lower E_T^γ and for converted photons. The main sources of systematic uncertainty are: (i) the systematic error on the shift factors; (ii) the knowledge of the detector material; (iii) the failure to detect a conversion, which leads to applying the wrong tight identification.
Rather than computing an event-level identification efficiency for each bin of each observable, the photon efficiency can be naturally accommodated into the event weights described in Section VI A 1, by dividing the weight w^(k) of equation (4) by the product of the two photon efficiencies:

  N_i^TITI / ε_i^TT = Σ_k w^(k) / (ε_γ1^(k) ε_γ2^(k)) ,

where the sum extends over all events in the TT sample and in the i-th bin. Here the identification efficiencies of the two photons are assumed to be uncorrelated, which is ensured by the separation cut ∆R > 0.4 and by the binning in η^γ and E_T^γ. The event efficiency, ε^TT, is essentially flat at ∼ 60% in ∆φγγ, and increases with mγγ and pT,γγ, ranging from ∼ 55% to ∼ 80%. Its total systematic error is ∼ 10%, rather uniform over the mγγ, pT,γγ, ∆φγγ ranges.
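The per-event correction amounts to a weighted sum; a minimal sketch, where `weights`, `eff1` and `eff2` stand for the event weights and the two per-photon ID efficiencies looked up in the η^γ, E_T^γ binning:

```python
import numpy as np

def corrected_yield(weights, eff1, eff2):
    """Efficiency-corrected yield in one bin: each TT event weight is
    divided by the product of its two per-photon ID efficiencies,
    assumed uncorrelated."""
    w = np.asarray(weights, dtype=float)
    e1 = np.asarray(eff1, dtype=float)
    e2 = np.asarray(eff2, dtype=float)
    return float(np.sum(w / (e1 * e2)))
```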

C. Reconstruction, acceptance, isolation and unfolding
The efficiency ε_α^RA accounts for both the reconstruction efficiency and the acceptance of the experimental selection. It is computed for each bin of X^true, with Monte Carlo di-photon events generated with Pythia in the fiducial acceptance, as the fraction of events where both photons are reconstructed, pass the acceptance cuts and satisfy the calorimetric isolation. The value of ε_α^RA ranges between 50% and 60%. The two main sources of inefficiency are the local ECAL read-out failures (∼ −18%) and the calorimetric isolation (∼ −20%).
The energy scale differences between Monte Carlo and collision data -calibrated on Z → e + e − events -are taken into account. The uncertainties on the energy scale and resolution are propagated as systematic errors through the evaluation: the former gives an effect between +3% and −1% on the signal rate, while the latter has negligible impact.
In Monte Carlo, the calorimetric isolation energy, E_T^iso, needs to be corrected to match that observed in collision data. The correction is optimized on tight photons, for which the background contamination can be removed (see Section V A), and is then applied to all photons in the Monte Carlo sample. The E_T^iso difference observed between simulation and collision data may be entirely due to inaccurate Geant4/detector modelling, or it may also be a consequence of the physical model in the generator (e.g. kinematics, fragmentation, hadronization). From the comparison between collision data and simulation, the two effects cannot be disentangled. To compute the central values of the results, the difference is assumed to be entirely due to the detector simulation. As a cross-check, the opposite case is assumed: that the difference is entirely due to the generator model. In this case, the particle-level isolation E_T^iso(part) should also be corrected, using the E_T^iso(part) → E_T^iso relationship described by the detector simulation. This modifies the definition of the fiducial acceptance, and hence the values of ε_α^RA, resulting in a cross-section variation of ∼ −7%, which is handled as an asymmetric systematic uncertainty.
The fraction of events "below threshold", f BT i , is computed from the same Pythia signal Monte Carlo sample, for each bin of X rec . Its value is maximum (∼ 12%) for m γγ about twice the E T cut, and decreases to values < 5% for m γγ > 50 GeV.
The "migration matrix", M iα , is filled with Pythia Monte Carlo di-photon events in the fiducial acceptance, that are reconstructed, pass the acceptance cuts and the calorimetric isolation. The inversion of this matrix is performed with an unfolding technique, based on Bayesian iterations [20]. The systematic uncertainties of the procedure have been estimated with a large number of toy datasets and found to be negligible. The result has also been tested to be independent of the initial ("prior") distributions. Moreover, it has been checked that a simpler bin-by-bin unfolding yields compatible results.
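A minimal sketch of iterative Bayesian (D'Agostini-style) unfolding on a toy response matrix; the actual ATLAS implementation and its regularization details are not specified in the text, so this only illustrates the principle:

```python
import numpy as np

def bayes_unfold(n_rec, M, n_iter=30, prior=None):
    """Iterative Bayesian unfolding.  M[i, a] = P(rec bin i | true bin a);
    columns may sum to < 1 if some true events are never reconstructed.
    Each iteration applies Bayes' theorem to the current prior to build
    P(true a | rec i), then updates the estimated true spectrum."""
    n_rec = np.asarray(n_rec, dtype=float)
    M = np.asarray(M, dtype=float)
    n_true = (np.full(M.shape[1], n_rec.sum() / M.shape[1])
              if prior is None else np.asarray(prior, dtype=float))
    eff = M.sum(axis=0)  # probability for a true-bin event to be reconstructed
    for _ in range(n_iter):
        joint = M * n_true                    # ~ P(rec i, true a)
        folded = joint.sum(axis=1)            # expected reconstructed spectrum
        theta = joint / np.where(folded > 0.0, folded, 1.0)[:, None]
        n_true = (theta * n_rec[:, None]).sum(axis=0) / np.where(eff > 0.0, eff, 1.0)
    return n_true
```

For a diagonal response matrix the method returns the input spectrum after one iteration; with off-diagonal migrations it converges iteratively toward the true spectrum, the number of iterations acting as a regularization parameter.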
As the evaluation of ε_α^RA, f_i^BT and M_iα may depend strongly on the simulation modelling, two additional Monte Carlo samples have been used: the first with more material modelled in front of the calorimeter, and the second with a different generator (Sherpa). The differences in the computed signal rates are ∼ +10% and ≲ +5% respectively, and are treated as systematic errors.

VIII. CROSS-SECTION MEASUREMENT
The di-photon production cross-section is evaluated from the corrected binned yields n_α, divided by the integrated luminosity ∫L dt = (37.2 ± 1.3) pb⁻¹ [8]. The results are presented as differential cross-sections, as functions of the three observables mγγ, pT,γγ, ∆φγγ, for a phase space defined by the fiducial acceptance cuts in Section VII. In Table I, the differential cross-section is quoted for each bin, with its statistical and systematic uncertainty. In Table II, all the considered sources of systematic error are listed separately.
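Each entry of Table I is simply the corrected yield divided by the integrated luminosity and the bin width; a trivial sketch with toy numbers (only the luminosity value comes from the text):

```python
import numpy as np

def diff_xsec(n_corr, bin_edges, lumi_pb=37.2):
    """Differential cross-section per bin (e.g. pb/GeV): corrected yield
    divided by the integrated luminosity and the bin width."""
    widths = np.diff(np.asarray(bin_edges, dtype=float))
    return np.asarray(n_corr, dtype=float) / (lumi_pb * widths)
```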
The experimental measurement is compared with theoretical predictions from the DIPHOX [21] and ResBos [22] NLO generators in Figures 7, 8 and 9. The DIPHOX and ResBos evaluation has been carried out using the NLO fragmentation function [23] and the CTEQ6.6 parton density function (PDF) set [24]. The fragmentation, normalization and factorization scales are set equal to mγγ. The same fiducial acceptance cuts introduced in the signal definition (Section VII) are applied. Since neither generator models the hadronization, it is not possible to apply a requirement on E_T^iso(part): the closest isolation variable available in such generators is the "partonic isolation", and this is therefore required to be less than 4 GeV. The computed cross-section shows a weak dependence on the partonic isolation cut: moving it to 2 GeV or 6 GeV produces variations within 5%, smaller than the theoretical systematic errors.
The theory uncertainty bands come from scale and PDF uncertainties evaluated with DIPHOX: (i) variation of the renormalization, fragmentation and factorization scales: each is varied to mγγ/2 and 2mγγ, and the envelope of all variations is taken as the systematic error; (ii) variation of the PDF eigenvalues: each is varied by ±1σ, and positive/negative variations are summed in quadrature separately. As an alternative, the MSTW 2008 PDF set has been used: the difference with respect to CTEQ6.6 is an overall increase of ∼ 10%, which is covered by the CTEQ6.6 total systematic error.
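The envelope-plus-quadrature prescription can be sketched as follows. Combining the scale and PDF components in quadrature at the end is an assumption made here for illustration; the text does not specify how the two sources are merged:

```python
import numpy as np

def theory_band(nominal, scale_vars, pdf_vars):
    """Per-bin theory uncertainty: envelope over the scale variations, plus
    positive/negative PDF eigenvector shifts summed in quadrature
    separately; the two sources are then combined in quadrature
    (an illustrative choice)."""
    nominal = np.asarray(nominal, dtype=float)
    ds = np.asarray(scale_vars, dtype=float) - nominal   # (n_scale, n_bins)
    scale_up = np.clip(ds, 0.0, None).max(axis=0)
    scale_dn = np.clip(-ds, 0.0, None).max(axis=0)
    dp = np.asarray(pdf_vars, dtype=float) - nominal     # (n_eigen, n_bins)
    pdf_up = np.sqrt((np.clip(dp, 0.0, None) ** 2).sum(axis=0))
    pdf_dn = np.sqrt((np.clip(-dp, 0.0, None) ** 2).sum(axis=0))
    return np.hypot(scale_up, pdf_up), np.hypot(scale_dn, pdf_dn)
```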
The measured distribution of dσ/d∆φγγ (Figure 9) is clearly broader than the DIPHOX and ResBos predictions: more photon pairs are seen in data at low ∆φγγ values, while the theoretical predictions favour more back-to-back production (∆φγγ ≃ π). This result is qualitatively in agreement with previous measurements at the Tevatron [5, 6]. The distribution of dσ/dmγγ (Figure 7) agrees within the assigned uncertainties with both the DIPHOX and ResBos predictions, apart from the region mγγ < 2 E_T^cut (E_T^cut = 16 GeV being the applied cut on the photon transverse momenta): as this region is populated by events with small ∆φγγ, the poor quality of the predictions can be related to the discrepancy observed in the ∆φγγ distribution. The result for dσ/dpT,γγ (Figure 8) is in agreement with both DIPHOX and ResBos: the maximum deviation, about 2σ, is observed in the region 50 < pT,γγ < 60 GeV.

IX. CONCLUSIONS
This paper describes the measurement of the production cross-section of isolated di-photon final states in proton-proton collisions, at a centre-of-mass energy √s = 7 TeV, with the ATLAS experiment. The full data sample collected in 2010, corresponding to an integrated luminosity of (37.2 ± 1.3) pb⁻¹, has been analysed.
The selected sample consists of 2022 candidate events containing two reconstructed photons, with transverse momenta p T > 16 GeV and satisfying tight identification and isolation requirements. All the background sources have been investigated with data-driven techniques and subtracted. The main background source, due to hadronic jets in photon-jet and di-jet events, has been estimated with three computationally independent analyses, all based on shower shape variables and isolation, which give compatible results. The background due to isolated electrons from W and Z decays is estimated with collision data, from the proportions of observed ee, γe and γγ final states, in the Z-mass region and elsewhere.
The result is presented in terms of differential crosssections as functions of three observables: the invariant mass m γγ , the total transverse momentum p T,γγ , and the azimuthal separation ∆φ γγ of the photon pair. The experimental results are compared with NLO predictions obtained with DIPHOX and ResBos generators. The observed spectrum of dσ/d∆φ γγ is broader than the NLO predictions. The distribution of dσ/dm γγ is in good agreement with both the DIPHOX and ResBos predictions, apart from the low mass region. The result for dσ/dp T,γγ is generally well described by DIPHOX and ResBos.

X. ACKNOWLEDGEMENTS
We thank CERN for the very successful operation of the LHC, as well as the support staff from our institutions without whom ATLAS could not be operated efficiently.
TABLE I. Binned differential cross-sections dσ/dmγγ, dσ/dpT,γγ, dσ/d∆φγγ for di-photon production. For each bin, the differential cross-section is quoted with its statistical and systematic uncertainties (symmetric and asymmetric, respectively). Values quoted as 0.000 are actually less than 0.0005 in absolute value.

TABLE II. Breakdown of the total cross-section systematic uncertainty, for each bin of mγγ, pT,γγ and ∆φγγ. The meaning of each column is as follows: "T̃" is the definition of the non-tight control sample; "Ĩ" is the choice of the E_T^iso region used to normalize the non-tight sample; "matrix" refers to the statistical uncertainty of the matrix coefficients used by the event weighting; "e → γ" is the total systematic uncertainty coming from the electron fake rate; "ID" is the overall uncertainty coming from the method used to derive the identification efficiency; "material" is the effect of introducing a detector description with a distorted material distribution; "generator" shows the variation due to the use of a different generator (Sherpa instead of Pythia); "σ_E" and "E-scale" are due to uncertainties on the energy resolution and scale; "E_T^iso(part)" is the effect of smearing the particle-level isolation E_T^iso(part); "∫L dt" is the effect of the total luminosity uncertainty. Values quoted as 0.000 are actually less than 0.0005 in absolute value.