Measurement of Atmospheric Neutrino Oscillations at 6-56 GeV with IceCube DeepCore

We present a measurement of the atmospheric neutrino oscillation parameters using three years of data from the IceCube Neutrino Observatory. The DeepCore infill array in the center of IceCube enables detection and reconstruction of neutrinos produced by the interaction of cosmic rays in the Earth's atmosphere at energies as low as $\sim5$ GeV. That energy threshold permits measurements of muon neutrino disappearance, over a range of baselines up to the diameter of the Earth, probing the same range of $L/E_\nu$ as long-baseline experiments but with substantially higher energy neutrinos. This analysis uses neutrinos from the full sky with reconstructed energies from $5.6$ to $56$ GeV. We measure $\Delta m^2_{32}=2.31^{+0.11}_{-0.13} \times 10^{-3}$ eV$^2$ and $\sin^2 \theta_{23}=0.51^{+0.07}_{-0.09}$, assuming normal neutrino mass ordering. These results are consistent with, and of similar precision to, those from accelerator and reactor-based experiments.


INTRODUCTION
It is well established that the neutrino mass eigenstates do not correspond to the neutrino flavor eigenstates, leading to flavor oscillations as neutrinos propagate through space [1,2]. After traveling a distance $L$, a neutrino of energy $E$ may be detected with a different flavor than it was produced with. In particular, the muon neutrino survival probability is described approximately by
$$P(\nu_\mu \to \nu_\mu) \approx 1 - 4|U_{\mu3}|^2\left(1-|U_{\mu3}|^2\right)\sin^2\!\left(\frac{\Delta m^2_{32} L}{4E}\right), \quad (1)$$
where $U_{\mu3} = \sin\theta_{23}\cos\theta_{13}$ is one element of the PMNS [3,4] matrix $U$ expressed in terms of the mixing angles $\theta_{23}$ and $\theta_{13}$, and $\Delta m^2_{32} = m^2_3 - m^2_2$ is the splitting of the second and third neutrino mass states that drives oscillations on the length and energy scales relevant to this analysis. In addition to the parameters shown in Eq. (1), neutrino oscillations also depend on the parameters $\theta_{12}$, $\Delta m^2_{21}$, and $\delta_{CP}$, but these have a negligible effect on the data presented in this paper.
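As an illustration, the approximate vacuum survival probability of Eq. (1) can be evaluated numerically. This is a minimal sketch with illustrative function and parameter names (not the analysis code); the default parameter values are the best-fit and input values quoted in this paper, and matter effects are neglected:

```python
import numpy as np

def pmumu(E_GeV, L_km, dm32=2.31e-3, s2th23=0.51, s2th13=2.17e-2):
    """Approximate two-flavor nu_mu survival probability of Eq. (1).

    E_GeV: neutrino energy in GeV; L_km: baseline in km;
    dm32: mass splitting in eV^2; s2th23, s2th13: sin^2 of the mixing angles.
    """
    # |U_mu3|^2 = sin^2(theta_23) * cos^2(theta_13)
    U2 = s2th23 * (1.0 - s2th13)
    # 1.267 converts dm^2 [eV^2] * L [km] / E [GeV] into a phase in radians
    phase = 1.267 * dm32 * L_km / E_GeV
    return 1.0 - 4.0 * U2 * (1.0 - U2) * np.sin(phase) ** 2
```

For an Earth-diameter baseline ($L \approx 1.27\times10^4$ km), this sketch reproduces the near-complete disappearance around 25 GeV described below.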
Interactions of cosmic rays in the atmosphere [5-7] provide a large flux of neutrinos traveling distances ranging from $L \sim 20$ km (vertically down-going) to $L \sim 1.3 \times 10^4$ km (vertically up-going) to a detector near the Earth's surface. For up-going neutrinos, there is complete muon neutrino disappearance at energies as high as $\sim 25$ GeV. Given the density of material traversed by these neutrinos, matter effects alter Eq. (1) slightly and must be taken into account [8-11].
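As a quick cross-check of the quoted disappearance energy, setting the vacuum oscillation phase $1.267\,\Delta m^2_{32} L/E$ equal to $\pi/2$ for an Earth-diameter baseline gives the energy of the first disappearance maximum. The values below are illustrative, and matter effects (which shift this slightly) are ignored:

```python
import math

dm32 = 2.31e-3   # eV^2, best-fit value from this analysis
L = 1.27e4       # km, roughly the diameter of the Earth

# First nu_mu disappearance maximum: 1.267 * dm32 * L / E = pi/2
E_first_min = 1.267 * dm32 * L / (math.pi / 2.0)
print(round(E_first_min, 1))  # ~24 GeV, consistent with the ~25 GeV quoted above
```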
In this Letter, we report our measurement of $\theta_{23}$ and $\Delta m^2_{32}$, using the IceCube Observatory to observe oscillation-induced patterns in the atmospheric neutrino flux coming from all directions between 5.6 GeV and 56 GeV. The results presented here complement other leading experiments [12-16] in two ways. Long-baseline experiments with baselines of a few hundred kilometers and Super-Kamiokande observe much lower energy events (primarily charged-current quasi-elastic and resonant scattering), while our measurement relies on higher energy deep inelastic scattering events and is thus subject to different sources of systematic uncertainty [17]. In addition, the higher energy range of IceCube neutrinos provides complementary constraints on potential new physics in the neutrino sector [18-27].
The IceCube detector was fully commissioned in 2011 and we previously reported results [28] using data from May 2011 through April 2014. Those results were obtained using reconstruction tools that relied on unscattered Cherenkov photons and therefore were less susceptible to detector noise. The results presented here use a new reconstruction that includes scattered photons and retains an order of magnitude more events per year. Because the detector's noise rates were still stabilizing during the first year of operation, and the new reconstruction is more susceptible to noise, we chose before unblinding to use data from April 2012 through May 2015.

THE ICECUBE DEEPCORE DETECTOR
The IceCube In-Ice Array [29] is composed of 5160 downward-looking 10" photomultiplier tubes (PMTs) embedded in a 1 km$^3$ volume of the South Pole glacial ice at depths between 1.45 and 2.45 km. The PMTs and associated electronics are enclosed in glass pressure spheres to form digital optical modules (DOMs) [30,31]. The DOMs are deployed on 86 vertical strings of 60 modules each. Of these strings, 78 are deployed in a triangular grid with horizontal spacing of about 125 m between strings. These DOMs are used primarily as an active veto to reject atmospheric muon events in this analysis. The remaining 8 strings fill a more densely instrumented $\sim 10^7$ m$^3$ volume of ice in the bottom center of the detector, called DeepCore, enabling detection of neutrinos with energies down to $\sim5$ GeV [32].
Neutrino interactions in DeepCore are simulated with GENIE [33]. Hadrons produced in these interactions are simulated using GEANT4 [34], as are electromagnetic showers below 100 MeV. At higher energies, shower-to-shower variation is small enough to permit the use of standardized light emission templates [35] based on GEANT4 simulations to reduce computation time. Muon energy losses in the ice are simulated using the PROPOSAL package [36]. Cherenkov photons produced by showers and muons are tracked individually using GPU-based software to simulate scattering and absorption [37].

RECONSTRUCTION AND EVENT SELECTION
The event reconstruction used in this analysis models the scattering of Cherenkov photons in the ice surrounding our DOMs [38] to calculate the likelihood of the observed photoelectrons as a function of the neutrino interaction position, direction, and energy. Given the complexity of this likelihood space, the MultiNest algorithm [39] is used to find the global maximum. This reconstruction is run under two different event hypotheses: first a $\nu_\mu$ charged-current (CC) interaction comprising a hadronic shower and collinear muon track emerging from the interaction vertex, and then with only a shower at the vertex (i.e., a nested hypothesis with zero muon track length). The latter model incorporates $\nu_e$ and most $\nu_\tau$ CC interactions as well as neutral current interactions, as we do not attempt to separate electromagnetic showers produced by a leading lepton from hadronic showers produced by the disrupted nucleus.
The $\nu_\mu$ CC reconstruction is used to estimate the direction and energy of the neutrino. The difference in best-fit likelihoods between the two hypotheses is used to classify our events as "track-like," if inclusion of a muon track improves the fit substantially, or "cascade-like," if the event is equally well fit without a muon. The reconstructed neutrino energy ($E_{\rm reco}$) distributions of events in each of these categories after final selection are shown in Fig. 1, along with the corresponding predicted distributions broken down by event type. The track-like sample is enriched in $\nu_\mu$ CC events (68% of the sample), especially at higher energies where muons are more likely to be detected, while the cascade-like sample is evenly divided between $\nu_\mu$ CC events and interactions without a muon in the final state. The angular and energy resolutions provided by the reconstruction are energy-dependent, with median resolutions of $10^\circ$ ($16^\circ$) in zenith angle and 24% (29%) in neutrino energy for track-like (cascade-like) events at $E_\nu = 20$ GeV.
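The track/cascade classification described above can be sketched as a simple likelihood-ratio cut. The function name and the threshold value below are hypothetical placeholders, not the actual analysis cut:

```python
def classify(logL_track, logL_cascade, threshold=2.0):
    """Classify an event from the best-fit log-likelihoods of the two
    nested hypotheses.

    Returns 'track-like' if adding a muon track to the shower hypothesis
    improves the fit by more than `threshold` log-likelihood units
    (an illustrative value), else 'cascade-like'.
    """
    return "track-like" if (logL_track - logL_cascade) > threshold else "cascade-like"
```

Because the track hypothesis nests the cascade hypothesis, its best-fit likelihood is never worse; the cut asks whether the improvement is large enough to indicate a genuine muon.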
The event selection in this analysis uses the DOMs surrounding the DeepCore region to veto atmospheric muons. The first criteria remove accidental triggers caused by dark noise by demanding a minimum amount of light detected in the DeepCore volume, with timing and spatial scale consistent with a particle emitting Cherenkov radiation. Events in which photons are observed outside the DeepCore volume before the light detected inside DeepCore, in a time window consistent with atmospheric muons penetrating to the fiducial volume, are then rejected. These are followed by a boosted decision tree (BDT) [40] which further reduces the background of atmospheric muons. The BDT uses the timing and spatial scale of the detected photoelectrons to select events with substantial charge deposition at the beginning of the event, indicative of a neutrino interaction vertex. It also considers how close the event is to the border of the DeepCore volume and the results of several fast directional reconstructions [41] in determining whether the event may be an atmospheric muon. Finally, we demand that the interaction vertex reconstructed by the likelihood fit described above be contained within DeepCore and the end of the reconstructed muon be within the first row of DOMs outside DeepCore, which further reduces atmospheric muon contamination and improves reconstruction accuracy.
As these selection criteria reduce the atmospheric muon rate by a factor of approximately $10^8$, it is challenging to simulate enough atmospheric muons to obtain a reliable prediction for the distribution of the remaining muons, especially in the presence of systematic uncertainties. We instead use a data-driven estimate of the shape of the muon background distributions, with the normalization free to float. This approach is based on tagging events that would have been accepted except for a small number of photons detected in the veto region, similar to the procedure in Ref. [28]. The uncertainty in the background shape is estimated using two different criteria for tagging these events, and was compared to the currently available muon Monte Carlo. This uncertainty is added in quadrature to the statistical uncertainties in the tagged background event sample and the neutrino Monte Carlo, to provide the total uncorrelated statistical uncertainty $\sigma^{\rm uncor}_{\nu+\mu_{\rm atm}}$ in the expected distribution shown in Fig. 1.
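The quadrature combination described above can be sketched as follows; the array names are illustrative, standing for the per-bin neutrino-MC statistical uncertainty, the tagged-background statistical uncertainty, and the background-shape uncertainty:

```python
import numpy as np

def sigma_uncor(sig_mc_nu, sig_tag_stat, sig_tag_shape):
    """Total uncorrelated per-bin uncertainty: the three contributions
    (neutrino-MC statistics, tagged-sample statistics, background-shape
    uncertainty) added in quadrature."""
    return np.sqrt(np.asarray(sig_mc_nu, float) ** 2
                   + np.asarray(sig_tag_stat, float) ** 2
                   + np.asarray(sig_tag_shape, float) ** 2)
```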

ANALYSIS
The final fit of the data is done using an $8 \times 8 \times 2$ binned histogram, with 8 bins in $\log_{10} E_{\rm reco}$, 8 bins in the cosine of the reconstructed neutrino zenith direction ($\cos\theta_{z,{\rm reco}}$), and one track-like and one cascade-like bin. The bins are equally spaced with $\cos\theta_{z,{\rm reco}} \in [-1, 1]$ and $\log_{10} E_{\rm reco} \in [0.75, 1.75]$. The fit assumes three-flavor oscillations with $\Delta m^2_{21} = 7.53 \times 10^{-5}$ eV$^2$, $\sin^2\theta_{12} = 0.304$, $\sin^2\theta_{13} = 2.17 \times 10^{-2}$, and $\delta_{CP} = 0^\circ$. We use MINUIT2 [42] to minimize the function
$$\chi^2 = \sum_i \frac{\left(n^{\nu+\mu_{\rm atm}}_i - n^{\rm data}_i\right)^2}{\left(\sigma^{\rm data}_i\right)^2 + \left(\sigma^{\rm uncor}_{\nu+\mu_{\rm atm},i}\right)^2} + \sum_j \frac{\left(s_j - \hat{s}_j\right)^2}{\hat{\sigma}^2_{s_j}}, \quad (2)$$
where $n^{\nu+\mu_{\rm atm}}_i$ is the number of events expected in the $i$th bin, which is the sum of neutrino events weighted to the desired oscillation parameters using Prob3++ [43] and the atmospheric muon background. The number of events observed in the $i$th bin is $n^{\rm data}_i$, with Poisson uncertainty $\sigma^{\rm data}_i = \sqrt{n^{\rm data}_i}$, and $\sigma^{\rm uncor}_{\nu+\mu_{\rm atm},i}$ is the uncertainty in the prediction for the $i$th bin, including both the effects of finite MC statistics and the uncertainties in our data-driven muon background estimate. The second term of Eq. (2) is a penalty term for our nuisance parameters, where $s_j$ is the value of the $j$th systematic, $\hat{s}_j$ is its central value, and $\hat{\sigma}^2_{s_j}$ is the Gaussian width of the $j$th systematic prior.
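A minimal sketch of the $\chi^2$ of Eq. (2), assuming simple array inputs; the function and argument names are illustrative, not the actual fit code:

```python
import numpy as np

def chi2_fit(n_pred, n_data, sig_uncor, s, s_hat, sig_s):
    """Modified chi-squared of Eq. (2).

    n_pred: expected counts per bin (neutrinos + atmospheric muons);
    n_data: observed counts per bin; sig_uncor: uncorrelated per-bin
    uncertainty; s, s_hat, sig_s: nuisance-parameter values, prior
    centers, and prior widths.
    """
    n_pred = np.asarray(n_pred, float)
    n_data = np.asarray(n_data, float)
    # Poisson variance of the data (sigma_data^2 = n_data) plus the
    # uncorrelated prediction uncertainty, summed in quadrature
    denom = n_data + np.asarray(sig_uncor, float) ** 2
    bin_term = np.sum((n_pred - n_data) ** 2 / denom)
    # Gaussian penalty for each nuisance parameter with a prior
    prior_term = np.sum(((np.asarray(s, float) - np.asarray(s_hat, float))
                         / np.asarray(sig_s, float)) ** 2)
    return bin_term + prior_term
```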
The analysis includes eleven nuisance parameters describing our systematic uncertainties, summarized in Table I. Seven of these are related to systematic uncertainties in the atmospheric neutrino flux and interaction cross sections. Since only the event rate is observed directly, some uncertainties in flux and cross section have similar effects on the data. In these cases, the degenerate effects are combined into a single parameter. Because analytical models of these effects are available, these parameters can be varied continuously by reweighting simulated events.
The first nuisance parameter is the overall normalization of the event rate. It is affected by uncertainties in the atmospheric neutrino flux and the neutrino interaction cross section, and by the possibility of accidentally vetoing neutrino events due to unrelated atmospheric muons detected in the veto volume. This last effect is expected to reduce the neutrino rate by several percent, but is not included in the present simulations. Because of this, and because the parameter encompasses several effects, no prior is used for it.
A second parameter allows an energy-dependent shift in the event rate. This can arise from uncertainties in either the spectral index of the atmospheric flux (nominally $\gamma = -2.66$ at the relevant energies in our neutrino flux model [7]), or the deep inelastic scattering (DIS) cross section. A prior of $\hat{\sigma}_s = 0.10$ is placed on the spectral index to describe the range of these uncertainties.
Several uncertainties on the DIS cross section were implemented in the fit, but found either to have negligible impact or to be highly degenerate with the normalization and spectral index parameters over the energy range of this analysis. These include variation of parameters of the Bodek-Yang model [44] used in GENIE, uncertainties in the differential DIS cross-section, and hadronization uncertainties for high-W DIS events [45]. As these effects are captured by the first two nuisance parameters, the additional parameters were not used.
One neutrino cross-section uncertainty was not well described by these parameters: the uncertainty of the axial mass form factor for resonant events. The default value of 1.12 GeV and prior of 0.22 GeV were taken from GENIE [33]. Uncertainties in CCQE interactions were also investigated but had no impact on the analysis due to the small percentage of CCQE events at these energies. The normalizations of $\nu_e + \bar{\nu}_e$ events and NC events, defined relative to $\nu_\mu + \bar{\nu}_\mu$ CC events, are both assigned an uncertainty of 20%. Uncertainties in hadron production (especially pions and kaons) in air showers affect the predicted flux, in particular the ratio of neutrinos to antineutrinos. We model these hadronic flux effects with two parameters, one dependent on neutrino energy and the other on the zenith angle, chosen to reproduce the uncertainties estimated in Ref. [46]. Their total uncertainty varies from 3% to 10% depending on the energy and zenith angle, so the fit result is given in units of $\sigma$ as calculated by Barr et al. [46].

Systematics related to the response of the detector itself, including photon propagation through the ice and the anisotropic sensitivity of the DOMs, have the largest impact on this analysis. Their effects are estimated by Monte Carlo simulation at discrete values, with the contents of each bin in the (energy, direction, track/cascade) analysis histogram determined by linear interpolation between the discrete simulated models, following the approach of Refs. [27,28].
Uncertainties in the efficiency of photon detection are driven by the formation of bubbles in the refrozen ice columns in the holes where the IceCube strings were deployed. A prior with a width of 10% was applied to the overall photon collection efficiency [29], parametrized using seven MC data sets ranging from 88% to 112% of the nominal optical efficiency. In addition to modifying the absolute efficiency, these bubbles can scatter Cherenkov photons near the DOMs, modulating the relative optical efficiency as a function of the incident photon angle. The effect of the refrozen ice column is modeled by two effective parameters controlling the shape of the DOM angular acceptance curve.
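The per-bin linear interpolation between discrete detector-systematics MC sets can be sketched as follows. The seven efficiency grid points span the 88%-112% range quoted above, but their exact spacing and the bin counts here are illustrative placeholders:

```python
import numpy as np

# Hypothetical grid of simulated optical efficiencies (fractions of nominal),
# spanning the 88%-112% range used in the analysis
eff_points = np.array([0.88, 0.92, 0.96, 1.00, 1.04, 1.08, 1.12])

def bin_expectation(eff, counts_at_points):
    """Expected count in one analysis bin at an arbitrary optical
    efficiency, linearly interpolated between the discrete MC sets."""
    return np.interp(eff, eff_points, np.asarray(counts_at_points, float))
```

In the fit, this interpolation is evaluated independently for every (energy, direction, track/cascade) bin, so the efficiency parameter can be varied continuously by the minimizer.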
The first parameter controls the lateral angular acceptance (i.e., relative sensitivity to photons traveling roughly $20^\circ$ above versus below the horizontal) and is fairly well constrained by LED calibration data. Five MC data sets were generated covering the $-1\sigma$ to $+1\sigma$ uncertainty from the LED calibration, and parametrized in the same way as the overall optical efficiency described above. A Gaussian prior based on the LED data is used.
The second parameter controls sensitivity to photons traveling vertically upward and striking the DOMs head-on, which is not well constrained by string-to-string LED calibration. That effect is modeled using a dimensionless parameter ranging from $-5$ (corresponding to a bubble column completely obscuring the DOM face for vertically incident photons) to 2.5 (no obscuration). Zero corresponds to constant sensitivity for angles of incidence from $0^\circ$ to $30^\circ$ from vertical. Six MC sets covering the range from $-5$ to 2 were used to parametrize this effect. No prior is applied to this parameter due to lack of information from calibration data.
The last nuisance parameter controls the level of atmospheric muon contamination in the final sample. As described above, the shape of this background in the analysis histogram, including binwise uncertainties, is derived from data. Since the absolute efficiency for tagging background events with this method is unknown, the normalization of the muon contribution is left free in the fit.
In addition to the systematic uncertainties discussed above, we have considered the impact of seed dependence in our event reconstruction, different optical models for both the undisturbed ice and the refrozen ice columns, and an improved detector calibration currently being prepared. In all these cases the impact on the final result was found to be minor, and they were thus omitted from the fit and the error estimate.

RESULTS AND CONCLUSION
The analysis procedure described above gives a best fit of $\Delta m^2_{32} = 2.31^{+0.11}_{-0.13} \times 10^{-3}$ eV$^2$ and $\sin^2\theta_{23} = 0.51^{+0.07}_{-0.09}$, assuming normal neutrino mass ordering (NO). For the inverted mass ordering (IO), the best fit shifts to $\Delta m^2_{32} = -2.32 \times 10^{-3}$ eV$^2$ and $\sin^2\theta_{23} = 0.51$. The pulls on the nuisance parameters are shown in Table I. Though IceCube's current sensitivity to the mass ordering is low, dedicated analyses are underway to measure this.
The data agree well with the best-fit MC data set, with $\chi^2 = 117.4$ for both neutrino mass orderings. This corresponds to a p-value of 0.52 given the 119 effective degrees of freedom estimated via toy MCs, following the procedure described in Ref. [27].
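The quoted goodness of fit can be cross-checked directly from the $\chi^2$ distribution, using the effective degrees of freedom estimated from the toy MCs:

```python
from scipy.stats import chi2

# Survival function of the chi-squared distribution: the probability of
# obtaining chi^2 >= 117.4 with 119 effective degrees of freedom
p_value = chi2.sf(117.4, df=119)
print(round(p_value, 2))  # ~0.52, matching the quoted p-value
```

Note that the 119 effective degrees of freedom are themselves an output of the toy-MC procedure of Ref. [27], not simply the bin count minus the number of fitted parameters.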
To better visualize the fit, Fig. 2 shows the results of the fit projected onto a single $L/E$ axis, for both the track-like and cascade-like events. The two peaks in each distribution correspond to down-going and up-going neutrino trajectories. Up-going $\nu_\mu + \bar{\nu}_\mu$ are strongly suppressed in the track-like channel due to oscillations. Some suppression of up-going cascade-like data is also visible, due to disappearance of lower-energy $\nu_\mu$ which are not tagged as track-like by our reconstruction. Figure 3 shows the region of $\sin^2\theta_{23}$ and $\Delta m^2_{32}$ allowed by our analysis at 90% C.L., along with our best fit and several other leading measurements of these parameters [12-14,16]. The contours are calculated using the approach of Feldman and Cousins [47] to ensure proper coverage.
Our results are consistent with those from other experiments [12-16], but were obtained using significantly higher energy neutrinos and are subject to a different set of systematic uncertainties. Our data prefer maximal mixing, similar to the result from T2K [13]. The best-fit values from the NO$\nu$A experiment [14] are disfavored by $\Delta\chi^2 = 8.9$ (first octant) or $\Delta\chi^2 = 8.8$ (second octant), corresponding to a significance of $2.6\sigma$ using the method of Feldman and Cousins, although there is considerable overlap in the 90% confidence regions of the two measurements. Further improvements to our analysis are underway, including the incorporation of additional years of data, extensions of our event selections, and improved calibration of the detector response.

SUPPLEMENTAL MATERIAL
Control of systematic uncertainties in this analysis relies fundamentally on the use of the full 3D space of neutrino energy, arrival direction (correlated with path length through the Earth), and particle type to disentangle systematic uncertainties from the neutrino oscillation physics of interest. Oscillation effects have a distinctive shape in the $L/E_\nu$ space and primarily affect $\nu_\mu$ CC events, while systematic effects have a much broader impact on the data. A complete description of our methodology will be included in a more detailed forthcoming paper which extends this analysis to measure the rate of $\nu_\tau$ appearance. In this supplement, we provide an abbreviated discussion to illustrate the approach.
This analysis divides the data into 128 bins in three dimensions: eight bins in $\log_{10}(E_{\nu,{\rm reco}})$, eight bins in $\cos(\theta_{z,{\rm reco}})$, and two bins for particle identification, track-like and cascade-like. Ideally, $\nu_\mu$ CC events are classified as track-like while $\nu_e$, $\nu_\tau$, and NC events should be classified as cascade-like; in practice there is leakage between the samples due to imperfect particle identification. In our letter, we provide projections of the underlying data so that the oscillatory behavior in reconstructed $L/E_\nu$ and the energy range of the data set may be seen clearly. The full 3D distribution of the data is shown in Fig. S1.
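The $8 \times 8 \times 2$ binning described above can be sketched with a standard multidimensional histogram; the function name and input arrays are illustrative placeholders:

```python
import numpy as np

# Bin edges from the analysis definition: 8 equal bins in log10(E_reco)
# over [0.75, 1.75], 8 in cos(zenith) over [-1, 1], and 2 PID categories
e_edges = np.linspace(0.75, 1.75, 9)    # log10(E_reco / GeV)
z_edges = np.linspace(-1.0, 1.0, 9)     # cos(theta_z,reco)
pid_edges = np.array([-0.5, 0.5, 1.5])  # 0 = cascade-like, 1 = track-like

def fill_histogram(log10_e, cos_zen, pid):
    """Return the 8 x 8 x 2 event-count histogram used in the fit."""
    sample = np.column_stack([log10_e, cos_zen, pid])
    counts, _ = np.histogramdd(sample, bins=[e_edges, z_edges, pid_edges])
    return counts
```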
Oscillations affect upward-going $\nu_\mu$ CC events, which are enriched in the track-like sample but also contribute to the cascade-like sample (especially at lower reconstructed energy). Downward-going neutrinos, with baselines too short for oscillations to occur, and cascade-like events provide constraints on systematic uncertainties related to the atmospheric flux, neutrino interactions, and detector response. Our measurement of the oscillation parameters is obtained mainly from the higher energy bins, as shown in Figs. S2 and S3, due to the improved angular and energy resolution and particle identification accuracy we obtain at higher energies. These figures show how the $\chi^2$ sum would change if the values of $\Delta m^2_{32}$ and $\sin^2(\theta_{23})$ were increased by $1\sigma$ from their best-fit values. The relative effects on the event rate itself are shown in Figs. S4 and S5. Note that while a number of $\nu_\mu$ CC events are incorrectly reconstructed as cascade-like, as shown in Fig. 2 of the letter, their impact on the measurement of the oscillation parameters is relatively small compared to the high-energy track-like sample.
The patterns shown in Figs. S2 and S3 are the oscillation signature that must be distinguished from the effects of systematics. There are eleven different systematic uncertainties implemented in this analysis, each with its own distinctive signature. Crucially, all of these effects have a much broader impact on the data set than the oscillation physics: large numbers of bins in both the cascade-like and track-like samples are affected.
As an example, the impact of a shift in the overall optical efficiency of the DOMs, and the range of bins which constrain this effect, are shown in Figs. S6 and S7. Figure S6 shows how a 10% increase in the collection efficiency for Cherenkov photons (the width of the a priori uncertainty in this parameter) would affect the event distribution, with all other parameters held fixed. If only upward-going track-like events in the 10-40 GeV range were considered, there would be a strong correlation with $\sin^2(\theta_{23})$: higher optical efficiency produces an increase in the event rate, which partly fills in the disappearance minimum. However, the effect of varying the optical efficiency extends throughout both the cascade-like and track-like distributions, affecting both upward-going and downward-going events at all energies, while the effect of oscillations is well localized to the 10-40 GeV range at angles of $\cos(\theta_{z,{\rm reco}}) < -0.6$, as shown in Fig. S5. Consideration of the full energy vs. angle distribution of both samples thus allows the two effects to be disentangled effectively. As shown in Fig. S7, the constraint on this systematic arises almost entirely from the cascade sample; due to the correlations with the oscillation parameters, the fitter does not obtain constraints on this systematic from up-going high-energy $\nu_\mu$ tracks.

Shifting the measured value of $\Delta m^2_{32}$ is particularly difficult, since it requires shifting the position of the oscillation dip. Such a shift requires both increases and decreases in the event rate in neighboring bins, while leaving most of the distribution unaffected, as shown in Fig. S4. This complexity permits relatively tight constraints on that parameter. The optical efficiency of the DOMs is the leading contributor to the uncertainty budget for this parameter, and is in fact the systematic most closely correlated individually with the oscillation parameters (see Fig. S8 below). Even so, perfect knowledge of the optical efficiency would only improve the precision of the measurement of this parameter by about 20%, given current angular and energy resolutions and current knowledge of other systematic effects.
In practice, all eleven systematic uncertainties and both oscillation parameters are fitted simultaneously. The optimization routine explores all possible combinations of systematics that could mimic the more tightly localized effect of oscillation physics at specific ranges of energy and zenith angle. The correlation matrix shown in Fig. S8 illustrates the correlations between the systematic parameters and the oscillation parameters. While some of the systematics are correlated with each other, the wide ranges of baselines and energies over which we observe neutrinos enable us to disentangle these effects from the distortions of the observed atmospheric neutrino flux produced by neutrino oscillations.