New dark matter analysis of Milky Way dwarf satellite galaxies with MADHATv2

We obtain bounds on dark matter annihilation using 14 years of publicly available Fermi-LAT data from a set of 54 dwarf spheroidal galaxies, using spectral information from 16 energy bins. We perform this analysis using our updated and publicly available code MADHATv2, which can be used to test a variety of models for dark matter particle physics and astrophysics in an accessible manner. In particular, we note that including Carina III in the analysis strengthens constraints on $s$-wave annihilation into two-body Standard Model final states by a factor of $\sim 3$, but broadens the error band on the constraint due to the large uncertainty in its $J$-factor. Our findings illustrate the importance of verifying whether Carina III is in fact a dwarf spheroidal galaxy and of measuring its $J$-factor more precisely. More generally, they highlight the significance of forthcoming discoveries of nearby ultra-faint dwarfs for dark matter indirect detection.


I. INTRODUCTION
A key strategy for searching for dark matter (DM) interactions with the Standard Model is indirect detection [1,2]. Indirect searches seek to observe the Standard Model products of DM annihilation (or decay) in astrophysical systems. Dwarf spheroidal galaxies (dSphs) in the Local Group are among the best systems for gamma-ray searches, given their proximity and high mass-to-light ratios. These DM-dominated systems are nearly free of intrinsic backgrounds due to their low gas content and lack of recent star formation [3].
Indirect detection analyses involve searching for an excess of gamma rays, over the emission of diffuse Galactic foregrounds, extragalactic backgrounds, and point sources, from the direction of dSphs. In order to account for these collective backgrounds, they either need to be theoretically modeled [4,5] or determined empirically [6-8]. In the absence of an excess, robust constraints can be placed on the DM annihilation cross section as a function of DM mass. In the GeV to TeV mass range, there are strong limits on DM annihilation from gamma-ray searches using Cherenkov telescopes [9-16] and the Large Area Telescope (LAT) on the Fermi Gamma-Ray Space Telescope [4,5,17-20].
In this work, we present up-to-date limits on DM annihilation using Fermi-LAT data associated with 54 dSphs. We use an updated version of our publicly available code, MADHAT [21], which models astrophysical backgrounds empirically [6-8] using Fermi-LAT data taken slightly off-axis from each dSph. The original version of MADHAT performed a simple stacked analysis of the dSphs. MADHATv2 incorporates additional features presented in Ref. [7]: it performs a weighted stacked analysis, in which photons from each dSph and energy bin are assigned an independent weight. We can then choose a test statistic with the maximum power to distinguish a model of DM particle physics (including particle mass and annihilation channel) and astrophysics (encoded in the J-factors) from the background-only hypothesis. Furthermore, MADHATv2 is written in Python (whereas MADHAT is written in C++), incorporates three more years of Fermi-LAT data, employs an updated Fermi source catalog (updated from 3FGL [22] to 4FGL [23,24]), and increases the number of targets from 58 to 93 confirmed and candidate dSphs.
The analysis we present here is for 54 dSphs, which represent the set of targets that are confirmed or likely dSphs for which a determination of the J-factors (with uncertainties) has been made (excluding the very recently discovered Ursa Major III [25]). We include Carina III in our analysis but note that there is still significant uncertainty regarding whether or not Carina III is a dSph, since stellar velocity information has been obtained from only four member stars. However, because it is relatively nearby, its estimated J-factor is the largest of any dSph in the sample we consider. We find that including Carina III in our analysis strengthens our bounds on DM annihilation by a factor of $\sim 3$ (compared to an analysis with only the other 53 dSphs), but the error on the bound is larger due to the uncertainty in the J-factor.
MADHATv2 is computationally efficient and easily updated as more dSphs are found and more Fermi-LAT data [26,27] are taken. It can produce optimized constraints for any model of DM particle physics and any specified set of J-factors. The Vera Rubin Observatory is expected to discover many new dSphs in the upcoming years [28-30], some of which may be nearby. Our improved results from including Carina III motivate the continued development of MADHAT, since we can rapidly incorporate new dSphs and facilitate future indirect DM searches for the community.
Recent work has also performed a stacked dSph analysis with the latest Fermi-LAT data, but the astrophysical backgrounds are theoretically modeled, and the analysis employs the full framework of Fermitools [31]. Our analysis derives the background distributions empirically, independent of detailed astrophysical modeling, and is thus complementary. Furthermore, MADHATv2 provides the background distributions to enable easier analyses of various particle physics models by removing the need to interface with Fermitools.
The plan of this paper is as follows. In Section II, we describe the formalism of our statistical analysis. In Section III, we use this formalism to provide up-to-date constraints on a variety of DM models. We conclude with a discussion of our results in Section IV.

II. GENERAL FORMALISM
In this section, we describe our procedure for placing constraints on DM annihilation using gamma-ray observations of dSphs, based on a method originally developed in Refs. [6,7] and implemented in Refs. [8,21]. We consider the detection of gamma-ray photons arising from a region of interest (ROI) defined by a cone with a 1° opening angle, centered at the location of a dSph. The photons are collected over a time $T_i$ for the $i$th dSph and binned over an energy range bounded by $E_{\rm min}$ and $E_{\rm max}$. We assume the effective area $A_{\rm eff}$ of the observing instrument is insensitive to the photon energy in the energy range of interest. The number of photons observed from the ROI for an exposure $(A_{\rm eff} T)_i$ of the $i$th dSph that fall in the $j$th energy bin is $N^O_{i,j} = N^S_{i,j} + N^B_{i,j}$, where $S$ and $B$ refer to the contributions from the DM annihilation signal and the background, respectively.

A. Probability mass functions
The expected number of signal photons in the $j$th energy bin due to DM annihilation in the $i$th dSph is
$$\bar N^S_{i,j} = (A_{\rm eff} T)_i \, F_j \, \Phi_{PP} \, J_i , \quad (1)$$
where $F_j$ is the fraction of the photons produced in DM annihilation that lie in the $j$th energy bin and satisfies $\sum_j F_j = 1$, $\Phi_{PP}$ is a normalization factor that depends on DM particle physics and is independent of the choice of dSph, and $J_i$ is the J-factor for the $i$th dSph. We consider DM composed of a single species of self-conjugate particles of mass $m_X$ with an annihilation cross section given by $\sigma v = (\sigma v)_0 (v/c)^n$, where $v$ is the relative velocity of the annihilating particles and $(\sigma v)_0$ is a constant. For velocity-dependent annihilation, we need to consider an effective J-factor that accounts for different annihilation rates in different regions of the halo [32-35]. We then have
$$\Phi_{PP} = \frac{(\sigma v)_0}{8\pi m_X^2} \int dE_\gamma \, \frac{dN}{dE_\gamma} , \quad (2)$$
$$J_i = \frac{1}{D_i^2} \int d^3r \int d^3v \int d^3v' \, f_i(\vec r, \vec v) \, f_i(\vec r, \vec v\,') \left( \frac{|\vec v - \vec v\,'|}{c} \right)^n , \quad (3)$$
where $D_i$ is the distance to the $i$th dSph, $dN/dE_\gamma$ is the photon spectrum per annihilation, and $f_i(\vec r, \vec v)$ is the DM velocity distribution of the $i$th dSph. The velocity distribution is normalized such that $\int d^3v \, f_i(\vec r, \vec v) = \rho_i(\vec r)$, where $\rho_i(\vec r)$ is the DM density profile [32-35]. In the case that DM annihilation is $s$-wave (i.e., $\sigma v$ is independent of velocity, with $n = 0$), Eq. (3) reduces to the more familiar form
$$J_i = \frac{1}{D_i^2} \int d^3r \, \rho_i^2(\vec r) . \quad (4)$$
The volume integration is taken over a conical region of the sky with a 1° opening angle.
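As a rough numerical illustration (not part of MADHATv2), the particle-physics factor $\Phi_{PP}$ can be evaluated once the spectrum integral is known. Here `n_gamma`, the number of photons per annihilation in the analysis energy range, is a hypothetical input standing in for the integral of $dN/dE_\gamma$:

```python
import math

def phi_pp(sigma_v0, m_x, n_gamma):
    """Phi_PP = (sigma v)_0 / (8 pi m_X^2) * N_gamma for self-conjugate DM.
    n_gamma is the number of photons per annihilation in the analysis range.
    With sigma_v0 in cm^3/s and m_x in GeV, the result is in cm^3 s^-1 GeV^-2."""
    return sigma_v0 * n_gamma / (8.0 * math.pi * m_x**2)

# Thermal-relic cross section, 100 GeV DM, ~20 photons per annihilation in band
print(phi_pp(3e-26, 100.0, 20.0))
```

Note that $\Phi_{PP}$ scales linearly with $(\sigma v)_0$, which is why a bound on one translates directly into a bound on the other.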
We assume the probability mass function (PMF) for the number of photons $N^S_{i,j}$ from DM annihilation arriving from the $i$th dSph in the $j$th energy bin is given by a Poisson distribution:
$$P^S_{i,j}(N^S_{i,j} \,|\, \Phi_{PP}) = \frac{(\bar N^S_{i,j})^{N^S_{i,j}}}{N^S_{i,j}!} \, e^{-\bar N^S_{i,j}} , \quad (5)$$
where we explicitly note the dependence on $\Phi_{PP}$ through $\bar N^S_{i,j}$ in Eq. (1). Given a photon count map, we can obtain an empirical background PMF $P^B_{i,j}(N^B_{i,j})$ for the number of background photons $N^B_{i,j}$. The PMF is a normalized histogram of the photon counts in the $j$th energy bin taken from $10^5$ randomly chosen sample regions (conical regions with a 1° opening angle, which matches the size of the ROI) lying within 10° of the $i$th dSph. The sample regions are uniformly distributed on the 2-sphere of the sky, and we discard sample regions that overlap with the ROI, are not fully contained in the 10° sampling region, or come within 0.8° of a known point source. The mean number of background photons is
$$\bar N^B_{i,j} = \sum_x x \, P^B_{i,j}(x) ,$$
where the sum over $x$ runs over all values of photon counts observed in the sample regions.
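The construction of an empirical background PMF and its mean can be sketched as follows. This is a simplified stand-in for the MADHATv2 procedure: `sample_counts` is an illustrative list of photon counts, one per sample region, and the function names are ours:

```python
import numpy as np

def empirical_pmf(sample_counts):
    """Normalized histogram of photon counts from the sample regions:
    the empirical background PMF for one dSph and one energy bin."""
    counts = np.bincount(np.asarray(sample_counts, dtype=int))
    return counts / counts.sum()

def mean_background(pmf):
    """Mean background count: sum over x of x * P(x)."""
    return float(np.arange(len(pmf)) @ pmf)

pmf = empirical_pmf([0, 1, 1, 2, 2, 2, 3, 4])
print(mean_background(pmf))  # (0+1+1+2+2+2+3+4)/8 = 1.875
```

In the actual analysis each (dSph, energy bin) pair gets its own PMF, built from $10^5$ sample regions rather than the eight shown here.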

B. Applying weights
We expand our analysis beyond MADHAT by assigning weights $w_{i,j}$ to the number of photons from the direction of the $i$th dSph and in the $j$th energy bin. The weighted number of photon counts is
$$N^{S,B,O} = \sum_{i,j} {\rm Floor}\!\left[ w_{i,j} \, N^{S,B,O}_{i,j} \right] , \quad (6)$$
where $S$, $B$, and $O$ refer to signal, background, and observed events, respectively. The weights $w_{i,j}$ are real, non-negative constants. "Floor" rounds its argument down to the nearest integer, and it is incorporated to ensure our distribution functions remain discrete, which significantly improves computational efficiency.
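The Floor-weighted sum of Eq. (6) can be sketched directly (an illustration with hypothetical counts and weights, not MADHATv2 code):

```python
import math

def weighted_count(counts, weights):
    """N = sum_ij Floor(w_ij * N_ij): the floor keeps each weighted count an
    integer, so the combined PMFs stay discrete."""
    return sum(math.floor(w * n) for w, n in zip(weights, counts))

# floor(1.0*3) + floor(0.4*5) + floor(0.7*2) = 3 + 2 + 1
print(weighted_count([3, 5, 2], [1.0, 0.4, 0.7]))  # 6
```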
The weighted photon count PMF $P^{S,B}_{i,j}(N^{S,B}_{i,j})$ is given by
$$P^{S,B}_{i,j}(N) = \sum_{N' \,:\, {\rm Floor}[w_{i,j} N'] = N} P^{S,B}_{i,j}(N') , \quad (7)$$
where the sum is over all unweighted counts $N'$ that yield the weighted count $N$. The PMF for the sum of the $N^{S,B}_{i,j}$ with different values of $i$ (denoting the dSph) and $j$ (denoting the energy bin) is the convolution of the individual PMFs, assuming the individual PMFs are statistically independent. More generally, we let $\{k\}$ and $\{k'\}$ refer to two disjoint sets of $\{(i,j)\}$ pairs and denote by $P_{\{k\}}(N_{\{k\}})$ the probability distribution for $N_{\{k\}} \equiv \sum_{(i,j)\in\{k\}} N_{i,j}$. The convolution of the PMFs associated with $\{k\}$ and $\{k'\}$ is
$$P_{\{k\}\cup\{k'\}}(N) = \sum_{N_{\{k\}}=0}^{N} P_{\{k\}}(N_{\{k\}}) \, P_{\{k'\}}(N - N_{\{k\}}) . \quad (9)$$
By repeatedly convolving PMFs to incorporate all dSphs and all energy bins, we construct the combined weighted photon count PMFs $P^{S,B}(N^{S,B})$ for the signal and background. The total weighted PMF $P^O(N^O)$ for the observed weighted photon count $N^O \approx N^S + N^B$ is (approximately) the convolution of $P^B(N^B)$ and $P^S(N^S)$. These relations are exact if the weights are integers; otherwise, there is a small round-off error due to the "Floor" function in Eq. (6). We discuss the details of our numerical approach and the various approximations we make in Appendix B.
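Because the weighted counts are integers, the repeated convolution of per-(dSph, bin) PMFs reduces to discrete linear convolutions, which numpy provides directly. A minimal sketch (`combine` is our name, not a MADHATv2 function):

```python
import numpy as np

def convolve_pmfs(p1, p2):
    """PMF of the sum of two independent integer-valued counts:
    P(N) = sum_k p1(k) * p2(N - k)."""
    return np.convolve(p1, p2)

def combine(pmfs):
    """Repeatedly convolve per-(dSph, energy-bin) PMFs into one combined PMF."""
    out = np.array([1.0])  # delta function at zero counts
    for p in pmfs:
        out = convolve_pmfs(out, p)
    return out

total = combine([np.array([0.5, 0.5]), np.array([0.25, 0.5, 0.25])])
print(total)  # PMF over N = 0..3: [0.125, 0.375, 0.375, 0.125]
```

Each convolution is a single unnested sum over integer counts, which is what makes the combination of many targets and bins tractable.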
By adjusting the value of $\Phi_{PP}$ [or equivalently, $(\sigma v)_0$ for a given DM annihilation model], we change the form of $P^O(N^O)$. Note that $N^O$ increases monotonically with the addition of each $N^S_{i,j}$, and $N^S_{i,j}$ increases monotonically with $\Phi_{PP}$. Therefore, we can use $N^O$ as a test statistic to place upper limits on $\Phi_{PP}$, given the observed value of $N^O$. The β-confidence-level upper bound on $\Phi_{PP}$ satisfies
$$\sum_{N^O > N^O_{\rm obs}} P^O\!\left(N^O \,|\, \Phi^{\rm bound}_{PP}(\beta)\right) = \beta , \quad (10)$$
where $N^O_{\rm obs}$ is the observed weighted photon count. In other words, for $\Phi_{PP} > \Phi^{\rm bound}_{PP}(\beta)$, the probability of obtaining a value of $N^O$ greater than what is observed exceeds β (up to the small round-off error), and such a value of $\Phi_{PP}$ would be disfavored by observations at a confidence level of at least β.
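The logic of this bound condition can be illustrated with a toy calculation: convolve a background PMF with a Poisson signal, compute the tail probability above the observed count, and bisect for the signal mean excluded at confidence level β. This is a simplified stand-in for the MADHATv2 scan (function names and truncation choices are ours); the bound on $\Phi_{PP}$ follows since the mean signal scales linearly with it:

```python
import math
import numpy as np

def poisson_pmf(mean, kmax):
    """Poisson probabilities for k = 0 .. kmax-1 (mean > 0)."""
    ks = np.arange(kmax)
    logs = ks * math.log(mean) - mean - np.array(
        [math.lgamma(k + 1.0) for k in range(kmax)])
    return np.exp(logs)

def exclusion_cl(bkg_pmf, mean_signal, n_obs):
    """Confidence level at which this signal mean is excluded:
    the probability that the total count exceeds the observed n_obs."""
    kmax = len(bkg_pmf) + int(mean_signal + 10.0 * math.sqrt(mean_signal)) + 25
    total = np.convolve(np.asarray(bkg_pmf), poisson_pmf(mean_signal, kmax))
    return float(total[n_obs + 1:].sum())

def signal_bound(bkg_pmf, n_obs, beta=0.95, tol=1e-4):
    """Bisect for the mean signal excluded at confidence level beta."""
    lo, hi = 0.0, 1.0
    while exclusion_cl(bkg_pmf, hi, n_obs) < beta:
        hi *= 2.0                      # bracket the bound from above
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if exclusion_cl(bkg_pmf, mid, n_obs) < beta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# With no background and zero observed photons, the 95% CL bound on the
# signal mean is -ln(0.05) = ln(20), roughly 3.0
print(signal_bound(np.array([1.0]), 0))
```

Bisection works here because the tail probability is monotonic in the mean signal, mirroring the monotonicity argument in the text.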

C. Optimal choice of weights
Any (non-negative) choice of the weights produces a valid β-confidence-level upper bound on $\Phi_{PP}$, but some choices may be more useful and/or intuitive than others. For example, choosing $w_{i,j} = 1$ for all $i$ and $j$ weights all photons equally, and $N^{S,B,O}$ are simply the total numbers of signal, background, and observed photons, respectively. This simple case corresponds to a stacked analysis, which is used in MADHAT [21].
For our analysis with MADHATv2, we choose weights to maximize the probability that a false model is excluded if the true model is the background-only hypothesis. We employ the "signal-to-noise method" of Ref. [7], which gives
$$w_{i,j} = \frac{\bar N^S_{i,j}}{\bar N^B_{i,j}} . \quad (11)$$
The weights depend on the value of $\Phi_{PP}$ through $\bar N^S_{i,j}$ and thus give preference to photons from dSphs with large J-factors and in energy ranges where photons from DM annihilation are expected to lie. However, since $\bar N^S_{i,j}$ is proportional to $\Phi_{PP}$ for any $i$ and $j$, rescaling $\Phi_{PP}$ rescales all weights equally and does not change the resulting bound on $\Phi_{PP}$ [modulo the rounding error associated with Eq. (6)]. We prefer this flexibility over the slightly stronger statistical power of the "likelihood ratio method" of Ref. [7], for which the weights $w_{i,j} = \log(1 + \bar N^S_{i,j}/\bar N^B_{i,j})$ depend on the value of $\Phi_{PP}$, requiring an iterative analysis. Moreover, the likelihood-ratio weights approach Eq. (11) in the limit of a small signal ($\bar N^S_{i,j} \ll \bar N^B_{i,j}$), which we expect here. Generically, the weights in Eq. (11) are not all integers. Thus, our test statistic does not necessarily wield the optimal power to reject a false model, even with optimal weights, due to the small rounding error from Eq. (6). Compared to the uncertainties in the J-factors, we expect the rounding error to have a negligible impact on the bound on $\Phi_{PP}$. We note that the rounding error may be reduced by rescaling all weights by a large number, but doing so leads to a loss of computational efficiency, as we discuss further in Appendix B.
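The signal-to-noise weights, rescaled so that the largest weight is 1 (an overall rescaling leaves the bound unchanged up to the Floor round-off), can be sketched as follows; the mean counts passed in are illustrative:

```python
import numpy as np

def snr_weights(mean_sig, mean_bkg):
    """Signal-to-noise weights w_ij = Nbar^S_ij / Nbar^B_ij, rescaled so
    that the largest weight is 1."""
    w = np.asarray(mean_sig, dtype=float) / np.asarray(mean_bkg, dtype=float)
    return w / w.max()

# Low-background bins get large weights even for modest expected signal
print(snr_weights([0.2, 0.1, 0.05], [1.0, 1.0, 0.1]))  # [0.4, 0.2, 1.0]
```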

III. CONSTRAINTS ON DM ANNIHILATION IN DSPHS
We present the details of our analyses that place constraints on the DM annihilation cross section for different particle physics models. We perform weighted stacked dSph analyses with recent Fermi-LAT data using MADHATv2. Appendix A provides a brief description of how to run MADHATv2.

A. Fermi-LAT data
We use Fermi-LAT Pass 8 Release 4 data taken during the 14 years corresponding to the mission elapsed time range 239557417-681169985 seconds. We select data using Fermi Science Tools 2.2.0 and FermiPy 1.1.6 with the specifications evclass=128, evtype=3, and zmax=100, as well as the filter '(DATA_QUAL>0)&&(LAT_CONFIG==1)'. For the instrument response function, we use P8R3_SOURCE_V3.
For the 93 confirmed and candidate dSphs listed in Appendix C, Table II, we obtain the photon count maps to generate the background PMFs. To mask sources while generating the background PMFs, we use the 4FGL-DR4 source list (gll_psc_v32.fit) [23,24]. We obtain the Fermi-LAT exposure to each target from the exposure map generated by Fermi Science Tools 2.2.0. We consider photons in the energy range from $E_{\rm min} = 1$ GeV to $E_{\rm max} = 100$ GeV, for which the Fermi-LAT effective area is relatively constant. This energy range is divided into 16 equally spaced logarithmic bins. For each target $i$ listed in Appendix C, we provide the Fermi-LAT exposure, the mean number $\bar N^B_i = \sum_j \bar N^B_{i,j}$ of background photons (summed over all photon energies), and the observed number of photons from the ROI.
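The bin edges for 16 equally spaced logarithmic bins between 1 and 100 GeV can be reproduced with numpy (our reconstruction from the stated range, not copied from Table I):

```python
import numpy as np

# 16 equally spaced logarithmic bins from E_min = 1 GeV to E_max = 100 GeV
edges = np.logspace(np.log10(1.0), np.log10(100.0), num=17)

# Equal spacing in log10(E): each bin spans 2/16 = 0.125 dex
widths = np.diff(np.log10(edges))
print(len(edges) - 1)  # 16 bins
```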

B. J-factors for dSphs
We quote the J-factors we use for our analyses in Appendix C. Since not all 93 targets listed in Appendix C have published J-factors, we consider the set of 55 dSphs that do have published $s$-wave J-factors with uncertainties [5,35-39], including the recently discovered Ursa Major III [25,40]. These J-factors assume $s$-wave annihilation and are integrated over a 1° opening angle.
Appendix C also lists the J-factor uncertainties, and we estimate their effect on DM annihilation bounds by repeating our analysis, simultaneously varying all dSph J-factors upward (downward) by their respective 1σ uncertainties (but without changing the weights).This procedure produces a band for the limit on the DM annihilation cross section.
We include Carina III, even though it is not yet clear if Carina III is in fact a dSph, since stellar velocity information is obtained from only four member stars [37]. However, because Carina III is relatively close to the Sun ($D = 27.8$ kpc), it has the largest estimated J-factor of the dSphs we consider. As a result, the inclusion of Carina III as a dSph with such a large J-factor should have a significant impact on DM annihilation constraints. We perform analyses both with and without Carina III to gauge the effect on our resulting constraints.
We also consider the recently discovered candidate object Ursa Major III [25]. Similar to Carina III, stellar velocity information for Ursa Major III is determined from a small number of stars, in this case eleven. As the velocities of two of those eleven stars are outliers, the uncertainties on the J-factor for Ursa Major III are large relative to the other targets considered here. As such, our primary analyses involve a set of 53 (without Carina III) or 54 (with Carina III) dSphs, not including Ursa Major III. We perform a separate, stand-alone analysis for Ursa Major III.
We note that, aside from Carina III and Ursa Major III, there are other objects used in this analysis for which there are significant issues. For example, Bootes III and Willman I show evidence of non-equilibrium dynamics, in which case estimates of their DM content may not be reliable. Bootes I, Crater II, and Horologium I lie near blazars or blazar candidates, though these blazars have not been identified as gamma-ray sources in the 4FGL catalog. For Hydra II, Triangulum II, and Tucana III, we use the same J-factors used in the analysis of Ref. [5], which were obtained from scaling relations, though these J-factors are in some tension with upper bounds obtained through spectroscopic measurements. For a more detailed discussion of these issues, see Ref. [31] and references therein. Similar issues arise for some objects that are not used in our analysis, but we provide their background PMFs and observed photon counts for possible future use.

C. Applications and results
We apply our analysis to place constraints on a variety of particle physics models for DM annihilation. We first consider $s$-wave annihilation to the two-body Standard Model final states $b\bar b$, $W^+W^-$, $\mu^+\mu^-$, and $\tau^+\tau^-$. The photon spectra for these channels are obtained from Refs. [41,42]. Assuming a branching fraction of unity to each of these final states, we show the 95% confidence level (CL) bounds on $(\sigma v)_0$ in the left (right) panel of Fig. 1 for an analysis without (with) Carina III. Including Carina III strengthens the bound on $(\sigma v)_0$ by a factor of $\sim 3$ but widens the error band, emphasizing the importance of confirming whether Carina III is a dSph and determining its J-factor more precisely.
Focusing on $s$-wave DM annihilation to $b\bar b$, the left panel of Fig. 1 shows that DM annihilation at the thermal cross section can be ruled out at 95% CL for $m_X \lesssim 60$ GeV, though J-factor uncertainties can substantially weaken this constraint. In particular, once the uncertainties are accounted for, DM interpretations of the Galactic Center (GC) GeV excess [43-46] involving the $s$-wave annihilation of $\sim 30-50$ GeV dark matter particles to $b\bar b$ with $(\sigma v)_0 \sim 3 \times 10^{-26}$ cm$^3$/s cannot be ruled out by this analysis of dSphs.
We can apply our results in Fig. 1 to the case of Higgsino DM. For a mass in the $600-1400$ GeV range, Higgsino DM is mildly preferred by Fermi-LAT photon data from the GC, relative to a fit without Higgsino DM [47]. A Higgsino with a mass of $\sim 1$ TeV could also be a thermal relic DM candidate. For this scenario, the dominant final states are $W^+W^-$ and $ZZ$, with branching fractions of $\sim 60\%$ and $\sim 40\%$, respectively. But since the photon spectra produced by the $W^+W^-$ and $ZZ$ final states are nearly identical, we can probe this scenario using the results in the left panel of Fig. 1. In general, heavy Higgsino DM annihilation can be Sommerfeld-enhanced, due to relatively long-range weak interactions [48]; however, for this mass range, the enhancement is neither near a resonance nor near the Coulomb limit. As a result, the Sommerfeld enhancement of the tree-level cross section is relatively small and largely independent of velocity. Thus, the enhancement to the annihilation cross section in a dSph is the same as in the GC and can be absorbed by the constant prefactor of the cross section, allowing us to directly test scenarios that might match the Fermi-LAT data ($\langle\sigma v\rangle \sim 10^{-26}$ cm$^3$/s). As we see from the left panel of Fig. 1, this scenario cannot yet be constrained by searches of dSphs, as expected [47].
As another example to illustrate the utility of the MADHATv2 analysis, we consider DM annihilation into a pair of scalars ($XX \to \phi\phi$), with each scalar decaying to two photons ($\phi \to \gamma\gamma$). This scenario produces a box-like photon spectrum with a width given by $(m_X^2 - m_\phi^2)^{1/2}$. Constraints on $(\sigma v)_0$ at 95% CL are shown in Fig. 2 for the cases of $m_\phi = 10$ GeV and 60 GeV. For large $m_X \gg m_\phi$, the constraints on these two scenarios are indistinguishable, as their photon spectra are nearly identical. Note that the bounds on this scenario are much tighter than those obtained from an analysis using a single energy bin [8], since the shape of the signal photon spectrum is very different from that of the background.
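The box-spectrum kinematics can be checked with a short calculation. For $XX \to \phi\phi$ followed by $\phi \to \gamma\gamma$, each photon energy is boosted into a flat box between $E_\pm = (m_X/2)(1 \pm \sqrt{1 - m_\phi^2/m_X^2})$, whose width is the quoted $(m_X^2 - m_\phi^2)^{1/2}$ (a standard kinematic result; the function name is ours):

```python
import math

def box_spectrum_edges(m_x, m_phi):
    """Photon-energy box edges for X X -> phi phi, phi -> gamma gamma:
    E_pm = (m_X / 2) * (1 +/- sqrt(1 - m_phi^2 / m_X^2))."""
    beta = math.sqrt(1.0 - (m_phi / m_x) ** 2)  # velocity of each phi
    return 0.5 * m_x * (1.0 - beta), 0.5 * m_x * (1.0 + beta)

lo, hi = box_spectrum_edges(100.0, 60.0)
print(hi - lo)  # width = sqrt(100^2 - 60^2) = 80.0 GeV
```

In the $m_X \gg m_\phi$ limit the box stretches from nearly 0 to nearly $m_X$, which is why the two $m_\phi$ cases in Fig. 2 become indistinguishable at large $m_X$.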
Finally, we consider the recently discovered object Ursa Major III. An analysis of DM annihilation in Ursa Major III was performed in Ref. [49], finding no significant excess of photons from that target. We analyze Ursa Major III alone using MADHATv2 and find constraints on DM annihilation to $b\bar b$ that are similar to those obtained in Ref. [49] (see Fig. 3).
FIG. 3. 95%-confidence bounds on $(\sigma v)_0$ for $s$-wave annihilation in Ursa Major III to the $b\bar b$ (red line) final state, using MADHATv2 and the J-factor given in Table II. The J-factor uncertainties given in Table II yield the light uncertainty band, while the dark band is obtained if one instead uses an uncertainty of $\pm 0.6$ in $\log_{10}(J/[{\rm GeV^2/cm^5}])$, following Refs. [36,49]. The thermal cross section is included as a gray, dotted horizontal line. These constraints match those found in Ref. [49].

IV. CONCLUSION
We have performed a weighted stacked analysis of DM annihilation in 54 dSphs, using 14 years of Fermi-LAT data, binned in 16 energy bins. Our work represents the (currently) most complete analysis of gamma-ray signals of DM annihilation in dSphs. Our analysis algorithm is flexible, allowing us to test a variety of different DM models, including models with non-standard photon spectra, in a computationally efficient manner.
The analysis tools we have developed are publicly available on GitHub at https://github.com/MADHATdm. In this work, we use the MADHATv2 package, which is a significant update from its predecessor, MADHAT. Compared to MADHAT, MADHATv2 offers the option of maximizing the statistical power to reject a wrong model (if the true model is the background-only hypothesis), at a modest cost in computational efficiency. Additionally, MADHATv2 is coded in Python, whereas MADHAT is coded in C++. We incorporate more candidate dSph targets and more Fermi-LAT data, and we use an updated point source catalog.
The flexibility of MADHATv2 allows users to easily incorporate different choices of dSph targets, as well as their J-factors. As an illustration, we obtain exclusion bounds with and without Carina III. We find that including Carina III strengthens the bound on the annihilation cross section by a factor of $\sim 3$ but increases the error on the bound, highlighting the importance of further studies of Carina III. MADHATv2 also includes relevant Fermi-LAT data from the recently discovered Ursa Major III, allowing that target to also be used as part of an analysis. Given its large estimated J-factor, along with its large J-factor uncertainty, further studies of Ursa Major III are also important.
Over the next several years, new observatories are expected to dramatically increase the number of identified dSphs (see, for example, [50]). As we have seen in the case of Carina III, if any of these new dSphs are relatively close by, their discovery could change the sensitivity of DM indirect detection searches significantly. As our analysis relies on publicly available Fermi-LAT data, it can be rapidly updated as new dSphs are found, allowing MADHATv2 or future versions to provide the most up-to-date analysis of gamma-ray production by DM annihilation in dSphs.

Adjustable parameters
There are several flags and parameters whose values are set at the top of the file madhat2.py. These flags and parameters control some aspects of the analysis, as well as the extent to which approximations are used to improve computational efficiency. Further details on the algorithm and specific definitions of some of the parameters below are provided in Appendix B. The flags and parameters available to adjust at the top of the madhat2.py file are:
• binning: takes the value 0 or 1. If binning = 1, then the analysis is performed for 16 equally spaced logarithmic energy bins (see Table I). If binning = 0, then the analysis assumes a single energy bin containing all photons between 1 GeV and 100 GeV.
• weighting_type_flag: takes the value 0 or 1. If weighting_type_flag = 1, then optimal weights are implemented, as described in Section II C. If weighting_type_flag = 0, then weights must be provided by the user, specified in the array weights_original.
• beta_target (default = 0.95): sets the desired confidence level for the exclusion bound.
• beta_tolerance (default = 0.001): takes a positive value less than 1. Decreasing beta_tolerance improves accuracy, at the cost of computational efficiency (see Appendix B 2).
• weight_raising_amount (default = 2): takes a positive value. Decreasing weight_raising_amount improves accuracy, at the cost of computational efficiency (see Appendix B 2).
• bgd_prob_sum_tolerance (default = 0.0001): takes a positive value less than 1. Decreasing bgd_prob_sum_tolerance improves accuracy, at the cost of computational efficiency (see Eq. (B1)).
• P_bar_zero_out_threshold_denom (default = 10^4): takes a positive value. Increasing P_bar_zero_out_threshold_denom improves accuracy, at the cost of computational efficiency (see Eq. (B3)).
• energy_fraction_zero_out_threshold_denom (default = 10^4): takes a positive value. Increasing energy_fraction_zero_out_threshold_denom improves accuracy, at the cost of computational efficiency (see Eq. (B4)).
We have tested the effect of adjusting the parameters related to computational efficiency and accuracy. If one reduces weight_raising_amount by a factor of √2, or rescales any of the other efficiency parameters by a factor of 10, the bounds for a benchmark case (DM with $m_X = 100$ GeV annihilating to the $b\bar b$ final state) shift by at most 2%, which is negligible in comparison to the systematic uncertainties in the J-factors. However, run times can increase by up to a factor of 3.
The remaining parameters define the names of the two input files the user must specify, as described in Appendix A 1:
• model_filename: takes a string. This specifies the name of the file containing information about the DM model, which we refer to in Appendix A 1 as DMmodel.dat.
• set_filename: takes a string. This specifies the name of the file containing information about the set of target objects, which we refer to in Appendix A 1 as dwarfset.dat.

Output files
Output files follow the same format as in MADHAT, in the interest of backward compatibility. The primary difference is that the Nbound column is always filled solely with 0s, as N_bound has no meaning in the MADHATv2 analysis. The output files are named according to the following format: <DMmodel><dwarfset>_<beta_target>.out, for a DM model input file DMmodel.dat and a set of target objects specified in dwarfset.dat. The top of the header provides a copy of the contents of the dwarf-set file used. The output data are organized into ten columns. From left to right, they are as follows:
• Mass(GeV): DM particle mass in GeV
• Spectrum: the annihilation spectrum integrated from 1-100 GeV (see Appendix A 1)
• Beta: the target value of β (see beta_target in Appendix A 2)
• Nbound: relevant for MADHAT, not relevant for MADHATv2
• PhiPP(cm^3 s^-1 GeV^-2): β-confidence-level upper bound on $\Phi_{PP}$ in cm^3 s^-1 GeV^-2
• +dPhiPP: additive uncertainty on the upper bound on $\Phi_{PP}$, in the same units as PhiPP
• -dPhiPP: subtractive uncertainty on the upper bound on $\Phi_{PP}$, in the same units as PhiPP
• sigv(cm^3 s^-1): β-confidence-level upper bound on $(\sigma v)_0$ in cm^3 s^-1
• +dsigv: additive uncertainty on the upper bound on $(\sigma v)_0$, in the same units as sigv
• -dsigv: subtractive uncertainty on the upper bound on $(\sigma v)_0$, in the same units as sigv
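A minimal parser for this column layout can be sketched as follows. This is our illustration, not part of MADHATv2: it simply keeps any line that splits into ten numeric fields, which skips the header and the echoed dwarf-set contents without assuming a particular comment character:

```python
def parse_madhat_rows(lines):
    """Parse the ten whitespace-separated data columns described above,
    skipping any line that is not ten numeric fields."""
    names = ["mass_gev", "spectrum", "beta", "nbound", "phipp",
             "dphipp_plus", "dphipp_minus", "sigv", "dsigv_plus", "dsigv_minus"]
    rows = []
    for line in lines:
        parts = line.split()
        if len(parts) != len(names):
            continue  # header or dwarf-set echo line
        try:
            values = [float(p) for p in parts]
        except ValueError:
            continue  # non-numeric line
        rows.append(dict(zip(names, values)))
    return rows

sample = [
    "# header line",
    "100.0 12.3 0.95 0 1.2e-28 3e-29 2e-29 7.5e-27 1.9e-27 1.2e-27",
]
print(parse_madhat_rows(sample)[0]["sigv"])  # 7.5e-27
```

The numeric values in `sample` are invented for illustration only.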

Appendix B: Algorithm description
We present here a detailed description of how the analysis is implemented, including a description of various approximations that are used to improve computational efficiency.

Determining the combined signal and background PMFs
Determining the combined PMFs $P^{S,B}(N^{S,B})$ for the summed weighted photon counts generally requires performing a nested sum over the photon counts in each dwarf target $i$ and energy bin $j$ pair. If there are $K$ such pairs, the computation time for performing the $K$ nested sums grows rapidly with $K$ and becomes intractable. However, because we force the weighted photon counts in each bin to be non-negative integers in Eq. (6), the computation simplifies dramatically. As a result, we may construct $P^{S,B}(N^{S,B})$ with $K$ unnested summations over integer-valued weighted photon counts, as in Eq. (9).
If the weighted photon counts were not rounded, the combined PMFs would be invariant under a rescaling of all the weights. But as a result of rounding, the PMFs do change with an overall weight rescaling. The computation becomes more tractable if all weights are rescaled downwards, as doing so decreases all weighted photon counts and thus decreases the range over which the numerical sums in Eq. (9) must be evaluated. On the other hand, the statistical power of the analysis to reject a wrong model increases if the weights are uniformly scaled upwards, as the deviation from the optimal choice of weights resulting from rounding is reduced. Moreover, an upward rescaling of the weights reduces the effect of round-off error on the confidence interval. We must balance computational efficiency against statistical power.
We must specify a limit to the range of weighted photon counts in order for the sums in Eq. (9) to be tractable. To ensure tractability, $P^{S,B}(N^{S,B})$ is set to zero whenever the probability of a given photon count is sufficiently small. This condition is implemented using two parameters, bgd_prob_sum_tolerance and P_sig_ij_tol_denom, by the following method:
• $N^B_{i,j({\rm max})}$ is defined as the minimal choice of $N_{\rm lim}$ such that
$$\sum_{N=0}^{N_{\rm lim}} P^B_{i,j}(N) \geq 1 - {\rm bgd\_prob\_sum\_tolerance} . \quad (B1)$$
$P^B_{i,j}(N')$ is set to zero for $N' > N^B_{i,j({\rm max})}$.
• $N^S_{i,j({\rm max})}$ is defined as the photon count that maximizes $P^S_{i,j}$. If
$$P^S_{i,j}(N') < \frac{P^S_{i,j}\!\left(N^S_{i,j({\rm max})}\right)}{{\rm P\_sig\_ij\_tol\_denom}} , \quad (B2)$$
then $P^S_{i,j}(N')$ is set to zero.
The combined weighted photon count PMFs are themselves subject to a tolerance for the sake of computational efficiency. Indeed, if for some $N'$ we have
$$P^{S,B}(N') < \frac{P^{S,B}\!\left(N^{S,B}_{({\rm max})}\right)}{{\rm P\_bar\_zero\_out\_threshold\_denom}} , \quad (B3)$$
where $N^{S,B}_{({\rm max})}$ maximizes $P^{S,B}$, then $P^{S,B}(N')$ is set to zero. Furthermore, if any energy bin fraction $F_{j'}$ satisfies the condition
$$F_{j'} < \frac{F_{j({\rm max})}}{{\rm energy\_fraction\_zero\_out\_threshold\_denom}} , \quad (B4)$$
where $F_{j({\rm max})}$ is the maximum energy bin fraction, then this energy bin is omitted from the analysis.

Given a choice of weights and a desired confidence level β, our goal is to determine $\Phi^{\rm bound}_{PP}(\beta)$. But we are only able to do the opposite; that is, we can determine from Eq. (10) the confidence level with which any given value of $\Phi_{PP}$ can be excluded. To find $\Phi^{\rm bound}_{PP}(\beta)$, we scan over values of $\Phi_{PP}$ with varying step sizes in order to find two values of $\Phi_{PP}$ that can be excluded with confidence levels that lie above and below β but within beta_tolerance of β. $\Phi^{\rm bound}_{PP}(\beta)$ is then determined by linear interpolation. However, if any of the weights are large, this algorithm can be computationally expensive, since we must sum over a large range of weighted photon counts. To make the algorithm more efficient, we adopt an approach in which the weights are rescaled, which introduces round-off error but reduces computation time. In this approach, all weights are initially rescaled by a constant factor, so that the largest weight is 1. $\Phi^{\rm bound}_{PP}(\beta)$ is then found using the approach described above. The weights are then uniformly rescaled upward by a factor chosen so that the largest weight increases by the additive amount weight_raising_amount, and $\Phi^{\rm bound}_{PP}(\beta)$ is computed again using the new weights. This result will be slightly different due to the reduced round-off error. This process is repeated until the estimate for $\Phi^{\rm bound}_{PP}(\beta)$ has converged. Convergence is determined using the following criterion. If $\Phi^{\rm bound}_{PP-A}(\beta)$ and $\Phi^{\rm bound}_{PP-B}(\beta)$ for a given choice of weights

TABLE I. Ranges for the 16 photon energy bins used in the Fermi-LAT data.