An Analytic Approach to Light Dark Matter Propagation

If dark matter interacts too strongly with nuclei, it could be slowed to undetectable speeds in Earth's crust or atmosphere before ever reaching a detector. For sub-GeV dark matter, analytic approximations appropriate for heavier dark matter fail, necessitating the use of computationally expensive simulations. We present a new method of modeling attenuation of light dark matter in the Earth, based on the approximation that the scattering is isotropic in the lab frame. We show that this approach agrees well with Monte Carlo results, and can be much faster when the number of scatterings becomes large, as the runtime for Monte Carlo methods increases exponentially with cross section. We use this method to model attenuation for sub-dominant dark matter--that is, particles that make up a small fraction of the dark matter density--and show that previous work on sub-dominant dark matter overestimates the sensitivity of direct detection experiments.

Experimental sensitivity to both light and sub-dominant DM is limited to relatively large cross sections: for light DM, because the detectors sensitive to it are optimized for low energy threshold rather than large exposure, and for sub-dominant DM, because of its small flux. This makes attenuation of DM in the Earth's crust or atmosphere a serious concern in both cases. For heavy DM, modeling attenuation by assuming straight particle trajectories is a valid and common simplification [29, 74-81]. But for light DM, although the straight-line assumption may give reasonably accurate results in some cases [75, 82-84], this is not always true [85], and accurately modeling DM trajectories has previously required detailed Monte Carlo methods [25, 81, 83, 86-91]. Accurately assessing direct detection limits is crucial when proposing new DM searches, but can be computationally expensive when using Monte Carlo simulations.
In this Letter, we describe a novel method of modeling DM attenuation, which has applications to direct detection as well as DM capture in planets, the Sun, neutron stars, white dwarfs, and various other media. We start from the more accurate approximation that light DM particles do not follow straight trajectories, but scatter isotropically in the lab frame when colliding with nuclei. Our method computes probabilities of reaching detector depth analytically, without needing to simulate large numbers of particles, making it feasible in the limit of many scatterings. We validate this method through comparison with Monte Carlo results, and compute limits on sub-dominant DM, showing that direct detection experiments exclude significantly less parameter space than claimed in previous work.

A. Monte Carlo Approach
Monte Carlo codes model DM propagation by simulating the trajectories of individual particles. After a particle is initialized with speed and arrival direction drawn from the DM velocity distribution, its mean free path is computed, and its position incremented by a distance drawn from the corresponding path length distribution. When the particle scatters off a nucleus, the isotope and scattering angle are drawn from the relevant probability distributions, and the velocity is updated. This process is repeated until the particle either reaches detector depth (such codes often neglect backscattering from below the detector), scatters out of the atmosphere, or loses too much energy to be detected. The simulation is then repeated for thousands or millions of particles, until enough statistics are collected to accurately represent the velocity distribution at the detector.
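The loop just described can be sketched in a few lines. Everything below (one spatial dimension, a single isotope, a flat fractional energy-loss distribution, and all parameter values) is illustrative rather than taken from any published code:

```python
# Minimal 1D sketch of the Monte Carlo propagation loop described above.
# Single isotope, isotropic lab-frame scattering, flat energy-loss
# distribution; all parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

MEAN_FREE_PATH = 300.0   # meters (illustrative)
DETECTOR_DEPTH = 1400.0  # meters
MAX_FRAC_LOSS = 0.2      # maximum fractional energy loss per scatter (illustrative)
E_THRESHOLD = 1e-3       # minimum detectable energy, in units of the initial energy

def propagate_one(rng):
    """Follow one particle; return its energy at detector depth, or None."""
    z = 0.0
    cos_theta = np.sqrt(rng.random())  # flux-weighted arrival direction
    energy = 1.0
    while True:
        # free flight: path length drawn from an exponential distribution
        z += rng.exponential(MEAN_FREE_PATH) * cos_theta
        if z >= DETECTOR_DEPTH:
            return energy        # reached detector depth
        if z < 0.0:
            return None          # scattered back out of the atmosphere
        if energy < E_THRESHOLD:
            return None          # too slow to ever trigger the detector
        # scatter: isotropic in the lab frame, flat energy-loss distribution
        cos_theta = rng.uniform(-1.0, 1.0)
        energy *= 1.0 - rng.uniform(0.0, MAX_FRAC_LOSS)

energies = [e for e in (propagate_one(rng) for _ in range(1000)) if e is not None]
```

Collecting enough surviving particles at large cross sections (short mean free path) is exactly where this approach becomes expensive: the surviving fraction falls steeply, so the number of simulated particles must grow accordingly.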
Sophisticated Monte Carlo codes such as DaMaSCUS [86,90] model the three-dimensional geometry of the Earth and take the DM wind into account, though we neglect both of these effects in our calculations. Because the rotation of the Earth produces a wide temporal variance in arrival direction [75], and light particles scatter through large angles, we expect these effects to be negligible in the limit of many scatterings.
The Monte Carlo approach accurately models the relevant physical processes on a particle-by-particle basis, accounting for fluctuations in scattering angle and number of collisions. The major weakness is that as the number of scatterings increases, the computational power needed to compute the DM event rate in a detector increases rapidly: not only must each particle be propagated through more collisions, but more particles must be propagated in order to obtain sufficient statistics. This causes the computation time to increase exponentially with cross section. This problem can be severe enough to necessitate the use of specialized rare-event techniques, such as importance sampling or importance splitting [85,87].

B. Convolutional Approach
We present an alternative to the Monte Carlo approach. Rather than simulating individual particles, we analytically compute the probability of reaching detector depth z in n scatterings. Our approach can be compared to Ref. [118], which conservatively modeled attenuation by considering only particles that do not scatter, and Ref. [119], which considered particles that scatter at most once. In contrast, this work computes the velocity distribution for all particles, up to an arbitrary number of collisions.
We start by asking what fraction of DM particles reach a depth z without scattering. For a mean free path l, the probability density for traveling a distance x before scattering is

P(x) = (1/l) e^{-x/l}.

Assuming that DM arrives isotropically (from above), and integrating over the zenith angle, the probability of reaching depth z without scattering is

P_0(z) = e^{-z/l} - (z/l) Γ(0, z/l) = e^{-z/l} + (z/l) Ei(-z/l),

where Γ(i, x) is the upper incomplete γ function, and Ei(x) is the exponential integral. Next, we need the probability density of reaching depth z after scattering exactly once. This is the probability of reaching an intermediate depth z′, times the probability of going from z′ to z without scattering, integrated over all values of z′. In other words, a convolution of two probability distributions:

P_1(z) = ∫_0^∞ P(z - z′) P_0(z′) dz′.

We worked out P_0(z) above, so we just need the second piece. As mentioned in the Introduction, we now make the simplifying assumption that DM scatters isotropically in the lab frame when colliding with nuclei. The relationship between the lab-frame and CM-frame (center of momentum) scattering angles is given by

tan θ_lab = sin θ_CM / (cos θ_CM + M_χ/M_A),

so as long as M_χ ≪ M_A, isotropic scattering in the CM frame leads to approximately isotropic scattering in the lab frame. By analogy with the case for an isotropic flux, the resulting form for P(z - z′) is

P(z - z′) = Γ(0, |z - z′|/l) / (2l),

where the factor of 2 and the absolute value account for the fact that the DM could scatter in either the upward or downward direction. The resulting probability to reach detector depth in one scattering is

P_1(z) = (1/(2l)) ∫_0^∞ Γ(0, |z - z′|/l) P_0(z′) dz′.

This process can now be iterated: to compute the probability of reaching detector depth after n scatterings, we simply compute

P_n(z) = (1/(2l)) ∫_0^∞ Γ(0, |z - z′|/l) P_{n-1}(z′) dz′.

The Gamma function and probability distribution both fall off rapidly as z′ → ∞, so the upper integration limit can be set to a finite value that is large compared to the detector depth with minimal loss of accuracy. Setting the upper limit to exactly the detector depth is equivalent to not considering backscattering from below the detector.
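For concreteness, the iteration just described can be tabulated numerically on a depth grid. The sketch below assumes a zenith-averaged exponential no-scatter probability and a Γ(0, |z - z′|/l)/(2l) kernel for a single isotropic scatter, with scipy.special.exp1 supplying Γ(0, x); the mean free path, grid, and cutoff are illustrative:

```python
# Numerical sketch of the convolutional approach: tabulate the probability
# of reaching depth z after n scatterings on a grid, assuming exponential
# attenuation and isotropic lab-frame scattering. Parameters illustrative.
import numpy as np
from scipy.special import exp1   # exp1(x) = E_1(x) = Gamma(0, x) for x > 0

l = 50.0        # mean free path in meters (illustrative)
z_max = 3000.0  # integration cutoff, large compared to detector depth
z = np.linspace(1e-3, z_max, 1500)
dz = z[1] - z[0]

# no-scatter probability for an isotropic downward flux:
# P_0(z) = exp(-z/l) - (z/l) * Gamma(0, z/l)
P = [np.exp(-z / l) - (z / l) * exp1(z / l)]

# kernel for one isotropic scatter: Gamma(0, |z - z'|/l) / (2 l);
# the floor dz/2 regularizes the integrable log singularity at z = z'
K = exp1(np.maximum(np.abs(z[:, None] - z[None, :]), 0.5 * dz) / l) / (2.0 * l)

for n in range(1, 6):
    # P_n(z) = integral of K(z, z') * P_{n-1}(z') dz'
    P.append(K @ P[-1] * dz)
```

Because each P_n is obtained from P_{n-1} by one matrix-vector product, the cost per additional scattering is fixed, which is why the runtime does not blow up at large cross sections.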
Throughout this Letter we usually neglect backscattering, in order to properly compare with results that also neglected it, but we include backscattering for the CRESST underground run (see below).

We now compute the energy distribution of DM particles after n scatterings. For spin-independent scattering, as long as the DM kinetic energy is low enough that the form factor is negligible, DM-nucleus scattering is isotropic in the c.m. frame, leading to a flat energy loss distribution:

P_A(ΔE) = 1/(β_A E) for 0 ≤ ΔE ≤ β_A E,

where β_A = 4 M_χ M_A/(M_χ + M_A)^2 is the maximum fractional energy loss for a DM particle of energy E scattering off a nucleus of mass M_A. A realistic overburden consists of multiple nuclei. In this case, the energy loss distribution after one collision is a sum over different nuclei, weighted by the relative probability of scattering with a nucleus of type A:

P(ΔE) = Σ_A (l/l_A) P_A(ΔE),

where l_A is the mean free path for scattering off species A alone, and 1/l = Σ_A 1/l_A. Given an energy spectrum dN/dE|_{n-1}(E) before scattering, the spectrum after scattering, with the DM losing energy ΔE with probability P(ΔE), is found through a change of variables and integration over the probability distribution:

dN/dE|_n(E) = Σ_A (l/l_A) ∫_E^{E/(1-β_A)} dE′ (1/(β_A E′)) dN/dE|_{n-1}(E′).

The energy spectrum of DM that reaches detector depth is then a weighted sum over the spectra after all scatterings:

dN/dE(E, z) = Σ_n P_n(z) dN/dE|_n(E).

In practice, this sum can be truncated when additional scatterings produce a negligible flux above threshold. In this Letter, we truncate the sum at 300 scatterings, unless the cross section is small enough that over 90% of the incoming DM flux either reaches detector depth or is scattered out of the atmosphere in fewer scatterings, in which case it is truncated sooner. With this energy spectrum, one can compute nuclear recoil rates in a detector, and determine whether that detector is sensitive to the specified DM mass and cross section.

Figure 1 shows the velocity distribution at a depth of 1400 meters for M_χ = 100 MeV and σ_χn = 5 × 10^-30 cm^2. The multicolored curves show our computation of the cumulative velocity distributions of particles that scatter zero to 1000 times.
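The spectrum update just described amounts to repeatedly applying a flat energy-loss kernel. A minimal numerical sketch, assuming a single target species (the masses, grid, and initial spectrum are illustrative):

```python
# Sketch of one step of the energy-loss convolution for isotropic c.m.
# scattering: a flat energy-loss distribution between 0 and beta * E'.
# Single target species; masses and the initial spectrum are illustrative.
import numpy as np

M_CHI = 0.1   # DM mass in GeV (illustrative)
M_A = 15.0    # target nucleus mass in GeV (illustrative)
beta = 4.0 * M_CHI * M_A / (M_CHI + M_A) ** 2  # max fractional energy loss

E = np.linspace(1e-4, 1.0, 2000)  # energy grid, arbitrary units
dE = E[1] - E[0]

def scatter_once(dN_dE):
    """Apply the flat energy-loss kernel once: the spectrum at energy E
    receives contributions from all E' in [E, E/(1 - beta)], each with
    weight 1 / (beta * E')."""
    out = np.zeros_like(dN_dE)
    for i, e in enumerate(E):
        mask = (E >= e) & (E <= e / (1.0 - beta))
        out[i] = np.sum(dN_dE[mask] / (beta * E[mask])) * dE
    return out

# follow a narrow initial spectrum through a few scatterings
spectra = [np.exp(-(((E - 0.8) / 0.05) ** 2))]
for _ in range(3):
    spectra.append(scatter_once(spectra[-1]))
```

Each application shifts the spectrum downward by roughly beta/2 in fractional energy while conserving the total number of particles; for a multi-species overburden one would sum such kernels weighted by the relative scattering probabilities, as in the text.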
That is, the lowest curve is the flux of particles that reach 1400 meters without scattering, the next is the flux of particles that scatter up to one time, etc. The solid black curve is the result of the DaMaSCUS-CRUST Monte Carlo code [83,91]. The high-velocity cutoff is not physical, but results from the simulation struggling to sample the extremely small flux at high velocity. For comparison, we also show the straight-line approximation used by, e.g., Refs. [29,72,75]. In this case, the high-velocity cutoff is a characteristic feature, as the method assumes that all particles of a given initial velocity lose the same amount of energy. The method of this Letter produces a much better fit than does the straight-line approximation.

Figure 2 compares the runtime of our method with that of the DaMaSCUS-CRUST Monte Carlo code [83,91], for cross sections near the ceiling for the CRESST underground run (see Sec. IV), and a depth of 1400 meters. The blue line is the runtime needed for our method to model particles through 100 scatterings. Increasing this number to 300 or 1000 scatterings has little effect on the high-velocity end of the spectrum, because after so many scatterings most particles would have lost most of their energy. The green curves are the time needed to obtain a sample of 100 particles capable of triggering CRESST-III using DaMaSCUS-CRUST. The solid curve uses the default settings, while the dashed curve uses importance splitting (see Ref. [85] for details). Although the Monte Carlo method is faster than ours at small cross sections, its runtime increases exponentially with σ_χn, while the runtime for ours is flat.

IV. LOOSENING CONSTRAINTS ON SUB-DOMINANT DARK MATTER
Ceilings for numerous direct detection analyses have been computed using Monte Carlo and analytic methods [45, 46, 75, 81-83, 87, 118]. However, these analyses all assumed that the particle they consider makes up all of the DM (f_DM = 1). Recent papers on sub-dominant DM have not computed detector ceilings numerically, instead using approximate methods or taking ceilings from existing references. Focusing on the CRESST surface run and the CRESST-III underground analysis [17,27], we show that the ceiling of such an exclusion region can change significantly for f_DM ≪ 1. As a baseline, we compare our results to the exclusion regions shown in Ref. [72], for f_DM = 1, 10^-3, and 10^-6.

A. CRESST Surface Run

Figure 3 shows our computation of the CRESST surface run exclusion region. We neglect backscattering from below detector depth, as was done in the Monte Carlo approach of Ref. [83]. The ceiling depends on the value of f_DM because, at large cross sections, the probability of reaching the detector with high velocity becomes quite small (see Fig. 1). If the total DM flux is made smaller, then the probability of having high velocity must be made larger for the DM to be detectable, so the ceiling must be lower. The difference between the bottoms of our exclusion regions and those of Ref. [72] is that our treatment only considers DM arriving from above the horizon. The CRESST analysis (and by extension Ref. [72]) most likely assumed that the Earth was transparent to DM, which is reasonable near the bottom of their f_DM = 1 exclusion region. But for the parameter space shown, DM would have to scatter O(1000) times to cross the whole Earth, so it is reasonable to assume that ~1/2 of the DM flux is blocked. In contrast with Ref. [72], we find that the CRESST surface run cannot exclude parameter space for f_DM = 10^-6, because the flux is not sufficiently large to overcome attenuation.
There is a slight discrepancy between our ceiling for f_DM = 1 and the original Monte Carlo result of Ref. [83]. However, improvements to the DaMaSCUS-CRUST code have since lowered the Monte Carlo ceiling by a few tens of percent at low masses, bringing these lines into good agreement ([120]; see recent updates to Ref. [91]).

[Figure 3 caption, beginning truncated in extraction:] "... [83] and used in Ref. [72]. Dashed and dotted black with a solid black ceiling are the exclusion regions for f_DM = 10^-3 and 10^-6, respectively, reported in Ref. [72]. Solid and dashed red are our computed regions for f_DM = 1 and 10^-3, respectively; we find that the CRESST surface run is not sensitive to f_DM = 10^-6. See Subsection IV A for a discussion of why f_DM affects the ceiling."

B. CRESST Underground Limits
Ref. [72] also computed the ceiling for the CRESST-III underground run [27], for f_DM = 1, using a straight-line approximation. Such approximations have been shown to be accurate [83] or even conservative [87] when compared to numerical simulations, although Ref. [85] comes to the opposite conclusion. Here we compute the ceiling for this detector using the method described in Sec. II, including backscattering, for different values of f_DM. Figure 4 shows our computation of the CRESST-III exclusion region for f_DM = 1, 10^-3, and 10^-6, compared to the straight-line approximation for f_DM = 1. For f_DM = 1, we see that while the results agree at large mass, the straight-line result underestimates the ceiling by a factor of a few below about 1 GeV. We also see that for f_DM = 10^-6, the straight-line result overestimates the ceiling by a factor of several. This means that, for highly subdominant DM, underground detectors constrain less parameter space than the straight-line approximation would suggest. Note that at the ceiling for f_DM = 1 and M_χ = 200 MeV, this Letter's method is faster than the Monte Carlo method with importance splitting by about an order of magnitude (see Fig. 2).

In this Section, we have used the exclusion regions from Ref. [72] as a baseline for comparison, but our goal is not to criticize their approach; we only use them as an example of treatments that are common in the literature. In fact, our results show that the limits from Ref. [72] exceed the CRESST limits more than previously appreciated.

V. CONCLUSIONS
When searching for dark matter, it is crucial to understand what parameter space has already been excluded, in order to effectively design new searches and interpret existing results. However, treating attenuation with a detailed Monte Carlo approach can be computationally intensive, leading to the use of approximate methods or simplified rescaling of existing limits. In this Letter, we present a novel method for modeling dark matter attenuation, which is more appropriate than the straight-line approximation for light dark matter, and faster than Monte Carlo methods for large cross sections. We reanalyze constraints on subdominant dark matter from multiple CRESST detectors, showing that they exclude less parameter space than previously thought.
As new technologies and analysis methods allow ever-lower energy thresholds, detectors will become sensitive to lighter dark matter, and to particles that have lost more energy in propagation. This will necessitate propagation methods that are valid for light dark matter and feasible in the limit of many scatterings, two limits which our method is tailor-made to handle. In principle, our approach can also be applied to DM interactions in neutron stars and white dwarfs, where the high density leads to many scatterings even for small cross sections. It could also be applied to other physical processes, such as neutrons propagating through dense media.

ACKNOWLEDGMENTS

I am extremely grateful to Timon Emken for detailed discussions about the DaMaSCUS Monte Carlo code, and to Ivan Esteban for suggestions for optimizing the integration performed in this work. I also thank Joseph Bramante, Marianne Moore, David Morrissey, and Aaron Vincent for helpful comments and discussions, and the anonymous referee for comments which made the text more clear. Computations were performed on equipment funded by the Canada Foundation for Innovation and the Ontario Government and operated by the Queen's Centre for Advanced Computing. C.V.C. was supported by the Arthur B. McDonald Canadian Astroparticle Physics Research Institute. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science, and Economic Development, and by the Province of Ontario.