Dipole portal to heavy neutral leptons

We consider generic neutrino dipole portals between left-handed neutrinos, photons, and right-handed heavy neutral leptons (HNLs) with Dirac masses. The dominance of this portal significantly alters the conventional phenomenology of HNLs. We derive a comprehensive set of constraints on the dipole portal to HNLs by utilizing data from LEP, the LHC, MiniBooNE and LSND, as well as observations of Supernova 1987A and the consistency of standard Big Bang Nucleosynthesis. We calculate projected sensitivities for the proposed high-intensity SHiP beam dump experiment and for the ongoing experiments at the Short-Baseline Neutrino facility at Fermilab. Dipole-mediated Primakoff neutrino upscattering and Dalitz-like meson decays are found to be the main production mechanisms in most of the parametric regime under consideration. Proposed explanations of the LSND and MiniBooNE anomalies based on HNLs with dipole-induced decays are found to be severely constrained, or will be tested in future experiments.


I. INTRODUCTION
The Standard Model of particles and fields (SM) shows remarkable resilience under the scrutiny of numerous particle physics experiments. In particular, the LHC experiments have put significant constraints on new hypothetical colored states, pushing their masses to the TeV scale and beyond. At the same time, owing to their smaller production cross sections, electroweak extensions of the SM are far less constrained, and a plethora of new models may be hiding at energies of a few hundred GeV and beyond. If such sectors are heavy, their impact on SM physics can be encoded in higher-dimensional extensions of the SM. Moreover, the electroweak singlet components of such sectors can be light, and still coupled to the SM states. In the last few years, significant attention has been paid to models containing new singlet fermionic states N (often referred to as heavy neutral leptons) that can couple to the SM leptons L and Higgs field H via the so-called neutrino portal coupling, N̄LH (see e.g. [1,2]). Owing to the neutrality of N, its mass m_N is a free parameter with a wide range of possibilities, from the sub-eV scale all the way up to the Planck scale. This range is somewhat narrower if N indeed takes part in generating masses for the light active neutrino species. A great deal of experimental activity is devoted to searches for N particles, which may show up in cosmological data, in neutrino oscillation experiments, in meson decays, in beam dump experiments and at high energy colliders. (For a recent overview of the neutrino portal see e.g. [3].) Given the large interest in searches for heavy neutral leptons, in this work we analyze a less conventional case of N particles coupled to the SM via the so-called dipole portal, encoded in the following effective Lagrangian:

ℒ = N̄ (iγ^μ ∂_μ − m_N) N + (d ν̄_L σ_{μν} N F^{μν} + h.c.).  (1)

Here F_{μν} is the electromagnetic field strength tensor, ν_L is a SM neutrino field, and d is a dipole coupling with dimension of inverse mass.
This is an effective Lagrangian that needs to be UV completed at energy scales not much larger than Λ ∼ d^{−1}. We are going to stay on effective field theory grounds, noting that since our results show the sensitivity to d to be much better than TeV^{−1}, the UV completion scale can be raised above the electroweak scale. As written, Eq. (1) is also applicable only at energies below the weak scale, as it does not respect the full SM gauge invariance. Indeed, F_{μν} should be part of the U(1) and/or SU(2) field strength, and an insertion of the Higgs field H is also required, so that d ∝ ⟨H⟩Λ^{−2}. For most of our analyses we will be interested in values of m_N in the interval from 1 MeV to 100 GeV, and at relatively small energies, so that a treatment using Eq. (1) is indeed sufficient.
The main assumption made in Eq. (1) is the absence, or subdominance, of the mass mixing operator N̄LH. When the mass mixing operator is dominant, the production and decay of N particles is mostly governed by its interaction with the SM particles via the weak bosons. The phenomenological consequences of such minimally coupled N particles are well understood. In contrast, if the leading order operator is suppressed, the dipole operator offers novel signatures and features, such as a much enhanced role of electromagnetic interactions in the production and decay of N. This case has so far been addressed only in a handful of works [4][5][6][7][8][9], and here we would like to present a comprehensive analysis of the dipole N portal and derive constraints on d that result from a variety of different experiments, both at high and medium energies.

FIG. 1. Summary of constraints on the dipole coupling d; see Table II for an explanation of the labels. Each curve is discussed and presented in the paper.

Previously, dipole interactions of neutrinos have been studied in several specific contexts (that we are aware of). If the SM neutrinos have a large flavor off-diagonal EM dipole moment, the interaction of solar and reactor neutrinos may get enhanced. This provides stringent limits on dipole moments of SM neutrinos [10]. Some theoretical and phenomenological aspects of the Dirac HNL dipole operator were discussed in Refs. [11,12] (see also a more recent general discussion of dimension-5 effective operators in the neutrino sector [13]). Another prominent place where the transitional ν−N dipole appears is the literature on searches for sterile neutrino dark matter via the dipole-induced decay N → νγ ([14] and references therein). A case more closely related to the topic of our study has arisen as a consequence of attempts to accommodate the MiniBooNE and LSND anomalies, which we now discuss in more detail.
While there is overall theoretical/experimental consistency in the three-neutrino oscillation picture, there are several experimental results that do not fit in. Two notable exceptions are the anomalies observed at the intensity frontier experiments LSND and MiniBooNE [15,16]. In these experiments, an excess of low energy electron (anti-)neutrinos has been observed, the source of which is currently unknown. There are two broad possibilities: new physics, or some unaccounted-for SM processes. Thus, for example, single photons produced via poorly understood SM neutrino interactions with nuclei [17] might lead to a partial explanation of the anomalies. (At the signal level, a single photon cannot be distinguished from charged-current quasi-elastic events by MiniBooNE's Cherenkov detector.) The most popular proposal is the existence of a light (m ∼ eV) sterile neutrino ([18] and references therein), which mediates the anomalous oscillation required to explain the observed excess signal. The possibility of eV sterile neutrinos being at the origin of the MiniBooNE and LSND oscillation results is strongly challenged by cosmological data. Indeed, the required mass splitting and mixing angle would lead to a complete thermalization of the new sterile species via the oscillation mechanism. This stands in sharp disagreement with cosmological data (in particular, the cosmic microwave background (CMB), Big Bang Nucleosynthesis (BBN) and late-time cosmology) that constrain not only the total number of thermally populated relativistic degrees of freedom in the early Universe, but also limit the total neutrino mass, Σm_ν ≤ 0.17 eV at 95% CL [19]. Consequently, a single eV sterile neutrino is not consistent with cosmology in the absence of new physics. At the very least, the minimal model would need to be modified to suppress the oscillations in the early Universe, which is usually achieved at the expense of significantly enlarging the sterile neutrino sector, e.g.
by new types of interactions with dark matter and/or baryons [20,21]. Thus, the sterile neutrino solution to the MiniBooNE and LSND anomalies naturally leads to the idea of a dark sector, with new matter and interaction states.
An alternative attempt to accommodate the anomalies without using eV-scale sterile neutrinos invokes dark sector states comparable in mass to the lightest mesons. Thus, it has been noted that the presence of a new sub-GeV neutral fermion N may mimic the signals observed at MiniBooNE and LSND [4,5]. The necessary ingredients of this proposal are a new fermionic state N in the 10-to-few-100 MeV mass range and the dipole coupling in Eq. (1). This coupling mediates a relatively prompt decay of N to a normal neutrino and a photon, a signature that can be confused with the "normal" electron or positron final state in charged current events [4,5]. Whether this model can simultaneously account for both anomalies without running into problems with other constraints remains an open issue (see the discussions in Refs. [4][5][6][7][8][9]). At the same time, the model has a clear advantage over the eV sterile neutrino model, as it creates no problems with cosmology: the N states decay to the SM at early times, before neutrino decoupling.
Continuing investment in neutrino physics will eventually lead to a better understanding of the origin of these two anomalies. The Short-Baseline Neutrino program (SBN) [22] is going to be instrumental in testing the MiniBooNE anomaly. The design consists of three liquid argon time projection chamber (LAr-TPC) detectors that overcome the difficulties present at MiniBooNE by providing excellent photon/electron discrimination. Furthermore, the SBN program will use a near detector (SBND) to control systematic errors related to the neutrino beam content. Being close to the proton target, SBND will see a much larger neutrino flux than the mid-range detectors and will allow a more accurate measurement of the neutrinos before oscillation. In addition, a further increase in sensitivity may result from a proposed new experiment at CERN, Search for Hidden Particles (SHiP) [3], which will be able to significantly advance the probes of N states, and should also test their dipole interactions. For an analysis of a more conventional CC-dominated model of HNLs in application to Fermilab experiments, we refer the reader to a recent paper [23].
Motivated by the relative simplicity of the neutrino dipole portal model and its potential applicability to the neutrino anomalies, it is very useful to have a comprehensive survey of the model over a large region of parameter space. We therefore consider the energy, intensity and astrophysics frontiers, where this portal can be probed. A plot summarizing our results is shown in Fig. 1, and the rest of the paper considers each probe individually. The existing constraints from previous experiments can be improved by SBN and SHiP. From astrophysics, MeV-scale HNLs could contribute to supernova cooling, in particular that of Supernova 1987A (SN 1987A). This happens when the coupling d is large enough that the star can produce N in sufficient quantity, but small enough that N can escape and cool the star without being significantly impeded. For lifetimes longer than 0.1–1 s, N is relevant for, and can modify predictions of, BBN. The late decays of HNLs would modify the proton-to-neutron ratio, and under the reasonable assumption that the initial cosmological temperatures were high, this puts an upper bound on the lifetime of N. We find significant overlap of this region with the SN constraints. Lastly, for masses above a GeV, we turn to particle colliders and recast existing searches from the LHC and LEP. Going to particle colliders allows us to probe simple completions of the model which preserve the SU(2) × U(1) structure of the SM. In these extended models, we have additional production channels stemming from Z and W bosons.
The paper is organized as follows. In Section II, we provide more details on the model, including possible SM gauge invariant completions and connections to neutrino masses. In Sections III, IV and V, we consider the intensity, energy and astrophysical frontiers, respectively. Finally, we conclude in Section VI with general remarks.

II. GENERIC FEATURES OF NEUTRINO DIPOLE PORTALS
A. Main qualitative features of the dipole portal

The consequences of the dipole portal in Eq. (1) can be easily understood by considering the four processes presented in Fig. 2. The presence of an electromagnetic coupling to neutrinos allows for mesons to decay in two novel ways: Dalitz-like decays mediated by off-shell photons, and neutrinoless weak decays with a single photon in the final state. In terms of producing N, incident neutrinos can upscatter via the dipole portal, which can be a more efficient production process than the mass mixing mechanisms that have been traditionally considered. The decay of an HNL in our model is dominated by single photon production and, for the values of d we consider in this paper, occurs much more rapidly than for decays mediated by the weak force. This single photon signature was identified in Refs. [4,5] as a promising signal; however, the production mechanisms outlined above were not included.
We now turn our discussion to beam dump experiments. There, production of N will dominantly proceed via neutrino upscattering, wherein an incoming neutrino scatters via a photon to produce N.

FIG. 2. Dipole portal processes for N: (a) production of N from off-shell neutrinos arising from weak meson decays (e.g. from π, K → µ + ν*); (b) production of N from off-shell photons arising from Dalitz-like meson decays (e.g. from π⁰, η → γ*γ); (c) production of N from on-shell neutrinos via Primakoff-type upscattering (via photon exchange with the nucleus); (d) decays of N to single photon final states (the main signal studied in this paper). Processes (a) and (b) are important for production of low mass N at neutrino experiments. Process (c) dominates production in supernovae at lower N masses, and at neutrino experiments. Process (d) is relevant for energy injection at BBN and for neutrino beam dump experiments, and controls the escape probability in supernovae for large N masses.

If the incoming neutrino scatters off the whole nucleus and the process happens coherently (i.e. σ ∝ Z²), we can get a crude estimate for the sensitivity one can achieve. In the limit of infinite mass of the nucleus, the problem reduces to scattering in the external EM field A_µ = (A₀(q), 0) created by the nucleus. Calculating the cross section to logarithmic accuracy, valid for m_N²/(2E_ν) ≪ R_nuc^{−1}, gives

σ ≃ 4αZ²d² log(q_max²/q_min²),  q_min ≃ m_N²/(2E_ν),  q_max ≃ R_nuc^{−1}.  (2)

Therefore, for a mass m_N = 50 MeV, an incoming neutrino energy of 1 GeV and R_nuc^{−2} ∼ 0.3 GeV², we can expect a production cross section per nucleus of roughly

σ ∼ 5 × 10^{−38} cm² × (Z/20)² × (d/10^{−6} GeV^{−1})².  (3)

It is worth noting that, beyond the overall d² scaling, the cross section is only a mild, logarithmic function of energy, provided that N is kinematically accessible.
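As a numerical illustration, the coherent estimate can be evaluated directly. The sketch below assumes the log-accuracy form σ ≈ 4αZ²d² log(q_max²/q_min²) with q_min ≈ m_N²/(2E_ν) and q_max ≈ R_nuc^{−1}; the function name and the choice Z = 18 are illustrative, not taken from the text.

```python
import math

ALPHA = 1.0 / 137.036        # fine-structure constant
GEV2_TO_CM2 = 3.894e-28      # 1 GeV^-2 expressed in cm^2

def coherent_xsec_cm2(d_GeVinv, Z, m_N, E_nu, q_max=math.sqrt(0.3)):
    """Log-accuracy coherent upscattering cross section per nucleus, in cm^2.

    All dimensionful inputs in GeV units; q_max defaults to R_nuc^-1 with
    R_nuc^-2 ~ 0.3 GeV^2, as in the fiducial estimate above.
    """
    q_min = m_N**2 / (2.0 * E_nu)
    log_factor = math.log(q_max**2 / q_min**2)
    return 4.0 * ALPHA * Z**2 * d_GeVinv**2 * log_factor * GEV2_TO_CM2

# Fiducial numbers from the text: m_N = 50 MeV, E_nu = 1 GeV, d = 1e-6 GeV^-1
sigma = coherent_xsec_cm2(d_GeVinv=1e-6, Z=18, m_N=0.05, E_nu=1.0)
```

The d² scaling is manifest, and the logarithm changes only mildly as E_ν is varied, consistent with the discussion above.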
For characteristic values of d suggested in Eq. (3) and small masses, we can also expect N to be long-lived. This opens up the possibility of HNLs being produced outside of the detector. For example, they could be produced in the dirt along the line of sight leading up to the detector, and/or via Dalitz-like decays of mesons produced by the protons on target. Meson production via the dipole portal is an interesting new production mechanism that we will discuss; from dimensional arguments it is clear that the branching ratio of meson decays to N will scale as Br_{M→N} ∝ d² m_M², where m_M is the mass of the decaying meson.
The decay length associated with the N → νγ process is another very important quantity. Given a decay rate of

Γ_{N→νγ} = d² m_N³/(4π),  (4)

and an HNL energy of E_N = 1 GeV ≫ m_N, the decay length and lifetime of N scale as

L_dec ≈ 400 m × (E_N/1 GeV)(50 MeV/m_N)⁴(10^{−6} GeV^{−1}/d)²,  τ_N ≈ 7 × 10^{−8} s × (50 MeV/m_N)³(10^{−6} GeV^{−1}/d)².  (5)

This turns out to be a very convenient length scale for beam dump experiments, if m_N and d take the fiducial values suggested above.
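The scaling above can be checked with a short numerical sketch. It assumes the standard dipole-induced two-body rate Γ = d²m_N³/(4π) and evaluates the lab-frame decay length γβcτ; the fiducial inputs (d = 10⁻⁶ GeV⁻¹, m_N = 50 MeV, E_N = 1 GeV) are those quoted in the text.

```python
import math

HBAR_GEV_S = 6.582e-25   # hbar in GeV*s
C_CM_S = 2.998e10        # speed of light in cm/s

def decay_rate_GeV(d_GeVinv, m_N):
    """Dipole-induced rate Gamma(N -> nu gamma) = d^2 m_N^3 / (4 pi), in GeV."""
    return d_GeVinv**2 * m_N**3 / (4.0 * math.pi)

def lab_decay_length_cm(d_GeVinv, m_N, E_N):
    """Lab-frame decay length gamma * beta * c * tau, in cm."""
    gamma = E_N / m_N
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    tau = HBAR_GEV_S / decay_rate_GeV(d_GeVinv, m_N)   # rest-frame lifetime, s
    return gamma * beta * C_CM_S * tau

# Fiducial values: d = 1e-6 GeV^-1, m_N = 50 MeV, E_N = 1 GeV
L = lab_decay_length_cm(d_GeVinv=1e-6, m_N=0.05, E_N=1.0)
```

For these inputs the decay length comes out at a few hundred meters, i.e. comparable to the baselines of the beam dump experiments discussed below.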

B. Dirac vs Majorana masses and gauge invariant completions
If N is a Dirac fermion, composed of two Weyl fields one of which is completely decoupled from the SM, then the HNL is decoupled from the mechanism that generates active neutrino masses. Thus, we assume both the absence of mass mixing between ν and N, and a vanishing Majorana mass for N. This choice is technically natural and can be achieved by, for example, assigning N the same lepton number as the SM leptons. If such a symmetry is not imposed, and a sizeable Majorana mass term, m_N, is present, then the process shown in Fig. 3 can take place. Naive counting of divergences shows that the induced Majorana mass for the neutrinos, m_ν, will scale as m_ν ∼ d²Λ²m_N/(16π²), where Λ is the cutoff scale associated with the UV completion of the model, which can be as high as d^{−1}. This contribution, despite all the uncertainties, will be much larger than the required mass scale for the neutrinos, unless N is Dirac, or quasi-Dirac with a small Majorana-type mass splitting satisfying δm_N ≪ m_N. Quasi-Dirac N would typically lead to larger values of d than would otherwise be suggested by a simple application of the see-saw relation.

FIG. 3. Loop-level contribution to the ν mass mixing matrix in the presence of a Majorana mass term for the heavy neutral lepton N. With only Dirac masses, such diagrams will not be generated.

Consider a model where the SM neutrinos couple to N via a mass mixing interaction of the form m_νN ν̄N. This naturally generates dipole couplings between the SM neutrinos, the sterile neutrino and the photon via a loop diagram. The dipole coupling generated is given in [24,25] as

d ≃ (3eG_F/(8√2 π²)) m_νN ≈ 10^{−13} GeV^{−1} × (m_νN/keV).  (6)

The strength of this radiatively generated dipole portal is dictated by the mass mixing with the active neutrinos, and is therefore constrained by patterns of the neutrino mass matrices. In particular, in the case of a type-I see-saw mechanism with a Majorana mass of m_N = 50 MeV, the observed neutrino masses would imply m_νN ∼ keV and consequently d ∼ 10^{−13} GeV^{−1}. We do not impose such a stringent constraint and consider d to be an independent parameter. In fact, the size of d can be much larger if the effective mixing angle between ν and N is much larger than the naive see-saw relation implies. This may happen, for example, within an inverse see-saw model [26,27], where a mostly Dirac fermion N is supplemented with a small Majorana mass, so that the mass mixing parameter m_νN is much larger than naively implied.
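The quoted benchmark d ∼ 10⁻¹³ GeV⁻¹ for keV-scale mass mixing can be reproduced numerically. The sketch below assumes the loop-induced form d ≈ 3eG_F m_νN/(8√2π²), which is the standard W-loop transition-moment result with the mass mixing supplying the chirality flip; the function name is illustrative.

```python
import math

ALPHA = 1.0 / 137.036     # fine-structure constant
G_F = 1.166e-5            # Fermi constant, GeV^-2

def radiative_dipole(m_nuN_GeV):
    """Loop-induced dipole d ~ 3 e G_F m_nuN / (8 sqrt(2) pi^2), in GeV^-1."""
    e = math.sqrt(4.0 * math.pi * ALPHA)   # electromagnetic coupling
    return 3.0 * e * G_F * m_nuN_GeV / (8.0 * math.sqrt(2.0) * math.pi**2)

# m_nuN ~ keV (= 1e-6 GeV), the type-I see-saw benchmark quoted in the text
d = radiative_dipole(1e-6)
```

The result is of order 10⁻¹³ GeV⁻¹, many orders of magnitude below the experimentally accessible range, which is why d is treated as an independent parameter in what follows.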
Above the electroweak scale, an SU(2) × U(1) interpretation of d requires a Higgs insertion, so that the dipole interaction is really a dimension-6 operator. Therefore, in the limit of large Λ the maximum expected d is

d_max,strong ∼ v/Λ²,  (7)

where strong dynamics at the scale Λ is presumed, and v is the Higgs field vacuum expectation value. Otherwise, if the new sector is perturbative, we would expect a loop factor, and d_max,pert ∼ GeV/Λ². To consider neutrino dipole couplings which respect the full gauge symmetries of the Standard Model, we write down the Lagrangian

ℒ = L̄ (d_W W^a_{μν} τ^a + d_B B_{μν}) H̃ σ^{μν} N + h.c.,  (8)

where H̃ = iσ₂H* and τ^a = σ^a/2. After spontaneous symmetry breaking of the Higgs, one obtains

ℒ ⊃ ν̄_L (d_γ F_{μν} + d_Z Z_{μν}) σ^{μν} N + d_W ℓ̄_L W⁻_{μν} σ^{μν} N + h.c.  (9)

The dipole couplings in the broken phase are related to those in the unbroken phase via

d_γ = (v/√2)(d_B cos θ_W + (d_W/2) sin θ_W),
d_Z = (v/√2)((d_W/2) cos θ_W − d_B sin θ_W),
d_W → (v/√2)(d_W/√2),  (10)

where the additional factor of √2 in the expression for d_W is a consequence of the normalization of W^±_{μν}. Note that the three "dipole moments" in the broken phase, d_γ, d_Z and d_W, are determined by only two parameters in the unbroken phase, d_W and d_B; they are linearly dependent. Notice that the normalization of the photon field strength term in Eq. (9) matches that of Eq. (1).
Although we have suppressed the relevant indices, the dipole coupling can be flavor dependent. Experiments at SBN will constrain d_B^e and d_B^µ. SHiP will in addition be sensitive to ν_τ, and is thus an ideal setting to study all of the "dipole couplings". For both the LHC and LEP, we turn on only the d_{γ,B,W}^µ couplings for simplicity. One can instead turn on d_{γ,B,W}^e or d_{γ,B,W}^τ, with only an O(1) effect on the results.
Having established that a neutrino dipole portal is ultimately a dimension-6 operator, one might wonder if there are any non-renormalizable SM-only operators that are phenomenologically equivalent to our new physics signal. If so, one would need to perform a global fit over the whole basis of Wilson coefficients instead of focusing on just one operator. The case of SM-only operators after electroweak symmetry breaking is considered in Section III B. Ref. [28], on the other hand, provides a classification of all dimension-5 and -6 SM-only operators above the electroweak scale (i.e. invariant under SU(3)×SU(2)×U(1)). In order to replicate our signature, we need at least one photon, one neutrino and an additional gauge boson. If we assume that no particles except neutrinos escape detection, and furthermore that the interactions are 2 → N, then none of the dimension-5 or -6 operators in Ref. [28] contribute to single photon processes at beam dump experiments, LEP or the LHC.

III. INTENSITY FRONTIER
We consider probing HNLs at beam dump experiments, and our analysis focuses on neutrino experiments hosted at CERN, Los Alamos and Fermilab. Fermilab is building a substantial Short-Baseline Neutrino oscillation program [22] that, among other physics goals, will settle the question of sterile neutrinos at ∆m² ∼ 1 eV². It will consist of three LAr-TPC detectors, SBND, MicroBooNE and ICARUS, spread out over a 600 m range from the proton target. The SBN program is designed to achieve 5σ sensitivity in the parameter space of (3 + 1) sterile neutrino models consistent with LSND at 99% CL. These detectors can distinguish photons from electrons with a 94% photon rejection rate.
At CERN, we will be interested in the past experiment NOMAD and the future proposal SHiP. The proposed SHiP experiment is unique among beam dump experiments in that it features very large neutrino energies and a sizeable flux of electron, muon and tau neutrinos. Furthermore, the use of lead (Z = 82) inside the neutrino detector will provide an ideal setting to take advantage of coherent production, which scales as Z². At Los Alamos, we consider the LSND experiment, which proves to be useful at low HNL masses. In what follows, we discuss the various production mechanisms at beam dumps, the main backgrounds involved in the search, and our results.

A. Production mechanisms
At neutrino beam dump experiments, HNL production can happen in three principal ways. The first, and most familiar, mechanism is mass mixing; however, this is subdominant in our analysis by assumption. The two dominant production mechanisms are therefore meson decays and Primakoff upscattering, both of which are explained in greater detail below. In principle, deep inelastic scattering production via Drell-Yan-like processes is also possible, but we found it to be subdominant.

Primakoff upscattering
Neutrino upscattering is the dominant production mechanism for N across a wide range of masses for the experiments we consider. It happens when an incoming neutrino interacts with matter and upscatters into a long-lived HNL state N. The HNL subsequently decays into a neutrino and a photon; an explicit example is provided in Fig. 4. This process can either happen inside the fiducial volume of the detector or in the line of sight separating the proton target from the detector. In all our results, we employ the narrow width approximation, since N is usually produced on-shell and travels some distance before decaying. Having an HNL lifetime and energy consistent with the necessary flight distance is enforced by the decay probability

P_dec = e^{−L₀/L_dec} (1 − e^{−ΔL/L_dec}),  (12)

where L₀ is the distance travelled before reaching the detector, ΔL is the fiducial length available for the decay, and L_dec = γβ/Γ_N is the lab-frame decay length.

FIG. 4. Tree-level neutrino scattering process with a final-state photon, arising from the dipole portal to HNLs. We work in the narrow width approximation, and assume the above diagram factorizes.

In Appendix A 1, we present the details of how the cross section is obtained for coherent and diffractive scattering. We apply the cuts described in Appendix C to ensure proper kinematics of the photon. There, it is also fully described how the region of integration of t is determined. Once we have obtained the cross section, cuts, photon detection efficiency and luminosity, we can set limits following the discussion in Appendix B.
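As a numerical sketch: assuming the standard exponential decay law, the probability that an HNL produced a distance L₀ upstream decays inside a fiducial length ΔL is exp(−L₀/L_dec)(1 − exp(−ΔL/L_dec)). The helper below (names illustrative) evaluates this quantity.

```python
import math

def decay_in_volume_prob(L0_m, dL_m, L_dec_m):
    """P(HNL survives distance L0 to the detector, then decays within dL),
    assuming an exponential decay law with lab-frame decay length L_dec."""
    return math.exp(-L0_m / L_dec_m) * (1.0 - math.exp(-dL_m / L_dec_m))

# e.g. detector 100 m downstream, 10 m fiducial length, 400 m decay length
p = decay_in_volume_prob(100.0, 10.0, 400.0)
```

The probability is maximized when L_dec is comparable to the baseline: a much shorter decay length depletes the flux before the detector, while a much longer one suppresses the chance of decaying inside the fiducial volume.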

Meson decays
At low mass, HNLs are long-lived and represent a kinematically allowed decay channel for light mesons. Unlike mass-mixing-induced decays, the dipole portal allows for electromagnetically mediated Dalitz-like pathways in addition to weak decays mediated by an off-shell neutrino. The qualitative features that can allow for significant production of HNLs include (i) a high meson multiplicity per proton (e.g. for pions).
In terms of meson production at the experiments we consider, the largest difference between them is that immediately following the proton target, SBN features a 50 m meson decay chamber, whereas SHiP has a hadron stopper. This divides our discussion into prompt (τ_rest ≲ 10⁻¹² s) and long-lived (τ_rest ≳ 10⁻¹² s) mesons.
Only the former will contribute to HNL production at SHiP, whereas both will be relevant at SBN due to its long decay chamber. To obtain rates, we calculate the differential cross section of HNL production from mesons in the meson rest frame, which we combine with the meson fluxes in the lab frame. The details of these calculations are outlined in Appendix A 2. The species we have included in our analysis are shown in Table I, from which it is clear that the prompt mesons are π⁰ and η. For both of these, the dominant channel for HNL production is the Dalitz-like decay

π⁰, η → γ ν̄_a N + h.c.  (13)

We immediately see that these radiative Dalitz-like decays will be useful for improving the sensitivity to d^e and d^τ flavored couplings, since the process in Eq. (13) is universal in the flavor a. By contrast, neutrino upscattering at beam dump experiments is limited by the smaller incident fluxes of ν_e and ν_τ neutrinos, as compared to ν_µ neutrinos. The long-lived mesons we consider are π± and K±. They can produce HNLs via an off-shell neutrino,

π⁺, K⁺ → µ⁺ν* → µ⁺Nγ + h.c.  (14)

When considering decays to electron flavor, such as K, π → eν_e, one typically expects a chiral suppression of O(m_e²/m_µ²) in the branching ratio relative to the muon channel. While we concentrate on Eq. (14) for muon flavors at SBN, we note that K, π → eNγ will avoid chiral suppression due to the chirality-flipping nature of the dipole portal. The K⁺ states, whose rates are about a tenth of those of pions, are important because they allow production of heavier HNLs.
To get a handle on which mesons are expected to contribute most, we calculated the average multiplicities of each meson per proton on target at SBN. Our results are shown in Table I. The π⁻ multiplicity has been calibrated to match that of Table X in [29], and we find very good agreement for the other meson multiplicities. No distribution parameters for K⁻ and η were available, so we rescaled those of K⁺ to match expectations. Both the K⁻ and η contributions are very small, so the discrepancy in average momentum and angle as compared with Table X has a negligible effect on our results. We conclude that pions will be the most important mesons for sourcing low-mass HNL particles.
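The prompt vs long-lived classification used above can be illustrated with a few rest-frame lifetimes (PDG values quoted from memory; dictionary keys and the helper are illustrative):

```python
# Approximate rest-frame lifetimes in seconds (PDG values, from memory)
TAU_REST_S = {
    "pi0": 8.5e-17,   # decays electromagnetically: prompt
    "eta": 5.0e-19,   # decays strongly/electromagnetically: prompt
    "pi+": 2.6e-8,    # weak decay: long-lived
    "K+": 1.2e-8,     # weak decay: long-lived
}

def prompt_mesons(lifetimes, cutoff_s=1e-12):
    """Return the species with rest lifetime below the 1e-12 s cut
    separating prompt from long-lived mesons in the text."""
    return {m for m, tau in lifetimes.items() if tau < cutoff_s}
```

Applying the cut reproduces the split used in the text: π⁰ and η are prompt (and so contribute at both SHiP and SBN), while π± and K± are long-lived and only matter where a decay volume follows the target.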

B. Backgrounds
The main backgrounds for HNL searches are single photon signatures, arising from mis-reconstructed π⁰ decays or radiative resonance decays such as ∆ → Nγ (where N here denotes a nucleon). At SHiP, not much information is publicly available, and we therefore consider various benchmark estimates for these backgrounds. We guide our estimate by considering the observed single photon backgrounds at NOMAD, rescaled to account for differences in the target mass and number of protons on target.
The SBN collaboration, on the other hand, has estimated the number of single photon events that can fake a ν_e CC signature in each of its detectors. We can estimate the total single photon background by taking this number and dividing it by 6%, thereby undoing the photon rejection rate. We then impose a 200 MeV energy threshold in our results, since the single photon backgrounds grow with decreasing energy. To account for signal photons that may be lost, we apply a 20% signal efficiency.
The backgrounds at LSND are similar in spirit to those at MiniBooNE, in that electron-like events arise from both electron and photon sources. In order to obtain constraints, we base our analysis at LSND on an electron-neutrino elastic scattering search [30], respecting the fiducial geometry and energy cuts described in that paper. Examining Figs. 1 and 10 of [30], we note that the incident neutrino flux favors energies between 30 and 50 MeV, whereas the collected electron-like sample peaks at energies around 22 MeV. Single photons from HNL decays, on the other hand, tend to be much harder and closer in energy to their parent SM neutrino. We therefore explore two different recast strategies. In the first case, we impose a lower threshold on the incident neutrino energy of 18 MeV. This corresponds to the full dataset collected by LSND, comprising roughly 300 predicted background and data events. In the second strategy, we impose a lower energy cut of 40 MeV in an attempt to better discriminate our new physics signal from SM backgrounds. This cut amounts to keeping roughly 27 predicted background and data events. We find that the latter strategy provides slightly better sensitivities to HNLs, and these are the LSND results that feature in all of our plots.
Lastly, diagrams containing loops of charged leptons and either a W or Z boson can induce an effective γγνν vertex in the SM, providing a potential source of single photon backgrounds. We have explicitly estimated the size of this background in Appendix A 3; it is many orders of magnitude below the HNL production cross section estimated in the previous section, and can therefore safely be ignored.

C. Experimental results and prospects
In what follows we describe and summarize the implications of existing measurements at LSND and MiniBooNE. We also comment on the projected reach of ongoing and future experiments such as MicroBooNE and SHiP.

LSND
The LSND oscillation anomaly, which consists of an excess of ν̄_µ → ν̄_e events [16], has historically motivated interest in sterile neutrinos. While common interpretations of the excess typically involve very light sterile states, it has more recently been proposed that a dipole portal coupled to HNLs with m_N ≈ 50 MeV could explain the excess [4,5,7]. It is therefore of great importance to consider the observations at LSND and their implications for dipole portals to HNLs.
The setup at LSND involves a neutrino flux coming primarily from µ⁺ and π⁺ decays at rest [30]. Consequently, the dominant production channel of HNLs is neutrino upscattering. In modelling the production of HNLs at LSND, we include Primakoff upscattering of neutrinos, as well as decays in flight of π⁰, decays at rest of µ⁺, and decays both at rest and in flight of π⁺. We account for the change in the target of LSND's neutrino source, LAMPF, and include two years of data assuming a water-based target and three years of operation using a high-Z target (mostly tungsten) [30]. For our purposes, the primary effect of the target material is to modify the incident flux of neutrinos and mesons.
The decays in flight of π⁺ and π⁰ are modelled assuming a Burman-Smith distribution with appropriate parameters for both water and tungsten [31,32]. Additionally, the decays at rest of µ⁺ and π⁺ contribute to the production of HNLs. The decay mode of interest for π⁰ is a Dalitz-like decay, while for µ⁺ and π⁺ an off-shell neutrino mediates the production of HNLs. This off-shell neutrino can be either ν_µ or ν_e, and we include both of these processes in our analysis. Summing all of these processes, and appropriately boosting the HNLs from decays in flight, leads to an incident flux of HNLs which may enter the detector and decay, leaving a single photon signature.
On top of the flux of HNLs due to pion and muon decays, Primakoff upscattering of neutrinos in transit to the detector can provide an additional source of HNLs. Alternatively, upscattering can occur within the detector itself. These processes have to be considered separately, since much longer decay lengths are possible in the former case, the target material upon which the neutrino upscatters is different, and the angular cuts are dictated by the different geometries.
When upscattering in transit to the detector, the medium of interest is the dirt and other terrestrial material along the line of sight between the source and the detector. In our analysis this is modelled as SiO_2, and we include both coherent and diffractive scattering. The produced HNL must be directed within a range of solid angle that guarantees it passes through the detector; this range of angles depends on how far from the detector the HNL is produced. To account for this effect, we analyze ten evenly spaced points between the source and the detector. At each of these points, given a flux of neutrinos, we calculate the number of HNLs that would both be produced and enter the fiducial volume of the detector. The LSND detector is off-axis from the neutrino source and is roughly cylindrical in shape, so we define the angular cuts such that the HNL would pass through the bottom-near and top-far corners (relative to the neutrino source) of the detector; the angular cuts are implemented as described in Appendix C and account for fiducial cuts at the bottom of the detector. In addition to passing through the detector, the HNL's subsequent decay must occur within the fiducial volume for a signal to be observed. We account for this by including the probability that the HNL decays in the fiducial volume, Eq. (12). Angular cuts within the detector are, as before, described in Appendix C.
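The line-of-sight procedure above — sampling ten evenly spaced production points and weighting each by the probability that the HNL decays inside the fiducial volume — can be sketched numerically. All lengths and rates below are illustrative placeholders, not LSND's actual geometry or flux:

```python
import math

def decay_in_volume_prob(s, L_fid, L_dec):
    """Probability that an HNL produced a distance s upstream of the
    detector decays inside a fiducial region of depth L_fid, given a
    lab-frame decay length L_dec (analogue of the paper's Eq. (12))."""
    return math.exp(-s / L_dec) - math.exp(-(s + L_fid) / L_dec)

def in_transit_yield(baseline, L_fid, L_dec, prod_per_m, n_points=10):
    """Sum HNL production over n_points evenly spaced points between
    source and detector, weighting by the decay-in-volume probability.
    prod_per_m is a placeholder production rate per metre of dirt."""
    step = baseline / n_points
    total = 0.0
    for i in range(n_points):
        x = (i + 0.5) * step       # production point along the line of sight
        s = baseline - x           # remaining distance to the detector
        total += prod_per_m * step * decay_in_volume_prob(s, L_fid, L_dec)
    return total
```

For a geometry-dependent analysis, each sampled point would additionally carry its own angular acceptance, as described in Appendix C.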
It is also possible that upscattering occurs within the fiducial volume of the detector itself. For LSND this implies a target composed of CH_2 (mineral oil) for the incident neutrinos, and it implies furthermore that HNLs can be produced, and subsequently decay, along the entire line of sight. We account for this effect at leading order in the limit L_dec ≫ L_fid, which is the relevant regime when considering the minimal bound on the dipole coupling of the HNL. We restrict the production of HNLs to the forward-pointing hemisphere (i.e. an angular cut of θ ≤ π/2), due to experimental cuts. Additionally, we only include the effects of coherent scattering, due to the presence of a hadronic veto within the detector.
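The leading-order treatment in the limit L_dec ≫ L_fid amounts to replacing the exponential decay probability by its first-order expansion. A quick numerical check, with placeholder lengths:

```python
import math

def decay_prob_exact(L, L_dec):
    """Exact probability of decaying within a path length L."""
    return 1.0 - math.exp(-L / L_dec)

def decay_prob_leading(L, L_dec):
    """Leading-order expansion, valid when L_dec >> L."""
    return L / L_dec
```

For L/L_dec of order a percent, the two agree to better than a percent, which is why the leading-order treatment suffices near the minimal-coupling bound.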

Fermilab's SBN program
At Fermilab, we are interested in the past experiment MiniBooNE, as well as the ongoing experiments involving the SBND and MicroBooNE detectors. At MiniBooNE, we consider the existing search for ν_µ → ν_e quasi-elastic scattering events [15]. When limited to reconstructed neutrino energies of 475 MeV < E_ν^QE < 1250 MeV, they find very good agreement between background and data. However, for energies between 300 and 475 MeV, MiniBooNE sees a persistent excess. MiniBooNE, being an oil-based Cherenkov detector, cannot distinguish electrons from photons. A possible explanation for the excess [17] is the ∆ → Nγ process faking a ν_e signal. A direct chiral perturbation theory calculation finds these rates to be twice as large as the data-driven estimates from MiniBooNE.
The more exotic interpretations of the MiniBooNE and LSND anomalies [4,5] involve additional single photons from new physics, arising in an HNL model with a large dipole coupling d and an active-neutrino mass-mixing term in the range |U_Nν|^2 ∼ 10^-3 - 10^-2. In that case, HNLs are produced through neutral-current ν scattering. In Fig. 5, we revisit the constraints from MiniBooNE by considering both production and decay stemming only from the dipole portal. Since it is difficult to reconstruct HNL energies (due to energy being carried away by outgoing neutrinos), we take an inclusive approach and sum over all the background and data bins. We calculate the allowed 95% CL HNL limits following the procedure in Appendix B for three different assumptions, which we denote by Bkg 1, 2 and 3 in Fig. 5. Firstly, we use the data and backgrounds as given in [15]. Secondly, we repeat the analysis after including the additional sources of backgrounds identified in [17]. And lastly, we compute constraints taking into account only the E_ν > 470 MeV region. Based on [17], we assume a 25% photon identification efficiency to account for resolution and smearing effects. The photon energy detection threshold is 140 MeV.
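The inclusive approach of summing all bins reduces the limit setting to a single counting experiment. A crude one-sided Gaussian stand-in for the Appendix B procedure (illustrative only, with made-up bin contents) looks like:

```python
import math

def signal_95cl_gaussian(data_bins, bkg_bins):
    """Naive 95% CL upper limit on extra signal counts after summing
    all data and background bins (one-sided Gaussian approximation;
    the paper's Appendix B procedure is more careful)."""
    d = sum(data_bins)
    b = sum(bkg_bins)
    # allow for any observed excess plus a 1.64-sigma fluctuation
    return max(d - b, 0.0) + 1.64 * math.sqrt(max(d, b))
```

Raising the assumed background (as in Bkg 2) or restricting to the anomaly-free region (Bkg 3) changes the summed counts and hence the allowed signal.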
Comparing our results to [5], where dipole-portal production mechanisms are ignored, we see that for masses around 50 MeV production from the dipole portal is actually dominant. An explicit calculation reveals that for the best-fit parameters in [5], the dipole production cross section is roughly 20 times larger than production from mixing, and so this explanation appears to be excluded. This point is discussed in [7], and in the same work the authors attempt to accommodate the constraint from muon capture with photon emission at TRIUMF [6,33] by introducing an additional heavy neutrino ν_h. In this way N → ν_h γ can serve as the main decay channel, and the branching ratio to ν_µ can be adjusted to accommodate the LSND/MiniBooNE anomalies while evading muon capture bounds. This same model was recently considered in the context of coherent and diffractive scattering at both MiniBooNE and MicroBooNE [34]. In contrast, we make no attempt to go beyond the minimal dipole coupling, and we therefore exclude the favored regions of [5].
For the 500 MeV HNL masses explaining the MiniBooNE data, we find that production from mixing dominates. Therefore, in order to obtain stronger dipole-only constraints, we turn to ongoing and future experiments. Our results for SBND and MicroBooNE are shown in Fig. 6. They assume 6.6 × 10^20 POT of data in SBND and 13.2 × 10^20 POT of data in MicroBooNE. As we see, after only 3 years of data taking they can start cutting into the favored parameter space, provided photon data is collected over this duration.

SHiP and NOMAD
At CERN, we will consider the NOMAD experiment, which ran from 1995–1998 [35][36][37], and the proposed SHiP experiment. Both of these experiments are based on CERN's Super Proton Synchrotron, and consequently have neutrino fluxes extending to larger energies than those at Fermilab. NOMAD has already performed a search for single-photon production. Using this data, corresponding to 1.45 × 10^18 POT, Monte Carlo simulations of HNL signals (with no mass mixing) were performed [38,39] to simulate the Primakoff process ν_µ Z → N Z. The signature of interest was an isolated electromagnetic shower corresponding to a single photon with energy distributed from 0 to E_ν, with E_νµ/2 as an average. The backgrounds, estimated to be roughly 10 events, come mainly from π^0 production, as well as ν_e CC interactions. The full results^1 from their simulation are shown in Fig. 7.

[Fig. 5 caption: 95% CL limits for HNL particles using MiniBooNE and LSND ν_e CC measurements. In light of the experimental anomaly, background option 1 (Bkg 1) uses the data and backgrounds as is, option 2 includes an alternative stronger ∆ → γN background estimate [17], and option 3 includes only neutrino energies in the anomaly-free region (E_νe > 470 MeV). We also overlay regions of interest (ROI) from the MiniBooNE and LSND anomalies (see text).]

[Fig. 7 caption fragment: … (1000) background events during the lifetime of the experiment. We also overlay existing constraints [38,39] from NOMAD.]
CERN has also proposed a future high-energy facility called SHiP [40]. If indeed funded and built, it would provide some of the strongest probes of heavy neutral leptons to date [3]. At SHiP, neutrinos are produced by 400 GeV protons impinging on a molybdenum and tungsten target. A hadron stopper immediately after the target allows only prompt meson decays, and a magnetized iron shield deflects muons. Following this are an emulsion cloud chamber near detector (which we will refer to as the "ECC detector") containing lead bricks, and a vacuum decay chamber followed by the main detector (which we will refer to as the "main detector"). The length of the whole experiment would be on the order of 100 m. It is advantageous to consider HNL production from prompt mesons, along the line of sight, and in the lead bricks, in order to maximize sensitivity to a large range of HNL lifetimes. We apply a photon detection efficiency of 80% and an energy threshold of 0.1 GeV.
A unique feature of SHiP is that it is expected to have a sizeable flux of ν_e and ν_τ neutrinos. Therefore, we can interpret the results of the single-photon search as constraints on d^f_γ for a given flavor f. Recall that flavor indices in Eqs. (1) and (9) are suppressed and a priori general. The projected sensitivities achievable at SHiP are shown in Fig. 7 for muon flavors, assuming two different benchmark choices for the number of background events (10 and 1000). In Fig. 8, we show the sensitivity for electron and tau dipole moments assuming 100 background events. At SHiP, single-photon rates have not yet been studied. We can obtain a naive estimate by comparing to NOMAD, which had about 10 background events with 100 times fewer protons on target than SHiP. Therefore, with higher luminosities coupled to improved detector capabilities, it is reasonable to estimate around 100-1000 background events in the SHiP ECC detector. This detector will probably have more background events than the main detector, since the latter is surrounded by veto structures designed to reduce backgrounds as much as possible. For the SHiP curves appearing in Fig. 1, we assume 1000 background events in both detectors in order to provide a conservative estimate.

A. Production mechanisms
Beam dump experiments feature very large luminosities; however, the accessible masses of N are limited by the incoming neutrino energy spectrum, typically peaked around 1 GeV, or between 10-20 GeV in the case of SHiP. In contrast, particle colliders can probe much larger masses at the expense of smaller luminosities [11]. Additionally, since dipole operators must couple to either B_µν or W_µν above the electroweak scale, there is the added possibility of on-shell production of the Z and W mediators. The HNL couplings appearing in all of the high-energy plots for LEP and the LHC are defined by taking the relations in Eq. (11) and rescaling them. We now discuss the mechanisms for producing HNLs at LEP and the LHC, and then discuss the details of the analyses and our results.

LEP
At LEP, production proceeds via e^+e^- → (N → γν)ν̄ + h.c. The signature to look for is thus a single-photon final state with missing energy. This channel can proceed via either Z or γ mediators, depending on the dipole coupling in the unbroken phase (see Eqs. (10) and (15)). The total production cross section at s = m_Z^2 for e^-e^+ → Nν̄, integrated over all angles, is obtained treating the electron as massless and assuming d_W = 0; the axial and vector couplings are defined as C_A = -1/2 and C_V = -1/2 + 2 sin^2 θ_W. In practice, we apply the experimental photon angular and energy cuts described in Section IV B and Appendix C, and make no approximation on the electron mass.

LHC
At the LHC, there are two main production channels to consider. The first is analogous to LEP, and consists of oppositely charged quarks and antiquarks interacting via an s-channel photon or Z boson: q_i q̄_i → (N → γν)ν̄ + h.c. This gives the same signature as at LEP, up to subtleties that will be discussed in Section IV B. In addition to neutral currents, the LHC provides the opportunity to study interactions proceeding via charged currents. The charged-current coupling appeared as one of the two possible couplings above the electroweak scale in Eq. (10), and leads to a final state consisting of a single photon, a charged lepton and missing energy, for example u_i d̄_j → (N → γν) ℓ^+. For the LHC, the rate of production of HNLs is calculated using MadGraph5_aMC@NLO v2.5.5 [41], making use of FeynRules 2.3 [42,43] to load our implementation of the HNL model.

LEP
There have been many analyses dedicated to the γ + E_miss final state [44][45][46][47]. We choose to focus on the results of LEP1, which ran at a center-of-mass (COM) energy corresponding to the Z pole and accumulated about 200 pb^-1 of data, and LEP161, which ran at a COM energy of 161 GeV and accumulated 25 pb^-1 [48]. Using partial luminosity and combining many analyses, LEP1 was able to set an upper bound of 0.1 pb on the cross section of new physics contributing to the γ + E_miss final state, within the angular acceptance range |cos θ_γ| ≤ 0.7 and requiring the outgoing photon to have a minimal energy of 0.7 GeV. We also enforce that the HNL decays within 1 m of the interaction point using Eq. (12). To set constraints using LEP data that extend to slightly larger HNL masses, we point out that LEP's 161 GeV run also set an upper bound of 1 pb on the single-photon cross section from new physics.
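The requirement that the HNL decay within 1 m of the interaction point is a boost-dependent decay-probability weight. A minimal sketch (the rest-frame decay length below is a placeholder, not a value from the paper):

```python
import math

def prob_decay_within(r, m_N, E_N, ctau_rest):
    """Probability that an HNL of mass m_N (GeV) and energy E_N (GeV),
    with rest-frame decay length ctau_rest (m), decays within r (m)
    of its production point."""
    p = math.sqrt(E_N**2 - m_N**2)      # HNL momentum
    L_lab = (p / m_N) * ctau_rest       # boosted lab-frame decay length
    return 1.0 - math.exp(-r / L_lab)
```

Heavier, less boosted HNLs decay more promptly; this is why the small-m_N boundary of the exclusion region is set by the decay length rather than the production rate.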

LHC
To probe the coupling d_Z, we recast a recent dark matter search at √s = 13 TeV by ATLAS [49] involving final states containing at least one photon with E_T^γ > 150 GeV, missing energy greater than 150 GeV, and 0 or 1 jets. Events in our MadGraph simulation were generated with 0 or 1 photon and no jets. Owing to the systematic uncertainties in the modelling of initial-state radiation, only background predictions with 1 jet are shown in the ATLAS paper. We use a data-driven method to estimate the background events with 0 jets by looking at the ratio of data events reported to contain either 0 or 1 jet. Following this, we see a deficit of data events in both the 0- and 1-jet channels as compared to the background predictions, which motivates us to adopt the CL_s method for estimating the sensitivity at the LHC, described in Appendix B. The dominant background for this search is the irreducible Z(→ νν̄)γ process, followed by W(→ ℓν)γ in which the final-state lepton is not detected. In addition to all the cuts described in the paper, we also weight events by the probability that the HNL decays before the closest distance to the ECAL barrel, namely r = 1.5 m from the beamline. We take the photon ID efficiency to be 92% [50].
The LHC also provides the opportunity to probe the charged-current HNL extension. We make use of a √s = 8 TeV CMS search for supersymmetric models with gauge-mediated breaking [51]. In its analysis, the collaboration searched for one electron/muon with transverse momentum greater than 25 GeV, one or more photons, and missing energy greater than 120 GeV. The dominant backgrounds in this search were misidentified photons, misidentified leptons, and electroweak backgrounds. In the case of CMS, the transverse distance from the beamline to the ECAL barrel is 1.29 m, and the detection efficiencies for electrons and muons are 80% and 90%, respectively. There are no requirements on the number of jets; however, the collaboration shows results consistent with low jet activity by requiring that the scalar p_T sum of jets (H_T) be smaller than 100 GeV. In our event generation, we do not consider associated jet production, which provides a conservative estimate. We simulate production of N and ℓ from a W boson via the d^a_γW coupling, and decays of N to a neutrino and photon via the d_γ coupling. We do this for various relative magnitudes of d^a_γW and d_γ. In both the CMS and ATLAS searches, results are shown in terms of several signal regions defined by an additional requirement on the missing energy. We cycle through each of these signal regions and independently calculate the sensitivity in order to find the most constraining missing-energy requirement.

We now briefly comment on ways in which one could extend the reach of this analysis. Access to longer HNL lifetimes could be achieved by using the location of the photons hitting the ECAL barrel and endcaps, and statistically mapping these back to the original direction of the HNL. Then, on an event-by-event basis, one could select different maximal distances in the decay-probability cut. Currently, we only use the distance of closest approach between the IP and the ECAL barrel.
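The CL_s prescription used when scanning the signal regions can be illustrated with a simple Poisson counting model; this is a sketch of the standard construction, not the exact Appendix B implementation:

```python
import math

def poisson_cdf(k, mu):
    """P(n <= k) for a Poisson distribution with mean mu."""
    term = math.exp(-mu)
    total = term
    for i in range(1, k + 1):
        term *= mu / i
        total += term
    return total

def cl_s(n_obs, bkg, sig):
    """CL_s = CL_{s+b} / CL_b for a single counting experiment; a
    signal hypothesis is excluded at 95% CL when cl_s < 0.05."""
    return poisson_cdf(n_obs, bkg + sig) / poisson_cdf(n_obs, bkg)
```

Because CL_s divides out the background-only p-value, an observed deficit (as seen in the 0- and 1-jet channels) does not produce spuriously strong exclusions.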
An additional possibility is to allow the HNL to decay somewhere inside the ECAL, as opposed to before reaching its surface. To avoid potential difficulties with triggering, however, this might have to be done in association with jets or leptons. Lastly, tau-flavored couplings could be explicitly probed in the ℓ + γ + E_miss analysis by tagging tau leptons. This would be a nice complement to neutrino beam dump experiments, whose characteristic energies and neutrino flavors often prohibit tau production. We do not include tau leptons in our simulations.

Results
The compilation of the high-energy limits on the dipole couplings is presented in Fig. 9 (see Table II for an explanation of the plot labels). All constraints have a characteristic "U" shape. The right boundary of the excluded region is controlled by the kinematic reach, and in the case of the LHC extends beyond a TeV. The left boundary (small m_N) is controlled by the lifetime of N, as smaller m_N leads to a longer lifetime of N and the loss of the γ signal in the detector. The bottom part of the constraints is controlled by the rates and backgrounds, and is approximately independent of m_N, as in this region the production cross section is m_N-independent and the decay is relatively prompt. It is interesting that below m_Z/2 the LEP experiments are still capable of providing better sensitivity to the neutrino dipole portal.

A. Big Bang Nucleosynthesis
Cosmology provides a very sharp tool for limiting the coupling constants of metastable heavy particles. In particular, the consistency of the BBN-predicted ^4He and deuterium yields with observations shows that the Universe was dominated by electrons, photons and SM neutrinos at very early epochs with temperature T ∼ 1 MeV. Any massive relic surviving in large abundance down to these temperatures, or equivalently having a lifetime in excess of 0.1 seconds, will distort this balance and contribute to the Hubble rate during the proton-neutron freeze-out. Since most of the neutrons end up in ^4He, this possibility constrains the lifetime of heavy metastable relics if they are populated to large thermal abundances.
Therefore, we are led to investigate the mechanisms that populate HNLs in the early Universe. The analysis of the conventional mass-mixed case and its impact on BBN was performed in Ref. [52], and the mechanism for thermal population of HNLs through neutrino oscillations is well established [53]. Here we notice that the processes that populate N's through a dipole portal can be divided into two categories.
(i) Inverse decays^2, ν + γ → N. These processes are important at T ∼ m_N, and can be derived from the width of N.
(ii) 2 → 2 processes, such as f^+ f^- → Nν̄ or N̄ν, where f is a SM fermion, as well as all crossing-related processes. While higher order in the coupling constant, these rates are enhanced in the UV.
At any given temperature in the early Universe, the abundance of N particles is set either by equilibrium, if their interaction rates are faster than the Hubble rate, or by the approach to equilibrium, regulated by the ratio n_f⟨σv⟩/H(T), where n_f and n_N are the number densities of charged species and HNLs, H(T) is the Hubble rate, and ⟨σv⟩n_f is the temperature-dependent rate for creating an HNL per unit time. Most important for us is the scaling of this expression with temperature and the parameters of our model. Making a simple parametric estimate, we arrive at

n_f⟨σv⟩/H(T) ∼ d^2 T M_Pl/√g_*, (18)

where M_Pl is the Planck mass and g_* is the effective number of degrees of freedom appearing in the definition of the Hubble rate, H(T) ≃ 1.66 √g_* T^2/M_Pl. The most important feature of Eq. (18), besides the self-explanatory dependence on M_Pl and d, is its scaling with temperature. The rate is enhanced in the UV, and therefore it is the highest temperatures in the system that determine the initial abundance of N. Strictly speaking, one therefore cannot determine the initial abundance of N without specifying the initial temperature relative to d^-1. On the other hand, assuming that the Universe at some point had temperature T ∼ d^-1, the ratio in Eq. (18) is larger than one for all values of d covered by our master plot, Fig. 1. With this assumption, one can be sure that N was in fact thermalized in the early Universe.
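The parametric scaling of the interaction rate relative to the Hubble rate can be checked numerically (natural units, GeV; the value of g_* is a representative placeholder):

```python
import math

M_PL = 1.22e19  # Planck mass in GeV

def rate_over_hubble(d, T, g_star=10.75):
    """Parametric estimate n_f<sigma v>/H ~ d^2 T M_Pl / sqrt(g_*),
    following the scaling discussed around Eq. (18)."""
    return d**2 * T * M_PL / math.sqrt(g_star)
```

Evaluating at T ∼ 1/d gives d M_Pl/√g_*, which is far above one for any dipole coupling in the range of Fig. 1, i.e. the HNLs thermalize.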
Once N is thermally populated, it will last until the lifetime of the Universe is comparable to τ_N. To predict how much energy the thermally created reservoir of N stores, one needs to understand at what temperature the HNLs decouple, which can be estimated parametrically by equating the r.h.s. of Eq. (18) to one. This gives the decoupling temperature

T_decouple ∼ √g_*/(d^2 M_Pl), (19)

where d can be re-expressed through the lifetime formula for N. The decoupling of N means that at temperatures T < T_decouple the decays of heavy SM particles heat up the SM bath but not N, and its relative energy density is somewhat diluted, as g_* at decoupling will be larger than at the time of decay. At the same time, for N heavier than an MeV, there is a possibility of a significant enhancement of the N energy density at decay, due to the HNLs becoming nonrelativistic. The ratio ρ_N/ρ_SM gains an enhancement factor m_N/T_dec, where T_dec is the temperature corresponding to the time of decay of N, H(T_dec) ∼ τ_N^-1 (in the assumption that T_dec < m_N). Consequently, our estimate becomes

ρ_N/ρ_SM ∼ (g_N/g_*) × (m_N/T_dec), (20)

where g_N = 7/8 × 4, as N carries four fermionic degrees of freedom. This estimate can be used to constrain the lifetime of HNLs, as ρ_N/ρ_SM is constrained at T ∼ 1 MeV through the n/p freeze-out. If τ_N ∼ 0.1 s, the ratio in Eq. (20) is O(1), while only variations below ∼10% are allowed (see, e.g., Ref. [54]).
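Setting the ratio in Eq. (18) to one and forming the nonrelativistic enhancement factor can be sketched in a few lines (representative g_*; all inputs are illustrative):

```python
import math

M_PL = 1.22e19  # Planck mass in GeV

def t_decouple(d, g_star=10.75):
    """Temperature (GeV) at which d^2 T M_Pl / sqrt(g_*) ~ 1,
    i.e. the parametric decoupling temperature of the HNL."""
    return math.sqrt(g_star) / (d**2 * M_PL)

def rho_ratio_enhancement(m_N, t_dec):
    """Nonrelativistic enhancement factor m_N / T_dec of rho_N/rho_SM,
    valid in the regime T_dec < m_N."""
    return m_N / t_dec
```

Larger couplings keep the HNLs in equilibrium longer, so T_decouple falls as d grows; the m_N/T_dec factor is what makes even modest relic populations dangerous for BBN.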

B. Supernova SN 1987A
The modification of energy generation and transfer in stars can also serve to limit the viable parameter space of a dipole neutrino portal. In particular, SN 1987A has proved to be a useful probe of weakly coupled particles below the GeV scale [55][56][57][58][59][60][61]. The typical consideration is as follows: weakly coupled particles may substantially enhance the rate of cooling of a supernova, and if this cooling proceeds too quickly and the energy is able to escape without being reabsorbed, then nuclear processes at the core of the supernova can rapidly stop. This in turn leads to significant deviations between the predicted and observed neutrino pulses at terrestrial neutrino observatories [62][63][64]. Therefore it is the rate of cooling, rather than the rate of production itself, that is important.
There are two considerations in determining whether HNLs (or any new weakly coupled particle) can spoil supernova predictions. First, for sufficiently weak coupling, very few HNLs are produced, and consequently they cannot efficiently cool the interior of the supernova. This naively suggests that arbitrarily strong couplings are excluded; however, if the coupling is sufficiently large, then any HNLs that are produced will be trapped. Provided this trapping occurs within the "neutrinosphere" (defined as r < R_ν, where T(R_ν) = 3 MeV) [60], the energy stored in the HNLs can be efficiently recycled and re-emitted in the form of neutrinos, ultimately having no impact on the observations at terrestrial detectors. A full treatment that captures this competition between production and absorption would involve a detailed study^3 of the integrals in Eqs. (21a) and (21b) [60], where dΓ/dr is the local rate of production of HNLs, E_N denotes the HNL energy, R_far is a large radius to which the escape probability is insensitive, and the average is taken with respect to the local thermal bath at r_0. The probability of escape P_esc is found by exponentiating the line-of-sight integral of the inverse mean free path, which in the case of the dipole portal is always inversely proportional to the square of the dipole coupling, λ_MFP ∝ 1/d^2.
For each HNL mass m_N, there will exist a minimal dipole coupling d_prod(m_N) below which too few HNLs are produced to significantly alter the observed neutrino signal. Likewise, there will also exist a maximum dipole coupling d_abs(m_N) such that for any stronger coupling the HNLs are efficiently reabsorbed and do not cool the interior of the supernova appreciably. The region of excluded parameter space lies between these two curves in the d-m_N plane, i.e. d_prod(m_N) < d_excl(m_N) < d_abs(m_N). Although Eqs. (21a) and (21b) are in general complicated, the analysis simplifies in the weak-coupling regime (d ≲ d_prod) and in the strong-coupling regime (d ≳ d_abs).
In obtaining the lower curve d_prod(m_N) of Fig. 11, the coupling is small and so the probability of escape is nearly unity. We may therefore study the production of HNLs and neglect the absorptive properties of the bath. Furthermore, this may be done locally, at a characteristic radius, as opposed to globally. This approximation is often termed the "Raffelt criterion" [55], and is defined in terms of the energy carried by HNLs per unit volume per unit time, dE_N/dt (referred to as the emissivity throughout this paper), at a fixed radius r_0:

dE_N/dt ≲ 0.1 × dE_ν/dt, (22)

where dE_ν/dt is the maximum energy per volume per time emitted via neutrinos. This criterion essentially requires that HNLs produced at the fixed radius r_0 carry no more than 10% of the total energy lost to neutrinos per unit time. The emissivity constraints derived from the Raffelt criterion and from the criterion with the integrated energy are compared explicitly in [58]; the difference is well within an order of magnitude for their scenario.
^3 Equation (21b) assumes an outward radial path for the HNL and does not account for passage through the core of the supernova. Neglecting this O(1) effect is already an approximation [60].

In the limit of strong coupling, the relevant question is whether the produced HNLs can escape the supernova's neutrinosphere. Since d_abs(m_N) ≫ d_prod(m_N), we may assume a large flux of HNLs in the parameter space of interest, and so, by Eq. (21a), it is the probability of escape that must inhibit cooling due to HNL production. As demonstrated by Eq. (21b), and the discussion thereafter, this quantity depends exponentially on the dipole coupling by way of the mean free path. Therefore, a reasonable criterion is that P_esc(d_abs) = 1/2, since for d ≳ d_abs this quantity will be exponentially suppressed. Although the Raffelt criterion is most naturally imposed where the temperature is maximal and densities are high, it is possible that this will lead to a rather conservative bound on d_abs. This is because, being produced in the hot and dense interior of the supernova, the HNLs must travel through several kilometres of absorptive material composed of electrons, protons, and neutrinos, all of which have number densities in excess of 10^37/cm^3. This feature is mitigated to some extent by Pauli-blocking; however, which effect dominates is hard to determine. With this in mind, we perform our analysis at two radii, r_0 = 10 km and r_0 = 14 km. The former corresponds to the conventional choice [55,56,58,60] of the hottest (T ≈ 30 MeV) and most dense (n_e, n_ν, n_p ≈ 10^37/cm^3) region of the supernova. The latter choice, by contrast, involves slightly lower temperatures (T ≈ 20 MeV) and number densities (n_e, n_ν, n_p ≈ 10^36/cm^3), but does not require transit through the densest regions of the supernova, owing to the sharp decline in number density in the outward radial direction.
Before turning to the details of the calculation of the emission rates and escape probabilities, we first summarize the physics included in our calculations. We use radial profiles corresponding to a supernova with an 18 M_⊙ progenitor, obtained by digitizing the reference runs shown in Fig. 5 of [59]. In calculating the optical depth, the full radial dependence is accounted for, but, as discussed above, we apply the Raffelt criterion at two fixed radii. We include all species present except for neutrons, as they do not couple to HNLs via the dipole portal. In computing the optical depth and the emissivities, we account for the effects of quantum degeneracy, including Pauli-blocking, which is found to modify the rate of production and to have a dramatic effect on the escape probabilities of HNLs.

Production
Supernovae typically have significant populations of protons, neutrons and photons, as well as electrons and neutrinos and their associated antiparticles. Save the neutron, HNLs couple to all of these species at tree level via the dipole portal, and this allows for the following production mechanisms:

ν + e^± → N + e^± (upscattering), (23)
ν + p → N + p (upscattering), (24)
e^+ + e^- → ν + N (synthesis).

We point out that our analysis does not include thermal field theory effects, and so we omit the "plasmon decay" γ → νN production mode. In general, ignoring the thermally acquired effective mass of the photon in t-channel scattering processes is only justified if the characteristic momentum flowing through the photon is much larger than its effective mass, which is on the order of 20-30 MeV. Using vacuum propagators for the dominant HNL production process e^-ν → e^-N, we calculated the quantity √(-q^2) and found it to be greater than 70 MeV for all masses considered, eventually asymptoting to m_N for heavy N. Furthermore, for all masses considered, ignoring the regime √(-q^2) < 30 MeV changes q^2 by less than 4%. In addition to thermal effects, we also neglect the influence of nucleon magnetic moments (because of the additional ∝ m_p^-1 suppression), and for that reason neglect the ν + n → N + n production mode. Returning to the channels we do consider: all of them have two incident species, and so the rate of production is controlled by the product of their densities (i.e. n_e n_ν in the case of electron upscattering). In the case of upscattering, however, the chemical potential can be an order of magnitude larger than the temperature, and so Pauli-blocking of the outgoing SM product must also be taken into account.
As discussed above, in considering the minimal dipole coupling that can spoil predictions from SN 1987A, we study the Raffelt criterion, Eq. (22), at both r_0 = 10 km and r_0 = 14 km.
The emissivity is defined through a thermal phase-space integral in which f_a = 1/(exp[(E_a - µ_a)/T(r_0)] + 1) is the Fermi-Dirac distribution for species a, and v_Møl is the Møller velocity. The average, ⟨E_N σF⟩, is taken over phase space with the appropriate distribution functions included. For inverse decays, this is the trivial one-body phase space of the HNL, but for 2 → 2 processes the appropriate Pauli-blocking factor of the outgoing SM particle, 1 - f_3, is included, where E_3 is evaluated in the rest frame of the bath. Explicitly, for 2 → 2 processes the average is taken over the two-body Lorentz-invariant phase space Φ_2(p_3, p_N) of the outgoing HNL and SM particles, with F(s) the Lorentz-invariant flux factor, and E_N, like E_3, evaluated in the rest frame of the bath. The production matrix element M_prod is calculated at zero temperature, and does not include, for example, the in-medium modification of the photon propagator. Following [56,65,66], Eq. (27) can be rewritten in terms of the energy variables E_± and the Mandelstam variable s: E_- depends on s, E_+, m_1, m_2, and cos θ, and its bounds of integration are obtained by considering the limits cos θ → ±1 with E_+ and s held fixed.
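The Fermi-Dirac weights and the Pauli-blocking suppression of the outgoing SM particle can be written directly. A minimal numerical sketch (the temperatures and chemical potentials, in MeV, are placeholders):

```python
import math

def fermi_dirac(E, mu, T):
    """Occupation number f = 1/(exp[(E - mu)/T] + 1)."""
    return 1.0 / (math.exp((E - mu) / T) + 1.0)

def pauli_blocking(E_out, mu, T):
    """Suppression factor 1 - f for an outgoing SM fermion of energy E_out."""
    return 1.0 - fermi_dirac(E_out, mu, T)
```

With µ_e ∼ 100 MeV and T ∼ 10-30 MeV, an outgoing electron left deep below the Fermi surface is almost completely blocked, which is why production favours large momentum transfers.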

Escape
The escape probability, Eq. (21b), is dictated by the mean free path λ_MFP of the HNL in the hot bath of the supernova. Demanding that the probability of escape be less than 50% is equivalent to demanding -ln P_esc ≳ 2/3. Since the dipole portal is the only coupling between the Standard Model and the HNL, all processes that contribute to λ_MFP are proportional to d^2. It is therefore convenient to introduce a reduced mean free path λ̃, defined at a reference value d = 10^-7 GeV^-1 via λ_MFP = λ̃ × (10^-7 GeV^-1/d)^2. Implicit in the above analysis is the assumption that the path of the HNL is directed radially outwards. This underestimates the probability of absorption, as it neglects paths that travel through the core and other overdense regions; however, as discussed in Appendix B of [60], this effect is O(1) and can be captured in the optical depth by the substitution λ_MFP → λ_MFP/3. We may then define the critical dipole moment d_abs, at which HNLs become efficiently trapped, via the condition P_esc(d_abs) = 1/2. The above procedure does not take into account that the flux of HNLs coming from the core of the supernova can be exponentially large, and therefore some amount of energy deposition can happen beyond d_abs. This flux is a factor of (d_abs/d_prod)^2 ∼ 10^6 larger than at the lower bound, and so an even larger dipole coupling, roughly an order of magnitude larger and consequently more stringent, would be required to efficiently absorb it. Since we neglect this effect, our analysis can be considered conservative in this regard.
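Given the reduced optical depth evaluated at the reference coupling, the trapping condition P_esc(d_abs) = 1/2 inverts in one line. The value of the reduced optical depth below is a placeholder, and the factor of 3 for non-radial paths is included as in the text:

```python
import math

D_REF = 1e-7  # reference dipole coupling in GeV^-1

def p_escape(d, tau_ref):
    """Escape probability exp(-tau), with tau = 3 (d/D_REF)^2 tau_ref,
    where tau_ref is the reduced optical depth at the reference coupling
    and the factor of 3 accounts for non-radial paths."""
    return math.exp(-3.0 * (d / D_REF)**2 * tau_ref)

def d_abs(tau_ref):
    """Coupling at which P_esc = 1/2."""
    return D_REF * math.sqrt(math.log(2.0) / (3.0 * tau_ref))
```

Because the optical depth grows as d^2, the escape probability falls off exponentially just above d_abs, justifying the sharp P_esc = 1/2 criterion.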
Both the single-body decay of the HNL and 2 → 2 scattering contribute to the mean free path. In no particular order, the relevant processes are

N + e± → ν + e± (downscattering), (34)
N + p → ν + p (downscattering), (35)
ν + N → e⁺ + e⁻ (annihilation), (36)
N → γ + ν (decay), (37)
N + SN → N + SN (gravitational trapping). (38)

We have included the full radial dependence of the temperature and chemical potentials in our calculation of Eq. (33). As can be clearly seen in Fig. 10, the chemical potentials of the neutrinos and electrons are significantly higher than the temperature in the interior of the supernova; therefore, for HNLs produced at r_0 ≈ 10 km, Pauli blocking and the Fermi-Dirac distributions of the absorptive species can play an important role in determining the escape probability. As discussed above, we compute the reduced optical depth integral at a reference dipole coupling of 10^−7 GeV^−1 and include the effects of Pauli blocking. Here α ∈ {e−, e+, ν_{e,μ,τ}, ν̄_{e,μ,τ}, γ, p} labels the species that can absorb HNLs, and the n_α are their Fermi-Dirac distributions. The thermal averages ⟨n_α σ_αN⟩ and ⟨Γ⟩ include the thermal distribution of the absorptive bath for the 2 → 2 absorption, and the associated Pauli blocking of outgoing SM particles for both decays and 2 → 2 processes.
We fix the incident HNL energy to be E_N(r_0, m_N), defined as the average energy per HNL produced at r_0 = 10 or 14 km; this implies a boost factor βγ(r_0, m_N) for the HNL. In practice we compute the average energy numerically, but the qualitative behaviour can be understood as follows. The dominant production mechanism over most of the mass range is Primakoff upscattering off of electrons, which is Pauli-blocked on the outgoing electron. For m_N ≳ μ_e, the momentum transfer required to create an HNL typically kicks electrons above the Fermi surface and imparts the HNL with a three-momentum of order P_N ∼ O(μ_e); the momentum can therefore be estimated using elementary kinematics. In contrast, for low masses the effects of Pauli blocking must be accounted for by demanding a large momentum transfer, q² ≈ −μ_e², so as to kick the electron above the Fermi surface. Taking the average neutrino energy to be μ_ν/2 and averaging over angles then leads to an estimate in which the chemical potentials are evaluated at r_0. We also assume the HNL's path is directed radially outward (and correct for the possibility of transit through overdense regions via a factor of 3, as discussed above). The thermal averages ⟨n_α σ_αN⟩ and ⟨Γ_N⟩ take into account the radial profile of the supernova, since the Pauli blocking of outgoing SM particles and the thermal distributions of initial SM particles inherit the radial dependence of the chemical potential and temperature profiles. As in the case of production, the matrix element |M_abs|² is computed at zero temperature, and we have checked that finite-temperature corrections are under control.
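The Fermi-Dirac occupancies that drive the Pauli blocking discussed above are simple to evaluate. A minimal sketch, with illustrative (not profile-accurate) values of T and μ_e:

```python
import math

# Minimal sketch of the Fermi-Dirac occupancy and the resulting
# Pauli-blocking factor. T and mu values are illustrative only.

def fermi_dirac(E, mu, T):
    """Occupation number f = 1/(exp((E - mu)/T) + 1); E, mu, T in MeV."""
    return 1.0 / (math.exp((E - mu) / T) + 1.0)

def pauli_blocking(E_out, mu, T):
    """Blocking factor (1 - f) for an outgoing fermion of energy E_out."""
    return 1.0 - fermi_dirac(E_out, mu, T)

# Deep inside the star (mu_e >> T) states below the Fermi surface are full:
T, mu_e = 30.0, 200.0  # MeV, illustrative
below = pauli_blocking(50.0, mu_e, T)   # well below the Fermi surface
above = pauli_blocking(350.0, mu_e, T)  # well above the Fermi surface
```

For μ_e ≫ T the blocking factor is essentially a step function at the Fermi surface, which is why a momentum transfer of order μ_e is needed to produce light HNLs.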
Finally, the gravitational pull of the supernova could potentially trap the HNLs and prevent this additional cooling channel. This is especially relevant in the high-mass regime. Here we follow the simple energy argument introduced in [67], which determines the particle mass at which this effect becomes important.
Gravitational trapping has to be taken into account when the kinetic energy of the HNL falls below its gravitational binding energy, E^kin_HNL ≲ G M_c m_N / R_c, where E^kin_HNL is the average kinetic energy of the HNLs, G is Newton's constant, and M_c is the enclosed mass of the supernova within the radius R_c at which the HNL of mass m_N is produced. We take M_c ≈ M_SN, the mass of SN 1987A, and calculate E^kin_HNL at two radii, R_c = 10 km and 14 km, corresponding to the radii we choose for the emissivity and the optical depth considerations. We find that for m_N ≳ 320 MeV gravitational trapping is important at both R_c = 10 km and 14 km.

FIG. 11. Emissivity and optical depth constraints (red) from supernova SN 1987A, and the parameter space facilitating its conversion to a neutron star (green). We also show lines of constant HNL lifetime to gauge where BBN might be affected. Two production radii r_0 are plotted for comparison: one at the hottest, densest radius r_0 = 10 km, and one closer to the edge of the high-density region, r_0 = 14 km. Gravitational trapping becomes significant for HNLs with mass above the vertical gray line, labeled "Gravity".
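The trapping condition can be checked numerically via the dimensionless compactness Φ = G M_c/(R_c c²). This is a hedged sketch: the average HNL energy passed to `is_trapped` is an assumed input, not the paper's computed E_N(r_0, m_N).

```python
import math

# Hedged numerical check of the trapping condition
#   E_kin <~ G * M_c * m_N / R_c,
# written in terms of the compactness Phi = G*M_c/(R_c*c^2).

G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m/s
M_SUN = 1.989e30     # kg

def compactness(M_c_kg, R_c_m):
    """Dimensionless gravitational potential at radius R_c."""
    return G * M_c_kg / (R_c_m * C**2)

def is_trapped(m_N_MeV, E_N_MeV, M_c_kg, R_c_m):
    """True if the HNL kinetic energy lies below the potential-well depth.

    E_N_MeV is an assumed average total energy; m_N_MeV the HNL mass."""
    phi = compactness(M_c_kg, R_c_m)
    return (E_N_MeV - m_N_MeV) < phi * m_N_MeV

phi_10km = compactness(1.4 * M_SUN, 10e3)  # ~0.2 for a 1.4 M_sun core
```

With Φ ≈ 0.2 at 10 km, trapping requires the kinetic energy to be below roughly a fifth of the rest mass, which is why only heavy, slow HNLs are affected.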

C. Results
We begin with the BBN limits, which rest on several assumptions. First, we assume that the temperatures in the early Universe were initially rather large, so that HNLs were thermally populated. If the maximum (i.e. reheating after inflation) temperature was limited to a sub-GeV range (which is a rather extreme assumption), then domains of parameter space with small m_N and small d will not be constrained by n/p freeze-out, as the abundance of HNLs at 1 MeV can be much smaller. Second, we assume that BBN proceeds along the standard scenario, with HNLs providing only a small perturbation. An alternative scenario, in which the Universe is actually dominated by N and its decay reheats the ν and γ, e baths, might not be excluded throughout the whole parameter space. Namely, BBN provides only a handful of reliable predictions (⁴He, D/H). It is possible that for some "islands" in {m_N, d} space, the outcome of the nuclear reaction network is similar to standard BBN. In this case, however, one would also have to make sure that the energy densities of neutrinos and photons are consistent with measurements of N_eff. This would look like an additional fine-tuning, and therefore we do not take such an accidental possibility seriously.
Thus, with the above caveats, if the ρ_N/ρ_SM ratio is larger than 0.1 at the time of n/p freeze-out, BBN is perturbed out of agreement with observations. It is then possible to constrain the lifetime to be less than a fraction of a second (see Ref. [68] for a somewhat similar analysis of Higgs portal relics). We choose to be very conservative and set the lifetime limit at 1 sec, shown by the diagonal line in Fig. 11. (At m_N ∼ 1 MeV, the decoupling temperature is close to an MeV, and therefore ρ_N/ρ_SM > 0.2 unless N particles decay early. At m_N > 10 MeV, the decoupling temperatures are in the GeV regime and above, so that there can be a significant dilution by g_*(T_decouple). However, the factor m_N/T_dec more than compensates for this dilution along the τ_N = 1 second line.) We observe that in {m_N, d} space the BBN constraints do not overlap with the neutrino/beam dump or high-energy experimental constraints.
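The τ_N = 1 s contour in the {m_N, d} plane can be sketched from the m_N^−3 lifetime scaling noted in Eq. (5). A hedged sketch: the 4π normalization of the width below is our assumption for illustration, only the scaling is taken from the text.

```python
import math

# Sketch of the tau_N = 1 s contour, assuming a dipole-induced width of
# the form Gamma(N -> nu gamma) = d^2 * m_N^3 / (4*pi). The m_N^-3
# lifetime scaling is from Eq. (5); the 4*pi factor is an assumption.

HBAR_GEV_S = 6.582e-25  # hbar in GeV * s

def dipole_for_lifetime(m_N_GeV, tau_s=1.0):
    """Dipole coupling d (GeV^-1) giving lifetime tau_s at mass m_N."""
    gamma = HBAR_GEV_S / tau_s            # required width in GeV
    return math.sqrt(4.0 * math.pi * gamma / m_N_GeV**3)

# Along the tau = 1 s line, d scales as m_N^(-3/2):
d_10MeV = dipole_for_lifetime(0.010)
d_100MeV = dipole_for_lifetime(0.100)
```

The m_N^−3/2 slope of this line is what makes the BBN-excluded region a diagonal band in Fig. 11.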
Our astrophysical results are collected in Fig. 11. As described in detail in the previous subsections, we have calculated present limits on heavy neutral lepton dipole moments stemming from supernova cooling. The lower curve of the excluded region is found by requiring that the rate of energy produced by HNLs (the emissivity) is larger than a tenth of that from neutrinos. The upper curve is obtained by enforcing ∫ λ_MFP^−1 dr < 2/3, namely that the probability of an HNL interacting with something on its way out of the star (the optical depth) is small.
Our analysis reveals that Pauli blocking of electrons and neutrinos is an essential feature in determining both the emissivity and, especially, the optical depth. In the latter case, quantum degeneracy makes the hot and dense interior of the supernova nearly transparent to HNLs, whose decay and downscattering are inhibited by a Fermi sea extending up to momenta on the order of μ_ν ≈ 250 MeV. Counterintuitively, this means that the escape probability for an HNL produced at r_0 = 10 km is nearly equal to that of one produced at the edge of the densest regions at r_0 = 14 km. Similarly, the production of HNLs is severely inhibited by the Fermi sea of electrons. Naively, the high densities of electrons and neutrinos shown in Fig. 10 favor HNL production, and this suggests that Primakoff upscattering is the dominant production mechanism. This is, in fact, the case at low masses (but only marginally so); however, at higher masses (m_N ≳ 50 MeV) inverse decays actually come to dominate, despite the number density of photons being two orders of magnitude smaller, because the inverse decay is not Pauli-blocked. The consequence of quantum degeneracy is that the HNL behaves as if it were much more weakly coupled than naive estimates would suggest. The qualitative features of our results can be described as follows. The upper curve is dominated at low masses by downscattering off of electrons and neutrinos, and the inclusion of Pauli blocking increases the bound on d due to the large chemical potentials (i.e. a large number of already occupied states) of these leptons for r ≥ 10 km. Downscattering is relatively insensitive to the mass of the HNL (i.e. σ ∼ d²) and so is eventually overtaken by the decay of the HNL, which scales as Γ ∝ d² m_N³ and benefits from the absence of Pauli blocking on the outgoing photon; this crossover between m_N-independent downscattering and power-law decay lengths can be clearly seen in Fig. 11.
The bottom curve is dominated primarily by upscattering of neutrinos off of electrons. This process is Pauli-blocked only on the outgoing electron, and benefits from the high number densities of both electrons and neutrinos. In direct parallel with the escape probabilities, this process is eventually overtaken at large masses by inverse decays, which scale as m_N⁴ d² and provide the dominant contribution for m_N ≳ 50 MeV. The maximal emission is reached when m_N ≈ √s ≈ (T + μ_ν), but this production channel ceases to be viable at masses much higher than the average center-of-mass energy √s ≈ (T + μ_ν) ≈ 250 MeV, because the HNL cannot be efficiently produced. Upscattering has a slightly higher kinematic limit of m_N ≈ √s ≈ (μ_e + μ_ν) ≈ 500 MeV due to the large chemical potential of the neutrinos.
Gravitational trapping of the HNLs becomes important at large masses. Above m_N = 320 MeV, the average kinetic energy of the HNLs is smaller than the gravitational potential they feel from SN 1987A, as indicated with a vertical line in Fig. 11. This effect can to some degree alleviate the SN cooling bound on the HNLs, since such HNLs can be gravitationally trapped and never escape the supernova. We leave a more refined determination of the gravitational effect on SN cooling to future work.
The plot also shows a region labeled "Assisting SN Explosions". The detailed mechanism of core-collapse supernova explosions is an active research topic, with the most explored mechanism being driven by neutrinos [69]. Simulation results such as [70,71] have tended to find that the neutrino-driven explosion struggles to reproduce the revival of the shock wave needed for a successful explosion, and requires additional shock energy to match observations during the core collapse. It is worth noting, however, that the most recent simulation based on a 3D progenitor model [72] suggests that the neutrino-driven mechanism itself could possibly provide enough shock revival and explain the observed explosion energies. It is likely that a larger range of progenitors and more refined simulations are still required to fully understand the issue of SN explosions.
With these details in mind, it is worth noting that new degrees of freedom, for example HNLs, have long been proposed to power SN explosions [73], and were most recently proposed to assist neutrinos in reviving the shock waves and augmenting their energies [74]. We briefly review the mechanism for the reader. The star begins by collapsing under its own gravitational pull, causing a bounce off of the inner core, which radiates an outward shock. The shock gets stuck because of the dissociation of heavy nuclei, and is revived by SM neutrino heating and hydrodynamic effects, producing an explosion; this depletes the star's core of leptons. The outward shock then encounters a matter envelope surrounding the star. At this point, previous simulations [70,71] found that the shock wave is not able to expel the envelope, and the explosion is quenched. The matter in the envelope falls back into the core, possibly creating a black hole and preventing a neutron star final state from forming. If the envelope is blown away, however, the core can live on as a neutron star, which is the observed remnant of core-collapse supernovae. Added HNLs (or any other metastable particles with the right properties) can escape to the envelope and decay into neutrinos and photons. This creates an additional outward radial pressure in the envelope and breaks up some of the heavy nuclei. The original shock wave then has an easier time expelling the envelope, and wastes less energy dissociating the nuclei inside it. Interestingly, even a small amount of additional energy injection could possibly result in a proper explosion [73,74].
In Fig. 11, the upper bound of the 'preferred' region for assisting supernova explosions is determined by consistency with SN 1987A limits. The lower bound, which is the main numerical result of [74], corresponds to an energy emission from HNLs of 10^51 erg. By contrast, the energy emitted by all Standard Model neutrino species in SN 1987A is E_ν ≈ 3 × 10^53 erg [55]. It is important to note that the simulation in [74] assumes vacuum flavor mixing angles of sin²θ > 10^−8 for ν_τ mixing and 10^−8 < sin²θ < 10^−7 for ν_μ mixing, which are not present in our model. However, the main features of their analysis still hold in our case, since the HNLs in our scenario can also generate the required amount of energy injection given in [74]. To obtain the favored region, we have effectively redone the emissivity analysis described in Section V B 1 using an emitted energy of 10^51 erg. Recall that the emissivity constraint requires the power loss through HNLs to be less than 10% of that from SM neutrinos. We find that the "Assisting SN Explosion" region is mostly covered by the BBN constraint on HNLs with lifetimes longer than 1 second.

VI. DISCUSSION AND CONCLUSION
In this paper we have considered a variety of phenomenological consequences of a massive Dirac particle that has a dipole portal d to the SM neutrinos and the photon as the main source of production and decay of HNLs. The Dirac nature of the mass of N is dictated by neutrino-mass-generation arguments. Different variants of such models have been proposed in the past as a way of mimicking the excess of neutrino signals observed at LSND and MiniBooNE. We have provided an attempt at a comprehensive analysis of this class of models, assuming the dominance of dipole couplings.
We find that the high-energy probes (LEP and LHC) of HNLs through a dipole portal provide sensitivity to d at the scale of (10 TeV)^−1 and better, mostly through mono-photon-type signatures. In particular, the sensitivity of the LHC experiments extends to TeV-scale m_N. High-intensity beam dump and neutrino experiments ("intensity frontier" experiments) cannot reach such high masses, but are instead able to probe much lower values of the couplings for sub-GeV masses. We find that the inclusion of dipole production of N disfavors a common explanation of the MiniBooNE and LSND anomalies given already existing data. Interestingly, LSND itself provides the most stringent constraints on the dipole coupling at low masses, while the MiniBooNE, MicroBooNE, and SBND detectors provide the leading constraints at slightly higher masses. At the peak sensitivity to the dipole coupling, for m_N ∼ a few 100 MeV, the experiments probe scales of d ∼ (10^−7 − 10^−6) GeV^−1, which is far beyond the weak scale. Future experimental facilities, including SBND and in particular SHiP, will help improve sensitivity to these couplings. For the SHiP main detector, the level of single-photon backgrounds is not currently well understood; while we use our best estimates at this point, detailed simulations will allow a better evaluation of the sensitivity to the dipole portal. Astrophysics, in particular the physics of SN explosions, further restricts the parameter space of the model, probing masses up to the few-hundred-MeV scale and couplings in the d ∼ (10^−7 − 10^−10) GeV^−1 range. The cosmological bounds are somewhat model-dependent, as they are sensitive to the high-temperature regime of the early Universe for which we do not have direct experimental data. In the most likely eventuality of high initial temperatures, the constraints on the lifetime are in the 1 second range and better, disfavoring the low-d, low-m_N corner of the parameter space.
Overall, the HNL coupled through the dipole portal adds to the set of new physics models that can be studied both at high and medium energies, and in astrophysical/cosmological settings. We conclude our paper with a few additional comments: (i) One of the reasons the current model can be studied with such a variety of tools is the fact that the dipole portal we explore is, below the electroweak scale, a dimension-5 portal. It gives cross sections that scale as σ ∝ d². This is similar to the interactions of axion-like particles a (e.g. g_aγγ aFF̃), which are also dimension 5. Indeed, one can observe broad numerical similarities between the sensitivity to g_aγγ and our derived sensitivity to d.
(ii) We have covered only a handful of the existing intensity frontier searches, those we believe to be the most sensitive. It is possible that some other experiments (e.g. CHARM, CCFR, and T2K) may also provide additional constraints on the model. Among newly planned facilities, some will involve unprecedented intensities (DUNE), and it is possible that new levels of sensitivity to d can be reached there as well.
(iii) There are several experimental setups proposed at the LHC to probe long-lived particles, including MATHUSLA [75] and CODEX-b [76], as well as a small detector to probe weakly coupled states in the forward region, FASER [77], which has already considered HNLs (but not their neutrino dipole interactions) [78]. These setups could potentially extend the reach at the energy frontier. However, since the lifetime of the HNLs in our scenario scales as m_N^−3, as seen in Eq. (5), the decay lengths may be too short in the near-GeV mass range to significantly improve on the reach of existing probes.
(iv) We have provided SM gauge-invariant completions of the dipole portal operator. This should not be confused with a proper UV completion (which was briefly discussed in [11,12]). Such a UV completion may also point to a potential tuning issue that can arise in this model. Operators (9) can radiatively induce a significant mass-mixing operator, LNH, which we have assumed to be small and/or absent. It will be important to find out whether tuning-free UV completions of this model exist; this task falls outside the scope of this paper.

We obtain an expression for dσ/dt. Consider first the matrix element for the production of N, which factorizes into a hadronic and a leptonic tensor, i.e.
In terms of a right-handed projection operator, the leptonic tensor and the hadronic current take the forms given above. In the heavy-nucleus limit, squaring Eq. (A3) gives the stated result. The representation of the form factors depends on whether the scattering is coherent or inelastic. In the former case, the neutrino upscatters on the nucleus as a whole and the cross section scales as Z². Since M_H = A M_nucleon and |t| = |q²| = Q² is small, we retain only F_1 in Eq. (A3), which we take to be the Woods-Saxon (WS) form factor. The WS form factor parameterizes the charge density of the nucleus and is obtained by taking its Fourier transform with respect to the momentum exchange q [79,80]. From Eq. (A1), we obtain dσ/dt. The 1/t² prefactor in the lab frame is proportional to 1/(E_N − E_ν)², meaning there is a phase-space enhancement in the forward (t → 0) region. On the other hand, when the scattering is inelastic, the incoming neutrinos scatter off of the individual nucleons. In that case |t| is of moderate size, M_H = M_nucleon, and we retain both form factors. F_1 and F_2 take on different values depending on whether they describe the neutron or the proton; their values are given in [81,82] by solving a system of equations, from which we then obtain the inelastic dσ/dt. In contrast to the coherent-scattering case, the inelastic cross section depends only linearly on Z and A. Furthermore, the values of t for which scattering is inelastic generically avoid the t → 0 enhancement.
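The Woods-Saxon form factor described above can be sketched numerically as the normalized Fourier transform of the charge density ρ(r) = ρ_0/(1 + exp[(r − R)/a]). A hedged sketch: the radius and diffuseness values below are illustrative placeholders, not the fit values used in the analysis.

```python
import math

# Hedged sketch of the Woods-Saxon form factor: the 3D Fourier transform
# of rho(r) = rho0 / (1 + exp((r - R)/a)), normalized so that F(0) = 1.
# R and a below are illustrative, roughly lead-like, placeholder values.

def ws_density(r, R, a):
    return 1.0 / (1.0 + math.exp((r - R) / a))

def ws_form_factor(q, R=6.7, a=0.55, r_max=20.0, n=4000):
    """F(q) ∝ (1/q) ∫ rho(r) r sin(q r) dr, normalized; q in fm^-1, r in fm."""
    h = r_max / n
    num = 0.0  # ∫ rho(r) r sin(q r) dr   (endpoints of the grid vanish)
    den = 0.0  # ∫ rho(r) r^2 dr, which normalizes F(0) to 1
    for i in range(1, n):
        r = i * h
        rho = ws_density(r, R, a)
        num += rho * r * math.sin(q * r)
        den += rho * r * r
    return (num / q) / den
```

At small q the form factor tends to 1 (fully coherent, σ ∝ Z²), and it is strongly suppressed once q exceeds the inverse nuclear radius, cutting off the coherent contribution at large |t|.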

Meson decays
In determining the number of HNLs present at intensity frontier experiments, it is important to consider both Primakoff upscattering and direct decays of mesons into HNLs. The decay in flight of mesons will lead to a distorted spectrum of HNLs that depends on the details of the decay at rest, and the spectrum of incoming mesons. In this appendix we outline how to obtain the spectrum of HNLs given a spectrum of incident mesons.
We denote the rest-frame energy and momentum by ℰ and 𝒫, and the angle relative to the boost vector in the rest frame by φ, while the lab-frame quantities are denoted analogously by E, P, and θ. We first compute the rest-frame differential decay rate as a function of the energy of the HNL, dΓ/dℰ. Normalizing by the overall decay rate of the meson defines the differential branching ratio in the rest frame, dBR/dℰ|_rest = (1/Γ) · dΓ/dℰ|_rest. The most important contribution to HNL production is from pions, and so we quote the result for dΓ/dℰ in the rest frame for the process π⁰ → Nνγ. In our meson calculations, we have set the lepton masses in some of the integration bounds to 0 in order to make the integrals tractable. For most of the meson decay channels, this approximation was found to have a minor effect on the results. For heavy m_N, the π → μNγ channel was found to be underestimated by this approximation, yielding a conservative estimate. Next, for a given rest-frame energy ℰ, the resultant distribution in the lab frame can be found by noting that the decay of a pseudoscalar is isotropic in the rest frame. Consequently, the lab energies are sampled uniformly: the population of an interval of phase space in the lab frame must be the same as that of its corresponding interval in the rest frame. This implies that a delta-function distribution in the rest frame is transformed to a box distribution with a width of (E_+ − E_−) = 2γβ𝒫 in the lab frame.
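The delta-to-box mapping can be verified with a small Monte Carlo: an isotropic, monoenergetic decay in the rest frame boosts to lab energies uniformly distributed on [E_−, E_+]. The masses and velocity below are arbitrary illustrative values.

```python
import math
import random

# Monte Carlo check that an isotropic, fixed-energy decay product in the
# rest frame boosts to a flat "box" of lab energies on [E-, E+], with
# E± = gamma*E_rest ± beta*gamma*P_rest, as described in the text.

def boosted_energies(E_rest, m, beta, n=20000, seed=1):
    """Lab energies of a mass-m particle emitted isotropically at E_rest."""
    random.seed(seed)
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    P_rest = math.sqrt(E_rest**2 - m**2)
    out = []
    for _ in range(n):
        cos_phi = random.uniform(-1.0, 1.0)  # isotropic in the rest frame
        out.append(gamma * (E_rest + beta * P_rest * cos_phi))
    return out

# Example: a 0.3 GeV HNL with E_rest = 0.5 GeV from a meson at beta = 0.9
E_rest, m, beta = 0.5, 0.3, 0.9
gamma = 1.0 / math.sqrt(1.0 - beta**2)
P_rest = math.sqrt(E_rest**2 - m**2)
energies = boosted_energies(E_rest, m, beta)
```

Since the lab energy is linear in cos φ and cos φ is uniform for an isotropic decay, the resulting energy distribution is exactly flat over the box.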
The same argument can be applied to obtain the maximum and minimum rest-frame energies that can be boosted into a given infinitesimal window centered about E. These are given by ℰ_± = γE ± βγP. (A12) Using this information we can construct the spectrum of HNL energies generated by a meson traveling at velocity β in the lab frame, where the factor of 2γβ𝒫 accounts for the normalization of the box distribution discussed above. The quantities ℰ_A and ℰ_B are defined via ℰ_A(E, γ) = min(ℰ_−, ℰ_min) and ℰ_B(E, γ) = max(ℰ_+, ℰ_max), where ℰ_min and ℰ_max are the minimum and maximum energies of the HNL that are kinematically allowed in the rest frame. Notice that the limits of integration on the right-hand side are functions of the lab energy E and the velocity β, or equivalently γ. Finally, we consider a spectrum of parent mesons. In this case a spectrum (e.g. N(γ) = N(E/m_π) in the case of pions) is assumed to be given, and we weight the contribution of each value of β by this spectrum, finally giving the spectrum of HNLs produced from a given flux of mesons. The meson lab-energy spectrum used was adjusted to account for the magnitude of the beam energy and the meson masses under consideration. For SBN, the Sanford-Wang [29,83] distribution was used to model the incident pions, while for kaons and eta mesons the Feynman scaling hypothesis [29] was employed. At SHiP, where the incident proton beam has an energy of 450 GeV, the BMPT [83,84] distribution was used instead for both pions and eta mesons. The use of the Feynman scaling approach was inspired by Ref. [85], which argues that low-energy proton beams and high meson masses exhibit special mass effects that are not well captured by Sanford-Wang. The Feynman scaling approach assumes that d²σ/(dp dΩ) depends only on p_T and x_F = p^COM_∥ / p^COM,max_∥, and is proportional to (1 − |x_F|). Mass effects tend to give stronger weight in the data in the x_F = 0 regime.
This is reflected in the Feynman scaling approach, whereas the Sanford-Wang distribution keeps increasing as x_F crosses over to negative values. At even lower energies, such as at LSND where the proton beam energy is around 0.8 GeV, we employ the Burman-Smith distribution [31,32]. By fitting to datasets spanning a wide range of pion kinetic energies (30−553 MeV), the Burman-Smith distribution attempts to model the pion spectrum down to zero kinetic energy. At LSND, as low-kinetic-energy protons interact with the beam stop, the pions that are produced are slowed down. The negative pions are absorbed in matter, while the positive pions decay. Most of these π⁺ are at rest, while some (2%) decay in flight. For μ⁺ and π⁺ that decay at rest, we take their spectrum to be isotropic. For π⁰ and π⁺ that decay in flight, we use the Burman-Smith distribution.

Perturbative electroweak backgrounds
As another source of background, we consider non-resonant single photons from perturbative electroweak processes. Although it is intuitive that the loop-suppressed SM background from Aν → Aνγ will be low, it is important to quantify by how much, as this process could occur via neutrinos interacting in the walls of the SHiP experiment. Our goal here is to show that this potential source of background is very small and under control. The cross section for γν → γν has been explicitly calculated using effective operators [86,87], and this provides a convenient way to calculate the SM contribution to Aν → Aνγ by way of the equivalent photon approximation (EPA). The EPA treats the nucleus as a static charge distribution which coherently sources a Coulomb field (see Refs. [88,89] for a comprehensive review). As discussed in Refs. [79,90], the full σ_νA cross section can be calculated from the σ_γν cross section, with E_ν the energy of the neutrino in the lab frame. The function P(s, Q²) can be interpreted as the probability of the nucleus sourcing a quasi-real photon with "mass" Q² whose center-of-mass energy with the incident neutrino is s. Typically the EPA exhibits an IR logarithmic enhancement, due to the effective measure ds/s induced by P(s, Q²). This IR enhancement is offset by the steep s dependence of σ_γν(s) [87], which scales as σ_γν(s) ∝ s^2.8 for s → 0. Using the EPA to calculate production in the lead bricks of SHiP for a representative neutrino energy of E_ν = 20 GeV, we find a SM background estimate of σ_bkg per Pb atom = 5.7 × 10^−10 fb = 5.7 × 10^−49 cm².
This is many orders of magnitude lower than the HNL production cross section estimated in the previous section and can safely be ignored. The smallness of this process follows physically from Yang's theorem [86,91,92].

Appendix B: Sensitivity
We wish to briefly outline the general strategy by which all of the projected and real exclusion limits were calculated. The strategy is based on the 2009 PDG review of statistics [93]. We consider a counting experiment in which the experiment has seen n events, whereas b were predicted from the Standard Model and s from new physics. In a Bayesian framework, given a posterior probability and likelihood function, one can set an upper limit at credibility level 1 − α by solving for s_up, the number of signal events consistent with the observation and background prediction at (1 − α) CL. Throughout this paper, we choose 1 − α = 95%. To estimate projected sensitivities, we assume that n = b, namely that the data collected exactly matches the background prediction. For the LHC data, we implement the CL_s method due to the presence of under-fluctuations of the data compared to the background predictions. This consists in defining CL_s and solving for s_up, with α defined in Eq. (B3). This method overcovers in order to avoid excluding signal rates to which we are insensitive, which can happen precisely when the data under-fluctuates. In all these cases, once we have obtained s_up, we can solve for the new physics coupling in the equation s_up = L σ_prod Br(N → γν) ε_cuts A_geom P_dec(L_1, L_2). (B6) In the equation above, L is the luminosity of the experiment. In the case of beam dump experiments, there is often an implicit sum over neutrino energies, and L is obtained by considering the rates and cross sections of CC events in the experiment, as thoroughly described in [79].
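For a flat prior on s, the Bayesian condition above reduces to a one-dimensional integral over the Poisson likelihood that can be solved numerically. This is our own minimal sketch, not the experiments' statistical code; it recovers the textbook s_up ≈ 3.0 for n = b = 0 at 95% CL.

```python
import math

# Sketch of the Bayesian upper limit for a Poisson counting experiment
# with a flat prior on s: find s_up such that
#   ∫_0^{s_up} L(n | s + b) ds = (1 - alpha) * ∫_0^inf L(n | s + b) ds,
# with L(n | mu) = mu^n e^{-mu} / n!, evaluated on a fine grid.

def poisson_like(n, mu):
    return mu**n * math.exp(-mu) / math.factorial(n)

def bayesian_upper_limit(n, b, alpha=0.05, s_max=50.0, steps=200000):
    """Return s_up at credibility 1 - alpha, assuming a flat prior on s."""
    h = s_max / steps
    total = sum(poisson_like(n, b + i * h) for i in range(steps + 1)) * h
    target = (1.0 - alpha) * total
    running = 0.0
    for i in range(steps + 1):
        running += poisson_like(n, b + i * h) * h
        if running >= target:
            return i * h
    return s_max
```

For n = b = 0 the posterior is e^{-s}, so s_up solves 1 − e^{−s_up} = 0.95, i.e. s_up = −ln(0.05) ≈ 3.0, a useful cross-check of the numerics.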
where M_H is the mass of the nucleus or nucleon, depending on the context. For completeness, we also derive a relation that will be convenient for limiting ourselves to values of t for which E_N is 4 times above the photon energy threshold of the experiment, and for ensuring that E_N is sufficiently boosted for production in the line of sight. In addition, we derive cuts that require the photon from N → γν to point in the right direction and be above the energy threshold of the experiment. We enforce this by imposing an efficiency factor ε. To derive a closed form for ε, assume N travels with momentum p_N in the z direction; the width in the lab frame then follows. In the limit E^cut_γ = 0, integrating cos θ between −1 and 1 recovers the full width, as expected. (C6) Requiring E^sol_γ ≥ E^cut_γ is equivalent to imposing a bound on cos θ. Putting everything together, in terms of x = cos θ and the velocity β, we arrive at the expression for ε. We calculate x_cut as x_cut = Max[cos θ°_cut, cos θ^γ_cut], (C9) where we have included a cut on the angle of the emitted photon with respect to the direction of N. We will typically choose θ°_cut = π/4, since we want photons emitted in a cone centered along the initial direction of N.