Statistical Analyses of Higgs- and Z-Portal Dark Matter Models

We perform frequentist and Bayesian statistical analyses of Higgs- and Z-portal models of dark matter particles with spin 0, 1/2 and 1. Our analyses incorporate data from direct detection and indirect detection experiments, as well as LHC searches for monojet and monophoton events, and we also analyze the potential impacts of future direct detection experiments. We find acceptable regions of the parameter spaces for Higgs-portal models with real scalar, neutral vector, Majorana or Dirac fermion dark matter particles, and Z-portal models with Majorana or Dirac fermion dark matter particles. In many of these cases, there are interesting prospects for discovering dark matter particles in Higgs or Z decays, as well as dark matter particles weighing $\gtrsim 100$ GeV. Negative results from planned direct detection experiments would still allow acceptable regions for Higgs- and Z-portal models with Majorana or Dirac fermion dark matter particles.

portal in [19–22], for Higgs portals with DM of spin 0, 1/2 and 1 in [23,24], and for Higgs and Z portals with DM of spin 0, 1/2 and 1 in [17]. The null results of these many searches have led to the suspicion that the plausibility of simple WIMP Higgs- and Z-portal models has been severely damaged (see e.g., Refs. [18,25,26]). However, neither this nor the impact of planned DD experiments such as LZ [27] and XENONnT [28] has yet been quantified by a dedicated statistical analysis.
We address this point in this work by performing statistical analyses of the spin-0, 1/2 and 1 SM-portal models described in Sec. 2. We use frequentist and Bayesian methodologies, for the first time jointly assessing the plausibility of these models. Our statistical framework is recapitulated in Appendix A, and we add a novel determination of the number of constrained degrees of freedom at the end of Sec. 5. Our work includes data from DD, ID and collider experiments, and we also analyze the potential impacts of future DD experiments, using the pseudodata described in Sec. 3. Our priors for the model parameters are presented in Sec. 4. Finally, we present our findings on the viability of SM-portal models in Sec. 5 and conclude in Sec. 6.
Unlike previous analyses (see e.g., Refs. [17,18,25,26]), which were ambivalent about the status of portal models, we find both Higgs- and Z-portal models that are entirely consistent with the available experimental constraints. These include Higgs-portal models with real scalar, real vector, Majorana or Dirac fermion dark matter particles, as well as Z-portal models with Majorana or Dirac fermion dark matter particles. Many of the viable models feature relatively light dark matter particles that would contribute to invisible Higgs or Z decays, and there are also interesting possibilities for heavier dark matter particles that weigh $\gtrsim 100$ GeV. The planned direct dark matter search experiments would be sensitive to some of these dark matter models, but Higgs- and Z-portal models with Majorana or Dirac fermion dark matter particles could still escape detection. We ascribe the difference in emphasis between our paper and other analyses to the fact that we calculate statistical measures of the goodness-of-fit and relative plausibility of models; some previous analyses drew conclusions on the status of portal models without a clear statistical methodology. Our results open the way towards constructing viable DM models that are not significantly damaged by the present data. As with other simplified models, the models we discuss do not attempt to capture details and potential issues within any specific UV-complete model. Although we discuss several viable UV completions in Section 2.1, in this simplified-model approach we do not include theoretical or experimental constraints upon the parameter spaces of any particular UV completions. The merit of this approach, which has proven extremely popular in DM (see e.g., Refs. [17,19–24,29–33]) and LHC studies (see e.g., Refs. [34–41]), is that we may study the phenomenology of a comprehensive set of SM-portal models capturing common features.

Models
We consider simplified WIMP models in which spin-0, spin-1/2 or spin-1 DM particles are stabilized by a $Z_2$ symmetry (see e.g., Ref. [13]) and couple to the SM Higgs or Z boson. This portal interaction with the SM has a number of phenomenological signatures. Through such portals, DM can scatter with quarks via t-channel exchanges, producing signatures that are in principle observable at DD experiments. Portal couplings are also responsible for the annihilations of DM particles into SM particles, which control the relic abundance of DM via the freeze-out mechanism. The same DM annihilations could, furthermore, still be active in regions of space characterized by a high DM density, such as the galactic center, dwarf spheroidal galaxies (dSphs) and possibly the Sun. Primary or secondary traces of annihilation could then be detected by ID experiments. In order to explore the vast phenomenology of these simplified models, we implemented them in FeynRules-2.3.27 [42,43], linked with micrOMEGAs-4.3.5 [44,45] via calcHEP [46].

The Higgs Portal
We consider models of spin-0, 1/2 and 1 DM particles coupling to the SM via a Higgs portal (see e.g., Refs. [17,18] for similar Higgs-portal models), assuming that the DM is stabilized by a $Z_2$ or U(1) symmetry (see e.g., Ref. [18]). Furthermore, in order to accommodate interactions between the Higgs and pairs of photons or gluons via loops, we include effective operators of the form $h F_{\mu\nu} F^{\mu\nu}$ that result from the top-quark and W-boson loops in the SM.

Dirac/Majorana fermion DM particles interact with the SM via
where $g_s$ and $g_p$ are scalar and pseudoscalar couplings, respectively, and c = 1 (1/2) for the Dirac (Majorana) case. We note that such interactions could originate from mixing after electroweak symmetry breaking (EWSB) between an SM singlet, S, and the Higgs in a gauge-invariant renormalizable model, as discussed in [47], where α is the mixing angle between the Higgs and the singlet. We assume that the singlet is so heavy that it cannot impact the phenomenology. Unitarity would impose a requirement [48] relating the mixing to $m_h \simeq 125$ GeV and the mass $m_S$ of the singlet.
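With the couplings and the factor c defined above, a conventional form for such a fermionic Higgs-portal interaction is, schematically (a sketch of the standard normalization; signs and factors of i vary between conventions):

```latex
\mathcal{L} \supset - c\, h\, \bar{\chi} \left( g_s + i g_p \gamma_5 \right) \chi ,
\qquad c = 1\ (\text{Dirac}),\ \tfrac{1}{2}\ (\text{Majorana}).
```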
Vector DM interacts with the SM via a Higgs-portal coupling, where c = 1/2 (1) for a real (complex) vector and v = 246 GeV is the vacuum expectation value of the SM Higgs doublet. The vector DM may acquire a mass via the Stueckelberg mechanism (see e.g., Ref. [51]). This interaction could arise by charging the SM Higgs under a new U(1) gauge group, which would risk unacceptable mixing between the DM and the SM Z boson [52], or by an alternative mechanism (see e.g., Refs. [53–55]), e.g., mixing between the SM Higgs and an SM singlet scalar that is charged under the U(1).
All of these Higgs portal models allow the possibility that there are no phenomenological signatures of the UV completions in current experiments, as we assume here.

The Z-Boson Portal
We also consider all possible spin 0, 1/2 and 1 DM particles annihilating via a Z portal.

Dirac fermion DM interacts with the SM via
where $g_v$ and $g_a$ are vector and axial couplings, respectively. This operator could originate after EWSB by coupling the fermion to the current [56,57] via a non-renormalizable operator, from kinetic mixing between the SM Z boson and a $Z'$ boson [58], or if the DM has an SU(2) × U(1) charge [59]. The non-renormalizable operator could, however, generate DM annihilation into hh and Zh. Assuming that it originates from a non-renormalizable operator, the interaction in Eq. (6) is unitary for couplings and masses of phenomenological interest [58].
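With the couplings just defined, a conventional schematic form for this Z-portal operator is (a sketch; the overall sign and normalization depend on conventions):

```latex
\mathcal{L} \supset \bar{\chi} \gamma^\mu \left( g_v + g_a \gamma_5 \right) \chi \, Z_\mu .
```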

Majorana fermion DM interacts with the SM via
where $g_a$ is an effective gauge coupling; the vector coupling is forbidden for Majorana fermions, as the corresponding operator is odd under charge conjugation. These operators may originate from kinetic mixing between the SM Z boson and a $Z'$ boson [58], or via higher-order terms.

Vector DM interacts with the SM via a similar Z-portal interaction, where g is an effective gauge coupling (see e.g., Ref. [18]). The higher-order operators that can induce these interactions are rather involved; we refer the reader to Refs. [60–63] for possible realizations.
As in the Higgs-portal case, we assume here that the vector DM acquires a mass via the Stueckelberg mechanism.
We note that the couplings of the Z boson to particles in specific representations of the SM gauge group must take discrete values in terms of the SU(2) and U(1) gauge couplings. However, more general values are allowed in the presence of mixing between the DM and SM particles. In the interest of generality, we allow for this possibility without entering into specific mixing scenarios. We note also that in the spin 1/2 and 1 cases there are unitarity issues (anomaly cancellation and rapid increase in scattering amplitudes at high energies, respectively) that need to be addressed in the UV-completion (see e.g., Ref. [56]), whose possible experimental signatures we do not discuss here.

Likelihood
The likelihood function is a critical ingredient for our frequentist and Bayesian statistical methodologies, which are described in Appendix A. Our likelihood, summarized in Table 7, is similar to that of Ref. [18]; we, however, incorporate important nuclear and astrophysical uncertainties.

Relic density
We assume that the DM candidate accounts for all of the observed DM, i.e., it is not an underabundant species of DM. We calculate the relic abundance of DM with micrOMEGAs-4.3.5 [44,45]. We describe the Planck determination of the relic density [1] by a Gaussian, and include in quadrature a supplementary 10% theoretical uncertainty in the calculation of the relic abundance. Recent analyses suggest that the standard treatment in micrOMEGAs-4.3.5 [44,45] may be insufficient, as it neglects the s-dependence of the mediator width in a Breit-Wigner and the effect of kinetic decoupling [64,65], but we leave any improvement to future work.
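This construction can be sketched as follows (a toy illustration, not the paper's code; the central value and uncertainties below are illustrative stand-ins for the Planck determination):

```python
import math

def relic_likelihood(omega_h2_pred,
                     omega_h2_obs=0.1188,  # illustrative Planck-like central value
                     sigma_obs=0.0010,     # illustrative experimental uncertainty
                     theory_frac=0.10):    # 10% theory uncertainty on the prediction
    """Gaussian likelihood with experimental and theory errors added in quadrature."""
    sigma = math.hypot(sigma_obs, theory_frac * omega_h2_pred)
    return math.exp(-0.5 * ((omega_h2_pred - omega_h2_obs) / sigma) ** 2)
```

A prediction equal to the observed value is maximally likely; the 10% theory error dominates the combined uncertainty here, so the constraint is considerably looser than the experimental error alone would suggest.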

Invisible decay widths
The invisible width of the Z boson was measured by LEP [66]. We calculate the SM contribution, due to Γ(Z → νν̄), with a parametric two-loop formula from Ref. [67], which includes the dependence on $M_Z$, which we treat as a nuisance parameter. We calculate Γ(Z → DM) with micrOMEGAs-4.3.5 [44,45] and implement the measurement as a Gaussian constraint upon Γ(Z → νν̄) + Γ(Z → DM). The branching fraction of an SM-like Higgs to invisible particles was constrained by CMS [68] to be BR(h → invisible) < 0.24. Since the likelihood function has been published, we apply it directly in our analysis. We neglect any correlations between measurements of the SM masses and widths, and we neglect BR(h → νν̄), which is absent in the SM.

Monojet and monophoton searches
Monojet and monophoton searches look for missing transverse momentum resulting from DM particles that are produced in pairs and escape the detector. The DM particles are not produced back-to-back in the laboratory frame when they recoil from the emission of a jet or photon. We include √s = 8 TeV and 13 TeV monojet and monophoton searches for DM at the LHC via CheckMATE-2 [69–74]. The searches that we included are listed in Table 1, and the limits we use are shown in Fig. 1. In the models under study, production cross sections at √s = 8 TeV and 13 TeV are comparable, and the greater integrated luminosity in the former makes the 8 TeV searches more powerful.¹ Since we add to the SM only a DM particle, there are no cascade decays of dark-sector particles resulting in e.g., jets or leptons.
We generated events from our models with MadGraph5_aMC@NLO [76] and processed the resulting LHE event files with CheckMATE-2. We derived our own 95% exclusion contours on planes of mass versus coupling for each SM-portal model for the ranges of mass and coupling in Table 8. We approximated the likelihood function by a step function, assigning to each parameter point a likelihood of zero if it was excluded at 95% and one otherwise.

¹ We did not include a recent ATLAS monojet search at √s = 13 TeV with 36 fb⁻¹.
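The resulting step-function likelihood can be sketched as follows (a toy illustration; the contour grid and limit values below are invented, standing in for a tabulated 95% exclusion contour):

```python
import bisect

# Hypothetical 95% CL exclusion contour: couplings ABOVE g_excl(m) are excluded.
masses = [10.0, 50.0, 100.0, 500.0, 1000.0]  # GeV (invented grid)
g_excl = [0.05, 0.10, 0.30, 1.0, 4.0]        # invented upper limits on the coupling

def excluded_coupling(m):
    """Linear interpolation of the exclusion contour in mass."""
    if m <= masses[0]:
        return g_excl[0]
    if m >= masses[-1]:
        return g_excl[-1]
    i = bisect.bisect_right(masses, m)
    f = (m - masses[i - 1]) / (masses[i] - masses[i - 1])
    return g_excl[i - 1] + f * (g_excl[i] - g_excl[i - 1])

def monojet_likelihood(m, g):
    """Step-function likelihood: 0 if excluded at 95%, 1 otherwise."""
    return 0.0 if g > excluded_coupling(m) else 1.0
```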

Direct detection
We include the world-leading SI DD constraints from PandaX [4] and world-leading SD DD constraints for neutrons from PICO [2] and for protons from PandaX [77]. We have calculated scattering cross sections with micrOMEGAs-4.3.5 [44,45] while noting, however, that there are appreciable uncertainties that we now discuss and have considered in our analysis.
The WIMP interactions with partons can be described by an effective field theory. For non-relativistic velocities, scalar, vector, pseudovector and tensor operators dominate, as other operators are velocity-suppressed. The scalar, vector, pseudovector and tensor operators contribute to SI even, SI odd, SD even and SD odd interactions, respectively [78]. The WIMP interactions with nucleons, which are relevant for DD, are governed by the partonic interactions and nuclear form factors. The vector, pseudovector and tensor form factors are well known, the latter two from lattice calculations and the vector form factor because it depends only upon the valence quarks. We set them to their micrOMEGAs-4.3.5 [44,45] defaults.
However, there are uncertainties in the scalar form factors, which depend upon the pion-nucleon sigma term, σ_πN, and the strange-quark content of the nucleon, σ_s. As an alternative to this treatment, we perform extra calculations in which we discard lattice results and use only the phenomenological determination. We pick a Gaussian for σ_s from a lattice determination [84]² and flat priors between intervals reported by the PDG [66] for the ratios of light-quark masses. Even though a recent analysis [85] indicates that uncertainties in the Higgs-nucleon coupling are likely to be overestimated, for our analysis we choose a conservative approach in our treatment of the DD data.
In addition to nuclear uncertainties, there are astrophysical uncertainties that affect DD, such as the velocity profile and local density of DM. We include a log-normal uncertainty upon a canonical choice of the local density, ρ = 0.3 GeV/cm³. Since we assume that the DM accounts for the entire relic abundance, we do not rescale DD cross sections by e.g., Ωh²/0.1. For all but the real scalar Higgs portal, we neglect velocity-profile uncertainties. For that model, though, we vary the shape parameters of a truncated Maxwell-Boltzmann velocity distribution describing the velocity profile.
In our implementation of the DD likelihood, we picked a step function at the 90% limits for scattering cross section from PandaX and PICO for a particular DM mass and marginalized uncertainties in the local DM density, nuclear form factors and ratios of light quark masses. As a crosscheck, in specific cases, we used the likelihood functions implemented at the event level in DDCalc-1.1.0 [86,87] and marginalized the local DM density and velocity distribution.
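The effect of marginalizing the local-density uncertainty can be sketched with a toy Monte Carlo (the log-normal width below is an assumption for illustration): averaging a step-function limit over a log-normal local density smears it into a smooth, error-function-like curve.

```python
import math
import random

random.seed(1)

RHO0 = 0.3      # GeV/cm^3, canonical local DM density
SIGMA_LN = 0.5  # illustrative log-normal width (assumption)

def dd_likelihood_marginalized(sigma_pred, sigma_limit_at_rho0, n=20000):
    """Average a step-function 90% limit over a log-normal local density.

    The signal scales linearly with rho, so the effective limit on the
    cross section tightens in proportion to rho / RHO0."""
    passed = 0
    for _ in range(n):
        rho = RHO0 * math.exp(random.gauss(0.0, SIGMA_LN))
        if sigma_pred * (rho / RHO0) <= sigma_limit_at_rho0:
            passed += 1
    return passed / n
```

Exactly at the nominal limit, roughly half of the density draws exclude the point, so the marginalized likelihood is close to 0.5 rather than a hard 0 or 1.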
To the best of our knowledge, since we investigated the DM velocity profile, this constitutes the most comprehensive treatment of DD uncertainties to date. In, for example, a recent GAMBIT analysis of DD in the real scalar h-portal [21], only the light-quark masses, the local density and form-factor uncertainties were treated with nuisance parameters. Note, moreover, that our prior of σ_πN ≈ 27-58 MeV differs from that in GAMBIT of 58 ± 9 MeV, reflecting the tension between the lattice and phenomenological determinations.

² An alternative approach estimates σ_s using σ_πN and phenomenological estimates of the SU(3)-breaking contribution to baryon masses, σ_0. However, this is subject to considerable uncertainties in σ_0 [81] as well as σ_πN.

Indirect detection
We consider the Fermi-LAT limit from several dwarf spheroidal galaxies (dSphs) [88] on the zero-velocity limit of the DM annihilation cross section. As in the case of DD cross sections, we do not rescale ID cross sections by e.g., Ωh²/0.1, as we assume that the DM accounts for the entire relic abundance. We approximate the likelihood function by step functions at the 95% exclusion contours on the $(m, \langle\sigma v\rangle)$ planes for the $u\bar u$, $b\bar b$, $W^+W^-$, $e\bar e$, $\mu\bar\mu$ and $\tau\bar\tau$ channels. The astrophysical uncertainties for ID are characterized by uncertainties in the J-factors for each dSph. Since we consider Fermi-LAT's combined limit from several dSphs, we make an approximation: we model uncertainties in the limit by a universal J-factor uncertainty that scales the whole signal, picking an uncertainty in log₁₀ J of 0.25 motivated by typical J-factor uncertainties for individual galaxies. Of the 15 dSphs included in the Fermi-LAT limit, the greatest log₁₀ J uncertainty was 0.31 and the smallest was 0.16. Since a universal J-factor implies that the uncertainties are 100% correlated, we anticipate that our treatment is conservative (i.e., overestimates the uncertainties). We could, in principle, perform a detailed treatment of the Fermi-LAT data with 15 individual J-factors. For the portal models under consideration, however, the ID cross sections are typically smaller than the limits from Fermi-LAT and we argue that our simplified treatment is adequate.
We do not include constraints from solar neutrinos from IceCube [89]. IceCube may compete with PICO limits on the spin-dependent scattering cross section, though this interpretation requires assumptions about DM spin-independent interactions in the Sun, the square of the local density, whether DM reaches equilibrium in the Sun, and the rates at which DM annihilates to specific final states. We anticipate that a careful treatment of uncertainties, as discussed in Ref. [90], would render the IceCube limit weaker than or similar to that from PICO in our models.

Future data
In assessing the potential impacts of future experiments, we utilize projected 90% exclusions from LZ [91], XENONnT [28] and PICO500 [92], and an estimate of the neutrino floor: the level of scattering cross section at which a WIMP signal becomes hard to distinguish from the neutrino background [93]. The neutrino floors assume Xe targets, except for the SD interaction with protons, which assumes a C₃F₈ target, as in the PICO experiment. The neutrino floor can be very slowly lowered with greater exposure (or overcome with e.g., directional detection, modulation or complementarity between different target nuclei); see Ref. [93] for details of the assumed exposures. The current limits, projected limits and neutrino floors are shown in Fig. 2.
The projected limits are expected limits assuming the background-only hypothesis, i.e., no DM. The observed limit would, however, be subject to statistical fluctuations. We do not investigate the impact of such fluctuations, and assume that evidences calculated with projected limits are similar to ones for which the fluctuations are averaged. We anticipate that this is reasonable to within the desired uncertainty of about an order of magnitude in the evidence.

Priors
The priors for the model parameters used in our Bayesian analysis are shown in Table 8. In addition to the DM mass and couplings, there are numerous nuisance parameters describing uncertainty about, e.g., the local density of DM, as discussed in Sec. 3. Our priors for the DM mass and couplings are quite relaxed; we permit masses between 1 GeV and $10^4$ GeV and couplings between $10^{-6}$ and 4π. Since we are a priori ignorant of their scale, we pick logarithmic priors for the mass and couplings, e.g., p(ln m | M ) = const.
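A logarithmic prior of this kind can be sampled by drawing uniformly in ln x (a minimal sketch using the ranges quoted above):

```python
import math
import random

random.seed(0)

M_MIN, M_MAX = 1.0, 1.0e4            # GeV, the mass range quoted above
G_MIN, G_MAX = 1.0e-6, 4.0 * math.pi  # the coupling range quoted above

def sample_log_uniform(lo, hi):
    """Draw from p(ln x) = const on [lo, hi]."""
    u = random.random()
    return lo * (hi / lo) ** u

m = sample_log_uniform(M_MIN, M_MAX)
g = sample_log_uniform(G_MIN, G_MAX)
```

Every decade in the allowed range receives equal prior mass, in contrast to a linear prior, which concentrates mass in the topmost decade.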
For models with two couplings, we pick factorizable priors, e.g., $p(g_s, g_p \mid M) = p(g_s \mid M)\, p(g_p \mid M)$, reflecting the assumption that the couplings are determined by different mechanisms and thus independent. We investigate the sensitivity to priors in Sec. 5.3.
We used 5000 live points and a stopping tolerance of $10^{-4}$, and selected importance-sampling determinations of the evidences and posteriors.
We show Bayes factors and minimum $\chi^2$ values for our SM-portal models in Table 2. We see in the lower panels of Fig. 4 that the same is true for the confidence regions for Higgs-portal models with (c) real and (d) complex vector DM. In these models there are also two confidence regions, corresponding to on- and off-shell Higgs couplings to the DM particles. In the Higgs-portal models with fermionic DM, there are two couplings to the Higgs: scalar $g_s$ (left panels) and pseudoscalar $g_p$ (right panels). There are extended regions of the scalar couplings with $\log_{10} g_s < -1.5$ that are favoured at the 1-σ level, whereas there are only narrow confidence bands with $\log_{10} g_p > -1.5$ in the pseudoscalar cases. The DM may annihilate through a combination of scalar and pseudoscalar couplings. The scalar coupling is forced to be small by DD constraints, whereas the pseudoscalar coupling is not affected by DD constraints, as its contributions to the scattering cross section are momentum-suppressed. Since the scalar coupling must be small, the pseudoscalar coupling must set the correct relic density, and so is much more restricted. In these models the only confidence regions correspond to off-shell Higgs couplings to DM particles weighing $\gtrsim 100$ GeV. The Bayes factors, minimum $\chi^2$ and p-values for these Higgs-portal models are all quite comparable, as seen in Table 2, with insignificant evidence against any of them.
On the other hand, the same Table shows that there is decisive evidence against the scalar and vector Z-portal DM models, and we do not display the mass/coupling planes for these models. We do, however, display the corresponding planes for Z-portal fermionic DM models in Fig. 6. The upper panels are for Dirac fermion DM, and the lower panel is for Majorana fermion DM. In the former case we consider both vector (upper left) and axial (upper right) DM-Z couplings, whereas in the Majorana case only an axial DM-Z coupling is allowed. In all three cases, only the off-shell Z option with a DM particle mass $\gtrsim 200$ GeV is credible. As seen in Table 2, the Majorana Z-portal DM model currently has the largest Bayes factor, and also shares with the Dirac Z-portal case the lowest $\chi^2$ minimum and the highest p-value.
In Table 3 we show the corresponding results when possible future data are included. We compare the findings from the Bayes factors with the corresponding frequentist analysis in Table 4.
The $\chi^2$ values for the scalar and vector Z portals increase substantially when possible future data are considered. The statuses of the remaining models barely change, however, though the vector Higgs portal might be ruled out by SI limits at the neutrino floor. Even for our simple SM-portal models, we can fine-tune the mass and coupling to evade possible powerful constraints upon the DD cross section from future experiments. Fine-tuned masses and couplings can enhance DM annihilation by an s-channel resonance; elastic scattering on nucleons, though, proceeds via t-channel exchange and is not enhanced.

DD uncertainties
We find that the impact of DD uncertainties on the evidences is small. The uncertainties smear a limit in the scattering cross section, as in Fig. 8. Whilst this smearing affects the posterior density, the evidence, which is an average likelihood, is relatively stable with respect to hadronic, velocity-profile and density uncertainties. In Table 5, we show evidences for the real scalar Higgs-portal model with DD, ID, relic abundance and collider data, computed with our native implementation of a likelihood for DD and with that from DDCalc-1.1.0 [86,87], with different uncertainties included. DDCalc-1.1.0 [86,87] includes XENON1T data, which is fractionally weaker than PandaX for larger DM masses.
Uncertainties that change Bayes factors by less than about a factor of 5 are typically considered irrelevant, since they are overwhelmed by changes in Bayes factors induced by changes of priors. The changes from treatments of DD uncertainties in Table 5 are of order unity, and hence largely irrelevant.
The biggest impact is that from including the uncertainty in the local density in our calculation of the likelihood, which smears a step function into a Gaussian error function, as shown in Fig. 8. Of the two implementations, only DDCalc-1.1.0 [86,87] allows velocity-profile uncertainties (denoted Vel). In Table 5 we denote the uncertainty in the local density of DM by ∆ρ and our alternative treatment of hadronic uncertainties (Had) by Alt.

Treatment of resonances
To ensure that narrow resonances are adequately sampled, for the real scalar Higgs-portal model we investigated our sampling settings. We varied the number of live points, and compared multimodal and importance-nested sampling. Furthermore, we checked that we obtained similar results with an alternative technique in which we traded a coupling for the relic density, g → Ω. We performed the resulting integral by sampling the relic density from a Gaussian likelihood, L(Ω), which played the role of a prior, solving for the coupling as a function of the sampled relic density, g_Ω, by Brent's method, and calculating the derivative, which played the role of a likelihood. We found no significant changes in our evidences. To calculate confidence regions, we furthermore performed dedicated scans around the resonance to ensure that it was adequately sampled.
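The change of variables behind this crosscheck can be verified in a one-dimensional toy model (a sketch with invented numbers: a log-uniform prior on the coupling, a toy relic abundance Ω(g) = A/g², and a Gaussian relic-density likelihood); both parametrizations should give the same evidence.

```python
import math
import random

random.seed(2)

# Toy setup (all numbers invented).
A = 0.12                 # so that Omega(g) = A / g**2
MU, SIG = 0.12, 0.012    # Gaussian relic-density likelihood
LN_LO, LN_HI = math.log(1e-2), math.log(1e1)  # log-uniform prior on g
WIDTH = LN_HI - LN_LO

def like_omega(om):
    """Gaussian relic-density likelihood (unnormalized, peak = 1)."""
    return math.exp(-0.5 * ((om - MU) / SIG) ** 2)

# (1) Direct evidence: average the likelihood over the prior p(ln g) = 1/WIDTH.
n = 100000
Z_direct = sum(
    like_omega(A / math.exp(LN_LO + (i + 0.5) * WIDTH / n) ** 2)
    for i in range(n)
) / n

# (2) Reparametrized evidence: sample Omega from the normalized Gaussian,
# invert g_Omega = sqrt(A / Omega) analytically (standing in for Brent's
# method), and weight by the prior density times the Jacobian
# |d ln g / d Omega| = 1 / (2 Omega).
m = 50000
norm = SIG * math.sqrt(2.0 * math.pi)  # normalization of the Gaussian "prior"
total = 0.0
for _ in range(m):
    om = random.gauss(MU, SIG)
    if om <= 0.0:
        continue
    ln_g = 0.5 * (math.log(A) - math.log(om))
    if LN_LO <= ln_g <= LN_HI:
        total += norm * (1.0 / WIDTH) * (1.0 / (2.0 * om))
Z_reparam = total / m
```

The two estimates agree to within Monte Carlo error, illustrating that sampling the relic density and weighting by the Jacobian reproduces the evidence obtained by scanning the coupling directly.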

Prior sensitivity
In our Bayesian analysis, we find that with linear priors for the DM mass and coupling the evidence is about 10 times greater than that with logarithmic priors for the real scalar Higgs portal with current data and with current data except direct detection experiments. This can be ascribed to the fact that linear priors favor larger values for the DM mass, where experimental constraints from direct searches are weaker.
A factor of ten would not alter qualitative conclusions drawn from our Bayes factors in Table 3, as Bayes factors are typically interpreted on a logarithmic scale.
For the Majorana Z-boson portal, we find evidences about 100 times smaller with linear priors than with logarithmic priors with current data and also when the direct detection experiments are omitted.
This arises because a small coupling, as required by this model, is disfavoured by a linear prior. Factors of 100 may alter qualitative conclusions drawn from the Bayes factors in Table 2. This is not surprising: with qualitatively different prior beliefs about the scales of the couplings and mass of DM, one may reach qualitatively different conclusions about the relative plausibility of SM-portal models. However, our default is to present evidences from logarithmic priors, since we are agnostic about the order of magnitude of the DM mass and couplings, whereas linear priors favour the greatest permitted orders of magnitude.
The partial Bayes factors in Table 3 (which involve ratios of the aforementioned evidences) are rather insensitive to the choice of logarithmic or linear priors, suggesting that conclusions drawn from them are somewhat robust. The frequentist χ 2 results, which we compare with the Bayesian analysis, are of course insensitive to our choice of prior.

Number of constrained parameters
Following Refs. [100–102], we consider the effective number of constrained parameters in each model, $n_\text{eff} = \langle \chi^2 \rangle - \chi^2_\text{min}$, where $\langle \cdot \rangle$ denotes a posterior mean. This heuristic for the number of constrained parameters originates from considering the Kullback-Leibler divergence between the prior and posterior probability density functions.
To see that this is a heuristic for the number of constrained parameters, imagine an n-dimensional linear model in which the likelihood is a product of Gaussians with widths $\sigma_i$ for each of the n parameters and the priors are uniform with widths $w_i$. If the parameters are constrained, i.e., $\sigma_i \ll w_i$, the posterior is approximately Gaussian. Thus $\langle \chi^2 \rangle$ is the mean of a $\chi^2$ distribution with n degrees of freedom, i.e., $\langle \chi^2 \rangle = n$. The minimum $\chi^2 = 0$; thus $n_\text{eff} = n$.
If, on the other hand, m parameters are unconstrained, i.e., $\sigma_i \gg w_i$, their posterior is approximately uniform. Thus their contributions to $\langle \chi^2 \rangle$ are averaged upon a uniform distribution and approximately vanish, such that $\langle \chi^2 \rangle \approx n - m$ and we find $n_\text{eff} \approx n - m$. This example demonstrates that whether a parameter should be considered constrained depends upon our prior.
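This behaviour can be checked numerically in the toy linear model (a sketch: two constrained directions with unit-width Gaussian posteriors, plus an unconstrained direction whose flat likelihood contributes nothing):

```python
import random

random.seed(3)

# Toy model: chi^2 = x1^2 + x2^2 for two constrained parameters whose
# posteriors are unit-width Gaussians; a third, unconstrained parameter
# has a likelihood that is flat over its prior and adds nothing to chi^2.
N = 100000
mean_chi2 = 0.0
for _ in range(N):
    x1 = random.gauss(0.0, 1.0)  # posterior draw for a constrained parameter
    x2 = random.gauss(0.0, 1.0)
    mean_chi2 += x1 * x1 + x2 * x2
mean_chi2 /= N

chi2_min = 0.0                 # the best fit sits at the origin
n_eff = mean_chi2 - chi2_min   # posterior-mean chi^2 minus its minimum
```

The result comes out close to 2: only the two constrained parameters count towards the effective number.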
The effective number of parameters for each model is shown in Table 6. We find that the effective numbers of parameters range from about 1 to 3, as expected. Ref. [101] argues that when evidences are similar, we should discriminate between models by their numbers of constrained parameters, favouring models with fewer. While not necessarily endorsing this point of view, we note that the Higgs-portal models have fewer constrained parameters than the Z-portal models.

Conclusions
We have presented Bayesian and frequentist appraisals of models with DM of spin 0, 1/2 or 1 that interacts with SM particles and annihilates via interactions with the SM Higgs or Z boson, in light of constraints from Planck, DD, ID and LHC experiments. We have also considered the possible impacts of null results from future DD searches at LZ and PICO. The Bayesian and frequentist analyses yield similar conclusions,
and our results are relatively insensitive to the uncertainties in the DD scattering matrix elements, and to the choice of Bayesian priors.
We find that all the Higgs-portal models studied are compatible with the available data, and that they offer prospects for on-shell decays of the Higgs boson into pairs of DM particles, as well as allowing for the possibility of heavier DM particles that can be produced only via off-shell Higgs bosons. In the case of Z-portal models, we find that the available data already provide decisive evidence against the spin-0 and spin-1 cases, though they do allow both the Majorana and Dirac spin-1/2 fermion options. However, in these cases only off-shell Z interactions are allowed. Null results from future DD experiments would provide substantial evidence against the scalar and vector Higgs-portal models, but the fermionic Higgs- and Z-portal models would still be viable. Null results of DD experiments down to neutrino 'floor' levels would provide decisive evidence against the scalar and vector Higgs-portal models, and start to provide substantial evidence against the Dirac Z-portal model.
We argue that our statistical analyses and our investigation of uncertainties in DD may be the most comprehensive to date. It is for this reason that our results differ in emphasis from some of the previous literature. The underlying Lagrangian models we study are similar to those used in previous papers; we use a similar set of phenomenological constraints, and our implementations are also similar. However, we consider the entire parameter spaces, rather than slices, and our combination of statistical approaches enables quantitative and precise characterizations of the allowed parameter spaces for all the models we study, permitting more complete statements about the regions of parameter space in which they may survive, as well as how they could be probed in the future.

Acknowledgements
The

A Statistics
We analyze SM-portal models with Bayesian and frequentist statistics. Within each framework, we require two distinct methods: a method to estimate the parameters in a DM model, e.g., the DM mass, and a method to judge the viability of the DM model. An ingredient in all calculations is the likelihood: the probability of obtaining experimental data D given a parameter point x in a model M, i.e., $\mathcal{L}(x) = p(D \mid x, M)$. The likelihood in our analysis is a product of statistically independent likelihoods from measurements of the DM abundance and from DD, ID and LHC experiments. The individual likelihoods are described in Sec. 3.
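Schematically (a sketch with stand-in component values), the composite likelihood is a product, or equivalently a sum of log-likelihoods:

```python
import math

def total_log_likelihood(component_likelihoods):
    """Combine statistically independent likelihoods:
    L = L_relic * L_DD * L_ID * L_LHC  =>  ln L = sum of the ln L_i."""
    return sum(math.log(L) for L in component_likelihoods)

# e.g. with stand-in component values for relic density, DD, ID and LHC:
lnL = total_log_likelihood([0.8, 1.0, 0.5, 1.0])
```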
For parameter inference with frequentist statistics, we calculate confidence regions by Wilks' theorem.
For a two-dimensional confidence region for parameters a and b in a model with parameters x = {a, b, c}, by Wilks' theorem we consider $\Delta\chi^2(a, b) = \chi^2(a, b, \hat{c}) - \chi^2(\hat{x})$, where $\hat{x}$ are the best-fitting parameters and $\hat{c}$ is the best-fit c for a particular a and b. The, e.g., 95% confidence interval is the region in (a, b) in which $\Delta\chi^2(a, b) \le 5.99$. For judging model viability with frequentist statistics, we construct a test statistic from the minimum $\chi^2$. Unfortunately, as the distribution of this quantity is unknown, it is not possible to calculate precisely a p-value, i.e., the probability of obtaining a test statistic so extreme were the null hypothesis true. See, e.g., the discussion of the difficulties in estimating the distribution in a similar analysis in Ref. [21].
For illustration, though, we calculate p-values assuming a $\chi^2$ distribution with two degrees of freedom, $\min \chi^2 \sim \chi^2_2$, which may be reasonable given the constraints and parameters in our models and the predictions of our DM models at their best-fit points.
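For two degrees of freedom the $\chi^2$ survival function has the simple closed form p = exp(−χ²/2), which reproduces the 95% threshold ∆χ² ≈ 5.99 quoted above (a sketch of the conversion):

```python
import math

def p_value_chi2_2dof(chi2):
    """Survival function of a chi-squared distribution with 2 dof."""
    return math.exp(-0.5 * chi2)

def delta_chi2_for_cl(cl, dof=2):
    """Invert the 2-dof survival function: the threshold with p = 1 - cl."""
    assert dof == 2, "closed form shown only for 2 dof"
    return -2.0 * math.log(1.0 - cl)
```

For example, `delta_chi2_for_cl(0.95)` evaluates to about 5.99, the value used for the confidence regions above.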
Our Bayesian methodology requires a further ingredient: priors, p(x | M ), for the parameters x in a model M . The priors for our SM-portal models are discussed in Sec. 2. We update the priors with experimental data by Bayes' theorem, resulting in posterior distributions For parameter inference, we marginalize parameters that are not of interest, e.g., and find so-called credible regions: regions of (a, b) that contain a particular fraction e.g., 95% of the posterior. The regions are not unique; we use the ordering rules detailed in Ref. [98]. For judging the viability of a model, we calculate directly its change in relative plausibility [99]. This involves calculating evidence integrals, The This may be expressed in words as Change in relative plausibility = Posterior odds Prior odds = Bayes factor (25) Thus we calculate Bayes factors, which are ratios of evidences, to judge changes in the relative plausibility of SM-portal models in light of data.