Relic density of dark matter in the inert doublet model beyond leading order for the low mass region: 1. Renormalisation and constraints

The present paper is the first in a series that addresses the calculation of the full one-loop corrections to dark matter (DM) annihilation cross-sections in the low mass region of the inert doublet model (IDM). We first review the renormalisation of the model both in a fully on-shell (OS) scheme and in a mixed scheme combining on-shell conditions (for the masses) with an $\overline{{\rm MS}}$ prescription when the partial invisible width of the Higgs is kinematically closed and does not allow the use of a full OS scheme. We show that the scale dependence introduced by the mixed scheme can be tracked through a parametrisation of the tree-level cross-section and the $\beta$ constant of a specific coupling, and we discuss how to minimise this scale dependence. The theoretical uncertainty brought by the scale dependence leads us to introduce a new criterion on the perturbativity of the IDM. This criterion further delimits the allowed parameter space, which we investigate carefully by including a host of constraints, both theoretical and experimental, including in particular new data from the LHC. We come up with a set of benchmark points that cover three different mechanisms for a viable relic density of DM: {\it i)} a dominance of co-annihilation into a fermion pair, {\it ii)} annihilation into 2 vector bosons of which one is off-shell, which requires the calculation of a $2 \to 3$ process at one loop, {\it iii)} annihilation that proceeds through the very narrow standard model Higgs resonance. Since the $2 \to 3$ vector boson process features in all three mechanisms and essentially builds on the simpler annihilation into OS vector bosons, we study the latter in detail in the present paper. We confirm again that the corrected cross-sections involve a parameter that represents rescattering in the dark sector to which a tree-level computation is not sensitive.


Introduction
Cosmology has entered the era of precision measurements. One such measurement is the inferred relic density of dark matter (DM), which is now determined at the per-cent level [1]. For a particle physicist such an accuracy is reminiscent of the one achieved at LEP. In turn, on the theory side, this level of accuracy requires that observables be computed with a precision on par with the experimental precision or even better. Given the cosmological model for the thermodynamics/evolution of the universe and a model of DM, the computation of the relic density of DM involves the calculation of annihilation rates. While, unfortunately, no sign of a model of DM has emerged either in direct detection or at the colliders, many models have been proposed and a huge amount of work has been dedicated to the search for and study of these models. Yet, despite the central role that the very precise measurement of the relic density plays in constraining the phenomenology of these models, only very few examples have provided the calculation of the annihilation rates beyond the leading order, tree-level, approximation. In this series of papers, we will tackle the computation of many annihilation rates that occur in different viable scenarios of the inert doublet model (IDM) [2][3][4], in particular the low mass DM region of this model, beyond the tree-level approximation. This is a continuation of the work we initiated in [5] to cover the heavy DM scenario, 500 GeV to 1 TeV, which complements important non-perturbative electroweak Sommerfeld effects [6][7][8][9][10] that are relevant beyond the TeV. The IDM, with its possible link between the Higgs sector and DM [4], has enjoyed some popularity [9], but its structure beyond tree-level has only been looked at either for some specific applications or conditions [4,29,[44][45][46][47][48][49][50][51][52].
In order to perform one-loop calculations, for any process in the IDM and in particular for DM annihilation into standard model (SM) particles, we need a full and coherent renormalisation program for the IDM. We present the details of our renormalisation schemes and show many important features that we believe will be very useful not only in studying DM annihilation in the IDM but also in studying the renormalisation of models beyond the standard model (BSM).
The plan of this first paper in the series is as follows. In the next section, section 2, we briefly describe the model, bringing forth the physical parameters (rather than the parameters of the underlying Lagrangian). This will set the stage for section 3 on renormalisation, where we strive as much as possible to adopt an on-shell (OS) scheme, which uses the physical masses of the model and the partial width of the SM Higgs to a pair of DM. Even in the narrow range of masses for the IDM that we study here, a fully OS scheme is not possible when this partial width is kinematically closed. In this case we advocate a mixed OS-MS scheme. This mixed scheme introduces a scale dependence, but we will show, on a simple example first, how to track the scale dependence through the β constant of the associated coupling and a knowledge of the tree-level dependence on this coupling. This section will also give us the opportunity to present our automated code for one-loop calculations and the means we have to check the correctness of the results (switching between different choices of non-linear gauge-fixing parameters [53,54] and ultra-violet finiteness of the cross-sections and decays). Section 4 is lengthy but necessary since we reanalyse all available experimental data and theoretical arguments to delimit the new parameter space of the model. The constraints include new data on direct detection and also from the LHC. We then propose a set of benchmark points which will be scrutinised further by computing their respective relic density based on one-loop cross-sections. We will then see how the relic density predictions compare with those derived with tree-level cross-sections. This thorough scan reveals in fact three mechanisms that permit a good DM candidate with a mass below the W boson's, M_W. The features of these mechanisms are summarised in section 5.
They consist of i) a narrow co-annihilation region driven essentially by the gauge coupling, ii) annihilations driven essentially by the SM Higgs resonance and iii) annihilations into 3-body final states, W*f f̄, Z*f f̄, built up (mainly) on WW*, ZZ* (W*, Z* denote the off-shell vector bosons, a notion which will become clearer when we study the 2 → 3 processes). The technicalities involved in these three mechanisms are quite different and this is the reason we decided to present the calculations of these three cases in three separate publications, for more clarity and readability. Both ii) and iii) are very challenging. This is the first time that annihilation of DM into 3 particles is performed at one loop. Renormalisation and loop corrections in the presence of a resonance require extreme care; in fact, our paper [55] on the mechanism of annihilation through a Higgs resonance can serve as a good example for many other models and processes. Very subtle issues about renormalisation, beyond what is presented in this first paper of the series, will be highlighted in [55]. Moreover, even in the so-called co-annihilation scenario [56], there is a small (but non-negligible) contribution that proceeds through annihilation into a 3-body final state. A contribution from W*f f̄ will also feature in the Higgs resonance mechanism. This is one of the reasons why, before tackling the challenging W*f f̄, Z*f f̄ in [57], the present paper studies in section 6, as a warm-up, the annihilation into a pair of on-shell W+W− and ZZ, even if phenomenologically they lead to under-abundance. Another reason is that in this mass range a fully OS scheme is not possible and we need to use the mixed scheme. We therefore study the scale dependence of these annihilation cross-sections and suggest an optimal scale for such processes, thereby putting a conjecture we make for the Higgs decay in this paper, and for other processes elsewhere, on more solid ground.
This study will also reveal that certain choices of parameters lead to too large a scale dependence and even to a breakdown of perturbativity in the sense of the loop expansion. We propose a criterion to avoid such configurations, a criterion that goes beyond the perturbativity argument used in our constraints in section 4. This helps reduce the number of benchmark points we use in the accompanying papers of this series. We end the present paper with a short conclusion in section 7.

The model and the parameters
The IDM consists, in addition to the Standard Model (SM) Higgs doublet Φ_1, of an extra doublet of scalars Φ_2 on which a discrete Z_2 symmetry is imposed. This symmetry entails that Φ_2 is odd while all other fields (of the SM) are even. As a consequence, this symmetry guarantees the stability of the lightest of the scalars of the Φ_2 doublet. If the latter is neutral, it qualifies as a possible dark matter candidate. Another important upshot of this symmetry is that these extra scalar fields in Φ_2 cannot couple to fermions, at least through renormalisable operators. Keeping only renormalisable operators, the scalar sector is therefore modified to

L_scalar = (D_μ Φ_1)†(D^μ Φ_1) + (D_μ Φ_2)†(D^μ Φ_2) − V(Φ_1, Φ_2),
V(Φ_1, Φ_2) = μ_1²|Φ_1|² + μ_2²|Φ_2|² + λ_1|Φ_1|⁴ + λ_2|Φ_2|⁴ + λ_3|Φ_1|²|Φ_2|² + λ_4|Φ_1†Φ_2|² + (λ_5/2)[(Φ_1†Φ_2)² + h.c.],   (2.2)

where the μ_i and λ_i are real and D_μ is the covariant derivative. We parameterise the doublets as

Φ_1 = ( G^+ , (v + h + i G^0)/√2 )ᵀ,   Φ_2 = ( H^+ , (X + i A)/√2 )ᵀ,

where v is the SM vacuum expectation value (vev), v ≃ 246 GeV, defined from the measurement of the W (M_W) and Z (M_Z) masses. We have

v = 2 M_W s_W/e,   s_W² = 1 − M_W²/M_Z²,   (2.4)

where e is the electromagnetic coupling. The SU(2) gauge coupling, g, and the hypercharge gauge coupling, g′, are then g = e/s_W and g′ = e/c_W. h is the SM Higgs boson (with mass M_h = 125 GeV), and G^0, G^± are, respectively, the neutral and the charged Goldstone bosons. X and A are the new neutral physical scalars and H^± is the charged physical scalar. While both X and A are possible DM candidates, the physics is the same under the interchange (λ_5, X) ↔ (−λ_5, A). In this series of papers we take, for definiteness, X as the DM candidate. Note that, in order to avoid any confusion between the neutral scalars of the model, we have labelled the fields differently than in our previous paper [5]. It is much more instructive to revert to a description of the model through the physical parameters, especially since we will be seeking, as much as possible, an on-shell renormalisation of the model based on physical observables.
In particular, the parameters of the potential can be translated into the physical masses of the scalars,

M_h² = 2 λ_1 v²,   M_X² = μ_2² + λ_L v²,   M_A² = μ_2² + λ_S v²,   M_H±² = μ_2² + λ_3 v²/2,

with λ_L = (λ_3 + λ_4 + λ_5)/2 and λ_S = (λ_3 + λ_4 − λ_5)/2. Unfortunately, the masses do not provide enough input to determine all the independent parameters of the potential. First of all, λ_2 is a parameter that describes the interaction solely within the dark sector X, A, H^±. Therefore, at tree-level, the SM particles are insensitive to this parameter even in their interaction with the dark sector particles X, A, H^±. Nonetheless, as we will see, one-loop observables will depend on this parameter that describes scattering/re-scattering in the DM sector. Second, to fully define an observable, we still need an extra parameter: μ_2², λ_3, or alternatively λ_L. The latter seems to be the most appropriate since it has a direct physical interpretation. Indeed, the coupling of the SM Higgs, h, to a pair of DM is (at the amplitude level) given by

g⁰_hXX = −2 λ⁰_L v.   (2.10)

The superscript 0 refers to the tree-level definition. In lieu of the mass term, μ_2², we take λ_L as an input parameter. The parameters of the potential in equation 2.2 can then be reconstructed from the input parameters (M_h, M_X, M_A, M_H±, λ_L) as

λ_1 = M_h²/(2v²),   λ_5 = (M_X² − M_A²)/v²,   λ_4 = (M_X² + M_A² − 2M_H±²)/v²,   λ_3 = 2λ_L − λ_4 − λ_5,   μ_2² = M_X² − λ_L v².   (2.12)

μ_2² is a redundant parameter. As mentioned earlier, λ_2 is a parameter which, at tree-level, is not accessible since it describes the interactions solely within the dark sector. To define any benchmark, and in view of the OS scheme for the renormalisation of the IDM that we advocate, apart from the SM parameters the input (physical) parameters we choose are

{M_X, M_A, M_H±, λ_L, λ_2}.

It is instructive to make the following comments, which will prove helpful later. When the DM candidate, X, is almost degenerate in mass with A, co-annihilation may be important. This scenario requires λ_5 ∼ 0. In another limit, M_H± = M_A, the electroweak custodial symmetry parameter T = 0 at one loop, even when M_A, M_H± are large. In this limit, with M_A,H± > M_X, λ_4 = λ_5 = −λ < 0 (with λ > 0).
As we will see later, direct detection will impose λ_L ≪ 1, which means that λ_3 ≃ 2λ when the custodial symmetry is imposed.
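As an illustration of the inversion above, the following sketch reconstructs the potential parameters from the physical inputs using the standard tree-level IDM relations (our own conventions and sample numbers, not a benchmark of the paper):

```python
V = 246.0  # SM vev in GeV (v ~ 246 GeV)

def reconstruct_lambdas(Mh, MX, MA, MHc, lamL):
    """Tree-level reconstruction of the potential parameters from the
    physical inputs; standard IDM relations (our conventions, which may
    differ from the paper's Equation 2.12 in sign conventions for mu_i^2)."""
    lam1 = Mh**2 / (2.0 * V**2)           # fixed by the SM Higgs mass
    mu2sq = MX**2 - lamL * V**2           # the redundant parameter mu_2^2
    lam5 = (MX**2 - MA**2) / V**2         # < 0 when X is the DM candidate
    lam4 = (MX**2 + MA**2 - 2.0 * MHc**2) / V**2
    lam3 = 2.0 * lamL - lam4 - lam5       # from lamL = (lam3 + lam4 + lam5)/2
    return {"lam1": lam1, "lam3": lam3, "lam4": lam4,
            "lam5": lam5, "mu2sq": mu2sq}

# Example: a light-DM point with custodial symmetry (MA = MHc => lam4 = lam5)
pars = reconstruct_lambdas(Mh=125.0, MX=57.0, MA=300.0, MHc=300.0, lamL=0.01)
assert abs(pars["lam4"] - pars["lam5"]) < 1e-12
```

The degenerate choice M_A = M_H± indeed returns λ_4 = λ_5, the custodial limit discussed above.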
3 Renormalisation of the model

Automation of the calculation
A new model file for the IDM, allowing different schemes for the renormalisation of the model, to which we turn shortly, has been added to SloopS [5,54,[58][59][60][61][62][63][64]. SloopS, our automated code for the calculation of tree-level and one-loop observables, relies on the bundle of packages based on FeynArts [65], FormCalc [66] and LoopTools [67], which we refer to as FFL for short. A few improvements to LoopTools [58] have been made over the years. A key component is the generation of the model file (with counterterms and renormalisation conditions) made possible with LanHEP [68,69] judiciously interfaced with the bundle FFL. We have exploited this approach successfully for the SM, the full renormalisation of all sectors of the MSSM [54,59,62] and the NMSSM [64]. The code has been thoroughly checked and allows many tests of the correctness of the results, including tests of ultraviolet finiteness (for loop calculations) and, very importantly, of gauge parameter independence of the results both at tree-level and at loop level, see section 3.3 below. The code is also optimised for DM annihilation cross-sections such that it outputs directly the result for σv, where σ is the annihilation cross-section and v is the relative velocity of the annihilating particles. This avoids potential numerical instabilities when specialising to v → 0.

The SM parameters
The SM part of the IDM is renormalised in exactly the same way as has become standard practice in one-loop calculations of electroweak observables [64]. This calls for an OS scheme whereby the fermion masses as well as the mass of the W, M_W = 80.449 GeV, the Z, M_Z = 91.187 GeV, and the Higgs, M_h = 125 GeV, are taken as input physical masses. For the SM, the tadpole, which vanishes at tree-level, is required to vanish at all orders. The electric charge is defined in the Thomson limit, see [54]. The light quark (u, d, s, c) masses, m_u = m_d = 66 MeV, m_s = 150 MeV, m_c = 1.6 GeV, are taken as effective quark masses that reproduce the SM value of α⁻¹(M_Z²) ≃ 128.907. Use of the latter effective coupling can, in many instances, amount to a correction of about 13% compared to the use of α(0) = 1/137.036. Since it is α(M_Z²) that is used as the effective coupling in the tree-level evaluation of micrOMEGAs [70][71][72][73], we will investigate whether implementing the numerical value α(M_Z²) instead of α(0) in tree-level cross-sections can account for a significant part or the full one-loop correction. For a portion of the allowed parameter space of the IDM, the relic density is driven almost entirely by the SM Higgs resonance, for which h → b b̄ is dominant. We will therefore take an effective b-quark mass that (at tree-level) reproduces very well the dominant partial width to b b̄ and the total SM Higgs width. In fact, for such a scenario, we adapt the renormalisation of the Higgs mass to include its width as well. We give more details in [55] on this subtle but important issue, when we describe the processes for the DM annihilation that we will be dealing with. The top mass is taken as m_t = 174.3 GeV.
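The ~13% figure quoted above is simply the α² rescaling of a tree-level electroweak cross-section; it can be checked in two lines using the two coupling values given in the text:

```python
alpha0_inv = 137.036    # 1/alpha(0), Thomson limit
alphaMZ_inv = 128.907   # 1/alpha(MZ^2), effective coupling at the Z pole

# a tree-level cross-section proportional to alpha^2 is rescaled by this factor
rescaling = (alpha0_inv / alphaMZ_inv) ** 2
relative_change = rescaling - 1.0   # about 0.13, i.e. the ~13% quoted
```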

Gauge fixing
We take a non-linear gauge-fixing term [53,54,58] that still preserves the Z_2 symmetry. We take these gauge-fixing terms to be renormalised. In particular, the gauge functions involve the physical fields. Although this will not make all Green's functions finite, it is enough to make all S-matrix elements finite. With A_μ the photon field, we write the gauge-fixing as

L_GF = −(1/ξ_W) |(∂_μ − i e α̃ A_μ − i g c_W β̃ Z_μ) W^{μ+} + ξ_W (g/2)(v + δ̃ h + i κ̃ G^0) G^+|² − (1/2ξ_Z) (∂·Z + ξ_Z (g/2c_W)(v + ε̃ h) G^0)² − (1/2ξ_γ) (∂·A)².

The ghost Lagrangian L_Gh is derived through the use of the BRST transformation, see [53], which leads to an easy implementation in the automated code SloopS. We specialise to the case ξ_W = ξ_Z = ξ_γ = 1 so that the gauge propagators retain the simple form of the usual Feynman gauge, while the non-linear gauge furnishes enough parameters (α̃, β̃, δ̃, ε̃, κ̃) on which to carry the gauge parameter dependence of the amplitude, for further checks on the correctness of the calculation besides the ultraviolet finiteness tests. We carry out gauge parameter independence checks on all cross-sections and decays we calculate with SloopS in this series of papers.

On-shell renormalisation for the masses of the IDM scalars
The renormalisation of the IDM is technically quite straightforward since the new scalars, A, X and H^±, do not mix among themselves, nor do they mix (through 1 → 1 transitions) with the scalars of the SM, including the Goldstones, because of the Z_2 symmetry. Shifts on the parameters of the potential (which get translated into counterterms, δλ_i, for the λ_i and μ_2²), as well as shifts on the fields through wave-function renormalisation constants (δZ), are introduced. In the OS scheme, the masses of the physical scalars, φ (φ = X, A, H^±), are defined from the pole position of the corresponding one-loop self-energy Σ_φφ(k²), where k is the momentum carried by φ. We also require that the residue at the pole of these particles be properly normalised to unity. The conditions on the counterterms are then

δM_φ² = Re Σ_φφ(M_φ²),   δZ_φ = −Re ∂Σ_φφ(k²)/∂k² |_{k²=M_φ²}.   (3.4)

λ_L: MS and OS
We use the full h → XX amplitude to define λ_L. At tree-level, the amplitude is given by Equation 2.10. It serves to generate (and define) the counterterms for the one-loop amplitude. The latter also includes the contributions of the one-loop diagrams, such that the full renormalised one-loop amplitude is momentum dependent. The full one-loop renormalised amplitude (when the threshold is open) for h(Q²) → X(p_1²)X(p_2²), of the SM Higgs h with momentum Q to a pair of the DM X with momenta p_1 and p_2, writes as

A_hXX = −2v [ λ_L (1 + δv/v + δZ_h/2 + δZ_X) + δλ_L ] + A^{1PI}_hXX(Q², p_1², p_2²),   (3.5)

where A^{1PI}_hXX is the full one-loop 1-particle-irreducible vertex. δλ_L is the counterterm for λ_L, for which we are seeking a renormalisation condition, δZ_h is the SM Higgs wave-function renormalisation and δZ_X is the wave-function renormalisation for the DM particle, X. δv is the counterterm for v (defined through e, M_Z, M_W in Equation 2.4). When the threshold for the Higgs decay to XX is open, we set Q² = M_h² and p_1² = p_2² = M_X² and require that the full one-loop correction to this partial width be zero, defining a gauge-invariant OS counterterm for λ_L as

δ^OS λ_L = Re A^{1PI}_hXX(M_h², M_X², M_X²)/(2v) − λ_L (δv/v + δZ_h/2 + δZ_X).   (3.6)

Another gauge-invariant but scale-dependent scheme, valid even when the decay threshold is closed, is to use an MS definition where only the (mass-independent) ultraviolet divergent part of the same combination is kept,

δ^MS λ_L = [ Re A^{1PI}_hXX/(2v) − λ_L (δv/v + δZ_h/2 + δZ_X) ]_∞.   (3.7)
In our code, δ^OS λ_L in Equation 3.6 and δ^MS λ_L in Equation 3.7 are extracted exactly by evaluating the amplitude h → XX and constructing the above equations.

The β constant for λ_L
The coefficient of the ultraviolet divergent part is nothing but the one-loop β constant for λ_L,

δ^MS λ_L = β̄_{λ_L} C_UV,   C_UV = 2/ε − γ_E + ln 4π,   (3.8)

where ε = 4 − d, with d the number of dimensions in dimensional regularisation, and γ_E is the Euler-Mascheroni constant. Or, keeping µ_dim, the scale introduced by dimensional regularisation (which goes hand in hand with the ln(4π) term as ln(4πµ²_dim)), the scale dependence of the parameter λ_L writes as

λ_L(µ_2) = λ_L(µ_1) + β̄_{λ_L} ln(µ_2²/µ_1²).   (3.9)

The analytical formulae for the β constants of the IDM have been adapted from those of the general 2HDM (2 Higgs Doublet Model) [44,74] and can be found in [24,75]. For our purposes, we have reformulated them for λ_L, separating the scalar (S), the gauge (g) and the Yukawa (Y) contributions,

β̄_{λ_L} = β̄^{(S)}_{λ_L} + β̄^{(g)}_{λ_L} + β̄^{(Y)}_{λ_L},   (3.11)

where the Yukawa part behaves as β̄^{(Y)}_{λ_L} ∝ λ_L Σ_f N_c^f m_f²/v². Here, m_f is the mass of the fermion f and N_c^f is its corresponding colour factor (3 for quarks and 1 for leptons). For small λ_L, the Yukawa contribution is tiny and its largest contribution is from the top quark. We find excellent agreement (in fact perfect agreement within machine precision) between these analytical formulae and those extracted from our code according to Equations 3.7, 3.8. An important observation is that Equation 3.11 shows that even if λ_L = 0, a one-loop induced λ_L is generated. Also, while λ_2 is a parameter residing solely in the dark sector and is not involved at tree-level in DM annihilation processes to SM particles, it makes its effect felt at one loop, with the conclusion that a large scale variation due to λ_2 can be present on top of the other one-loop contributions. In our code, we can freely vary µ_dim to quantify the scale dependence in an MS scheme. Obviously, in a fully OS scheme, the check on the UV finiteness of the result means that there is no µ_dim dependence.

Scale dependence of the one-loop corrected h → XX in the MS
At tree-level, the partial width for the Higgs decay to a pair of DM, X, is expressed as

Γ⁰_{h→XX} = (λ_L² v²)/(8π M_h) √(1 − 4M_X²/M_h²),   (3.12)

showing that the parametric dependence of this observable on λ_L, a quadratic dependence, is algebraically straightforward. This is the reason this observable is chosen as an input to define λ_L. Naturally, in the OS scheme, this observable receives no correction. Nonetheless, as we will see later, present experimental constraints mean that the Higgs partial decay rate to DM is far too small to be measured, let alone measured with good precision, so one may be forced to rely on an MS prescription. When h → XX is closed, the use of the OS scheme is not possible, and in this case we have to study the scale uncertainty. The purpose of this short interlude is to investigate which scale choice may be considered the best, best in the sense of minimising the one-loop correction. In fact, Γ_{h→XX} provides a good example. One can study this observable at one loop in the MS scheme and quantify how it deviates from the tree-level result, which is, by construction, the result of the OS scheme. Knowing the λ_L dependence of an observable at tree-level and having at our disposal the β̄_{λ_L} of the model, the scale dependence can be derived analytically, allowing one to compare the difference between two choices of scale, µ_1 and µ_2.
The λ_L dependence of Γ_{h→XX} is trivial. Indeed, from Equation 3.12, we have

dΓ⁰_{h→XX}/Γ⁰_{h→XX} = 2 dλ_L/λ_L,   (3.13)

which, through Equation 3.9, allows us to relate the one-loop correction at scale µ_2 to that at scale µ_1 as

δΓ_{h→XX}(µ_2) = δΓ_{h→XX}(µ_1) + 2 Γ⁰_{h→XX} (β̄_{λ_L}/λ_L) ln(µ_1²/µ_2²).   (3.14)

As expected, an important lesson to always keep in mind is that large β̄_{λ_L} values induce large scale variations. For the IDM, β̄_{λ_L} depends also on λ_2, which can contribute significantly. These considerations are crucial. However, they do not point to the most optimal choice of µ. The optimal choice of µ is the one that minimises the full one-loop correction. The latter will involve all the scales of the problem. Therefore, to investigate the issue of the optimal scale, we need to consider a few examples, calculating the full one-loop correction.
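The bookkeeping behind this scale dependence is simple enough to script. The sketch below uses the tree-level width formula and purely illustrative values of λ_L and β̄_{λ_L} (not taken from the paper's Table 1); it checks that running λ_L between two scales shifts the tree width by 2 Γ⁰ (β̄_{λ_L}/λ_L) ln(µ_2²/µ_1²), up to higher orders, which is the shift the MS correction has to compensate:

```python
import math

V, MH, MX = 246.0, 125.0, 57.0  # GeV; illustrative masses as in section 3

def gamma_tree(lamL):
    """Tree-level width Gamma(h -> XX) = lamL^2 v^2 beta_X / (8 pi Mh)."""
    beta_X = math.sqrt(1.0 - 4.0 * MX**2 / MH**2)
    return lamL**2 * V**2 * beta_X / (8.0 * math.pi * MH)

# Illustrative values only (NOT the paper's benchmark numbers)
lamL, beta_lamL = 0.025, 1.0e-3
mu1, mu2 = 125.0, 250.0

# one-loop running of lamL between the two scales
lamL_mu2 = lamL + beta_lamL * math.log(mu2**2 / mu1**2)

# exact shift of the tree width vs the linearised (one-loop) estimate;
# the MS-bar correction must compensate this shift, which is why large
# beta_lamL values induce large scale variations
shift_exact = gamma_tree(lamL_mu2) - gamma_tree(lamL)
shift_linear = 2.0 * gamma_tree(lamL) * (beta_lamL / lamL) * math.log(mu2**2 / mu1**2)
```

The two shifts agree up to a relative difference of order β̄_{λ_L} ln(µ_2²/µ_1²)/(2λ_L), i.e. a two-loop effect.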
For the purpose of this exercise, we take a value of λ_L which is not too small, in order not to induce unnaturally large relative corrections. At this stage, we do not impose experimental constraints on the parameter space; these are studied in great detail later. Our aim here is to see if there is a trend for an optimal choice of the scale that we could then advocate for other processes driving the relic density. We consider 3 models that differ in the masses of A and H^±. With M_h = 125 GeV, we keep M_X = 57 GeV for these 3 models. These choices generate different hierarchies for the scales of the problem. While λ_2 is not needed to calculate the tree-level decay, its value is required at one loop. We consider 3 values of λ_2 for each point, to gauge the λ_2 dependence, and we test various values of the scale, µ, in relation to the scales that are involved in the observable at one loop. It is important to observe that the masses of the IDM act at two levels in the one-loop result. Re-interpreted in terms of λ_{i=3,4,5,(L)}, they contribute directly to the coefficient β̄_{λ_L}, as λ_2 does; but unlike λ_2, the masses are also involved in the arguments of the various one-loop scalar functions.
Our results for the numerical full one-loop corrections are shown in Table 1. First of all, Equation 3.14 is in perfect agreement with the numbers given in Table 1 where we compare, for the same point, the correction between two scales. This is further evidence of the correctness of the implementation of the model and of the numerical computation.
The relevant scales of the problem are all the invariants that are involved in the loop functions that enter the radiative corrections. They include the external kinematical variables but also the internal masses entering the loop calculations. The µ_dim dependence is tracked as ln(Q_Δ²/µ²_dim), where Q_Δ is an effective collective scale, a combination of the scales involved in the (various) loops entering the calculation. As argued at some length in Ref. [64], if one of these scales is (much) larger than the others, it should approximate Q_Δ.
One would expect that the typical scale for this decay is M_h. Observe that in our case M_h ∼ 2M_X. Table 1 shows that µ_dim = M_h, 2M_X is a good choice, but only if M_A < M_h, 2M_X. In particular, for case (C) of Table 1, where M_A is much larger than M_h and 2M_X, M_A is by far the best choice for all values of λ_2, confirming the general observation made in [64], in the totally different context of the NMSSM, where it was found that the largest scale minimises the correction. Keep in mind also that when M_A is much larger than M_X, β̄_{λ_L} is larger (since λ_{3,4,5} are larger). It is therefore all the more important to choose M_A as the optimal µ_dim. As a rule of thumb, we advocate the optimal scale to be µ = max(M_h, M_A): M_h is the typical (kinematical) scale of the process while M_A is the typical internal scale. We find that it fares better than M_H±. Scales that are much smaller or much larger than the optimal scale, for instance M_X, M_h/2, 2M_h, lead to large corrections. In many instances these corrections are so large that the calculations are not reliable.

Table 1: One-loop corrections, δΓ_{h→XX}, to the partial decay width of the SM Higgs (M_h = 125 GeV) to pairs of DM, X, with M_X = 57 GeV (all values for the widths are given in MeV), in the MS scheme, for different masses M_H± and M_A and different values of λ_2, the coupling within the dark sector. The tree-level width of this observable is 5.106 MeV. All masses (and scales) are in GeV. For each set of parameters defining the IDM model, we give the reconstructed λ_i parameters as well as the one-loop β̄_{λ_L}. The parameters of the model are only illustrative and do not necessarily pass the experimental constraints on the IDM. Entries in bold correspond to corrections exceeding 75% of the tree-level value, while those that are also underlined lead to a total width that turns negative.
We check this conjecture about the optimal scale also in the study of the annihilation cross-section of DM, XX → SM. Considering the rather small velocities involved in the annihilation, 2M_X will again represent the typical kinematical scale, to be compared with the internal mass, M_A.
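The rule of thumb above is easy to state programmatically. The sketch below is our own wrapper around it, using the kinematical scales named in the text (M_h for the decay, 2M_X for small-velocity annihilation):

```python
def optimal_scale(kinematical_scale, MA):
    """Rule of thumb of the text: the optimal mu_dim is the larger of the
    typical kinematical scale of the process and the internal scale M_A."""
    return max(kinematical_scale, MA)

MH, MX = 125.0, 57.0
# h -> XX decay: the kinematical scale is Mh; a heavy A (case (C)-like) wins
assert optimal_scale(MH, MA=550.0) == 550.0
# XX -> SM annihilation at small velocity: the kinematical scale is ~2*MX
assert optimal_scale(2.0 * MX, MA=100.0) == 114.0
```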

Present Constraints on the parameter space of the IDM
We quantify the effect of the one-loop corrections on the annihilation cross-sections in the IDM. We perform these calculations on some benchmark points of the IDM. The benchmark points need to pass several constraints. While searching for benchmark points that satisfy all theoretical and experimental constraints, we perform a numerical scan where all the constraints that we discuss in this section are implemented numerically, without any approximation, by running different codes and routines. Nonetheless, it is instructive to extract the salient features through approximate analytical formulae and some judicious considerations, as this helps carry out a more efficient scan. Thorough investigations of the constraints have been performed recently [32,37]. While we agree with their findings, we update the constraints in view of new experimental data on the direct detection of dark matter and searches/analyses at the LHC. Our purpose is not to present new exclusion zones in the parameter space of the model, but to make sure that the benchmark points we pick for our studies at one loop pass all the experimental (and theoretical) constraints and are representative of a larger set. As is known, the present precision on the relic density of DM is such that it drastically reduces the parameter space of any model of DM. However, since all studies impose this constraint based on tree-level calculations of the annihilation cross-sections, we use the tree-level based relic density constraint only as a guide, and see how the prediction transforms when a full one-loop calculation is implemented and how the theoretical uncertainties should be taken into account. Many studies have allowed benchmarks where the predicted (tree-level) relic density leads to under-abundance, but not (obviously) over-abundance. Nonetheless, a prediction of over-abundance with a tree-level calculation may turn into a viable model with a more precise calculation.
First of all, let us repeat that we are taking X as the DM candidate. X is neutral and is taken to be the lightest of the three additional scalars that the IDM provides. In terms of the mass relations of section 2, this means that

λ_5 < 0,   (4.1)
λ_4 + λ_5 < 0.   (4.2)

The equality λ_4 = λ_5 automatically evades indirect electroweak precision measurements by preserving custodial symmetry. Remember also that the existence of the SM Higgs fixes λ_1 = M_h²/(2v²) ≃ 0.13. Usually, when studying the viable parameter space of the IDM, the discussion starts with the restrictions from the stability and the perturbativity of the potential. We discuss all these constraints, but we prefer to start with a constraint that has become quite stringent in recent years. It also delimits a specific coupling, and not a combination of a large number of parameters: it concerns the parameter λ_L and its connection to DM direct detection. λ_L is a key input in our OS renormalisation scheme.

Dark matter direct detection
The direct detection of dark matter sets a very strong constraint on λ_L/M_X in the IDM. In the rather narrow (light) mass range we consider, direct detection proceeds through SM Higgs exchange, with a spin-independent cross-section on a nucleon N scaling as

σ^SI_XN ∝ λ_L² f_N² m_N⁴ / (M_h⁴ M_X²),

where m_N is the nucleon mass and f_N the nucleonic form factor. Assuming that X accounts for all of the DM density on Earth and provides the correct measured relic density, the latest Xenon1T limit [77] for a DM mass in the range 50 < M_X < 100 GeV translates into an upper bound λ_L^max on λ_L. Applied to our case, the corresponding invisible branching fraction of the Higgs is far too small compared to the present limit from the LHC fits [78][79][80], as can be seen from Equation 3.12. Considering the very tiny value of the invisible width, we take the total Higgs width to be the SM value from [81,82]. If the IDM only provides a fraction of the total relic density of DM, the direct detection cross-section needs to be rescaled. With the density of the IDM written as Ω_X and that extracted from the Planck measurements as Ω^Planck_DM, the limits above are transformed into

λ_L < λ_L^max √(Ω^Planck_DM/Ω_X),

where λ_L^max is derived from Equation 4.6. We take the view that, if the IDM is to be considered as a model for DM, it should provide at least 50% of the total DM. This requirement does not significantly change the limit on λ_L from direct detection (a modest factor of √2 at most), which is restricted to be rather small, λ_L < 0.01.
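Since the direct-detection cross-section scales as λ_L², the rescaling of the limit when X is only a fraction of the DM is a one-liner; the value of λ_L^max below is a hypothetical placeholder for illustration, not the actual Xenon1T-derived number:

```python
import math

def rescaled_lamL_limit(lamL_max, fraction):
    """Relax the direct-detection limit on lamL when X provides only a
    fraction Omega_X / Omega_Planck of the total DM density (sigma ~ lamL^2)."""
    return lamL_max / math.sqrt(fraction)

lamL_max = 0.007  # hypothetical limit, for illustration only

# requiring the IDM to provide at least 50% of the DM relaxes the limit
# by at most a factor sqrt(2), as stated in the text
assert abs(rescaled_lamL_limit(lamL_max, 0.5) - lamL_max * math.sqrt(2.0)) < 1e-15
```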

Electroweak Precision Observables (EWPO)
The custodial SU(2)-breaking parameter, T [83], restricts the mass splitting between the scalars of the IDM. In the limit M_X ≪ M_A, M_H±, we require M_A ≃ M_H±. Indeed, in this limit T is proportional to (M_H± − M_A)(M_H± − M_X). This is combined with the S parameter [83], which gives a weaker constraint.
The full expressions for the S, T contribution in the IDM can be found in Ref. [3].

Stability of the potential
As with λ_1 > 0, we need λ_2 > 0. Equations 4.1 and 4.2 are sufficient to guarantee that the vacuum is neutral (λ_4 − |λ_5| < 0). The other constraints on the potential (see for example [37]) are easily satisfied once one takes into account that direct detection requires a very small λ_L; the conditions expressed as a function of λ_L are satisfied if we take λ_L > 0. Moreover, with a very small λ_L and a not vanishingly small λ_2, the minimum of the potential is the trivial one, with the inert minimum being the deepest [37]. An upper limit on λ_2 (which enters at one-loop order in the relic density calculation) is derived by considering unitarity, to which we now turn.

Tree-level unitarity
Unitarity constraints, which turn out to be stronger than the perturbativity constraints, are also imposed, see [49]. They have been derived by considering the eigenvalues of the full set of scattering amplitudes involving all the scalars of the theory, and they allow us to set upper limits on the masses of the scalars. For M_H± ≫ M_X, these constraints simplify if we impose the T-parameter constraint, M_A = M_H± (or λ_4 = λ_5 = −λ). They can be decomposed into subsets: the λ_1-, λ_2-independent eigenvalues translate into mass limits, while stronger limits apply if we allow for non-negligible values of λ_2. In the approximation λ_4 = λ_5 and with λ_L ∼ 0, we obtain an additional constraint which, for small λ_2, reduces to the previous limit.

Direct searches at LEP
The LEP constraints are easily evaded if the masses of the inert scalars are above the threshold for LEPII production. In our benchmarks, this applies to H+H− pair production. For XA associated production at LEPII, the constraints can be avoided either because the cross-section is much reduced (close to threshold) and/or because the signature is such that these particles go undetected. A reinterpretation of LEPII searches for charginos [85] and neutralinos [14] in terms of the IDM has been performed. These constraints can be summarised simply as M_A, M_H± > 110 GeV, independently of the X mass (as long as M_X < M_A,H±). There is however a caveat: for ∆M_AX = M_A − M_X < 8 GeV, the leptons are too soft for the LEP searches to retain sensitivity. This recently uncovered small region [37, 86] allows efficient XA co-annihilation into a fermion pair. We study this region and look at the impact of the loop corrections on the relic density in Ref. [56].

Direct searches at the LHC
The searches (and limits on the parameter space) are based primarily on the Drell-Yan-like associated production of the scalars. The signatures are classified according to the number, ℓ, of charged leptons, missing energy and possibly jets, where the Z, W± may or may not be on-shell. The monojet signature (XXj, XAj) is relevant only when λ_L is not too small (XXj) or when M_A − M_X is very small [37]. The contribution from vector boson fusion is negligible [37]. Higher values, ℓ = 4, 5, can be envisaged depending on the mass difference between M_A and M_H±, giving more leptons through cascade decays [86]. The most studied signatures are discussed in Refs. [12, 87]. Simulations for all processes 4.18-4.21 were studied for some benchmark points in [86], while a full simulation was performed for 4.19 in Ref. [37]. The caveat, |m_ℓℓ − M_Z| > 10 GeV, that we alluded to earlier has a crucial impact on the searches, existing and forthcoming. This cut is meant to reduce the large SM background in events containing an (on-shell) Z boson. Yet, for a large part of the IDM parameter space, ∆M_AX > M_Z, for which A → XZ proceeds with an on-shell Z. Therefore, large values of M_A not only give smaller cross-sections but also cut away a very large fraction of the lepton yield. Thus, the prospects for future discovery are slim. Very recently, ATLAS has also given limits on the masses of the supersymmetric charginos and neutralinos in the pure (simplified) wino and (non-degenerate) higgsino limits of the MSSM (minimal supersymmetric model) [88], based on data from the LHC at √s = 13 TeV. In these special manifestations of the MSSM (with heavy sfermions), the production mechanisms and the signatures at the LHC are the same as those of the IDM; the only (notable) difference is the spin of the produced particles.
Therefore, we also take into account the ATLAS exclusion zones, observing that the corresponding limits on the IDM are more relaxed because of the scalar nature of the IDM particles, which results in much smaller cross-sections. In Table 2, we therefore also provide the corresponding cross-sections for the electroweak production of the IDM alongside those of the wino and higgsino production processes. Moreover, we also carry out a recasting analysis based on MadAnalysis 5 [89][90][91] for some of the benchmark points; a more extensive recast of the IDM will be studied independently of this series of papers. We check a few of the benchmarks against some of the existing searches. In particular, we use the 2ℓ + E_T^miss [92, 93] from electroweakino/slepton pair production, the 2ℓ + E_T^miss and 3ℓ + E_T^miss [94, 95] from chargino-neutralino pair production, the 2ℓ + E_T^miss [96, 97] from the mono-Z search and the ℓ + E_T^miss [98, 99] from the W search. We must mention here that the topologies considered in the latter two analyses are not the same as ours, and detailed simulations including the relevant cuts are necessary for a proper study. However, the aforementioned analyses, especially the ones with chargino pair or chargino-neutralino production, show that our benchmark points are safe. We also use MadAnalysis 5's extrapolation code [100] to show that the checked benchmark points remain allowed at the high-luminosity run of the LHC (HL-LHC) with an integrated luminosity of L = 3 ab⁻¹.

µ γγ
We also check for a possible constraint from the signal strength in the diphoton decay of the SM Higgs, µ_γγ, with [81, 82] Br_SM(h → γγ) = (2.27 ± 0.05) × 10⁻³. (4.24) The ATLAS and CMS limits for µ_γγ are given in Refs. [101] and [102] respectively. We use the standard combination of signal strengths and uncertainties (see [52]) and impose µ_γγ = 1.04 ± 0.10. (4.25) The IDM contribution has been worked out in [22]. It turns out, see Table 2, that the present signal-strength constraint on the IDM is not significant. Nonetheless, we list the values of µ_γγ for the benchmark points should future LHC analyses improve the limits.

Relic density
The present Planck measurement [1] of the relic abundance is Ωh² = 0.1200 ± 0.0012, i.e. an experimental uncertainty of less than 2%. This is the main reason why it is extremely important that the theoretical prediction be as precise as possible. Once the cosmological model has been set up, in our case the thermal freeze-out assumption, the major uncertainty is the evaluation of the annihilation cross-sections. These cross-sections must be evaluated beyond the tree-level approximation; this is the aim of this series of papers for the IDM. However, to select a few points that pass all other constraints and to weigh the importance of the radiative corrections, we have to be content with a prediction of the relic density based on tree-level cross-sections. We use micrOMEGAs 5.0.7 [70][71][72][73][104] for computing the relic densities. Some considerations beyond tree level are taken into account in micrOMEGAs, in particular in the Higgs couplings to the SM particles. The code also uses a running electromagnetic coupling, with an effective coupling estimated at α(M_Z) ∼ 1/128.907 instead of the OS coupling in the Thomson limit, α = 1/137.036. For many annihilation cross-sections, this change of α amounts to an overall rescaling that brings a correction of about 13% in 2 → 2 annihilation processes. However, in many scenarios and models beyond the SM, this rescaling does not account for the full one-loop correction [5, 52, 54, 60, 61]. It is therefore wise, when selecting benchmark points using micrOMEGAs for the relic density constraint, to allow for a theoretical uncertainty of 20%. We therefore keep points that satisfy 0.096 < Ωh² < 0.144. Moreover, we quote the value of the relic density using micrOMEGAs with both the effective α(M_Z) and α.
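The 13% rescaling and the 20% selection window quoted above can be checked with elementary arithmetic; this is a back-of-the-envelope sketch, not part of any of the codes used in the paper:

```python
# A 2 -> 2 annihilation amplitude carries two powers of the electromagnetic
# coupling, so replacing the Thomson-limit alpha by the effective alpha(M_Z)
# rescales the cross-section by (alpha(M_Z)/alpha(0))^2, i.e. about +13%.
ALPHA0_INV = 137.036    # 1/alpha in the Thomson limit
ALPHA_MZ_INV = 128.907  # 1/alpha(M_Z) as used by micrOMEGAs

rescaling = (ALPHA0_INV / ALPHA_MZ_INV) ** 2 - 1.0   # ~ 0.13

def passes_relic_window(omega_h2, central=0.12, theory_unc=0.20):
    """20% theoretical band around the central value: 0.096 < Omega h^2 < 0.144."""
    return central * (1 - theory_unc) < omega_h2 < central * (1 + theory_unc)
```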
Once we compute the full one-loop corrections, we comment on the use of the effective electromagnetic coupling for each of the benchmark points that we study at one loop. When running micrOMEGAs, we also let the code check explicitly for the direct detection and possible indirect detection constraints.
Table 2: Benchmark points that pass the constraints discussed in Section 4. All masses are in GeV. The benchmarks are divided into three classes: 1) the Higgs resonance region, 2) the small co-annihilation region with ∆M = M_A − M_X allowed by LEPII, and 3) the less tuned annihilation class for DM masses around M_X = 70 GeV. For the relic density, we use micrOMEGAs 5.0.7 with the SM parameters given in Section 3.2. The relic density is calculated both with the OS α in the Thomson limit and with the effective α(M_Z²). To weigh the dependence on λ_L, we also give the result for λ_L = 0. The relative contributions of the most important annihilation/co-annihilation cross-sections to the relic density (more than 5%) are given: XX → W+W−, ZZ, bb, gg and AX → ff. These relative contributions do not depend much on the value of α used for the relic density calculation; we therefore show only those calculated with α(0). In the particular case of point P57, the relative contributions change drastically when λ_L = 0 (they become 90% into W+W− and 10% into ZZ), since the Higgs-mediated bb final state is eliminated for λ_L = 0. We do not show in this table either the values of the S, T, U parameters or those of the direct detection cross-section; the latter can be trivially rescaled, see Section 4, where we also carefully check the theoretical constraints of unitarity.
For each point, we do however give the values of the LHC (√s = 13 TeV with the NN23LO1 parton distribution function, pdf, as implemented in MadGraph5_aMC@NLO [103]) cross-sections (in fb) for the electroweak production of a pair of the new scalars, should future LHC analyses update the exclusion regions. Alongside these IDM cross-sections, we also list the wino and higgsino cross-sections that the LHC collaborations have studied (simplified wino and non-degenerate higgsino) [88] and may update in the future. Note that the phenomenology of point A at a high-luminosity LHC is considered in [37] and that of point B in [86]. The LHC cross-sections are computed at LO; the NLO cross-sections at the LHC should be larger by about 30%. While the NLO corrections have been computed in the case of supersymmetry, we expect the same corrections to hold for the IDM, since these are DY-like processes and the corrections are essentially initial-state QCD corrections. We also show the values of the Higgs-to-diphoton signal strength, which include the IDM contribution, should this observable be better constrained in future LHC analyses.

5 The different channels contributing to the relic density for 55 < M_X < 75 GeV

Characteristics of the benchmarks points
Keeping all the constraints introduced so far, we of course recover the general conclusions of a scan of the IDM with the relic density constraint calculated with micrOMEGAs: apart from the heavy-scalar scenarios, M_X > 500 GeV, which we investigated in [5], the low mass region lies in the range 55 GeV < M_X < 75 GeV.
Within this relatively small range, 3 different mechanisms are at play. • The co-annihilation region. For M_X ∼ M_A (within M_A − M_X < 8 GeV set by the LEPII constraint), extremely efficient co-annihilation XA → ff takes place. It is so efficient that the Boltzmann suppression factor is needed to tame its contribution; ∆M_AX should therefore be as large as possible. This is the reason this scenario occurs at the very edge of the limit allowed by LEPII, i.e., M_A ∼ M_X + 8 GeV. For M_X ∼ 55 GeV, XX annihilations are still away from the h peak and from the onset of XX → Wff, so XA → ff is the principal channel. In any case, for the smallest M_X masses, one still needs to reduce annihilation through the SM Higgs by setting λ_L ∼ 0. Although we find this scenario extremely fine-tuned, it is a good example of the effect of the loop corrections in a specific limit: here the tree-level process is governed entirely by the gauge coupling, and the one-loop corrections to AX → Z → ff are technically the easiest to consider. A detailed investigation is conducted in Ref. [56].
• The SM Higgs resonance. For M_X ∼ M_h/2, efficient annihilation is possible through the very narrow Higgs resonance. Again, this is also fine-tuned. Nonetheless, the annihilation does require a non-zero hXX coupling, λ_L. The one-loop corrections here have to deal with the implementation of the width at one loop, a non-trivial problem. We leave all technicalities and the discussion of the results to Ref. [55].
• The annihilation XX → W+W−, ZZ. For M_X < 80 GeV, these annihilations occur with large enough rates even though we are below the W+W− threshold. They become too efficient once this threshold is crossed: past about M_X ∼ 73 GeV, the annihilations become too large, depleting the dark matter density. Technically, the one-loop corrections here are quite challenging, requiring the computation of 2 → 3 processes at one loop. We leave these calculations and their impact on the relic density to a separate paper [57]. One cannot treat XX → Wff with an off-shell W as an effective 2 → 2 process by integrating over a Breit-Wigner distribution without getting into trouble with gauge invariance. Nonetheless, since the W+W− and ZZ annihilation cross-sections are the backbone of XX → V ff [57], we study the 2 → 2 on-shell processes XX → W+W−, ZZ, see Section 6, before we embark on the full 2 → 3 processes. This investigation helps unravel some characteristics of the corrections and, more importantly, reveals that one-loop perturbativity may break down for some choices of the parameters that tree-level unitarity allows. This study above the W+W− and ZZ thresholds will help select further among the benchmark points.
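The Boltzmann penalty invoked for the co-annihilation region can be sketched with the standard Griest-Seckel weight; this is an assumption-laden illustration (the exact expression used in our numerics is that of micrOMEGAs, not this one):

```python
# Minimal sketch of the Boltzmann suppression of co-annihilation, using the
# standard Griest-Seckel weight (an approximation, not the paper's exact
# formula): relative to XX annihilation, the AX channel is weighted by roughly
# (1 + delta)^(3/2) * exp(-x_f * delta), with delta = (M_A - M_X) / M_X.
import math

def coann_weight(m_x, m_a, x_f=22.5):
    delta = (m_a - m_x) / m_x
    return (1.0 + delta) ** 1.5 * math.exp(-x_f * delta)

# At the LEPII edge, M_A = M_X + 8 GeV with M_X = 55 GeV, the suppression is
# mild, while a 30 GeV splitting essentially switches the channel off.
w_edge = coann_weight(55.0, 63.0)   # delta ~ 0.145
w_large = coann_weight(55.0, 85.0)  # delta ~ 0.545
```

This is why the co-annihilation scenario sits exactly at the 8 GeV LEPII edge: any larger splitting exponentially kills the channel.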
The benchmark points based on the constraints we have reviewed so far are listed in Table 2. Most of the regimes are, at tree level, driven by the gauge coupling, but the contribution of λ_L is not negligible. The value of λ_L plays a major role in the derivation of the cross-section for the Higgs resonance scenario, since this mechanism calls for the hXX coupling. The co-annihilation region requires λ_L ∼ 0 and is therefore driven by the gauge coupling. While the (non-resonant) annihilation region, XX → Wff and XX → Zff, is dominated by the gauge coupling, the λ_L contribution is not negligible at all: first because there is a contamination from SM Higgs exchange, but also because the production of longitudinal vector bosons is λ_L dependent. This λ_L dependence can be seen in the XXGG and XXG+G− quartic couplings which, once the masses of the scalars (and hence λ_4,5) are fixed, are sensitive to λ_L. Table 2 gives the value of the relic density when λ_L is switched off. Because of the importance of this coupling and the fact that we will, in some cases, rely on an MS scheme for its renormalisation, we give in Table 3 the value of the β constant of λ_L, β̃_λL, for each benchmark point. Observe the very large values obtained for points E and D, which are associated with large masses of the heavy scalars.
Table 3: Values of the underlying λ_3,4,5 parameters and the corresponding β functions for λ_L, β̃_λL, for the benchmark points defined in Table 2.

Cross-sections and velocity dependence
In order to convert the cross-sections into their contribution to the relic abundance, a thermodynamical/cosmological model needs to be implemented. In the case of freeze-out that we work with, a thermal average, i.e., a convolution of the cross-sections over the relative velocity of the annihilating particles, is performed. The velocity distribution is important: if the cross-sections are large in regions where the weight of the velocity distribution is negligible, the contribution of the process to the relic density is very much reduced. In the case of annihilations, v determines the total centre-of-mass energy, or the kinematical invariant s, as s = 4M_X²/(1 − v²/4). (5.1) Although we will interface all of our cross-sections to micrOMEGAs for a full numerical integration over velocity and temperature, it is instructive to gain an analytical understanding of the velocity-averaged cross-section at a temperature T for annihilating DM particles of mass M_X. Figure 1 shows the weight function of the velocity distribution at the freeze-out temperature, T_f. As a general rule, freeze-out occurs at x_f = M_X/T_f ∼ 20-25. This helps identify which range of the relative velocity, v, generates most of the annihilation cross-section, and it is particularly useful for the one-loop calculations: there is no need to compute the (involved) one-loop cross-sections for the many values of v at which they are weighted down. Observe that a typical v where we must get the cross-section right is v ∼ 0.4 (see Figure 1) and that for v > 0.8 the cross-sections are weighted down. As we will see, there may be exceptions to this general rule [55], in particular if the cross-section is dominated by a peak (in our case the Higgs resonance region) that occurs towards the higher end of the distribution function and also, to a lesser extent, if the cross-section grows because of the opening up of a threshold.
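The dominance of v ∼ 0.4 at freeze-out can be made plausible with the leading non-relativistic (Maxwell-Boltzmann) thermal weight; this is a simplified stand-in for the exact kernel used by micrOMEGAs:

```python
# Minimal sketch of the leading non-relativistic thermal weight for averaging
# sigma*v, kappa(v) ~ v^2 * exp(-x_f * v^2 / 4) (an approximation to the exact
# micrOMEGAs kernel). Its maximum sits at v = 2/sqrt(x_f), i.e. v ~ 0.4 for
# x_f ~ 20-25, and v > 0.8 is strongly weighted down.
import math

def weight(v, x_f=22.5):
    return v**2 * math.exp(-x_f * v**2 / 4.0)

v_peak = 2.0 / math.sqrt(22.5)             # ~ 0.42
suppression = weight(0.8) / weight(v_peak)  # well below 1
```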
In our analyses, for the most general case, we compute the cross-sections for values of v as high as v = 0.9 (or even v ∼ 1). Going to such high values of v makes sense only if the cross-section peaks dramatically, as is the case for the resonance through the very narrow SM Higgs. Indeed, for the point M_X = 57 GeV, v = 0.82 corresponds to √s ≈ M_h = 125 GeV. Observe that for M_X = 70, 72 GeV, v = 0.8 does not permit the production of both Ws on-shell but only of one of them, and therefore we must consider Wff.
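The two kinematical statements above follow directly from Equation 5.1 and can be verified in a few lines (the masses are those quoted in the text):

```python
# Quick check of the kinematics of Eq. 5.1, s = 4 M_X^2 / (1 - v^2/4):
# for M_X = 57 GeV, v = 0.82 lands right on the Higgs resonance, while for
# M_X = 70 GeV even v = 0.8 stays below the WW threshold (2 M_W ~ 160.8 GeV).
import math

def sqrt_s(m_x, v):
    return math.sqrt(4.0 * m_x**2 / (1.0 - v**2 / 4.0))

on_higgs = sqrt_s(57.0, 0.82)   # ~ 125 GeV ~ M_h
below_ww = sqrt_s(70.0, 0.80)   # ~ 153 GeV < 160.8 GeV
```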
In the case of co-annihilation, the Boltzmann factor is very penalising. In our case, this concerns AX co-annihilation when the mass difference between these two particles is small. The effective weight relative to the XX annihilation rate is exponentially suppressed for large δ, so that co-annihilation effectively does not take place; if δ is too small and the associated co-annihilation cross-sections are large, we instead obtain too small an Ω_DM. For co-annihilation, the relative velocity is calculated from the invariant s of the co-annihilating particles X and A; for M_A = M_X (m_− = 0) we recover the usual annihilation relative velocity of Equation 5.1. As remarked earlier, when calculating annihilation cross-sections for DM, SloopS modifies the phase-space factor such that the code returns σv rather than σ, which could be ill-defined at v = 0. We give, for DM annihilation, σv cross-sections in units of 10⁻²⁶ cm³ s⁻¹; our conversion factor to translate σv expressed in GeV⁻² into these units is c_0 = 2.99792 × 3.8937966 × 10⁸ ≈ 1.16732 × 10⁹. (5.5)

6 XX → ZZ, W+W− at one-loop as a warm-up

The XX → ZZ, W+W− annihilations, with both vectors on-shell, are too efficient to account for a relic density in accordance with observation within the standard cosmological model of freeze-out. This is the reason why DM masses above 80 GeV do not feature in Table 2. However, many of the benchmark points in Table 2 survive because, if one of the vector bosons is off-shell, the cross-sections are no longer so large. The 2 → 3 cross-sections XX → Wff and XX → Zff do carry the salient features of the on-shell production; it is much more transparent to investigate these features on the 2 → 2 processes rather than on the more technically challenging 2 → 3 processes that we study in [57]. Besides, XX → W+W−, ZZ for M_X > 80 GeV may still constitute a viable model, since it does not lead to overabundance but would not account for all of DM.
It would then have to be supplemented by a new ingredient to the IDM. In any case, our aim here is to unravel some important characteristics of these 2 → 2 cross-sections. Because of SU(2) symmetry, XX → W+W− and XX → ZZ share common features. At one loop, we concentrate first on XX → ZZ to bring up the most important features that will later help understand our full one-loop calculations of XX → Wff and XX → Zff. We do this because XX → ZZ is easier to compute, since there is no need to consider real corrections of the sort XX → W+W−γ that are necessary to regulate the infrared divergences of the virtual corrections. We take M_X = 100 GeV and require all constraints of Section 4 to hold, apart from the relic density within the Planck limit. Independently of the direct detection limit, we keep λ_L very small, so that we are in a situation similar to that of the 2 → 3 processes we will study and whose characteristics are given by the benchmark points. For the same reason, the different choices of M_A, λ_L correspond to λ_3, λ_4, λ_5 values similar to those found in the benchmark points for XX → W+W−, ZZ. At one loop, one needs to specify the value of λ_2; for each point, we take the values λ_2 = 0.01, 1, 2, the same values we considered in our study of the heavy-mass IDM [5]. The aim of this introductory analysis is to understand the following key points: i) Since we are in scenarios where M_h < 2M_X and cannot take an OS definition of λ_L based on the input Γ_h→XX, we have to rely on the MS renormalisation scheme. We therefore want to investigate how large the scale dependence of the scheme is and whether we can advocate a choice for an optimum scale.
ii) Is perturbativity maintained at one-loop? Could the one-loop radiative corrections turn out to be too large for certain combinations of the underlying parameters?
iii) How much do the corrections depend on λ 2 ? If the impact of λ 2 is not small, phenomenological considerations based on tree-level analyses should be reconsidered.
iv) As an aside, we can ask whether there could be circumstances when a running of α (defined at M Z as assumed in micrOMEGAs) could account for the bulk of the corrections regardless of the value of λ 2 .
v) The scale dependence and the λ_2 dependence were touched upon in our study of the heavy-mass IDM [5], where we promised to investigate these issues further. We see here how the scale dependence can be derived quantitatively and how the λ_2 dependence is not all contained in β̃_λL.
Answers to these questions will guide us when we study the 2 → 3 processes. Note that we investigated the first three items above in our study of Γ_h→XX in the MS scheme in Section 3.7.
To unravel some key features of the one-loop corrections, we consider 5 test points. While the DM mass is fixed at M_X = 100 GeV for all test points, we consider different values of λ_3,4,5 with the constraint that λ_L ≪ 1. This allows us to generate different values of β̃_λL as well as different values of M_A,H±, with nonetheless M_A ∼ M_H± (within the T-parameter constraint). The characteristics of the test points are listed in Table 4.
The investigation of the tree-level cross-section gives us the λ_L dependence, which will then determine the scale dependence at one loop in the MS scheme: at one loop, the λ_L dependence tracks the scale dependence, which is carried only by the counterterm δλ_L. At tree level, the XX → ZZ Born amplitude is built up from the s-channel SM Higgs exchange, the quartic XXZZ interaction and the exchange of A in the t-channel, see Figure 2. For XX → W+W−, it is H± which is involved in the t-channel exchange. The tree-level cross-sections for XX → W+W− and XX → ZZ for test point P1, characterised by λ_L = 10⁻⁴ and M_X, M_A, M_H± = 100, 120, 130 GeV, are shown in Figure 3. The ratio between the W+W− and the ZZ cross-sections is almost constant, changing by only about 6% in the range v = 0-0.9 (and by less than 3% in the most relevant range v = 0-0.4), as shown in Figure 3. For smaller v, XX → ZZ is slightly impacted by the threshold factor (for M_X = 100 GeV, ZZ is closer to its threshold than W+W−), which explains why the ratio is higher at small v than at larger v. The tree-level behaviour helps us understand why these two cross-sections share common features also at one loop.
As we pointed out before, the λ_L dependence is not due to Higgs exchange only: it intervenes also in the XXZZ quartic coupling. At the cross-section level, these contributions lead to a quadratic λ_L² dependence, while their interference with the t-channel amplitudes leads to a linear λ_L dependence. For each point in parameter space and for any given energy, or relative velocity, the λ_L dependence of the tree-level cross-section, σ_tree, can then be written as σ_tree = σ_0 + σ_1 λ_L + σ_2 λ_L². (6.1) For XX → ZZ, both Higgs exchange and the XXZZ quartic coupling contribute to σ_2, whose value depends on M_X and M_h for a given relative velocity. The t-channel A-exchange diagrams and part of the XXZZ coupling contribute to σ_0, which therefore depends on M_A (and the SM gauge coupling). σ_1 results from the interference of the amplitudes contributing to σ_2 and σ_0. We choose to derive σ_0,1,2 numerically from SloopS: we generate 3 values of σ_tree corresponding to 3 values of λ_L (we take λ_L = 0, 1, 2) with all other parameters fixed, and reconstruct the λ_L dependence of Equation 6.1 from a fit. The fit is found to be excellent when comparing the fitted cross-section with a direct (tree-level) calculation at a fourth value of λ_L not used for the fit. Obviously, for this 2 → 2 process, the λ_L dependence could have been derived analytically; the main reason we propose this procedure is that we follow it also for the 2 → 3 processes [57], for which an analytical expression is rather involved.
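The extraction of σ_0,1,2 described above amounts to solving a 3×3 linear system; a minimal sketch (the sample numbers below are placeholders standing in for SloopS evaluations):

```python
# Extraction of sigma_0,1,2 in Eq. 6.1, sigma_tree = s0 + s1*lam + s2*lam^2,
# from three tree-level evaluations at lambda_L = 0, 1, 2, as in the text.
def fit_sigma012(sig_at_0, sig_at_1, sig_at_2):
    """sig_at_i: sigma_tree evaluated at lambda_L = i. Solves the 3x3 system."""
    s0 = sig_at_0
    s2 = (sig_at_2 - 2.0 * sig_at_1 + sig_at_0) / 2.0   # second difference / 2
    s1 = sig_at_1 - s0 - s2
    return s0, s1, s2

def sigma_from_fit(coeffs, lam):
    """Cross-check of the fit at a fourth value of lambda_L, as in the text."""
    s0, s1, s2 = coeffs
    return s0 + s1 * lam + s2 * lam**2

# Placeholder example: a "true" (s0, s1, s2) = (3.0, 0.5, 0.2) is recovered
# exactly from its values at lambda_L = 0, 1, 2.
coeffs = fit_sigma012(3.0, 3.7, 4.8)
```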

XX → ZZ at one-loop: Some examples
We start by considering XX → ZZ for a fixed value of the velocity, v = 0.4, before showing the full one-loop corrections in the range 0 < v < 0.9. At the heart of our discussion is the λ_L dependence of the cross-sections, which we parametrise according to Equation 6.1.
XX → ZZ at one loop shows a very rich structure that accesses the parameters of the full model, in particular the λ_2 parameter of the dark sector. The latter shows up through rescattering effects, XX → AA and XX → XX, see Figure 4.
Before showing the results of the full one-loop corrections in the MS scheme, let us derive the scale variation from the λ_L dependence of the tree-level cross-section, σ_tree (see Equation 6.1). The scale variation enters only through that of λ_L, ∂λ_L/∂ log μ̄ = β̃_λL/16π², so that one can relate the one-loop correction, δσ, between two scales as δσ(μ̄_2) = δσ(μ̄_1) − (1/16π²) (σ_1 + 2λ_L σ_2) β̃_λL ln(μ̄_2/μ̄_1). (6.3) The numerical results are collected in Table 4. We could have run the code for only one value of μ̄_dim and derived the results for any other scale through Equation 6.3; extracting the results for the different scales directly from the code is testimony that the code works very well and that the implementation is consistent.
For later reference, note that with λ_L ≪ 1 (so that σ_tree ≈ σ_0), Equation 6.3 turns, in terms of the relative correction (normalised to the tree-level cross-section), into δσ/σ_tree(μ̄_2) = δσ/σ_tree(μ̄_1) − (1/16π²) (σ_1/σ_0) β̃_λL ln(μ̄_2/μ̄_1). (6.4) Equation 6.4 indicates that the scale variation, and the associated correction, is large whenever β̃_λL is large, but also when σ_1/σ_0 is large. For the test points P0-P5, σ_1/σ_0 is between 5 and 6.
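The size of the scale ambiguity implied by Equation 6.4 is easy to quantify; the numbers below are illustrative, using σ_1/σ_0 and β̃_λL values of the order quoted in the text:

```python
# Shift of the relative one-loop correction, Eq. 6.4 (valid for lambda_L << 1):
# moving the MS-bar scale from mu1 to mu2 changes delta_sigma/sigma_tree by
# -(sigma1/sigma0) * beta_tilde / (16 pi^2) * ln(mu2/mu1).
import math

def scale_shift(sigma1_over_sigma0, beta_tilde, mu1, mu2):
    return -sigma1_over_sigma0 * beta_tilde / (16 * math.pi**2) * math.log(mu2 / mu1)

# With sigma1/sigma0 ~ 5.5 and beta_tilde ~ 20 (our proposed perturbativity
# limit), a factor-of-two scale change already moves the correction by ~ 50%:
shift = scale_shift(5.5, 20.0, 200.0, 400.0)
```

This makes concrete why β̃_λL ∼ 20 is taken below as the threshold where the scale ambiguity becomes unacceptable.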
Our results for the full one-loop corrections to the cross-section XX → ZZ for points P0-P5, evaluated at v = 0.4, are given in Table 4. Besides listing the characteristics of the points P0-P5, Table 4 gives the λ_L dependence of the tree-level cross-sections as well as the β̃_λL of the model, such that the reader can compare the analytical scale variation (Equation 6.4) with the numerical output of SloopS for the full one-loop corrections at the different renormalisation scales and values of λ_2. In passing, note that the relative dependence of the cross-section on λ_L is not large; this dependence will be stronger when we study the 2 → 3 processes [57]. Table 4 is extremely instructive and reveals many important points.
Table 4: Tree-level cross-section and one-loop correction, δσ(μ̄_dim, λ_2), for different values of λ_2 and different renormalisation scales, μ̄_dim, for λ_L, for XX → ZZ at v = 0.4 (√s = 204.124 GeV). The percentage change of the one-loop correction is also given in parentheses, unless the value is higher than 100%, in which case the result is highlighted in bold as a warning of a breakdown of one-loop perturbativity. The loop results (from the numerical evaluation of the automated calculations) are given with sufficient accuracy to compare against the analytical computation of Equation 6.3. The first entry in the second column is the tree-level cross-section, while the second entry corresponds to ∆ = σ_tree(α⁻¹(M_Z²) = 128.907) − σ_tree(α⁻¹(0) = 137.036), the "correction" obtained by using α(M_Z²) instead of the Thomson-limit α at tree level; the value of ∆ is to be compared with the result of the full one-loop calculation. The first column gives the parameters of the model, with M_X = 100 GeV for all points, and the corresponding λ_3,4,5. In the same column, we also give β̃_λL and a fit of the tree-level cross-section as a function of λ_L (see text).
Cross-sections are for σv in units of 10 −26 cm 3 s −1 .
• The rescattering effects within the dark sector, through the λ_2 dependence, are confirmed. The one-loop calculation shows that λ_2 enters the prediction of the annihilation cross-section not only through the running of the coupling λ_L, via β̃_λL, but also through an added genuine one-loop contribution. Indeed, we have chosen test point P0 because its β̃_λL does not depend on λ_2; the results for P0 therefore show a dependence of the correction on λ_2 that is not an effect of the scale dependence of λ_L. For P0, this dependence can be derived easily from Table 4 by taking the difference between any two columns (that is, between two values of λ_2 evaluated at the same scale). We find δσ(λ_2) = −2.56 λ_2 or, in terms of the relative correction, −6.5 λ_2 (%), which is not negligible for values of λ_2 > 1. For the other test points, there is in addition the λ_2 dependence that enters through β̃_λL. Therefore, only a full one-loop correction captures the complete λ_2 dependence, beyond that contained in β̃_λL. The λ_2 dependence is important.
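The column-difference extraction described for P0 can be written down trivially; the −2.56 slope below is the value quoted in the text, and the sample inputs are placeholders for two columns of Table 4:

```python
# Isolating the genuine one-loop lambda_2 dependence for a point whose
# beta_tilde does not depend on lambda_2 (test point P0): the difference
# between corrections at the same scale but different lambda_2 gives the
# linear rescattering slope, delta_sigma(lambda_2) ~ slope * lambda_2.
def lambda2_slope(dsig_a, lam2_a, dsig_b, lam2_b):
    """Slope of delta_sigma vs lambda_2 from two columns of Table 4."""
    return (dsig_b - dsig_a) / (lam2_b - lam2_a)

# Placeholder inputs consistent with the quoted P0 slope of -2.56
# (in 10^-26 cm^3 s^-1 per unit lambda_2, i.e. about -6.5% per unit lambda_2):
slope = lambda2_slope(0.0, 0.01, -2.56 * (2.0 - 0.01), 2.0)
```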
• The running of α at the scale of the process, of order M_Z, does not account for the full one-loop correction. It can be considered an acceptable description only for an almost vanishing λ_2 and very small λ_3,4,5. In fact, for test point P0 with λ_2 = 2, the corrections practically vanish, showing that extra corrections totally offset the one from the running of α. The use of a running α is therefore of very limited applicability.
• As the split between the DM mass and the masses of the other scalars increases, the one-loop correction and the scale dependence increase. This can be (mostly) understood on the basis of the corresponding value of β̃_λL: a large β̃_λL is a harbinger of a large scale uncertainty, and an inadequate choice of scale can further exacerbate a large correction driven by β̃_λL. β̃_λL increases with λ_2, but even for λ_2 ∼ 0, β̃_λL can be large, leading to a large correction and casting doubt on the perturbative expansion. For example, P3 and most particularly P4 have a very large scale uncertainty for all values of λ_2. Based on these one-loop analyses, we propose that such models fail the perturbativity test and should therefore not be considered, even though they may pass the tree-level-based experimental limits: one should not consider models with β̃_λL larger than 20. For a start, Table 4 shows that for higher values of β̃_λL the corrections exceed 100% unless one carefully picks an optimal scale, and in any case, for large values of β̃_λL, one is subject to violent variations even when moving slightly away from the optimum scale. Second, taking QCD as an example with g_s² = 4πα_s, one has (in our notation) β_gs ∼ −13, while (for the most relevant quantity) β_αs = −2.5 at energy scales where α_s is perturbative (α_s ∼ 0.12); for Q² ∼ (1 GeV)², where QCD is non-perturbative, β_αs ∼ 20. Therefore, β̃_λL ∼ 20 should be a good indicator of the onset of the non-perturbative regime. Translated into the underlying parameters of the IDM, this indicator suggests that we should not consider IDM benchmark points to be theoretically valid if M_A (M_H±) > M_X + 150 GeV, since the corresponding λ_3,4,5 lead to β̃_λL in excess of 20. This requirement of one-loop perturbativity reduces the IDM parameter space considerably.
We take this conclusion into account to narrow down the possibilities for the benchmark points of Table 2: we consider benchmark points B, C, D, and H to be theoretically unreliable.
• In our study of the scale dependence of the partial width, Γ(h → XX), in section 3, we advocated the use of µ = max(2M_X, M_A) as the optimum scale at which the radiative corrections are minimised. Table 4 suggests a very similar behaviour. For the annihilation processes, the typical energy scale set by the kinematics is √s ≈ 2M_X, while the typical internal mass is M_A. Although we are now dealing with a process involving 4-point functions, the arguments put forward in the case of h → XX still apply, especially because the scale dependence is embedded in 2- and 3-point functions. We find that µ = max(2M_X, M_A) is once again the best scale. Scales much smaller or larger than this choice lead to very large corrections. This is most flagrant for test points P3 and P4, which we dismiss on the grounds of perturbativity; even for these, following the prescription and choosing µ = M_A would lead to acceptable corrections. For the "perturbative" test points P0, P1, and P2, the prescription is most evident for P2. The indication from P1 with λ_2 = 2 is misleading: although the corrections are small for all scales (β_λL is small here), µ = 2M_X does not give the smallest correction because of the λ_2 contribution that does not originate from β_λL. For λ_2 = 0.01, µ = 2M_X does give the smallest correction. These observations will also be confirmed when we study XX → Zff and XX → W ff [57].
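The size of the scale variation can be made concrete with a leading-log sketch: run λ_L away from the optimal scale µ0 = max(2M_X, M_A) with d λ_L/d ln µ = β_λL/(16π²) and propagate the shift into a tree-level cross-section that scales as λ_L². The numerical inputs below (M_X = 100 GeV, M_A = 200 GeV, λ_L(µ0) = 0.1) are illustrative assumptions, not values taken from the tables:

```python
import math

def lambda_L(mu, lam0, mu0, beta_lamL):
    # Leading-log running, d lambda_L / d ln(mu) = beta_lamL / (16 pi^2)
    return lam0 + beta_lamL / (16.0 * math.pi**2) * math.log(mu / mu0)

def sigma_scale_shift(mu, lam0, mu0, beta_lamL):
    # Relative shift of a tree-level cross-section sigma ~ lambda_L^2
    # when the coupling is run from mu0 to mu.
    return lambda_L(mu, lam0, mu0, beta_lamL)**2 / lam0**2 - 1.0

# Hypothetical inputs: optimal scale mu0 = max(2 M_X, M_A)
MX, MA = 100.0, 200.0
mu0 = max(2.0 * MX, MA)
for beta in (2.0, 20.0):
    for mu in (mu0 / 2.0, mu0, 2.0 * mu0):
        print(f"beta={beta:5.1f}  mu={mu:6.1f}  "
              f"shift={sigma_scale_shift(mu, 0.1, mu0, beta):+7.1%}")
```

For this (small) reference coupling, a benign β_λL = 2 gives an O(15%) variation over µ0/2 to 2µ0, while β_λL = 20 moves the tree-level cross-section by O(100%), illustrating why configurations with such a large β constant fail the perturbativity criterion whatever the scale choice.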
We also find that the λ_L dependence of the XX → W+W− cross-section is practically the same as that of XX → ZZ. XX → W+W− will therefore exhibit the same scale dependence, as we will see next.
6.3 XX → ZZ and XX → W+W− at one-loop as functions of the relative velocity

The particular study at v = 0.4 summarises, in fact, the analyses for the whole range of v. We therefore briefly discuss here the results for test point P1 for the annihilations into both ZZ and WW.

Figure 5:
The relative (to the tree-level) full one-loop corrections to the cross-sections XX → W+W− and ZZ as functions of the relative velocity for test point P1 with µ = M_X/2, M_X, 2M_X (from left to right). Three values of λ_2 are considered. The percentage deviation due to choosing α(M_Z²) as input instead of α(0) is also shown. This correction (∼ 13%) is the same for both ZZ and WW; it is therefore superimposed for the two cross-sections and appears as a common line.
Recall that P1 has a small β_λL, so the scale variation is modest not only for v = 0.4, as we discussed, but is expected to remain so for other values of v. Indeed, Figure 5 shows that the radiative corrections, over the range of v relevant for the calculation of the relic density, are within a couple of per cent of the results obtained with v = 0.4, for all cases of the scale µ and the parameter λ_2. The radiative corrections increase slightly, within a 2% margin, with the relative velocity. The increase affects ZZ more than W+W− as v increases, because ZZ is more sensitive to the threshold for ZZ production at M_X = 100 GeV. Otherwise, observe that, as expected in terms of relative corrections, the results are practically the same for ZZ and W+W− production in the same configuration of the scale and λ_2. More importantly, what the full one-loop correction teaches us is that the "improved" tree-level cross-section, with the use of α(M_Z²) as assumed by default in micrOMEGAs, is a good approximation only for vanishingly small λ_2. This "improved" cross-section cannot, by construction, catch the genuine one-loop effects from the full λ_2 dependence, and its prediction can be quite far from the full one-loop correction for large values of λ_2. For instance, for µ = 2M_X and λ_2 = 2, the α(M_Z²) cross-sections are practically 20% higher than the full one-loop result. In general, measured from the tree-level cross-section, the corrections decrease as λ_2 increases and also as the scale increases, a behaviour fully explained by our λ_L, β_λL discussion around Equation 6.4. We therefore recommend, for such scenarios with small β_λL, applying to the prediction of micrOMEGAs a theoretical uncertainty ranging from −20% to −4% to account for the scale uncertainty and the λ_2 dependence that a tree-level calculation is not sensitive to.

Conclusions
Compared to a much studied model like the MSSM (minimal supersymmetric standard model), the IDM, with the addition of a single scalar doublet to the SM, is quite simple and economical in terms of the number of parameters it involves. Yet it has a rich structure and constitutes an excellent laboratory to address many technical issues in the renormalisation of a BSM model, in particular the question of which physical input parameters to define in order to carry out a full one-loop calculation. Using the masses of the new particles of the model seems the most unambiguous scheme to define the model. However, even if all masses are given, we still need two more parameters, λ_L (or some other combination of the underlying parameters of the Lagrangian) and λ_2, to compute observables, most importantly cross-sections. λ_L, which measures the strength of the coupling of the SM Higgs to the DM particle, is needed for the (tree-level) calculation of the annihilation of DM to SM particles, besides the gauge couplings that we define from SM observables. The quartic coupling λ_2 represents interactions solely within the dark sector. Nonetheless, loop effects introduce an important λ_2 dependence of the annihilation cross-sections. This indirect one-loop effect therefore introduces an uncertainty in the prediction of an observable such as the relic density. λ_L also introduces an uncertainty of a different sort. Either this coupling is extracted from the partial invisible width of the Higgs, phase space allowing, or, as we advocated in this paper, an MS-bar scheme is prescribed to allow a full one-loop calculation. The latter introduces a renormalisation scale dependence in the one-loop predictions. If the MS-bar prescription is applied to λ_L only, we have shown how the scale dependence can be tracked through the β constant for λ_L (β_λL) and the λ_L parametrisation of the tree-level observable.
We have also argued how to reduce the scale uncertainty by choosing an optimal scale, which we conjecture to be close to the largest scale involved in the process. The overall theoretical uncertainty should be estimated by varying the scale around this optimal scale and also by varying λ_2, a quantity the tree-level annihilation of DM to SM particles does not depend on. The study of the theoretical uncertainty at one-loop has also led us to introduce a new criterion for the perturbativity requirement: only configurations with a small enough β_λL qualify. From the point of view of the relic density calculation, the IDM, even in the narrow range of low masses (55 < M_X < 75 GeV), involves the main known mechanisms of the freeze-out scenario: annihilation in the continuum, co-annihilation, and annihilation through a resonance. We have conducted a very thorough investigation of the allowed parameter space of the model, including new LHC and direct detection data, to delimit the range of the low mass IDM scenarios and the DM annihilation mechanisms that are involved. A set of representative IDM points covering these mechanisms was presented. It will serve as a starting point for full one-loop calculations in these three scenarios. XX → W ff (a 2 → 3 process) at one-loop, which has never been attempted before, will figure in all three scenarios but with varying degrees of importance. Salient features of this process are contained in the 2 → 2 processes XX → W+W− and ZZ, which we have studied in this paper. Crucial process-specific technical issues, beyond the general renormalisation procedure presented at some length here, are covered in the accompanying papers that address each of the three scenarios. The calculation of the relic density at one-loop, when the mechanism is dominated by co-annihilation, will be presented in Ref. [56].
If we set aside the induced sub-dominant production through XX → W ff (a 2 → 3 process), the dominant mechanism is AX co-annihilation into a fermion pair. In Ref. [57], we consider 3 benchmark points to present results for XX → W ff and XX → Zff and show how the corrected cross-sections translate into the calculation of the relic density. Finally, in Ref. [55], devoted to the resonance region, we show how to extend the OS scheme detailed in the present paper by supplementing it with a complex scheme that avoids the issue of double counting in the presence of a width. The width, necessary in the tree-level calculation, is in fact induced at the loop level, since it represents the imaginary part of the self-energy contribution. Again, the IDM is a good example of how a loop calculation for a process that proceeds through a resonance should be conducted.