Relic density of Dark Matter in the Inert Doublet Model beyond Leading Order. I) The Heavy Mass Case

A full renormalisation of the Inert Doublet Model (IDM) is presented and exploited for a precise calculation of the relic density of Dark Matter (DM) at one loop. In this first paper, we study the case of a DM candidate with $m_{\rm DM} \sim 500$ GeV. In this regime, the co-annihilation channels are important. We therefore compute, for a wide range of relative velocities, the full next-to-leading order electroweak corrections to the 7 annihilation/co-annihilation processes that together contribute $\sim 70\%$ of the relic density of DM. These corrected cross-sections are interfaced with {\tt micrOMEGAs} to obtain the one-loop correction to the freeze-out relic density. Given the accurate measurement of this observable, the one-loop corrections are relevant. We discuss the one-loop renormalisation scheme dependence and point out the influence, at one loop, of a parameter that solely describes scattering within the dark sector. A tree-level computation of the relic density is not sensitive to this parameter.


Introduction
The inert doublet model (IDM) consists of adding a scalar doublet, Φ_2, to the Standard Model of particle physics (SM) [1]. Endowing this simplest extension of the SM with an unbroken Z_2 symmetry, under which Φ_2 is odd while all other fields of the SM are even, guarantees the stability of the lightest inert particle, thus providing a possible dark matter (DM) candidate [2]. The new scalars of this additional doublet couple to the Higgs and the gauge bosons but not to the fermions of the SM. The model thus provides a nice link between the Higgs sector, as the source of electroweak symmetry breaking, and DM [3]. It has received a lot of attention, primarily for DM studies but also for collider observables; see [4][5][6][7][8][9][10] for reviews and updates. The majority of these analyses were performed at leading order. The few exceptions where one-loop effects have been considered include i) the computation of the tri-linear self-coupling of the SM-like Higgs boson [11][12][13][14], ii) one-loop corrections to the Higgs effective potential [3,15] and the running of the Higgs/scalar masses and of the scalar parameters [6] (see also [16]), iii) one-loop induced cross-sections relevant for direct detection [17] and iv) loop-induced photon production: Higgs decay to a photon pair [18] and DM annihilation to photons [19,20]. Yet, the relic density of DM as extracted by PLANCK [21] is a precision measurement at the percent level that calls for an equally precise theoretical prediction. In particular, the perturbative DM annihilation cross-sections that drive the relic density must be evaluated beyond tree-level. This has not yet been performed in the IDM, despite the fact that the relic density sets the most stringent constraint on the model.
This is somewhat understandable, since this task requires a coherent, full renormalisation of the model, the evaluation of many processes at one-loop order and the inclusion of these processes in the evaluation of the relic density. It is the purpose of this paper to present such a programme, together with first results for one-loop corrected processes and their effect on the value of the freeze-out relic density in the IDM.
The IDM is of course subject to various experimental constraints that leave two viable scenarios: one where the DM mass is around M_W and another where the mass is above 500 GeV. In this first paper we are interested in a scenario with M_DM ∼ 500 GeV. Non-perturbative effects, such as the electroweak Sommerfeld effect [5,22-25], occur for much higher masses, above the TeV scale, and will not be treated here.
The plan of the paper is as follows. In section 2 we outline the model and its parameters. The renormalisation procedure is exposed in section 3. Section 4 reviews the various constraints on the model and motivates our benchmark scenario. Section 5 presents our findings for the full next-to-leading order corrections to every process that, at tree-level, contributes more than 5% to the relic density; because of co-annihilation, 7 such processes need to be considered. Section 6 translates these improved predictions into a corrected value of the relic density by interfacing our cross-sections with micrOMEGAs. Section 7 discusses the impact of the coupling that acts solely within the dark sector. We conclude in section 8. Technical material is relegated to the appendices.

The Inert Doublet Model at the classical level
To the Higgs doublet Φ_1 of the SM, a doublet Φ_2 is added. An unbroken Z_2 symmetry is imposed under which Φ_2 is odd while all other fields (of the SM) are even. The immediate consequence is that Φ_2 cannot couple to fermions to any order (in perturbation theory), which guarantees the stability of the lightest inert particle, thus providing a possible dark matter candidate. The Lagrangian of the IDM can be written as
$$\mathcal{L}_{\rm IDM} = \mathcal{L}_{\rm SM} + |D_\mu \Phi_2|^2 - V_{\rm IDM},$$
where L_SM is the SM Lagrangian, while the scalar potential is given by
$$V_{\rm IDM} = \mu_1^2 |\Phi_1|^2 + \mu_2^2 |\Phi_2|^2 + \lambda_1 |\Phi_1|^4 + \lambda_2 |\Phi_2|^4 + \lambda_3 |\Phi_1|^2 |\Phi_2|^2 + \lambda_4 |\Phi_1^\dagger \Phi_2|^2 + \frac{\lambda_5}{2}\left[(\Phi_1^\dagger \Phi_2)^2 + {\rm h.c.}\right]. \quad (2.2)$$
In this equation, the µ_i and λ_i are real. Since the unbroken Z_2 symmetry forbids tadpole terms for Φ_2 (and therefore a vacuum expectation value for Φ_2) as well as mixing with Φ_1, we can directly parametrise the doublets in terms of the physical scalars,
$$\Phi_1 = \begin{pmatrix} G^+ \\ (v + h + i G^0)/\sqrt{2} \end{pmatrix}, \qquad \Phi_2 = \begin{pmatrix} H^+ \\ (H + i A)/\sqrt{2} \end{pmatrix},$$
where v ≈ 246 GeV is the SM vacuum expectation value (vev), defined from the measurement of the W (M_W) and Z (M_Z) masses. We must note that quartic couplings of the type HHW⁺W⁻ are also present, arising from the gauge-covariant kinetic term of Φ_2. Annihilation of DM into vector bosons proceeds in part through these gauge interactions and in part through the scalar potential couplings, to which we now turn our attention.

Minimisation of the potential
Minimisation of the potential amounts to requiring vanishing tadpoles for Φ_1, leading to the constraint
$$\mu_1^2 + \lambda_1 v^2 = 0.$$
There is no corresponding tadpole term for Φ_2 because of the unbroken Z_2 symmetry. The no-tadpole condition will be maintained at all orders.

Mass spectrum and scalar self-interactions
By collecting the terms bilinear in the physical scalar fields of V_IDM, we obtain the mass spectrum of the scalar sector of the IDM,
$$M_h^2 = 2\lambda_1 v^2, \qquad M_H^2 = \mu_2^2 + \lambda_L v^2, \qquad M_A^2 = \mu_2^2 + \lambda_A v^2, \qquad M_{H^\pm}^2 = \mu_2^2 + \frac{1}{2}\lambda_3 v^2,$$
where
$$\lambda_{L/A} = \frac{1}{2}\left(\lambda_3 + \lambda_4 \pm \lambda_5\right).$$
In the following, we will consider H as the DM candidate. The underlying reason is that H and A can be treated on an equal footing: choosing A as the DM candidate would simply correspond to a flip in sign, λ_5 → −λ_5, without changing the phenomenology. The reason is the following (we borrow arguments from [26]): taking H as the DM candidate, and thus M_H < M_A, M_{H±}, from Eqs. 2.9 and 2.10 we obtain
$$\lambda_4 + \lambda_5 < 0 \quad {\rm and} \quad \lambda_5 < 0. \quad (2.12)$$
The converse situation, with M_A < M_H, M_{H±}, corresponds to
$$\lambda_4 - \lambda_5 < 0 \quad {\rm and} \quad \lambda_5 > 0. \quad (2.13)$$
For λ_5 = 0, H and A are mass degenerate. All portal triple and quartic couplings of the SM-like Higgs h to H/A are proportional to λ_{L/A}. Indeed, we can write, at tree-level, the hHH and hAA couplings as
$$\lambda_{hHH} = \lambda_L\, v, \qquad \lambda_{hAA} = \lambda_A\, v. \quad (2.14)$$
Thus considering one or the other scalar as the DM candidate amounts to switching λ_L ↔ λ_A. In the same vein, the hH⁺H⁻ coupling is controlled by λ_3 (it is proportional to λ_3 v). The quartic couplings between the SM Higgs and the new scalars are set by λ_{3,L,A}:
$$\lambda_{hhHH},\ \lambda_{hhAA},\ \lambda_{hhH^+H^-} = \lambda_L,\ \lambda_A,\ \lambda_3.$$
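The tree-level spectrum above is easy to evaluate numerically. The helper below is an illustrative sketch of ours (it is not taken from SloopS, LANHEP or micrOMEGAs) implementing the mass relations quoted above:

```python
import math

def idm_masses(mu2_sq, lam3, lam4, lam5, lam1, v=246.0):
    """Tree-level IDM scalar masses (GeV) from the potential parameters.
    Uses M_h^2 = 2*lam1*v^2, M_{H/A}^2 = mu2^2 + lam_{L/A}*v^2,
    M_{H+-}^2 = mu2^2 + lam3*v^2/2, with lam_{L/A} = (lam3+lam4+-lam5)/2."""
    lamL = 0.5 * (lam3 + lam4 + lam5)
    lamA = 0.5 * (lam3 + lam4 - lam5)
    Mh = math.sqrt(2.0 * lam1 * v**2)
    MH = math.sqrt(mu2_sq + lamL * v**2)
    MA = math.sqrt(mu2_sq + lamA * v**2)
    MHc = math.sqrt(mu2_sq + 0.5 * lam3 * v**2)
    return Mh, MH, MA, MHc
```

For λ_5 = 0 the helper reproduces the H/A degeneracy discussed in the text.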

Counting parameters
In order to survey the IDM parameter space it is important to count the number of independent parameters in the scalar sector. Setting aside the tadpole condition and the 125 GeV (SM) Higgs mass, the IDM requires 5 extra parameters,
$$\{\mu_2^2,\ \lambda_2,\ \lambda_3,\ \lambda_4,\ \lambda_5\}.$$
It is interesting to trade 3 of these parameters for the physical masses of the new scalars through Eqs. (2.8-2.10). This will be important for the renormalisation programme when we adopt an on-shell scheme. The model can therefore be defined through the following two possible trade-offs,
$$\{M_H,\ M_A,\ M_{H^\pm},\ \lambda_L,\ \lambda_2\} \quad (2.19)$$
or equivalently
$$\{M_H,\ M_A,\ M_{H^\pm},\ \mu_2^2,\ \lambda_2\}.$$
We set λ_2 apart as it describes couplings solely between the additional scalars, not involving the SM Higgs. At tree-level, for example, and for 2 → 2 annihilation processes, λ_2 is irrelevant. This means that at one-loop order, for annihilation processes, a renormalisation of λ_2 is not necessary. λ_4 and λ_5 can be reconstructed from a combination of the additional scalar masses,
$$\lambda_5 = \frac{M_H^2 - M_A^2}{v^2}, \qquad \lambda_4 = \frac{M_H^2 + M_A^2 - 2 M_{H^\pm}^2}{v^2}. \quad (2.21,\,2.22)$$
The extraction of λ_3 not only requires the knowledge of at least one scalar mass but also either a value of λ_L (or equivalently the hHH coupling) or the mass parameter µ_2, to wit
$$\lambda_3 = 2\lambda_L - \lambda_4 - \lambda_5 = \frac{2\left(M_{H^\pm}^2 - \mu_2^2\right)}{v^2}.$$
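The inverse map, from physical masses back to quartic couplings, can be sketched in the same spirit (again an illustrative helper of ours, assuming the tree-level relations of Eqs. 2.21-2.22):

```python
def idm_couplings_from_masses(MH, MA, MHc, lam_L, v=246.0):
    """Tree-level inversion: (lam3, lam4, lam5, mu2_sq) from the physical
    scalar masses (GeV) and the portal coupling lam_L."""
    lam5 = (MH**2 - MA**2) / v**2
    lam4 = (MH**2 + MA**2 - 2.0 * MHc**2) / v**2
    lam3 = 2.0 * lam_L - lam4 - lam5        # lam_L = (lam3 + lam4 + lam5)/2
    mu2_sq = MH**2 - lam_L * v**2           # from M_H^2 = mu2^2 + lam_L v^2
    return lam3, lam4, lam5, mu2_sq
```

For a quasi-degenerate spectrum, as in the benchmark of section 4, λ_4 and λ_5 come out vanishingly small, consistently with the smallness of the mass splittings.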

Renormalisation of the IDM
The presence of the Z_2 symmetry eases the renormalisation of the IDM tremendously. As a result of this symmetry there is no mixing, at any order, between the SM fields and the extra fields introduced by the IDM. The tadpole condition only applies to the SM part. The SM part, including the SM Higgs (and the Goldstone bosons), is renormalised independently and exactly as in the SM. We therefore follow an On-Shell (OS) scheme whose details can be found in Ref. [27]. We pursue the OS approach for all three extra physical scalar fields, H, A, H±, and therefore use the physical masses of these fields as input parameters instead of the parameters of the scalar potential. Nonetheless, there remain 2 parameters which we still need to define. As stated earlier, of all the parameters of the IDM, only λ_2 connects the extra fields exclusively among themselves; its renormalisation is not needed for one-loop annihilation processes into SM particles. However, one parameter (either µ_2 or λ_{L,A} or a combination of these) is still needed to fully define all the couplings between H, A, H± and the SM Higgs and Goldstones, see Eq. 2.19. For example λ_{L/A} has, at tree-level, a simple physical interpretation as the portal coupling hHH/hAA. To carry out the renormalisation programme and define the counterterms, shifts are introduced for the Lagrangian parameters and the fields. All bare quantities (X_0), in particular those of Eq. 2.2, are decomposed into renormalised quantities (X) and counterterms (δX),
$$X_0 = X + \delta X$$
for the parameters and
$$\phi_0 = \left(1 + \frac{1}{2}\delta Z_\phi\right)\phi$$
for the fields. The OS conditions on the physical scalars require that their masses be defined as pole masses of the renormalised one-loop propagator and that the residue at the pole be unity. With Σ_φφ(p²) the renormalised scalar two-point function at momentum p, we have (φ = h, H, A, H±)
$$\widetilde{\rm Re}\,\Sigma_{\phi\phi}(M_\phi^2) = 0, \qquad \widetilde{\rm Re}\,\frac{\partial\Sigma_{\phi\phi}}{\partial p^2}\Big|_{p^2 = M_\phi^2} = 0.$$
δM_H, δM_A and δM_{H±} directly give OS definitions for λ_4 and λ_5 through Eqs. 2.21 and 2.22. With only 3 physical masses, we cannot reconstruct all 5 counterterms from two-point functions.
We therefore revert to couplings between the Higgses. Remembering that λ_L measures the hHH coupling, we could extract the counterterm δλ_L from a measurement of the hHH coupling; we could equally have chosen the hAA or hH⁺H⁻ couplings. Sticking with λ_L, when M_h > 2M_H the invisible width of the Higgs, Γ(h → HH), is the observable of choice, especially because no infra-red divergence affects it. This observable would then be set as input, which is equivalent to stating that it receives no correction. We denote the amplitude for h → HH as A(h → HH) ≡ A_hHH; at tree-level this amplitude is proportional to λ_L v, and requiring the full one-loop renormalised amplitude to reproduce the input value fixes δλ_L. Another gauge-invariant but scale-dependent scheme is to use an MS definition where only the (mass-independent) ultraviolet divergent part, proportional to C_UV defined below, is kept. The coefficient of this ultraviolet divergent part is nothing but the one-loop β function (β_{λ_L}) of λ_L.
Here
$$C_{UV} = \frac{2}{\varepsilon} - \gamma_E + \ln 4\pi,$$
where ε = 4 − d, with d the number of dimensions in dimensional regularisation, and γ_E is the Euler constant. As discussed at length in Ref. [28], a general scheme can be defined as
$$\delta\lambda_L = \frac{\beta_{\lambda_L}}{16\pi^2}\left(C_{UV} + \ln\frac{\bar\mu^2}{Q_\lambda^2}\right), \quad (3.9)$$
where Q_λ is an effective scale that depends on the external momenta (hence the subtraction point) and on the internal masses introduced to define the counterterm, and μ̄ is the scale introduced by dimensional reduction. For the MS scheme, Q_λ = μ̄. For M_h < 2M_H, it is difficult to come up with a straightforward OS scheme for λ_L (or equivalently µ_2, once the mass counterterms for the extra scalars have been set). We have therefore chosen an MS scheme for λ_L according to Eq. 3.9. A formal OS extraction that would work for any configuration of the H and h masses could use the cross-section that builds up direct detection, namely Hq → Hq in the limit of zero Q² transfer, in effect isolating the λ_L contribution. But direct detection involves uncertainties through the introduction of parameters from nuclear matrix elements. Moreover, the λ_L contribution to direct detection can be swamped by the pure gauge contribution (which we discuss later).
As stated before, the counterterm for µ_2² is directly related to that of λ_L; from M_H² = µ_2² + λ_L v²,
$$\delta\mu_2^2 = \delta M_H^2 - v^2\,\delta\lambda_L - \lambda_L\,\delta v^2.$$
For the annihilation processes we will find that a counterterm for λ_2 is not necessary; however, an MS definition based on the corresponding β function can be derived. This is shown in Appendix B.

A high mass IDM benchmark point
Previous studies of the collider and astrophysical constraints on the IDM parameter space [6,14,26] have delineated two regions with a viable DM candidate that provides the correct relic density of DM. The first is dubbed the "low mass regime", where M_H ≈ M_h/2, and the second the "high mass regime", where M_H ≳ 500 GeV. To show the impact of a more precise calculation of the annihilation cross-sections that enter the relic density, we start by finding a point that passes the relic density constraint based on a tree-level calculation. For this we use the code micrOMEGAs [52,53]. The relic density constraint set by PLANCK [21] is met by our benchmark point, which features quasi-degenerate heavy scalar masses and small quartic couplings. These small values of the λs automatically ensure that a perturbative calculation can be performed. Moreover, vacuum stability holds [6,54], together with the fact that the global minimum is associated with the inert vacuum [9]. The degeneracy in the scalar masses can be viewed as rather fine-tuned [6]. This degeneracy means that constraints from electroweak precision measurements are easily evaded; in particular, the custodial isospin symmetry parameter T [48] is vanishingly small in this case. Far more stringent is the constraint from the spin-independent DM-nucleon cross-section for direct detection. In this scenario, the one-loop electroweak gauge contribution to the H-nucleon cross-section, σ^(g)_HN [17,55], is almost an order of magnitude larger than the tree-level Higgs-exchange contribution triggered by λ_L, σ^(λ_L)_HN [56]. The latter scales as σ^(λ_L)_HN ∝ λ_L² f² m_N⁴ / (M_h⁴ M_H²), where m_N is the nucleon mass and f ∼ 1/3 is the nucleon form factor, while for M_H ≫ M_W the gauge contribution approaches a value essentially independent of λ_L. This benchmark point passes the present XENON1T [57] constraint. However, further improvement in the experimental sensitivity of direct detection experiments could challenge the viability of this benchmark point even if the relic density constraint is passed.
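The scaling of the tree-level Higgs-exchange contribution can be illustrated numerically. The estimate below is our own simplified formula: the DM-nucleon reduced mass is approximated by m_N (valid for M_H ≫ m_N) and the overall normalisation depends on the convention for the portal coupling, so it captures the λ_L² and 1/M_H² scaling rather than a precise number:

```python
import math

GEV2_TO_CM2 = 3.894e-28  # 1 GeV^-2 = 0.3894 mbarn = 3.894e-28 cm^2

def sigma_SI_higgs_exchange(lam_L, MH, Mh=125.0, mN=0.939, f=1.0/3.0):
    """Rough spin-independent H-nucleon cross-section (cm^2) from tree-level
    Higgs exchange, with reduced mass ~ m_N; normalisation convention-dependent."""
    sigma_gev2 = (lam_L * f)**2 * mN**4 / (4.0 * math.pi * Mh**4 * MH**2)
    return sigma_gev2 * GEV2_TO_CM2
```

The quadratic growth with λ_L and the 1/M_H² suppression explain why, for small λ_L and heavy M_H, this contribution can be swamped by the loop-induced gauge contribution.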
The degeneracy in the scalar masses means that the relic density is built up by a few co-annihilation channels, which mainly annihilate into vector bosons. The 7 dominant channels, with the percentage contribution of each to the relic density as given by micrOMEGAs from tree-level calculations, are H⁺H⁻ → W⁺W⁻ (13%), AA → W⁺W⁻ (10%), AA → ZZ (7%), HH → W⁺W⁻, HH → ZZ, H⁺A → W⁺γ and H⁺H → W⁺γ (Eq. 4.6). The other channels contribute 5% or less each, including the hh final state. Out of these 7 cross-sections, those with a photon in the final state, H⁺A → W⁺γ and H⁺H → W⁺γ, are driven solely by gauge couplings and are, at tree-level, independent of λ_{L,A}. The other 5 are sensitive to λ_L. However, considering the small value of λ_L in our benchmark example, to a good approximation these cross-sections are also dominantly (though not totally) driven by gauge interactions, so that for instance σ_{HH→W⁺W⁻} ≈ σ_{AA→W⁺W⁻}. The weights quoted in Eq. 4.6 are a measure of the relative importance of the corresponding cross-sections diluted by the Boltzmann factor. We will now look at the one-loop corrections that affect the 7 dominant processes of Eq. 4.6 which, at tree-level, contribute more than 5% each to the relic density. We will compute these processes for a wide range of velocities.

Some important technicalities
The calculation of the relic density requires, before thermal averaging, the dependence on the relative velocity v of the relevant cross-sections times v, σ_ij v_ij, where i, j stand for the annihilating/co-annihilating particles. Within the standard cosmological model and assuming freeze-out, the thermal averaging is computed quite precisely by micrOMEGAs. For two annihilating particles with momenta p_1 and p_2 and masses m_1 and m_2, the relative (Møller) velocity is defined as
$$v_{12} = \frac{\sqrt{(p_1 \cdot p_2)^2 - m_1^2 m_2^2}}{E_1 E_2}.$$
It is possible to replace the tree-level cross-sections generated by micrOMEGAs by corrected cross-sections; this is what we do in order to obtain the loop-corrected relic density. In our case, the corrected cross-sections are the one-loop corrected cross-sections for the processes listed in Eq. 4.6. Most steps of the calculation are automated. We rely on SloopS [58][59][60][61], which uses LANHEP [62,63] to define the model. LANHEP generates the complete set of Feynman rules, applies the shifts on fields and parameters and sets the conditions on the generated counterterms, in a format compatible with an amplitude generator. We interface LANHEP with the package bundle FeynArts, FormCalc and LoopTools [64][65][66].
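The replacement of the tree-level cross-sections amounts to feeding tabulated one-loop σv values, as functions of v, to the thermal averaging. A minimal stand-in for the table lookup is sketched below (the class and its names are ours; the actual micrOMEGAs interface for user-supplied improved cross-sections differs):

```python
import bisect

class SigmaVTable:
    """Piecewise-linear interpolation of tabulated sigma*v values.
    vs must be sorted; outside the tabulated range the edge value is returned."""
    def __init__(self, vs, sigma_vs):
        self.vs, self.sv = list(vs), list(sigma_vs)

    def __call__(self, v):
        if v <= self.vs[0]:
            return self.sv[0]
        if v >= self.vs[-1]:
            return self.sv[-1]
        i = bisect.bisect_right(self.vs, v)          # first tabulated point above v
        t = (v - self.vs[i - 1]) / (self.vs[i] - self.vs[i - 1])
        return (1.0 - t) * self.sv[i - 1] + t * self.sv[i]
```

One such table per process in Eq. 4.6, spanning a wide range of velocities, is enough for the freeze-out integration, since the thermal average only probes the tabulated region.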
For each process and for each velocity we check that the virtual corrections (including the counterterms) are ultraviolet finite. To this end we vary the parameter C_UV (Eq. 3.9) by 7 orders of magnitude and check that the result is stable within machine (double) precision. For processes involving charged particles, bremsstrahlung processes 2 → 2 + γ are generated. The bremsstrahlung contribution is split into two parts, soft and hard photon radiation. The soft radiation, for photon energies E_γ < k_c with k_c small enough, is generated automatically through the standard soft-photon factorisation formula, which eliminates the one-loop infrared divergence that we regularise with a small, finite photon mass. The hard radiation is computed numerically. We loop over a few values of k_c, making sure that the soft and hard parts add up to a value that is insensitive to k_c. This step could be time consuming but we have optimised its automation. When we refer to NLO corrections, we have in mind the full one-loop, soft and hard radiation, which is of course independent of the regularising photon mass and of the intermediate cut-off k_c.
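The C_UV stability test can be mimicked with a toy example. The real check runs over full one-loop amplitudes in SloopS; the linear functions below are purely illustrative and merely mimic the cancellation of C_UV between the bare loop and the counterterm:

```python
def is_regulator_independent(observable, regulators, rtol=1e-6):
    """True if observable(c) is numerically stable over all regulator values."""
    ref = observable(regulators[0])
    return all(abs(observable(c) - ref) <= rtol * abs(ref) for c in regulators)

# Toy model: the C_UV pieces of the bare loop and of the counterterm cancel,
# leaving a finite, regulator-independent remainder.
bare_loop = lambda c_uv: 2.5 * c_uv + 1.7
counterterm = lambda c_uv: -2.5 * c_uv + 0.3
renormalised = lambda c_uv: bare_loop(c_uv) + counterterm(c_uv)

scan = [10.0**k for k in range(8)]   # vary C_UV over 7 orders of magnitude
```

A failure of this test (as for `bare_loop` alone) signals a missing or wrong counterterm; the same strategy, applied to the photon mass and to k_c, validates the infrared sector.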

Processes at one-loop
Our default values for the one-loop corrections are presented in this section for a scale μ̄ = M_H taken to define λ_L. The scale dependence will be discussed when we convert the results into the relic density calculation.

H⁺H⁻ → W⁺W⁻
We start our discussion with a process whose weight in the relic density (at tree-level) is 13%. Even though this is not the most dominant contribution, it brings forth an important behaviour. At tree-level, this process varies slowly and linearly with the relative velocity. This is still the case at one-loop, where the full one-loop computation corrects the tree-level result by about −10% for relative velocities ∼ 0.2 and above. However, the one-loop contribution shoots up with large positive "corrections" at extremely low velocities; see Fig. 1. This is easily understood as a result of the electromagnetic Sommerfeld effect: the photon exchange between the electrically charged co-annihilating particles at very low relative velocities leads to a relative correction which at one-loop reads
$$\frac{\Delta\sigma}{\sigma}\Big|_{\rm 1\hbox{-}loop} = \frac{\pi\alpha}{v}.$$
Figure 1: Dependence of the tree-level and one-loop corrected cross-section H⁺H⁻ → W⁺W⁻ on the relative velocity (squared). The right panel gives the percentage correction.
We have checked that our numerical code captures this effect exactly. This one-loop Sommerfeld contribution can be resummed, with the result that the tree-level cross-section is turned into
$$\sigma_{\rm resummed} = S_{nr}\,\sigma_{\rm tree}, \qquad S_{nr} = \frac{X_{nr}}{1 - e^{-X_{nr}}}, \qquad X_{nr} = \frac{2\pi\alpha}{v}.$$
It is however important that our full calculation catches such behaviour at very small velocities quite precisely. While presenting our results for the relic density, this resummation is performed even though its effect is tiny.
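The resummed factor S_nr is straightforward to evaluate; a minimal sketch, using α(0) as in our input scheme:

```python
import math

ALPHA_EM = 1.0 / 137.036  # alpha(0)

def sommerfeld_factor(v, alpha=ALPHA_EM):
    """Non-relativistic Coulomb (Sommerfeld) factor S_nr = X/(1 - exp(-X)),
    with X = 2*pi*alpha/v, for attractive photon exchange as in H+H- -> W+W-."""
    X = 2.0 * math.pi * alpha / v
    return X / (1.0 - math.exp(-X))
```

Expanding S_nr for small X_nr gives S_nr ≈ 1 + X_nr/2 = 1 + πα/v, reproducing the one-loop relative correction quoted above, while for v → 0 the factor grows as X_nr itself.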
Note that an electroweak equivalent of the Sommerfeld correction is induced by rescattering through W and Z bosons, and even through the SM Higgs boson. For H⁺H⁻ → W⁺W⁻ these low-velocity effects are completely swamped by the photon exchange (Sommerfeld effect). For later reference, let us point out that these electroweak equivalents may play a role at very small velocities only if M_{W,Z,h}/M_DM ≪ 1, which is not attained in our scenario. The masses of the W, Z and h bosons provide a cut-off to the 1/v rise, so the rise of the cross-section, when the massive bosons are involved, is not indefinite. Higgs exchange is totally negligible considering that the hHH coupling, for example, is controlled by the small λ_L.

AA → ZZ and AA → W⁺W⁻
AA → ZZ and AA → W⁺W⁻ contribute respectively 7% and 10% to the relic density at tree-level. The velocity dependence of their cross-sections at tree-level decreases slowly, at an almost similar rate. At one-loop, photon final-state radiation affects the W⁺W⁻ channel; the overall corrections are thus larger in the charged channel than in the neutral one. For v = 0.3 this correction is a modest −5% in the ZZ channel but about −20% in the W⁺W⁻ channel. Nonetheless, the corrections follow a similar trend; see Fig. 2.
Figure 2: Dependence of the tree-level and one-loop corrected cross-sections AA → ZZ (upper panels) and AA → W⁺W⁻ (lower panels) on the relative velocity (squared). The right panels give the percentage correction.
The dent at small values of the relative velocity is a reminder of the Sommerfeld effect, here due to W exchange. In these two processes such effects are only triggered by W exchange (and not by Z exchange), since no AAZ coupling exists whereas the AH±W∓ coupling is operative. This rescattering, AA → H⁺H⁻, also explains why the corrections in the W⁺W⁻ channel are larger: the amplitude for H⁺H⁻ → W⁺W⁻ is more than twice as large as its H⁺H⁻ → ZZ counterpart.

HH → ZZ and HH → W⁺W⁻
As expected for small λs, the HH and AA channels have the same cross-sections at tree-level, see Fig. 3. Their dependence on the velocity is the same, as are the radiative corrections, apart from a notable difference at very small velocities: the HH annihilations now feature two dents as functions of the relative velocity, contrary to the AA annihilations where a single dent appears. The first of these dents occurs at practically the same location in v as the one in the AA annihilations; it corresponds to the exchange of the W. The second one, at slightly larger velocities, is due to Z exchange. Again, the corresponding velocities are too small to be relevant for the calculation of the relic density.

H⁺A → W⁺γ and H⁺H → W⁺γ
Again, these two cross-sections are practically interchangeable both at leading and at next-to-leading order. The relative one-loop corrections decrease as the relative velocity decreases, with a correction of about −10% for a typical velocity, v ∼ 0.3, see Fig. 4.
Figure 3: As in Fig. 2 but for HH → ZZ and HH → W⁺W⁻.
Since these processes do not depend on λ_{L,A}, a fully on-shell renormalisation is possible, with the result that these one-loop cross-sections are μ̄-independent.

Relic abundance computations
Having performed the full one-loop corrections to the 7 processes that make up about 70% of the total relic density at tree-level, we have interfaced our calculations with micrOMEGAs by providing tables of these cross-sections (with their velocity dependence) in lieu of their tree-level values for the calculation of the relic density (thermal averaging, freeze-out). We recall that we have taken as input α = α(0); the sign and size of the corrections are not due to the running of α. Smaller one-loop cross-sections compared to tree-level translate into a larger relic density than the one derived from tree-level cross-sections. This is corroborated by the value Ωh² = 0.12494 that we find for μ̄ = M_H, corresponding to a correction of only ∼ −0.3%. This would tend to suggest that this scale, for the aforementioned choice of parameters, is optimal for reducing the size of the radiative corrections. For μ̄ = 2M_H, the correction to the relic density amounts to ∼ 15.3%. The scale variation is large compared to the experimental precision on the relic density. Considering that these scenarios are quite fine-tuned, with an almost degenerate scalar mass spectrum, our calculations show that in a tree-level analysis one should not strictly impose the very constrained experimental bound on the relic density, but rather allow a theoretical uncertainty of at least about 10% for such benchmark scenarios. We may ask whether the radiative corrections change the relative weights of the different processes much. Table 1 shows that these relative contributions experience little change, independently of the scale chosen. The most important change is therefore an almost uniform correction to all the cross-sections.

News from the dark sector: Impact of λ_2

λ_2 does not enter the calculation of the annihilation cross-sections at tree-level. But, just as the Z decay to muons does not depend on the top quark mass at tree-level while at one-loop the top quark makes its effect felt, λ_2 enters at one-loop.
For the case at hand, the non-decoupling of the self-coupling in the dark sector is less subtle: HH can rescatter before annihilating into gauge bosons, and the rescattering HH → HH involves the self-coupling λ_2. One therefore expects the one-loop annihilation cross-sections to depend on λ_2. To investigate this effect, we retain the same value of λ_L and consider two other values of λ_2, both well within the perturbativity and positivity bounds on the potential, λ_2 = 0.1 and λ_2 = 1. Our results are shown in Table 2. We observe a noticeable, albeit small, change when λ_2 is increased. A more thorough one-loop analysis is in order, studying other scenarios with a larger range of values for the other quartic couplings; one could find points not allowed by a tree-level analysis that would be validated at one-loop. We leave this interesting analysis for a future publication. It appears, however, that although the virtual effect of λ_2 is not at all negligible, it (fortunately or unfortunately) also introduces a non-negligible scale variation in the corrections. More importantly, compared to a tree-level treatment, one-loop corrections introduce not only a scale uncertainty, which is manageable for small values of λ_2, but also a parametric dependence (on λ_2) which is not caught by a tree-level treatment. In fact, this λ_2 dependence turns out to be even larger than the scale dependence. One could therefore use the measurement of the relic density to constrain λ_2.

Conclusions
The experimental value of the relic density as extracted from PLANCK data is now known at the per-cent level. For many particle physics models of dark matter this is a very stringent bound that drastically reduces the range of the parameters of the model. Assuming a standard cosmological model based on freeze-out, the restriction on the parameter space arises from the annihilation and co-annihilation cross-sections that build up the evaluation of the relic density. Unfortunately, most analyses are based on tree-level evaluations of these cross-sections. The level of precision of the experimental bound on the relic density calls for a theoretical prediction that goes beyond a tree-level evaluation. Such a programme has been set up for the minimal supersymmetric model [58][59][60][61][67][68][69] and the next-to-minimal supersymmetric model [28]. After a first exploratory investigation in Ref. [70], the present paper extends this programme to the IDM. As such, this paper presents a full, systematic renormalisation of the model and specialises, in this first application, to the so-called heavy mass scenario. In fact, in order to remain fully perturbative, we have covered a scenario with heavy scalar masses not heavier than 550 GeV. We performed full one-loop calculations of 7 annihilation/co-annihilation processes and interfaced these corrected cross-sections with micrOMEGAs to turn them into a more precise evaluation of the relic density, assuming the standard freeze-out mechanism. The one-loop calculations are implemented in an automated code for loop calculations. We have used a mixed scheme where most of the parameters are defined on-shell, based on the physical masses of the model. Having exhausted all the masses of the model, one remaining parameter is defined in MS: the coupling of the scalar DM, H, to the SM Higgs.
We find that the one-loop corrections to the relic density for this benchmark vary from −34% to +15%, depending on the renormalisation scale chosen to define the h-H-H coupling and, most importantly, on the value of the coupling λ_2, which measures the interaction solely within the dark sector between the extra scalars. This is an indirect effect that should be taken into account, especially for λ_2 of order 1. Its effects can be larger than the renormalisation-scale uncertainty of the one-loop calculation; conversely, the relic density calculation can be used to set a limit on the dark-sector interaction. A tree-level calculation of the relic density is totally insensitive to these couplings that describe interactions within the dark sector. Preliminary investigations for DM masses beyond 750 GeV show that electroweak Sommerfeld effects become important and that some resummation needs to be performed and merged with perturbative, purely one-loop effects like those triggered by rescattering in the dark sector (the indirect effects of λ_2). We leave the study of this mass range to a forthcoming publication. Note that for masses beyond 1 TeV, a purely non-perturbative treatment has been given in [5,25]. The other viable region of the IDM parameter space is the low mass regime with M_H ≈ M_h/2; it requires the calculation of 2 → 3 processes at one-loop for the evaluation of the relic density. We also leave this application for a forthcoming publication.

Appendix A: Feynman rules
We give below the Feynman rules for the tri-linear and quadri-linear couplings among the scalars of the model, using the parametrisation of Eq. 2.19.
Acknowledgements

(Theoretical High Energy Physics) and the INFRE-HEPNET (IndoFrench Network on High Energy Physics) of CEFIPRA/IFCPAR (Indo-French Centre for the Promotion of Advanced Research). SB and HS also thank LAPTh for hospitality where a part of this work was carried out. SB and FB acknowledge the Les Houches workshop series "Physics at TeV colliders" 2019, where this work was finalised.