Advancing LHC Probes of Dark Matter from the Inert 2-Higgs Doublet Model with the Mono-jet Signal

The inert 2-Higgs Doublet Model (i2HDM) is a well-motivated, minimal, consistent Dark Matter (DM) model, but it is rather challenging to test at the Large Hadron Collider (LHC) in the parameter space allowed by relic density and DM direct detection constraints. This is especially true when considering the latest XENON1T data on direct DM searches, which we use here to present the best current combined limit on the i2HDM parameter space. In this analysis, we present prospects for advancing the exploitation of DM mono-jet signatures from the i2HDM at the LHC, by emphasising that a shape analysis of the missing transverse momentum distribution allows one to sizably improve the LHC discovery potential. As a key element of our analysis, we explore the validity of using an effective vertex, ggH, for the coupling of the Higgs boson to gluons by comparing it against a full one-loop computation. We find sizeable differences between the two approaches, especially in the high missing transverse momentum region, and incorporate the respective K-factors to obtain the correct kinematical distributions. As a result, we delineate a realistic search strategy and present the improved current and projected LHC sensitivity to the i2HDM parameter space.


1 Introduction
Despite several independent pieces of evidence for Dark Matter (DM) at the cosmological scale, its nature remains unknown, since no experiment so far has been able to claim its detection in the laboratory and probe its properties. Potentially, DM can be probed in direct or indirect detection experiments as well as produced at the Large Hadron Collider (LHC) or future machines, though the latter can only detect DM candidates, as any observed missing energy could still be interpreted as generated by long-lived neutral particles. This combined effort to advance our knowledge of DM properties is one of the key goals of the astroparticle and high-energy physics communities.
A convenient way to understand the potential of both collider and non-collider experiments to probe DM is to explore simple, fully calculable, renormalisable models with viable DM candidates, which we refer to as Minimal Consistent Dark Matter (MCDM) models. We do not know yet which theoretical scenario corresponds to reality, but any model of this kind offers an excellent opportunity to gain insight into the intricate interplay between collider and non-collider constraints. MCDM models, which can be viewed as robust toy models, are self-consistent and can be easily incorporated into larger theoretically-driven scenarios of physics Beyond the Standard Model (BSM). Because of their attractive features, MCDM models can be considered as the next step beyond DM Effective Field Theory (EFT) (see e.g. [1][2][3][4][5][6][7][8][9][10][11][12][13]) and simplified DM models (see e.g. [14][15][16][17][18][19][20][21]).
The inert 2-Higgs Doublet Model (i2HDM), initially suggested more than 30 years ago in [22], is one of the most representative MCDM models and has become very attractive lately in the light of intensive DM searches. In fact, besides providing a good DM candidate, the i2HDM can also give rise to an 'improved naturalness' [24], since large radiative corrections from the inert Higgs sector can 'screen' the SM Higgs contribution to the Electro-Weak (EW) parameter ∆T.
It was shown in [45] that the LHC has limited sensitivity for probing the i2HDM with the mono-jet signature using cut-based analyses optimised for the low-luminosity Run 2 data. To complement these studies, in the present paper we explore the LHC potential to probe DM via the mono-jet signature in the i2HDM scenario by exploiting a larger amount of information from observables at the differential level. More specifically, we consider the shape of the missing transverse momentum (E_T^miss) distribution. New findings of this study include: a) updated limits on the i2HDM parameter space following the recent XENON1T results on DM Direct Detection (DD) searches; b) an exploration of the range of validity of the effective ggH vertex in the heavy-top-mass limit, obtained by comparing the shape of the E_T^miss distribution to the full one-loop result, which allows us to determine a realistic LHC potential for probing DM in different kinematical regions; c) an optimisation and improvement of the LHC sensitivity to the DM mono-jet signal from the i2HDM, defined by Higgs- and Z-boson-mediated processes, using a shape analysis of the E_T^miss distribution; d) a projection of our results to the High Luminosity LHC (HL-LHC) phase.
The rest of the paper is organised as follows. In Sect. 2 we discuss the i2HDM parameter space together with the current status of theoretical and experimental constraints. In Sect. 3 we present the main results of the paper, which include an analysis of the validity of the effective ggH (H being the SM-like Higgs) vertex approach, the exploration of several model benchmarks and, finally, the LHC potential to probe the i2HDM at present and projected luminosities via exploitation of the E_T^miss shape in the mono-jet signature. In Sect. 4 we draw our conclusions.

2 The i2HDM

2.1 Parameter space
The i2HDM [22][23][24][25] is an extension of the SM with a second scalar doublet φ_2 possessing the same quantum numbers as the SM Higgs doublet φ_1 but with no couplings to fermions, which provides its inert nature. This construction is protected by a discrete Z_2 symmetry under which φ_2 is odd and all the other fields are even. The Lagrangian of the scalar sector is

L = (D_μ φ_1)†(D^μ φ_1) + (D_μ φ_2)†(D^μ φ_2) − V(φ_1, φ_2),

where V is the potential with all scalar interactions compatible with the Z_2 symmetry:

V = −m_1² (φ_1†φ_1) − m_2² (φ_2†φ_2) + λ_1 (φ_1†φ_1)² + λ_2 (φ_2†φ_2)² + λ_3 (φ_1†φ_1)(φ_2†φ_2) + λ_4 (φ_1†φ_2)(φ_2†φ_1) + (λ_5/2) [(φ_1†φ_2)² + h.c.].

In the unitary gauge, the doublets take the form

φ_1 = (0, (v + H)/√2)ᵀ,  φ_2 = (h⁺, (h_1 + i h_2)/√2)ᵀ,

where we consider the parameter space in which only the first, SM-like doublet acquires a Vacuum Expectation Value (VEV), v. In the notation φ_i^0 = v_i/√2, this inert minimum corresponds to v_1 = v, v_2 = 0. After EW Symmetry Breaking (EWSB), the Z_2 symmetry is still conserved by the vacuum state, which forbids direct couplings of any single inert field to the SM fields and protects the lightest inert boson from decaying, hence providing the DM candidate in this scenario. In contrast, the interactions of pairs of inert scalars with the SM gauge bosons and the SM-like Higgs H are allowed, thus giving rise to various signatures at colliders and at DM detection experiments.
In addition to the SM-like scalar H, the model contains one inert charged scalar h± and two further inert neutral scalars h_1, h_2. The two neutral scalars of the i2HDM have opposite CP parities, but it is impossible to unambiguously determine which of them is CP-even and which one is CP-odd, since the model has two CP symmetries, h_1 → h_1, h_2 → −h_2 and h_1 → −h_1, h_2 → h_2, which get interchanged upon a change of basis φ_2 → iφ_2. This makes the specification of the CP properties of h_1 and h_2 a basis-dependent statement. Therefore, following Ref. [45], we denote the two neutral inert scalar masses as M_h1 < M_h2, without specifying which state is the scalar and which the pseudoscalar, so that h_1 is the DM candidate.
The model can be conveniently described by a five-dimensional parameter space [45] using the following phenomenologically relevant variables: the three inert scalar masses M_h1 < M_h2 and M_h±, the coupling λ_345 = λ_3 + λ_4 + λ_5 (which controls the Hh_1h_1 interaction between the SM-like Higgs and the DM pair) and the inert quartic self-coupling λ_2. The |λ_345|^max value increases when M_h1 approaches M_H/2, ranging from 0.024 at M_h1 = 50 GeV to 0.053 at M_h1 = 62 GeV. At the same time, the Ω h² < 0.1 constraint sets the lower limit M_h1 ≳ 40 GeV, since below it there are no effective annihilation and/or co-annihilation DM channels to bring the DM relic density to a low enough level consistent with the Planck constraints. One should note that, when the decay H → h_2 h_2 also takes place and h_1 and h_2 are close in mass (within, say, a few GeV), this channel will also contribute to the invisible Higgs decay. In this case the limit on λ_345 can be easily modified, taking into account that λ_Hh2h2 = λ_345 + 2(M_h2² − M_h1²)/v².

Figure 3: The new constraints on the i2HDM parameter space from XENON1T searches for DM [54].
The comprehensive analysis of the i2HDM parameter space performed in [45], using an i2HDM implementation in the CalcHEP [48] and micrOMEGAs [49,50] frameworks, demonstrated an important complementarity of various constraints, which is presented in Fig. 2 as the effect of the sequential application of: a) theoretical constraints from vacuum stability, perturbativity and unitarity (theory); b) experimental constraints from colliders (LEP and LHC Higgs data, including those from Electro-Weak Precision Test (EWPT) data); c) the upper bound on the DM relic density Ω_DM h² given by Planck [51,52] and constraints from DM DD searches at LUX [53].
From Fig. 2(a) and (b) one can see the large effect of the invisible Higgs decay constraint on λ_345 (of the order of 10⁻²) in the M_h1 < M_H/2 region, which is two orders of magnitude stronger than the constraint on λ_345 from vacuum stability. The constraint from DM DD searches at LUX [53] further limits λ_345, as one can see from Fig. 2(c). Let us first recall that we use the re-scaled DD Spin-Independent (SI) cross section, σ̂_SI = R_Ω × σ_SI, where the scaling factor R_Ω = Ω_DM/Ω_DM^Planck takes into account the case of h_1 representing only a part of the total DM budget, thus allowing for a convenient comparison of the model predictions with the DM DD limits. One can see that this constraint is not symmetric with respect to the sign of λ_345: the parameter space with λ_345 < 0 receives stronger constraints. The reason for this is that the sign of λ_345 defines the sign of the interference between DM annihilation into EW gauge bosons via Higgs boson exchange and via the h_1h_1VV quartic coupling. For positive λ_345 the interference is positive and the relic density correspondingly lower, so that the DM DD rates rescaled with the relic density, σ̂_SI, are lower than for the case of negative λ_345, when the corresponding interference is negative and the relic density higher. One should also note that the combined constraints exclude M_h1 < 45 GeV for the whole parameter space of the i2HDM.
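The relic-density rescaling of the DD cross section described above is a one-line operation; the following minimal sketch makes the convention explicit (the function name and the default Planck value of Ω_DM h² are our own illustrative choices):

```python
def rescaled_sigma_si(sigma_si: float, omega_h2: float,
                      omega_h2_planck: float = 0.12) -> float:
    """Rescale a spin-independent DD cross section by the relic fraction.

    sigma_si        : model prediction for the SI cross section (any unit)
    omega_h2        : predicted relic abundance Omega*h^2 of h1
    omega_h2_planck : observed total DM abundance (illustrative value)
    """
    r_omega = omega_h2 / omega_h2_planck  # h1 may be only part of the DM budget
    return r_omega * sigma_si

# A point predicting a quarter of the observed abundance is constrained
# four times more weakly than its raw cross section suggests:
# rescaled_sigma_si(1e-46, 0.03) -> 2.5e-47
```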
Since DM DD constraints play an important role, in the light of recent results from the XENON1T experiment [54] we have performed a further comprehensive scan of the i2HDM parameter space, analogous to that of Ref. [45], and have found new constraints. Our results are shown in Fig. 3, where we present the i2HDM parameter space left after the application of the theory, LEP, EWPT and LHC constraints as well as the upper bound on the relic density from Planck and the DM DD limits from XENON1T. One can see a large effect of the XENON1T constraints on λ_345, which improve the LUX limits by more than one order of magnitude, chiefly over the M_H/2 < M_h1 < 125 GeV region. In particular, in this region |λ_345| is limited to be always below about 0.05, which is crucial for one of the main signatures of DM searches at the LHC which we discuss below. The asymmetric picture with respect to negative and positive values of λ_345 is even more pronounced in the case of these latest results, as one can clearly see from the white funnel region excluded for λ_345 < 0. The reason for this is again the negative interference between DM annihilation into EW gauge bosons via Higgs boson exchange and via the h_1h_1VV quartic coupling described above: in this funnel region the negative interference brings the DM relic density up, which in turn increases the DM DD rates.
One should note that, though the constraints on |λ_345| from DM DD and the invisible Higgs decay dominate over the one from vacuum stability, the latter sets the strictest upper bound on λ_345 for M_h1 ≃ M_H/2. In this region the invisible Higgs decay is suppressed by phase space, while the DM DD rates rescaled by the relic density are suppressed because Ω h² is driven to low values in this parameter space, which is dominated by h_1h_1 → H resonant annihilation. Therefore the constraint from vacuum stability, which becomes important in this region, limits λ_345 ≲ 1.6, as follows from Eq. (7).

2.2 Mono-jet signatures at the LHC
The i2HDM exhibits different collider signatures which can potentially be accessible at the LHC. In this analysis we focus on mono-jet final states, which arise from the gg → h_1h_1 + g, qg → h_1h_1 + q and qq̄ → h_1h_1 + g processes, to which we refer cumulatively as the h_1h_1j process. The corresponding Feynman diagrams are presented in Fig. 4. For this signature, and for M_h1 > M_H/2, the relevant non-trivial parameter space is one-dimensional and corresponds to the DM mass, M_h1, since the production cross section is proportional to (λ_345)². For M_h1 < M_H/2, however, the situation can be different, for two reasons: a) only H → h_1h_1 takes place, so that the cross section is defined by the production of the SM-like Higgs times Br(H → h_1h_1), which is a function of λ_345 and M_h1; b) both H → h_1h_1 and H → h_2h_2 contribute to the invisible Higgs decay, which then implies that both h_1h_1j and h_2h_2j will contribute to the same signature (for a few GeV mass difference between h_2 and h_1, h_2 → h_1 f f̄ is invisible because of the soft fermions f in the final state), the cross section of which is defined by the production of the SM-like Higgs state times (Br(H → h_1h_1) + Br(H → h_2h_2)), which is a function of λ_345, M_h1 as well as M_h2.
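For M_h1 < M_H/2 the mono-jet rate therefore factorises into the SM-like Higgs plus jet rate times the total invisible branching fraction; a hedged sketch of this bookkeeping (function and argument names are ours, not part of the analysis code):

```python
def monojet_xsec_via_higgs(sigma_hj: float, br_h1h1: float,
                           br_h2h2: float = 0.0) -> float:
    """Mono-jet cross section for M_h1 < M_H/2: SM-like Higgs + jet
    production times the invisible branching ratio into inert scalars.

    sigma_hj : H + jet production cross section (same unit as the result)
    br_h1h1  : Br(H -> h1 h1)
    br_h2h2  : Br(H -> h2 h2), nonzero only when M_h2 < M_H/2
    """
    return sigma_hj * (br_h1h1 + br_h2h2)
```

For M_h1 > M_H/2 no such factorisation applies and the rate simply scales with (λ_345)², as stated above.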
In Fig. 5 we present the cross sections for the mono-jet process h_1h_1j at the LHC@13 TeV in the (M_h1, λ_345) plane. The mono-jet cross section was evaluated with an initial cut p_T^jet > 100 GeV, λ_345 was chosen in the range [0.01, 0.02], M_h1 in the range [20, 60] GeV and M_h2 was fixed to 200 GeV. We can see that, for this range of parameters, the cross section is between 100 and 1000 fb, which gives us a strong motivation to probe this signal at the LHC. For this and the following parton-level calculations and simulations we have used the HEPMDB site [56], the CalcHEP package [48] and the NNPDF23LO (as_0130_qed) Parton Distribution Function (PDF) set [57], with both factorisation and renormalisation scales set to the transverse mass of the final-state particles.
An important remark is that the mass of the top quark in the loop which generates the ggH coupling can be below the energy scale of the h_1h_1j process, which is related to the jet transverse momentum, p_T^jet. Hence, in the region of high p_T^jet, one should check the validity of the EFT approach based on the heavy top-quark approximation, which is often used for simplification. This is the subject of the next section.
There is one more process that potentially contributes to the mono-jet signature in the i2HDM, namely, qq̄ → h_1h_2 + g (gq → h_1h_2 + q), which we refer to as the h_1h_2j process. Feynman diagrams for this process are presented in Fig. 6. This process contributes to the mono-jet signature when the mass splitting ∆M between h_1 and h_2 is small, of the order of a few GeV: in this case h_2 decays to h_1 plus soft jets or leptons from a virtual Z, which escape detection, so that the final state contains only E_T^miss and soft undetected objects. In spite of the fact that there is a single mediator for this process, i.e. the Z boson, one can see that the t- and s-channel topologies with a light quark in the propagator make this process different from the simplified models with fermionic DM and a vector mediator which have been studied so far in the literature, so it is worth exploring in detail. The parameter space for this process is characterised by two variables, M_h1 and M_h2, which fix its cross section for a given collider energy. It is also convenient to use ∆M = M_h2 − M_h1, the mass difference between the two particles, instead of M_h2. In Fig. 7 we present the cross section for the h_1h_2j process in the (M_h1, ∆M) plane. The cross section has been evaluated with an initial cut p_T^jet > 100 GeV. One can see that, in this plane, the pattern of the cross section iso-levels takes a simple form. One can also note that, for M_h1 ≈ 50−60 GeV and small ∆M, the cross section is of the order of 100 fb, which could be in the region of the LHC sensitivity. It is important to stress that the cross section for the h_1h_2j process is independent of λ_345, therefore this process provides a probe of the i2HDM parameter space which is complementary to the h_1h_1j process.

3.1 Validity of the effective ggH vertex approach
The SM ggH vertex is dominantly generated by the top-quark loop (with a small bottom-quark contribution). It is known that integrating out the top quark is a good approximation for Higgs production processes when considering inclusive rates, as long as the Higgs boson is not far off-shell or produced with high transverse momentum. The literature on this subject is vast and we refer the reader to the corresponding sections of Ref. [58] and references therein. In our study, however, the selection of a large transverse momentum of the jet (imposed to increase the signal-to-background ratio), which is typically bigger than the top-quark mass, is likely to lead to the breakdown of the heavy top-quark approximation. From the representative one-loop diagram presented in Fig. 8 for the gg → h_1h_1g process, one can see that a high-p_T jet emitted from the top-quark loop can 'resolve' the top quark in the loop if the transverse momentum of the jet is large enough. This effect is crucial since the mono-jet p_T and E_T^miss distributions from the EFT approximation (which one could be tempted to use for the sake of simplicity) could be different from those described by the exact loop calculation. This is even more important for us because of the E_T^miss shape-analysis techniques which we use in our study. Therefore, we have compared the E_T^miss shapes of events simulated using the EFT heavy top-quark approximation to those from the exact one-loop calculation. For this purpose we have simulated the process of Higgs boson production in association with a jet and scanned over the mass of the Higgs boson, corresponding to the different invariant masses of the DM pair.
For this specific study, our simulations have been performed with MadGraph5 [59,60] using the NNPDF2.3 PDF set [57]. We have compared results for two models: the MadGraph5 native SM implementation with the effective ggH vertex and the SM-at-one-loop implementation. Using this setup we have scanned over a range of Higgs boson masses and compared the Higgs boson p_T and η distributions for the effective ggH vertex and one-loop implementations. The results presented in Fig. 9 are evaluated for different benchmarks, corresponding to different Higgs masses (for both effective vertex and one-loop simulations). One can see that the differences in the p_T distributions are quite large at large transverse momenta, while the role of the bottom quark in the loop is, as one may expect, rather marginal. The pseudo-rapidity distribution is not affected as much. It is however interesting to notice that larger invariant masses shift the distribution from a central-peaked shape to a more forward-backward behaviour. As a sanity check we have also evaluated the effect of setting the top-quark mass to 10 TeV in the one-loop calculation, so that it can be effectively cross-checked against the effective vertex results and, as one can see, it indeed agrees with those.
As a result of this comparison, we have defined a K-factor as the bin-by-bin ratio of the exact one-loop differential cross section to the effective-vertex one,

K(p_T) = [dσ^loop/dp_T] / [dσ^EFT/dp_T],

evaluated as a function of the jet transverse momentum and of the invariant mass of the DM pair. It is finally very important to stress that the K-factor found in two recent studies at Next-to-Leading Order (NLO) in QCD [61,62] is very close (within a few percent) to the one we have found here at LO only. Hence, based on our findings in this section, for the analysis below we use the one-loop results (i.e. at LO in QCD) and take into account the contributions of both top and bottom quarks.
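Numerically, such a K-factor is just the bin-wise ratio of two differential spectra on a common binning; a minimal sketch (the array layout and the guard against empty bins are our assumptions, and the numbers in the comment are purely illustrative):

```python
import numpy as np

def kfactor_per_bin(d_sigma_loop, d_sigma_eft):
    """Bin-by-bin ratio of the one-loop to effective-vertex pT spectra.

    Both inputs are arrays of differential cross sections on a common
    pT binning; bins where the EFT prediction vanishes return 1.0.
    """
    loop = np.asarray(d_sigma_loop, dtype=float)
    eft = np.asarray(d_sigma_eft, dtype=float)
    return np.where(eft > 0.0, loop / np.maximum(eft, 1e-300), 1.0)

# The EFT overshoots at high pT, so the K-factor falls below one there:
# kfactor_per_bin([10.0, 4.0, 1.0], [10.0, 5.0, 2.0]) -> [1.0, 0.8, 0.5]
```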
3.2 LHC potential to probe the i2HDM parameter space

3.2.1 Benchmarks
Taking into account all the constraints, and especially the recent XENON1T ones, we suggest a set of six benchmark (BM) points, BM1 to BM6, summarised in Tab. 1 and described below.
• BM1 - Both M_h1 and M_h2 are below M_H/2, contributing about 20% to the invisible Higgs boson decay and yielding about 800 fb of cross section for the mono-jet signature (high enough to be tested at the HL-LHC, as we discuss below), coming from the cumulative sum of the h_1h_1j, h_1h_2j and h_2h_2j processes. To quantify the XENON1T sensitivity we use the SI DM scattering rate on the proton (σ_SI^p) together with the ratio of its relic-density-rescaled value to the experimental limit from XENON1T, R_SI^XENON1T, which is equal to 0.29 for this benchmark, i.e. about a factor of three below the current XENON1T sensitivity. The DM relic density for this point is below the Planck constraints because of h_1h_2 co-annihilation.
• BM2 - Only M_h1 is below M_H/2 and the value of λ_345 is chosen to be small enough for the DM relic density to match both the upper and lower Planck constraints. In this case, the invisible Higgs boson decay to DM is only 2% and the respective rate of the h_1h_1j mono-jet signal is only 74.6 fb. This point, with R_SI^XENON1T = 0.75, is likely to be tested by future DM DD experiments, since its value is not far from the present XENON1T limit.
• BM3 - Only M_h1 = 60 GeV is below M_H/2, but M_h2 = 68 GeV is quite close to it. Because of the large invisible Higgs boson decay to DM, with Br(H → h_1h_1) = 0.25, the leading signal at the LHC will be the mono-jet one from h_1h_1j, with a rate above 800 fb, complemented by the h_1h_2j process with a rate of 77.4 fb, both high enough to be tested at the HL-LHC.
• BM4 - M_h1 = 60 GeV and M_h2 = 68 GeV as in BM3, but λ_345 is chosen to be low enough that the DM relic density, governed by h_1h_2 co-annihilation, lies within the upper and lower Planck constraints. This point is unlikely to be tested by DM DD experiments in the near future, while the LHC could potentially test it shortly via a combination of the h_1h_2j, h_1h±j, h_2h±j and h±h±j signatures, which are outside the scope of this paper.
• BM5 - All inert scalars are close in mass, M_h1 = 70 GeV, M_h2 = 78 GeV, M_h± = 78 GeV, so all the h_1h_2j, h_1h±j, h_2h±j and h±h±j channels contribute to the mono-jet signature (since both h_2 and h± promptly decay to h_1 plus soft leptons escaping detection), with a total rate of about 250 fb, which is close to the exclusion limit at the HL-LHC, as we will see below.
• BM6 - All inert scalars are even closer in mass than in BM5, with M_h1 = 80 GeV, M_h2 = 81 GeV and M_h± = 81 GeV, as well as λ_345 = 0 (hence h_1h_1j and h_2h_2j are absent), so that all the h_1h_2j, h_1h±j, h_2h±j and h±h±j channels contribute to the mono-jet signature with a total rate of about 210 fb, again close to the exclusion limit at the HL-LHC.
The DM masses for BM1-BM6 were chosen below 100 GeV in anticipation of the LHC sensitivity to the parameter space which we present below. At the time of writing, the LHC experimental collaborations ATLAS and CMS do not have searches specific to the i2HDM; however, the results of generic DM searches in the jet+E_T^miss channel can be reinterpreted in the context of this model. In order to compare the i2HDM to those limits, we follow the procedure below.
• The matrix elements that describe the hard interaction are simulated with CalcHEP and event samples for different values of M_h1 are produced. In order to concentrate on a region of phenomenological interest and simulate events with enhanced statistics, a lower threshold on the final-state parton (either q or g) is set at p_T > 100 GeV. The event samples are produced in the Les Houches Event format for further processing.
• In order to accurately describe the p_T distribution, each event is weighted with the K-factor estimated in the previous section, according to its parton p_T and the invariant mass of the DM-DM system.
• Each event sample is then passed to PYTHIA 8.2 [63,64] for the proper treatment of parton showering, hadronisation and underlying-event effects. The aforementioned NNPDF set is again deployed, through the LHAPDF6 tool [65].
• The DELPHES 3 framework for the fast simulation of generic collider experiments [66] is used to simulate the event reconstruction of the CMS experiment. Specifically, the standard CMS detector parametrisation described in the card delphes_card_CMS.tcl from the DELPHES distribution is used.
• A set of selection criteria is applied to the simulated reconstructed events. In the experimental collaborations, these criteria aim to reduce both the SM backgrounds (mainly composed of inclusive W/Z boson production) and the instrumental noise that mimics the appearance of a single, highly energetic jet in the event. We disregard the effect of the latter phenomenon in our analysis, though.
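The reweighting step in the pipeline above amounts to a simple two-dimensional look-up per event; the following sketch illustrates the idea, with a toy binning and toy K-factor values of our own invention (not the table actually used in the analysis):

```python
import bisect

# Illustrative K-factor table: KFAC[i][j] for (parton pT bin i, M(DM,DM) bin j)
PT_EDGES = [100.0, 200.0, 400.0, 800.0]   # GeV, last bin open-ended
MINV_EDGES = [0.0, 150.0, 300.0]          # GeV
KFAC = [[1.00, 0.98],
        [0.92, 0.88],
        [0.75, 0.70]]

def event_weight(parton_pt: float, m_inv: float) -> float:
    """Look up the K-factor for one event from its parton pT and DM-pair mass."""
    i = min(bisect.bisect_right(PT_EDGES, parton_pt) - 1, len(KFAC) - 1)
    j = min(bisect.bisect_right(MINV_EDGES, m_inv) - 1, len(KFAC[0]) - 1)
    return KFAC[max(i, 0)][max(j, 0)]
```

Applied event by event to the Les Houches sample, such weights reproduce the one-loop kinematics while keeping the fast effective-vertex event generation.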
After the fast detector-level simulation described above, we have performed an analysis of the missing transverse momentum distribution, E_T^miss, of the signal events. We compare the signal mapped in E_T^miss to direct limits and/or to backgrounds published by the LHC collaborations. For the latter case, the theta framework [67] for modelling and inference is used for the statistical analysis and limit setting over the i2HDM parameter space. All limits are set using an asymptotic approximation to the CL_S technique [68,69].
We study the jet+E_T^miss signature from two signal processes, h_1h_1j and h_1h_2j, in the presence of a small (a few GeV) M_h2 − M_h1 mass gap, which makes h_1h_2j contribute to the mono-jet signature. In our study we analyse h_1h_1j and h_1h_2j separately for two reasons: a) the rates of these processes depend on different parameters, so they complement each other as i2HDM parameter space probes; b) these processes have different shapes of the E_T^miss distribution because of the different nature and mass of the mediators.

3.2.2 Results from Run 2 data
At the beginning of Run 2 the LHC@13TeV delivered a total integrated luminosity of 4.2 fb⁻¹. The CMS collaboration released a public result in which 2.3 fb⁻¹ of data were used to search for DM production in association with jets or hadronically decaying vector bosons [70]. Supplementary material - data and Monte Carlo (MC) background distributions as well as their uncertainties - was made available by the collaboration and is used here to set limits on the i2HDM. Tab. 2 summarises the experimental selection used for the CMS result, while Tab. 3 presents the data used for our study at 13 TeV.
Jets considered for the jet multiplicity and angular configuration selections are required to have p_T^jet > 30 GeV and |η^jet| < 2.5.
The main change in the Run 2 selection was the update of the angular discriminant used to suppress QCD multijet contributions: whereas in Run 1 a strict requirement was imposed on the jet multiplicity and on the azimuthal distance between the leading jets, in Run 2 CMS opted instead for an overall requirement of azimuthal separation between the measured E_T^miss and the four leading hadronic jets. The selection efficiency for both the h_1h_1j and h_1h_2j processes can be seen in Fig. 11 and is around 10-25% for the former and 18-40% for the latter. We can understand this difference by noticing that h_1h_2j production is mediated by a Z boson while h_1h_1j production is mediated by the SM-like Higgs boson, which leads to a different E_T^miss spectrum. Fig. 12 presents the E_T^miss distributions of the two signals together with the SM background. One can notice that the E_T^miss distribution for the h_1h_2j signal is indeed harder than the one for the h_1h_1j case. This difference in E_T^miss shapes is related to the difference in the invariant mass distributions of the DM pair for the h_1h_2j and h_1h_1j signals: as discussed in [13], a scalar mediator gives a softer invariant mass of the DM pair than a vector mediator (for similar masses), while the invariant mass of the DM pair is in turn correlated with the shape of the E_T^miss distribution.

Instead of setting a lower threshold on the E_T^miss variable and performing a counting experiment, as was done for instance in the CMS Run 1 analysis [71], one can benefit from the full information in the shape of the E_T^miss distribution and perform a binned shape analysis. It can be observed from Fig. 12 that the E_T^miss spectrum of the signal is harder than that of the background for the whole range of DM masses sampled, especially for the large values, which agrees with the findings of [13], where it was shown that distributions at larger values of M(DM,DM) have a flatter E_T^miss shape. Eventually, for higher values of M_h1, M(DM,DM) will also be higher. Fig. 13 shows the difference among four analysis strategies: three counting experiments, with respective E_T^miss cuts of 200, 470 and 690 GeV, and a shape analysis with a lower threshold of 200 GeV.

One can see that higher E_T^miss thresholds make the expected limit from the counting experiment worse, while the shape analysis is able to leverage the coherent enhancement that the signal presence produces across all bins of E_T^miss to set a better limit, of order 30% stronger. We therefore adopt the shape analysis strategy for the rest of this study. When M_h1 ≈ M_h2, h_2 decays to h_1 plus very soft products, so that h_1h_1j, h_2h_2j and h_1h_2j production all contribute to the jet+E_T^miss signature. The h_2h_2j and h_1h_1j channels proceed via the same mediator and can be combined, since they have the same E_T^miss shape (for small values of ∆M = M_h2 − M_h1). We indicate the predicted combined h_1h_1j and h_2h_2j cross section for λ_345^max by the purple dashed line in Fig. 14 (left panel). One can see that the 2.3 fb⁻¹ mono-jet data are not quite sensitive even to the combined h_1h_1j and h_2h_2j signal at λ_345^max. As mentioned above, h_1h_2j production is mediated by Z boson exchange (see Fig. 6) and therefore has a different E_T^miss shape, so we investigate it separately. The right panel of Fig. 14 presents the limit for the h_1h_2j production process (for ∆M = 1 GeV) and indeed demonstrates that the cross section limit for this process differs from the h_1h_1j one because of their different kinematics. This process does not depend on the λ_345 coupling, so its cross section is determined by the masses of h_1 and h_2 only; here, the expected signal rate, represented by the red line, is well below the present limit.
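The gain of the binned shape analysis over a single-cut counting experiment can be illustrated with the standard Asimov approximation for the median significance; this is our own illustration with toy numbers, not the theta/CL_S machinery actually used in the paper:

```python
import math

def asimov_z(s: float, b: float) -> float:
    """Median significance for s signal events on b background (Asimov formula)."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

def counting_z(sig_bins, bkg_bins):
    """Single counting experiment: merge all bins above the threshold first."""
    return asimov_z(sum(sig_bins), sum(bkg_bins))

def shape_z(sig_bins, bkg_bins):
    """Binned shape analysis: combine per-bin significances in quadrature."""
    return math.sqrt(sum(asimov_z(s, b) ** 2 for s, b in zip(sig_bins, bkg_bins)))

# A signal concentrated in the hard E_T^miss tail gains from keeping the
# low-background bins separate instead of diluting them:
sig = [10.0, 8.0, 6.0]
bkg = [1000.0, 100.0, 10.0]
# shape_z(sig, bkg) > counting_z(sig, bkg)
```

The same mechanism underlies the improvement of order 30% quoted above: the hard tail bins carry most of the sensitivity, and the shape fit keeps them from being swamped by the soft, background-dominated region.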

3.2.3 Projections for the HL-LHC
As a next step in our study, we have determined the projected LHC potential at higher integrated luminosities of 30, 300 and 3000 fb⁻¹, with the last value posited as the ultimate benchmark for the HL-LHC. For this study, we made the following simplifying assumptions.
• The SM background to mono-jet searches at the HL-LHC will still be dominated by inclusive EW production of W and Z bosons, with strong production of tt̄ pairs being a minor background.
• The upgraded experiments will be successful in maintaining the physics performance demonstrated during Run 1 and Run 2, even in view of a much higher pileup, in the range PU = 140-200.
• The change from 13 to 14 TeV centre-of-mass energy will not change the kinematic distributions of the reconstructed objects in any significant way, neither for the SM background nor for the i2HDM processes.
• The overall analysis strategy will be kept very similar to that of Tab. 2. As such, shape, yield and uncertainty of both signal and background can be scaled to the desired luminosities.

Figure 14: Left: the blue dashed line is the combined contribution h_1h_1j + h_1h_2j that is still allowed by XENON1T data (see text). Right: expected and observed limits on the h_1h_2j process for 2.3 fb⁻¹ of 13 TeV pp collision data. The red solid line is the cross section for M_h2 = M_h1 + 1 GeV. The blue short-dashed line is the cross section for a full degeneracy between h_1, h_2 and h_c, where additional processes involving the charged scalar could mimic the h_1h_2j process. The cross section is plotted for values of M_h1 larger than ∼ 70 GeV to comply with the LEP bound on the charged Higgs mass [45]. In all cases the isolated symbols represent the benchmark points discussed in Tab. 1. All cross sections are given for a p_T^jet > 100 GeV requirement.
While the extrapolation of the signal distributions to the HL-LHC is a simple rescaling, the estimate of the tails of the $W/Z$ inclusive $p_T$ distributions is far from trivial. For the purposes of our study, we estimated the shape of the SM background directly from a simulation of $Z \to \nu\bar{\nu} j$ produced with CalcHEP, shown in the left panel of Fig. 15, while the normalisation is approximated by a rescaling of the CMS results, since the efficiency of the selection is assumed to be the same. Since the background is primarily estimated from data distributions in control regions, we expect the overall uncertainty in the $E_T^{\rm miss}$ prediction to follow approximately a $1/\sqrt{N}$ distribution. The right panel of Fig. 15 shows the relative errors in each bin from Tab. 3 as a function of the bin content. One can see that, indeed, they follow the aforementioned distribution, but with in addition a constant term ($\sim 0.6\%$) that can be understood to represent uncertainties that are not statistical in nature. We use the following equation for our bin-by-bin error estimate, adding the two contributions in quadrature: $\delta N_i / N_i = \sqrt{1/N_i + c^2}$, with $c \simeq 0.006$ (Eq. (10)). Our final background estimate for the extrapolation to the HL-LHC is therefore obtained through the following procedure.
• We find the shape of the $E_T^{\rm miss}$ distribution from the $pp \to Z \to \nu\bar{\nu} j$ process (Fig. 15).
• We normalise the histogram such that its integral $I_L$ in the range 200-1250 GeV is $I_L = N_{\rm events} \times L_{\rm target}/L_{2015}$, where $L_{\rm target}$ is the target luminosity (30, 300 or 3000 fb$^{-1}$), $L_{2015} = 2.3$ fb$^{-1}$ is the integrated luminosity of Ref. [70] and $N_{\rm events} = 61978.6$ is the total number of events in the aforementioned range, from Tab. 3. This normalisation approximates the efficiency of the CMS selection on the real SM background.
• We find the bin-by-bin errors according to the formula in Eq. (10).
This procedure guarantees that our background estimate has a reasonably correct shape, normalisation and uncertainty. Fig. 16 shows the background extrapolation for 30, 300 and 3000 fb −1 together with the errors. The signal shapes are the same as in Fig. 12 and, with these inputs, we then evaluate the expected limits for the values of integrated luminosity under consideration.
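The three-step extrapolation procedure can be sketched as follows. Only $L_{2015} = 2.3$ fb$^{-1}$ and $N_{\rm events} = 61978.6$ are quoted in the text; the five-bin $E_T^{\rm miss}$ shape below is a placeholder, and the error model assumes a relative error per bin of $\sqrt{1/N_i + c^2}$ with $c \simeq 0.6\%$, as described above.

```python
import math

L2015 = 2.3              # fb^-1, integrated luminosity of the reference data set
N_EVENTS_2015 = 61978.6  # background events in the 200-1250 GeV range at L2015
CONST_TERM = 0.006       # ~0.6% non-statistical uncertainty floor (assumed)

def extrapolate_background(shape, L_target):
    """Rescale a unit-normalised E_T^miss shape to the target luminosity
    and attach bin-by-bin errors following the quoted error model."""
    total = N_EVENTS_2015 * L_target / L2015   # normalisation of the histogram
    yields = [f * total for f in shape]
    # absolute error per bin: N * sqrt(1/N + c^2), i.e. statistical term
    # plus a constant relative term added in quadrature
    errors = [n * math.sqrt(1.0 / n + CONST_TERM ** 2) for n in yields]
    return yields, errors

# toy shape with five bins (fractions summing to 1), purely illustrative
toy_shape = [0.70, 0.20, 0.07, 0.02, 0.01]
yields, errors = extrapolate_background(toy_shape, L_target=3000.0)
```

At HL-LHC luminosities the statistical term becomes negligible in the most populated bins, so the constant term dominates there, which is why the non-statistical floor matters for the projection.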
In Fig. 17 we present the limits for the $h_1 h_1 j / h_2 h_2 j$ process, together with production cross sections versus $M_{h_1}$ for $\lambda_{345} = 0.019$ (red solid line) and $\lambda_{345} = 1.7$ (green short-dashed line), with $M_{h_2} = 200$ GeV for both values of $\lambda_{345}$. The blue dashed line is the combined cross section for $h_1 h_1 j + h_1 h_2 j$ production for $M_{h_2} = M_{h_1} + 1$ GeV and the maximal value $\lambda_{345}^{\rm max}$ allowed by XENON1T data. We find that, with 30 fb$^{-1}$ of integrated luminosity, one can exclude masses very close to $M_H/2$ for the maximum allowed value of $\lambda_{345}$. Further, with 3000 fb$^{-1}$ at the HL-LHC, one will be able to exclude the entire region $M_{h_1} < M_H/2$ for $\lambda_{345} = 0.019$. At the same time, one can see that for values of $\lambda_{345}$ allowed by XENON1T the LHC will not be able to probe $M_{h_1} > M_H/2$ with 3000 fb$^{-1}$ via the $h_1 h_1 j / h_2 h_2 j$ process, even if its cross section is maximised for $M_{h_1} \simeq M_{h_2}$. In Fig. 17 we also present the relevant benchmark points discussed in Table 1. One can see that BM1 and BM3, with large (but still experimentally allowed) Br$(H \to h_2 h_2)$ and Br$(H \to h_1 h_1)$ respectively, can be probed at the LHC at high luminosity. One should note that these benchmarks predict a too low DM relic density, requiring an additional source of DM. At the same time, the BM2 scenario, with a DM relic density in agreement with the upper and lower limits from the Planck collaboration, requires too low a value of $\lambda_{345}$, and correspondingly too low a Br$(H \to h_1 h_1) = 0.022$, to be observed at the LHC even in the high-luminosity stage. We would like to stress, however, that future DM DD experiments, including XENON, will be able to probe this benchmark since, as one can see from Table 1, its $\sigma^{\rm SI}_p$ is already close to the XENON1T exclusion limit.

Figure 17: The isolated symbols represent the benchmark points discussed in Table 1. All cross sections are always given for a $p_T^{\rm jet} > 100$ GeV requirement.
In Fig. 18 we present the limits for the $h_1 h_2 j$ process. Only for very high luminosity and for lower, nearly degenerate masses $M_{h_1} \simeq M_{h_2}$ might the LHC be sensitive to this process alone. It is important to stress once again that this process does not depend on $\lambda_{345}$ and is therefore very complementary to the Higgs boson mediated one. One should notice that, in the $M_{h_1} \simeq M_{h_2}$ region, the actual limit should be given by a combination of this process with the $h_1 h_1 j$ and $h_2 h_2 j$ ones. The $h_1 h_1 j$ and $h_2 h_2 j$ combination is trivial: we simply sum both cross sections, and the limit is given in Fig. 17. However, the combination with the $h_1 h_2 j$ process is not trivial, since the latter has a different $E_T^{\rm miss}$ shape and the relative weights of the $h_1 h_1 j / h_2 h_2 j$ and $h_1 h_2 j$ distributions ultimately depend on the value of $\lambda_{345}$. One should also note that the sensitivity of the LHC to the $h_1 h_1 j / h_2 h_2 j$ process is very limited for $M_{h_1} > M_H/2$, as one can see from Fig. 17, since XENON1T puts a very stringent upper limit on the $\lambda_{345}$ coupling. Therefore, the $h_1 h_2 j$ process is likely to be the unique one allowing the LHC to probe the i2HDM parameter space for $M_{h_1} > M_H/2$. If all (pseudo)scalar masses, $M_{h_1}$, $M_{h_2}$ and $M_{h^+}$, are similar, the LHC will be sensitive to $M_{h_1}$ up to about 100 GeV with 300 fb$^{-1}$ and up to about 200 GeV with 3000 fb$^{-1}$, as demonstrated in the right and bottom frames of Fig. 18, respectively. The red solid line in this figure gives the cross section for $M_{h_2} = M_{h_1} + 1$ GeV, while the blue short-dashed line is the cross section for the case when all inert scalars are close in mass ($M_{h_2} = M_{h_c} = M_{h_1} + 1$ GeV) and the processes with the charged scalar(s) mimic the signature of the $h_1 h_2 j$ process. Fig. 18 shows that benchmarks BM5 and BM6, with all inert scalars nearly degenerate, can be tested already with 300 fb$^{-1}$ of integrated luminosity, while BM1, BM3 and BM4, with nearly degenerate $h_1$ and $h_2$, can be excluded with 3000 fb$^{-1}$.
One can finally use the dependence of the cross section upon $\lambda_{345}$ to derive an exclusion region in the $(M_{h_1}, \lambda_{345})$ plane. Fig. 19 shows the excluded values of $\lambda_{345}$ as a function of $M_{h_1}$ for 3000 fb$^{-1}$. A mono-jet search at the HL-LHC will therefore exclude values of $\lambda_{345}$ larger than 0.011-0.02 for the range of masses $M_{h_1} < M_H/2$. For higher values of $M_{h_1}$, one would instead need a coupling value as large as $\lambda_{345} = 4.9$ in order to exclude $M_{h_1} < 100$ GeV. Also shown are the experimentally excluded regions from the invisible Higgs decay constraints, as well as the theoretically allowed maximum of $\lambda_{345}$ from vacuum stability.
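Since the Higgs-mediated $h_1 h_1 j$ rate scales as $\lambda_{345}^2$, translating an expected cross-section limit into an excluded coupling amounts to a square-root rescaling from a reference point. The numerical inputs below are hypothetical, chosen only to illustrate the scaling; they are not the values entering Fig. 19.

```python
import math

def excluded_lambda(sigma_limit_fb, sigma_ref_fb, lambda_ref):
    """Smallest excluded |lambda_345| given a cross-section limit and a
    reference cross section computed at lambda_345 = lambda_ref.
    Uses sigma proportional to lambda_345**2 for the Higgs-mediated signal."""
    return lambda_ref * math.sqrt(sigma_limit_fb / sigma_ref_fb)

# e.g. if lambda_345 = 1 gives a reference cross section of 50 fb (invented)
# and the expected limit at some mass point is 0.02 fb (invented):
lam = excluded_lambda(0.02, 50.0, 1.0)
```

This quadratic scaling is also why the excluded coupling degrades so quickly above $M_H/2$: once the on-shell $H \to h_1 h_1$ enhancement is lost, the reference cross section drops by orders of magnitude and the excluded $\lambda_{345}$ grows only as the square root of that suppression is compensated.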

Conclusions
In this paper, we have assessed the scope of the LHC in accessing a mono-jet signal stemming from the i2HDM, wherein the lightest inert Higgs state $h_1$ is a DM candidate, produced in pairs from gluon-gluon fusion into the SM-like Higgs $H$ and accompanied by (at least) one hard jet with transverse momentum above 100 GeV, i.e. a $h_1 h_1 j$ final state. The second-lightest inert Higgs boson $h_2$ can also contribute to a mono-jet signature, whenever it is degenerate enough with the lightest one that its decay products, produced alongside the $h_1$ state, are too soft to be detected. This can happen in $h_2 h_2 j$ (again produced by gluon-gluon fusion into the SM-like Higgs) as well as $h_1 h_2 j$ (induced by $Z$ mediation) final states.
Before proceeding to such an assessment, we have established the viable parameter space of the i2HDM following both theoretical and experimental constraints. The former are dominated by vacuum stability requirements whereas the latter are extracted from LEP, EWPT, LHC, relic density as well as LUX and, especially, XENON1T data, which greatly reduce the accessible volume of i2HDM parameter space. The established impact of XENON1T results is in fact one of the main results of our analysis.
Over the surviving i2HDM parameter space, we have defined several benchmark points, wherein $M_{h_1}$ varies from 55 to 80 GeV and $M_{h_2}$ lies between 1 and 55 GeV above it, and tested them against a CMS-inspired selection. However, in relation to the latter, we have adopted a somewhat orthogonal approach, as we have exploited the shape of the $E_T^{\rm miss}$ distribution (as opposed to a standard cut-based analysis). Through this, we have been able to establish a
better sensitivity to our signals than previously attained using the same amount of available data, of a few fb$^{-1}$. Furthermore, we have extrapolated such a sensitivity to much higher luminosities, typical of the end of Run 2, Run 3 and the high-luminosity LHC. By adopting an improved version of standard analysis tools (i.e. matrix element, parton shower and hadronisation generators as well as detector software), which further accounts for a K-factor correcting the EFT approach for the emulation of the explicit loop entering the $gg \to H$ process in the signal, and by a sophisticated background treatment, we have been able to establish that the advocated shape analysis has significant scope in constraining mono-jet signals induced by i2HDM dynamics.

Figure 19: Expected exclusion region on the $(M_{h_1}, \lambda_{345})$ plane for 3000 fb$^{-1}$. The curve corresponding to Eq. (7) is given by the red dashed contour whilst the expected result for 3000 fb$^{-1}$ is given by the black solid contour. Also shown are the limits from vacuum stability (hashed blue region, dotted contour) and the XENON1T direct detection search (shaded green region, dotted contour).
We have found that the $h_1 h_1 j$ (plus $h_2 h_2 j$) and $h_1 h_2 j$ processes are very complementary to each other in probing the i2HDM parameter space. The former covers the $M_{h_1} < M_H/2$ region and will allow one to put constraints on (in case of void searches) or else extract (in case of discovery) two fundamental parameters of the i2HDM entering the leading mono-jet process. These are the $h_1$ mass and the trilinear coupling $\lambda_{345}$ connecting the SM-like Higgs to the DM candidate pair. For example, for $M_{h_1} < M_H/2$, no values of $\lambda_{345}$ above 0.01-0.03 would be allowed in the case of no discovery. At the same time, this process is not sensitive to $M_{h_1} > M_H/2$ for values of $\lambda_{345}$ allowed by DM DD constraints. On the other hand, the $\lambda_{345}$-independent $h_1 h_2 j$ process can be used to probe the $M_{h_1} > M_H/2$ region of the parameter space via the mono-jet signature in the case $M_{h_1} \simeq M_{h_2}$. Moreover, the $h_1 h_2 j$ process has a slightly less steeply falling $E_T^{\rm miss}$ distribution than the $h_1 h_1 j$ one, because of the different mediator ($Z$ boson instead of Higgs boson), and correspondingly a slightly better LHC limit. If all the (pseudo)scalar masses, $M_{h_1}$, $M_{h_2}$ and $M_{h^+}$, are similar, the LHC will be sensitive to $M_{h_1}$ up to about 100 GeV with 300 fb$^{-1}$ and up to about 200 GeV with 3000 fb$^{-1}$.