Model Independent analysis of MeV scale dark matter: II. Implications from $e^-e^+$ colliders and Direct Detection

Dark matter particles with masses in the sub-GeV range have escaped severe constraints from direct detection experiments such as LUX, PandaX-II and XENON100, as the corresponding recoil energies are largely lower than the detector thresholds. In a companion paper, we demonstrated, in a model-independent approach, that a significant fraction of the parameter space escapes the cosmological and astrophysical constraints. We show here, though, that the remaining parameter space lends itself to the possibility of discovery both at direct detection experiments (such as CRESST-II) and at a low-energy collider such as Belle-II.


Introduction
The evidence for Dark Matter, at least as far as the manifestations of its gravitational interactions are concerned, has been continuously building up. Whether it be rotation curves in spiral galaxies [1], the observation of gravitational microlensing [2,3], observations of cluster collisions (Bullet Cluster) [4], or the temperature anisotropy in the spectrum of the Cosmic Microwave Background Radiation [5][6][7], there exists a large class of observations, spanning very different length scales, for which the Dark Matter (DM) hypothesis provides the most compelling explanation. And while efforts to circumvent particulate DM have been made, primarily through modifications of Einsteinian gravity at cosmological scales [8,9], neither can a single such modification explain all data, nor are theories incorporating such modifications necessarily distinct from models involving particulate DM [10].
On the other hand, no direct (i.e., laboratory) evidence for such DM particles has been forthcoming despite a large variety of experiments having been operative. All such efforts hinge upon the assumption that the DM would have some interaction with the Standard Model (SM) particles 1 . Such search strategies can be broadly categorized into three classes, namely a) satellite based indirect detection experiments like Fermi-LAT [11], PAMELA [12] and AMS [13], b) specialized terrestrial direct detection experiments and c) generic collider experiments. Despite occasional claims of anomalies in the data, putative positive sightings have never been validated by a different experiment, thereby leading to progressively stronger constraints on the parameter space of any theory of DM.
Most of the aforementioned search strategies have concentrated on a relatively heavy (i.e., heavier than a few GeV) DM particle. Indeed, indirect search experiments depend upon the annihilation of a pair of DM particles into SM particles, leading to aberrant cosmic rays (such as those generated by antiparticles like positrons or anti-protons), very high energy neutrinos, monochromatic photons or even an anomalous component of the continuous γ-ray spectrum. Corresponding particles from the annihilation of light DM particles would have energies typically well below the threshold of current satellite-based detectors. Similarly, in direct detection experiments, the scattering of a light DM particle off the target nuclei would, typically, impart too little a recoil to the latter to be distinguishable above the background (due both to thermal fluctuations and to the scattering of ambient neutrinos). Entirely analogous arguments hold for, say, the Large Hadron Collider, where the associated production of such DM particles would lead to a relatively small recoil of the visible particle system, with the consequent missing transverse energy spectrum being hardly distinguishable from that due to neutrinos (appearing in corresponding events in the SM background). In short, sub-GeV DM affords much more room as far as the constraints from canonical experiments are concerned.
These, as well as several other theoretical compulsions, have engendered much recent interest in sub-GeV DM particles [14][15][16][17][18]. In particular, towards the explanation of perceived anomalies in the 511 keV γ-rays observed by the INTEGRAL satellite, the cosmic γ-ray background at 1-20 MeV and the details of large scale structure, quite a few such models [19][20][21][22] have been invoked over the years. The wide plethora of physics scenarios that can, generically, lead to such ultralight particles makes the subject a fascinating one. And since standard methods do not work (the exception being those based on anomalous decays of certain mesons [23]), new methods need to be devised for exploring these. Indeed, quite a few diverse ideas have already been proposed, such as the absorption signal in the 21-cm spectrum [24,25], the scattering of DM off atomic clocks [26], the use of optical cavities [27], the use of leptonic beam-dumps [28], the use of a cryogenic point-contact germanium detector [29] or more canonical setups such as the LDMX [27].
In this paper, we examine, instead, the viability of searching for such light DM in an existing collider facility, namely Belle-II 2 . In a companion paper [31], hereafter designated Paper I, we have examined the cosmological constraints on such a DM paradigm. Here, we consider, primarily, the sensitivity reach for a host of different final states at Belle-II. The relatively low energy, the clean environment, and the high luminosity all work in our favour. While the insistence on low-energy might seem counterintuitive, we explicitly show the advantage thereof by comparing with the reach that would have been possible at LEP. Also considered are the prospects of Direct Detection experiments.

Higher Dimension operators
Rather than consider an intricate and ultraviolet-complete model, we take recourse to a model-independent approach, with the only assumption being that the light DM candidate ϕ is a spin-0 particle. While the effective field theory approach pertaining to our case has been detailed in Paper I [31], we recount it here for the sake of completeness. With the mediator connecting the dark sector to the SM particles considered to be heavy enough to be integrated out 3 , the only new relevant field is the scalar. Since we are interested in a DM with a mass of at most a few GeV, the only relevant SM states are the photon and the gluon, the leptons (including neutrinos) and the quarks of the first two generations. Furthermore, flavour changing operators are omitted so as to be trivially consistent with low-energy constraints.
Assuming SU(3) ⊗ U(1) em symmetry 4 , the lowest-dimensional operators are

O^f_s = (ϕ†ϕ)(f̄ f)/Λ ,  O^f_p = (ϕ†ϕ)(f̄ iγ₅ f)/Λ ,
O^f_v = (ϕ† i∂↔_μ ϕ)(f̄ γ^μ f)/Λ² ,  O^f_a = (ϕ† i∂↔_μ ϕ)(f̄ γ^μ γ₅ f)/Λ² ,   (2.1)
O_γ = (ϕ†ϕ) F_{μν} F^{μν}/Λ² ,  O_γ̃ = (ϕ†ϕ) F_{μν} F̃^{μν}/Λ² ,

where f is an arbitrary SM fermion and Λ is the scale of new physics. Note that the first two operators are dimension-5 while the rest are dimension-6; this difference would manifest itself in the experimental sensitivities. With the dimensionless Wilson coefficients C, corresponding to the various operators, being normalized to either zero or unity (denoting the absence or presence of the said operator), the results would be functions of the mass of the DM and the scale Λ alone. The translation to the parameter space of a UV-complete theory would, then, be a straightforward one.

2 While proposals for a full experimental search exist [30], these have been in the context of specific models, unlike in our approach. 3 Similarly, any other new species is also assumed to be too heavy to be relevant in the contexts of both terrestrial experiments/observations as well as the cosmological evolution of the relic density. 4 Had we imposed the full gauge symmetry of the SM instead, the first two operators, viz. O^f_{s,p}, would suffer a further suppression by a factor of v/Λ, where v is the electroweak symmetry breaking scale. We return to this point later.

Monophoton signal at Belle-II
In this section, we explore the sensitivity of the low-energy e − e + collider Belle-II to such a DM candidate. The relatively low center-of-mass energy ( √ s = 10.58 GeV), along with the high luminosities available (1-50 ab −1 ) in this experiment, renders Belle-II a very attractive theatre for the search of such light DM particles. It is instructive to examine this contention carefully. Purely on dimensional grounds, a typical cross section of interest driven by either of the first two operators in eq.(2.1) would scale, with the center-of-mass energy, as Λ −2 ln(s/m 2 e ). Similarly, those driven by the other operators in eq.(2.1) would scale as s Λ −4 ln(s/m 2 e ). On the other hand, the various components of the SM background would, naively, be expected to fall as s −1 ln(s/m 2 e ) or even faster. Thus, a larger center-of-mass energy would, seemingly, serve to increase the signal-to-background ratio. This is more than offset, though, by the nature of the signal and background. With the DM being stable and largely noninteracting, the signal final state would comprise a visible particle accompanied by missing energy-momentum. The latter, within the SM, accrues primarily from neutrino production (apart from the experimental effect 5 of having missed ostensibly visible particles). The corresponding rates fall dramatically as √ s falls well below M Z , and, in the regime of interest, would scale as G 2 F s ln(s/m 2 e ). In other words, the energy dependence of such background is the same as that for the dimension-6 operators, and, potentially, worse than that for the dimension-5 ones. This is what renders an experiment such as Belle-II a very interesting arena for the search of light DM candidates.
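The scaling argument above can be made concrete with a small numerical illustration. The sketch below (entirely our own; the normalizations are purely illustrative and the choice Λ = 100 GeV is arbitrary) encodes the quoted scalings for a dimension-5 signal cross section and for the neutrino background well below M_Z, and shows that their ratio improves as s decreases:

```python
import math

G_F = 1.166e-5   # Fermi constant, GeV^-2
m_e = 0.511e-3   # electron mass, GeV

def signal_dim5(s, lam=100.0):
    # dimension-5 operators: sigma ~ Lambda^-2 ln(s/m_e^2)
    return math.log(s / m_e**2) / lam**2

def nu_background(s):
    # neutrino background well below M_Z: sigma ~ G_F^2 s ln(s/m_e^2)
    return G_F**2 * s * math.log(s / m_e**2)

def ratio(s):
    # the logs cancel: the ratio falls like 1/s, favouring low energies
    return signal_dim5(s) / nu_background(s)
```

With these scalings, moving from √s = 200 GeV down to √s = 10.58 GeV improves the signal-to-(neutrino-)background ratio by a factor s_LEP/s_Belle ≈ 357, independent of Λ.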
While, at a given collider, the DM particle can be produced in many different processes, only a few of them are, potentially, of interest. With the DM particle being produced only in pairs, there must be at least one visible particle in the final state for the event to be triggered. The simplest such process is one where a single photon is emitted along with the pair of DM particles, viz.
e + e − → ϕ * + ϕ + γ ,   (3.1)

leading to an observable final state comprising a monophoton with missing energy-momentum. An obvious background to this is given by

e + e − → ν + ν̄ + γ .   (3.2)

In addition, final states where one has missed a putative visible particle can also contribute. 5 As we shall see later, this instrumental background tends to overwhelm that from neutrino production.
The leading such processes are 6

e + e − → γ + γ , with one photon missing,
e + e − → γ + γ + γ , where two of the photons are missing,
e + e − → e + e − + γ , where the leptons are missing.
In the above, "missing" implies that at least one of four conditions holds, namely (a) the energy of the said particle is below the threshold energy of the detector; (b) it travels along a path lying outside the angular coverage of the detector; (c) it is too close to another particle to be resolved, as a separate entity, by the detector; or (d) the particle is lost in the gaps between the electromagnetic calorimeter (ECL) segments and/or an endcap. Given the simple final state we are looking for, the third possibility is very rare indeed. Similarly, with a substantial cut on the missing energy-momentum, for (a) above to contribute would require the emission of multiple such particles, and has only a small probability. The dominant backgrounds, thus, are those in which the event contains a second, and, maybe, a third (or more) photon, which either fall(s) outside the angular coverage, or, more importantly, fall(s) within the detector but go(es) undetected into the gaps [32]. Hence, the total background composition is strongly dependent on the details of the detector geometry. However, inspiration may be drawn from the search for a Dark Photon [32], wherein the major backgrounds were found to arise from high cross section QED processes such as e + e − → e + e − γ(γ) and e + e − → γγ(γ) with all particles, except for a single photon, going undetected. In adopting the strategy and the background rates from ref. [32], care must be taken, though, to account for the fact that the study in ref. [32] was based on a dataset corresponding to an integrated luminosity of only 20 fb −1 .
As it would turn out, our simulations lead to a noticeably larger background count. Indeed, to suppress the backgrounds to their levels, we need to impose cuts stronger than theirs. Nonetheless, a comparison with ref. [32] constitutes a useful countercheck and we incorporate this in our study. We begin by briefly recounting the details of the experimental setup. The KEKB-II accelerator system collides a beam of e + with an energy of 4 GeV against an electron beam of energy 7 GeV. We consider the direction of the latter as the reference against which the polar angle is measured. For the Belle-II detector, we have
• ECL coverage: The electromagnetic calorimeter has an angular coverage of (12.4 • , 155.1 • ).
In other words, e ± , γ closer to the beampipe (in either direction) would not be registered.
• ECL gaps: In addition, the ECL has gaps between the endcaps and the barrel at polar angle ranges (31.3 • , 32.2 • ) and (128.7 • , 130.7 • ). Associated with extremely low detection efficiencies, particles falling in these gaps would not be registered and would essentially contribute to missing momentum.
• Energy threshold: For an e ± or γ falling within the 'live' part of the ECL to be visible, it should have energy greater than 0.2 GeV.
• Trigger: Furthermore, for an event to be triggered, at least 2 GeV of particulate energy needs to be deposited in the (18 • , 140 • ) window (other than in the dead zones). The corresponding trigger efficiency is 95%.
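The geometric and threshold requirements listed above can be collected into a single acceptance check, as one would do in a fast simulation. A minimal sketch (the function name and interface are ours; angles in degrees, energies in GeV):

```python
def photon_visible(E, theta_deg):
    """Rough Belle-II ECL acceptance for a single photon, using the
    numbers quoted in the text (coverage, barrel-endcap gaps, threshold)."""
    if E < 0.2:
        return False            # below the ECL energy threshold
    if not (12.4 < theta_deg < 155.1):
        return False            # outside the ECL angular coverage
    if 31.3 < theta_deg < 32.2 or 128.7 < theta_deg < 130.7:
        return False            # lost in a barrel-endcap gap
    return True
```

A photon failing any of these checks contributes to the missing energy-momentum rather than to the visible final state.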
No detector, of course, has infinite resolution. For the Belle-II detector, the energy resolution of the ECL is given by [33]

σ E /E = 0.066%/E ⊕ 0.81%/E 1/4 ⊕ 1.34% (E in GeV),

where the different components are to be added in quadrature and σ E represents a Gaussian smearing. The relative angular resolution is much finer, and is of little concern to us, as it contributes but little to the mismeasurement of momentum.
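In a fast simulation, such a finite resolution is typically implemented as an event-by-event Gaussian smearing. A minimal sketch, assuming the Belle-II ECL parametrization σ E /E = 0.066%/E ⊕ 0.81%/E^{1/4} ⊕ 1.34% (E in GeV) quoted in ref. [33]:

```python
import math
import random

def ecl_sigma_rel(E):
    # relative resolution; the three components are added in quadrature
    return math.sqrt((0.00066 / E) ** 2
                     + (0.0081 / E ** 0.25) ** 2
                     + 0.0134 ** 2)

def smear_energy(E, rng=random):
    # Gaussian smearing of the true photon/electron energy
    return rng.gauss(E, E * ecl_sigma_rel(E))
```

As expected, the relative resolution degrades towards the 0.2 GeV threshold and flattens out, at the ~1.3% constant term, for multi-GeV depositions.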

1D normalised distributions
Before we proceed further, let us examine the phase space distributions (for signal and background) for the leading visible photon in the final state, irrespective of the detector geometry details. In Fig.1, we display the normalized distributions in the center-of-mass energy E * , the transverse momentum p T and the scattering angle θ lab for both the signal (for a particular m ϕ ) and the background. Also presented is a scatter-plot corresponding to a particular double differential distribution. To facilitate an easier appreciation, we have deliberately switched off the initial and final state radiations as far as this figure is concerned.
Let us try to understand the distributions. To begin with, consider e + e − → γγ, where the photons, perforce, would have a center-of-mass energy of E * = √ s/2. This particular background (already small, as the second photon would be missed in only a small fraction of events) can, of course, be trivially eliminated by vetoing photons with E * close to √ s/2. As for the e + e − → 3γ background, this can be thought of as an additional photon being radiated off in the basic e + e − → γγ process. This immediately tells us why this background still peaks close to E * = √ s/2 for the leading photon (as is seen in Fig.1). The e + e − → e + e − γ process, on the other hand, essentially consists of a photon radiated off in Bhabha scattering and, hence, is dominated by relatively low-energy photons. As for the signal events, note that only for O γ does the cross section increase with E * . This is easy to understand, as the very structure of the matrix element mandates this growth, at least in the absence of cuts. As for the angular distributions of the backgrounds, clearly, in the center-of-mass frame, they would be highly peaked in both the forward and the backward directions. That they are more peaked in the forward (e − ) direction is an obvious consequence of the larger energy, in the laboratory frame, of the e − beam.
As we have already argued, the background processes are dominated by amplitudes where the photon leg(s) is (are) associated with soft and collinear singularities. Similar is the case for the signal processes corresponding to the fermionic operators (i.e., the first four) in eq.(2.1). Thus, the event distributions for all these cases would be dominated by final-state configurations with low-p T photons.

Figure 1. (a-c) Normalized 1-D differential distributions of the kinematic observables E * , p T and θ lab for the leading photon (highest p T ), corresponding to the various background processes (without initial and final state radiation) obtained after applying basic cuts. Also shown is the signal for the different operators, each corresponding to m ϕ = 100 MeV. (d) 2-D differential distribution for the total background.

For the last two operators in eq.(2.1), though, the DM particles come off the photon and, hence, the latter must be imbued with a non-negligible p T . The consequent distribution, expressed in terms of m 2 ϕϕ , the invariant mass of the (invisible) scalar pair (which is uniquely related to E γ ), is quite distinctive; for s ≫ 4m 2 ϕ , it can be trivially integrated over. This feature could, in principle, be used not only to enhance the signal-to-noise ratio, but also, in the event of discovery, to distinguish between the fermionic and the photonic operators.
Understandably, the normalized profile for the signal (e + e − → ϕ * ϕ + γ) events would look remarkably similar to that for the last-mentioned background, especially for very light DM. With increasing m ϕ , though, differences emerge, most notably at the photon-spectrum endpoint. We show the distributions only for O f s , O f v and O γ , as the ones for O f p , O f a and O γ̃ are, respectively, almost identical to those for the former operators.
Finally, in Fig.1(d), we present a background event scatter plot in the plane spanned by E * and θ lab . As we have already discussed, the background processes are dominantly concentrated in parts of the phase space where the photon either has low energy or is travelling reasonably close to the beam pipe. With the background being demonstrably small in the region 30 • < θ lab < 130 • , we use this as a selection cut, as has been advocated in the Belle physics book [32].
At this stage, we would like to point out that our simulations of the backgrounds, with ostensibly the same kinematic cuts, lead to a larger cross section than that presented in ref. [32]. For example, as a comparison of Fig.1(d) with Fig.204 of the said reference shows, the latter is almost totally bereft of the dense curved arm in the region π/2 ≲ θ lab ≲ 9/4 (in radians). Presumably, such events were excluded on the back of detailed detector-level simulations that have not been spelled out. In the absence of such information, we must accept the larger backgrounds as represented by Fig.1(d). We would, subsequently, seek to suppress these by the imposition of a strong cut on the photon transverse momentum, one that is absent in ref. [32]. This, however, would eliminate a non-negligible fraction of the signal events as well.

Selection Cuts
Based on the 1D distributions, we impose only a simple set of selection cuts: an event should contain one and only one photon, satisfying the E * , p T and angular (30 • < θ lab < 130 • ) requirements discussed above. These cuts are chosen to reject the background due to (a) the three-photon final state, where two of the photons are not registered; typically this is dominated by the case where one of the photons makes a relatively small angle with the beam pipe while the second makes a slightly larger angle but still falls outside the ECL coverage area, or hits one of the ECL gaps, and (b) analogous radiative Bhabha events with both e ± being missed similarly.

The analysis
Imposing the aforementioned cuts, we now perform a χ 2 -test, with the statistic defined as

χ 2 = Σ ij (N NP ij ) 2 / N tot ij .

Here, ij denotes a particular bin, with N NP ij (N tot ij ) being the number of signal events (total number of events) in the bin. With the photon being the only visible particle, we have only two independent phase space variables associated with the final state. In particular, we choose to work with the two-dimensional distribution defined by the center-of-mass energy E * and the laboratory-frame scattering angle θ lab , and divide the associated space into uniform bins of size 0.1 × 0.2 GeV each. With the SuperKEKB slated to deliver a peak luminosity of ∼ 8 × 10 35 cm −2 s −1 (or, an integrated luminosity of ∼ 8 ab −1 per year, for a nominal year of 10 7 s) [33], we consider a representative 7 value of the total integrated luminosity, namely L = 1 ab −1 , allowing us to obtain the corresponding reach/sensitivity of the experiment. This is displayed, in the form of 3 σ contours in the m ϕ -Λ plane, in Fig. 2. For comparison, we also present the reach obtainable using a simple S/ √ B statistic but using the considerably smaller background estimates of ref. [32]. It is intriguing that the two sets of contours differ by only about 25%.

Figure 2. 99.7% C.L. contours in the m ϕ -Λ plane. The left panels are obtained using the χ 2 analysis; the right panels use the simple S/ √ B criterion, but with the much smaller backgrounds of Ref. [32]. The upper (lower) panels correspond to an integrated luminosity of 1 (50) ab −1 .
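The binned test can be sketched as follows. We assume here the standard form χ 2 = Σ (N NP ) 2 /N tot , i.e. signal counts weighted by the Poisson error √(N tot ) of each bin; the function names and the Poisson-error assumption are ours:

```python
def chi2_binned(n_np, n_tot):
    # chi^2 summed over the (E*, theta_lab) bins: (signal)^2 / (total),
    # with the bin error taken as the Poisson sqrt(N_tot)
    return sum(s ** 2 / t for s, t in zip(n_np, n_tot) if t > 0)

def expected_counts(sigma_fb, lumi_ab, efficiency=1.0):
    # N = sigma * integrated luminosity * efficiency; 1 ab^-1 = 1e6 fb^-1
    return sigma_fb * lumi_ab * 1.0e6 * efficiency
```

As a cross-check of the quoted figures, a peak luminosity of 8 × 10 35 cm −2 s −1 over a nominal year of 10 7 s indeed integrates to 8 × 10 42 cm −2 = 8 ab −1 .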

Discussion
That the sensitivity to the operators O f s,p is much higher than that to the rest is but a reflection of the fact that the former are only dimension-five, while the rest are dimension-six. Similarly, the insensitivity to the parity structure (scalar versus pseudoscalar and vector versus axial vector) can be understood by realizing that, with √ s ≫ m e , these differences between the interactions would only have been manifested had we considered polarized beams. Indeed, were a signal to be established, the use of polarization would be invaluable in unravelling the underlying interaction.
The large sensitivity to the operators O f s,p renders this experiment one of the best for such small DM masses. The fall-off for m ϕ ≳ 3 GeV is, of course, expected on kinematic grounds. And while the sensitivities to the operators O f v,a , O γ and O γ̃ are not as high, they are still better than those achievable at other current collider experiments. Although, naively, it could be argued that new physics at Λ ∼ 250 GeV should have been visible at, say, the LHC, this is not necessarily true if the DM were hadrophobic. At a future high-energy linear collider, however, even such a scenario should be manifestly visible.

Monophoton searches at LEP.
Temporarily turning to other colliders, both present and past, we consider, next, the potential of each in this context. At the LHC, such a light DM would manifest itself essentially just as neutrinos do, but with considerably smaller cross sections (which would be further suppressed if the coupling of ϕ to the first-generation quarks is subdominant). Given that, at a hadronic collider as complicated as the LHC, there are numerous other sources of missing transverse momentum, the sensitivity is low indeed. Much the same was true for the erstwhile Tevatron too.
At LEP, though, the prospects were much better, courtesy of the extremely clean environment. Indeed, neutrino number-counting was one of the successes of the four experiments. The most sensitive test was the lineshape at the Z-peak, which led to N ν = 2.9840 ± 0.0082 [39]. This deficit, nominally, would impose a very strong bound on any extra sources of missing energy-momentum. However, this is clearly of little consequence in the present context, as the DM would not really manifest itself at the Z-peak and, hence, in the lineshape. On the other hand, monophoton searches (exactly analogous to what we propose at Belle-II) were sensitive indeed. In fact, such a search was also performed just below the Z-peak (primarily, when the collider was being ramped up) and constituted the first worthwhile neutrino number-counting exercise, leading to δN ν ∼ 0.1.
Much better bounds are available, though, from dedicated monophoton searches at LEP-II, where one looked for highly energetic photons in association with missing energy resulting from the process e + e − → γ + (invisible). The major background to this process is, obviously, e + e − → ν i + ν̄ i + γ which, unlike in the case of Belle-II, dominates over the instrumental background. In particular, we draw inspiration from a particular study at DELPHI [40] based on an integrated luminosity of 650 pb −1 at √ s between 180 GeV and 209 GeV.
To determine the sensitivity, we have executed an analysis similar to that in ref. [41], implementing detector efficiencies and resolution as detailed in ref. [42]. We, however, effect one simplification (one that facilitates both presentation and an easy understanding). Rather than simulate events for each of the actually implemented √ s values in the 180-209 GeV range, we consider the weighted (with the respective integrated luminosities) mean and simulate events only for √ s = 200 GeV. This reproduces very well the SM background as obtained in Ref. [40], except very close to the edge of the phase space, viz. x γ ≡ E γ /E beam ≈ 1.
The remaining small discrepancy is as much a consequence of our inability to effect a full detector simulation as that of the aforementioned approximation. To avoid such effects, we shall omit events with x γ > 0.9 from our analysis. For our simulations of the SM background as well as the signal, we use MadGraph5 [43] in conjunction with an appropriate implementation of FeynRules [44]. Having standardised this (by comparing with Ref. [40]), we use the latter (i.e., the DELPHI Monte Carlo) for the background events, thereby ensuring a very accurate rendition of the same. Before we decide on phase space cuts etc., we must decide on the triggers. At DELPHI, three different triggers were used to select single-photon events. Events with a photon with polar angle in the range 45 • < θ < 135 • were detected in the High Density Projection Chamber (HPC) with an energy threshold of E γ = 6 GeV. The trigger efficiency for photons in the HPC, in the analysis, was assumed to increase linearly from 52% at E γ = 6 GeV to 77% at 30 GeV, and then to 84% at 100 GeV. This trigger efficiency is then multiplied by the reconstruction and analysis efficiency, which was assumed to increase linearly from 41% at 6 GeV to 78% for E γ = 80 GeV and constant thereafter.
The Forward Electromagnetic Calorimeter (FEMC), located at 12 • < θ < 32 • and at 148 • < θ < 168 • , could accept events with a single photon of energy E γ > 10 GeV. The corresponding trigger efficiency increases approximately linearly from 93% at 10 GeV to 100% at 15 GeV and above; it is then multiplied by the analysis efficiency (related to reconstruction and event selection), which increases linearly from 57% at 10 GeV to 75% at 100 GeV. This has to be further multiplied by 89% to account for the additional loss of events due to noise and machine background. Very forward (3.8 • < θ < 8 • or 172 • < θ < 176.2 • ) photons with an energy threshold of 30 GeV produced a signal in the Small Angle Tile Calorimeter (STIC). Here, the trigger efficiency is assumed to be 48%, based on ref. [41], and is then multiplied by an overall analysis efficiency of 48%.
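The piecewise-linear efficiencies quoted above are straightforward to encode; a sketch using the HPC numbers (the helper names are ours, and we assume simple linear interpolation between the quoted anchor points, flat beyond the last one):

```python
import numpy as np

# HPC trigger efficiency quoted in the text:
# 52% at 6 GeV -> 77% at 30 GeV -> 84% at 100 GeV
_E_KNOTS = [6.0, 30.0, 100.0]
_EFF_KNOTS = [0.52, 0.77, 0.84]

def hpc_trigger_eff(E):
    return float(np.interp(E, _E_KNOTS, _EFF_KNOTS))

def hpc_analysis_eff(E):
    # reconstruction/analysis efficiency: 41% at 6 GeV -> 78% at 80 GeV,
    # constant thereafter
    return float(np.interp(E, [6.0, 80.0], [0.41, 0.78]))

def hpc_total_eff(E):
    # the trigger efficiency is multiplied by the analysis efficiency
    return hpc_trigger_eff(E) * hpc_analysis_eff(E)
```

The FEMC and STIC weights would be encoded analogously, with their own knots and overall multiplicative factors.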
In estimating the measured energy from the simulated one, we need to incorporate the resolution of the electromagnetic calorimeter; this is given in ref. [42] as a three-component expression (a constant term and terms scaling as 1/ √ E and 1/E, with E measured in GeV), the contributions being added in quadrature.
The errors due to the finite angular resolution were too small to be of any consequence.
Events with x γ < 0.06 fell below the trigger threshold and were not registered. On the other hand, in the region 0.7 ≲ x γ ≲ 0.9, the background is highly enhanced on account of the radiative return to the Z and, hence, contributes little to the sensitivity to new physics. And, as already explained, we altogether omit the x γ ≳ 0.9 window from our analysis. The rest of the phase space we divide into x γ bins of width 0.05 each and compare the simulation for signal events with the background as in Ref. [40]. To this end, we effect a χ 2 -test, with the statistic defined as

χ 2 = Σ i (N NP i / ∆N tot i ) 2 .   (4.2)

Here, i denotes a particular bin, with N NP i (N tot i ) being the number of signal events (total number of events) in the bin, and ∆N tot i is largely dominated by the SM background as in Ref. [40]. The χ 2 , thus calculated, can be translated to 3 σ contours in the m ϕ -Λ plane, as displayed in Fig. 3. Understandably, there is little dependence on m ϕ , far less than that at Belle-II. This is but a reflection of the fact that, for center-of-mass energies as large as that at LEP, a DM mass in the range we are interested in is virtually indistinguishable from zero. For very analogous reasons, the chirality structure of the fermionic current (or the difference between O γ and O γ̃ ) is immaterial. Similar to the Belle-II limits, the sensitivity to the operators O f s,p is much higher than that for the other operators. This, of course, is primarily due to the fact that the former are only dimension-five operators, while the others are dimension-six. Amongst the latter, naively, one would have expected O f v,a to lead to a lower sensitivity than the photonic ones, on account of the fact that the signal matrix element has a structure similar to that for the background 8 . On the other hand, processes due to O γ and O γ̃ suffer from an additional s-channel suppression, leading to smaller cross sections and, hence, to lower sensitivities.
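Because the signal amplitude carries a fixed power of 1/Λ (one for the dimension-5 operators, two for the dimension-6 ones), a binned χ 2 computed at one reference Λ fixes it everywhere, and the 3 σ contour can be found analytically. A sketch (our own helper; we take χ 2 = 9 as an illustrative one-parameter 99.7% C.L. threshold):

```python
def chi2_power(dim):
    # sigma ~ Lambda^-2 for dim-5 and Lambda^-4 for dim-6 operators;
    # chi^2 ~ (signal)^2, hence Lambda^-4 or Lambda^-8 respectively
    return 4 if dim == 5 else 8

def chi2_at(lam, lam_ref, chi2_ref, dim=5):
    # rescale a chi^2 computed at a reference Lambda to any other Lambda
    return chi2_ref * (lam_ref / lam) ** chi2_power(dim)

def lambda_bound(lam_ref, chi2_ref, dim=5, chi2_crit=9.0):
    # the largest Lambda still excluded at chi^2 >= chi2_crit
    return lam_ref * (chi2_ref / chi2_crit) ** (1.0 / chi2_power(dim))
```

The steep Λ^-8 scaling for the dimension-6 operators is why their contours sit at visibly lower Λ than the dimension-5 ones for comparable signal rates.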
Overall, it is easy to see that, despite LEP having the larger √ s, the limits obtained for Belle-II are much stronger. This is but a consequence of the much higher luminosity at Belle-II as well as the virtual absence there of the neutrino background.

Complementary Signals at Belle-II
Until now, we have considered only the monophoton final state, both at Belle-II and at LEP, primarily because it constitutes the simplest search strategy. Given the high luminosities achievable at Belle-II, it is, however, worthwhile to consider more complicated final states. Even accounting for a suppression of the cross section, as well as experimental issues (such as analysis efficiencies), these might yet lead to additional and nontrivial sensitivity. We now consider three such cases, each involving the sighting of one or more charged leptons ℓ (= e, µ) along with missing energy-momentum.
In particular, we turn our attention to three distinct cases, namely,
• Case I:

e + e − → e + e − + ϕ * ϕ ,   (5.1)

or, in other words, Bhabha scattering with a pair of DM particles being radiated off one of the four legs.
• Case II:

e + e − → µ + µ − + ϕ * ϕ .   (5.2)

While analogous to Case I above, this is simpler, and would turn out to be more sensitive than the former.
• Case III: This is very similar to Case II above, with the exception that only a single muon should be visible.
While an analogue of Case III could be defined with a single e − (or e + ) instead, it has a low sensitivity and, hence, we omit it. It should also be realized that each of Cases II and III could as well be defined with the tau lepton instead of the muon. Indeed, the analysis would be very similar, except for the fact that tau identification and/or reconstruction would be associated with a further loss in efficiency. Consequently, the results for the muonic channel can be trivially extended to the tauonic ones at the cost of the inclusion of such efficiency factors. The corresponding irreducible backgrounds arise primarily 9 from

e + e − → nγ + ℓ + ℓ̄ , with all the photons missing, and
e + e − → ν + ν̄ + ℓ + ℓ̄ ,   (5.4)

with the understanding that, for Case III, one of the two leptons should also go missing.
For the photons and the e ± , the requirements for one to be seen (or, equivalently, missed) remain, of course, as in Sec.3. The muons, though, escape the ECL and are caught, instead, by the KLM (the K L and muon detector), which consists of an alternating sandwich of thick iron plates (which also serve as the return yoke for the magnetic flux from the superconducting solenoid) and active detector elements (glass-electrode resistive plate chambers). Note, in particular, that, unlike in the case of the ECL, there are no gaps in the KLM and, hence, muons cannot escape the detector the way either of e ± , γ can. We generated the SM process e + e − → γ + ℓ + ℓ̄ with BabaYaga [35][36][37][38] and signal events with MadGraph5 [34], with the following basic cuts:
• Minimum total transverse momentum of the charged lepton(s) (single or both), taken to be 1.0 GeV,
• Minimum energy of the charged leptons, taken to be 0.5 GeV.
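The basic cuts listed above can be applied with a simple event filter; a minimal sketch (the interface is ours: each lepton is an (E, px, py, pz) four-vector in GeV, and we interpret "total transverse momentum" as the magnitude of the vector p T sum, which is our assumption):

```python
import math

def passes_basic_cuts(leptons):
    """Basic generator-level cuts quoted in the text, applied to the
    visible charged leptons of an event (single or both)."""
    # each charged lepton must carry at least 0.5 GeV of energy ...
    if any(lep[0] < 0.5 for lep in leptons):
        return False
    # ... and the vector-summed transverse momentum must exceed 1.0 GeV
    px = sum(lep[1] for lep in leptons)
    py = sum(lep[2] for lep in leptons)
    return math.hypot(px, py) > 1.0
```

For Case III, the same filter would be applied to the single visible muon after the acceptance checks have decided which lepton is missed.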

1D normalised distributions
To decide the cut strategy, we discuss next the normalized distributions for the various individual kinematic observables. For Case I and Case II, these are given in Fig.4 and Fig.5 respectively, where we have, for reasons of brevity, displayed only the leading and next-to-leading backgrounds. We discuss each case in turn. An understanding of this is best achieved by considering the lost photon. Discounting, for the time being, the gaps in the calorimeter, the photon can be lost only if it either goes down the beam pipe or has too small an energy. Since we require that the missing energy be substantial, the latter alternative is ruled out (unless the missing energy-momentum is shared by multiple missed photons, a final state with only a small production cross section). Recognizing that the ECL extends to smaller angles in the forward (e − ) direction than in the backward one, it is immediately obvious that the bending of the electron would, typically, be much smaller than that suffered by the positron (Figs.4(b,e)). Combined with the larger laboratory-frame energy of the initial e − beam, this translates to a smaller degradation, even in the center-of-mass frame, of the electron energy. On the other hand, the larger (smaller) initial energies of the e − and the e + , convoluted with the smaller (larger) scattering angles, imply that the transverse momenta are less dissimilar.
• While the discussion above encapsulates the leading behaviour of the background that owes itself to the "t-channel" part of the underlying Bhabha scattering (with photons having been radiated off), it does not explain all the features, in particular the secondary peaks. These, though, can be readily understood once one includes the "s-channel-like" diagrams.
• As for the signal events induced by the fermionic operators, these can be thought of as e − e + → e − e + X, where X denotes a pseudoparticle of variable mass m ϕϕ . Consequently, the kinematics would, to a large extent, be analogous to that for the background. However, the larger effective mass of the X ameliorates the strong forward-backward peaking to a significant degree. As for the differences between O s,p on the one hand and O v,a on the other, these can be traced to the tensorial nature of the operator corresponding to the pseudoparticle.
• Of particular interest is the distribution of the cone angle ∆R between the e ∓ momenta, defined, in terms of their separation in pseudorapidity and in the azimuthal plane, as (∆R) 2 = (∆η) 2 + (∆φ) 2 . As Fig.4 shows, the background is concentrated at larger values of ∆R, owing directly to its radiative origin. The signal events, though, are dominated by events with large m ϕϕ which, in turn, forces the e ∓ to be relatively closer. We can, therefore, profitably use this feature to enhance the signal-to-background ratio.
The same kinematic feature is also played up, to a smaller degree, in the distributions for the missing transverse momentum or the sum of the lepton energies.
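As an aside, the cone angle can be computed directly from the lepton three-momenta. The sketch below is a minimal illustration (the helper names and the example momenta are ours, not taken from the simulation), assuming only the standard collider definitions of pseudorapidity and azimuth:

```python
import math

def pseudorapidity(px, py, pz):
    """eta = -ln tan(theta/2), with theta the polar angle w.r.t. the beam (z) axis."""
    p = math.sqrt(px ** 2 + py ** 2 + pz ** 2)
    return 0.5 * math.log((p + pz) / (p - pz))

def delta_r(p1, p2):
    """Cone angle: (Delta R)^2 = (Delta eta)^2 + (Delta phi)^2."""
    d_eta = pseudorapidity(*p1) - pseudorapidity(*p2)
    d_phi = math.atan2(p1[1], p1[0]) - math.atan2(p2[1], p2[0])
    # wrap the azimuthal difference into (-pi, pi]
    d_phi = (d_phi + math.pi) % (2 * math.pi) - math.pi
    return math.sqrt(d_eta ** 2 + d_phi ** 2)

# Illustrative (hypothetical) e- and e+ momenta in GeV:
print(delta_r((1.0, 0.2, 3.0), (-0.8, 0.1, 2.0)))
```

The azimuthal wrap-around is the one subtlety: without it, two nearly collinear momenta straddling φ = ±π would be assigned a spuriously large ∆R.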

Case II : the µ − µ + + / E T final state
• Despite the lack of the "t-channel-like" diagrams, the distributions for the signal events (Fig.5) are not very dissimilar from those for the preceding case. This can be traced to the fact that the kinematic scale, in either case, is set largely by the mass (m ϕϕ ) of the pseudoparticle X.
• For the background events, though, the strong forward-backward peaking is ameliorated, leaving behind a muted dependence reminiscent of e − e + → µ − µ + . The remaining forward-backward asymmetry is but a consequence of the boosting of the center-of-mass frame.
• Another obvious consequence is the near-identical nature of the E * distributions for the µ ∓ .
• The preceding arguments (and a look at Fig.5) suggest that we are faced with a reduced difference between the shapes of the signal and background distributions and, hence, a reduced sensitivity. However, the last row of Fig.5 amply demonstrates that the differences in the missing p T , the E and, more particularly, the ∆R distributions persist.
Figure 5. As in Fig.4, but for the µ − µ + + / E T final state instead.

Case III : the mono-muon final state
With the basic process being similar to that of the preceding case, but with the restriction that only one of the two muons be visible, the kinematic observables reduce immediately to the same number as in the mono-photon case. With the kinematics being similar too, there exists a consequent similarity of the distributions (see Fig.6) with those for the mono-photon case (as presented in Sec.3). That they are not exactly identical is easily understood on realizing that the latter was dominated by "t-channel-like" diagrams whereas the present case has only "s-channel-like" ones.

Selection Cuts and Analysis
Cases I and II offer us multiple independent kinematic observables and, hence, the possibility of effecting a detailed multivariate analysis so as to enhance the signal significance. However, given the relatively small signal strength, and the level of sophistication of our event simulation (especially, the treatment of subtle detector effects), we deliberately desist from adopting such a course. Instead, we choose to enhance the signal to noise ratio through the imposition of cuts and, thereafter, attempt a far more conservative analysis of the data.
A careful perusal of the distributions motivates us to define the following selection cuts:
• Cut-1: An event should contain only one pair of oppositely charged leptons, with a missing transverse momentum / p T larger than 2 GeV.
• Cut-2: The total visible energy should be less than 5 GeV.

[Tables 1 & 2: cross sections (in pb) for the underlying background processes after the basic and selection cuts.]
While the background process with a single hard photon naturally starts out with a much larger cross section than that with two hard photons, it suffers more severely from the cuts (see Tables 1 & 2). This is easy to understand: with a single photon balancing the leptonic p T and, even more, the missing energy, it is very difficult for that photon to have escaped the detector by going either sufficiently forward or backward. Thus, it has to, essentially, fall into the ECL cracks. On the other hand, if the missing momentum were shared by two photons, there is a much higher probability of the event satisfying the selection cuts. It might seem, at this stage, that even higher-order processes such as e − e + → ℓ − ℓ + + 3γ could contribute non-trivially to the background. This, however, is not so. The addition of one further photon does not make it any easier for the photons to all miss detection while satisfying the two selection cuts. Rather, with the additional suppression by a factor O(α em ), the ensuing contribution is actually much smaller.
Having imposed the aforementioned selection cuts, we now construct a uniform two-dimensional grid in the ∆R-E ℓ plane. Comparing the signal and background event counts in each of the bins (sized 0.1 × 0.2 GeV), we perform a χ 2 test. The consequent exclusion contours are presented in Fig.7. Note the large improvement in sensitivity in going from an integrated luminosity of 1 ab −1 to 50 ab −1 , an improvement much larger than the corresponding one obtainable in the case of the monophoton signal. This only reflects the fact that, for the smaller luminosity, these processes are statistics-limited, whereas the monophoton signal quickly became systematics-limited. While the sensitivities, as shown by Fig.7, are systematically lower than that available from the monophoton case, note that the suppression factor is not too large, especially for the large-luminosity case. Thus, these channels do serve to provide additional information. It is obvious that combining the two channels would lead to even better constraints. Case III, on the other hand, shows weaker sensitivity.
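The binned comparison just described can be sketched numerically. In this minimal illustration, the event lists, the grid ranges and the Gaussian approximation to the per-bin statistics are our own simplifying assumptions, not the actual analysis chain:

```python
def fill_grid(events, dr_edges, e_edges):
    """Histogram (Delta R, E_l) pairs onto a uniform 2D grid."""
    nx, ny = len(dr_edges) - 1, len(e_edges) - 1
    grid = [[0.0] * ny for _ in range(nx)]
    dr_w = dr_edges[1] - dr_edges[0]
    e_w = e_edges[1] - e_edges[0]
    for dr, e in events:
        if dr_edges[0] <= dr < dr_edges[-1] and e_edges[0] <= e < e_edges[-1]:
            grid[int((dr - dr_edges[0]) // dr_w)][int((e - e_edges[0]) // e_w)] += 1.0
    return grid

def chi2(sig_grid, bkg_grid):
    """Gaussian-approximation chi^2 between the S+B and B-only expectations:
    sum over bins of S^2 / (S + B)."""
    total = 0.0
    for row_s, row_b in zip(sig_grid, bkg_grid):
        for s, b in zip(row_s, row_b):
            if s + b > 0.0:
                total += s * s / (s + b)
    return total

# Bins of 0.1 x 0.2 GeV in the (Delta R, E_l) plane, as in the text
# (the overall ranges [0, 5] are an illustrative choice):
dr_edges = [0.1 * i for i in range(51)]
e_edges = [0.2 * i for i in range(26)]
```

With real samples one would fill one grid per hypothesis and translate the summed χ² into an exclusion contour in the (m ϕ , Λ) plane.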

Additional Remarks
We now comment on the apparent lack of full SU (2) ⊗ U (1) symmetry of the O f s,p operators. As has been remarked earlier, this symmetry can be restored if we consider, instead, operators of the form given in eq.(6.1) and, analogously, for O f p . Here, H SM is the SM Higgs field. Post electroweak symmetry breaking, the relevant piece of the Lagrangian can be written down in terms of v = ⟨H 0 SM ⟩, the symmetry breaking scale. If we assume that ξ f is comparable to the usual fermion Yukawa coupling, C f s would be tiny for the light fermions, rather than O(1) as we have assumed. Consequently, the sensitivity to Λ is reduced enormously. Indeed, sensitivity to Λ > 10 GeV (a must for the effective theory paradigm to be valid at Belle) requires an integrated luminosity > ∼ 8 ab −1 . While Ref. [45] claims a much better sensitivity at low luminosities, note that it seeks to benefit from an enhanced coupling to the charm quark by looking at e + e − → ϕ + ϕ * + J/ψ or e + e − → ϕ + ϕ * + η c . Nonetheless, for an integrated luminosity of 50 ab −1 , the sensitivity of our (monophoton) mode is only a factor of ∼ 1.5 worse than that of Ref. [45], and hence this mode constitutes an important additional probe. Note, further, that the J/ψ (or η c ) modes are kinematically inaccessible for m ϕ > ∼ 3 GeV, and the monophoton mode would be the best bet for such masses.
Since the reduced sensitivity is a consequence of our having assumed that the ξ f are of the order of the usual Yukawa couplings, it is interesting to consider the opposite case of ξ f ∼ O(1) instead. Such a situation could transpire if the operator of eq.(6.1) were the result of some strong dynamics. For such a case, the consequent bounds on Λ would be only marginally weaker than those on the corresponding dimension-5 operator of eq.(2.1). A more interesting outcome of such a value for ξ f would be a four-body decay of the form H 0 → ϕ + ϕ * + f +f . While these rates are far smaller than those for the two-body decay modes, they are significantly larger than the SM rates for H 0 → ν i +ν i + f +f . Although they are still too small to have been identified at the LHC (with a further experimental complication on account of the spread in the invariant mass of the ff pair), it would be interesting to look for these as the integrated luminosity mounts.
Finally, if ϕ were a real scalar field (rather than a complex one) with identically defined couplings, the cross sections would be larger by a factor of two (owing to there being two identical particles in the final state). Consequently, for a given m ϕ , the constraints on Λ realϕ would be a factor of √ 2 (2 1/4 ) stronger for the analogues of operators O f s,p (O f γ,γ ).
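The factors quoted above follow from simple power counting, given that the fermionic (dimension-5) operators scale as Λ −1 in the amplitude while the photonic (dimension-6) ones scale as Λ −2 ; doubling the cross section at fixed m ϕ then rescales the excluded Λ accordingly:

```latex
\sigma_{s,p} \propto \Lambda^{-2}
  \;\Rightarrow\;
  \Lambda^{\text{real}}_{\min} = \sqrt{2}\,\Lambda^{\text{complex}}_{\min},
\qquad
\sigma_{\gamma,\tilde\gamma} \propto \Lambda^{-4}
  \;\Rightarrow\;
  \Lambda^{\text{real}}_{\min} = 2^{1/4}\,\Lambda^{\text{complex}}_{\min}.
```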

Direct Detection
The ambient (in the vicinity of the earth) density of DM can, in principle, be probed through their interaction with terrestrial detectors. The sensitivity, of course, would be dependent not only on the experimental configuration, but also on the profile of the DM distribution in the immediate neighborhood. Several such profiles, defined not only in terms of the density but also in terms of velocity, have been extensively studied in the literature [46,47]. Fortunately though, the consequent differences are stark only close to the galactic center, while at the periphery, the experimental expectations are quite similar. Consequently, for the rest of the section, we will make use of the following standard assumptions: a Maxwell-Boltzmann velocity distribution with its high-velocity tail truncated at the galactic escape velocity of 544 km/s, a local velocity dispersion of the DM halo, v rms = 270 km/s, and a local dark matter density of 0.3-0.4 GeV/cm 3 .
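These halo assumptions translate into a definite speed distribution, f(v) ∝ v 2 exp(−3v 2 /2v 2 rms ) truncated at the escape velocity. The short numerical sketch below is our own illustration (the integration scheme and the derived mean speed are not from the text); it simply encodes the benchmark numbers quoted above:

```python
import math

V_RMS = 270.0   # km/s: local velocity dispersion of the DM halo
V_ESC = 544.0   # km/s: galactic escape velocity (truncation point)

def f_unnorm(v):
    """Maxwell-Boltzmann speed distribution, truncated at the escape velocity."""
    return v * v * math.exp(-1.5 * (v / V_RMS) ** 2) if v < V_ESC else 0.0

def moment(n, steps=20000):
    """Midpoint-rule integral of v^n * f(v) over [0, V_ESC]."""
    h = V_ESC / steps
    return sum(((i + 0.5) * h) ** n * f_unnorm((i + 0.5) * h) for i in range(steps)) * h

mean_speed = moment(1) / moment(0)   # mean DM speed (km/s) under these assumptions
print(round(mean_speed, 1))
```

Because the truncation sits well out on the exponential tail, the normalization and moments differ only mildly from their untruncated values.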
Such Direct Detection experiments, typically, involve the use of a bolometric device working at a very low temperature (so as to eliminate thermal noise to the maximum extent possible). With the WIMP-nucleus cross sections being much larger than WIMP-electron ones (for similar-sized couplings), these experiments are much more sensitive to the former, to the extent of often neglecting bounds on the latter. In particular, nucleon collision cross sections for a DM with a mass in the 0.5-5 GeV range are severely constrained by the CRESST-II [48] experiment. On the other hand, for masses below 1 GeV, the typical recoil energy is lower than the detector thresholds, rendering such experiments quite insensitive.
In case the DM is hadrophobic, its interaction with the detector material would proceed primarily through its interaction with the electrons therein, and it is with this that we begin the section. We end our analysis, however, with a study of the nucleon-DM interaction, which is relevant if the DM interacts with quarks as well.

DM scattering off electrons
Even for masses below 1 GeV, DM scattering off electrons can lead to a measurable signal. Consequently, many experiments (including those for which the primary mode is a different one) have investigated this, for example, in the context of inelastic electron scattering leading to the ionization of atoms. For semiconductor targets, the excitation of an electron to above the band gap is also of interest. Sensitivity to such processes has been studied in the context of the XENON10 detector in Ref. [49]. The rates for such processes depend on three factors: the ionization form factor, the elastic WIMP-electron cross section and the density profile for the DM particle.
To begin with, we consider the leading elastic WIMP-electron cross sections for the operators in eq.(2.1). These are given in eq.(7.1), where the Wilson coefficients C e γ,γ (for C γ,γ = 1) are both given in eq.(7.3) (see Appendix B for details). The loop has to be calculated using a gauge-invariant regularization procedure, such as Pauli-Villars or the dimensional method (calculating in 4 − ǫ dimensions), and we choose to use the latter. The ubiquitous factor of (2/ǫ − γ E ) has been traded, as usual, for the logarithmic factor. Note that, compared to the electron-DM operators in eq.(2.1), these have an extra factor of m e /Λ. The overall factor of Λ −2 is, of course, a legacy of the "tree-level" ϕϕ * γγ parent. The factor of m e , on the other hand, appears as a chirality flip is essential for the fermions to couple to the (scalar) DM. The Wilson coefficients, understandably, depend on the momentum transfer q. It has been argued [50] that the appropriate scale is that operative for atomic transitions, namely q = α em m e . Note that such a choice also serves to enhance the sensitivity of these experiments. The calculation of the elastic WIMP-electron cross section is now straightforward, with an analogous result for Oγ. Using values of (m ϕ , Λ) that exactly reproduce the Planck measurement of the relic density (see Fig.5 of [31]), we present, in Fig.8(a), the DM-electron elastic cross sections. Also presented, for comparison, are the XENON10 results. It might seem paradoxical that the cross sections for O f v are larger than those for O f s despite the p-wave suppression (see eq.7.1). However, note that Λ(O f v ) ≪ Λ(O f s ), owing to the corresponding p-wave suppression in DM annihilation. This more than makes up for the extra factor of m 2 ϕ /Λ 2 in eq.(7.1).
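For orientation, this atomic-transition reference scale is numerically tiny. A one-line check, using the standard values α em ≈ 1/137 and m e = 511 keV (which are not quoted in the text):

```python
ALPHA_EM = 1.0 / 137.036   # fine-structure constant
M_E_KEV = 511.0            # electron mass in keV

q_keV = ALPHA_EM * M_E_KEV  # atomic-transition momentum scale q = alpha_em * m_e
print(round(q_keV, 2))      # a few keV, far below m_e itself
```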
Note that we have used Λ-values corresponding to the case of the democratic coupling. Had we considered a leptophilic DM instead, the value of Λ max for each of O f s,v would have been smaller by a factor ranging between 1.5 and 2 for the m ϕ values of interest. This would translate to an increase in the cross sections. However, as Fig.8 shows, the cross sections for O f s would still continue to lie below the XENON10 level. On the other hand, those for O f v would start being comparable to the experimental upper bounds; in particular, the range ∼5-150 MeV is already ruled out for such a theory. It should also be remembered that while the relic density constraint imposes an upper bound on Λ, the direct detection experiments impose a lower one. Thus, with only a little improvement, these experiments would start to rule out the parameter space allowed by the relic density.
A caveat needs to be entered here. In calculating the effective Wilson coefficients C e γ,γ , we made two key assumptions in choosing the cutoff scale and in setting the momentum scale. Both the particular choices served to maximize C e γ,γ while remaining within the ambit of effective field theories. Uncertainties in these scales (related, as they are, to the ultraviolet completion) can significantly relax the bounds obtained from the non-observation of any signal at XENON10 .

DM scattering off nucleons
For a DM with a mass greater than 0.5 GeV, the parameter space can be constrained using the negative results of the CRESST-II experiment. To this end, we begin by evaluating the nuclear matrix elements N |O f |N for all the operators at a scale µ = 1 GeV. This leads to an expression in terms of m N , the mass of the nucleon. The induced coupling constants F s,N and F v,N parametrizing the effective DM-nucleon interaction are given in terms of m q , the mass of the quark, and f N q , the proton form factors. The latter can be calculated within various different frameworks, with chiral perturbation theory giving some of the best results. For example, the proton form factors, as calculated at µ = 1 GeV [51], are f p u = 0.017, f p d = 0.036, f p s = 0.043 and f p c,b,t = (2/27) (1 − Σ q=u,d,s f p q ) = 0.067. As for interactions mediated by photon(s), the DM can scatter off a single nucleon as well as off an entire nucleus [52,53]. Let us begin by focussing on the former, especially on the DM-nucleon interaction generated on account of a DM-photon vertex. At the one-loop level, the effective operators O q s,p (eff.) would be generated, just as in the case of the electrons (see eq.7.2). The expressions for the corresponding Wilson coefficients C q γ,γ would be exactly identical to those in eq.(7.3), apart from a multiplicative factor of Q 2 q , where Q q is the charge of the quark under consideration. Quite apart from this, the DM can also interact with the entire nucleus via the exchange of two virtual photons [53]. This coherent scattering implies that the (two-photon) amplitude must scale as Z 2 , where Z is the atomic number of the nucleus under consideration. Scaling down this amplitude by a factor of A (the atomic weight) would then give us the average nucleon-DM amplitude. Using the results of Ref. [53], the induced operator for DM-nucleon Rayleigh scattering can, then, be parametrized in terms of q̄ ≡ q/Q 0 , with q being the momentum transfer and Q 0 the nuclear coherence scale.
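The heavy-quark value quoted above is fixed by the light-quark ones through the relation f p c,b,t = (2/27)(1 − Σ q f p q ); a quick numerical check of the quoted numbers:

```python
# Light-quark proton form factors at mu = 1 GeV, as quoted from Ref. [51]
f_p = {"u": 0.017, "d": 0.036, "s": 0.043}

# Heavy quarks (c, b, t) share the remainder via the (2/27) factor
f_heavy = (2.0 / 27.0) * (1.0 - sum(f_p.values()))
print(round(f_heavy, 3))  # → 0.067, as quoted in the text
```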

The two amplitudes, namely those due to O q s (eff.) and O Ray , add coherently and, together, yield the total cross section. The CRESST-II experiment uses cryogenic detectors to search for nuclear recoil events induced by the elastic scattering of dark matter particles in calcium tungstate crystals. With the DM-nucleus scattering cross section being proportional to m 2 A , the naive expectation is that the dominant contribution to the scattering off a CaWO 4 molecule would be that due to the tungsten nucleus. On the other hand, the energy transferred in a scattering event is approximately q 2 /(2m A ), where the transferred momentum q ≈ m ϕ v ϕ . For large m A , this would fall below the detector threshold energy, which, in this case, is ≈ 307 eV. Consequently, a large fraction of the events corresponding to scattering off tungsten, and a slightly smaller (yet large) fraction of those off calcium nuclei, would not register. Acting in concert with this is the fact that there are four times as many oxygen nuclei as the others. Thus, using Z = 8 in the formulae above is a very good approximation and is in very good agreement with the simulations for DM-nucleus scattering given in Fig.7 of Ref. [48].
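The threshold argument can be made quantitative. For elastic scattering, the maximum recoil energy is E R max = 2µ 2 v 2 /m A , with µ the DM-nucleus reduced mass. The sketch below is our own estimate (natural units; the assumed DM speed of ≈ 780 km/s, roughly the escape velocity plus the Earth's motion, is an illustrative choice), showing why, for a ∼GeV DM, the oxygen nuclei dominate the registered events:

```python
AMU = 0.9315            # GeV per atomic mass unit
V_DM = 780.0 / 3.0e5    # assumed maximal DM speed, in units of c

def e_recoil_max_eV(m_dm_GeV, mass_number):
    """Maximum recoil energy E_R^max = 2 mu^2 v^2 / m_A, returned in eV."""
    m_A = mass_number * AMU
    mu = m_dm_GeV * m_A / (m_dm_GeV + m_A)   # DM-nucleus reduced mass
    return 2.0 * mu ** 2 * V_DM ** 2 / m_A * 1e9

# CaWO4 constituents, for m_phi = 1 GeV (CRESST-II threshold ~ 307 eV):
for name, A in [("O", 16), ("Ca", 40), ("W", 184)]:
    print(name, round(e_recoil_max_eV(1.0, A), 1), "eV")
```

Under these assumptions, oxygen recoils comfortably exceed the threshold, calcium is marginal, and tungsten falls well below it, in line with the discussion above.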
The consequent size of the Rayleigh scattering contribution to the amplitude is smaller as compared to that induced by O q s (eff.). Moreover, they interfere destructively. Using values of (m ϕ , Λ) that satisfy the relic density measurements (Fig.5 of [31]), we present the DM-nucleon elastic cross sections for all the operators in Fig 8(b). As is immediately apparent, constraints from the CRESST-II results are very weak for m ϕ < ∼ 400 MeV and essentially irrelevant. This is but a reflection of the fact that such light DM particles cannot transfer sufficient energy to the nucleon for the event to register. Moreover, even for m ϕ > ∼ 400 MeV, the operators O γ,γ continue to escape the bounds from CRESST-II. So would be the case for fermionic operators that do not involve the light quark fields (note that the DM need not be hadrophobic, per se). And, once again, caveats such as that in the preceding subsection do hold; in fact, even more so on account of uncertainties in the calculation of hadronic matrix elements at such low momentum transfers.

Conclusion
In this, the second of a two-part investigation of the interactions of a light (MeV scale) scalar DM particle with the SM sector within the framework of an effective field theory, we effect a systematic study of the sensitivity of existing experiments of a varied hue to such a DM particle. In doing so, we were guided by our analysis-presented in the first paper [31]-of the cosmological constraints. Encompassing not only the relic density constraints but also those from the requirements that the annihilation of the DM significantly alter neither the ratio of the neutrino and photon temperatures nor the shape of the CMB spectrum, these calculations took cognizance of the fact that sub-GeV DM particles, on annihilation into colored fields, can only manifest themselves in bound states rather than, say, quasi-free quarks. Considerations such as these, on the inclusion of higher-order effects in bound-state dynamics, result in a non-trivial shape of the allowed parameter space of the EFT, and this is what we have considered here.
With the LHC being unsuited to the investigation of such light states, we take recourse to the clean environment of the high-luminosity KEK-B accelerator that is already in operation. A DM particle with unsuppressed couplings to the electron or to the photon can be looked for at an e ± collider. As the DM particles can only be produced in pairs, and as there must be at least one visible particle in the final state, the simplest process is e − e + → ϕϕ * γ, i.e., a photon accompanied by missing energy-momentum. In fact, at a low-energy facility such as Belle-II, this is indeed the most sensitive channel 10 . Analysing the sensitivity of different DM-SM interactions through the two-dimensional differential kinematic distributions corresponding to this channel, we find that most of the parameter space allowed by the relic density can be probed at Belle-II. Furthermore, based on a χ 2 comparison of the one-dimensional normalized differential kinematic distributions (given in Fig.1) corresponding to an integrated luminosity of 1-50 ab −1 , we find the sensitivity to be robust and pronounced.
While the sensitivity limits obtained here are independent of the details of the ultraviolet completion, they are certainly dependent on the tensorial structure of the effective current-current interaction. In particular, in the event of a positive signature, the phase space distributions would distinguish between different interaction Lagrangians. Moreover, not only can Belle-II discover thermal DM candidates in the parameter range that is allowed as of now, it can also access parameter space that is not amenable to a thermal DM explanation. In this sense, a discovery by Belle can, in principle, shed light on the mechanism of DM production and sustenance in the early universe.
Not limiting ourselves to the mono-photon channel alone, we also examine other final states, namely e − e + → ϕϕ * ℓ⁺ℓ⁻, where ℓ is an electron or a muon. Owing to the smaller cross section, the sensitivity is statistics-limited, and is weaker than that available to the mono-photon channel, especially for low luminosities. However, once the design integrated luminosity is reached, these additional channels are not much worse off. More interestingly, in the event of a discovery, these complementary channels could be very useful in unravelling the tensorial structure of the couplings and, hence, act as a pointer to the UV completion.
Each of the final states discussed above could also have been investigated at the LEP. Looking at the published data and archival analyses, we find that while the LEP studies in the mono-photon channel could be easily reinterpreted in terms of the EFT parameters, the constraints so derivable were much weaker than those obtainable at Belle-II. The main contributing factor, of course, is the much higher luminosity at the KEK-B.
Finally, in a fashion similar to collider experiments, direct detection experiments such as XENON100 or CRESST-II can also be used to constrain the effective Lagrangian for the DM. While a naive reading of the negative results at the latter experiment would seem to suggest that a DM governed by this effective Lagrangian and reproducing the correct relic density must satisfy m ϕ < ∼ 0.5 GeV, this conclusion too can be evaded, for example, if the DM were hadrophobic. On the other hand, the channel at Belle-II that we propose would be unaffected by such an assumption. In other words, such experiments offer a welcome complementarity of sensitivities.
Figure 9. Typical Feynman diagram generating a ϕ * ϕf f̄ coupling from the "tree-level" ϕ * ϕγγ coupling.
the integral is given by eq.(A.1), where we have used C γ = 1 and Cγ = 0. Using Feynman parametrization, we may combine the propagators. For the sake of simplicity, we have set the external fermions to be on-shell (p 2 2 = p 2 4 = m 2 ).
Thus, the integral can be written in the form above. Quite expectedly, it is quadratically divergent. Consequently, we effect a dimensional regularization, working in d (= 4 − ǫ) dimensions. We can, then, effect a Wick rotation (k µ → k Eµ ≡ l µ ) followed by a shift of the integration variable. Retaining only those terms in the numerator that lead to divergent terms, we find that a quantum of the operator O f s is generated. Here, the factor of m can be understood from the need to effect a chirality flip. It should be noted that even O f v would be generated, and without the factor of m. However, this would arise only from the finite pieces of the integral and, hence, would be suppressed. Effecting the usual replacement for the (2/ǫ − γ E ) factor, we finally obtain the stated result, where µ is the momentum scale of interest. Had we started from Oγ with Cγ = 1 instead, we would have obtained the analogous expression. Once again, a suppressed but nonzero C f a (eff.) would also be generated. In a similar vein, starting with the fermion operators, one could analogously generate C γ,γ (eff.) as well.