Semi-Annihilation of Fermionic Dark Matter

The continued non-observation of events emanating from dark matter (DM) annihilations in various direct and indirect detection experiments calls into question the mechanism for determining the relic density of a weakly interacting massive particle. However, if the relic density is determined primarily by a semi-annihilation process, as opposed to the usual annihilation, this tension can be ameliorated. Here, we investigate a Z_3-symmetric effective field theory incorporating a fermionic dark matter that semi-annihilates to right-handed neutrinos (RHNs). The dynamics of the RHN and the impact of its late decays are also scrutinised while obtaining the correct DM relic density. Finally, indirect detection bounds on the semi-annihilation cross-sections are drawn from the gamma-ray observations in the direction of dwarf spheroidal galaxies (Fermi-LAT), together with the projections obtained for the H.E.S.S. and CTA detectors.


Introduction
Despite its stupendous success, the Standard Model (SM) leaves many questions unanswered and, in this article, we aim to address two of them in a unified way. One of these pertains to the existence of Dark Matter (DM) and the other to the problem of neutrino masses. While both questions have been addressed independently, and, on occasion, even together, we consider a mechanism for establishing the DM relic density that is relatively less explored. Simultaneously, this mechanism would be seen to naturally yield light neutrino masses of the right order, accommodating equally well both the normal and the inverted hierarchy.
A popular mechanism for generating neutrino masses is to invoke the Type-I seesaw mechanism via the introduction of the right-handed neutrinos (RHNs) [1][2][3][4][5]. Being SM gauge singlets, these rarely play any discernible role in laboratory physics other than to generate neutrino masses and mixings and, hence, this sector can hardly be probed, whether directly or indirectly. Attempts have been made to ascribe to these a role in determining the DM relic density, whether through freeze-out or the freeze-in mechanisms [6][7][8][9][10][11][12][13]. In such freeze-out scenarios, the DM particles pair-annihilate primarily to these RHNs which, thereafter, decay to SM particles. Since the RHN has to be lighter than the DM for this to take place, unless the DM itself is very heavy, correct neutrino phenomenology would demand that the neutrino-sector Yukawa couplings be small, thereby delaying the decay of the RHN and, hence, the freeze-out epoch [14][15][16]. Analogous effects are evinced in the freeze-in mechanism as well [17].
The situation takes an interesting turn if the symmetry ensuring the absolute stability of the DM is not the simple and popular Z_2. A more complicated symmetry (such as the Z_3 or Z_5) may dictate that the leading interaction is not a pair-annihilation of the DM (whether directly into SM particles or other exotics), but one where, say, three DM particles D must be involved [18,19]. This could manifest itself in two different ways, an annihilation 3D → Σ_α S_α (where S_α denote Z_3-neutral particles, whether within the SM or exotic) or one of the form D + D → D* + Σ_α S_α. The first mechanism, seemingly, is very efficient in reducing the DM density. It, however, is associated with low cross sections, simply on account of the initial-state flux factors, and is relevant only for a very light DM particle [18,20,21]. The second process, on the other hand, reduces the number of DM particles by only one, instead of the customary two in the case of the usual pair-annihilation processes. With the ensuing slow-down of the freeze-out process, the couplings required, in either case, to reach the observed relic density [22] would be markedly different, leading to differing consequences for the direct or indirect detection experiments.
The semi-annihilation process, as the leading mechanism, has been studied in the context of both scalar [23][24][25][26][27][28] and vector [26][29][30][31] DM particles. For fermionic DM, though, such a semi-annihilation process requires the participation of at least one more fermion and, in a UV-complete theory, at least one additional boson [26,32]. Gauge invariance is most easily ensured if the extra fermion in question is a singlet under the SM, and this brings into focus the possibility that it is nothing but (one of) the aforementioned RHNs.
To this end, we concentrate on a fermionic DM D whose relic abundance is determined primarily by semi-annihilation (D D → D^c N_i) into RHNs in a Z_3-symmetric framework. Rather than focus on a specific construction, we adopt a model-independent approach based on effective operators. Almost irrespective of the UV-completion (we offer one in an appendix), the RHNs further decay or annihilate into SM particles. The Yukawa couplings responsible for these decays are allowed a wide range of values; in particular, small values can delay the freeze-out, with interesting cosmological ramifications.
It has been argued that models with DM annihilation to RHN are severely constrained by direct and indirect detection [10,14] experiments. We investigate these aspects in the context of our model and find that while direct detection constraints are very weak, the measurement of energetic photon fluxes in the Fermi-LAT [33] and H.E.S.S. [34] experiments do serve to constrain the parameter space. The viability is comparable to (or even better than) models wherein a bosonic DM undergoes semi-annihilation [23].
The rest of the paper is organised as follows. In section 2, we describe the scenario and delineate the effective Lagrangian at the lowest order. Section 3 examines semi-annihilation as well as the other relevant processes and thereby obtains the DM relic abundance for various regions of the parameter space. Bounds on the semi-annihilation cross-sections from Fermi-LAT and H.E.S.S., as well as sensitivity projections for the CTA, are obtained in section 4. We present our conclusions in section 5. While a prospective UV completion is described in Appendix A, calculational details are presented in Appendices B and C respectively.

The Model
We augment the SM with two kinds of fermions, the RHN 1 N_i (i = 1, 2, 3) and the Dirac field χ for the DM. While all the new fermions are SM-singlets, we ascribe a nontrivial transformation to the χ-field under a Z_3, namely

χ → e^{2πi/3} χ ,  N_i → N_i ,  SM → SM .   (2.1)

The N_i may have Majorana masses as well as lead to Dirac masses through Type-I Yukawa couplings. The corresponding Lagrangian may be parameterised as

L_N = − (y_N)_{ij} L̄_i H̃ N_j − (1/2) (m_N)_{ij} N̄^c_i N_j + h.c. ,   (2.2)

where L_i and H are, respectively, the SM lepton doublets and the Higgs field. Without any loss of generality, the (symmetric) Majorana mass matrix m_N may be considered to be diagonal. After electroweak symmetry breaking, the usual Type-I seesaw mechanism generates a mass matrix for the light neutrinos of the form

m_ν = − (v²/2) y_N m_N^{−1} y_N^T ,   (2.3)

where v = 246 GeV. The matrix m_ν, on diagonalization, would yield the light neutrino masses and mixings. With the N_i being heavy, their mixings with the light species are tiny indeed. Nonetheless, for m_N ≳ 100 GeV, the small mixing still allows for prompt decays 2 such as N_i → ℓ_i^∓ + W^± and N_i → ν_i + Z/h, with the branching fractions scaling approximately as 2 : 1 : 1. Since we would typically be interested in the range 100 GeV ≲ m_N ≲ 1500 GeV, the Yukawa couplings required to explain the neutrino masses would lie in the range 10^{−7} ≲ y_N ≲ 10^{−6}. Precise choices for the individual couplings can always reproduce the observed masses and mixings, for both normal and inverted hierarchies. However, since this is not the primary goal, we desist from such an exercise, limiting ourselves, instead, to admitting only such couplings as would produce neutrino masses of the right order.
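The one-generation seesaw estimate m_ν ∼ y_N² v²/(2 m_N) can be checked numerically against the quoted Yukawa range; a minimal sketch (the function name and the sample points are illustrative):

```python
# One-generation seesaw estimate: m_nu ~ y^2 v^2 / (2 m_N).
# Illustrative check of the quoted Yukawa range; helper name is ours.
v = 246.0          # Higgs vev in GeV
GeV_TO_eV = 1e9

def m_nu_eV(y, mN_GeV):
    """Light-neutrino mass (in eV) from a single RHN, Type-I seesaw."""
    return y**2 * v**2 / (2.0 * mN_GeV) * GeV_TO_eV

for y in (1e-7, 1e-6):
    for mN in (100.0, 1500.0):
        print(f"y = {y:.0e}, mN = {mN:6.0f} GeV -> m_nu ~ {m_nu_eV(y, mN):.2e} eV")
```

For y_N between 10^{−7} and 10^{−6} and m_N between 100 GeV and 1500 GeV, this indeed returns masses in the 10^{−4}–0.3 eV window, i.e., of the right order for the light neutrinos.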
Given the particles and their quantum numbers, the DM field admits only a free Lagrangian, namely

L_χ = χ̄ (i ∂̸ − m_χ) χ ,   (2.4)

and the introduction of interaction terms would require either the breaking of the Z_3 symmetry (which would destroy the stability of the DM) or the presence of additional fields. Remaining agnostic as to the nature of such additional fields, except that these are heavy SM singlets, one may, instead, express the consequences of their inclusion in terms of an effective theory. Restricting ourselves to the lowest dimensions, we have

L_eff = (1/Λ²) Σ_{a,b} c_{ab} (N̄ Γ_a χ)(χ̄^c Γ_b χ) + h.c. ,   (2.5)

where Λ is the cutoff scale and we have dropped the index on the N-fields. The Dirac matrices Γ_{a,b} are to be chosen such that Lorentz invariance is maintained. With (χ̄^c γ^μ χ) and (χ̄^c σ^{μν} χ) vanishing identically, the only remaining operators are 3

(N̄ χ)(χ̄^c χ) ,  (N̄ χ)(χ̄^c γ_5 χ) ,  (N̄ γ_5 χ)(χ̄^c χ) ,  (N̄ γ_5 χ)(χ̄^c γ_5 χ) ,  (N̄ γ_μ χ)(χ̄^c γ^μ γ_5 χ) ,  (N̄ γ_μ γ_5 χ)(χ̄^c γ^μ γ_5 χ) .   (2.6)

It might be argued, and rightly, that we have omitted other possible terms such as (N̄ Γ_a N)(χ̄ Γ_b χ) or even analogous ones with the N replaced by the SM fermion fields. With no symmetry precluding their presence, such terms cannot be wished away in their entirety. However, the corresponding Wilson coefficients could be small depending on the UV completion of the model. A trivial solution would be to postulate that the coupling of the mediator to the N̄χ current is much smaller than that to the χ̄χ current. One such completion is discussed at length in Appendix A. However, independent of the details of the actual UV completion, we assume that such terms (viz. (N̄ Γ_a N)(χ̄ Γ_b χ) etc.) are indeed suppressed (compared to those in Equation 2.6), for a strong violation of such a suppression would bring us back to the normal regime of the complete annihilation process (as opposed to semi-annihilation) as the driving force behind the determination of the relic density. This would render operative the direct detection bounds (which, we will see, are manifestly relaxed for semi-annihilation).
In other words, we deliberately choose to work in a regime where semi-annihilation dominates. The Lagrangian of Equation 2.5 would result in processes such as χχχ ↔ N or χχ ↔ N χ^c. Of these, the 3χ → N process is naturally suppressed on account of the small incident flux, while the decay N → 3χ can proceed only if m_N > 3m_χ. Concentrating on the 2 → 2 scattering, the amplitude, then, reads

M(χχ → χ^c N) ∝ (c_{ab}/Λ²) [ū_N Γ_a u_χ] [ū_{χ^c} Γ_b u_χ] ,  u_{χ^c} = C ū_χ^T ,

with C being the charge conjugation matrix. Once again, we see explicitly that Γ_b = γ^μ, σ^{μν} would lead to vanishing matrix elements. Henceforth, we make the further simplifying assumption that only one of the remaining c_{ab} is non-zero and, for convenience, scale it to unity. One might worry about the renormalization group evolution of the Wilson coefficients. However, unless the hidden sector is a strongly interacting one, this evolution is not expected to be any different from that in a usual ultraviolet-complete theory of DM. In particular, within the standard freeze-out paradigm, it is only within a small range of momentum scales that most of the interesting physics happens, and the Wilson coefficients do not vary appreciably over such a range.
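The statement that the same-field bilinears (χ̄^c γ^μ χ) and (χ̄^c σ^{μν} χ) vanish identically, while the scalar, pseudoscalar and axial-vector ones survive, rests on the (anti)symmetry of C Γ under transposition; this can be verified numerically in a few lines (a sketch; the matrix construction uses the Dirac basis):

```python
import numpy as np

# A same-field bilinear chi-bar^c G chi is proportional to chi^T (C G) chi and,
# by Fermi statistics, vanishes exactly when C G is a *symmetric* matrix.
I2, Z2 = np.eye(2), np.zeros((2, 2))
sig = [np.array(m, dtype=complex) for m in
       ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
g0 = np.block([[I2, Z2], [Z2, -I2]])
gam = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in sig]   # Dirac basis gamma^mu
g5 = np.block([[Z2, I2], [I2, Z2]])
C = 1j * gam[2] @ g0                                        # charge conjugation

# C gamma^mu and C sigma^{mu nu} are symmetric -> vector/tensor bilinears vanish
vanishing = [C @ g for g in gam]
vanishing += [C @ (0.5j * (gam[m] @ gam[n] - gam[n] @ gam[m]))
              for m in range(4) for n in range(m + 1, 4)]
# C, C gamma5 and C gamma^mu gamma5 are antisymmetric -> S, P, axial survive
surviving = [C, C @ g5] + [C @ g @ g5 for g in gam]

print(all(np.allclose(M, M.T) for M in vanishing))    # symmetric
print(all(np.allclose(M, -M.T) for M in surviving))   # antisymmetric
```

Both checks return True, confirming the operator list of Equation 2.6.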
Precision cosmology, in particular the measurements of temperature and polarization anisotropy in the cosmic microwave background, pins down the relic abundance of DM, Ω_DM h² = 0.1199 ± 0.0012 [22]. As it will turn out, the mechanism of interest is the freeze-out of the DM. In this model, it proceeds via its semi-annihilation to RHNs through χχ → χ^c N. The Yukawa coupling of the RHNs allows these to decay through a multitude of channels, viz.

N_i → ℓ_i^∓ W^± ,  N_i → ν_i Z ,  N_i → ν_i h .

Although the first two of the decay modes proceed only through the mixing of the N_i with the SM neutrinos, it is easy to see that the leading modes in each subclass have similar partial widths. The coupled evolution of the χ and N_i number densities is governed by the corresponding Boltzmann equations for the yields Y_i = n_i/s.
Here, the thermally averaged cross-section [35] is given by

⟨σv⟩ = [1/(8 m_χ⁴ T K₂²(m_χ/T))] ∫_{4m_χ²}^∞ ds σ(s) (s − 4m_χ²) √s K₁(√s/T) ,   (3.2)

where E_cm = √s is the center of mass energy, K_n is the modified Bessel function of the second kind of order n and λ(a, b, c) is the usual Källén function appearing in the cross sections. The expressions for the cross sections and the decay widths of the RHN can be gleaned from Appendix B.
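The thermal average can be evaluated numerically with a short routine; a sketch, using exponentially scaled Bessel functions to keep the integrand well conditioned at large m_χ/T (the function name and the constant-σ sanity check are ours):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kve   # exponentially scaled K_n: kve(n,z) = K_n(z) e^z

def thermal_avg(sigma, m, T):
    """Gondolo-Gelmini thermal average <sigma v> for equal initial masses.

    sigma(s): total cross-section as a function of Mandelstam s (toy units).
    Substitution sqrt(s) = 2m + u*T maps the Boltzmann tail onto u in [0, 40].
    """
    def integrand(u):
        rs = 2.0 * m + u * T                  # sqrt(s)
        s = rs * rs
        # the e^{-u} factor is what survives of K1(rs/T)/K2(m/T)^2 after scaling
        return (sigma(s) * (s - 4.0 * m * m) * rs * kve(1, rs / T)
                * np.exp(-u) * 2.0 * rs * T)  # 2 rs T du = ds
    num, _ = quad(integrand, 0.0, 40.0, limit=200)
    return num / (8.0 * m**4 * T * kve(2, m / T)**2)

# Sanity check: for a constant cross-section sigma0, the non-relativistic
# limit is <sigma v> -> 4 sigma0 / sqrt(pi x), with x = m/T.
x = 50.0
res = thermal_avg(lambda s: 1.0, 1.0, 1.0 / x)
print(res, 4.0 / np.sqrt(np.pi * x))
```

The two printed numbers agree at the few-per-cent level, as expected from the O(1/x) relativistic corrections.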
A particular aspect needs to be appreciated at this juncture. It is commonplace to write σv as a power series in the relative velocity v. In most scenarios, the terms involving the higher powers fall off quickly, allowing one to understand the scattering in terms of s-wave or p-wave dominance etc. In the present situation though, while the terms do continue to fall off similarly (see Figure 2), additional complications arise on account of the peculiarity of the effective interaction (and, thus, the amplitude). No single partial wave dominates over the entire range of evolution; rather, different terms can be important during the decoupling of DM, with the relative importance being a function of both the DM and the RHN masses. For example, as Figure 2 demonstrates, for a larger m_χ/m_N, the v² term falls faster with m_χ/T whereas the v⁶ term falls slower. Consequently, we use the full integral in Equation 3.2. The yield of dark matter, as of today, is well approximated by Y_χ(∞) and can be used to calculate the relic abundance using

Ω_χ h² ≈ 2.755 × 10⁸ (m_χ/GeV) Y_χ(∞) .

With the RHN playing a prime role in determining the DM relic abundance, it is imperative to examine the evolution of their density. The mass ratio m_N/m_χ as well as the size of the couplings both play important roles, resulting in a somewhat complicated behaviour, as summarised in Figure 3. Here, the yields of the DM and the RHNs are shown as a function of x = m_χ/T for m_χ = 120 GeV, m_N_i = 100 GeV and Λ = 500 GeV. To highlight the individual epochs of freezing out, we employ a ruse wherein the yellow and blue curves represent the abundances of DM and RHN, respectively, under the (unphysical) assumption of these never having fallen out of equilibrium. The actual evolution for the DM is depicted by the green curves, while those for the RHN with y_N_i = 10^{−7} (10^{−10}) are shown by the red (purple) ones.
At temperatures of the order of the electroweak scale, all the SM particles would, of course, be in equilibrium. For m_N ∼ O(100 GeV), a sizable y_N (∼ 10^{−7}) would ensure that the aforementioned decays of the N_i, along with the reverse processes, would keep these particles in equilibrium with the leptons and the weak bosons and, hence, with the entire SM sector. Furthermore, as the RHN are postulated to be lighter than the χ, a large enough y_N_i = 10^{−7} (red curve) for all three RHNs would imply that they decouple at an epoch later than χ (green curve) and, hence, would have a much smaller relic density, as can be seen in Figure 3(a).
The situation changes dramatically if the y_N_i is/are much smaller, as is illustrated by Figure 3(b) with y_N_i = (10^{−7}, 10^{−7}, 10^{−10}) and Figure 3(c) with y_N_i = (10^{−7}, 10^{−10}, 10^{−10}). While the densities of the N_i with a larger Yukawa coupling (red curve) continue to behave as in Figure 3(a), those with small Yukawa coupling(s) (purple) behave differently, as their interaction rates with the SM sector are now much smaller than the Hubble expansion rate. Hence, while the N_i with larger Yukawas are still in equilibrium with the SM (and, through χχ → χ^c N, so is the DM), these other RHNs decouple slightly before the DM does. Consequently, post decoupling, their number densities continue to be much larger than that of the DM. This has an interesting consequence. Owing to the larger density of such N_i, the process χ^c N_i → χχ would occur more frequently than the reverse, and the DM density would be slightly replenished before becoming constant at a slightly lower temperature. A careful comparison of Figure 3(b) and Figure 3(c) shows that this effect is more pronounced in the latter, which is but a consequence of the fact that the latter case corresponds to two N_i with a small Yukawa coupling as opposed to just one for the former.
For the sake of illustration, in Figure 3(d) we depict the case where y_N_i = (10^{−10}, 10^{−10}, 10^{−10}), a situation that cannot explain all the neutrino masses. Owing to all the Yukawas being tiny, the RHN (purple) and, hence, the DM (green) both decouple from the equilibrium distribution very early. However, the tiny Yukawas also mean that the RHNs decay late. Consequently, the process χ^c N_i → χχ keeps replenishing the DM relic abundance for a longer period.
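The freeze-out behaviour described above can be mimicked with a toy, single-species Boltzmann equation in the standard semi-annihilation form, dY/dx = −(λ/2x²)(Y² − Y Y_eq), under the simplifying assumptions that the N remains thermal and that λ ∼ ⟨σv⟩ s/H is a constant (the full analysis, of course, uses the coupled equations and the thermal average of Equation 3.2; all numbers here are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

def Y_eq(x, g_eff=1.0):
    # non-relativistic equilibrium yield; dof factors absorbed into g_eff
    return 0.145 * g_eff * x**1.5 * np.exp(-x)

def relic_yield(lam, x0=1.0, x1=1000.0):
    """Freeze-out yield for dY/dx = -(lam/(2 x^2)) (Y^2 - Y Y_eq).

    The factor 1/2 (and Y*Y_eq rather than Y_eq^2) is the standard
    semi-annihilation form: each chi chi -> chi^c N event removes net one chi.
    """
    rhs = lambda x, Y: [-(lam / (2.0 * x**2)) * (Y[0]**2 - Y[0] * Y_eq(x))]
    sol = solve_ivp(rhs, (x0, x1), [Y_eq(x0)], method="Radau",
                    rtol=1e-6, atol=1e-30)
    return sol.y[0, -1]

print(relic_yield(1e9))    # yield freezes at ~2 x_f / lam
print(relic_yield(1e10))   # stronger interaction -> smaller relic yield
```

A larger λ (smaller Λ) delays freeze-out and lowers the final yield, exactly the behaviour seen in the green curves of Figure 3.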
With the sizes of the Yukawa couplings having such marked imprints on the thermal history, it is natural to expect that the relic densities too would be quite different. However, the dependence is not too severe, as the value is primarily driven by m_χ and Λ. Indeed, on the scale of Figure 3, the differences do not show up; only a careful calculation reveals the small splittings between the four cases examined.

We now turn to other consequences of the presence of the N_i. First and foremost, while their lifetimes are not ultrasmall, they still decay fast enough for overclosure not to be an issue. On the other hand, their not inconsiderable lifetimes could, presumably, alter physics at the era of big bang nucleosynthesis (BBN) or even the CMBR epoch. The latter, along with the relative abundance of the light elements (BBN), encodes information about the thermal history of the early universe and is well described by SM physics. The decay of particles after such epochs has the potential of destroying the agreement between theoretical predictions and the observations. Fortunately though, in the present context, the longest RHN lifetime is at most 0.001 s (corresponding to y_N ∼ 10^{−10} and m_N = 100 GeV), whereas BBN occurred in the cosmic time window 0.1 s − 10⁴ s and the CMB was formed even later. Thus, the abundance of the RHN would have depreciated to a very large extent by then. To be quantitative, studies carried out in ref. [36] show that the BBN constraints dictate that the fractional abundance of particles of mass O(100 GeV) decaying hadronically at t ≳ 1 s after the inflationary phase should be less than 10^{−12}. Thus, given the smallness of the N_i lifetimes, the BBN constraints are trivially satisfied. Similar is the case with the CMBR observations.
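The claim that even the slowest RHN decays well before BBN can be estimated with a schematic width, Γ_N ∼ y_N² m_N/(8π), dropping mixing-angle and phase-space factors (an assumption; the full widths are given in Appendix B):

```python
import math

# Schematic RHN lifetime vs the onset of BBN (~0.1 s).
# Assumption: total width ~ y^2 m_N / (8 pi), phase-space factors dropped.
HBAR_GeV_s = 6.582e-25   # hbar in GeV * s

def rhn_lifetime_s(y, mN_GeV):
    width_GeV = y**2 * mN_GeV / (8.0 * math.pi)
    return HBAR_GeV_s / width_GeV

tau = rhn_lifetime_s(1e-10, 100.0)
print(f"tau(y = 1e-10, mN = 100 GeV) ~ {tau:.1e} s")
```

Even for the extreme choice y_N = 10^{−10}, the lifetime comes out orders of magnitude below 0.1 s, consistent with the bound quoted in the text.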
On the other hand, late decays of the N would presumably leave their mark in the sky. The energetic neutrinos coming from RHN decay can be detected by experiments like SuperK, IceCube, etc. [17]. However, such bounds are found to be much weaker than the ones coming from the photon spectrum, as discussed in section 4.

Scale and DM mass dependencies:
Having studied the various dependencies, we now proceed to determine the parameter space corresponding to the observed relic density. The various panels of Figure 4 display, in the m_χ−Λ plane, the contours reproducing the observed relic density for different operator structures and values of the RHN mass m_N. Understandably, the area above the curves (larger Λ) corresponds to smaller interaction strengths and, hence, an overabundance of DM. On the other hand, the area below the curve leads to an underabundance and, hence, is still allowed (though only at the cost of postulating an additional component of the dark matter). Note that these results are obtained for m_χ > m_N. For m_χ < m_N, three-body decays such as N → ν + Z*/h* → ν f f̄ or N → ℓ + W* → ℓ f f̄′ would be important. However, we do not explore this in the present work.
The curves in Figure 4 imply that obtaining the correct amount of energy density requires the mass of the dark matter particle to increase with Λ. This can be easily understood by looking at the functional dependencies of the cross sections. With the DM particle being non-relativistic at the epoch of decoupling, the center of mass energy (E_cm) for the process χχ → χ^c N is close to 2m_χ, and the thermal average of the cross section scales roughly as m_χ² F(m_N/m_χ)/Λ⁴, where F(m_N/m_χ) < 1 is specific to the form of the interaction Lagrangian. This immediately indicates that Λ⁴ should scale roughly as m_χ², i.e., Λ ∝ √m_χ, with deviations depending on the form of F(m_N/m_χ). Furthermore, that the SS (≡ PP) and SP structures require a cut-off scale Λ nearly half of what is required for VA or AA can be easily traced to the fact that the two disparate sets of cross sections differ by a factor of nearly 16.
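Both scalings above follow from holding ⟨σv⟩ ∼ m_χ² F/Λ⁴ fixed at the relic-density value; a two-line numerical check (function name and units are illustrative):

```python
# Relic-density scaling sketch: with <sigma v> ~ m_chi^2 F / Lambda^4 held
# fixed, Lambda grows as sqrt(m_chi), and a factor ~16 between two sets of
# cross-sections maps onto a factor 16^(1/4) = 2 in the required Lambda.
def lam_required(m_chi, F=1.0, sv_target=1.0):
    """Lambda (arbitrary units) solving m_chi^2 * F / Lambda^4 = sv_target."""
    return (m_chi**2 * F / sv_target) ** 0.25

print(lam_required(400.0) / lam_required(100.0))            # -> 2.0 = sqrt(4)
print(lam_required(100.0) / lam_required(100.0, F=1.0/16))  # -> 2.0 = 16**0.25
```

The first ratio reproduces the Λ ∝ √m_χ trend of the contours; the second, the factor-of-two separation between the SS/SP and VA/AA curves.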
As for the m_N dependence, note that for the VA and SP structures, F(m_N/m_χ) is a monotonically decreasing function of its argument. The consequent decrease in the total cross section with an increase in m_N would need to be compensated by an associated decrease in Λ so as to lead to an unchanged relic abundance, and this is reflected in a comparison of panels (a), (c) and (d) of Figure 4. On the other hand, for the SS and AA cases, the cross sections do not change appreciably with m_N. Finally, Figure 4(b) corresponds to the case where one of the RHNs has a small decay width (y_N_i = (10^{−7}, 10^{−7}, 10^{−10})), which leads to a nominally larger relic abundance (as shown in Figure 3(b)) as compared to the case where all the RHNs have the same Yukawa coupling, y_N_i = 10^{−7}.
Before we end this section, it is incumbent upon us to examine the validity of the effective field theory paradigm vis-à-vis the parameter space required to satisfy the correct relic density. Clearly, the cut-off scale Λ for such a description must be larger than the masses of any particle present in the theory. In the present case, this is definitely satisfied by the entire parameter space for the VA and the AA cases. For the (pseudo)scalar current-current structures, though, this is not so for larger m_χ values (although, for smaller m_χ, it does hold). In other words, the effective theory approach may not be entirely valid there, and the use of the full theory seems to be called for. It is, however, easy to ascertain that the consequent changes in this part of the analysis would not be severe, and that the results obtained herein are precise enough for an exploratory study such as this.

Constraints from indirect detection of DM
The semi-annihilations of the DM would also lead to electrons, positrons, quarks, neutrinos, gamma-rays etc. with different energy spectra. Of these, gamma-ray observations have been used prominently to probe the presence of DM decay, annihilations and semi-annihilations. The reasons for the choice are twofold: γ−ray emissions from various astrophysical objects are well understood, and unlike charged particles, photons travel through interstellar and intergalactic matter suffering relatively little obstruction. Thus, by comparing the expected gamma-ray flux to the measured one, one can derive upper limits on DM interactions in the absence of any excess.
The process relevant to the present study is the semi-annihilation of DM to a RHN which, subsequently, decays via ℓW^±, νZ and νh, with the bosons cascading down to quarks (hadronizing almost instantly) and leptons. The resultant charged particles yield gamma rays, either during hadronization or as a result of final-state radiation. To this end, we use observations from the following three experimental set-ups to constrain the effective operators analysed in this work:

• Fermi-LAT: The satellite covers a wide energy range, starting from 0.5 GeV and going up to 0.5 TeV. To derive the exclusion limits, 6 years of data from 15 dwarf Spheroidals (see Table 1 of [33]), based on the Pass 8 event analysis [33], is used.
• The High Energy Stereoscopic System (H.E.S.S.) is sensitive to high-energy γ-rays spanning energies from 0.2 TeV to 30 TeV. The initial phase (H.E.S.S. I) has already published competitive limits on DM annihilation cross sections [37]. With the addition of a large dish at the center of the array, the second phase (H.E.S.S. II) is expected to lead to much stronger constraints [34]. To exploit this expected sensitivity, we first tune our analysis to reproduce the H.E.S.S. I data [37], and follow it by rescaling with the projected sensitivity in the b b̄ channel as calculated in Ref. [34].
• With more than 100 individual telescopes, the Cherenkov Telescope Array (CTA) would be well placed to detect high-energy gamma-rays over a very wide energy range (∼60 GeV to ∼300 TeV) and is projected to be more than ten times as sensitive as the currently operating instruments. This unprecedented sensitivity is expected to yield very strong constraints (if not an actual discovery), especially when looking at gamma-rays from the Galactic center.
Note that the Fermi-LAT experiment is more sensitive to gamma-rays at low energies, whereas H.E.S.S. and CTA are sensitive at larger energies.

The γ-ray spectrum:
A gamma-ray signal from DM annihilation or semi-annihilation is inferred from the number of photons in a given energy bin from a portion of the sky. The quantity that captures this information is the differential flux, which is proportional to: (i) the square of the number density of the dark matter particle; (ii) the thermally-averaged product of the annihilation cross-section and the velocity (⟨σv⟩); (iii) the number of photons produced per dark matter process as a function of energy, i.e. the energy spectrum (dN/dE); and (iv) the size and density of the region of the sky under study, as captured in the J-factor. On inclusion of the appropriate normalization factor, the differential flux for dark matter semi-annihilation is

dΦ/dE_γ = (⟨σv⟩ / (8π m_χ²)) (dN/dE_γ) J ,  J = ∫_ΔΩ dΩ ∫_l.o.s. dl ρ_χ²(l, Ω) ,

where the J-factor includes an integration over the line-of-sight between the observatory and the source. This factor obviously depends on the dark matter distribution, which we have assumed to follow the NFW profile [38]. We compute the energy spectrum dN/dE due to the semi-annihilation processes 4, including the hadronization effects, using a suitably augmented version of PYTHIA8.3 [39]. The resultant normalised energy spectrum for different values of m_χ at a fixed RHN mass (with the RHN decaying to a final state containing an e^±) is shown in Figure 5. Since the DM is non-relativistic, the energy of the RHN can be estimated to be

E_N ≈ (3 m_χ² + m_N²) / (4 m_χ) .

As expected, and as illustrated by Figure 5, a heavier DM leads to a more energetic RHN.
With the extra energy being transmitted to the N's decay products, this would lead to a harder gamma-ray spectrum. Given the expression for E_N, it is obvious that it changes substantially with m_N only when the latter is comparable to m_χ. This property is naturally transmitted to the N's daughters, including the gamma-ray spectrum (Figure 6). For a given m_χ, the heaviest of the N_i would lead to the hardest γ-spectrum. On the other hand, by virtue of being the most boosted, the lighter ones lead to a larger number of energetic photons (Figure 6). Finally, for a given (m_χ, m_N) combination, with each being heavier than 100 GeV, the leptons from the N's decay would have a virtually flavour-independent energy spectrum. Nonetheless, the τ-channel would result in more energetic photons, owing simply to the τ's decay and subsequent hadronization. Understandably, the spectrum for the muonic decay channel would lie in between the electronic and tauonic ones.
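The threshold kinematics quoted above can be tabulated directly: E_N = (3m_χ² + m_N²)/(4m_χ) and the associated boost γ_N = E_N/m_N (the sample masses are illustrative):

```python
# Kinematics of chi chi -> chi^c N at threshold (s = 4 m_chi^2):
#   E_N = (3 m_chi^2 + m_N^2) / (4 m_chi),  boost gamma_N = E_N / m_N.
def E_N(m_chi, m_N):
    return (3.0 * m_chi**2 + m_N**2) / (4.0 * m_chi)

m_N = 100.0
for m_chi in (200.0, 1000.0, 3000.0):
    e = E_N(m_chi, m_N)
    print(f"m_chi = {m_chi:6.0f} GeV: E_N = {e:8.1f} GeV, boost = {e/m_N:6.2f}")
```

For degenerate masses, E_N → m_N (the RHN is produced at rest), while for m_N ≪ m_χ the boost grows as 3m_χ/(4m_N), which is precisely why light RHNs yield a larger number of energetic photons.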

Results
Armed with the discussions of the preceding subsection, we are now in a position to compare the expectations of our model with the observational data from Fermi-LAT and H.E.S.S. on the one hand, and with the expected sensitivity of the CTA on the other. To this end, we use the GamLike package [40] to compute the likelihood function L(μ, θ | D), where D represents the observational data, θ the (unknown) parameters describing the background and μ a set of parameters encapsulating the features of the dark matter model. The limits on the semi-annihilation cross-section are then derived through the test statistic

TS = −2 ln [ L(μ_0, θ̂ | D) / L(μ̂, θ̂ | D) ] ,

with TS > 2.71 corresponding to 95% C.L. exclusion. Here, μ_0 corresponds to the null hypothesis, i.e., the absence of dark matter. The consequent constraints, in the m_χ−⟨σv⟩ plane, from the Fermi-LAT and H.E.S.S., as well as the projected CTA sensitivity, are depicted in Figure 7. The area above the curves is (would be) ruled out at 95% C.L. Understandably, the limits for the tau-channel are slightly more stringent, as it leads to a slightly larger gamma-ray production. As pointed out earlier, the Fermi-LAT experiment is more sensitive at lower energies, whereas H.E.S.S. is sensitive in the 0.2−30 TeV range. Similarly, the energy threshold of the CTA is lower than that of H.E.S.S., making the CTA much more sensitive to photons of intermediate energies.
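The TS-based limit setting can be illustrated with a one-bin Poisson toy (the counts, the background and the signal scan are illustrative assumptions, not the actual GamLike analysis):

```python
import math

# One-bin Poisson illustration of the test statistic:
#   TS(mu) = -2 ln[ L(mu | D) / L(mu_hat | D) ],  TS > 2.71 ~ 95% CL (one-sided).
def log_like(n_obs, s, b):
    lam = s + b
    return n_obs * math.log(lam) - lam - math.lgamma(n_obs + 1)

def ts(n_obs, s, b):
    s_hat = max(0.0, n_obs - b)     # best-fit (non-negative) signal
    return -2.0 * (log_like(n_obs, s, b) - log_like(n_obs, s_hat, b))

n_obs, b = 10, 10.0                 # data consistent with background only
s = 0.0                             # scan the signal until TS crosses 2.71
while ts(n_obs, s, b) < 2.71:
    s += 0.01
print(f"95% CL upper limit on signal counts: s < {s:.2f}")
```

In the full analysis, the signal expectation in each bin is ⟨σv⟩ times the J-factor-weighted spectrum, so the limit on the counts translates directly into the upper limits on ⟨σv⟩ shown in Figure 7.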
Thus, the CTA would furnish the strongest bounds for dark matter masses above 600 GeV, almost independent of m_N, whereas the Fermi-LAT gives better limits for m_χ ≲ 500 GeV. For m_χ greater than a few TeV, H.E.S.S. already provides bounds as significant as those the CTA would lead to.
The shape of the constraint curves is determined by a convolution of the γ-ray spectrum with the energy sensitivity. To understand the final outcome, several aspects have to be borne in mind:

• As we have discussed earlier, the three detectors have peak sensitivity at different E_γ ranges. Fermi-LAT is the most sensitive at lower energies (0.5 GeV to 0.5 TeV), while for H.E.S.S. this lies in the few-TeV range. As for the CTA, although it covers a very wide range (∼60 GeV to ∼300 TeV), for low-energy (tens of GeV) γ-rays it obviously loses out to Fermi-LAT. For somewhat higher energies, it competes very well with H.E.S.S., and goes much beyond.
• Thus, we would expect the Fermi-LAT to be the most sensitive detector for relatively low DM masses, as is borne out by Figure 7.
• Naively, one would expect that a larger value for m_N would, typically, translate to more energetic decay products and, hence, a harder γ-ray spectrum. Consequently, the constraints from non-observation would be stronger. This is reflected in the constraints being stronger for m_N = 1.5 TeV than for m_N = 0.75 TeV. This argument, though, fails for the m_N = 100 GeV case.
• For m_N ≪ m_χ, the RHN is highly boosted and this boost is transferred to its daughters, thereby allowing them to radiate off more photons. Some of these would have energies between 20 GeV and 0.5 TeV and would be visible to the detectors under discussion. With Fermi-LAT's sensitivity to low-energy γ-rays being the highest, the improvement is most pronounced for this case.
Compared to those on DM annihilation [10,41], these indirect bounds are weaker for the semi-annihilation processes [23], as expected, owing to fewer photons in the final state.
We also analysed the sensitivity to semi-annihilating DM through the neutrino spectrum in detectors like Super-Kamiokande [42]. We found that the limits are at least an order of magnitude weaker than those for DM annihilating to two neutrinos [43], and 2-3 orders of magnitude weaker than the complementary photon channel discussed above. This is due to the fact that the neutrino flux is smaller than the photon flux and, additionally, neutrinos interact only weakly with the detectors. Thus, a dedicated analysis that includes direction and energy information might provide stronger limits. Similarly, neutrino experiments/detectors like DUNE [44] and Hyper-Kamiokande [45] can provide stronger limits for a light χ, whereas IceCube [46] and ANTARES [47] can be utilized for a heavy χ. Even so, these are expected to be weaker than the complementary photon channel.
Before ending this section, we would like to comment on the fact that the semi-annihilation process, in an effective theory, involves no mediator coupling to the nucleons and, thus, evades direct detection constraints. However, a UV-complete theory, such as that discussed in Appendix A, can lead to restrictions on the nucleon-DM couplings.

Conclusion
The continued absence of any signal for particulate Dark Matter, other than the astrophysical or cosmological inferences, calls for a reevaluation of the paradigm. One particular aspect is the mechanism that renders it stable against decay. While, in most theories, this is ensured by the imposition of a Z_2 symmetry, it certainly is not the only avenue. Alternatives such as Z_n (n > 2), or even more complicated ones, are not only feasible, but also dramatically alter the expectations in both laboratory (direct detection) as well as satellite-borne (indirect detection) experiments.
In this study, we consider the simplest alternative, namely a Z_3 symmetry under which the DM χ transforms nontrivially while all the Standard Model fields transform trivially. While similar efforts have been made earlier for scalar DM, we eschew that path, for every new fundamental scalar brings along its own hierarchy problem. Instead, we choose to work with a fermionic DM.
We further augment the theory by the inclusion of right-handed neutrino fields N_i, neutral under both the SM gauge group and Z3. These serve twin purposes. Allowed both Majorana masses and nonvanishing Yukawa couplings connecting them with the SM neutrinos, they permit the generation of the correct masses for the light neutrinos (via the seesaw mechanism). Furthermore, they offer an efficient avenue for the disappearance of the DM so as to obtain the correct relic density.
Rather than adopt a particular UV-complete theory (although we do offer one in an Appendix), we have chosen to work in the effective theory framework (with a cutoff scale Λ) through the inclusion of dimension-six operators involving χ and the N_i. While complete annihilations (χ + χ → N_i + N_j) are possible, for a very wide range of parameters (as exemplified in the Appendix), it is the semi-annihilation process (χ + χ → χ^c + N_i) that dominates.
The Boltzmann equations governing the evolution of the DM density change from the usual paradigm not only because it is the semi-annihilation that dominates, but also because they are coupled with the evolution of the N_i densities, the N_i decaying to the SM particles through their Yukawa couplings. If the sizes of these couplings are to be commensurate with the observed neutrino masses, they do help in keeping the N_i in equilibrium with the SM plasma until relatively late. The exact sizes of these couplings understandably have a very big effect on the N_i decoupling era and, hence, on their densities at that epoch. However, the effect on the χ relic density is not as pronounced. Given this, we have obtained the region of the EFT parameter space that correctly reproduces the relic density for various choices of the operators in the effective Lagrangian. On the adoption of a specific UV-complete theory (as in Appendix A), this can easily be translated into a corresponding statement for the said theory.
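To get a feel for the numbers, the following back-of-the-envelope sketch (not the paper's computation; the masses, g_*, and the naive width formula Γ ∼ y² m_N / 8π are all illustrative assumptions) estimates a seesaw-sized Yukawa coupling and compares the resulting N lifetime with the Hubble time at freeze-out:

```python
import math

# Illustrative inputs (assumptions, not the paper's benchmark values)
v = 246.0            # GeV, Higgs vev
m_nu = 0.05e-9       # GeV, light-neutrino mass scale (~0.05 eV)
m_N = 80.0           # GeV, RHN mass
m_chi = 100.0        # GeV, DM mass
Mpl = 1.22e19        # GeV, Planck mass
g_star = 100.0       # relativistic degrees of freedom
hbar = 6.58e-25      # GeV s

# Type-I seesaw: m_nu ~ y^2 v^2 / (2 m_N)  =>  y ~ sqrt(2 m_nu m_N) / v
y = math.sqrt(2.0 * m_nu * m_N) / v

# Naive two-body width Gamma ~ y^2 m_N / (8 pi) and the RHN lifetime
Gamma = y**2 * m_N / (8.0 * math.pi)
tau = hbar / Gamma                                 # seconds

# Radiation-era Hubble time at the freeze-out temperature T ~ m_chi / 20
T_fo = m_chi / 20.0
H_fo = 1.66 * math.sqrt(g_star) * T_fo**2 / Mpl    # GeV
t_fo = hbar / (2.0 * H_fo)                         # t = 1/(2H), seconds

print(f"y ~ {y:.1e}, tau_N ~ {tau:.1e} s, t_freeze-out ~ {t_fo:.1e} s")
```

For these numbers τ_N comes out far shorter than t_fo, i.e. even seesaw-sized Yukawas keep the N_i decaying, and hence in contact with the plasma, around freeze-out; lowering m_N or m_ν lengthens τ_N and brings the late-decay effects discussed above into play.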
While direct detection experiments would obviously be insensitive to such a scenario, the indirect detection theatre is a very interesting one. The decays of the N_i (themselves the products of the semi-annihilation process) yield charged particles (gauge bosons, quarks and charged leptons). These, as well as their cascade decay products, radiate energetic photons which could be detected by satellite-borne experiments like the Fermi-LAT or ground-based telescopes like the H.E.S.S. and the CTA. We explored such detection possibilities in detail, simulating the photon energy spectrum and convoluting it with the detector responses. While the nonobservation of such signals by the Fermi-LAT already imposes interesting bounds, especially for a light DM, future data from H.E.S.S.-II and the CTA are expected to lead to strong bounds in the heavy-χ region. As an example, for m_χ ∼ 100 GeV, a thermal cross section ⟨σv⟩ ∼ 5 × 10^-26 cm³ s^-1 would be ruled out. For heavier m_χ, the bounds are weaker, and most of the parameter space satisfying the relic abundance is allowed by the Fermi-LAT, H.E.S.S. and CTA data. And although the neutrino spectrum from the RHNs can, in principle, be detected in several neutrino detectors, the corresponding constraints are much weaker than those obtained from the gamma-ray telescopes.
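A feature underlying these photon spectra is that, in semi-annihilation, the N is produced below the DM mass. A minimal sketch of the centre-of-mass kinematics (illustrative masses only), using the standard two-body relation E_N = (s + m_N² − m_χ²)/(2√s) with √s ≈ 2m_χ:

```python
import math

def E_N_semi(m_chi, m_N):
    """Energy of N in chi chi -> chi^c N (CM frame at rest, sqrt(s) = 2 m_chi)."""
    s = 4.0 * m_chi**2
    return (s + m_N**2 - m_chi**2) / (2.0 * math.sqrt(s))

# Illustrative masses (GeV); for pair annihilation chi chi -> N N, E_N = m_chi
m_chi, m_N = 100.0, 50.0
print(E_N_semi(m_chi, m_N))   # 81.25 GeV, softer than the 100 GeV of pair annihilation
```

The resulting softer N (and hence photon) spectrum is one reason indirect-detection limits map differently onto semi-annihilating DM than onto standard pair annihilation.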
With the effective theory presented here easily escaping constraints from current and near-future experiments, it is obvious that such a scenario can indeed alleviate the tension between the Planck measurement of the DM relic density on the one hand and the null results from direct and indirect detection of DM on the other. It also needs to be appreciated that the quest for a UV-complete theory (such as that discussed in Appendix A) could engender DM-nucleon interactions as well as self-interactions of the DM. While the former would open new windows for both direct and indirect detection, the latter could be expected to leave a discernible mark on the small-scale structure in galaxies and clusters. The paradigm, thus, holds much promise.
Of course, analogous terms such as (χ̄ Γ_i χ)(N̄ Γ_j N) would be generated as well. However, for |y_3, y_3'| ≪ |y_2, y_2'|, a technically natural choice, such operators are relatively suppressed, with the consequence that semi-annihilation wins over annihilation.
It is instructive to consider the potential for the scalars in the theory, namely φ and the usual SM doublet Φ. The most general form is given by A nonzero vacuum expectation value ⟨φ⟩ would result in φ-Φ mixing, and, thereby, in direct χχ annihilation to the SM particles through the Higgs portal. However, the spontaneous breaking of Z3 that this entails would also allow the DM to decay. While the lifetime could be suitably extended by choosing parameters appropriately, the solutions tend to be somewhat unnatural and we eschew that path.

Appendix B Getting to the cross sections
We list below the squares of the matrix elements for the process in Equation 2.7 using the Lagrangian of Equation 2.5, with the assumption that only one of the c_ab is non-zero and set to unity. Defining the Lorentz-invariant quantities

A_4 ≡ (q_1 · q_3)(q_2 · q_4) + (q_1 · q_2)(q_3 · q_4) + (q_1 · q_4)(q_2 · q_3) ,
A_2 ≡ q_1 · q_3 + q_2 · q_3 + q_3 · q_4 ,
A_2' ≡ q_1 · q_2 + q_1 · q_4 + q_2 · q_4 , (B.1)

we have Note that spin-summing (and averaging) is yet to be done. The corresponding expressions for the reverse process (χ^c N → χχ) or analogous ones (such as χN → χ^c χ^c etc.) can be obtained from those above using crossing symmetry.
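The crossing statement can be made plausible with a quick numerical check: A_4 in Eq. (B.1) is the sum over the three pairings of the four momenta into dot products, and is therefore invariant under any relabeling of q_1 ... q_4. A sketch with arbitrary integer stand-ins for the six dot products:

```python
from itertools import permutations

def A4(dot, a, b, c, d):
    """A4 = (qa.qc)(qb.qd) + (qa.qb)(qc.qd) + (qa.qd)(qb.qc)."""
    p = lambda i, j: dot[frozenset((i, j))]
    return p(a, c) * p(b, d) + p(a, b) * p(c, d) + p(a, d) * p(b, c)

# Arbitrary integer stand-ins for the six independent dot products q_i . q_j
dot = {frozenset(k): v for k, v in
       [((1, 2), 3), ((1, 3), -1), ((1, 4), 2),
        ((2, 3), 7), ((2, 4), -4), ((3, 4), 5)]}

values = {A4(dot, *perm) for perm in permutations((1, 2, 3, 4))}
print(values)   # a single value: A4 is symmetric under all 24 relabelings
```

The quantities A_2 and A_2', by contrast, single out particular momenta and are simply mapped into one another under the crossings.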
Similarly, the decay widths for the RHN are given by where the masses of the charged leptons have been neglected.

Appendix C Boltzmann Equation
We now derive the Boltzmann equation relevant for the Dark Matter density evolution in the present context. With 2 → 2 scattering being the dominant process, the calculation proceeds quite similarly to the standard case, except for the fact that only one DM particle is produced (or destroyed) per collision. On the other hand, there would be two identical particles in either the initial or the final state, thereby necessitating the inclusion of a factor of (1/2!) in the phase-space calculation. This gives, for the 1 + 2 → 3 + 4 process, the time-variation of the number density n_χ to be where, for the sake of simplicity, we have suppressed the Pauli blocking factors. Here, the f_i represent the appropriate statistical distribution factors, H is the instantaneous Hubble expansion rate, and the appropriate spin-sum and averaging are understood. If we assume time-reversal invariance, we have |M_{12→34}|² = |M_{34→12}|² and, thus, the right-hand side of Eqn. C.1 can be written as Rather than continue with the number density n_χ, it is customary to consider the yield, defined as its ratio to the ambient entropy density s, viz. Y_χ ≡ n_χ/s. This immediately leads to

dn_χ/dt + 3H n_χ = s dY_χ/dt .
Noting that, during the cosmological evolution, there exists a monotonic relation between the time elapsed and the temperature of the universe, it is useful to consider a change of variables x = m_χ/T, such that we have where

H(m_χ) = π √g_* m_χ² / (√90 M_pl) and s = (2π²/45) g_* T³ = (2π²/45) g_* m_χ³ / x³ ,
with g_* denoting the number of degrees of freedom relevant at that temperature. It is an excellent approximation to consider these particles (N and χ) to be in kinetic equilibrium during the freeze-out. This allows us to write

f(E, t) = [n(x)/n_eq(x)] f_eq(E, t) , where f_eq = 1/(e^{E/T} + 1) and n_eq = (g/2π²) ∫_m^∞ E (E² − m²)^{1/2} / (e^{E/T} + 1) dE ,

both χ and N_i being fermionic. Now, decoupling occurs, typically, at x ≡ m_χ/T ≳ 20 and, hence, in f_eq(χ), we may safely approximate (e^{E/T} + 1)^{-1} ≈ e^{-E/T}. And, since we would not be contemplating a large hierarchy between m_χ and m_N, an analogous approximation would hold for f_eq(N_i) as well. Furthermore, using conservation of energy (E_1 + E_2 = E_3 + E_4), we have, in this limit, f_eq(E_1) f_eq(E_2) = f_eq(E_3) f_eq(E_4). We thus obtain the set of equations describing the evolution of the number densities of the DM as well as the RHN in the early universe: (C.6) Note that the second term in dY_N/dx carries the information of the RHN decaying into the SM particles.
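As an illustration of how equations of the type (C.6) behave, the following toy integration evolves only Y_χ, assuming the N_i remain in equilibrium throughout; the values of m_χ, g_* and the constant ⟨σv⟩ are illustrative, and the collision term uses the schematic semi-annihilation form (Y² − Y Y_eq), not the paper's exact coefficients:

```python
import math

# Illustrative inputs (assumptions, not the paper's fit values)
m_chi = 100.0                  # GeV
Mpl = 1.22e19                  # GeV
g_star = 100.0
sigv = 5e-26 / 1.17e-17        # <sigma v>: cm^3/s -> GeV^-2

# lambda = s(m_chi) <sigma v> / H(m_chi) = sqrt(pi/45) g_*^{1/2} m_chi Mpl <sigma v>
lam = math.sqrt(math.pi / 45.0) * math.sqrt(g_star) * m_chi * Mpl * sigv

def Yeq(x):
    # non-relativistic Maxwell-Boltzmann yield for a fermion with g = 2
    return 0.145 * (2.0 / g_star) * x**1.5 * math.exp(-x)

# Backward-Euler integration of dY/dx = -(lam/x^2)(Y^2 - Y Yeq), the schematic
# semi-annihilation form; the implicit step is quadratic in Y, solved exactly.
x, Y, steps = 5.0, Yeq(5.0), 4000
ratio = (1000.0 / 5.0)**(1.0 / steps)
for _ in range(steps):
    x_new = x * ratio
    h, a = x_new - x, lam / x_new**2
    b = 1.0 - h * a * Yeq(x_new)
    Y = (-b + math.sqrt(b * b + 4.0 * h * a * Y)) / (2.0 * h * a)
    x = x_new

omega_h2 = 2.75e8 * m_chi * Y   # Omega h^2 ~ 2.75e8 (m_chi/GeV) Y_inf
print(f"Y_inf ~ {Y:.2e}, Omega h^2 ~ {omega_h2:.3f}")
```

The implicit (backward-Euler) step is used because the collision term makes the equation very stiff near equilibrium; with these inputs the yield tracks Y_eq until x ∼ 20 and then freezes out at a relic abundance of the right order.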