Disentangling the Seesaw in the Left-Right Model -- An Algorithm for the General Case

Senjanović and Tello have analyzed how one could determine the neutrino Dirac mass matrix in the minimal left-right model, assuming that the mass matrices for the light and heavy neutrinos can be taken as inputs. They have provided an analytical solution for the Dirac mass matrix in the case that the left-right symmetry is implemented via a generalized parity symmetry and that this symmetry remains unbroken in the Dirac Yukawa sector. We extend the work of Senjanović and Tello to the case in which the generalized parity symmetry is broken in the Dirac Yukawa sector. In this case the elegant method outlined by Senjanović and Tello breaks down and we need to adopt a numerical approach. Several iterative approaches are described; these are found to work in some cases but to be highly unstable in others. A stable, prescriptive numerical algorithm is then described that works in all but a vanishingly small number of cases. We apply this algorithm to numerical data sets that are consistent with current experimental constraints on neutrino masses and mixings. We also provide some additional context and supporting explanations for the case in which the parity symmetry is unbroken.


Introduction
The observation of neutrino oscillations [1] proved that at least two neutrinos are massive particles. However, the origin of neutrino mass is still an open question; a better understanding of this fundamental issue could yield key insights into the nature of physics beyond the Standard Model (SM). The seesaw mechanism is an appealing possibility that could account for the smallness of the neutrino mass scale [2][3][4][5]. There is not one unique seesaw model for neutrino mass, however, so even confirming that the seesaw mechanism is the source of neutrino mass does not necessarily lead to a complete understanding of the underlying model. Seesaw models generically contain two types of Yukawa terms that couple the Higgs and lepton fields. The first is familiar from the SM and couples the left- and right-handed projections of the lepton fields. In the neutral sector, the resulting mass matrix is called the Dirac mass matrix and is denoted by M_D. The second couples the left-handed projections of the lepton fields to their charge conjugates, and similarly for the right-handed fields, giving rise to so-called Majorana mass terms. In the seesaw mechanism, the mass matrix for the light neutrinos results from the interplay between these Dirac and Majorana mass terms.
As noted in Ref. [6], it is interesting to compare the situation for neutrinos to that for the charged fermions. In the Standard Model, the charged fermions receive their masses through their Yukawa interactions with the Higgs field. As a result, the measured values of their masses lead directly to predictions for the partial widths for Higgs decays into fermion-antifermion pairs. Measurements made at the Large Hadron Collider (LHC) have so far been compatible with the SM predictions [7]. The situation is considerably more complicated in the neutrino sector if the seesaw mechanism is at play. In this case, following the analogy from the charged fermions, one would want to determine the elements of the Dirac mass matrix M_D as a function of the light neutrino masses and mixings and those of the heavy states. The former are being carefully investigated at current neutrino experiments and the latter could possibly be measured at the LHC or some future collider [6].
In the simplest formulation of the seesaw mechanism, the Dirac mass matrix M_D cannot be uniquely determined from the light and heavy mass matrices (M_ν and M_N, respectively), since it can always be redefined by an arbitrary complex orthogonal matrix [12,13]. Within the context of the left-right symmetric extension of the SM [14-17], however, where the seesaw mechanism arises as a direct consequence of spontaneous left-right symmetry breaking, the matrix M_D is defined only in terms of physical quantities (see Refs. [6,18-20]). At the level of the underlying model, left-right symmetry may be implemented by imposing a generalized charge conjugation symmetry, C, or a generalized parity symmetry, P. The charge conjugation approach has been analyzed in Ref. [18]; in this case it is argued that the relation between M_D and the masses and mixings of the light and heavy neutrino states is significantly simplified due to the fact that M_D is symmetric. As noted in Ref. [6], however, the generalized parity case is highly nontrivial and contains two distinct possibilities, depending on whether or not P remains unbroken in the Dirac Yukawa sector. If P is unbroken in the Dirac Yukawa sector, M_D is Hermitian and may be determined analytically, given the masses and mixings of the light and heavy neutrino states [6,20]. We shall refer to this as the "parity-conserving" scenario in the remainder of this work. By way of contrast, in the "parity-violating" scenario M_D is no longer Hermitian and the analytical approach to determining M_D breaks down. Nevertheless, the authors of Ref. [6] developed a phenomenological analysis that could possibly allow one to determine M_D through the study of specific processes in this scenario.
In summary, the left-right model, in which the parity-violating nature of the weak interactions follows from the spontaneous breaking of the left-right symmetry, turns out to be a theoretical picture that not only results in non-zero neutrino mass but also elucidates its origin.
In this article we develop a general method to determine M_D in the left-right model in the case that the underlying model (before spontaneous symmetry breaking) is invariant under P. Our determination of M_D relies only on the knowledge of M_ν and M_N, as well as information related to the VEVs of the bidoublet Higgs field. In this way it is analogous to the analytical solution found in Refs. [6,19,20] for the parity-conserving limit, where M_D is obtained directly from these two matrices and the ratio of the bidoublet Higgs VEVs. What distinguishes our approach from previous work is that we do not assume that parity is conserved in the Dirac Yukawa sector after spontaneous symmetry breaking; that is, we allow M_D to be non-Hermitian. In previous approaches, determination of M_D in the parity-violating case required the study of additional specific processes. To determine M_D from M_ν and M_N one needs to solve a system of non-linear matrix equations. The solution presented for the parity-conserving case in Refs. [6,19,20] is obtained through an ingenious procedure that makes use of the hermiticity of M_D. When parity is broken, M_D may no longer be assumed to be Hermitian and the procedure breaks down. In this case, the authors show that a solution can in principle be obtained numerically as an expansion in a small parameter, although they do not present a specific numerical algorithm to implement this strategy. A key feature of their proposed strategy in the parity-violating case is that the non-Hermitian matrix M_D is replaced by a Hermitian matrix that is a function of M_D. The method presented in the current work extends the approach outlined in Ref. [6] and provides a systematic prescription to solve for M_D numerically, even in the case that it is non-Hermitian. In a sense one could say that it adds a piece to the puzzle of unwinding the seesaw as the origin of neutrino mass.
In the parity-conserving case, the analytical solution described in Refs. [6,19,20] allows one to resolve the seesaw by determining M_D in terms of M_ν and M_N. In the parity-violating case one could use the phenomenological approach outlined in Refs. [6,19] to determine M_D. Of course, one could also apply that phenomenological analysis when parity is conserved, which would allow one to cross-check the values of M_D extracted from experiment against those corresponding to the analytical solution. Likewise, in the parity-violating case, the matrix M_D that would result from experiment could also be compared with the output of the method proposed in this study, allowing one to further resolve the puzzle.
To illustrate our method and study its performance we undertake a numerical analysis using several theoretical data sets that are compatible with the current experimental constraints on the lepton masses and mixings. We use a Monte Carlo algorithm to generate these data sets, following the framework and approach described in Refs. [28,29]. The Monte Carlo algorithm provides M_ν, M_N and M_D in each case, allowing us to take the matrices M_ν and M_N as inputs and to determine whether our approach is able to recover the corresponding matrix M_D. Most of the data sets are of the parity-violating variety, but we also consider one parity-even data set so that we can test our method in this case as well. We find that our method successfully obtains a solution for M_D for all of the data sets that we study. Throughout this analysis we assume the normal ordering of neutrino masses and ignore the possibility of light sterile neutrinos.
Our paper is organized as follows. In Section 2 we outline the specific version of the Left-Right Model that we employ, including various details about our notation. Section 3 contains a detailed description of our method for determining M_D. This method is then illustrated with three numerical examples in Section 4. In Section 5 we briefly discuss alternative methods that we had previously used in our attempts to determine a solution for M_D. These approaches were successful for some of the data sets, but were unstable for others, illustrating the significant challenge posed by solving the set of nonlinear matrix equations that determine M_D. Section 6 contains a proof of a key mathematical relation used in Section 3, as well as derivations of several mathematical properties for the parity-conserving case considered in Refs. [6,20]. We conclude with a brief discussion of our results in Section 7. Finally, the Appendices outline the diagonalization of the charged and neutral mass matrices, provide details of the notation and specifics of the method, and include some mathematical results related to the parameterization of complex orthogonal matrices.

The Model
In this section we provide a brief summary of the Left-Right Model (LRM), primarily following the notation and conventions used in Ref. [28]; the interested reader is referred to Ref. [28] for more detail.
The underlying symmetry of the LRM is based on the gauge group SU(2)_L × SU(2)_R × U(1)_{B-L}. The specific formulation of the LRM considered in Ref. [28] contains two Higgs triplet fields, as well as a bidoublet Higgs field. The Yukawa terms for the charged and neutral leptons may then be written as in Eq. (3) of Ref. [28], where C = iγ^2 γ^0 and φ̃ = τ_2 φ* τ_2, and where ψ_{iL,R} represent the left- and right-handed lepton doublets in the gauge basis, with i a generation index. The matrices G and H are taken to be Hermitian, while F may be assumed to be complex symmetric without loss of generality. The model also contains an extra left-right parity symmetry, P [6,28,30]. The neutral Higgs fields obtain VEVs upon spontaneous symmetry breaking; the Higgs VEVs may be parameterized in terms of k_1, k_2, v_L and v_R, all taken to be real and positive, together with a phase a. If the phase a is a multiple of π, then φ respects the generalized parity symmetry even after spontaneous symmetry breaking; we refer to this as the "parity-conserving" case insofar as the Dirac Yukawa sector is concerned. Experimental constraints suggest v_R ≫ k_1, k_2 ≫ v_L; additional constraints on the VEVs are discussed in Ref. [32]. As noted in Ref. [28], it is also natural to assume that the ratio k_2/k_1 is small. The Yukawa terms in the Lagrangian lead to mass terms for the charged and neutral leptons when the neutral Higgs fields acquire VEVs. The mass matrix for the charged leptons in the gauge basis is given in Ref. [28]. Recalling that G and H are Hermitian, we see that this mass matrix is Hermitian if the phase a is a multiple of π (i.e., in the so-called parity-conserving case).
The neutral lepton sector is more complicated than the charged lepton sector, since the Yukawa Lagrangian generically leads to Majorana mass terms in addition to the "ordinary" Dirac mass terms. For three lepton generations, the mass matrix is a 6 × 6 complex symmetric matrix containing a 3 × 3 Dirac block M_D as well as 3 × 3 Majorana blocks, M_LL and M_RR.
[Footnote 6: See the discussion in Ref. [28], as well as Refs. [30,31].]
[Footnote 7: Reference [28] uses the phase α = π − a; here we follow the notation of Ref. [6] for the phase.]
[Table 1: correspondence between the notation of Senjanović et al (Ref. [6] and present work) and that of Kiers et al (Ref. [28]).]
The 6 × 6 mass matrix may be block-diagonalized (see Ref. [28] for details), which leads to 3 × 3 complex symmetric mass matrices for the (mostly) left- and (mostly) right-handed fields. The mass matrix for the right-handed neutrinos is simply M_RR, so that the right-handed neutrinos are generically quite heavy (due to the assumed large value of v_R). The mass matrix for the left-handed neutrinos is M_ν = M_LL − M_D M_RR^{-1} M_D^T. The first term is small, since it is proportional to v_L. The second term is suppressed due to the presence of M_RR^{-1}; this suppression is known as the seesaw mechanism.
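To make the suppression concrete, the following minimal sketch (Python/NumPy, with illustrative GeV-scale numbers that are not taken from our data sets) builds a light-neutrino mass matrix from the combination M_LL − M_D M_RR^{-1} M_D^T and checks the resulting hierarchy:

```python
import numpy as np

# Illustrative numbers only (GeV units), not drawn from any data set.
MD = np.array([[1.0, 0.2, 0.1],
               [0.2, 0.8, 0.3],
               [0.1, 0.3, 1.2]])          # O(GeV) Dirac block
MRR = 1.0e4 * np.eye(3)                   # O(10 TeV) right-handed Majorana block
MLL = 1.0e-11 * np.eye(3)                 # tiny v_L-induced Majorana block

# Seesaw combination: the second term is doubly suppressed by MRR^{-1}
Mnu = MLL - MD @ np.linalg.inv(MRR) @ MD.T

light = np.max(np.abs(np.linalg.eigvalsh(Mnu)))
heavy = np.min(np.abs(np.linalg.eigvalsh(MRR)))
assert light < 1e-3 and heavy > 1e3       # large hierarchy between the sectors
```

The light eigenvalues come out of order MD²/MRR, far below the GeV scale of the Dirac block, which is the essential point of the mechanism.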
We have so far been working in the gauge basis. To make connections to measurable quantities, one needs to diagonalize the mass matrices for the charged and neutral leptons, which yields the physical lepton masses, as well as the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix. These quantities may then be measured or constrained by various types of experiments. The basis in which the mass matrices for the charged and neutral leptons are all diagonal is called the mass basis.
In the remainder of this paper we adopt the notation used by Senjanović et al in Ref. [6] and work in a basis that is part-way between the gauge basis and the mass basis [6]. In this basis, which we refer to as the "charged-diagonal" basis, one diagonalizes the mass matrix for the charged leptons and implements a corresponding transformation on the neutrino mass matrices. The neutrino mass matrices are not generally diagonal in this basis. Table 1 shows the correspondence between the mass matrices in the charged-diagonal basis (of Senjanović et al) and those in the gauge basis (described above and in Ref. [28]). The table does not explicitly include the unitary matrices that are necessary to go from the gauge basis to the charged-diagonal basis. The interested reader is referred to Appendix A for the precise relations between quantities in these two bases.
It is straightforward to derive three relations, Eqs. (14)-(16), between the various mass matrices in the charged-diagonal basis, where tan β ≡ k_2/k_1 and where the unitary matrix U_e is associated with the transformation from the gauge basis to the charged-diagonal basis [see Eq. (77) in the Appendix for a precise definition]. Also, s_a, t_β and t_2β stand for sin(a), tan β and tan(2β), respectively. Note that there is an overall sign ambiguity for U_e in the sense that Eqs. (14)-(16) are unchanged under U_e → −U_e.

Description of Method
Our primary goal in this work is to describe a method that can be used to solve Eqs. (14), (15) and (16) for U_e and M_D, taking M_N, M_ν, v_L e^{iθ_L}/v_R, a and β as inputs. The authors of Ref. [6] outlined such a procedure in the case that s_a t_2β = 0. In that case, Eqs. (14) and (15) reduce to the expressions M_D = U_e M_D^† U_e and m_e = U_e m_e U_e, respectively. Recalling that m_e is a real, diagonal matrix, we see that U_e = diag(±1, ±1, ±1) in this case, and that M_D U_e^† is Hermitian (we say that M_D is "sign-Hermitian"). Armed with this knowledge, Senjanović et al were able to work out an analytical method to solve for M_D. They were also able to classify the solutions into various categories.
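The sign-Hermitian structure is easy to verify numerically. In the sketch below (arbitrary stand-in matrices, not drawn from any data set), M is an arbitrary Hermitian matrix and U_e one of the eight sign matrices; the product M_D = M U_e then satisfies M_D = U_e M_D^† U_e, the relation quoted above:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
M = A + A.conj().T                # an arbitrary Hermitian matrix (stand-in)
Ue = np.diag([1.0, -1.0, 1.0])    # one of the eight diagonal sign matrices
MD = M @ Ue                       # a "sign-Hermitian" Dirac mass matrix

# M_D U_e^dagger is Hermitian, and M_D = U_e M_D^dagger U_e
assert np.allclose(MD @ Ue.conj().T, (MD @ Ue.conj().T).conj().T)
assert np.allclose(MD, Ue @ MD.conj().T @ Ue)
```

This is the structure that the analytical method of Ref. [6] exploits, and that is lost once s_a t_2β is nonzero.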
When s_a t_2β ≠ 0, the analytical method devised in Ref. [6] breaks down, since M_D is no longer sign-Hermitian. As we shall see, in this case it is possible to find a solution of Eqs. (14), (15) and (16) by using an iterative approach. We have applied this approach to various data sets and it appears to be quite stable (see Sec. 4 for further details). The starting point for our method is to define the matrix M as in Ref. [6]. The algorithm described below is designed to determine M, which allows one to determine U_e (see Appendix B) and then finally to calculate M_D. A convenient property of M is that it is Hermitian. Inspired by the mathematical manipulations that led to Eqs. (40) and (41) of Ref. [6], we proceed as follows.
We then define H, B and H̃ as in Eqs. (20)-(22).
[Footnote 8: We have also devised a number of other iterative approaches that are stable for some data sets, but not for others. These are described in Sec. 5.]
[Footnote 9: This follows from Eqs. (14) and (15).]
so that H̃H̃^T = S [Eq. (23)], where S is given by Eq. (24). Note that H is Hermitian, since M is Hermitian and M_N is complex symmetric. Also, S is complex symmetric. The reader may note a certain amount of similarity between Eqs. (20)-(23) (above) and Eqs. (40) and (41) in Ref. [6]. A crucial difference, however, is that H̃ is not generally Hermitian.
Since S is symmetric, we may write it in "symmetric normal form," S = O s O^T [Eq. (25)] [33], where O is a complex orthogonal matrix. In principle, the matrix s could be block diagonal; in practice, we assume that it is diagonal. Note that the elements of s are typically complex. As shown in Section 6, H̃ itself can be written as H̃ = O √s Ẽ^T [Eq. (26)], where Ẽ is a complex orthogonal matrix. Since H is Hermitian, we may write down a corresponding condition [Eq. (27)], which may be rearranged [Eq. (28)]. Denoting the j-th element of the diagonal matrix √s by |s_j| e^{iγ_j}, where γ_j is taken to be real, and defining ∆B as in Eq. (29), we see that Ẽ must satisfy Eq. (30), in which there is no implied sum over repeated indices. For future reference, we also define the quantities ∆_ij [Eq. (31)], which measure the amount by which the corresponding elements of Eq. (30) fail to be satisfied. According to Eq. (30), of course, these quantities should be zero for all i and j. Our numerical procedure seeks to determine values for the Ẽ_ij such that the ∆_ij are zero, or very close to zero. Once Ẽ is determined, one can eventually compute U_e and M_D.
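The role played by the undetermined complex orthogonal matrix Ẽ can be illustrated with a small numerical sketch. Here O, s and Ẽ are stand-ins (simple complex rotations and a diagonal matrix, not the matrices of any data set), and we take H̃ = O √s Ẽ^T as in Eq. (26); the point is that H̃H̃^T = O s O^T is independent of the choice of Ẽ, which is why the additional Hermiticity condition, Eq. (30), is needed to determine Ẽ:

```python
import numpy as np

def crot(eta):
    # a 3x3 complex orthogonal matrix: a rotation through a complex angle
    c, s = np.cos(eta), np.sin(eta)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]], dtype=complex)

O = crot(0.4 + 0.3j)                         # stand-in complex orthogonal O
s = np.diag([0.5 + 0.1j, 1.0 - 0.2j, 2.0 + 0.4j])   # stand-in diagonal s
S = O @ s @ O.T                              # symmetric normal form of S

E = crot(-0.9 + 0.7j)                        # ANY complex orthogonal Etilde
assert np.allclose(E.T @ E, np.eye(3), atol=1e-12)

Ht = O @ np.sqrt(s) @ E.T                    # Htilde = O sqrt(s) Etilde^T
# Htilde Htilde^T = O s O^T = S, independent of the choice of Etilde:
assert np.allclose(Ht @ Ht.T, S, atol=1e-10)
```

Because S alone leaves Ẽ completely free, the Hermiticity of H is the extra ingredient that makes the problem well posed.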
3.1 Iterative procedure to solve for U_e and M_D

We now outline an iterative procedure that may be used to solve for U_e and M_D. Except for certain edge cases (to be described below), this procedure appears to be quite robust. A key to the success of this algorithm is the fact that, in practice, Eq. (24) is relatively well-approximated by making the replacement U_e → Ĩ, where Ĩ is a diagonal matrix with ±1 (as appropriate) down the diagonal.
In the description of the algorithm that follows, we denote quantities evaluated during the n-th iteration of the algorithm with an "n" subscript. In the first step of the n-th iteration, for example, we insert M_ν, M_N, a, β and U_e,n into Eqs. (21) and (24) and use those expressions to calculate B_n and S_n. Our shorthand notation for this is as follows: "Eqs. (21) and (24): M_ν, M_N, a, β, U_e,n → B_n, S_n." Adopting this shorthand throughout, we summarize the algorithm in Steps 1-9. We note the following:
• In the first step of the first iteration it is necessary to have a starting "guess" for U_e. Since s_a t_2β is assumed to be small, a reasonable starting point is to choose one of the eight possibilities Ĩ = diag(±1, ±1, ±1).
• In Step 2 we use Eq. (25) to determine O_n and s_n. In practice, we compute the complex eigenvalues and eigenvectors of the complex symmetric matrix S_n numerically (and assume that the eigenvalues are non-degenerate). Then we construct the complex matrix O_n and check that it is approximately orthogonal.
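This step can be sketched in a few lines of code (shown in Python/NumPy for concreteness; the paper's actual implementation is in Mathematica). For a complex symmetric S with non-degenerate eigenvalues, normalizing each eigenvector with respect to the complex bilinear form, v^T v = 1, yields a complex (rather than unitary) orthogonal eigenvector matrix O satisfying S = O s O^T:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
S = A + A.T                      # a generic complex symmetric test matrix

lam, V = np.linalg.eig(S)        # ordinary (non-Hermitian) eigenproblem
# Normalize each eigenvector column so that v^T v = 1 (complex bilinear norm).
# For distinct eigenvalues of a complex symmetric matrix, the eigenvectors
# are automatically orthogonal with respect to this bilinear form.
O = V / np.sqrt(np.diag(V.T @ V))
s = np.diag(lam)

assert np.allclose(O.T @ O, np.eye(3), atol=1e-8)   # complex orthogonality
assert np.allclose(O @ s @ O.T, S, atol=1e-8)       # symmetric normal form
```

The normalization can fail only if some v^T v happens to vanish (a "quasi-null" eigenvector), which does not occur for generic inputs.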
• Step 4 is the most challenging part of the algorithm. We have found in practice that if an incorrect set of signs has been chosen for Ĩ, it will not be possible to solve for Ẽ_n (hence the instruction at the end of Step 4).
• Once the correct Ĩ has been determined, it is typically sufficient to iterate through Steps 1-9 three to five times in order to determine U_e and M_D to within a reasonable degree of accuracy. We refer the reader to Sec. 4 for further details regarding the accuracy of the method.

Determination of Ẽ
The most challenging step in the algorithm described above is Step 4, in which we use Eq. (30) to solve for the elements of the complex orthogonal matrix Ẽ. In this subsection we describe how this may be accomplished. Throughout this subsection we suppress the index n that denotes the iteration number.
As we show in Appendix C, Ẽ can typically be parameterized as in Eq. (33), where c_{η_i} ≡ cos η_i and s_{η_i} ≡ sin η_i (i = 1, 2, 3), and where the angles η_1, η_2 and η_3 are assumed to be complex. The parameter ξ is either +1 or −1 and is equal to the determinant of Ẽ. The goal in Step 4 is to determine the three complex angles η_i and the sign of the discrete parameter ξ such that Eq. (30) is satisfied for all i and j. Parameterizing Ẽ as in Eq. (33) allows us to solve for the real and imaginary parts of the η_i in a relatively straightforward, prescriptive manner.
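The essential property of such a parameterization (complex orthogonality with determinant ξ) can be checked numerically. The sketch below uses a generic product of axis rotations through complex angles; it is meant only to illustrate the structure and does not reproduce the specific ordering and sign conventions of Eq. (33):

```python
import numpy as np

def rot(axis, eta):
    # 3x3 rotation about a coordinate axis through a COMPLEX angle eta
    c, s = np.cos(eta), np.sin(eta)
    i, j = [(1, 2), (0, 2), (0, 1)][axis]
    R = np.eye(3, dtype=complex)
    R[i, i] = R[j, j] = c
    R[i, j], R[j, i] = s, -s
    return R

eta = [0.3 + 0.2j, -0.1 + 0.5j, 0.7 - 0.4j]   # arbitrary complex angles
xi = -1                                        # discrete parameter, +1 or -1
E = xi * rot(0, eta[0]) @ rot(1, eta[1]) @ rot(2, eta[2])

# E^T E = 1 holds even for complex angles, since cos^2 + sin^2 = 1
# is an identity over the complex numbers; det(E) equals xi.
assert np.allclose(E.T @ E, np.eye(3), atol=1e-12)
assert np.isclose(np.linalg.det(E), xi)
```

Unlike a real rotation, such a matrix is generally not unitary; its entries can grow like cosh of the imaginary parts of the angles.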
The real and imaginary parts of the 2-2 element of Equation (30) give two relations, Eqs. (34) and (35), in which the R and I superscripts refer to the real and imaginary parts [e.g., c^R_{η_2} ≡ Re(cos(η_2))]. Equations (34) and (35) are equivalent to each other, as may be seen by noting that √s ∆B is anti-Hermitian [see Eq. (29)]. Rewriting Eq. (34) in terms of the real and imaginary parts of the complex angle η_2 yields Eq. (36), which has the two solutions given in Eqs. (37) and (38).
[Footnote 10: Recall that there is an overall sign ambiguity in U_e, so one actually expects two choices for Ĩ that yield solutions for M_D.]
[Footnote 11: As explained in Appendix C, the parameterization in Eq. (33) does break down in certain edge cases (for example, when the 2-2 element of Ẽ is equal to unity). In most such cases, another angular parameterization can be used.]
These solutions are valid as long as η_2^I satisfies the condition given in Eq. (41). The 1-2 and 2-1 elements of Equation (30) also give redundant relations, again due to the fact that √s ∆B is anti-Hermitian. The same may be said for the 2-3 and 3-2 elements. As a result, we are left with two complex relations. These expressions allow us to express the complex quantities s_{η_1} and c_{η_1} as in Eqs. (44) and (45), where A, B, C and D are functions of s_{η_2} and s*_{η_2}. Imposing the constraints s²_{η_i} + c²_{η_i} = 1 then leads to the two expressions in Eq. (46), which involve s*_{η_3} and c*_{η_3}. Multiplying the two expressions in Eq. (46) by each other leads to a quartic equation in s*_{η_3}, although in practice we typically solve the expressions in Eq. (46) as stated, since this seems to be more stable numerically.
To summarize, for a given value of η_2^I satisfying Eq. (41) there are two solutions for η_2^R [see Eqs. (37) and (38)]. For both of these we may calculate η_2 = η_2^R + iη_2^I, and so compute s_{η_2} and then A, B, C and D. For each value of η_2 we may solve Eq. (46) to obtain a total of four solutions for s*_{η_3} and the corresponding values of c*_{η_3}. Back substitution into Eqs. (44) and (45) then yields s_{η_1} and c_{η_1}. Thus, for a given value of η_2^I satisfying Eq. (41) we generically expect a total of eight solutions for the sines and cosines of the complex angles η_1, η_2 and η_3. It remains to ensure that the 1-1, 1-3 and 3-3 elements of Eq. (30) are satisfied. Recalling the definition of the quantity ∆_ij in Eq. (31), we define |∆|² ≡ |∆_11|² + |∆_13|² + |∆_33|² [Eq. (47)], which will generically have eight values for a given value of η_2^I. The goal of our algorithm is to find value(s) of η_2^I (and corresponding values for the various sines and cosines of the complex angles) that correspond to a zero (or, numerically, a minimum) of Eq. (47).
Then the method proceeds as follows:
4(a) Choose a value for ξ (either +1 or −1).
4(b) Choose a value of η_2^I satisfying Eq. (41).
4(c) Construct the (generically eight) candidate solutions for the sines and cosines of the complex angles, as described above, and evaluate |∆|² for each.
4(d) Repeat Steps 4(b) and 4(c), searching for combinations of the complex angles that yield |∆|² ≈ 0 (in practice, we use an algorithm that searches for a minimum of |∆|²).
4(e) If no solutions are found that satisfy |∆|² ≈ 0, return to Step 4(a) and repeat the process for the opposite sign of ξ.
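The search in Steps 4(b)-4(d) amounts to a one-dimensional minimization over η_2^I. The following sketch illustrates a scan-then-refine strategy on a toy stand-in for |∆|² with a zero placed at 0.45; the true |∆|² follows from Eqs. (30), (31) and (47) and is not reproduced here:

```python
import numpy as np

def delta2(x):
    # toy stand-in for |Delta|^2 as a function of eta2_I: smooth,
    # non-negative, with a sharp zero at x = 0.45
    return (x**2 - 0.45**2) ** 2 + 1e-12

grid = np.linspace(0.0, 1.0, 2001)        # coarse scan over eta2_I
i = int(np.argmin(delta2(grid)))
lo, hi = grid[max(i - 1, 0)], grid[min(i + 1, len(grid) - 1)]

# golden-section refinement around the coarse minimum
g = (np.sqrt(5) - 1) / 2
for _ in range(80):
    a = hi - g * (hi - lo)
    b = lo + g * (hi - lo)
    if delta2(a) < delta2(b):
        hi = b
    else:
        lo = a
x_min = 0.5 * (lo + hi)
assert abs(x_min - 0.45) < 1e-6
```

Any robust one-dimensional minimizer would serve here; the coarse scan matters because |∆|² can have several local minima, only some of which correspond to true zeros.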

Numerical results
For our numerical analysis we implemented a Monte Carlo algorithm as described in Refs. [28,29] to generate various data sets. The algorithm scans over random values of the Yukawa matrices defined in Eq. (3), searching for sets of parameters that are consistent with experimental constraints. For each set of Yukawa matrices, we were then able to compute m_e, M_ν, M_N, M_D, U_e and Ẽ. We generated, and subsequently analyzed, 16 data sets in this way; in each case, the neutrino masses and mixings satisfied the current experimental constraints at the 2σ level as given in Ref. [34]. In addition, we verified that the effective neutrino mass for neutrinoless double beta decay is always below the present limit [35]. For every data set we fixed tan β = k_2/k_1 to 3/181 and v_R = 50 TeV [28]. The remaining independent parameters, s_a and |v_L|, generally varied within the ranges O(10^-2 - 1) and O(10^-3 - 10^-1) eV, respectively, whereas the phase of v_L took values between 0 and 2π. For one of the data sets we purposely set s_a to zero so that we could test our method on a parity-conserving data set. With these choices for the various parameters, the spectrum of the heavy neutrinos spanned from about 8 to 100 TeV. Our main focus in this section is not on performing a phenomenological analysis but on showing that the method described in Sec. 3 can be successfully applied to data sets that are consistent with experimental constraints, allowing us to recover the matrices U_e and M_D in each case. It is in this spirit that we are not particularly interested in a phenomenologically inspired spectrum for the heavy neutrinos. In the following subsections, we illustrate how the method described in Sec. 3 leads to solutions of Eqs. (14), (15) and (16) for three different scenarios. One of the three respects the generalized parity symmetry in the Dirac Yukawa sector after spontaneous symmetry breaking and the other two do not. Of the latter two, one has a relatively small value of |s_a| (and is thus relatively "close" to the parity-conserving limit), while the other has a larger value of |s_a|. While we only consider these three data sets in detail, we emphasize that the method was successful for all 16 of the data sets. The calculation takes approximately 5 minutes per data set on a desktop PC with an Intel Core i7-9700K processor (8 × 3.60 GHz) and 16 GB RAM.
[Footnote 12: The charged lepton masses generated by the routine typically agree with their corresponding experimental values to within a few parts in 10^4.]
[Footnote 13: The authors of Ref. [28] introduce an extra U(1) symmetry into the left-right model, broken by a small dimensionless parameter. One of the advantages of this framework is that it allows scenarios consistent with neutrino phenomenology for a relatively low v_R scale. Since we followed this work, we also implemented the U(1) horizontal symmetry in our analysis and fixed the value of this parameter to 0.3. Note that while the horizontal symmetry sets hierarchical scales for the Yukawa matrices, it does not impose relations among their elements. Therefore, the inclusion of this symmetry does not imply any loss of generality regarding the original problem of unwinding the seesaw.]
[Footnote 14: In any case, a lower heavy-neutrino spectrum could in principle be generated by decreasing the value of v_R.]
[Footnote 15: The computer code that implements the method is written in Mathematica and is available from the authors upon request.]
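Schematically, the generation loop is an accept/reject scan over random Yukawa textures. The sketch below is a toy version: the "observable" and the acceptance window are stand-ins for the full set of constraints from Refs. [34,35], and the matrix scale is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_observables(F):
    # stand-in for the derived masses/mixings (illustrative only)
    return np.sort(np.abs(np.linalg.eigvals(F)))

accepted = []
trials = 0
while len(accepted) < 5 and trials < 100000:
    trials += 1
    F = rng.normal(scale=0.1, size=(3, 3))
    F = (F + F.T) / 2            # F is complex symmetric in the model;
                                 # here a real symmetric stand-in
    obs = toy_observables(F)
    if obs[0] > 1e-3:            # stand-in "experimental" acceptance window
        accepted.append(F)

assert len(accepted) == 5
```

In the actual analysis, each accepted point supplies a full set (m_e, M_ν, M_N, M_D, U_e, Ẽ), of which only M_ν and M_N are handed to the reconstruction method as inputs.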
To study these three data sets we show plots of the quantity |∆|², defined in Eq. (47), as a function of Im(η_2). Minimizing |∆|² is a key step in determining the elements of the matrix Ẽ, which then allows us to determine U_e and M_D. For each value of Im(η_2) we determine the sines and cosines of the complex angles in the matrix Ẽ [see Eq. (33)] that are consistent with the "central cross" elements in Eq. (30). |∆|² is a measure of how well the remaining elements of this equation are satisfied; it is zero for solutions of Eq. (30). In principle, minima of |∆|² may need to be found several times, since the overall method is iterative. In the following, we show plots of |∆|² that are obtained after an appropriate number of iterations have been performed.

Parity-conserving scenario
We first consider the parity-conserving scenario (s_a = 0) that was noted above. For this example we obtain explicit numerical forms for M_D (expressed in units of GeV), Ẽ and U_e. The matrices Ẽ and U_e both have a form consistent with one of the expected forms for the parity-conserving case (see Section 6, as well as Ref. [6]). Also, aside from small numerical errors, M_D U_e^† is Hermitian, as is expected from Eq. (14). Figure 1 shows a plot of |∆|² as a function of Im(η_2). As is evident from the figure, the curves approach zero for Im(η_2) ≈ 0. We normally expect eight solutions for each value of Im(η_2); to within numerical rounding errors, there are two sets of degenerate curves in this case (one set for each value of Re(η_2) for a given value of Im(η_2)). We expect the degeneracies to be removed within parity-violating scenarios. In the following subsections we consider two data sets with s_a ≠ 0. While both data sets illustrate the expected breaking of the degeneracy, one of them has |s_a| ≪ 1 and exhibits some qualitative similarities to the parity-conserving scenario.
We note that since this data set has s_a = 0, we could also use the analytical solution method presented in Section III.A of Ref. [6]. We have analyzed this data set using both the approach of Ref. [6] and the method described in the present work and have verified that they lead to consistent values of U_e and M_D, up to numerical rounding errors. Applying the method from Sec. 3 actually required some care in the parity-conserving scenario. Technically, in this scenario A and C are both zero in Eq. (46), so the equation for s*_{η_3} is quadratic. In practice we have found that small numerical errors lead to small but non-zero values for A and C. Attempting to solve Eq. (46) as a quartic equation in this case was not numerically stable, so we resorted to setting A and C to zero by hand and solving the resulting quadratic equation.
[Footnote 16: As is noted in Sec. 6, our method assumes that Ẽ_22 ≠ ±1, a condition that is satisfied for this data set.]
Our method also returns a second solution for M_D, in which we have dropped small numerical errors; the negative of that expression is returned as well. One can confirm by direct substitution that these new values of M_D are also solutions of Eqs. (14) and (16).

Parity-violating scenario I
We next consider a scenario with a small degree of parity violation (s_a = 0.00187). In this example we obtain explicit numerical forms for Ẽ, U_e and M_D [the latter two are given in Eqs. (52) and (53)]. All elements of Ẽ, apart from those corresponding to the parity-conserving form, are small compared to 1. Thus, as one might expect, Ẽ is "close" to one of the possible parity-conserving forms. Furthermore, M_D is quite close to being "sign-Hermitian" in this example, and U_e is close to being a diagonal sign matrix.
[Figure 2: |∆|² vs. Im(η_2) for parity-violating scenario I. Note that the horizontal and vertical scales are different from those of Figure 1.]
We applied our method to this data set and were able to determine the matrices M_D and U_e numerically. The results are numerically consistent with Eqs. (52) and (53). Figure 2 shows the corresponding 8-fold family of |∆|² versus Im(η_2) curves. In this semilogarithmic plot, there are slowly varying curves that correspond to larger values of |∆|², as well as curves that fall and rise abruptly in the solution region. It is seen that |∆|² approaches zero for |Im(η_2)| ≈ 0.0065. This case demonstrates several similarities to the parity-conserving scenario shown above. The curves still have relatively low |∆|² values and two of them approach |∆|² = 0 at very small values of Im(η_2). New features compared to the parity-conserving example are that the degeneracies in |∆|² have now been broken and that there are two values of Im(η_2) (located symmetrically about zero) that yield solutions. These two values of Im(η_2) yield the same solutions for M_D, ignoring small numerical errors. In practice we use the positive root.

Parity-violating scenario II
We now consider a scenario with a larger degree of parity violation (s_a = −0.51). For this example we obtain the numerical forms for U_e and M_D given in Eqs. (55) and (56). The Ẽ matrix possesses larger values in this case and loses its resemblance to the Ẽ of the parity-conserving case. The matrices M_D and U_e are still relatively close to being sign-Hermitian and diagonal sign matrices, respectively, but they are not as close to those forms as were the corresponding expressions for the "almost" parity-conserving case [see Eqs. (53) and (52), respectively].
We were able to successfully apply our method to this example and recover values for U e and M D consistent with Eqs. (55) and (56). Figure 3 shows the corresponding |∆| 2 versus Im(η 2 ) curves, with the solutions evident near Im(η 2 ) ≈ ±0.45. In general, the curves reach significantly larger |∆| 2 values compared to the previous cases and their shapes noticeably differ from the curves of the parity-conserving scenario.
We have analyzed the convergence of the method for all of the parity-violating data sets and the conclusions are similar for all of them. As an illustration of that analysis, we show the results obtained for the current scenario. Figure 4 shows two different measures of the relative error between the output of the method for U e (M D ) and the true matrix. As can be seen from the figure, the convergence is extremely fast, precise and stable for both matrices.

Alternative Methods
In this section we describe some other numerical methods that we explored in our attempt to solve for U e and M D . It is worth noting from the beginning, however, that these alternative methods were not successful in finding solutions for all 15 of the parity-violating data sets. Only the method described in Section 3 (or slight variations on that method) was successful in this regard.
It is perhaps useful to restate the goal of our analysis. The equations that define the original problem are Eqs. (26), (27) and (31) in Ref. [6]; these are stated here as Eqs. (14), (15) and (16). Equations (14) and (15) come from the relations between M D , m e and the Yukawa matrices, and Eq. (16) is the seesaw formula for the light neutrino mass matrix. These equations constitute a system of coupled complex matrix equations for which we assume M N and M ν as experimental inputs and U e and M D as unknowns to be solved for.
In the following we briefly describe two alternative methods that implement different numerical approaches to solving for U e and M D : A) a least squares minimization method and B) a fully iterative method.

Least squares minimization method
In our first attempt we employed a least squares minimization technique to solve the three coupled matrix equations (Eqs. (15) and (16), together with an equation expressing the unitarity condition for U e ). In this method the system of 3 × 3 complex matrix equations is transformed into a system of 51 equations for 36 unknown matrix elements. The quantity to minimize is the sum of the squared differences between the left- and right-hand sides of those 51 equations when the solved values for the matrix elements of M D and U e are substituted. The least squares minimization was achieved employing a variant of the Newton method [36].
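A minimal sketch of this strategy, assuming SciPy is available: the complex unknowns are flattened into real parameters, the unitarity condition is appended to the residual vector, and SciPy's least-squares driver plays the role of the Newton-type minimizer [36]. The toy constraint U e m = T below is a placeholder standing in for the actual mass-matrix equations (15) and (16), not a reproduction of them.

```python
import numpy as np
from scipy.optimize import least_squares

def pack(M):
    """Flatten a complex 3x3 matrix into 18 real unknowns."""
    return np.concatenate([M.real.ravel(), M.imag.ravel()])

def unpack(x):
    return x[:9].reshape(3, 3) + 1j * x[9:].reshape(3, 3)

# Schematic stand-in for the coupled system (NOT the actual Eqs. (15)-(16)):
# find U_e with U_e^dag U_e = I and U_e @ m = T.
m = np.diag([0.000511, 0.1057, 1.777])          # charged lepton masses, GeV
theta = 0.1
U_true = np.array([[np.cos(theta), np.sin(theta), 0],
                   [-np.sin(theta), np.cos(theta), 0],
                   [0, 0, 1]], dtype=complex)
T = U_true @ m

def residuals(x):
    U = unpack(x)
    r1 = (U.conj().T @ U - np.eye(3)).ravel()       # unitarity condition
    r2 = (U @ m - T).ravel() / np.diag(m).mean()    # crude dimensional rescaling
    r = np.concatenate([r1, r2])
    return np.concatenate([r.real, r.imag])

sol = least_squares(residuals, pack(np.eye(3, dtype=complex)))
U_fit = unpack(sol.x)
```

Even in this toy setting the disparity between the mass scales of the rows makes the problem ill-conditioned, which is the difficulty the normalization strategies discussed next are meant to address.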
One complication in this approach is that the equations that we are attempting to solve have different dimensions -- two of the matrix equations have dimensions of mass, while the third (expressing the unitarity of U e ) is dimensionless. Moreover, the elements of the matrix M ν are of order 10 −11 or 10 −12 GeV, compared to order 1 for the unitarity condition. In view of this, we employed three normalization strategies: 1) no dimensional normalization was applied, but Eq. (16) was multiplied by a dimensionless numerical factor to compensate for the smallness of the matrix M ν ; 2) the three equations of the system were transformed so that they have a unit matrix on their right-hand side; and 3) the three equations were transformed so that they have m e M ν m e −2 on their right-hand side. We also considered a hybrid approach in which we first applied the third strategy and then used the output for U e from that approach as a starting point when using the second strategy. This approach improved the accuracy in several cases. We analyzed these four strategies on the 15 data sets.
The least squares minimization approach to our problem attempts to minimize a particular sum of squares while traversing a 36-dimensional space of unknowns. This approach depends on the initial values chosen for the various unknowns. In our calculations we set the initial values for the matrix U e to be U e = diag(±1, ±1, ±1). These are reasonable starting points, since the actual U e matrices for the parity-violating data sets are still somewhat well approximated by parity-conserving ones with a particular choice of signs. We found that this method does not converge when an incorrect combination of signs in diag(±1, ±1, ±1) is used. In general, we found that while it was possible to achieve a solution for all of the data sets using some of the above-mentioned strategies, none of these strategies worked for the entire collection of data sets. This illustrates the considerable difficulty in solving the problem at hand. By way of contrast, the method described in Sec. 3 does yield a solution for all of the data sets that we studied.
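Enumerating the eight sign seeds is straightforward; `run_minimization` in the comment below is a hypothetical driver routine, not part of our actual procedure.

```python
import itertools
import numpy as np

# The eight candidate starting points U_e = diag(+/-1, +/-1, +/-1).
sign_seeds = [np.diag(s).astype(complex)
              for s in itertools.product((1.0, -1.0), repeat=3)]

# A solver would then be attempted from each seed in turn, e.g.
#   for seed in sign_seeds:
#       result = run_minimization(seed)   # hypothetical driver routine
#       if result.converged:
#           break
```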
For the cases where the least squares minimization method leads to a solution, we compared the accuracy achieved to that achieved by the method in Sec. 3, using δ (1) Ue as a measure. The accuracy obtained by the method in Sec. 3 turns out to be two orders of magnitude higher on average. That method also ensures safer control over the solution search since it reduces the multi-dimensional space of unknown variables to lower dimensional regions at each stage of the procedure, whereas the least squares minimization method attempts to find all unknown variables at the same time.

Fully iterative method
Given the relatively low accuracy and the instability of the least squares minimization method described in the previous subsection, we have also investigated iterative approaches that are based on the analytical solution for the parity-conserving case (see Ref. [6]). In this subsection we describe various attempts along these lines. The goal of these approaches is to overcome the limitation imposed by the fact that M D is not Hermitian when parity is violated.
We take as our starting point Eqs. (25) and (26). Noting that H̃ can also be written in an equivalent form, we obtain an expression from which Ẽ follows. We also note that we may write a Riccati equation for U e that derives from Eq. (15). A solution of this equation is given by Eq. (62), where by the notation √ B Riccati m e we mean the principal root of the matrix B Riccati m e . Equation (62) actually has 8 solutions, which can be constructed by multiplying the principal root by diagonal matrices diag(±1, ±1, ±1).
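The 8-fold multiplicity of the principal-root construction can be illustrated with generic matrices. Here `B` is merely a placeholder standing in for B Riccati m e ; assuming distinct nonzero eigenvalues, the eight square roots are obtained from the principal one by sign flips in the eigenbasis.

```python
import itertools
import numpy as np
from scipy.linalg import sqrtm

def all_square_roots(B):
    """All 8 square roots of a diagonalizable 3x3 matrix with distinct,
    nonzero eigenvalues: multiply the principal root by diag(+/-1, +/-1, +/-1)
    in the eigenbasis of B."""
    w, V = np.linalg.eig(B)
    Vinv = np.linalg.inv(V)
    roots = []
    for signs in itertools.product((1, -1), repeat=3):
        D = np.diag(np.array(signs) * np.sqrt(w.astype(complex)))
        roots.append(V @ D @ Vinv)
    return roots

# Placeholder standing in for the matrix B_Riccati m_e (illustrative only).
B = np.array([[2.0, 0.3, 0.0],
              [0.1, 1.0, 0.2],
              [0.0, 0.4, 3.0]])
roots = all_square_roots(B)
principal = sqrtm(B)
```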
The first algorithm that we designed begins by setting initial values for the matrices U e and Ẽ (we take random complex matrices). Written succinctly in the shorthand notation adopted in Section 3, each iteration then updates the unknowns using Eq. (16), the Riccati solution and Eq. (14), and finally orthogonalizes Ẽ i+1 before returning to the start of the loop.
Unfortunately, this algorithm ended up being stable for only about half of the 15 parity-violating data sets (in some cases we found that, even when starting from the solution, the algorithm diverges, leading to very distant regions of the parameter space). We also tried two different variations, setting instead the initial values of U e and M D to random complex matrices; these variations can be summarized analogously. We also designed an alternative algorithm that did not use the Riccati equation. For this algorithm we only set the initial value of U e as a random complex matrix. One might expect this algorithm to be more stable, since all four of its steps use the i-th iterates (instead of mixing the i-th and (i+1)-th iterates at a given step, as in the previous algorithms) and because it depends only on the initial value of U e . However, this algorithm did not find a solution for any of the data sets.
Finally, we modified one of these algorithms using an approach inspired by the difference-map algorithm [37]. The difference-map algorithm is known to be able to find solutions to iterative mapping algorithms that are unstable. Unfortunately, while this approach seemed to show some promise, our attempts along these lines were also not successful for all of the data sets.
In summary, the fully iterative algorithms were not successful for all of the data sets. Our experience with these algorithms underscores the considerable difficulty of solving this system of complex, nonlinear matrix equations in order to unwind the seesaw mechanism in the parityviolating case. Fortunately, the prescriptive method described in Sec. 3 and illustrated in Sec. 4 does appear to be able to solve these equations, at least for all of the data sets considered.
6 Detailed analysis of the equation HH T = S

In our efforts to extend the ideas of Ref. [20] to the parity-nonconserving case, we found it quite helpful to first understand the justification for every step in the analytical solution proposed in Ref. [20] for the parity-conserving case. It became clear to us how essential it is that H be Hermitian in order to derive an analytical solution. Yet, even when H is not Hermitian, it proved beneficial to factor H = O √ sEO † and concentrate on solving for the entries of the complex orthogonal matrix E (which will no longer be a simple signed permutation matrix).
In this section, we thus provide additional context and supporting explanations for the analytical derivation carried out in Sec. III of Ref. [20]. Specifically, we will show how to derive the matrices in Eq. (37) of Ref. [20] under mild assumptions. We also discuss which of these assumptions are necessary and which can be removed.
The mathematical context is the following: we assume H is an unknown, Hermitian 3×3 matrix and that S = HH T is known. The goal is to solve for H given S. As in Eq. (27) of Ref. [20], we assume that S = OsO T has been placed in "symmetric normal form"; here O is a complex orthogonal matrix and s is block diagonal with symmetric Jordan blocks. Precise details can be found in Section XI.3 of Ref. [33]; we will only consider the case when s is diagonal for the sake of simplicity.
The central claim is that H = O √ sEO † , where E is a signed permutation matrix whose form is determined by s. In fact, even in the parity-violating case (when H is not Hermitian), one can decompose H as H = O √ sEO † for some complex orthogonal matrix E, as the next lemma shows. Any further specification of E is precisely linked to the assumption that H is Hermitian. We begin by observing that ( √ s) −1 O T H O * is some complex orthogonal matrix, say P −1 ; indeed, using the orthogonality of O and the relation HH T = OsO T , one checks that this matrix times its own transpose is the identity. Rearranging, we have H = O √ s P −1 O † , which is the claimed decomposition with E = P −1 . In addition to the assumption that s is diagonal (which is made in Section III.D(i) of Ref. [20]), we further assume that (1) s has nonzero eigenvalues, (2) s has distinct eigenvalues.
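Lemma 1 can be checked numerically. Here we build H from a known O, s and a known complex orthogonal E 0 (constructing the symmetric normal form of a given S is more involved and is not attempted), and then recover E = ( √ s) −1 O T H O * :

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(7)

def rand_complex_orthogonal(scale=0.3):
    A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    return expm(scale * (A - A.T))   # exp of antisymmetric => complex orthogonal

O = rand_complex_orthogonal()
E0 = rand_complex_orthogonal()
s_diag = np.array([2.0 + 0.5j, -1.3 + 0.0j, 0.7 - 0.2j])  # distinct, nonzero
sqrt_s = np.sqrt(s_diag)

H = O @ np.diag(sqrt_s) @ E0 @ O.conj().T       # build H with known E = E0
S = H @ H.T                                     # then S = O diag(s) O^T
E = np.diag(1 / sqrt_s) @ O.T @ H @ O.conj()    # recover E from H, O, s
```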
All of the above assumptions are mild in a probabilistic sense: they hold with probability 1 in the space of all possible matrices H. Later we will show that assumption (2) is necessary for the analytical solution described in Ref. [20], for otherwise one can exhibit infinitely many matrices H with HH T = S. In contrast, we show that assumption (1) is not necessary.

Determination of E I , E II
The number of nonreal eigenvalues of s is even; moreover such eigenvalues come in complex conjugate pairs. As mentioned in Eq. (24) and following of Ref. [20], this is due to the characteristic polynomial of S = HH T having real coefficients whenever H is a Hermitian matrix.
Under the assumption that s is of size 3 × 3, there are thus only two cases: (I) All eigenvalues are real: s = diag(s 0 , s 1 , s 2 ), each s i ∈ R. (II) One eigenvalue is real and the other two form a complex conjugate pair: s = diag(z, s 0 , z * ), with s 0 ∈ R and z nonreal.
In either case, one can find a diagonal matrix √ s such that √ s √ s = s. Moreover, one may assume that the complex entries of √ s come in conjugate pairs. Proposition 1. Assume that s is diagonal, satisfies (1) and (2), and has eigenvalues ordered as in (I) or (II) above. Then H must be equal to one of the matrices O √ sEO † , where E comes from a finite list of possibilities: in case (I), E is a diagonal sign matrix; in case (II), E is an anti-diagonal sign matrix whose two corner entries are equal to a common sign ε = ±1.
By Lemma 1, we can write H = O √ sEO † for some complex orthogonal matrix E. Now we observe that H † = O( √ sE) † O † ; therefore H is Hermitian if and only if √ sE is. Since we assumed H was Hermitian, the matrix √ sE must equal its own conjugate transpose. Rewriting this condition using the orthogonality of E gives one relation, and taking the inverse of both sides gives a second. Combining Eqs. (64) and (65), we find sE = Es * . Therefore E commutes with s, but up to a conjugation. But this can be remedied, for we know that s * is a diagonal matrix whose entries are just a permutation of the diagonal entries of s. In other words, there exists a permutation matrix Q (possibly the identity matrix) such that s * = QsQ −1 , which implies s * −1 = Qs −1 Q −1 by taking the inverse of both sides. As with all permutation matrices, Q is (real) orthogonal: Q T = Q −1 . Putting this all together, we have sEQ = Es * Q = E(QsQ −1 )Q = (EQ)s, so the complex orthogonal matrix EQ commutes with s.
Since s has distinct eigenvalues, EQ must be diagonal. The only diagonal 3 × 3 complex orthogonal matrices A are the eight possibilities A = diag(±1, ±1, ±1), because of the requirement that A 2 = AA T = I.
Finally, to establish the form of the matrix E, and thus find all solutions H, we just analyze what Q was in cases (I) and (II), respectively.
(I) If s = diag(s 0 , s 1 , s 2 ) with each s i ∈ R, then s = s * already, so Q = I. Therefore E itself must be one of the 8 possibilities diag(±1, ±1, ±1). Moreover, all of these possibilities are realizable; that is, O √ sEO † is Hermitian in each case.
(II) If s = diag(z, s 0 , z * ), then s * = QsQ −1 , where Q is the permutation matrix that exchanges the first and third basis vectors; hence E = AQ for one of the eight diagonal sign matrices A. However, only half of these are valid possibilities yielding O √ sEO † Hermitian. Indeed, writing out √ sE explicitly, one finds that it is Hermitian if and only if ε 1 = ε 3 . So E must take on the form of an anti-diagonal sign matrix whose two corner entries share a common sign.
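The counting in case (II) can be verified directly: of the eight sign matrices A in E = AQ, exactly the four with equal corner signs make O √ sEO † Hermitian. The specific values of z, s 0 and the random O below are illustrative.

```python
import itertools
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
O = expm(0.3 * (A - A.T))                 # a complex orthogonal matrix

z, s0 = 1.5 + 0.8j, 2.0
w = np.sqrt(z)                            # principal root of z
sqrt_s = np.diag([w, np.sqrt(s0), w.conjugate()])   # conjugate-paired entries

Q = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]], dtype=complex)  # swaps 1 <-> 3

hermitian_signs = []
for signs in itertools.product((1, -1), repeat=3):
    E = np.diag(signs).astype(complex) @ Q          # E = A Q, A a sign matrix
    H = O @ sqrt_s @ E @ O.conj().T
    if np.allclose(H, H.conj().T):
        hermitian_signs.append(signs)
```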

Distinct eigenvalues are necessary
We want to point out that assumption (2) of Proposition 1, concerning distinct eigenvalues, is necessary in order to find only finitely many solutions for H. Suppose, then, that s has a repeated real eigenvalue, say s = diag(s 0 , s 0 , s 1 , . . .) (the later eigenvalues may be real or come in complex conjugate pairs). We already know we can find a Hermitian matrix H 1 such that H 1 H 1 T = diag(s 1 , . . .) (so H 1 has dimension 2 smaller). Therefore it suffices to show that there are infinitely many Hermitian 2 × 2 matrices A such that AA T = diag(s 0 , s 0 ). Consider matrices of the form A = ( a, ib ; −ib, a ) with both a, b ∈ R. Then as long as a 2 − b 2 = s 0 , A is such a solution. The equation a 2 − b 2 = s 0 has infinitely many solutions for a, b (graphically, the points on that hyperbola); for example, if s 0 is nonnegative, then a = ± √ (s 0 + b 2 ) for any choice of b gives a distinct solution.
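The hyperbola of solutions can be exercised numerically; the 2 × 2 ansatz below is one possible realization of the construction described above.

```python
import numpy as np

def family_member(s0, b):
    """A Hermitian 2x2 matrix A with A A^T = s0 * I, one for every real b
    (the ansatz realizes a^2 - b^2 = s0 along a hyperbola)."""
    a = np.sqrt(s0 + b**2)
    return np.array([[a, 1j * b], [-1j * b, a]])

s0 = 2.0
members = [family_member(s0, b) for b in (0.0, 0.5, 1.0, 2.5)]
```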
Now suppose instead that the repeated eigenvalue is a nonreal number z, so that z * is repeated as well. Once again it suffices to find infinitely many Hermitian matrices A such that AA T = diag(z, z, z * , z * ).
A straightforward calculation reveals that, for a suitable Hermitian block ansatz for A depending on two complex parameters x and y, AA T = diag(z, z, z * , z * ) as long as x 2 + y 2 = z. Again there are infinitely many solutions to the equation x 2 + y 2 = z (pick any x ∈ C and there is at least one solution for y).

Handling an unrepeated eigenvalue 0
Suppose HH T = OsO T as before, with s diagonal and having distinct eigenvalues (assumption (2)). Now suppose that s has 0 as one of its eigenvalues. Without loss of generality, we assume that HH T = s (using a change of basis as before) and that s = diag(a, b, 0), where a ≠ 0 and b ≠ 0. One finds that the problem of solving for H reduces to solving for a smaller matrix H̃, and the smaller system can be solved as before.

Discussion and Conclusions
It is well known that at least two neutrinos are light massive particles. However, it remains crucial to understand the origin of neutrino mass. The seesaw mechanism is a compelling possibility, but it still needs to be probed as a consistent explanation. For the charged fermions, experiments support the SM explanation that ties the particles' masses to the corresponding Yukawa terms in the underlying theory. In the case of neutrinos, one would want to be able to determine the Dirac mass matrix (M D ) connecting the left-handed neutrinos and the new neutral lepton singlets (N ) in terms of the light neutrino masses and mixings (M ν ) and the mass matrix of the heavy states (M N ). Therefore, probing the seesaw requires the measurement of these two matrices and a scheme that allows the determination of M D from them.
Within the context of minimal extensions of the SM, M D is not unambiguously determined in terms of M ν and M N . This problem can be overcome, however, by including more structure in the theory. This is precisely the case for the left-right symmetric model, where the seesaw is a natural outcome of spontaneous symmetry breaking. In this paper we have studied the scenario in which the left-right symmetry is implemented via a discrete generalized parity, P. In this scenario one can solve for M D analytically using the approach described in Refs. [6,20] as long as the bidoublet Higgs field has a real vacuum expectation value. In contrast, for a complex VEV, which induces P parity violation in the Dirac Yukawa sector, the problem is more difficult to handle and an analytical solution is lacking. Although this case can in principle be addressed numerically, we are not aware of any defined numerical procedure in the literature. With the intention of filling this gap, the goal of this paper has been to design and test a prescriptive numerical method that allows one to determine M D only from the physical information contained in the matrices M ν and M N .
For the parity-violating case, we have found that the problem of determining M D is inherently tied to the knowledge of U e , the unitary matrix associated with the transformation from the gauge basis to the charged-diagonal basis. Therefore, both matrices need to be solved simultaneously. The method proposed in this paper, as described in Sec. 3, fulfills this goal through an iterative procedure that has proven to be stable and has led to solutions for all of the tested data sets. We illustrated the procedure explicitly in Sec. 4 for three different data sets that had varying degrees of parity violation.
Finally, it is worth stressing the difficulty of the problem faced in this article. Indeed, in Sec. 5 we presented alternative iterative methods that were stable for some data sets but not for others. The problem of probing the seesaw mechanism when parity is violated is important and challenging enough to require a robust solution method.
A Connection to Notation in Ref. [28]

In this appendix we outline the procedure for diagonalizing the charged and neutral mass matrices, and we also specify the relations between the charged-diagonal basis (used in Ref. [6] and in the present work) and the gauge basis (used in Ref. [28] and summarized at the beginning of Sec. 2).
Equations (8), (10), (11), (12) and (13) give the explicit expressions for M , M LR , M LL , M RR and the light neutrino mass matrix, respectively, in terms of Yukawa coupling matrices in the gauge basis. The charged lepton mass matrix may be diagonalized using a biunitary transformation, with the elements of m e taken to be real and positive. The light and heavy neutrino mass matrices may likewise be diagonalized using unitary matrices. The unitary matrices used to diagonalize the charged and neutral lepton mass matrices are then used to construct the left- and right-handed PMNS matrices; in these expressions, B φ is a diagonal phase matrix and S L and S R are diagonal sign matrices, used to bring V L and V R into their conventional forms. Defining ν L,R and e L,R to be the neutral and charged lepton fields in the mass basis (i.e., in the basis in which the mass matrices are diagonal), the left- and right-handed PMNS matrices appear in the charged-current Lagrangian when it is written in terms of the fields in the mass basis. Finally, we note that the left-handed PMNS matrix is parameterized as in Refs. [31] and [28] in terms of a matrix U (0) and a diagonal matrix A L that may be written in terms of two Majorana phases, A L = diag(e iα 1 /2 , e iα 2 /2 , 1). The matrix U (0) is written in terms of s ij and c ij , the sines and cosines, respectively, of the (real) angles θ 12 , θ 13 and θ 23 . The interested reader is referred to Ref. [28] for a parameterization of V R .
Reference [6] works in a basis in which the charged lepton mass matrix is diagonal, but the neutrino mass matrices are not; we have called this basis the "charged-diagonal" basis. The diagonalization of the charged lepton mass matrix was shown above, in Eq. (67). Equations (14)-(16) also include the matrix U e ; this matrix is defined in terms of the matrices V L and V R that are used to diagonalize M . One possible source of confusion is that the method described in this Appendix is an iterative one that is itself used in the context of another iterative procedure (i.e., the one described in Sec. 3.1). In this Appendix we will suppress the index for the "Step #" in the larger iterative process. The index m that is used here refers to the mth step in the iterative process used to determine U e for a given (i.e., fixed) step of the procedure described in Sec. 3.1.
We start by recalling the definition of M in Eq. (17), which allows us to reexpress Eq. (15) in the form of Eq. (81), where m e is a diagonal matrix containing the charged lepton masses. In the limit that s a t 2β goes to zero, the unitary matrix U e becomes a diagonal matrix whose non-zero entries are ±1. With this in mind, we define the matrix U e (m) = U (m) U (m−1) · · · U (1) U (0) , where U (0) = Ĩ is the diagonal sign matrix defined in Eq. (32). The matrix U e is taken to be the limit of Eq. (82) as m approaches infinity. Each of the unitary matrices U (j) may be expressed as an exponential of i times a sum over α (j) i λ i , where λ i , i = 1, . . . , 8, are the usual Gell-Mann matrices, λ 9 is the unit matrix, and the α (j) i are real parameters that are to be determined. The idea of the procedure is to determine matrices U (1) , U (2) , U (3) , . . . , in such a way that U (m) approaches the identity matrix for large m (i.e., in such a way that the α (m) i approach zero for large m). This allows one to truncate the infinite product in Eq. (83), so that U e (mmax) (for some m max ) is used as a suitable approximation to U e .
In the first step of the procedure we substitute U (1) U (0) into Eq. (81) in place of U e , and then expand the expression for U (1) (see Eq. (84)) to linear order in the α (1) i . This gives us the defining expression for the nine unknowns α (1) i . Defining m̃ e ≡ Ĩm e and rearranging, we obtain Eq. (86). To solve for the α (1) i , we multiply that expression by λ k and take the trace, which yields nine equations (for k = 1, 2, . . . , 9) in the nine unknowns. Once we have determined the α (1) i , we use the series expansion of Eq. (84) to determine the matrix U (1) (summing up enough terms so that the result is very close to unitary). While the linearized version of U (1) was an exact solution of Eq. (86), the "re-unitarized" version of the matrix (i.e., U (1) ) is no longer a solution when Eq. (85) is substituted into Eq. (81). This brings us to the next step in the procedure.
In the second step we substitute U (2) U (1) U (0) into Eq. (81) and expand U (2) to linear order in the coefficients α (2) i , while keeping the matrix U (1) in its exactly unitary form. Performing the same manipulations as in the previous step, we obtain the analogous expression for the α (2) i , with a suitably redefined m̃ e . This allows us to solve for the α (2) i and to exponentiate the corresponding sum to determine U (2) .
We continue in this manner, linearizing U (m) at the mth step in order to determine the coefficients α (m) i (while using the exactly unitary versions of the matrices determined in the previous steps) and then "re-unitarizing" at the end to obtain U (m) . After several iterations, the coefficients become vanishingly small and we terminate the process, having obtained an approximation to U e that is unitary and satisfies Eq. (81) to a high degree of accuracy.
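The linearize/re-unitarize cycle can be sketched on a toy problem. The equation U X = Y below is merely a stand-in for Eq. (81) (whose exact form is not reproduced here), but the machinery is the same: expand the correction in the Gell-Mann basis plus the identity, solve for real coefficients via trace projections, discard the small imaginary parts, and re-unitarize by exponentiation.

```python
import numpy as np
from scipy.linalg import expm

# Gell-Mann matrices lambda_1..lambda_8 plus the identity (lambda_9): a basis,
# orthogonal under the trace inner product, for 3x3 Hermitian matrices.
lams = [np.zeros((3, 3), dtype=complex) for _ in range(7)]
lams[0][0, 1] = lams[0][1, 0] = 1
lams[1][0, 1] = -1j; lams[1][1, 0] = 1j
lams[2][0, 0] = 1; lams[2][1, 1] = -1
lams[3][0, 2] = lams[3][2, 0] = 1
lams[4][0, 2] = -1j; lams[4][2, 0] = 1j
lams[5][1, 2] = lams[5][2, 1] = 1
lams[6][1, 2] = -1j; lams[6][2, 1] = 1j
lams.append(np.diag([1.0, 1.0, -2.0]).astype(complex) / np.sqrt(3))
basis = lams + [np.eye(3, dtype=complex)]

def hermitian_coeffs(A):
    """Real expansion coefficients of a Hermitian A in the basis above;
    tiny imaginary parts are discarded, as described in the text."""
    return np.array([np.trace(lam @ A).real / np.trace(lam @ lam).real
                     for lam in basis])

# Toy stand-in for Eq. (81): find the unitary U with U @ X = Y.
rng = np.random.default_rng(5)
X = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
G = 0.2 * rng.standard_normal((3, 3))
G = G + G.T                                   # small Hermitian generator
U_true = expm(1j * G)
Y = U_true @ X

U = np.eye(3, dtype=complex)                  # U^(0): here simply the identity
for m in range(6):
    # linearize: (1 + i*A) U X ~= Y  =>  A ~= -i (Y - U X)(U X)^(-1)
    C = -1j * (Y - U @ X) @ np.linalg.inv(U @ X)
    alpha = hermitian_coeffs(0.5 * (C + C.conj().T))   # Hermitian projection
    U = expm(1j * sum(a * lam for a, lam in zip(alpha, basis))) @ U
```

Because the coefficients are re-exponentiated at every step, each iterate is exactly unitary, and the coefficients shrink rapidly once the iterates are close to the solution.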
We conclude with two comments: 1. The matrix M that is produced in Step 7 of the iterative process described in Sec. 3.1 is actually only an approximation to the exact expression and may not be exactly Hermitian.
In practice, therefore, we replace M by (1/2)(M + M † ) wherever it appears in the expressions in this Appendix. 2. Equations (88) and (90) and the analogous expressions for the subsequent steps in the process guarantee that the α (j) i will be real if M is Hermitian and if unique solutions exist. When the α (j) i are determined numerically they generically include small imaginary parts. We discard these.
C Angular parametrization of SO(3, C)

In this appendix, we show that almost all 3 × 3 complex orthogonal matrices of determinant 1 can be realized as

( c η 1 c η 3 − c η 2 s η 1 s η 3     c η 1 s η 3 + c η 2 c η 3 s η 1     s η 1 s η 2 )
( −s η 1 c η 3 − c η 1 c η 2 s η 3   c η 1 c η 2 c η 3 − s η 1 s η 3     c η 1 s η 2 )
( s η 2 s η 3                        −c η 3 s η 2                        c η 2 )

where s η i = sin(η i ) and c η i = cos(η i ), for suitable choices of complex angles η i ∈ C. This generalizes the well-known parametrization of SO(3, R) using Euler angles (see for example Ref. [38]).
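One consistent convention realizing entries of this kind is the Euler-style product Z(η 1 ) X(η 2 ) Z(η 3 ) of complex-angle rotations; since sin 2 + cos 2 = 1 holds for complex arguments as well, the product is automatically complex orthogonal with determinant 1. (The particular product ordering used below is our assumption.)

```python
import numpy as np

def Z(eta):   # rotation in the 1-2 plane by a complex angle
    c, s = np.cos(eta), np.sin(eta)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

def X(eta):   # rotation in the 2-3 plane by a complex angle
    c, s = np.cos(eta), np.sin(eta)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def so3c(eta1, eta2, eta3):
    """Euler-style product Z(eta1) X(eta2) Z(eta3); with complex angles this
    sweeps out (almost all of) SO(3, C)."""
    return Z(eta1) @ X(eta2) @ Z(eta3)

R = so3c(0.3 + 0.2j, 1.1 - 0.4j, -0.7 + 0.1j)
```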
We start with a lemma that we will need later.

Lemma 2.
Suppose v 2 + w 2 = 1 for complex numbers v, w. Then there exists a complex angle η such that v = cos η, w = sin η.
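The construction in the proof can be sketched via the principal logarithm (the particular point (v, w) below is illustrative):

```python
import numpy as np

def complex_angle(v, w):
    """Given v**2 + w**2 == 1, return eta with cos(eta) = v and sin(eta) = w,
    via eta = -i log(v + i w) (well defined because v + i w != 0)."""
    return -1j * np.log(v + 1j * w)

w = 0.4 + 0.9j                  # any complex point with v**2 + w**2 = 1
v = np.sqrt(1 - w ** 2)
eta = complex_angle(v, w)
```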
Note that (v + iw)(v − iw) = v 2 + w 2 = 1, so v + iw ≠ 0. Find any complex angle η such that v + iw = e iη ; this is possible since v + iw ≠ 0 and the range of e z is all nonzero complex numbers. Then e −iη = v − iw, so that cos η = (e iη + e −iη )/2 = v and sin η = (e iη − e −iη )/(2i) = w, as desired. Now let A be a complex orthogonal matrix as in the theorem, and denote its second column by (p, q, r) T and its second row by (x, q, y). If p and r were both equal to 0, then the normalization of the second column would imply q 2 = 1, which cannot be. So we may assume p ≠ 0 (up to symmetry). Likewise, we will assume y ≠ 0, following a similar argument for the second row.
The discriminant of this equation is given by an expression involving a sign ε, which is either 1 or −1 and remains to be determined.

C.3 Other cases
If it happens that q 2 = 1 in the above orthogonal matrix A, then there are other "Euler angles" that may be used to parametrize its entries. The easiest way to obtain this is to replace A by A ′ = P AQ for suitable permutation matrices P and Q, such that the (2, 2) entry of A ′ is not ±1. In other words, even if q 2 = 1, there must be some other entry of A that does not square to 1. Permute rows and columns of A until that entry is in the (2, 2) position, and then apply Theorem 1.