Which differential equations correspond to the Lindblad equation?

The Lindblad master equation can always be transformed into a first-order linear ordinary differential equation (1ODE) for the coherence vector. We pose the inverse problem: given a finite-dimensional, non-homogeneous 1ODE, does a corresponding Lindblad equation exist? If so, what are the corresponding Hamiltonian and Lindblad operators? We provide a general solution to this problem, including a complete positivity test in terms of the parameters of the 1ODE. We also derive a host of properties relating the two representations (master equation and 1ODE), which are of independent interest.


I. INTRODUCTION
The Gorini-Kossakowski-Lindblad-Sudarshan (GKLS) master equation [1,2] is widely used in modeling the evolution of open quantum systems subject to Markovian dynamics [3-7]. It can be written in the following form:

ρ̇ = L[ρ] = L_H[ρ] + L_a[ρ], (1a)

L_H[⋅] = −i[H, ⋅], (1b)

L_a[⋅] = Σ_{i,j=1}^{d²−1} a_ij (F_i ⋅ F_j − ½{F_j F_i, ⋅}), (1c)

commonly known simply as the Lindblad equation. Here, the dot denotes a time-derivative, ρ is the state (density matrix) of the open system whose Hilbert space is d-dimensional, L is the Lindbladian, H = H† is the system Hamiltonian, {F_i} is a Hermitian operator basis, and the constants a_ij ∈ C are the elements of a positive semidefinite matrix of rates a (we provide complete definitions below). The positivity property of a is crucial since it is necessary and sufficient for the superoperator L to generate a completely positive (CP) map. This is known as the GKLS theorem. When a is not positive semidefinite (a ≱ 0), Eq. (1) still describes a time-independent Markovian quantum master equation but no longer generates a CP map. This situation arises, e.g., when the initial system-bath state is sufficiently correlated [9-14].
The Lindblad equation is a first-order linear differential equation, and it can easily be transformed to make this explicit. By expanding the density matrix ρ in the operator basis {F_i} one straightforwardly derives the equivalent nonhomogeneous form

⃗v̇ = G⃗v + ⃗c, (2)

G = Q + R, (3)

where the "coherence vector" ⃗v collects the real-valued expansion coefficients of ρ, Q = −Qᵀ is an antisymmetric matrix determined by H, while R and ⃗c are determined by a [3]. The problem we address in this work is the inverse question: When does a general, nonhomogeneous linear first-order differential equation describe a Markovian quantum master equation, and in particular a Lindblad equation? An elementary necessary condition is that G must have eigenvalues whose real parts are non-positive (otherwise lim_{t→∞} ⃗v(t) is unbounded). Here we go well beyond this observation and give a complete solution to the problem of inverting both H and a from G and ⃗c. We also reformulate the condition for complete positivity in terms of G and ⃗c. Our results apply in the finite-dimensional setting, and we leave the inverse problem for the infinite-dimensional or unbounded settings open.
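Since the transformation from (H, a) to (G, ⃗c) is entirely mechanical, it can be sketched in a few lines of code. The following illustration is ours, not the paper's: it uses a single qubit with the normalized Pauli matrices as the nice operator basis, and `liouvillian` and `to_G_c` are hypothetical helper names.

```python
import numpy as np

# Nice operator basis for d = 2: F0 = I/sqrt(2), Fj = sigma_j/sqrt(2),
# so that Tr(Fi Fj) = delta_ij.
d = 2
J = d**2 - 1
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
F = [np.eye(2, dtype=complex) / np.sqrt(2),
     sx / np.sqrt(2), sy / np.sqrt(2), sz / np.sqrt(2)]

def liouvillian(X, H, a):
    """L(X) = -i[H, X] + sum_mn a_mn (Fm X Fn - {Fn Fm, X}/2)."""
    out = -1j * (H @ X - X @ H)
    for m in range(J):
        for n in range(J):
            Fm, Fn = F[m + 1], F[n + 1]
            out += a[m, n] * (Fm @ X @ Fn
                              - 0.5 * (Fn @ Fm @ X + X @ Fn @ Fm))
    return out

def to_G_c(H, a):
    """Coherence-vector form v' = G v + c:
    G_ij = Tr[Fi L(Fj)]; c_i = Tr[Fi L(F0)]/sqrt(d), since the F0
    coefficient of any density matrix is fixed at 1/sqrt(d)."""
    G = np.array([[np.trace(F[i] @ liouvillian(F[j], H, a)).real
                   for j in range(1, J + 1)] for i in range(1, J + 1)])
    c = np.array([np.trace(F[i] @ liouvillian(F[0], H, a)).real / np.sqrt(d)
                  for i in range(1, J + 1)])
    return G, c

# Pure dephasing: H = 0, a = diag(0, 0, gamma).
gamma = 0.3
G, c = to_G_c(np.zeros((2, 2), dtype=complex), np.diag([0.0, 0.0, gamma]))
```

For pure dephasing with rate γ this yields G = diag(−γ, −γ, 0) and ⃗c = ⃗0: the x and y components of the coherence vector decay while the z component is untouched.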
We start by stating our main result -- the solution of the inverse problem along with the complete positivity condition -- in Section II. We devote the rest of this work to providing a general background, a derivation and proof of the solution, and examples. In more detail, the background is given in Section III, where we introduce the representation of general superoperators over a "nice" operator basis such as the Gell-Mann matrices, use it to express the Markovian quantum master equation over real vector spaces, and derive a variety of general properties of Q, R, and ⃗c. In Section IV, we derive the solution of the inverse problem and provide its proof. In Section V, we illustrate a few aspects of the general theory in terms of examples. Given the question we pose in the title of this work, it is natural to ask about the probability that a randomly selected pair (G, ⃗c) will give rise to a valid Lindblad equation. We provide a partial answer in Section VI, where we point out that Lindbladians are extremely rare in the space of random matrices. In Section VII we discuss the relationship between our work and previous results. We conclude in Section VIII and provide additional supporting material in the appendices, including the analytical solution of Eq. (3) in Appendix A.

II. MAIN RESULT
Consider a finite-dimensional Hilbert space H, with d = dim(H) < ∞, and the Banach space (a complete, normed vector space) B(H) of operators acting on H equipped with the Hilbert-Schmidt inner product ⟨A, B⟩ ≡ Tr(A†B). Throughout this work, we use the following definitions.

Definition 1. A "nice operator basis" for H is a set {F_j}_{j=0}^J ⊂ B(H), J ≡ d² − 1, such that F_0 = (1/√d) I (I denotes the identity operator in B(H)), F_j = F_j† and Tr F_j = 0 for 1 ≤ j ≤ J, and the orthonormality condition holds [15]:

⟨F_i, F_j⟩ = Tr(F_i F_j) = δ_ij. (4)

Definition 2. Any equation of the form Eq. (1), where H = H† is a Hermitian operator in B(H) and a = a† is a Hermitian J × J matrix, is a "Markovian quantum master equation". In this case, the superoperator L in Eq. (1) is called a "Liouvillian". When, in addition, a ≥ 0, Eq. (1) is a "Lindblad master equation", "Lindblad equation", or "completely positive Markovian quantum master equation". In this case, the superoperator L is called a "Lindbladian" or "completely positive Liouvillian".
The superoperator L a is called the dissipator.
Whenever we refer to Eq. (1) or the superoperator L without explicitly specifying complete positivity, we assume the former case, i.e., we only assume the conditions H = H†, a = a†.
Our main result is the following:

Theorem 1. For any J × J matrix G and any vector ⃗c of length J with real coefficients there is a pair (H, a) describing a Markovian quantum master equation Eq. (1) which transforms to Eq. (3) with these G and ⃗c. If, in addition to the above, we require H to be traceless, then such a pair (H, a) is unique and can be computed from Eqs. (5a) and (5b). Moreover, the matrix a = {a_mn} is positive semidefinite [i.e., Eq. (1) is a Lindblad master equation] if and only if (iff) condition (6) holds for all traceless B ∈ B(H).

Condition (6) is equivalent to the positive semidefiniteness of a, but the latter is simpler to check. Thus, in practice, it is preferable to first compute a using Eq. (5b) and check the sign of its smallest eigenvalue, rather than to work directly with Eq. (6).
Note that adding a constant, i.e., a term of the form cI with c ∈ R, to H does not change the Liouvillian L. Therefore, if the requirement that H is traceless is not imposed, H can only be recovered from G up to an additive constant.

Complete positivity of a superoperator E is equivalent to the statement that E has a Kraus representation [16]:

E(ρ) = Σ_i K_i ρ K_i†,

where the {K_i} are called Kraus operators. When they satisfy Σ_i K_i† K_i = I, the map E is trace preserving. A quantum dynamical semigroup is defined as a one-parameter family {Λ_t} of strongly continuous CP maps, satisfying Λ_0 = I (the identity operator), and the Markov property Λ_s Λ_t = Λ_{s+t} for all s, t ≥ 0. We can more formally state the GKLS theorem(s) [1,2] as follows: a superoperator L ∶ B(H) → B(H) is the generator of a quantum dynamical semigroup iff it is of the form given in Eq. (1), with a ≥ 0. I.e., when L is in the form of Eq. (1), the solution of ρ̇ = Lρ is ρ(t) = Λ_t ρ(0), where Λ_t = e^{Lt} is an element of a quantum dynamical semigroup.
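The inverse direction can also be checked numerically without quoting the closed-form Eqs. (5a) and (5b): rebuild the superoperator matrix, expand it over the orthonormal "sandwich" maps X ↦ F_i X F_j to read off a, and extract the traceless H from the remaining commutator part by a partial trace. This is a sketch under our own conventions (row-stacking vectorization; all helper names hypothetical), not the paper's formulas:

```python
import numpy as np

d = 2
J = d**2 - 1
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
F = [np.eye(2, dtype=complex) / np.sqrt(2),
     sx / np.sqrt(2), sy / np.sqrt(2), sz / np.sqrt(2)]

# Row-stacking vectorization: vec(A X B) = (A kron B^T) vec(X).
def sandwich(A, B):
    return np.kron(A, B.T)

def liouvillian_matrix(H, a):
    """d^2 x d^2 matrix of L = -i[H, .] + dissipator(a)."""
    Id = np.eye(d, dtype=complex)
    L = -1j * (sandwich(H, Id) - sandwich(Id, H))
    for m in range(J):
        for n in range(J):
            Fm, Fn = F[m + 1], F[n + 1]
            L += a[m, n] * (sandwich(Fm, Fn)
                            - 0.5 * (sandwich(Fn @ Fm, Id)
                                     + sandwich(Id, Fn @ Fm)))
    return L

def recover_H_a(L):
    """Read off a and the traceless H from the matrix of L."""
    # Expand L over the orthonormal sandwich maps X -> Fi X Fj;
    # for i, j >= 1 the coefficients are exactly a_ij.
    x = np.array([[np.trace(sandwich(Fi, Fj).conj().T @ L)
                   for Fj in F] for Fi in F])
    a = x[1:, 1:]
    # Subtract the dissipator; what is left is -i[H, .].
    LH = L - liouvillian_matrix(np.zeros((d, d), dtype=complex), a)
    # i*LH = H kron I - I kron H^T; partial-tracing the second
    # Kronecker slot gives d*H when H is traceless.
    H = 1j * np.einsum('iljl->ij', LH.reshape(d, d, d, d)) / d
    return H, a

# Round trip: qubit amplitude damping with H = omega * sigma_z.
omega, g = 0.7, 0.4
H0 = omega * sz
a0 = 0.5 * g * np.array([[1, -1j, 0], [1j, 1, 0], [0, 0, 0]])
H1, a1 = recover_H_a(liouvillian_matrix(H0, a0))
```

Checking the smallest eigenvalue of the recovered a then decides complete positivity, in the spirit of the practical recipe given after Theorem 1.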

III. BACKGROUND

A. General superoperators over a nice operator basis
Given a nice operator basis, let us "coordinatize" the operator X ∈ B(H) in the nice operator basis as

X = Σ_{j=0}^J X_j F_j, X_j = ⟨F_j, X⟩ = Tr(F_j X), (8)

where we use the notation X = {X_j} for the vector of coordinates of the operator X (we interchangeably use the ⃗X and X notations to denote a vector). I.e., for any X ∈ B(H):

X = Σ_{j=0}^J Tr(F_j X) F_j. (9)

In particular, Eqs. (8) and (9) can be applied to the matrix units |k⟩⟨l|. If X is Hermitian, then all the coefficients X_j = Tr(F_j X) are real. Vice versa, if all the coefficients X_j are real, then the operator X = Σ_{j=0}^J X_j F_j is Hermitian because it is a sum of Hermitian operators F_j with real coefficients.
More generally, for Hermitian matrices, the inner product becomes ⟨A, B⟩ = Tr(AB). Such matrices have d² = J + 1 real parameters [d real diagonal elements plus (d² − d)/2 independent complex off-diagonal elements], so they can be represented in terms of vectors in R^{d²}. Moreover, if A and B are both Hermitian operators then:

⟨A, B⟩ = Σ_{j=0}^J A_j B_j,

so that in these coordinates, the inner product ⟨A, B⟩ corresponds to the standard inner product in R^{d²}. The matrix elements of a superoperator E ∶ B(H) → B(H) are given by

E_ij = ⟨F_i, E(F_j)⟩ = Tr[F_i E(F_j)]. (13)

The action of a general (not necessarily CP) superoperator E ∈ B[B(H)] can always be represented for any A ∈ B(H) as

E(A) = Σ_{i,j=0}^J c_ij F_i A F_j, (14)

where c_ij ∈ C (see Appendix D). Thus the matrix c = {c_ij} specifies E in the given basis {F_i}. CP maps are a special case of Eq. (14), where c is positive semidefinite. For superoperators we denote the Hilbert-Schmidt adjoint of E by E†, which is defined via

⟨E†(A), B⟩ = ⟨A, E(B)⟩. (15)

Definition 3. E is Hermiticity-preserving iff [E(A†)]† = E(A) for every A ∈ B(H). E is Hermitian iff E† = E.

Let us find explicit formulas for [E(A†)]† and E†(A) in terms of the general representation of a superoperator given by Eq. (14). First, from Eq. (14) we have

[E(A†)]† = Σ_{i,j=0}^J c*_ij F_j A F_i. (18)

Second, substituting Eq. (14) into the complex conjugate of Eq. (15) yields an equality that holds for every B, so we have:

E†(A) = Σ_{i,j=0}^J c*_ij F_i A F_j. (20)

It will turn out to be useful to have another representation of E†(A). Consider two sets of arbitrary operators {L_p} and {M_q}, which we can expand in the nice operator basis as L_p = Σ_{i=0}^J l_ip F_i and M_q = Σ_{i=0}^J m_iq F_i. In addition, let b be some arbitrary matrix in M(d², C) and let c = l b m†. Then, using Eq. (14), E(A) = Σ_{p,q} b_pq L_p A M_q†, and, using Eq. (20), E†(A) = Σ_{p,q} b*_pq L_p† A M_q.

Proposition 2. E is simultaneously Hermiticity-preserving and Hermitian iff c in Eq. (14) is real-symmetric, i.e., c_ij = c*_ij = c_ji.

Proof. This follows immediately by equating the right-hand sides of Eqs. (18) and (20), both of which are then equal to E(A) by Definition 3.
Proposition 1. E is Hermiticity-preserving iff its matrix representation in a nice operator basis is real, i.e., E ∈ M(d², R).

Proof. Using Eq. (13), we obtain: since each F_j is Hermitian, E(F_j) is Hermitian, and thus E_ij = Tr[F_i E(F_j)] is real. In the other direction, assume E ∈ M(d², R). E(A) is represented in a nice operator basis as E(A) = Σ_{i,j} E_ij A_j F_i, so that for Hermitian A (whose coordinates A_j are real) the coordinates of E(A) are real and E(A) is Hermitian. Thus, after coordinatization, a Hermiticity-preserving superoperator E can be seen as a real-valued d² × d²-dimensional matrix E.

B. General properties of a Liouvillian
Since a in Eq. (1) is Hermitian, it can be written as a = u†γu, where u is unitary and γ is diagonal with the eigenvalues {γ_α}_{α=1}^J of a on its diagonal. The γ_α's are always real when Eq. (1) is a Markovian quantum master equation. When a ≥ 0, i.e., when Eq. (1) is a Lindblad equation, they are non-negative. Defining

L_α ≡ Σ_{i=1}^J u*_{αi} F_i, (26)

we have

Σ_{i,j=1}^J a_ij F_i ρ F_j = Σ_{α=1}^J γ_α L_α ρ L_α†. (27)

Therefore the Markovian quantum master equation [Eq. (1)] becomes:

ρ̇ = L_H(ρ) + L_a(ρ), (28a)

L_H(ρ) = −i[H, ρ], (28b)

L_a(ρ) = Σ_{α=1}^J γ_α (L_α ρ L_α† − ½{L_α† L_α, ρ}). (28c)

The L_α ∈ B(H) are called Lindblad operators.
We remark that the representation of the dissipator L_a in the form (28c) need not be unique: for example, if a = I (the identity matrix), it can be written as u†Iu for any unitary u, but [as can be seen from Eq. (26)] different choices of u lead to different Lindblad operators. More generally, γ is unique only up to permutations, and any permutation redefines the Lindblad operators by modifying the diagonalizing unitary matrix u.
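The passage from the rate matrix to Lindblad operators is just an eigendecomposition, and the two forms of the dissipator can be compared numerically. A sketch for a qubit (our conventions and names; numpy's `eigh` returns a = V diag(γ) V†, so each L_α is assembled from an eigenvector column of V):

```python
import numpy as np

d = 2
J = d**2 - 1
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
F = [sx / np.sqrt(2), sy / np.sqrt(2), sz / np.sqrt(2)]  # traceless part only

rng = np.random.default_rng(7)
B = rng.normal(size=(J, J)) + 1j * rng.normal(size=(J, J))
a = B @ B.conj().T                  # generic positive semidefinite rate matrix

def dissipator_a(rho):
    """Eq. (1c): sum_mn a_mn (Fm rho Fn - {Fn Fm, rho}/2)."""
    out = np.zeros((d, d), dtype=complex)
    for m in range(J):
        for n in range(J):
            out += a[m, n] * (F[m] @ rho @ F[n]
                              - 0.5 * (F[n] @ F[m] @ rho + rho @ F[n] @ F[m]))
    return out

gammas, V = np.linalg.eigh(a)       # a = V diag(gammas) V^dagger
Lops = [sum(V[i, al] * F[i] for i in range(J)) for al in range(J)]

def dissipator_diag(rho):
    """Diagonal (Lindblad-operator) form, Eq. (28c)."""
    out = np.zeros((d, d), dtype=complex)
    for g, L in zip(gammas, Lops):
        out += g * (L @ rho @ L.conj().T
                    - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
    return out

M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = M @ M.conj().T
rho /= np.trace(rho).real           # a random density matrix
```

The two dissipators agree on any state, which also makes the non-uniqueness remark concrete: permuting the eigenvector columns of V changes the individual L_α but not the sum.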
Proof. To show that L is Hermiticity-preserving, we can directly compare [L(X)]† with L(X†) using Eqs. (28b) and (28c). For the Hamiltonian part, we have [L_H(X)]† = {−i[H, X]}† = −i[H, X†] = L_H(X†), while for the dissipative part, [L_a(X)]† = L_a(X†) follows term by term since the γ_α are real. Thus, [L(X)]† = L(X†).

Corollary 1. The Markovian quantum master equation ρ̇ = Lρ in a nice operator basis is a real-valued linear ordinary differential equation (ODE) for the vector ρ whose coordinates are ρ_j = Tr(ρF_j): ρ̇_j = Σ_k L_jk ρ_k with L ∈ M(d², R).

Proof. This follows directly from Propositions 1, 3 and 4 with X = ρ and E = L.
Proposition 5. The Liouvillian maps any operator X to a traceless operator. That is, for any operator X ∈ B(H), Tr[L(X)] = 0.

Proof. From Eq. (1) we have Tr[L(X)] = −i Tr([H, X]) + Σ_{ij} a_ij [Tr(F_i X F_j) − ½ Tr({F_j F_i, X})], and, using Tr(AB) = Tr(BA), the first term vanishes while Tr(F_i X F_j) = Tr(F_j F_i X) = ½ Tr({F_j F_i, X}), so the remaining terms cancel pairwise.

Proposition 6. Suppose the dissipator L_a is of the form (1c) with a Hermitian matrix a. Then the following conditions are equivalent.
1. L a is Hermitian.
2. All the Lindblad operators L_α in Eq. (28c) can be chosen to be Hermitian.

3. The matrix a is symmetric, a = aᵀ.

4. All the matrix elements a_ij are real.
Proof. 2 ⇒ 1. If all the Lindblad operators L_α are Hermitian then L_a† = L_a; indeed, this follows from Eqs. (22c) and (28c), since for any operator A the adjoint dissipator has the same form with L_α† = L_α.

3 ⇔ 4. Conditions 3 and 4 mean that for each i, j we have a_ji = a_ij and a_ji = a*_ji, respectively. Since a = a†, the right-hand sides of these two equations are equal for all i, j.
3, 4 ⇒ 2. We know that a is real-symmetric; hence it can be diagonalized by an orthogonal matrix. Therefore, u in Eq. (26) can be chosen to have real matrix elements. For that choice, we have L_α† = L_α for all α, where ã ≡ (a + a*)/2, a matrix with real matrix elements, may be used in place of a.
Later in Theorem 2 we will show that a is uniquely determined by L_a; hence a = ã and a = a*.

2 ⇒ 3. If L_α = L_α† then it can be expanded in the given basis {F_j} with real coefficients: L_α = Σ_{j=1}^J w_αj F_j, with w_αj ∈ R. Substituting this into Eq. (28c) we obtain Eq. (1c) with ã = wᵀγw in place of a, where γ = diag(γ_1, . . ., γ_J). Thus ã = ãᵀ. Later in Theorem 2 (see also Proposition 18 and Corollary 2) we will show that a is uniquely determined by L_a; hence, a = ã and a = aᵀ.

Proposition 7. If any of the conditions of Proposition 6 holds, then L_a is unital, i.e., L_a(I) = 0.

Proof. It suffices to assume that a is symmetric; then using Eq. (1c):

L_a(I) = Σ_{m,n} a_mn (F_m F_n − ½{F_n F_m, I}) = Σ_{m,n} a_mn [F_m, F_n] = 0,

since the commutator is antisymmetric under m ↔ n while a is symmetric.

By definition, the generator L_a is unital iff L_a(I) = 0.
Note that the converse is false: unitality does not imply that a must be symmetric (or any of the other conditions in Proposition 6). As a counterexample, consider any nice operator basis containing two or more commuting operators [e.g., the last two matrices in Eq. (130) below]. Then, if a contains a non-zero matrix element only between these two operators [e.g., a_78 = 1, all the rest zero, for the nice operator basis in Eq. (130)], Eq. (38a) gives L_a(I) = 0, but a is not symmetric.
C. Markovian quantum master equation for the coherence vector: from H and a to G = Q + R and ⃗c

By Corollary 1, we can expand the density matrix in the nice operator basis as:

ρ = I/d + Σ_{j=1}^J v_j F_j,

where ⃗F = (F_1, . . ., F_J) collects the traceless basis operators into a vector, and the corresponding coordinate vector ⃗v = (v_1, . . ., v_J)ᵀ ∈ R^J, with v_j = Tr(ρF_j), is called the coherence vector. We note that the conditions guaranteeing that a given vector represents a valid (i.e., non-negative) density matrix are nontrivial. A simple necessary condition follows from the condition that the purity Tr(ρ²) ≤ 1:

Tr(ρ²) = 1/d + ∥⃗v∥² ≤ 1, i.e., ∥⃗v∥² ≤ (d − 1)/d,

which is saturated for pure states. Consequently, as mentioned in Section I, G's eigenvalues are constrained to have non-positive real parts. This is a necessary condition any candidate Eq. (3) must satisfy to represent a Lindblad equation. Additional inequalities have been derived in Ref. [18].
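The purity bound on the coherence vector is easy to confirm numerically; the following qubit sketch (sampling and names ours) checks both the identity Tr(ρ²) = 1/d + ∥⃗v∥² and its saturation by a pure state:

```python
import numpy as np

d = 2
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
F = [sx / np.sqrt(2), sy / np.sqrt(2), sz / np.sqrt(2)]  # normalized Paulis

def coherence_vector(rho):
    return np.array([np.trace(rho @ f).real for f in F])

rng = np.random.default_rng(0)
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho_mixed = M @ M.conj().T
rho_mixed /= np.trace(rho_mixed).real           # generic mixed state
rho_pure = np.diag([1.0, 0.0]).astype(complex)  # pure state saturates the bound

v_mixed = coherence_vector(rho_mixed)
v_pure = coherence_vector(rho_pure)
```

For the pure state, ∥⃗v∥² = 1 − 1/d = 1/2 exactly; the mixed state lies strictly inside the ball.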
Proposition 8. The coherence vector satisfies Eq. (3). Moreover, the decomposition of L as L = L_H + L_a, with L_H and L_a given by Eqs. (1b) and (1c), induces the decomposition G = Q + R in Eq. (3).

Proof. Differentiating v_i = Tr(ρF_i) and using ρ̇ = Lρ already establishes that ⃗v satisfies Eq. (2), with

G_ij = Tr[F_i L(F_j)], (44a)

c_i = (1/√d) Tr[F_i L(F_0)], (44b)

for 1 ≤ i, j ≤ J.
To prove the remaining claims, recall that L_ij = L*_ij by Corollary 1. Therefore, by linearity, for 1 ≤ i, j ≤ J:

Q_ij = Tr[F_i L_H(F_j)], (45a)

R_ij = Tr[F_i L_a(F_j)], (45b)

i.e., R, Q ∈ M(J, R). In addition,

Q_ji = −i Tr(F_j [H, F_i]) = −i Tr(H [F_i, F_j]) = −Q_ij,

i.e., Q is antisymmetric. Moreover, by expanding H in the nice operator basis as

H = Σ_{m=1}^J h_m F_m, h_m ∈ R, (47)

we can write Q's matrix elements as:

Q_ij = −i Tr(H [F_j, F_i]). (48)

As for ⃗c, since L_H(F_0) = −i[H, F_0] = 0, so that L(F_0) = L_a(F_0), for 1 ≤ j ≤ J Eq. (44b) is replaced by:

c_j = (1/d) Tr[F_j L_a(I)], (53)

i.e., ⃗c ∈ R^J.
Let spec(A) denote the spectrum of the operator A, i.e., the set of its eigenvalues.
Proposition 9. The spectrum of the Liouvillian is spec(L) = {0} ∪ spec(G).

Proof. Note that, as follows from Proposition 5, Tr[L(X)] = 0 for every X, so the F_0 row of the matrix L vanishes. Combining this with Eq. (45), we see that in the basis {F_j}_{j=0}^J the matrix L is block lower-triangular, with a zero top-left entry, √d ⃗c in the first column, and G in the lower-right J × J block. Computing the spectrum of the superoperator L is equivalent to finding the eigenvalues of L, which are the solutions of the characteristic equation

det(L − λ I_{J+1}) = −λ det(G − λ I_J) = 0,

where I_n denotes the n × n identity matrix. These solutions are λ = 0 and spec(G).
Proposition 10. The dissipator L_a is unital iff ⃗c = ⃗0.

Proof. The proof is immediate from Eq. (53), which shows that ⃗c is the vector representing the matrix L_a(I)/d.
A more explicit proof is the following calculation. First, if L_a is unital then, using Eqs. (45b) and (50), we have ⃗c = ⃗0. On the other hand, if ⃗c = ⃗0 then every coordinate of L_a(I) vanishes [the F_0 coordinate vanishes by Proposition 5]; hence L_a(I) = 0.
Note that using Eq. (1c) we can write down the general formula for ⃗c's elements in a given nice operator basis:

c_j = (1/d) Σ_{m,n=1}^J a_mn Tr([F_m, F_n] F_j). (57)

Here by R we mean the J × J matrix with elements {R_kl}_{k,l=1}^J. Using Eq. (45b):

R_kl = Σ_{m,n=1}^J a_mn [Tr(F_m F_l F_n F_k) − ½ Tr({F_n F_m, F_l} F_k)]. (58)

Note that this is the general formula for R's elements in a given nice operator basis. R is not symmetric in general. However, we have two special cases presented in the following Propositions.
Proposition 11. R is symmetric in the single-qubit (d = 2) case.
Proof. By direct calculation, it follows that if we choose the traceless elements of the nice operator basis as the normalized Pauli matrices {σ_x, σ_y, σ_z}/√2, then R_kl = R_lk for arbitrary a.
An alternative way to see this is from Corollary 2 below: the dimension of the space of possible antisymmetric components of R is J(J − 3)/2, which is 0 iff J ∈ {0, 3}, i.e., when d ∈ {1, 2}.
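Proposition 11's direct calculation can be reproduced numerically: for d = 2 and the normalized Pauli basis, the R matrix obtained from a randomly drawn Hermitian a comes out symmetric. A sketch (names ours):

```python
import numpy as np

d = 2
J = d**2 - 1
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
F = [sx / np.sqrt(2), sy / np.sqrt(2), sz / np.sqrt(2)]

rng = np.random.default_rng(1)
M = rng.normal(size=(J, J)) + 1j * rng.normal(size=(J, J))
a = M + M.conj().T                  # arbitrary Hermitian, not necessarily >= 0

def dissipator(X):
    out = np.zeros((d, d), dtype=complex)
    for m in range(J):
        for n in range(J):
            out += a[m, n] * (F[m] @ X @ F[n]
                              - 0.5 * (F[n] @ F[m] @ X + X @ F[n] @ F[m]))
    return out

# R_kl = Tr[Fk L_a(Fl)]
R = np.array([[np.trace(F[k] @ dissipator(F[l])).real for l in range(J)]
              for k in range(J)])
```

Note that a here is only Hermitian, not positive semidefinite, matching the proposition's "for arbitrary a".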
Proposition 12. The following statements are equivalent:

1. R = Rᵀ and ⃗c = ⃗0.

2. Any of the conditions in Proposition 6 is satisfied.
Proof. If L_a† = L_a (one of the four equivalent conditions in Proposition 6) then, using Eq. (45b):

R_kl = Tr[F_k L_a(F_l)] = Tr[L_a(F_k) F_l] = R_lk,

where we also used that the matrix elements of R are real (Proposition 8). This proves that 2 ⇒ 1. Similarly, d c_j = Tr[F_j L_a(I)] = Tr{[L_a(F_j)]†} = 0, where in the last equality we used Proposition 5.
The proof that 1 ⇒ 2 is almost identical: running the same calculation in reverse shows that L_a† = L_a. In particular, a = aᵀ implies that R is symmetric. Note that, consistent with the comment following Proposition 7, the converse is false, i.e., R being symmetric does not imply that a is real-symmetric: any a yielding a symmetric R and non-zero ⃗c is a counterexample. As we describe in Theorem 2, from any such pair (R, ⃗c) one could recover a via Eq. (5b), which would be a counterexample. For a specific counterexample, see Section V A 2.
According to Corollary 2 below, the dimension of the space of possible antisymmetric components of R is J(J − 3)/2, which is non-zero if and only if d ≥ 3. Thus, for d ≥ 3 the matrix R is not always symmetric. An explicit example of this for d = 3 is the following. Consider a qutrit subject to amplitude damping (spontaneous emission) involving just two of the three levels, with the rate matrix a given in Eq. (63): a non-negative matrix with eigenvalues 0 (7-fold degenerate) and 2. Note that in all the examples given in this work, we use a specific choice of a nice operator basis {F_i}: the generalized Gell-Mann matrices normalized to satisfy the normalization condition Eq. (4); see Section III G and Ref. [19]. By Eq. (58) we obtain the corresponding R [Eq. (64)], which is not symmetric. We revisit this example in Section V B 1.
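The asymmetry of R for this qutrit example is easy to confirm numerically. The sketch below constructs a rank-one rate matrix a = ⃗b⃗b† implementing decay between the two lowest levels; our Gell-Mann ordering (off-diagonal pairs first, the two diagonal matrices last) may differ from Eq. (130) by a relabeling, which does not affect the properties being tested:

```python
import numpy as np

d = 3
J = d * d - 1

def gellmann(d):
    """Generalized Gell-Mann matrices normalized so Tr(Fi Fj) = delta_ij:
    symmetric/antisymmetric off-diagonal pairs first, diagonal matrices last."""
    F = []
    for j in range(d):
        for k in range(j + 1, d):
            S = np.zeros((d, d), dtype=complex); S[j, k] = S[k, j] = 1
            A = np.zeros((d, d), dtype=complex); A[j, k] = -1j; A[k, j] = 1j
            F += [S / np.sqrt(2), A / np.sqrt(2)]
    for l in range(1, d):
        D = np.zeros((d, d), dtype=complex)
        D[:l, :l] = np.eye(l)
        D[l, l] = -l
        F.append(D / np.sqrt(l * (l + 1)))
    return F

F = gellmann(d)
# Two-level decay |1> -> |0>: the operator |0><1| equals (F_1 + i F_2)/sqrt(2)
# in this ordering, so choose b = (1, i, 0, ..., 0) and a = b b^dagger.
b = np.zeros(J, dtype=complex); b[0] = 1; b[1] = 1j
a = np.outer(b, b.conj())

def dissipator(X):
    out = np.zeros((d, d), dtype=complex)
    for m in range(J):
        for n in range(J):
            out += a[m, n] * (F[m] @ X @ F[n]
                              - 0.5 * (F[n] @ F[m] @ X + X @ F[n] @ F[m]))
    return out

R = np.array([[np.trace(F[k] @ dissipator(F[l])).real for l in range(J)]
              for k in range(J)])
```

The spectrum of a is {2, 0, …, 0}, matching the stated eigenvalues, and R fails to be symmetric, unlike in the qubit case.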
F. Properties of the linear map a ↦ (R, ⃗c)

Consider the map F ∶ a ↦ (R, ⃗c) defined by Eqs. (57) and (58). This is a linear map of real vector spaces F ∶ M_sa(J, C) → M(J, R) ⊕ R^J, where M_sa(J, C) is the R-vector space of Hermitian (or self-adjoint; hence the "sa" subscript) J × J matrices over C.
Then, as will follow later from Theorem 2 and Proposition 14, this map is injective, i.e., a ≠ a′ ⇒ F(a) ≠ F(a′). In other words, if the pairs (R, ⃗c) and (R′, ⃗c′) are equal, then the corresponding rate matrices a and a′ are also equal. A direct way to prove the injectivity of F, which does not rely on Theorem 2 (but in part uses similar ideas), is presented in Appendix B.
The following Lemma, together with Corollary 2 below, describes the image of F.
Lemma 1. The image of the set of real-valued symmetric matrices a under F is the subspace {(R_sym, ⃗0)}, where R_sym is an arbitrary symmetric matrix in M(J, R). In particular, any (R_sym, ⃗0) is the image of some real-valued symmetric a.
Proof. The subspace of real-valued symmetric a in M(J, R) is of dimension J(J + 1)/2. We know by Proposition 12 that F maps real-symmetric a to ⃗c = ⃗0 and real-symmetric R. The subspace V = {(R_sym, ⃗0)} has dimension J(J + 1)/2 in the codomain of F. Since, as explained above, F is injective, the image of all real-symmetric a is the subspace V.

G. Properties of a nice operator basis
In this subsection, we define structure constants corresponding to a nice operator basis and provide alternative forms of Eqs. (48b) and (57) using these structure constants. We note that the structure constants are often explicitly defined for the case of (normalized and generalized) Gell-Mann matrices [20]. As mentioned above, this is our choice in all the examples in this work. However, the theory presented here applies to any choice of a nice operator basis {F_i}_{i=0}^J (see Definition 1).
For any such choice, the elements {F_i}_{i=1}^J form a generator set of the Lie algebra su(d). We can define structure constants f_ijk via

[F_i, F_j] = i f_ijk F_k, i.e., f_ijk = −i Tr([F_i, F_j] F_k), (65)

where in the second equality we used Eq. (4) and the Einstein summation convention of summing over repeated indices, which we use henceforth when convenient.
The structure constants are totally antisymmetric, i.e., f_jkl = −f_kjl = f_klj, and themselves satisfy a type of orthogonality relation [21-23]:

f_jkl f_jkm = 2d δ_lm.

Using Eqs. (4) and (65) we can then further simplify Eq. (57):

c_j = (i/d) a_mn f_mnj.

We may also derive an explicit expression relating the coordinates h_m of the Hamiltonian in the expansion (47) to the matrix elements of Q. Namely, inserting Eq. (65) into Eq. (48), we have:

Q_ij = h_m f_jim.

On the other hand, using the orthogonality relation, we have

Q_ij f_jil = h_m f_jim f_jil = 2d h_l.

This implies that H can be computed given Q as

H = (1/(2d)) Q_ij f_jim F_m. (71)

We discuss the consistency between this expression and Eq. (5a) in Proposition 15.
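These relations can be verified numerically for the normalized Gell-Mann matrices of a qutrit; the sketch below (construction and names ours) computes f_ijk directly from the commutators, assuming the convention [F_i, F_j] = i f_ijk F_k:

```python
import numpy as np

d = 3
J = d * d - 1

def gellmann(d):
    """Generalized Gell-Mann matrices normalized so that Tr(Fi Fj) = delta_ij:
    symmetric/antisymmetric off-diagonal pairs first, diagonal matrices last."""
    F = []
    for j in range(d):
        for k in range(j + 1, d):
            S = np.zeros((d, d), dtype=complex); S[j, k] = S[k, j] = 1
            A = np.zeros((d, d), dtype=complex); A[j, k] = -1j; A[k, j] = 1j
            F += [S / np.sqrt(2), A / np.sqrt(2)]
    for l in range(1, d):
        D = np.zeros((d, d), dtype=complex)
        D[:l, :l] = np.eye(l)
        D[l, l] = -l
        F.append(D / np.sqrt(l * (l + 1)))
    return F

F = gellmann(d)
# f_ijk = -i Tr([Fi, Fj] Fk); the trace of (anti-Hermitian x Hermitian)
# is purely imaginary, so f is real.
f = np.zeros((J, J, J))
for i in range(J):
    for j in range(J):
        comm = F[i] @ F[j] - F[j] @ F[i]
        for k in range(J):
            f[i, j, k] = np.real(-1j * np.trace(comm @ F[k]))
```

Total antisymmetry and the orthogonality relation f_ijk f_ijl = 2d δ_kl then hold to machine precision.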

IV. SOLUTION OF THE INVERSE PROBLEM
We now set up the necessary mathematical framework to solve the inverse problem in full generality. Since some of the results after Proposition 5 used Theorem 2 and its corollaries, in this section we only use the results up to (and including) Proposition 5, to avoid any circular references.

A. Forward and inverse transformations
For a field F ∈ {R, C}, let M_0(d, F) denote the subspace of traceless matrices. Let M_sa(d, F) denote the R-subspace of Hermitian matrices. E.g., M_sa(d, R) is the vector space of real-valued d × d symmetric matrices. Finally, let M_{0,sa}(d, F) be the R-subspace of matrices which are both traceless and Hermitian.
Theorem 2. For any nice operator basis {F_n}_{n=1}^J there is a linear bijective correspondence between the following objects.

1. Pairs (H, a), where H ∈ M_{0,sa}(d, C) is a traceless Hermitian operator and a ∈ M_sa(J, C) is a Hermitian J × J matrix.
4. Linear Hermiticity-preserving superoperators L acting on M(d, C) satisfying Tr[L(X)] = 0 [Eq. (74)] for each X ∈ M(d, C).
• 3 → 4 is given by Eq. (78) with x in place of x.
• 3 → 2 is given in terms of the intermediate matrix b of Eq. (85).

We denote the spaces in Theorem 2 as V_1, . . ., V_6 and the corresponding maps as ϕ_ij ∶ V_j → V_i. Theorem 2 and Lemma 2 below imply that the diagram in Fig. 1 is commutative and all its arrows are R-linear bijections (given by the formulas in Theorem 2 and Lemma 2). Once the theorem is proven, we will use the maps ϕ_ij for any indices i, j = 1, . . ., 6. Such maps are defined as the compositions of the maps in Fig. 1; any such composition will provide the same result due to the commutativity of Fig. 1.
Proof. First, note that the dimension of each of the R-vector spaces V_1, . . ., V_6 is the same, namely J² + J. We would like to prove that all maps are well defined and that the diagram is commutative, i.e., if we start at any object in any node and apply the maps in the diagram until we end up in the same node, we will have obtained the same object as the one we started with.

For ϕ_21 ∶ V_1 → V_2, we need to show that the right-hand side of Eq. (76) satisfies Eq. (72). We check this by direct computation, in which we use that H, a, and the {F_n} are Hermitian, swap the order of the two terms with H, and rename the indices n ↔ m in the summation. To check the second property of x we note that the {F_n} are traceless. Hence, the second term in Eq. (76) does not contribute, and the property follows.

For ϕ_12 ∶ V_2 → V_1, the fact that H and a are Hermitian follows from the first property of x in Eq. (72) and Eqs. (77a) and (77b) for H and a. The trace of the right-hand side of Eq. (77a) evaluates to 0.
If we start with (H, a) and apply the maps V_1 → V_2 → V_1 given by Eqs. (76), (77a) and (77b), we obtain some (H′, a′) = ϕ_12(ϕ_21((H, a))). Let us prove that these match the original (H, a), i.e., that ϕ_12 ○ ϕ_21 = id_{V_1}. Since F_m and F_n are traceless, the term corresponding to a in the expression for H′ is zero. Using the fact that Tr(H) = 0 and Tr(I) = d, we obtain H′ = H. Similarly, we can observe that the term with H from Eq. (76) does not contribute to a′ since the {F_n} are traceless. Using the fact that the {F_m} are Hermitian and form an orthonormal basis, we obtain a′ = a. It follows that ϕ_21 ○ ϕ_12 = id_{V_2}. Indeed, it is a general fact that a left inverse of a linear map between two vector spaces of the same dimension is necessarily its right inverse as well [24, Proposition 1.20]. We also use this argument for other loops, proving the commutativity of each loop in the diagram for only a single starting point.
The fact that the elements of G and ⃗c are real follows from Eq. (81), the fact that the {F_n} form an orthonormal basis, and the fact that L maps Hermitian operators to Hermitian operators. Let us check that the composition ϕ_46 ○ ϕ_65 ○ ϕ_54 = id_{V_4}. Denote the resulting operator given by Eq. (82) as L′ = (ϕ_46 ○ ϕ_65 ○ ϕ_54)(L). Since {F_n}_{n=0,...,J} form a basis in M(d, C) and L and L′ are both C-linear, it is sufficient to check that L(F_l) = L′(F_l) for l = 0, . . ., J. For l = 0 we have, using Eq. (82), L′(F_0) = L(F_0), and for l > 0 we likewise find L′(F_l) = L(F_l). Here we again used that the {F_n}_{n=0,...,J} are Hermitian and form an orthonormal basis, and that Tr[F_0 L(X)] = 0 for any X [Eq. (74)]. Now consider the map ϕ_34 ∶ L ↦ x. Eq. (73) follows from Eq. (74) applied to X = |j⟩⟨k|. Let us prove that ϕ_23 ○ ϕ_34 ○ ϕ_42 = id_{V_2}. Start with x ∈ V_2 and apply the above maps. From Eqs. (78) and (83) we obtain the intermediate superoperator. In order to compute x′ = ϕ_23(x) we compute b given by Eq. (85) by substituting Eq. (93b) into Eq. (85) and using Eq. (72).
B. Explicit formulas for H and a from G and ⃗c

We are now finally prepared to complete the solution of the inverse problem and provide explicit formulas for H and a given G and ⃗c. We first solve the problem more generally: assume we have (G, ⃗c) ∈ V_6 and would like to know the corresponding object from one of V_1, . . ., V_5. If we are interested in L ∈ V_4 or L ∈ V_5, we could directly apply Eq. (82) from Theorem 2. For V_1, V_2, or V_3 we would, however, have to compose multiple maps from that theorem.
Here we provide explicit formulas for these compositions.
Lemma 2. Suppose we have (G, ⃗c) ∈ V_6. Then the objects from V_1, V_2, V_3 corresponding to (G, ⃗c) in the sense of Theorem 2 are given by the following formulas:

• (H, a) ∈ V_1 are given by Eqs. (5a) and (5b).
Once a is computed from Eq. (5b), one can check whether the given ODE generates a Lindblad equation (and not just a Markovian quantum master equation) by testing whether a is positive semidefinite. What remains is to formulate a complete-positivity condition directly in terms of G and ⃗c; we do this next.

C. Complete positivity
We now complete the proof of Theorem 1.
Proof. The condition a ≥ 0 is equivalent to the condition that for any vector ⃗b = {b_m}_{m=1}^J one has ⃗b† a ⃗b = Σ_{m,n} b*_m a_mn b_n ≥ 0 [Eq. (108)]. Substituting Eq. (5b) into the right-hand side of Eq. (108) and simplifying it using Eq. (109), we obtain the right-hand side of Eq. (6).

D. Complete positivity and convex geometry
There is a fruitful complementary description of our complete positivity results using convex geometry, which we explain in this subsection.For the necessary background in convex geometry, see Appendix C.
The last part of Theorem 1, i.e., the just-proven Lemma 3, describes the set of pairs (G, ⃗c) corresponding to positive semidefinite a (or, equivalently, to Lindblad master equations). We denote this set by V_6+ (a subset of V_6). Moreover, we introduce V_i+ (i = 1, . . ., 6) to be the images of V_6+ under the maps ϕ_i6 described by the commutative diagram in Fig. 1. V_1+ consists of pairs (H, a) where H is an arbitrary Hermitian matrix and a is positive semidefinite.
The set M_+(d, C) of positive semidefinite matrices is a convex cone with compact support (see Lemma 6 in Appendix C). The complete positivity part of Theorem 1 uses the description of the cone M_+(d, C) along with a set of linear inequalities, Eq. (108), i.e., the set of supporting hyperplanes.
An alternative is to note that M_+(d, C) is the convex hull of its extreme rays (Lemma 8). The extreme rays of M_+(J, C) are the rays generated by rank-one matrices (Lemma 7), i.e., matrices a of the form ⃗b⃗b†, where ⃗b ∈ C^J.
Using the map ϕ_61, this provides a description of V_6+, given by the following:

Proposition 13. The cone V_6+ of pairs (G, ⃗c) generating a (completely positive) Lindblad master equation coincides with the convex hull of the following elements:

• Elements of the form (Q, ⃗0), where Q is given by Eq. (48) for H ∈ M_{0,sa}(d, C);

• Elements of the form (R, ⃗c) obtained via Eqs. (57) and (58b) from a rank-one matrix a = ⃗b⃗b†, ⃗b ∈ C^J.

Proof. According to the discussion above, V_6+ is generated by elements of the form ϕ_61((H, a = 0)) and ϕ_61((H = 0, a = ⃗b⃗b†)). The former are given by Eq. (48). The image of the latter can be computed using Eqs. (57) and (58b), after identifying ⃗b ∈ C^J with an element B ∈ M_0(d, C) via Eq. (109).

E. Consistency
By construction, the forward maps defined in Theorem 2 are consistent with our previously defined maps. More explicitly, this is stated in the following proposition.

Proposition 14. Let H′ be a Hermitian operator in B(H) and let a be a Hermitian J × J matrix. Let Q, R, ⃗c be obtained from (H′, a) using Eqs. (44b), (45a) and (45b), let H = H′ − (1/d) Tr(H′) I be the traceless component of H′, and let ϕ_ij be the maps from Theorem 2. Then

ϕ_61((H, 0)) = (Q, ⃗0), (113a)

ϕ_61((0, a)) = (R, ⃗c), (113b)

ϕ_61((H, a)) = (Q + R, ⃗c). (113c)

Proof. First, note that Eq. (1b) yields the same operator L if we change H by a constant. Hence, we could apply Eq. (45a) to H instead of H′ and get the same Q. One of the equivalent ways to describe ϕ_61 is ϕ_61 = ϕ_65 ○ ϕ_54 ○ ϕ_41. Note that ϕ_41((H, 0)) = L_H. Thus, Eq. (113a) follows from comparison of Eq. (45a) with Eq. (81). Similarly, ϕ_41((0, a)) = L_a, and Eq. (113b) follows from comparison of Eqs. (44b) and (45b) with Eq. (81). Finally, Eq. (113c) is the sum of Eqs. (113a) and (113b).
In the following Proposition, we show that Eqs. (5a) and (71) are consistent, i.e., yield the same result when used to recover H.

Proposition 15. For any J × J matrix G, the right-hand sides of Eq. (5a) and Eq. (71) (when Q is replaced with G) are equal to each other.
Proof. Comparing Eq. (5a) to Eq. (71), one can see that for a given G the claim is equivalent to an identity expressing the commutators [F_n, F_m] through the structure constants [Eq. (116)], where we use the Einstein summation notation. Eq. (116) follows directly from the definition of the structure constants, Eq. (65).
Next, we show that the inverse maps [not only Eqs. (5a) and (71)] also yield the same result when used to recover H, and in fact one may interchange G and Q when using these maps, or subtract from G any matrix which can be obtained using the formula for R (even from a different L_a; e.g., any real symmetric matrix).
Proposition 16. Let H, a, Q, R, G, ⃗c be the same as in Proposition 14. Then all of the following methods recover the same H:

1. Applying Eq. (5a) to G (as is);

2. Applying Eq. (5a) to Q in place of G;

3. Computing Q′ to be the antisymmetric part of G and applying Eq. (5a) to Q′ in place of G;

4. Computing Q′′ = G − R′, where R′ is obtained via Eq. (58) from any Hermitian J × J matrix a′ (possibly different from a), and applying Eq. (5a) to Q′′ in place of G.
Since Eq. (5a) contracts G_nm with a commutator, which is antisymmetric with respect to the order of indices, the result of that application is independent of adding any symmetric matrix to G (statement #3).
Note that an alternative way to obtain the formula for R is to substitute Eq. (5b) into Eq. (58b) (which results in a significantly more complex computation giving the same final result).
Naively, one might be tempted to set Q to be the antisymmetric part of G, or R to be the symmetric part of G, instead of using the expressions given in Proposition 17. This, however, only works for d ≤ 2. This follows from the dimension counting done in Corollary 2 below: the space of antisymmetric matrices R has dimension J(J − 3)/2, which is nonvanishing unless d ∈ {1, 2} (recall that J = d² − 1). We give an example illustrating this for d = 3 in Section V B 2.
However, as stated in Propositions 15 and 16 above, taking the antisymmetric part of G would still result in the correct H being recovered.
Proposition 18. A pair (R, ⃗c) can be obtained from some Hermitian matrix a if and only if Eq. (119) holds. In particular, this condition is independent of ⃗c, i.e., for any ⃗c′ ∈ R^J the pair (R, ⃗c) can be obtained from some Hermitian matrix a if and only if (R, ⃗c′) can (from a possibly different Hermitian matrix a′).
Proof. According to Proposition 14, a pair (R, ⃗c) can be obtained from some Hermitian matrix a if and only if (R, ⃗c) ∈ ϕ_61({(0, a) ∶ a ∈ M_sa(J, C)}). According to Theorem 2, this is equivalent to ϕ_16((R, ⃗c)) being of the form (0, a). From Lemma 2, this is equivalent to Eq. (5a) evaluating to 0 when R is used instead of G, i.e., when Eq. (119) holds.
Corollary 2. The space of matrices R which can be obtained from some Hermitian matrix a is an R-linear subspace of M(J, R) of dimension J² − J, and it includes the space of symmetric matrices.
The space of antisymmetric matrices R, which can be obtained from some Hermitian matrix a has dimension J(J − 3) 2.
Proof. As explained in the proof of Proposition 18, the space of possible pairs (R, ⃗c) is a bijective image of a J²-dimensional space. According to Proposition 18, it contains all pairs of the form (0, ⃗c), a J-dimensional subspace; hence the space of possible values of R has dimension J² − J.
Furthermore, any symmetric matrix satisfies Eq. (119) due to the contraction with an antisymmetric commutator there.
The final statement follows from the direct calculation (J² − J) − J(J + 1)/2 = J(J − 3)/2, where J(J + 1)/2 is the dimension of the space of symmetric matrices.

V. EXAMPLES
This section provides several examples to illustrate the general theory developed above. Various supporting calculations can be found in Ref. [19].

A. A qubit

1. Pure dephasing
Consider the Lindblad equation for pure dephasing: Here Q = 0 (since H = 0) and, by Propositions 6, 7, and 10, ⃗c = 0, since a is real-symmetric. Using Eq. (58) we find The inverse problem retrieves H from G and a from (G, ⃗c). Using Eqs. (5a) and (5b), we indeed find H = 0 and a as given in Eq. (122).
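This example can be reproduced numerically. In the sketch below (our code; the normalized Pauli basis F_i = σ_i/√2 and the rate convention are our assumptions), we build the pure-dephasing generator L[ρ] = γ(σ_z ρ σ_z − ρ) and extract G_ij = Tr(F_i L[F_j]):

```python
import numpy as np

gamma = 1.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
F = [s / np.sqrt(2) for s in (sx, sy, sz)]  # normalized traceless basis, Tr(Fi Fj) = delta_ij

def L(rho):
    # pure dephasing: H = 0, single Lindblad operator sz with rate gamma
    return gamma * (sz @ rho @ sz - rho)

G = np.array([[np.trace(Fi @ L(Fj)).real for Fj in F] for Fi in F])
print(np.round(G, 10))
# G = diag(-2*gamma, -2*gamma, 0) is symmetric, so its antisymmetric
# part Q vanishes and the inverse problem recovers H = 0, as stated.
```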

2. Amplitude damping
Now consider the Lindblad equation for spontaneous emission: where i.e., a is not real-symmetric, and the Lindbladian is non-unital (in particular, ⃗c is nonzero). Then, using Eqs. (48) and (58) we find and using Eq. (57) Using Eqs. (5a) and (5b) with G = Q + R, we then indeed find H = ωσ_z and a as given in Eq. (125).
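A numerical sketch of this example (our code; the basis F_i = σ_i/√2, the damping operator |0⟩⟨1|, and the sign conventions are our assumptions, so the resulting G and ⃗c may differ from Eqs. (48) and (58) by convention):

```python
import numpy as np

omega, gamma = 1.0, 0.5
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)  # damping operator |0><1| (our convention)
F = [s / np.sqrt(2) for s in (sx, sy, sz)]

def L(rho):
    H = omega * sz
    smd = sm.conj().T
    diss = sm @ rho @ smd - 0.5 * (smd @ sm @ rho + rho @ smd @ sm)
    return -1j * (H @ rho - rho @ H) + gamma * diss

G = np.array([[np.trace(Fi @ L(Fj)).real for Fj in F] for Fi in F])
c = np.array([np.trace(Fi @ L(np.eye(2) / 2)).real for Fi in F])

assert np.all(np.linalg.eigvals(G).real <= 1e-12)  # elementary necessary condition
v_inf = -np.linalg.solve(G, c)   # steady-state coherence vector: (0, 0, 1/sqrt(2))
```

Note that ⃗c is nonzero (the Lindbladian is non-unital), and the fixed point −G⁻¹⃗c is the pure state the damping drives the qubit into.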

3. Example of G giving rise to a non-CP map
Let ⃗c = ⃗0 and Using Eq. (5b) this yields: whose eigenvalues are {−1/2, 1/2, 0}. That is, a is not positive semidefinite, which means that the corresponding Markovian quantum master equation does not generate a CP map. Hence, the pair (G, ⃗c) with the matrix G given in Eq. (128) and ⃗c = 0 does not correspond to a Lindblad equation, only to a Markovian quantum master equation.
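The CP test itself is a one-liner once a is in hand; a minimal check (the diagonal matrix below is a stand-in with the same spectrum {−1/2, 1/2, 0}, not the actual a computed from Eq. (128)):

```python
import numpy as np

def is_psd(a, tol=1e-12):
    # eigvalsh is appropriate here: a is Hermitian by construction
    return bool(np.all(np.linalg.eigvalsh(a) >= -tol))

a = np.diag([-0.5, 0.5, 0.0])   # stand-in with spectrum {-1/2, 1/2, 0}
print(np.linalg.eigvalsh(a), is_psd(a))
# a is not positive semidefinite: a Markovian master equation, but not CP.
```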

B. A qutrit
As the nice operator basis for this d = 3-dimensional case, we choose the 8 normalized Gell-Mann matrices together with the normalized identity matrix:

1. A qutrit with amplitude damping

Consider the example of a given in Eq. (63), i.e., a qutrit undergoing amplitude damping between its two lowest levels (similar to the qubit example given in Section V A 2). The corresponding R matrix was given in Eq. (64); assuming H = 0, we have G = R. In addition, we find ⃗c = (0, 0, 0, 0, 0, …). Computing a using Eq. (5b), we find that it is indeed identical to Eq. (63).

2. Illustration of Propositions 16 and 17
Note that for symmetric G, the decomposition is trivial: Q = 0, R = G. Therefore, to illustrate Proposition 17, let us choose an arbitrary antisymmetric G, for example Computing the Hamiltonian H that arises from this G using Eq. (71) [or, equivalently, Eq. (5a)] yields: When computing Q from this Hamiltonian using Eq. (48), we observe that Q does not equal the antisymmetric part of G, as anticipated by Corollary 2 and the comment after Proposition 17. Instead, we find: As expected, if we then use Eq. (117a) to decompose G into Q and R, we get the same Q as the one in Eq. (134). Also, as guaranteed by Proposition 16, if we use that Q to compute H again, we get the same H as in Eq. (133).

VI. THE RARITY OF LINDBLADIANS
As posed in the title, the question that motivated this work can be interpreted as asking for the probability that a given pair (G, ⃗c) gives rise to a Lindbladian (under a natural choice of distribution over the pairs (G, ⃗c)). Alternatively, one can choose a natural distribution for (H, a), use it to induce a distribution on the pairs (G, ⃗c), and compute the probability that a pair (G, ⃗c) drawn from that distribution gives rise to a Lindbladian. In the remainder of this section, we provide partial answers to these questions and describe the difference between the two distributions.
A. Natural distribution on the pairs (G, ⃗c)

Suppose all elements of G and √d ⃗c are picked independently from a standard normal distribution; the resulting distribution of G is known as the Ginibre Orthogonal Ensemble (GinOE) [25]. One might be interested in the following question: "How likely is it that these G and ⃗c generate a Lindbladian, i.e., a ≥ 0?" Let us denote this probability by P^GinOE_J. While we do not know the asymptotics of P^GinOE_J, here we provide a non-rigorous attempt at deriving the asymptotics of an upper bound. Recall that a necessary condition for a ≥ 0 is that all eigenvalues of G have a non-positive real part. Denoting the probability of the latter by P̃^GinOE_J, this implies P^GinOE_J ≤ P̃^GinOE_J. (135) The distribution of eigenvalues in the GinOE is known; see, e.g., [25, Eq. (1.7)]. One can use either the methods of Ref. [26] or a rigorous alternative to them to estimate the asymptotics of P̃^GinOE_J, which is an upper bound for P^GinOE_J due to Eq. (135). More specifically, assuming the same (non-rigorous) argument as in Ref. [26] applies to the GinOE, we can state that P̃^GinOE_J ∼ e^(−θ_GinOE J²). (136) We discuss the estimation of the positive constant θ_GinOE in Appendix E. The important point about Eq. (136) is that the probability decays rapidly in the Hilbert space dimension d (recall that J = d² − 1).
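For small J, the upper bound P̃^GinOE_J can be estimated by direct sampling; a Monte Carlo sketch (ours, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

def p_stable_ginoe(J, trials=10000):
    # fraction of J x J GinOE samples whose eigenvalues all have Re <= 0
    hits = 0
    for _ in range(trials):
        G = rng.standard_normal((J, J))
        if np.all(np.linalg.eigvals(G).real <= 0):
            hits += 1
    return hits / trials

for J in (1, 2, 3):   # J = d^2 - 1; J = 3 corresponds to a qubit
    print(J, p_stable_ginoe(J))
# The estimate shrinks quickly with J, consistent with Eq. (136).
```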

B. Natural distribution on a
In this subsection, we examine the prevalence of Lindbladians within the set of all Liouvillians.
Specifically, we attempt to answer the following question: "Given random H and a, what is the probability that L_{H,a} is a Lindbladian?" Since the condition a ≥ 0 that guarantees a Liouvillian is a Lindbladian depends only on a and not on H, it suffices to define the distribution on the space of Hermitian matrices a. Additionally, the scale of a is irrelevant: changing a to αa (where α > 0 can be randomly selected from a distribution that may depend on a) does not alter whether a ≥ 0.
One natural choice for the distribution is the Gaussian Unitary Ensemble (GUE), also known as the β = 2 Gaussian Ensemble. In this ensemble, the real and imaginary parts of each component of a J × J matrix A are independently selected from a normal distribution with mean 0 and variance 1/β = 1/2. The matrix a is then computed as a = (A + A†)/2. The condition a ≥ 0 is equivalent to λ_j ≥ 0 for j = 1, . . . , J, where λ = {λ_j}_{j=1}^J is the vector of eigenvalues of a. Let us denote the probability of this event by P^GUE_J. The joint probability density function (PDF) of the eigenvalues of a matrix from the GUE is well known to be P(λ) = G_{β,J}^{−1} e^{−βE(λ)}, (137) where E(λ) = (1/2) ∑_{j=1}^J λ_j² − ∑_{1≤j<k≤J} ln|λ_j − λ_k| and G_{β,J} is the normalization factor (partition function); see [27, Eq. (1.4)] for further details. Thus, the distribution of λ can be interpreted as the canonical ensemble of a two-dimensional gas (in C) of J charged particles (each of unit charge with the same sign), constrained to the real line in a harmonic potential at temperature 1/β = 1/2, also known as a log-gas. In this interpretation, P^GUE_J is the probability that, by random chance, all particles happen to be to the right of the origin. P^GUE_J is known as the probability of an atypically large fluctuation of the extreme value statistics of the GUE, and was estimated in Ref. [26]; see Eq. (138). The estimate uses non-rigorous techniques from statistical mechanics, including the replacement of the particle density (a sum of delta functions) with a smooth function and the use of functional integrals and functional Fourier transforms. While rigorously proving this estimate is left to future research, Eq. (138) shows that the probability of a randomly picked Liouvillian being a Lindbladian decays rapidly with d, just like Eq. (136) for the GinOE. For example, already for d = 4 the estimate gives P^GUE_J ∼ 10^−54, demonstrating that it is extremely unlikely to find a Lindbladian for d = 4 by picking a random a from the GUE.
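For small J, P^GUE_J can also be estimated by direct sampling, using the convention stated above (real and imaginary parts of A with variance 1/2); a Monte Carlo sketch (ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def p_psd_gue(J, trials=10000):
    # fraction of GUE samples a = (A + A^dagger)/2 with all eigenvalues >= 0
    hits = 0
    for _ in range(trials):
        A = (rng.standard_normal((J, J)) + 1j * rng.standard_normal((J, J))) / np.sqrt(2)
        a = (A + A.conj().T) / 2
        if np.all(np.linalg.eigvalsh(a) >= 0):
            hits += 1
    return hits / trials

for J in (1, 2, 3):
    print(J, p_psd_gue(J))
# Already at J = 3 the probability is small; for J = 15 (d = 4) direct
# sampling is hopeless, which is where the estimate of Eq. (138) comes in.
```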

C. Comparison between these distributions
Given the distribution on the pairs (G, ⃗c) described in Section VI A, one can derive a distribution for a, or for a matrix related to the tensor x from Theorem 2. P^GinOE_J is then equal to the probability that a ≥ 0 (or, equivalently, x ≥ 0) under that distribution.
Due to the linearity of ϕ16, the components of a also follow a multivariate Gaussian distribution with mean 0. Using Eq. (5b), one can compute the covariances [Eq. (139)]. This distribution does not match the GUE investigated in Section VI B: for comparison, the covariances in the GUE are given by Eq. (140). The normalization difference (1 vs. 1/2) is due to a difference in conventions and is irrelevant to the question of whether a ≥ 0. The presence of the second term in Eq. (139), however, is significant and, for d > 2, introduces a dependence of the probability distribution given by Eq. (139) on the choice of the nice operator basis. Nevertheless, the distribution of x computed from (H = 0, a) is independent of both the nice operator basis and the choice of the orthonormal basis in the Hilbert space H [Eq. (141)]. One approach might be to first compute the joint distribution of the eigenvalues of a or of x similarly to how they are computed for the GUE (see, e.g., Ref. [27]), i.e., by integrating out the unitary symmetry. One may attempt to integrate out the unitaries in U(H) first, and then integrate over the quotient space U(B_0(H))/U(H), which consists of the unitaries in B_0(H) modulo those in U(H), where U(H) denotes the manifold of unitaries acting on a finite-dimensional Hilbert space H. That second integration appears to be non-trivial because, while Eq. (141) is symmetric with respect to U(H), it is no longer symmetric with respect to U(B_0(H))/U(H). Once the joint distribution of eigenvalues is computed (or estimated), one can attempt to use it to estimate the probability that all of them are nonnegative.

VII. OVERLAP WITH PRIOR WORK
After this work was completed, it came to our attention that our main results could be reconstructed by combining several earlier results, in particular using Refs. [5, Section 3.2.2], [28, Section 7.1.2], and [29]. An aspect that is common to these three works and distinguishes them from ours is that they do not represent L by its action on the basis elements F_i, i.e., they do not have (G, ⃗c). In this sense, their motivation is different from ours, given that our starting point is the title question "which (nonhomogeneous linear first-order) differential equations correspond to the Lindblad equation?", formulated in terms of (G, ⃗c) [Eq. (3)]. Nevertheless, it is possible to use these earlier results to prove Theorem 1: one can show that the map ϕ64 given by Eq. (81) is a bijective linear map with inverse given by Eq. (82). With this established, one can substitute Eq. (82) for L to apply the above results and obtain Theorem 1.
The overlap is described in more detail below. We first summarize it in the following list, mentioning the parts of Theorem 1 and the overlaps we have found.
3. The complete positivity condition, Eq. (6), in terms of G and ⃗c.

A. Overlap with [5, Section 3.2.2]

Item 2 and surjectivity in Item 1 follow from [5, Section 3.2.2]. To see this, one needs to take the propagator V(t) generated by L, with L given by Eq. (82), and write it in the form used there.

B. Overlap with [28, Section 7.1.2]

Reference [28, Theorem 7.1] asserts that a linear map is a Lindbladian if and only if it can be represented in any of the four forms described in [28, Eqs. (7.20)–(7.23)]. [28, Eq. (7.23)] matches Eq. (1), where [28] uses the matrix C = a^T/2, referred to as the "Kossakowski matrix".
Reference [28,Proposition 7.4] establishes the uniqueness of the decomposition of the Lindbladian L into L H and L a , with H being unique up to an additive constant.The uniqueness of a can be deduced from the second part of this proposition.
It should be noted that the expression of the complete positivity condition in Ref. [28] differs from the expression used in our paper, and thus, some work would be required to derive one condition from the other.
C. Overlap with [29]

The blog post [29] provides a complete positivity condition in a form more similar to the one used in our work. [29] defines⁶ L^PT as follows. First, introduce a matrix notation for a superoperator S acting on M_d(C), where S is described by a matrix with elements S_(nn′)(mm′) (matrix rows and columns indexed by elements of {1, . . . , d}²) such that: Then, define S^PT using An alternative way to describe S^PT is With this notation, the complete positivity condition becomes⁷ Using Eq. (145), this can be rewritten as This is the same as requiring Eq. (6) to hold for all traceless B ∈ B(H), as in Theorem 1.
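In the same matrix notation for superoperators, complete positivity can be checked via the standard Choi-matrix criterion. The sketch below is a generic illustration with our own index conventions (row index (n, n′), column index (m, m′)), which may differ from those of [29]:

```python
import numpy as np

def choi(S, d):
    # S acts as (S B)_{n n'} = sum_{m m'} S_(n n')(m m') B_{m m'}.
    S4 = S.reshape(d, d, d, d)        # indices (n, n', m, m')
    C4 = S4.transpose(0, 2, 1, 3)     # reorder to (n, m, n', m')
    return C4.reshape(d * d, d * d)   # Choi matrix C_(n m)(n' m')

def is_cp(S, d, tol=1e-12):
    C = choi(S, d)
    return bool(np.allclose(C, C.conj().T) and np.all(np.linalg.eigvalsh(C) >= -tol))

d = 2
assert is_cp(np.eye(d * d), d)        # the identity channel is CP

# The transpose map S_(n n')(m m') = delta_{n m'} delta_{n' m} is the
# textbook example of a positive but not completely positive map:
T = np.zeros((d * d, d * d))
for n in range(d):
    for n2 in range(d):
        T[n * d + n2, n2 * d + n] = 1.0
assert not is_cp(T, d)
```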

VIII. SUMMARY AND OUTLOOK
A standard approach to solving a (time-independent) Markovian quantum master equation is to vectorize it and solve the corresponding nonhomogeneous first-order linear ODE for the coherence vector. Here we posed the inverse problem: When does such a 1ODE of the form ⃗v̇ = G⃗v + ⃗c correspond to a Markovian quantum master equation? When does it correspond to a completely positive Markovian quantum master equation, i.e., a Lindblad equation? What are the parameters, i.e., a and H, of such master equations in terms of the parameters G and ⃗c of the 1ODE? Finally, how ubiquitous are Lindbladians?
We have shown that the answer to the first question is "always". We also expressed the parameters of such Markovian quantum master equations using an expansion in a nice operator basis, which yields explicit expressions for the Hamiltonian and the matrix a of dissipator coefficients in terms of the parameters (G, ⃗c) of the 1ODE; see Theorem 1. In essence, this means that every 1ODE of the form ⃗v̇ = G⃗v + ⃗c is directly representable as a Markovian quantum master equation, Eq. (1). However, complete positivity (i.e., whether the result is a Lindblad equation) is not guaranteed and must be checked on a case-by-case basis. Toward this end, we have also formulated the complete positivity condition directly in terms of (G, ⃗c); see Eq. (6). This condition is equivalent to the positive semidefiniteness of a, which is simpler to check in practice once a has been computed from G and ⃗c using Eq. (5b).
Our work assumed the setting of finite-dimensional Hilbert spaces. We leave open for future research the problem of connecting 1ODEs to quantum master equations in the infinite-dimensional setting. We also leave open the complete answer to the question of the ubiquity of Lindbladians: while we have argued that the positivity condition on a makes Lindbladians extremely rare from the perspective of random matrix theory, the probability that a ≥ 0 for a real G sampled from the Ginibre Orthogonal Ensemble (GinOE) [25], conditioned on G having eigenvalues with nonpositive real parts, remains unknown. Answering this question would quantify the probability that a randomly selected 1ODE results in a Lindblad master equation.
The general case is one where G is not diagonalizable over R^J and may not be invertible. In this case, we can still use a similarity transformation S to transform G into Jordan canonical form: where the q Jordan blocks have the form J_j = µ_j I + K_j. The µ_j are the (possibly degenerate, complex) eigenvalues, and the K_j are nilpotent matrices: K_j^{d_j} = 0, where d_j is the dimension of J_j. When all d_j = 1, G is diagonalizable, and G_J reduces to the diagonalized form of G. When one or more of the d_j > 1, G is not diagonalizable, meaning that no similarity transformation exists that brings G into diagonal form. The eigenvalues are the solutions of det(G − µI) = 0.
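As a concrete illustration (ours, using SymPy), a single 2 × 2 Jordan block with the degenerate eigenvalue µ = −1 shows how non-diagonalizability feeds a t e^{−t} term into e^{Gt}:

```python
import sympy as sp

G = sp.Matrix([[-1, 1], [0, -1]])     # one Jordan block: mu = -1, nilpotent K with K**2 = 0
S_mat, G_J = G.jordan_form()          # G = S_mat * G_J * S_mat**-1
assert G_J == sp.Matrix([[-1, 1], [0, -1]])

K = G_J - (-1) * sp.eye(2)            # nilpotent part of the block
assert K ** 2 == sp.zeros(2, 2)

t = sp.symbols('t', positive=True)
E = (G * t).exp()                     # matrix exponential exp(G t)
assert sp.simplify(E[0, 1] - t * sp.exp(-t)) == 0   # the t*exp(-t) term
```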
Applying S from the left to Eq. (3) yields ⃗ẇ = G_J ⃗w + ⃗c′, where ⃗w = S⃗v and we defined ⃗c′ = S⃗c. We again look for a solution in the form ⃗w(t) = ⃗w^(0)(t) + ⃗w^(∞), where ⃗w^(0)(t) is the homogeneous part and ⃗w^(∞) is the nonhomogeneous (time-independent) part.
First, let us solve for the homogeneous part. Since the Jordan blocks are decoupled, the general solution of the homogeneous part is

same initial conditions: X(0) = X_1(0) = X_2(0). Let ⃗v_1(t) and ⃗v_2(t) be the vectors associated with X_1(t) and X_2(t) via Eq. (40). Then we have By uniqueness of the solution to ⃗v̇ = R⃗v + ⃗c, and since ⃗v_1(0) = ⃗v_2(0) because X_1(0) = X_2(0), we have ⃗v_1(t) = ⃗v_2(t) for all t, which implies X_1(t) = X_2(t) for all t. Thus Ẋ = L_{a_1}(X) and Ẋ = L_{a_2}(X) have the same unique solution for any given initial condition.
Proof. To prove injectivity we need to show that F(a_1) = F(a_2) ⇒ a_1 = a_2; since F is linear, it suffices to show that F(a) = (0, ⃗0) ⇒ a = 0. Let us take a matrix a such that F(a) = (0, ⃗0). From Lemma 4: where L_a is the dissipative Liouvillian in Eq. (1). Let |j⟩ be the vector with the j-th component equal to 1 and all other components equal to 0. Then, from Eq. (B2), for all j, k = 1, . . . , d we have L_a(|j⟩⟨k|) = 0. In particular, using Eq. (1c): We can now express the matrix elements a_mn in terms of L_a, as follows: since L_a = 0. That is, a = 0.
Since M_{+,1}(J, C) is an intersection of closed sets, it is closed. Thus, it remains to prove that it is bounded. Let {λ_i}_{i=1}^J be the eigenvalues of such an a. Then ∑_{i=1}^J λ_i = 1 and 0 ≤ λ_i ≤ 1. Therefore, indeed, Tr(a†a) = ∑_{i=1}^J λ_i² ≤ 1.

Conversely, if rank(a) = 1 and a ∈ M_+(J, C), we want to prove that a lies on an extreme ray of M_+(J, C). Assume to the contrary that a = a_1 + a_2, where a_1, a_2 ∈ M_+(J, C) and a_1 is not proportional to a. We can write a = αbb† for some α > 0 and b ∈ C^J with ‖b‖ = 1. Then, for i = 1, 2, we can decompose and we have just shown that the quadratic term in this expression is zero. Since the inequality Eq. (C8) has to hold for all δ, the linear term is zero too, i.e., b†a_1c_⊥ = 0. Thus, c†ã_1c = 0, contradicting the assumption.
Lemma 8. M + (J, C) is the convex hull of its extreme rays.
Here we present two alternative ways to see this.
Proof 1. Any positive semidefinite matrix can be diagonalized. Such a diagonalization results [30, Theorem 7.5.2] in a decomposition of the matrix into rank-1 positive semidefinite matrices [as in Eq. (C5)], which, according to Lemma 7, lie on extreme rays of M_+(J, C).
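Proof 1 has a direct numerical counterpart (our sketch): the eigendecomposition of a PSD matrix yields a conic combination of rank-1 PSD terms, each lying on an extreme ray:

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
a = B @ B.conj().T                    # a generic positive semidefinite matrix

lam, V = np.linalg.eigh(a)            # eigenvalues lam >= 0 (up to roundoff)
terms = [lam[i] * np.outer(V[:, i], V[:, i].conj()) for i in range(len(lam))]

assert np.all(lam >= -1e-10)
assert np.allclose(sum(terms), a)     # a = conic combination of rank-1 PSD matrices
assert all(np.linalg.matrix_rank(tm) <= 1 for tm in terms)
```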
Proof 2. In the context of finite-dimensional vector spaces, the Krein-Milman theorem (also known as Minkowski's theorem; see, e.g., [31, Corollary 18.5.1]) states that any compact convex set is the convex hull of its extreme points. As a corollary, any convex cone with a compact base is the convex hull of its extreme rays. As follows from Lemma 6, M_+(J, C) is such a cone.

III. BACKGROUND

Let M(d, F) denote the vector space of d × d matrices with coefficients in F, where F ∈ {R, C}. For our purposes it suffices to identify B(H) with M(d, C). Quantum states are represented by density operators ρ ∈ B_+(H) (positive trace-class operators acting on H) with unit trace: Tr ρ = 1. Elements of B[B(H)], i.e., linear transformations E : B(H) → B(H), are called superoperators, or maps.

Another general property of the Liouvillian is the following:

Proposition 7. The Liouvillian is unital if any of the conditions in Proposition 6 are satisfied [a CPTP map is unital if it maps I to itself: E(I) = I].

Figure 1. A commutative diagram representing the transformations between the various spaces described in Theorem 2 (solid lines) and Lemma 2 (dotted lines). These transformations capture the equivalence between the algebraic objects appearing in Markovian quantum master equations and first-order differential equations. The maps to V5 from V2 and V6 are drawn with dashed lines, as they are given by the same formulas (78) and (82) as the corresponding maps to V4: indeed, these maps to V5 trivially coincide with the composition of the corresponding maps to V4 with the restriction V4 → V5.
∑_{m,n=1}^J b̄_m a_mn b_n ≥ 0 . (108)

Such vectors are in one-to-one correspondence with B ∈ M_0(d, C) [i.e., with traceless B ∈ B(H)] via expansion in the basis {F_m}: B = ∑_{m=1}^J b_m F_m, b_m = Tr(F_m B).

Proposition 17. … and the space of possible matrices R … The decomposition of G into Q and R can be obtained by the following formulas:

where we used Tr(F_n) = 0 and Tr(F_n F_m) = δ_nm (Definition 1) to obtain Eq. (B3d). For the same reason, ∑_{i=1}^d ⟨i|F_n F_m|i⟩ = Tr(F_n F_m) = δ_nm, so that after summing Eq. (B3d) over i = j = 1, . . . , d we obtain ∑_{m,n=1}^J a_mn δ_nm = −d Tr(a) , (B4) i.e., Tr(a) = 0. From Eq. (B3d) it follows that ⟨i|A|j⟩ = 0 for all i, j, where A ≡ ∑_{m,n=1}^J a_mn F_n F_m, i.e., A = 0. Hence, we have shown that, for all X, L_a(X) = ∑_{m,n=1}^J a_mn F_m X F_n . (B5)

Lemma 7. The extreme rays of M_+(J, C) are the rays generated by rank-1 matrices. Note: this is [30, Exercise 7.5.15]. Here we provide a proof for completeness.

Proof. Let {αa : α ∈ R_+} be an extreme ray in M_+(J, C). Here a ∈ M_+(J, C) ∖ {0}. Since a is Hermitian, we can diagonalize it by a unitary and hence write it in the form of Eq. (C5) for some mutually orthogonal b_i ∈ C^J, where k = rank(a). Since a ≠ 0, k ≥ 1. If k ≥ 2, then Eq. (C5) represents a as a sum of non-proportional elements of M_+(J, C), contradicting the assumption that a lies on an extreme ray in M_+(J, C). Thus, k = 1.

Appendix D: Representation of a superoperator using a nice operator basis

Proposition 19. Any superoperator E ∈ B[B(H)] can always be represented as E = ∑_{i,j=0}^J c_ij F_i • F_j, i.e., E(A) = ∑_{i,j=0}^J c_ij F_i A F_j . (D1)

This is [1, Lemma 2.2]. It can also be seen directly by noting that any linear operator can be represented by a matrix and, thus, E(A)_{kn} = ∑_{l,m=1}^d E_{klmn} A_{lm} for some tensor E_{klmn}. The tensor E_{klmn} can be seen as a matrix in two ways: first, as a function of the indices k and l, and second, as a function of the indices m and n. Applying "coordinatization" [Eqs. (8) and (9)] twice to E_{klmn}, we get c_ij.

Proof. Let E be any superoperator in B[B(H)]. Denoting E_{klmn} = E(|l⟩⟨m|)_{kn} and applying Eq. (11) to δ_{kk′}δ_{ll′} and δ_{mm′}δ_{nn′}, for any A ∈ B(H) we get E(A)_{kn} = ∑_{l,m=1}^d E_{klmn} A_{lm} (D2a) = ∑_{k′,l′,m′,n′,l,m=1}^d …

This x can be interpreted as a matrix with indices (ij) and (lk): x_(ij)(lk) = x_ijkl. With this interpretation, such an x is positive semidefinite if and only if a is. While beyond the scope of this work, one can use Eq. (139) or Eq. (141) to attempt to estimate P^GinOE_J.