Simulating Open Quantum Systems Using Hamiltonian Simulations

We present a novel method to simulate the Lindblad equation, drawing on the relationship between Lindblad dynamics, stochastic differential equations, and Hamiltonian simulations. We derive a sequence of unitary dynamics in an enlarged Hilbert space that can approximate the Lindblad dynamics up to an arbitrarily high order. This unitary representation can then be simulated using a quantum circuit that involves only Hamiltonian simulation and tracing out the ancilla qubits. There is no need for additional postselection in measurement outcomes, ensuring a success probability of one at each stage. Our method can be directly generalized to the time-dependent setting. We provide numerical examples that simulate both time-independent and time-dependent Lindbladian dynamics with accuracy up to the third order.


I. INTRODUCTION
The Lindblad quantum master equation is a fundamental tool in studying open quantum systems [1,2]. Unlike the time-dependent Schrödinger equation, the Lindblad equation accounts for the effects of an environment on a quantum system by incorporating non-Hermitian operators that describe dissipative processes and jump operators that characterize environmental noise. Beyond its seminal applications in quantum electron dynamics [3-6], the Lindblad equation, owing to its universal representation property, has found extensive utility in various disciplines, ranging from materials science [7,8] to cosmology [9]. Lindblad dynamics can also be used to describe circuit noise in quantum computing [10], and it underpins many quantum error-mitigation (QEM) strategies [11-14]. Recent advances have also leveraged Lindblad dynamics as an algorithmic tool for thermalizing quantum systems [15,16] and for preparing ground states [17].
As the range of applications for Lindblad dynamics continues to expand, it becomes increasingly important to develop efficient and robust simulation methodologies. Classical simulation algorithms [6,18-20] are often hindered by a complexity that scales polynomially with the Hilbert-space dimension, resulting in exponential cost relative to the system size (such as the number of spins or qubits). In this context, quantum algorithms have emerged as promising alternatives that may reduce the cost exponentially. However, many of the current algorithms [16,21-25], particularly when high-order accuracy is required, can require many ancilla qubits, complicated quantum control logic for clock registers, and an involved amplitude-amplification procedure. These algorithms are thus much more intricate to implement compared to those designed for Hamiltonian simulation [26-29].
This paper presents a novel approach to simulating the Lindblad equation. Our method leverages the intimate relationship between Lindblad dynamics, stochastic differential equations (SDEs), and Hamiltonian simulations. We show that, by adding extra ancilla qubits, the Lindblad dynamics can be incorporated into a unitary dynamics in a larger Hilbert space. Moreover, the unitary dynamics can be simulated using a quantum circuit that only involves Hamiltonian simulation and tracing out the ancilla qubits (see Fig. 1). In this work, we present a systematic approach for constructing this unitary map and the corresponding Hamiltonian.

Figure 1. A flowchart illustrating the derivation of our numerical scheme and the quantum circuit (one step) for simulating the time-independent Lindbladian dynamics using the following steps: (1) unraveling the Lindblad equation into stochastic differential equations (SDEs); (2) expressing classical numerical SDE schemes in the Kraus-representation form for the density operator; (3) mapping the Kraus form to the dilated Hamiltonian in the Stinespring form. The simulation on the circuit advances a Hamiltonian simulation for a time duration of √∆t, after which the ancilla qubits are measured. The outcomes of these measurements are disregarded and the ancilla qubits are subsequently reset to the state |0^{a_k}⟩ in preparation for the next iteration. The inherent unitary and trace-out design ensures that the algorithm achieves a success probability of one, eliminating the need for any additional amplitude-amplification steps.

Compared to other Lindblad simulation methods [22,23,30], our proposed method has several distinct features:

1. Our numerical scheme reduces the Lindblad simulation problem to Hamiltonian simulations, for which many algorithms are available.
2. When a unitary dynamics is constructed for the Hamiltonian simulation (e.g., via Trotterization), there is no need for additional postselection on measurement outcomes. The unitary evolution and the trace-out procedure guarantee that the success probability at each step is one, eliminating the need for amplitude-amplification procedures.
3. The algorithm can be systematically improved to achieve high-order accuracy.
4. The algorithm can be easily generalized to time-dependent Lindbladians in applications such as driven open quantum systems. Such direct generalization is highly nontrivial for many existing algorithms.
Our procedure involves the following three steps, summarized in Fig. 1. For simplicity, the Lindbladian dynamics are assumed to be time independent. A detailed explanation of the flowchart can be found in Section IV.
1. We unravel the Lindblad dynamics and reformulate them as SDEs.

2. We use classical numerical SDE schemes and approximate the unraveled equation with an Itô-Taylor expansion of an arbitrary order of accuracy. This induces a Kraus representation of the dynamics of the density operator, which is completely positive.
3. Finally, instead of using the quantum algorithm of [24] to implement the Kraus form, we propose a new procedure that converts the Kraus form to the Stinespring form, detailing the construction of the Hamiltonian operator from the Kraus operators. This gives rise to a numerical scheme represented as a unitary dynamics that can be simulated through Hamiltonian simulation and trace-out. The resulting map is completely positive and trace preserving (CPTP).

I.1. Related works
Wang et al. [30] demonstrated how a single-qubit completely positive trace-preserving quantum channel can be approximated by simple quantum channels that can be simulated using only one ancilla qubit. Kliesch et al. [21] introduced the first quantum algorithm for simulating general Markovian open quantum systems. This algorithm has a complexity scaling of O(t²/ϵ), where t denotes the evolution time and ϵ represents the desired precision. The computational cost has been improved considerably in more recent works [22-24,31]. In particular, the complexity of the algorithms in [23,24,31] is O(t polylog(t/ϵ)), with a linear dependence on t and a polylogarithmic dependence on ϵ. To our knowledge, all of these works focus on time-independent Lindbladian dynamics. In [24], the authors have suggested an extension of their method to time-dependent Lindblad dynamics, which emerges from rotating-wave approximations [32]. However, such an extension has not been fully explored, e.g., how to block encode the time-dependent Hamiltonians and jump operators. Schlimgen et al. [33] have proposed to decompose Kraus operators into unitary operators that can be approximated by matrix exponentials. This approach has later been applied to the vectorized form of the Lindblad equation [25]. The overall complexity, however, has not been presented. Andersson et al. [34] have explored how to construct the Kraus form for the quantum channel induced by the Lindblad dynamics, but without a full characterization of the numerical or model error. More importantly, this approach requires the input of the density matrix as a d²-dimensional vector, with d being the Hilbert-space dimension. Maintaining quantum speed-up with such a classical input is highly nontrivial. More recently, Patel and Wilde [35,36] have proposed to encode the jump operators into a pure state |ψ⟩, called a program state. Their algorithm is implemented through a quantum channel that involves both ρ and ψ, followed by a trace-out step. For multiple jump operators, their approach follows a Trotter-type splitting [22], which is at most second order. The work of Nakazato [37] has also studied the Kraus form, but with a focus on specific open-quantum-system models. Very recently, [38] has proposed a novel approach using repeated-interaction (RI) maps for approximating Lindblad dynamics. The simulation based on RI maps offers a first-order accurate scheme for Lindblad simulations.
In the domain of Hamiltonian simulation, various algorithms with near-optimal query complexities, suitable for different settings, have been introduced. For instance, quantum signal processing [39] reaches the optimal query complexity for time-independent problems. In contrast, the truncated Dyson series [40] is applicable to general time-dependent Hamiltonian simulation, but it is based on block encodings with complex control-logic operations. Although Trotterization does not achieve the optimal query complexity, it is more accessible in terms of implementation (especially in its lower-order versions). Taking this perspective into account, this work diverges notably from existing methods for simulating open quantum systems [23,24], which are based on block encodings and entail complex control-logic operations. The nature and complexity of our algorithm for simulating open quantum systems resemble those of higher-order Trotter schemes used in Hamiltonian simulation. Combining our approach with different Hamiltonian simulation frameworks could lead to the development of new, efficient Lindblad simulation algorithms.

I.2. Organization
The organization of the rest of the paper is as follows. In Section II, we introduce essential notation, the relation between the Lindblad equation and the SDEs, and classical numerical methods for solving SDEs. The main idea, with the development of a first-order scheme, is illustrated in Section III. Our main results and quantum algorithms for simulating the Lindblad equation (2) are detailed in Section IV. The performance of our algorithm is validated through various numerical experiments in Section V, for both time-independent and time-dependent Lindbladians.
Moreover, in Appendix A, we provide a detailed derivation of the time-independent second-order scheme, serving as a constructive example for our main results.For practical implementation, we provide formulations of the first-, second-, and third-order schemes (in both time-independent and time-dependent frameworks) in Appendix B. The technical proofs supporting our main results are found in Appendices C and D.

II. PRELIMINARIES
This paper uses capital letters for matrices and a curly font for superoperators. In particular, the identity map (superoperator) is denoted by I, and the density operator (matrix), which is a positive-semidefinite (PSD) matrix with Tr(ρ) = 1, is represented by ρ. The vector or matrix 2-norm is denoted by ∥ · ∥: when v is a vector, its 2-norm is denoted by ∥v∥, and when A is a matrix, its 2-norm (or operator norm) is denoted by ∥A∥.

The trace norm (or Schatten 1-norm) of a matrix A is defined as ∥A∥₁ := Tr√(A†A). For a superoperator M that acts on operators (matrices in this paper), the induced trace norm is ∥M∥₁ := sup_{∥A∥₁=1} ∥M(A)∥₁.

The main emphasis of the paper is on the approximation of the Lindblad master equation [1,2],

d/dt ρ = L(ρ) := −i[H, ρ] + Σ_{j=1}^{J} ( V_j ρ V_j† − ½ {V_j† V_j, ρ} ).   (2)

Here H ∈ C^{d×d} is the system Hamiltonian, and V_j ∈ C^{d×d} are known as the jump operators that come from the interactions with the environment. The Gorini-Kossakowski-Lindblad-Sudarshan (GKLS) theorem [1,2] states that if L is a Lindbladian with the form given in (2), then exp(Lt) is a quantum channel, which means that it is a CPTP map that transforms one density operator into another. As a quantum channel, it is contractive under the trace distance [41, Theorem 9.2]: for any two density operators ρ₁ and ρ₂, and any t > 0, it holds that

∥e^{Lt}(ρ₁) − e^{Lt}(ρ₂)∥₁ ≤ ∥ρ₁ − ρ₂∥₁.   (3)

To approximate the dynamics up to a given time T, one can divide the time interval into N steps, N ∈ N, with step size ∆t = T/N. Thus it suffices to construct an approximation, here denoted by M_∆t ρ, for a small step, e.g.,

∥M_∆t ρ − e^{L∆t} ρ∥₁ ≤ C_k ∆t^{k+1},   (4)

for any density operator ρ and some k ≥ 1 with a constant C_k. The global error can then be deduced from the contractive property given in (3). Specifically, if (4) holds, then

∥M_∆t^N ρ − e^{LT} ρ∥₁ ≤ Σ_{n=1}^{N} ∥e^{L(N−n)∆t} M_∆t^n ρ − e^{L(N−n+1)∆t} M_∆t^{n−1} ρ∥₁ ≤ N C_k ∆t^{k+1} = C_k T ∆t^k,   (5)

where we have repeated the method N times to arrive at the last inequality. This gives us a kth-order convergence, and we note that the constant C_k is independent of T.
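The contractivity property (3) is easy to probe numerically. The following sketch (an amplitude-damping channel is used here purely as a stand-in CPTP map; it is not a model from this paper) computes the Schatten 1-norm from singular values and verifies that the trace distance between two states does not increase under the channel.

```python
import numpy as np

def trace_norm(A):
    # Schatten 1-norm: the sum of the singular values of A
    return np.linalg.svd(A, compute_uv=False).sum()

def trace_distance(rho1, rho2):
    return trace_norm(rho1 - rho2)

# A simple CPTP map for testing: amplitude damping with Kraus operators
# K0, K1 satisfying K0^† K0 + K1^† K1 = I (hypothetical test case).
g = 0.3
K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - g)]])
K1 = np.array([[0.0, np.sqrt(g)], [0.0, 0.0]])
channel = lambda rho: K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

rho1 = np.array([[1.0, 0.0], [0.0, 0.0]])   # |0><0|
rho2 = np.array([[0.5, 0.5], [0.5, 0.5]])   # |+><+|

d_before = trace_distance(rho1, rho2)
d_after = trace_distance(channel(rho1), channel(rho2))
assert d_after <= d_before + 1e-12          # contractivity under the trace distance
```

The same telescoping argument as in (5) then propagates a one-step error bound of size C_k ∆t^{k+1} into a global bound C_k T ∆t^k.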

II.1. Unraveling the Lindblad equation using SDEs
The solution to the Lindblad equation can be expressed through an SDE, which in turn also offers an intuitive description of a quantum dynamics subject to environmental noise. Such a procedure is known as unraveling [6] and, for this purpose, we consider the stochastic Schrödinger equation

d|ψ_t⟩ = ( −iH − ½ Σ_{j=1}^{J} V_j† V_j ) |ψ_t⟩ dt + Σ_{j=1}^{J} V_j |ψ_t⟩ dW_t^j,   (6)

where {W_t^j}_{j=1}^{J} are independent Wiener processes and the solutions are interpreted in Itô's sense [42]. Taking the expectation ρ_t = E(|ψ_t⟩⟨ψ_t|) recovers the Lindblad dynamics (2).
In the classical regime, the aforementioned relationship serves as the basis for a stochastic algorithm designed to simulate the Lindblad solution [19,43]. More specifically, the approach involves the following steps. First, several initial states {|ψ_{0,i}⟩}_{i=1}^{N} are randomly sampled from the density operator ρ₀. Next, numerical simulations of (6) are performed for each initial state, evolving them up to time T. Finally, by averaging the resulting set of density matrices |ψ_{T,i}⟩⟨ψ_{T,i}|, one obtains an approximation to the solution ρ_T.

II.2. Numerical schemes for SDE
Having reformulated the Lindblad dynamics using SDEs as in Eq. (6), we can leverage a wide variety of numerical techniques available in the literature for solving SDEs. In this paper, we mainly rely on the techniques described in [42, Chapter 14]. The simplest among these methods is the Euler-Maruyama scheme, which, for any time step ∆t > 0, is given by

|ψ_{n+1}⟩ = |ψ_n⟩ + ( −iH − ½ Σ_{j=1}^{J} V_j† V_j ) |ψ_n⟩ ∆t + Σ_{j=1}^{J} √∆t W_j V_j |ψ_n⟩,   (8)

where {W_j}_{j=1}^{J} are independent Gaussian random variables with zero expectation and unit variance. Here ∆t is a discretization of dt in (6), and √∆t W_j is a discretization of dW_t^j. This scheme provides a first-order approximation to the solution in the weak sense. Specifically, for N ∈ N and T = N∆t, we have

∥ E(|ψ_N⟩⟨ψ_N|) − E(|ψ_T⟩⟨ψ_T|) ∥₁ ≤ C ∆t,

where |ψ_T⟩ is the solution of (6) and the constant C is independent of ∆t. As with ordinary differential equations (ODEs), higher-order numerical schemes can be obtained through a high-order expansion of SDEs. Due to the presence of the Brownian-motion terms, the Itô-Taylor expansion needs to be employed. This leads to many more terms compared to such expansions for ODEs (see the higher-order schemes in Appendix B).
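The weak convergence just described is straightforward to reproduce classically. The sketch below applies the Euler-Maruyama scheme (8) to a hypothetical single-qubit test problem (H = Pauli Z and a single damping-type jump operator V, both chosen here for illustration only, not taken from the paper) and compares the trajectory average against a fine-grained Runge-Kutta reference solution of the Lindblad equation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-qubit test problem: Hamiltonian H and one jump operator V.
H = np.array([[1.0, 0.0], [0.0, -1.0]])        # Pauli Z
V = 0.5 * np.array([[0.0, 1.0], [0.0, 0.0]])   # damping-type jump operator

def lindblad_rhs(rho):
    VdV = V.conj().T @ V
    return (-1j * (H @ rho - rho @ H)
            + V @ rho @ V.conj().T
            - 0.5 * (VdV @ rho + rho @ VdV))

def rk4_reference(rho, T, dt):
    # Fine-grained classical integration of the Lindblad equation (reference).
    for _ in range(round(T / dt)):
        k1 = lindblad_rhs(rho)
        k2 = lindblad_rhs(rho + 0.5 * dt * k1)
        k3 = lindblad_rhs(rho + 0.5 * dt * k2)
        k4 = lindblad_rhs(rho + dt * k3)
        rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return rho

def euler_maruyama_average(psi0, T, dt, n_traj):
    # Average |psi><psi| over Euler-Maruyama trajectories of the unraveled SDE.
    drift = -1j * H - 0.5 * V.conj().T @ V
    rho = np.zeros((2, 2), dtype=complex)
    for _ in range(n_traj):
        psi = psi0.astype(complex)
        for _ in range(round(T / dt)):
            W = rng.standard_normal()
            psi = psi + dt * drift @ psi + np.sqrt(dt) * W * V @ psi
        rho += np.outer(psi, psi.conj())
    return rho / n_traj

psi0 = np.array([1.0, 1.0]) / np.sqrt(2)
rho_ref = rk4_reference(np.outer(psi0, psi0), T=0.5, dt=1e-4)
rho_mc = euler_maruyama_average(psi0, T=0.5, dt=0.01, n_traj=2000)
err = np.linalg.svd(rho_mc - rho_ref, compute_uv=False).sum()
```

With the seeded generator, the residual `err` combines the O(∆t) weak bias with Monte Carlo noise of size O(1/√n_traj); increasing `n_traj` and decreasing `dt` drives it to zero.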

III. ILLUSTRATIVE DEMONSTRATION USING A FIRST-ORDER ALGORITHM
While numerical simulations of SDEs have been extensively explored in the literature, adapting these schemes directly for execution on a quantum computer presents challenges. For instance, the transformation from |ψ_n⟩ to |ψ_{n+1}⟩ in (8) is generally nonunitary, and there is no guarantee that |ψ_{n+1}⟩ remains a unit vector. On the other hand, since our objective is to simulate the Lindblad equation, it is not necessary to simulate every individual trajectory of the SDE (6). Instead, because ρ_t = E(|ψ_t⟩⟨ψ_t|), it suffices to simulate the "expectation form" of the SDE (6).
We illustrate our main concept by deriving a first-order Lindblad simulation scheme from the Euler-Maruyama scheme (8). For simplicity, we assume J = 1, i.e., there is only one jump operator. Using (8) and the property that E(W) = 0 and E(W²) = 1, we obtain

E(|ψ_{n+1}⟩⟨ψ_{n+1}|) = ( I + ∆t(−iH − ½V†V) ) ρ_n ( I + ∆t(−iH − ½V†V) )† + ∆t V ρ_n V†,   (10)

where ρ_n = E(|ψ_n⟩⟨ψ_n|). The evolution from ρ_n to ρ_{n+1} is then expressed in the Kraus form:

ρ_{n+1} = K(ρ_n) := F₀ ρ_n F₀† + F₁ ρ_n F₁†,  F₀ = I + ∆t(−iH − ½V†V),  F₁ = √∆t V.   (11)

Furthermore, one also observes that

K(ρ) = ρ + ∆t L(ρ) + O(∆t²),   (12)

where L is the Lindbladian defined in (2). This equality implies that (11) is a first-order scheme for the Lindblad equation.
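This first-order Kraus map can be checked numerically. The sketch below uses arbitrary 2×2 test matrices H and V (illustrative only, not operators from the paper), applies the Kraus operators F₀ = I + ∆t(−iH − ½V†V) and F₁ = √∆t V implied by the expectation of (8), and verifies that the one-step error relative to the exact map e^{L∆t} shrinks like O(∆t²), i.e., the error ratio is roughly 4 when ∆t is halved.

```python
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.5], [0.5, -1.0]])
V = np.array([[0.0, 0.7], [0.2, 0.0]])
D = -1j * H - 0.5 * V.conj().T @ V           # non-Hermitian drift operator

def kraus_step(rho, dt):
    # First-order Kraus map from the expectation of Euler-Maruyama
    K0 = np.eye(2) + dt * D
    K1 = np.sqrt(dt) * V
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

def exact_step(rho, dt):
    # exp(L dt) via the vectorized (column-stacked) Lindbladian
    I = np.eye(2)
    VdV = V.conj().T @ V
    L = (-1j * (np.kron(I, H) - np.kron(H.T, I))
         + np.kron(V.conj(), V)
         - 0.5 * (np.kron(I, VdV) + np.kron(VdV.T, I)))
    vec = rho.reshape(-1, order='F')
    return (expm(L * dt) @ vec).reshape(2, 2, order='F')

rho = np.array([[0.6, 0.2 - 0.1j], [0.2 + 0.1j, 0.4]])
errs = []
for dt in [0.05, 0.025, 0.0125]:
    errs.append(np.linalg.norm(kraus_step(rho, dt) - exact_step(rho, dt)))
ratios = [errs[i] / errs[i + 1] for i in range(2)]   # ~4 for an O(dt^2) local error
```

The local O(∆t²) error per step is exactly what accumulates into the global first-order error over T/∆t steps.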
The above calculation shows that an SDE solver induces an approximation of the density matrix in the Kraus form. Next, to derive a first-order quantum simulation scheme, we further expand the Kraus form K in Eq. (11) into a Stinespring representation,

K(ρ) = Tr_A( U (|0⟩⟨0| ⊗ ρ) U† ),   (13)

where U is a unitary matrix that can be derived from Stinespring's factorization theorem. A key focus of this paper is the construction of a Hamiltonian-generated unitary to approximate U, so that the algorithm can be implemented via a Hamiltonian simulation. In particular, we want to find a 2d × 2d Hermitian matrix ℋ such that

Tr_A( e^{−i√∆t ℋ} (|0⟩⟨0| ⊗ ρ) e^{i√∆t ℋ} ) = ρ + ∆t L(ρ) + O(∆t²),   (14)

where the operator Tr_A traces out the ancilla qubit. We construct an ℋ of the following form:

ℋ = [ √∆t H₀   V† ]
    [ V          0  ],   (15)

where H₀ is a Hermitian matrix. After applying a Taylor expansion to exp(−i√∆t ℋ) and matching the O(1) and O(∆t) terms on both sides of (14), we find that

H₀ = H.   (16)

The above derivation suggests that the scheme

ρ_{n+1} = Tr_A( e^{−i√∆t ℋ} (|0⟩⟨0| ⊗ ρ_n) e^{i√∆t ℋ} )   (17)

serves as a first-order approximation to the Lindblad equation (2). This formula can be directly extended to the general case with multiple jump operators, simply by appending the additional jump operators along the first row and the first column. Furthermore, the update process described in (17) comprises only a Hamiltonian simulation and a trace-out procedure, making it straightforward to implement; it succeeds with probability one. The above algorithm is similar to the first-order scheme in Ref. [23], which uses first-order Trotter splitting to separate exp(L_V ∆t) and exp(L_H ∆t), and subsequently uses formulas analogous to those in (17) to simulate exp(L_V ∆t). However, it is difficult to extend the first-order schemes in [23,38] to high-order schemes. We note that the limitation to first-order accuracy comes from two components: (i) the first-order approximation of the map exp(L_V ∆t)(ρ); and (ii) the first-order Trotter splitting used to separate exp(L_V ∆t) and exp(L_H ∆t).
While the approximation of exp(L_V ∆t)(ρ) might be improved to a higher-order approximation, which is already nontrivial, it is very difficult to avoid the first-order error caused by the first-order Trotter splitting. Unlike Hamiltonian simulation, the simulation of the dissipative part e^{tL_V} requires a non-negative t, meaning that the simulation cannot go backward in time, since backward evolution does not constitute a CPTP map. However, Trotter splitting beyond second order with a real time variable t requires a backward-in-time simulation [44]. The method described in [23] employs (17) merely as an illustrative example: the authors' primary algorithm is built upon the first-order method expressed in the Kraus form (11), and the accuracy is boosted using a compression scheme. A key goal of this paper is to demonstrate that the Stinespring form, such as the one in (17), paired with an appropriate dilated Hamiltonian, can be constructed to achieve arbitrary orders of accuracy.
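The one-step map (17) is straightforward to prototype classically. The sketch below (with assumed 2×2 test operators H and V, for illustration only) builds the dilated Hamiltonian as a block matrix with √∆t H in the |0⟩⟨0| ancilla block and V, V† on the off-diagonal ancilla blocks, evolves for a time √∆t, traces out the ancilla, and confirms roughly first-order global convergence toward a vectorized-Lindbladian reference, consistent with the first-order scheme derived in this section.

```python
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.5], [0.5, -1.0]])
V = np.array([[0.0, 0.7], [0.2, 0.0]])

def dilated_step(rho, dt):
    # One step of (17): Hamiltonian simulation for time sqrt(dt), then trace out the ancilla.
    P0 = np.diag([1.0, 0.0])                         # |0><0| on the ancilla
    P01 = np.array([[0.0, 1.0], [0.0, 0.0]])         # |0><1|
    P10 = P01.T                                      # |1><0|
    Hd = (np.kron(P0, np.sqrt(dt) * H)
          + np.kron(P01, V.conj().T) + np.kron(P10, V))
    U = expm(-1j * np.sqrt(dt) * Hd)
    full = U @ np.kron(P0, rho) @ U.conj().T
    return np.einsum('aiaj->ij', full.reshape(2, 2, 2, 2))   # partial trace over ancilla

def exact(rho, t):
    # Reference: exp(L t) via the vectorized (column-stacked) Lindbladian
    I = np.eye(2)
    VdV = V.conj().T @ V
    L = (-1j * (np.kron(I, H) - np.kron(H.T, I)) + np.kron(V.conj(), V)
         - 0.5 * (np.kron(I, VdV) + np.kron(VdV.T, I)))
    return (expm(L * t) @ rho.reshape(-1, order='F')).reshape(2, 2, order='F')

rho0 = np.array([[0.6, 0.2 - 0.1j], [0.2 + 0.1j, 0.4]])
T = 1.0
errs = []
for n in [20, 40, 80]:
    rho = rho0.copy()
    for _ in range(n):
        rho = dilated_step(rho, T / n)
    errs.append(np.linalg.norm(rho - exact(rho0, T)))
# First-order global error: refining the step size shrinks the error proportionally.
```

Note that each step is exactly trace preserving (a unitary conjugation followed by a partial trace), mirroring the success-probability-one property of the quantum circuit.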

IV. MAIN RESULTS
In the previous section, the passage from Eq. (8) to Eq. (11), and then to Eq. (13) and Eq. (17), unveils a procedure to construct a Stinespring representation of the solution map with a Hamiltonian-generated unitary operator. Since numerical solutions of the SDE (6) can be systematically constructed with an arbitrary order of accuracy, by taking expectations we arrive at Kraus-form approximations for simulating the Lindblad equation (2) to arbitrary order. Our main contribution is to extend the first-order scheme (17) to an arbitrarily high order. We present a family of methods, detailed in (19) and Algorithm 1, to derive the unitary dynamics that approximate the Lindblad dynamics (2) to an arbitrarily high order. Moreover, the simulation of the unitary dynamics requires only Hamiltonian simulations and tracing out ancilla qubits, similar to (17).
Our main theoretical result is stated as follows.
Theorem 1. Let k > 0 be an integer, ∆t = O(∥L∥_be^{−1}), N ∈ N, and T = N∆t. There exists a Hermitian matrix

ℋ = √∆t H₀ ⊗ |0⟩⟨0| + Σ_{j=1}^{S_k} ( H_j ⊗ |j⟩⟨0| + H_j† ⊗ |0⟩⟨j| ),   (18)

where the matrices H_j ∈ C^{d×d}, H₀ is Hermitian, the number of terms S_k is upper bounded by (J+1)^{k+1}, and ∥H_j∥ = O(∥L∥_be). Furthermore, using a_k ≤ ⌈(k+1) log₂(J+1)⌉ ancilla qubits,

ρ_{n+1} = Tr_A( e^{−i√∆t ℋ} ( |0^{a_k}⟩⟨0^{a_k}| ⊗ ρ_n ) e^{i√∆t ℋ} )   (19)

is a kth-order scheme for simulating the Lindblad equation (2), i.e.,

∥ρ_N − e^{LT}(ρ₀)∥₁ ≤ C T ∥L∥_be^{k+1} ∆t^k,   (20)

and the constant C only depends on k and J.
The proof of Theorem 1 is constructive. The Hermitian operator ℋ in our construction will be called the dilated Hamiltonian. For any order k > 0, we can always construct the corresponding Kraus-representation and Stinespring forms of the Lindblad dynamics (2). Specifically, we propose a method to construct each block of ℋ (denoted H₀, H₁, ..., H_{S_k}) using a polynomial of H, V_j, V_j†, and ∆t^{1/2} with a maximum degree of poly(k). According to the above theorem, our algorithm requires O(k log(J+1)) ancilla qubits to generate a kth-order scheme, which is slightly fewer than the Ω(k log((J+1)k)) ancilla qubits needed in [24].
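The ancilla count in Theorem 1 is simple arithmetic: a_k ≤ ⌈(k+1) log₂(J+1)⌉ qubits index at least (J+1)^{k+1} ancilla states, which is enough to label every Kraus operator. The sketch below tabulates the bound and checks this consistency.

```python
import math

def ancilla_upper_bound(k, J):
    # a_k <= ceil((k+1) * log2(J+1)) ancilla qubits for a k-th order scheme (Theorem 1)
    return math.ceil((k + 1) * math.log2(J + 1))

def kraus_count_bound(k, J):
    # The number of Kraus operators, S_k + 1, is at most (J+1)^(k+1)
    return (J + 1) ** (k + 1)

# The ancilla register is always large enough to index every Kraus operator:
for k in range(1, 5):
    for J in range(1, 6):
        assert 2 ** ancilla_upper_bound(k, J) >= kraus_count_bound(k, J)
```

For example, a third-order scheme with a single jump operator (k = 3, J = 1) needs at most 4 ancilla qubits.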
In [23, Theorem 4], a lower bound on the total Hamiltonian simulation time is proved using the amplitude-damping process. It asserts that discretizing Lindblad dynamics into N stages requires a minimum total Hamiltonian evolution time of Ω(√N). In Theorem 1, with N = 1/∆t (assuming the final time T = 1 for simplicity), this implies that the required total Hamiltonian simulation time must be at least Ω(1/√∆t). In Eq. (19), the Hamiltonian simulation time step is √∆t, resulting in a total simulation time of √∆t/∆t = 1/√∆t, which matches the aforementioned lower bound.
In practical applications, the implementation of exp(−iℋt) relies on the assumptions made about the oracles for H and V_j. Assuming that H and V_j can be decomposed into sums of local operators, we can then decompose each H_j into a sum of local operators. This decomposition enables the implementation of exp(−iℋt) using, e.g., a high-order Trotter formula. In this case, the complexity depends on the structure of H and V_j. An alternative implementation uses block encoding. In Appendix F, we explore a specific approach to implementing the block encoding of ℋ, assuming block encodings of H and H_j. This provides a method for implementing exp(−iℋt) using block-encoding-based Hamiltonian simulation algorithms. Alternative methods for efficiently implementing exp(−iℋt) are an important direction for future research.

IV.1. Overview of the main algorithm
In this section, we describe the construction of our main simulation algorithm, focusing on deriving the kth-order scheme for the time-independent Lindblad equation. We outline the general procedure for constructing the dilated Hamiltonian ℋ for any k, and in Appendix A we provide a specific example of a second-order scheme for time-independent Lindbladian dynamics. In Appendix B, we extend our approach to time-dependent Lindblad equations and present the explicit forms of ℋ for the first- to third-order schemes, covering both time-dependent and time-independent scenarios.
We first note that the simulation algorithm for (2) is straightforward once ℋ is obtained (see Fig. 1). Given a required order k > 0, after finding a Hamiltonian ℋ such that

Tr_A( e^{−i√∆t ℋ} (|0^{a_k}⟩⟨0^{a_k}| ⊗ ρ) e^{i√∆t ℋ} ) = e^{L∆t}(ρ) + O(∆t^{k+1}),   (21)

our numerical scheme is

ρ_{n+1} = Tr_A( e^{−i√∆t ℋ} (|0^{a_k}⟩⟨0^{a_k}| ⊗ ρ_n) e^{i√∆t ℋ} ).   (22)

The trace-out process can be accomplished by measuring and resetting the ancilla qubits. Now we focus on our approach to constructing the dilated Hamiltonian ℋ in Eq. (18). Similar to the derivation of the first-order scheme in Section III, we follow three steps to generate a kth-order scheme.

Step 1. Formulate the weak scheme of order k for the SDE (6). Find a random linear operator L_{k,∆t} : C^d → C^d that generalizes (8), such that for any unit vector |ψ⟩,

∥ E( L_{k,∆t}[|ψ⟩] L_{k,∆t}[|ψ⟩]† ) − E( |ψ(∆t)⟩⟨ψ(∆t)| ) ∥₁ ≤ C ∆t^{k+1},   (23)

where |ψ(∆t)⟩ is a realization of the solution of (6) with |ψ(0)⟩ = |ψ⟩. We recall that ρ(∆t) = E(|ψ(∆t)⟩⟨ψ(∆t)|) is the solution of the Lindblad equation with ρ(0) = E(|ψ(0)⟩⟨ψ(0)|).
We note that there are many approaches to designing a k th-order weak formulation for SDE (6).In Section IV.2, we will present the Itô-Taylor-expansion approach from [42,Chapter 14].
Step 2. Formulate the kth-order Kraus form. From the operator L_{k,∆t}, find a sequence of Kraus operators {F_j}_{j=0}^{S_k}, where S_k ≤ (J+1)^k, such that, for any density operator ρ,

∥ Σ_{j=0}^{S_k} F_j ρ F_j† − e^{L∆t}(ρ) ∥₁ ≤ C ∆t^{k+1}.   (24)

The above equation directly implies that the trace-preserving property holds approximately,

∥ Σ_{j=0}^{S_k} F_j† F_j − I ∥ ≤ C ∆t^{k+1}.   (25)

We can explore various methods to construct the Kraus form mentioned above. In Section IV.2, we discuss one approach that obtains the Kraus form associated with a kth-order weak scheme for the SDEs. With the Kraus form ready, the algorithms in [23,24] can be directly used to simulate the Lindblad dynamics by implementing the Kraus form. Therefore, the unraveling approach provides an alternative route to a higher-order approximation expressed in Kraus form, without using Dyson series and numerical quadrature. More importantly, here we take a different path forward, converting the Kraus form to a Stinespring form, thereby enabling simulations of the Lindblad dynamics through Hamiltonian simulations.
Step 3. Construct the dilated Hamiltonian ℋ. Find a sequence of matrices {H_j}_{j=0}^{S_k} such that

Tr_A( e^{−i√∆t ℋ} (|0^{a_k}⟩⟨0^{a_k}| ⊗ ρ) e^{i√∆t ℋ} ) = Σ_{j=0}^{S_k} F_j ρ F_j† + O(∆t^{k+1}),   (26)

where the Hermitian matrix ℋ takes the form (18). This is achieved through asymptotic analysis. This versatile approach is applicable not only when the Kraus form is derived from an SDE integrator, but also in situations in which the Kraus form emerges from alternative derivations.

IV.2. Proof of the main theorem: Construction of the dilated Hamiltonian ℋ
In this section, we detail the strategies to accomplish the preceding three steps, which provides a constructive proof of Theorem 1. The algorithm to construct ℋ is summarized in Algorithm 1.

Algorithm 1 Construction of the dilated Hamiltonian ℋ
Input: desired order k; time step ∆t; Hamiltonian H; jump operators {V_j}. Output: ℋ.
1: Formulate a kth-order SDE scheme following Eq. (30).
2: Produce the corresponding kth-order Kraus form using Eq. (39).
3: Construct the dilated Hamiltonian ℋ following the pathway detailed in Eq. (D21) and Fig. 5 in Appendix D.
In the following part of the derivation, we simplify our notation by omitting the subindex of L_{k,∆t} and denoting it by L. We also define the drift operator

V₀ := −iH − ½ Σ_{j=1}^{J} V_j† V_j,

which is responsible for the non-Hermitian part of the Lindblad dynamics. We will not include the subscript of |ψ_n⟩ in the following proof, for the sake of simplicity.
Step 1: Formulate the weak scheme of order k for the SDE (6).
The kth-order weak scheme has been thoroughly investigated in the classical numerical SDE literature. Here, we employ the scheme derived from the Itô-Taylor expansion, as presented in [42, Chapter 14]. Toward this end, we consider multi-indices α = (j₁, ..., j_l) with components j_i ∈ {0, 1, ..., J}, and we define two sets of multi-indices: Γ_k, the set of multi-indices with |α| ≤ k (together with the empty multi-index, for which V_α = I and R_α = 1), and Γ_{k/0} ⊂ Γ_k, the subset of multi-indices with at least one nonzero component, where |α| is the number of components of the multi-index α. These indices are necessary to keep track of the different components of the Brownian motion W^j(t). A scheme of weak order k can be expressed using multiple integrals over [0, ∆t]:

L[|ψ⟩] = Σ_{α ∈ Γ_k} R_α V_α |ψ⟩,   (30)

where we set dW⁰_s = ds,

V_α := V_{j_{|α|}} ⋯ V_{j_2} V_{j_1}   (31)

denotes a product of the jump operators (with V₀ the drift operator defined above), and the sequence of random variables {R_α}_{α ∈ Γ_{k/0}} corresponds to multiple Itô stochastic integrals as follows:

R_α := ∫₀^{∆t} ⋯ ∫₀^{s_2} dW^{j_1}_{s_1} ⋯ dW^{j_{|α|}}_{s_{|α|}}.   (32)

According to [42, Theorems 14.5.1, 14.5.2], the direct expansion given in (30) induces a kth-order weak scheme that satisfies the desired order condition given in (23). In addition, when ∆t = O(∥L∥_be^{−1}), the terms in (30) remain uniformly bounded.

Step 2: Formulate the kth-order Kraus form.
In the second step, we construct the kth-order Kraus form from the Itô-Taylor-expansion method in (30). As preparation, we introduce some notation and definitions for the terms with multi-indices. Note that the zero components of α indicate standard integration over t, while the nonzero components correspond to stochastic integrals. Given α ∈ Γ_k, let α⁺ be the multi-index obtained by removing all components of α that are equal to zero. For example, if α = (1, 0, 2, 1), then α⁺ = (1, 2, 1). We define l₌₀(α) as the number of zero components, which means that l₌₀(α) = |α| − |α⁺|. According to [42, Chapter 5, Lemma 5.7.2], given α, α′ ∈ Γ_k, we have

E(R_α R_{α′}) = 1_{α⁺=(α′)⁺} C_{α,α′} ∆t^{(|α| + l₌₀(α) + |α′| + l₌₀(α′))/2}.   (33)

Here, 1_{α⁺=(α′)⁺} stands for the indicator function and C_{α,α′} is a factor that depends on the indices α and α′ but not on ∆t. In addition, |C_{α,α′}| ≤ 1 for all α, α′. Based on (33), we define the normalization of R_α by the step size ∆t:

R_{n,α} := ∆t^{−(|α| + l₌₀(α))/2} R_α.   (34)

As a result of this rescaling, we can work with a set of Gaussian random variables R_{n,α} with mean zero and covariance independent of ∆t. In particular, we can rewrite L[|ψ⟩] in (30) as

L[|ψ⟩] = Σ_{α ∈ Γ_k} ∆t^{(|α| + l₌₀(α))/2} R_{n,α} V_α |ψ⟩.   (35)

Here E(R_{n,α} R_{n,α′}) = C_{α,α′}. Note that even though the expected value of R_{n,α} is zero, the expected value of R_{n,α} R_{n,α′} may not be equal to zero; i.e., in general, these random variables are correlated whenever α⁺ = (α′)⁺. Thus, if we naively take the V_α as Kraus operators, we will encounter cross terms in the expansion of the Kraus form, leading to a nondiagonal Kraus form. To overcome this difficulty, we introduce the following lemma to orthogonalize the noise terms.
Lemma 2. Let R_{n,α} be defined in (34). There exists a sequence of random variables {R̃_α}_{α ∈ Γ_{k/0}} that satisfies the following conditions:

• R_{n,α} = Σ_{α′} c_{α,α′} R̃_{α′},   (36)

where c_{α,α′} is a constant independent of ∆t.

• For any α, E(R̃_α) = 0. In addition, R̃_α is either zero or E(R̃_α²) = 1.

• The R̃_α are mutually uncorrelated: E(R̃_α R̃_{α′}) = 0 for α ≠ α′.
The proof of Lemma 2 is in Appendix C. With this new expression for the noise terms, we can plug Eq. (36) from Lemma 2 into (30) and obtain

L[|ψ⟩] = Σ_{α′} R̃_{α′} ( Σ_{α ∈ Γ_k} ∆t^{(|α| + l₌₀(α))/2} c_{α,α′} V_α ) |ψ⟩.   (37)

We are now in a position to derive a Kraus form. We define

F_{α′} := Σ_{α ∈ Γ_k} ∆t^{(|α| + l₌₀(α))/2} c_{α,α′} V_α.   (38)

In light of (37), we obtain an approximation of the density operator in a Kraus form,

E( L[|ψ⟩] L[|ψ⟩]† ) = Σ_{α′} F_{α′} |ψ⟩⟨ψ| F_{α′}†,   (39)

which satisfies (24). We note that the total number of Kraus operators is at most (J+1)^{k+1} − 1.

Step 3: Construct the dilated Hamiltonian ℋ. We start by ordering and expressing the Kraus operators by the powers of ∆t, i.e., in an asymptotic form:

{F_j}_{j=0}^{S_k}, where F₀ = I + O(∆t), F_j = O(∆t^{1/2}) for 1 ≤ j ≤ s_k, and F_j = O(∆t) for s_k < j ≤ S_k.   (40)

Here, we separate those Kraus operators that carry half-integer powers of ∆t (1 ≤ j ≤ s_k) from those that carry integer powers. We note that S_k + 1 equals the number of Kraus operators; thus, S_k ≤ (J+1)^{k+1} − 2. From (23) and (24), we see that Σ_{j=0}^{S_k} F_j ρ F_j† is a kth-order approximation of the Lindblad evolution and can be expanded into a Stinespring form, meaning that

Σ_{j=0}^{S_k} F_j ρ F_j† = Tr_A( U (|0^{a_k}⟩⟨0^{a_k}| ⊗ ρ) U† ),   (41)

where U is a unitary matrix that can be constructed by Stinespring's factorization theorem. Now we are ready to introduce the following lemma, which implies the existence of the dilated Hamiltonian ℋ.

Lemma 3. Given the Kraus operators {F_j}_{j=0}^{S_k} in (40), there exists ℋ such that

Tr_A( e^{−i√∆t ℋ} (|0^{a_k}⟩⟨0^{a_k}| ⊗ ρ) e^{i√∆t ℋ} ) = Σ_{j=0}^{S_k} F_j ρ F_j† + O(∆t^{k+1}).   (42)

Furthermore, ℋ can be written as (18) with

H_j = Σ_q ∆t^q X_{j,q}.   (43)

Here, each X_{j,q} is a polynomial of H and V_j that satisfies ∥X_{j,q}∥ = O(∥L∥_be^{q+1/2}) for 1 ≤ j ≤ s_k and ∥X_{j,q}∥ = O(∥L∥_be^{q+1}) otherwise. Intuitively, the unitary operator e^{−i√∆t ℋ} in Eq. (42) can be expanded and its first block column can be compared to the first block column of the unitary matrix in Eq. (41). Specifically, each matrix in Eq. (43) can be obtained by matching the corresponding terms in the expansion in (40). The proof is in Appendix D. According to Lemma 3, we obtain ∥H_j∥ = O(∥L∥_be).
Finally, to complete the proof of Theorem 1, the remaining step is to demonstrate that ℋ is a Hermitian matrix. This is stated in the following lemma.

Lemma 4. The dilated Hamiltonian ℋ constructed in Lemma 3 is Hermitian.
The proof of Lemma 4 is in Appendix E.

V. NUMERICAL EXPERIMENTS
In this section, we provide results from several numerical experiments to illustrate the convergence of our algorithm. We start with a time-independent transverse-field Ising model (TFIM) in Section V.1 and examine the convergence rates of the first-, second-, and third-order methods. The specific forms of these methods can be found in Appendix B. To extend the applications to more general cases, we also present two time-dependent examples, in Section V.2 and Section V.3, to further test the performance of our proposed methods. In all of the following numerical experiments, we use the fourth-order Runge-Kutta scheme with a very small time step to generate the "exact solution" ρ_T and measure the error at time T using the trace distance,

error = ∥ρ_N − ρ_T∥₁,   (44)

where T is the stopping time, N = T/∆t, and ρ_N is the output of our algorithm.

V.1. A TFIM damping model
Consider the one-dimensional TFIM model defined on m sites:  Examining the accuracy of the first-, second-, and third-order methods using the TFIM damping mode (45).Left: The comparison of the evolution of the ground state overlap with different schemes and the same step size ∆t = 0.1 up to the stopping time T = 5.Right: the comparison of the error versus ∆t using different schemes with stopping time T = 1.We set the x axis as 1/∆t and plot it in the log scale to illustrate the order scaling of our methods.The three dashed lines are drawn by matching the error curve from the k th scheme with (∆t) k .The comparison of the slopes verifies that our k th-order scheme indeed leads to an error that is O(∆t) k .
where g is the coupling coefficient describing the transverse magnetic field strength, Z_i and X_i are Pauli operators for the ith site, and the dimension of H is 2^m. We set m = 4 and g = 1 and simulate the TFIM model with damping [25], with jump operators V_j = √γ (X_j − iY_j)/2, damping parameter γ = 0.1, and initial state |ψ_0⟩ the ground state of H.
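The model can be assembled with Kronecker products. The sketch below assumes open boundary conditions for the ZZ couplings (the text does not state the boundary convention) and uses the hypothetical helper names `op_at`, `tfim`, and `damping_ops`:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_at(op, i, m):
    """Embed a single-site operator at site i of an m-site chain."""
    factors = [I2] * m
    factors[i] = op
    return reduce(np.kron, factors)

def tfim(m, g):
    """H = -Σ Z_i Z_{i+1} - g Σ X_i (open boundary: an assumed convention)."""
    H = sum(-op_at(Z, i, m) @ op_at(Z, i + 1, m) for i in range(m - 1))
    H = H + sum(-g * op_at(X, i, m) for i in range(m))
    return H

def damping_ops(m, gamma):
    """V_j = √γ (X_j - i Y_j)/2, i.e., the lowering operator on site j."""
    return [np.sqrt(gamma) * (op_at(X, j, m) - 1j * op_at(Y, j, m)) / 2
            for j in range(m)]
```

For m = 4 this produces a 16 × 16 Hermitian H and four nilpotent jump operators, matching the setup above.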
In [25], the authors have used this model to test the accuracy of their numerical scheme and to investigate the effect of magnetic field strengths and damping parameters on the solution trajectory.
For our experiment, we focus on the scaling of the error of our numerical methods with ∆t, so we only assess its effectiveness with fixed values of g and γ.
We examine the convergence of three numerical schemes (see Appendix B): (i) the first-order scheme in (B3); (ii) the second-order scheme in (B6); and (iii) the third-order scheme in (B11). The results are shown in Figure 2. The graph on the left shows the overlaps between ρ(t) and the ground state for the time step ∆t = 0.1. We can see that the second- and third-order schemes match the exact solution better than the first-order scheme. In the right graph, with a stopping time of T = 1, we evaluate the convergence of the three methods for different ∆t and measure the end error using (44). One can observe that all the schemes converge at the expected order. Due to the random selection of the operators G and G_{j,2}, as well as of the initial condition, these orders of accuracy are very likely sharp.
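The slopes in the right panel of Figure 2 can be reproduced with a short least-squares fit of log(error) against log(∆t). This is a generic sketch, not code from the paper:

```python
import numpy as np

def estimated_order(dts, errors):
    """Least-squares slope of log(error) vs. log(Δt).

    For a k-th order scheme with error ≈ C·Δt^k, the slope is ≈ k.
    """
    slope, _intercept = np.polyfit(np.log(dts), np.log(errors), 1)
    return slope
```

Running this on the measured end errors for each scheme should return values close to 1, 2, and 3, respectively.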

V.2. A time-dependent TFIM model with damping

In the following numerical test, we consider the time-dependent TFIM damping model, in which both the Hamiltonian and the jump operators are driven by a linear pulse. Here, H is the TFIM model with m = 4 and g = 1, and H′ = (G + G†)/∥G + G†∥ with G ∼ N(0, 1, I_{2^m × 2^m}), i.e., a matrix with i.i.d. standard-normal entries. We also choose random damping operators with γ = 0.1 and G_{j,2} ∼ N(0, 1, I_{2^m × 2^m}). We note that this is a time-dependent Lindblad equation with two jump operators. We test the first-, second-, and third-order methods discussed in Appendix B.
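A sketch of the random Hermitian perturbation H′ and the driven Hamiltonian is below. The exact pulse profile is elided in the text, so the linear ramp t/T used here is an assumption, as is the helper name `random_hermitian_direction`:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian_direction(dim):
    """H' = (G + G†)/‖G + G†‖ with i.i.d. standard-normal entries of G."""
    G = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
    Hp = G + G.conj().T
    return Hp / np.linalg.norm(Hp, 2)   # normalize by the spectral norm

def H_t(t, T, H0, Hp):
    """Linearly ramped Hamiltonian H(t) = H0 + (t/T)·H'.

    The precise pulse shape is not given in the text; t/T is an assumption.
    """
    return H0 + (t / T) * Hp
```

The jump operators V_j(t) can be driven by the same kind of ramp applied to the random matrices G_{j,2}.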
The results are shown in Figure 3. In the left graph, we set the initial state as the ground state of H and run the simulations up to T = 5. We compare the evolution of the overlap with the ground state for all three methods. It can be seen from the graph that the second- and third-order schemes show much better agreement with the exact solution than the first-order scheme. The results shown in the right panel are obtained with a random initial state and a simulation of the dynamics up to time T = 1. We examine the convergence of the methods by varying ∆t and measuring the end error defined in (44). We observe that all three schemes converge to the true solution at the expected order of accuracy.

V.3. Periodically driven Lindbladian dynamics
In this section, we consider a single-qubit time-dependent system driven by a periodic Hamiltonian and periodic jump operators; the specific choices of the Hamiltonian and damping operators follow [45]. We then compare the performance of the first-, second-, and third-order methods at the stopping time T = 10π with a random initial state. The error is measured using (44).
The numerical results are summarized in Figure 4. In the left graph, we choose ∆t = 0.1 and compare the evolution of Tr(ρ(t)σ_z). We observe that the second- and third-order schemes exhibit significantly better accuracy than the first-order scheme. Similarly to the previous results, in the right figure the errors of all schemes decrease at the expected order of convergence.
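A hedged single-qubit illustration of computing the observable Tr(ρ(t)σ_z) is given below. The drive H(t) = cos(t)·σ_x and the damping V = √γ·σ⁻ are placeholders, not the specific periodic choice from [45]:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sminus = np.array([[0, 1], [0, 0]], dtype=complex)

def rhs(t, rho):
    """Lindblad RHS with an *assumed* periodic drive H(t) = cos(t)·σx and
    damping V = √γ·σ⁻ (γ = 0.1); the paper's choice from [45] is elided."""
    H = np.cos(t) * sx
    V = np.sqrt(0.1) * sminus
    VdV = V.conj().T @ V
    return (-1j * (H @ rho - rho @ H)
            + V @ rho @ V.conj().T - 0.5 * (VdV @ rho + rho @ VdV))

def sz_trajectory(rho0, dt, nsteps):
    """RK4 time stepping; returns Tr(ρ(t)·σz) after each step."""
    rho, t, out = rho0.copy(), 0.0, []
    for _ in range(nsteps):
        k1 = rhs(t, rho)
        k2 = rhs(t + dt / 2, rho + dt / 2 * k1)
        k3 = rhs(t + dt / 2, rho + dt / 2 * k2)
        k4 = rhs(t + dt, rho + dt * k3)
        rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
        out.append(np.trace(rho @ sz).real)
    return out
```

Plotting the returned trajectory against time reproduces the kind of curve shown in the left panel of Figure 4 for this assumed model.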

VI. DISCUSSION AND CONCLUSIONS
This paper presents a new method for simulating Lindblad dynamics using Hamiltonian simulation in an enlarged Hilbert space. Our algorithm involves only the simulation of a dilated Hamiltonian
and trace-out operations. The latter can be implemented simply by measuring the ancilla qubits and discarding the results. Each step of our algorithm forms a CPTP map, thereby guaranteeing a success probability of one. In contrast to previous methods [23,24], our algorithm eliminates the need for oblivious amplitude amplification at the level of Lindbladian simulation, which may require precise adjustment of the time step ∆t with respect to the block-encoding factor.
Our methodology bridges the gap between Lindblad simulation and Hamiltonian simulation. This approach also introduces a new class of Hamiltonian simulation problems, in which the Hamiltonian H consists of commutators among the jump operators (including the system Hamiltonian H) and the various components of the dilated Hamiltonian scale differently with respect to the time step ∆t. Identifying suitable Hamiltonian simulation techniques for this specific setting poses an interesting question for future investigation. For example, suppose that both H and the V_j can be expressed as sums of Pauli operators. In that case, we can decompose H_0, H_1, . . ., H_{S_k} into sums of Pauli operators and further refine the simulation using a high-order Trotterization method.
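The decomposition step mentioned above can be sketched as follows. `pauli_coefficients` is a hypothetical helper that computes c_P = Tr(P·H)/2^n for every n-qubit Pauli string P, so that H = Σ_P c_P·P; the enumeration is exponential in n and is intended only for small blocks:

```python
import numpy as np
from functools import reduce
from itertools import product

PAULIS = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]]),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_coefficients(H):
    """Return {Pauli string: c_P} with c_P = Tr(P H)/2^n and |c_P| > 1e-12."""
    n = int(np.log2(H.shape[0]))
    coeffs = {}
    for labels in product("IXYZ", repeat=n):
        P = reduce(np.kron, [PAULIS[l] for l in labels])
        c = np.trace(P @ H) / 2**n
        if abs(c) > 1e-12:
            coeffs["".join(labels)] = c
    return coeffs
```

Once each block H_j is expressed as such a sum, standard high-order Trotterization can be applied to the dilated Hamiltonian.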
In contrast to Hamiltonian simulation, where a diverse range of methods is available and practical resource estimates have been conducted (see, e.g., [46]), quantum algorithms for Lindblad simulation remain in their nascent stages. This study introduces a framework that differs from those in the existing literature. Low-order methods, such as the second- and third-order schemes, may be preferable to higher-order versions in terms of practical implementation. We hope that this work can facilitate future resource estimates for identifying the most practical methods for simulating Lindblad dynamics.
Appendix A: A second-order scheme for time-independent Lindbladian dynamics

As a concrete example, in this appendix we derive a second-order scheme to simulate time-independent Lindbladian dynamics.
Step 1: Formulate the weak scheme of order two for the SDE (6).
According to the first step of Algorithm 1, we can write down the weak order-2.0 scheme according to [42, (14.2.6)], together with the auxiliary quantities appearing in it.

Step 2: Formulate the second-order Kraus form.
In the second step, we construct the Kraus form from the scheme described above. Generally, we must convert the Itô integrals to random variables and arrange them so that they are uncorrelated (see, e.g., Lemma 2). In this case, we simply take the formula from [42, (14.2.7)] and reformulate the above second-order scheme as (A2). Here, {∆W_j}_{j=1}^J are independent Gaussian random variables with mean zero and variance ∆t, and {∆Z_{j1,j2}} are independent two-point random variables. Given that the random noises in distinct terms are uncorrelated, taking the expectation on both sides, we arrive at a relation for the expected state at time n + 1. Here, we have combined the third and fourth lines of (A2) into F_{2,j,k} using E((∆W_j² − ∆t)²) = 2∆t² and E((∆W_{j1}∆W_{j2} − ∆Z_{j1,j2})²) = 2∆t². This leads us to define the Kraus form and the corresponding iteration scheme.

Step 3: Construct the dilated Hamiltonian H. The goal of the last step is to construct the Hamiltonian H such that (A3) holds. Since there are J² + J + 1 Kraus operators, we seek a Hamiltonian with a block structure in which H_0 is required to be Hermitian. This will be compared to the Stinespring form. By matching the two expressions, we see that, to arrive at (A3), we need to find H_0, H_{1,j}, and H_{2,j,k} so that (A4) holds, where [•] is 0, (1, j), or (2, j, k).
Next, we note that the matrix exponential can be expanded in a Taylor series. Plugging this expansion into the left-hand side of (A4), we match terms in the blocks of the first column. We first match the first-order terms in the last two equations by taking the leading terms, where we use X_{[•],p} to represent the coefficient of the order-∆t^p term. We then substitute them into Q. Plugging this into the first equation of (A5) and matching the first term determines the leading part of H_0. Next, we include the next-order terms in F_{1,j} and F_{2,j,k}. Again, matching both sides of the last two equations, we obtain the asymptotic expansion; we then substitute the result into Q. Plugging this into the first equation of (A5) and matching the second term determines the next correction. We have left out higher-order terms, since they contribute at most O(∆t³), which is comparable to the leading error term in Eq. (A3). This completes the construction of the Hamiltonian H.
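The moment identities used to combine the third and fourth lines of (A2), namely E((∆W_j² − ∆t)²) = 2∆t² and E((∆W_{j1}∆W_{j2} − ∆Z_{j1,j2})²) = 2∆t², can be checked by Monte Carlo. A minimal sketch; the two-point construction ∆Z = ±∆t with probability ½ each follows the usual Kloeden-Platen convention and is an assumption here:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.1, 400_000

# Independent Gaussian increments ΔW ~ N(0, Δt)
dW1 = rng.normal(0.0, np.sqrt(dt), size=n)
dW2 = rng.normal(0.0, np.sqrt(dt), size=n)
# Two-point variables ΔZ = ±Δt with probability 1/2 each (assumed construction)
dZ = dt * rng.choice([-1.0, 1.0], size=n)

# E[(ΔW² - Δt)²] = E[ΔW⁴] - 2Δt·E[ΔW²] + Δt² = 3Δt² - 2Δt² + Δt² = 2Δt²
m1 = np.mean((dW1**2 - dt) ** 2)
# E[(ΔW₁ΔW₂ - ΔZ)²] = Δt² + Δt² = 2Δt²  (the three variables are independent)
m2 = np.mean((dW1 * dW2 - dZ) ** 2)
```

Both sample averages `m1` and `m2` concentrate near 2∆t² as n grows, matching the identities used in the derivation.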
Appendix B: A summary of first-, second-, and third-order schemes for simulating time-dependent Lindblad equations

In this appendix, we extend the construction of the previous appendix and derive numerical schemes for the time-dependent Lindblad equation (B1). In our derivation, we assume that H(t), V_j(t) ∈ C²[0, ∞). We note that when H(t) and V_j(t) are smooth enough, we can directly apply the strategy of this paper (Algorithm 1) to develop higher-order schemes. In this appendix, we summarize the first-, second-, and third-order schemes for solving the time-dependent Lindblad equation in Eq. (B1). The schemes for solving time-independent Lindblad equations can readily be obtained by removing the terms involving the time derivatives of H and V.
We first define some shorthand notation. For simplicity, we omit the first step and start by expressing the Kraus operators in an asymptotic form (we omit the −i in front of F for simplicity, since it does not affect the Kraus representation), where X_{[•],p} contains the coefficient of the order-∆t^p term in each expansion.
We note that, in the time-independent case, all derivative terms marked with ′ and ′′ are equal to zero. After obtaining the above formula, we can use our general strategy from Section IV.2 to derive H. For simplicity, we omit the derivation process and directly give the formulas of the different-order schemes:

• The first-order scheme: with the entries X_{0,0}, X_{1,j,0}, and Q_0 given in (B4), the dilated Hamiltonian is assembled as a direct generalization of (17).
• The second-order scheme: using X_{0,0}, X_{1,j,0}, and Q_0 from Eq. (B4) of the first-order scheme, we have the expressions for the entries of H, with j, k, l ∈ [J]. In addition, the first diagonal block is given accordingly, and we find the explicit form of H in (B10).

• The third-order scheme: as in the first- and second-order schemes, we have the expressions for the entries of H. In addition, one of the entries involves the combination X_{4,j,k,1}^† X_{4,j,k,0} + X_{4,j,k,0}^† X_{4,j,k,1}.

Appendix C: Proof of Lemma 2

The purpose of Lemma 2 is to decompose the noise terms into uncorrelated random variables. According to (33), the noise terms in the Itô-Taylor expansion (30) satisfy (C1). We define the set of multipositive indices Γ_k^+ accordingly. Using the normalized noise, we rewrite L[|ψ⟩] in (30). In light of Eq. (C1), to ensure zero correlation between random variables, it suffices to focus on the set {R_{n,α}}_{α^+=β} for each β ∈ Γ_k^+. In the remainder of the proof, we fix β ∈ Γ_k^+. To construct R̃, we first fix an order of the noise terms {R_{n,α}}_{α^+=β} (the order can be arbitrary and does not affect the statement) and relabel the sequence as {R_{β,i}}_{i=1}^{I_β}, where I_β denotes the cardinality of the set. Consequently, we rewrite the original summation ∑_{α^+=β} R_{n,α} V_α |ψ⟩ in terms of this labeling. We define V_{β,i} = V_α and q_{β,i} = (|α| + l_0(α))/2, where l_0(α) denotes the number of zero components of α and the index i is assigned based on the specified ordering.
We define Cov_β as the covariance matrix of {R_{β,i}}. Because Cov_β is a positive semidefinite matrix, we can write its eigendecomposition Cov_β = QΛQ^⊤, where Λ is a diagonal matrix with non-negative entries and Q is an orthogonal matrix. We then define the decorrelated variables R̃_{β,i},
where Λ^+ is a diagonal matrix defined from Λ. The R̃_{β,i} are uncorrelated, which means that E[R̃_{β,i} R̃_{β,j}] = 0 if i ≠ j. This proves (36).

Recall that S_k + 1 is the number of Kraus operators. To fulfill (42), we need to construct a Hamiltonian of the block form (D1). Comparing (43) with (40), we reduce the power of ∆t by half because there is an extra √∆t factor in the Hamiltonian simulation (43). We identify the blocks in (D1) by asymptotically matching (40) and (43). For this purpose, we expand the matrix exponential in (40) using a Taylor expansion. Next, we match the first diagonal block: by inserting the asymptotic expansions of H_0 and Q into (D3), we find the leading term. We now move on to the second term. Returning to (D10), we can match the ∆t terms; additionally, equating the ∆t² terms in the first diagonal block yields the second component of H_0 (for the sake of simplicity, we do not write down the asymptotic expansion). To show that the above derivation process can always be continued until we obtain the last term, we use an induction argument. Assume that we have already matched K terms; we can then use these terms, together with (D9) and (D11), to calculate the next ones. To continue, we first match the ∆t^K term in the off-diagonal blocks of (D2). Similarly to (D14), we obtain X_{j>0,k≤K}, from which we construct Q_K according to (D9). We then match the ∆t^{K+1} term in the first diagonal block of (D2), where q_{x,K} is a polynomial of degree K + 1. Thus, we obtain the stated bound with D = Ω(max_j ∥H_j∥).

At the end of this section, we take a closer look at the derivation of the H_j and give an upper bound for ∥H_j∥. Without loss of generality, we assume S_k = 2^P − 1 for some P ∈ N. The construction of the block encoding of H can be divided into three steps:

• Construction of the select oracle U, defined by Eq. (F2), where I_A is the identity map that acts on the P ancilla qubits. In the worst case, this select oracle U can be constructed using S_k + 1 controlled logic gates. We give an example with S_k = 3 in Fig. 6.

• Construction of the unitary gate W, which can be implemented using P Hadamard gates and 3P controlled logic gates; we draw the circuit in Fig. 7. We note that W satisfies our requirement; furthermore, plugging in the formula for U, we obtain the desired combination.

• Construction of the block encoding of |0⟩⟨0| ⊗ H_0 + ∑_{j=1}^{S_k} (|j⟩⟨0| ⊗ H_j + |0⟩⟨j| ⊗ H_j^†). This step can be completed by an LCU circuit, as drawn in Fig. 8. More specifically, the circuit implements the block encoding of (W(I_A ⊗ U)) + (W(I_A ⊗ U))^†. The success probability of the block encoding is inversely proportional to (S_k + 1)D². The next step is to determine an upper bound for D, which is equivalent to finding the maximum value of ∥H_j∥.
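The decorrelation construction in the proof of Lemma 2 (Appendix C above), eigendecomposing the covariance Cov_β = QΛQ^⊤ and rotating the variables, can be sketched numerically. This is a generic whitening illustration, not the paper's code:

```python
import numpy as np

def decorrelate(samples):
    """Rotate samples of a zero-mean random vector (shape (n, d)) by the
    eigenvectors Q of their empirical covariance Cov = QΛQᵀ.

    The rotated components have a diagonal covariance, i.e., they are
    mutually uncorrelated, mirroring the construction of the R̃_{β,i}.
    """
    cov = samples.T @ samples / samples.shape[0]
    lam, Q = np.linalg.eigh(cov)        # eigh: Λ ascending, Q orthogonal
    return samples @ Q, lam
```

Applying `decorrelate` to correlated Gaussian samples yields components whose empirical cross-covariance vanishes, as asserted in the lemma.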
The above two inequalities conclude the induction and prove (F9).

Recall that S_k = O((J + 1)^{k+1}) from Theorem 1. We obtain the bound (F11) for the subnormalization factor of the block encoding. It should be noted that the bound in Eq. (F11) significantly overstates the size of ∥H_j∥.
In the proof of Lemma 6, we upper bound the small coefficient c by 1 and do not take into account possible cancellation of terms in the summation. In practice, considerable cancellation occurs within the asymptotic expansion and matching, resulting in a substantially smaller ∥H_j∥.

Figure 2. Examining the accuracy of the first-, second-, and third-order methods using the TFIM damping model (45). Left: comparison of the evolution of the ground-state overlap for the different schemes with the same step size ∆t = 0.1, up to the stopping time T = 5. Right: comparison of the error versus ∆t for the different schemes with stopping time T = 1. We set the x axis to 1/∆t and plot it on a log scale to illustrate the order scaling of our methods. The three dashed lines are drawn by matching the error curve of the kth-order scheme with (∆t)^k. The comparison of the slopes verifies that our kth-order scheme indeed leads to an error that is O((∆t)^k).

Figure 3. Testing the accuracy of the first-, second-, and third-order methods using the time-dependent TFIM Lindbladian (47). Left: comparison of the evolution of the ground-state overlap for the different schemes with the same step size ∆t = 0.1, up to stopping time T = 5. Right: comparison of the error versus ∆t (on a logarithmic scale) for the different schemes with stopping time T = 1. We set the x axis to 1/∆t and plot it on a log scale to illustrate the order scaling of our methods. The three dashed lines are drawn by matching the error curve of the kth-order scheme with (∆t)^k. This verifies that our kth-order scheme indeed leads to an error that is O((∆t)^k).

Figure 4. Testing the accuracy of the first-, second-, and third-order methods using the periodically driven Lindbladian [45]. Left: comparison of the evolution of Tr(ρ(t)σ_z) for the different schemes with the same step size ∆t = 0.1, up to stopping time T = 10π. Right: comparison of the error versus ∆t for the different schemes with stopping time T = 10π. We set the x axis to 1/∆t and plot it on a log scale to illustrate the order scaling of our methods. The three dashed lines are drawn by matching the error curve of the kth-order scheme with (∆t)^k. The comparison of the slopes verifies that our kth-order scheme indeed leads to an error that is O((∆t)^k).
Appendix D: Proof of Lemma 3

Figure 5. The generation of H. We need to compare terms of the same order in the asymptotic expansion. Specifically, in each row of the two matrices, we need to match the terms of the same color. Here, poly([•]) means that the term can be written as a polynomial of elements in [•].

Figure 8. Quantum circuit for the block encoding of H.