Hardware-efficient variational quantum algorithms for time evolution

Parameterized quantum circuits are a promising technology for achieving a quantum advantage. An important application is the variational simulation of the time evolution of quantum systems. To make the most of quantum hardware, variational algorithms need to be as hardware-efficient as possible. Here we present alternatives to the time-dependent variational principle that are hardware-efficient and do not require matrix inversion. In relation to imaginary time evolution, our approach significantly reduces the hardware requirements. With regard to real time evolution, where high precision can be important, we present algorithms of systematically increasing accuracy and hardware requirements. We numerically analyze the performance of our algorithms using quantum Hamiltonians with local interactions.


I. INTRODUCTION
Small quantum computers are available today and offer the exciting opportunity to explore classically difficult problems for which a quantum advantage may be achievable. The simulation of time evolution of quantum systems is an example of such problems where the advantage of using a quantum computer over a classical one is well understood [1][2][3][4][5]. This simulation is also important for our understanding of quantum chemistry and materials science which are key application areas for future quantum computers [6][7][8][9]. One of the main challenges in the design of time evolution algorithms is to reduce their experimental requirements without sacrificing their accuracy.
Although significant progress has been made based on the original quantum algorithm for simulating time evolution [2], this algorithm faces obstacles on current quantum hardware which lacks quantum error correction [10][11][12]. Promising alternatives are variational hybrid quantum-classical algorithms [13,14] and variational quantum simulation [15,16]. In these approaches, a parameterized quantum circuit (PQC) is prepared on a quantum computer and variationally optimized to solve the problem of interest. Promising examples of quantum advantage obtained with PQCs have already been identified for time evolution [17,18] and in other contexts such as for nonlinear partial differential equations [19,20], dynamical mean field theory [21,22], and machine learning [23].
In this paper, we focus on making the variational simulation of time evolution as efficient as possible in terms of quantum hardware resources. Existing proposals are based on the time-dependent variational principle or variants thereof [15,16]. These approaches require matrix inversion, which poses computational challenges for ill-conditioned matrices. Our contribution is a set of alternative techniques that do not need matrix inversion and that allow one to systematically increase the simulation accuracy, together with the hardware requirements, according to experimental capabilities. We analyze the hardware/accuracy trade-off and show that for imaginary time evolution our algorithm significantly reduces the hardware requirements over existing methods and produces accurate ground state approximations. For real time evolution, the accuracy per time step is essential. We present a hierarchy of algorithms where the accuracy can be systematically improved by utilizing more hardware resources. Figure 1 illustrates the performance of some of the algorithms developed in this paper.

* marcello.benedetti@cambridgequantum.com

FIG. 1. We use a PQC Ansatz of depth D = 2, which is the smallest nontrivial Ansatz among those considered in this work. The resulting causal cones require quantum hardware with only six qubits. (a) Imaginary time evolution of a randomly initialized PQC. We use angle update with one sweep for 50 time steps of size 0.05, then 50 time steps of size 0.03, and then 50 time steps of size 0.01. E(θ) is the variational state's energy and Egs is the exact ground state energy. The algorithm achieves relative energy errors below 10^-3. (b) Real time evolution of the initial state |00...0⟩. We use cone update with six sweeps and time step size 0.01. We plot the squared Euclidean distance between the variational state |ψ(θ)⟩ and the exact evolved state |ψ⟩ versus time t (in units of 1/J). A 0.05 distance is obtained at the end of the simulation.
Our strategy is inspired by several tensor network concepts.
Firstly, we apply the Trotter product formula to the time evolution operator and optimize the Ansatz one Trotter term after another. A similar procedure is used in the time-evolving block decimation algorithm [24][25][26][27] as well as with projected entangled pair states [28][29][30]. Secondly, for a given Trotter term, we restrict the optimization to its causal cone. This is an important concept for the multiscale entanglement renormalization Ansatz [31,32] as well as for matrix product states [33]. The same concept is used in the design of noise-robust quantum circuits [34] and to simulate infinite matrix product states on a quantum computer [35]. Thirdly, we perform the optimization coordinatewise [36][37][38][39], i.e., we optimize a set of parameters at a time while keeping all the others fixed. A similar approach is widely used in tensor network optimization where tensors are optimized one after another [40,41].
In this work, we call hardware-efficient any variational quantum algorithm that (i) can use parameterized quantum circuits tailored to the physical device [42] or (ii) exploits other concepts that reduce the number of qubits or gates [43,44] such as causal cones. There exist alternative efficient schemes based on measuring and resetting qubits during computation [45][46][47] and for periodic quantum systems [48,49].
The paper has the following structure. Section II summarizes our methods, Sec. III presents our mathematical and numerical results, and Sec. IV contains our concluding discussion. The technical details are in the appendices. In a companion paper [50] we apply our algorithm to combinatorial optimization problems of up to 23 qubits.

A. Taking apart time evolution
In the following, we focus on real time evolution. To obtain imaginary time evolution one simply needs to substitute t by −it.
The simulation of real time evolution consists of approximating the action of the operator e^{-itĤ} on an initial state |ψ₀⟩ of n qubits. The Hamiltonian is assumed to have the general form Ĥ = Σ_{k=1}^K h_k Ĥ_k, where the Ĥ_k are tensor products of Pauli operators, the h_k are real numbers, and K ∼ O(poly(n)). We approximate the evolution by a sequence of short-time evolutions using the well-known Trotter product formula. With N time steps of size τ = t/N one obtains e^{-itĤ} ≈ U(τ)^N, where U(τ) = e^{-iτ h_K Ĥ_K} ⋯ e^{-iτ h_1 Ĥ_1}. The accuracy of this approximation can be improved using higher-order Trotter product formulas [51].
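As a quick numerical illustration of this splitting, the following sketch (our own, not from the paper; the five-term Hamiltonian is an arbitrary illustrative choice) compares U(τ)^N against the exact propagator on three qubits, using the identity e^{-iaĤ_k} = cos(a)1 − i sin(a)Ĥ_k that holds for any Pauli string:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for o in ops:
        out = np.kron(out, o)
    return out

n = 3
# illustrative H = sum_k h_k H_k with non-commuting Pauli-string terms
terms = [(1.0, kron_all([Z, Z, I2])), (1.0, kron_all([I2, Z, Z])),
         (0.5, kron_all([X, I2, I2])), (0.5, kron_all([I2, X, I2])),
         (0.5, kron_all([I2, I2, X]))]
H = sum(h * Hk for h, Hk in terms)
w, P = np.linalg.eigh(H)

def exact(t):  # e^{-i t H} via eigendecomposition
    return P @ np.diag(np.exp(-1j * t * w)) @ P.conj().T

def trotter_error(t, N):
    tau = t / N
    U_tau = np.eye(2**n, dtype=complex)
    for h, Hk in terms:  # U(tau) = e^{-i tau h_K H_K} ... e^{-i tau h_1 H_1}
        # each H_k is a Pauli string, so H_k^2 = 1 and
        # e^{-i a H_k} = cos(a) 1 - i sin(a) H_k
        U_tau = (np.cos(tau * h) * np.eye(2**n) - 1j * np.sin(tau * h) * Hk) @ U_tau
    return np.linalg.norm(np.linalg.matrix_power(U_tau, N) - exact(t), 2)

print(trotter_error(1.0, 10), trotter_error(1.0, 100))
```

As expected for a first-order formula, the error shrinks roughly linearly in τ = t/N as the number of steps grows.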
We variationally simulate the sequence of Trotter terms one term at a time.
Consider the kth term, e^{-iτ h_k Ĥ_k}, and let |ψ_{k-1}⟩ be the variational approximation to the previous step. We minimize the squared Euclidean distance ∥ |ψ(θ)⟩ − e^{-iτ h_k Ĥ_k} |ψ_{k-1}⟩ ∥² over the variational parameters θ by maximizing the objective function

F_k(θ) = Re(⟨ψ_{k-1}| e^{iτ h_k Ĥ_k} |ψ(θ)⟩),    (1)

where Re(·) denotes the real part of a complex number; see Appendix A for details. This optimization is carried out for all K terms in U(τ). The process is repeated N times, after which the simulation of time evolution is completed. Figure 2 illustrates a few steps of the method. The variational state consists of a PQC (light blue rectangle) acting on the state |0⟩ ≡ |0⟩^{⊗n} of the computational basis. At each step, a term of the Trotter formula is selected (blue rectangle) and simulated by the variational method.

B. Parameterized quantum circuits and causal cones
We consider PQC Ansätze composed of generic 2-qubit unitaries acting on nearest neighbors, as shown in Fig. 3 (a). The required 1d qubit-to-qubit connectivity matches that of many existing quantum computers. These Ansätze are universal for quantum computation if we adjust their depth accordingly. We refer to the unitaries U [i, j] (light blue rectangles) as blocks. In practice, they are made of gates from the hardware's gate set.
An interesting property is that the expectation of an operator acting on a few qubits depends only on the blocks inside its causal cone. Figure 3 (b) illustrates an example of the causal cone of a two-qubit operator. The expectation can be estimated with a quantum circuit of six qubits and five blocks, regardless of the overall size of the PQC Ansatz.
We perform the optimization of the objective function in Eq. (1) using only the blocks inside the causal cone of the Trotter term. Thus our variational algorithm requires just the preparation of circuits on quantum hardware of a size restricted by the causal cone. This allows us to work with variational states of size greater than the size allowed by the quantum computer. For example, periodic boundary conditions can be included with no additional hardware requirements. In Fig. 3 (a), block U[n/2, 2] operates on the first and last qubits. It is easy to see that such a physical qubit-to-qubit connectivity is not required when using causal cones. The number of required physical qubits depends only on the depth of the Ansatz and on the Trotter term. For long-range operators or operators acting on more than two qubits one can obtain several causal cones.

FIG. 2. The Trotter product formula applied to the time evolution operator results in repeated products of U(τ), where τ is the time step, acting on the initial state |ψ₀⟩ = U(θ₀)|0⟩. We approximate time evolution one term after another, each time finding the optimal variational parameters θ₁, θ₂, ...
Throughout this paper we consider the Ansatz in Fig. 3 (a) with the first nontrivial depth D = 2. The performance of our algorithms could be systematically improved by successively increasing the depth D. For example, when the variational error becomes too large during the time evolution, we can include a new column of blocks initialized to 1 at the end of the PQC to increase its expressive power. In this way, our Ansatz can be dynamically adapted to guarantee a certain variational error during the entire evolution, analogous to how this is commonly done in the time-evolving block decimation algorithm [26].
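The causal-cone property described above can be checked directly by statevector simulation. The sketch below (our own construction; the block placement and the ẐẐ observable are illustrative assumptions) builds a depth-2 brickwork circuit on eight qubits and shows that replacing a block outside the cone of a two-qubit observable leaves its expectation unchanged, while replacing a block inside the cone changes it:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 8
Z = np.diag([1.0, -1.0]).astype(complex)

def rand_block():  # Haar-like random two-qubit block via QR
    q, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
    return q

def embed(g, j):  # act with a 4x4 gate g on neighboring qubits (j, j+1)
    return np.kron(np.kron(np.eye(2**j), g), np.eye(2**(n - j - 2)))

def expectation(blocks):
    U = np.eye(2**n, dtype=complex)
    for j, g in blocks:
        U = embed(g, j) @ U
    psi = U[:, 0]  # U |00...0>
    O = embed(np.kron(Z, Z), 3)  # two-qubit observable on qubits (3, 4)
    return float(np.real(psi.conj() @ O @ psi))

# depth-2 brickwork: first layer on (0,1),(2,3),(4,5),(6,7), second on (1,2),(3,4),(5,6)
layer1 = [(j, rand_block()) for j in (0, 2, 4, 6)]
layer2 = [(j, rand_block()) for j in (1, 3, 5)]
e0 = expectation(layer1 + layer2)

outside = list(layer1)
outside[0] = (0, rand_block())  # swap the block on (0,1): outside the causal cone
e1 = expectation(outside + layer2)

inside = list(layer1)
inside[1] = (2, rand_block())  # swap the block on (2,3): inside the causal cone
e2 = expectation(inside + layer2)

print(np.isclose(e0, e1), np.isclose(e0, e2))
```

Only the blocks in the cone of the observable matter, which is exactly why the optimization can be restricted to circuits whose width is set by the cone rather than by n.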

C. The coordinatewise update rule
Let us specialize our discussion to PQCs of the form U(θ) = U_D ⋯ U_1, where each gate is either fixed, e.g., a CNOT, or parameterized as U_d = exp(−iθ_d G_d), where θ_d ∈ (−π, π] and G_d is a Hermitian and unitary matrix such that G_d² = 1. This standard parametrization has nice properties that we exploit to design optimization algorithms. In Refs. [36][37][38][39], the authors showed that when all parameters but one are fixed the energy expectation value has sinusoidal form. Therefore, there is an analytic expression for the extrema. Here we show that the same is true for the objective function in Eq. (1).
We define the coordinatewise objective for the dth parameter as f_{k,d}(x) ≡ F_k(θ_1, ..., θ_{d−1}, x, θ_{d+1}, ..., θ_D), where all parameters but one are fixed to their current values. In Appendix B, we show that this is a sinusoidal function f_{k,d}(x) = A_{k,d} sin(x + B_{k,d}) with amplitude A_{k,d} and phase B_{k,d}.
Thus one can use a coordinatewise optimization procedure, as in Refs. [36][37][38][39], that requires neither the gradient nor the Hessian. The procedure sweeps through all parameters and sets each of them to its locally optimal value x* = π/2 − B_{k,d}. At x* the coordinatewise objective attains its maximum value A_{k,d}. The estimation of A_{k,d} and B_{k,d} is efficient and leads to a simple update rule. For the dth parameter and the kth term we have

θ_d ← θ_d + π/2 − arctan2(f_{k,d}(θ_d), f_{k,d}(θ_d + π/2)),    (2)

where θ_d on the right side is the current value of the parameter. This formula requires evaluating the objective at x = θ_d and x = θ_d + π/2. However, in Appendix C, we show that f_{k,d}(θ_d) = A_{k,d−1} is known from the previous step. Thus the method finds the maxima with a single evaluation, f_{k,d}(θ_d + π/2).
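The two-point rule can be sanity-checked on a synthetic sinusoidal objective (a toy stand-in for f_{k,d}; the amplitude, phase, and starting point are randomly chosen assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.uniform(0.5, 2.0), rng.uniform(-np.pi, np.pi)

def f(x):  # toy coordinatewise objective in sinusoidal form
    return A * np.sin(x + B)

phi = rng.uniform(-np.pi, np.pi)  # current value of the parameter
# amplitude and maximizer from two evaluations a quarter period apart
A_est = np.hypot(f(phi), f(phi + np.pi / 2))
x_star = phi + np.pi / 2 - np.arctan2(f(phi), f(phi + np.pi / 2))

print(np.isclose(A_est, A), np.isclose(f(x_star), A))
```

Since f(φ) = A sin(φ + B) and f(φ + π/2) = A cos(φ + B), the two evaluations recover both the amplitude and the maximizer analytically, with no line search.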

D. Hardware-efficient implementation
In general, our method optimizes a new PQC for each Trotter term. Let us denote the (k−1)th quantum state as |ψ_{k−1}⟩ = V |0⟩, and the kth state as |ψ(θ)⟩ = U |0⟩, where V and U are PQCs. The objective in Eq. (1) and the update rule in Eq. (2) can be estimated using a well-known primitive called the Hadamard test.
The Hadamard test can be challenging to execute on hardware when U and V are unrelated quantum circuits due to the potentially large number of controlled operations. The process can be largely simplified if U and V differ only locally, e.g., in a few gates or circuit regions. For instance, if one circuit can be efficiently transformed into the other using local adjoints of gates, then the Hadamard test consists of a rather simple quantum circuit. Figure 4 (a) shows an example with five variational parameters. Gates u_i represent U. Controlled gates ũ_i represent the local transformations taking U to V. The subspaces where the ancilla (top qubit) is |0⟩ and |1⟩ contain U |0⟩ and V |0⟩, respectively. Finally, a measurement yields the quantity in Eq. (1). This is discussed in detail in Appendix D.
Clearly, we only need to optimize the blocks in the causal cone of the Trotter term; see Fig. 3 (b). We call this approach the cone update. In cone update, the resulting Hadamard-test circuits involve O(N_b N_p) controlled gates, where N_b is the number of blocks in the causal cone, and N_p is the number of parameters in a block.
The closer U and V remain during the execution of the algorithm, the fewer controlled gates are required for the Hadamard test. This suggests a systematic way to reduce hardware requirements by introducing approximations to the objective. The first level of approximation consists of replacing |ψ_{k−1}⟩ in Eq. (1) with the current variational state |ψ(θ)⟩ after a block has been updated. This way, U and V differ by O(N_p) parameters, greatly simplifying the Hadamard test. Note that the replacement is performed once for each of the N_b blocks in the causal cone. Hence, to effectively simulate time step τ we use a time step τ/N_b in the objective. The division of τ by N_b is not necessary when τ is a hyperparameter as, e.g., in imaginary time evolution. We call this approach the block update.
The second level of approximation consists of replacing |ψ_{k−1}⟩ in the objective with |ψ(θ)⟩ after an angle has been updated. This guarantees that U and V differ by one parameter at all times. Since the replacement is done N_b N_p times, we use a time step τ/(N_b N_p) in the objective. It is not necessary to divide τ by N_b N_p when τ is a hyperparameter, e.g., in imaginary time evolution. We call this approach the angle update. Figure 4 (b) shows an example with five parameters where the Hadamard test requires a single controlled gate. angle update can be further simplified by replacing the indirect measurement with direct ones [52,53]. This removes the need for the ancillary qubit and the controlled gate, resulting in the circuit shown in Fig. 4 (c). The coordinatewise update rules for this case are derived in Appendix E.

TABLE I. Characteristics of cone, block, and angle update along with the tdvp methods of Ref. [16]: circuits per sweep, whether matrix inversion is required, and controlled gates per circuit. A sweep consists of updating all parameters inside the causal cone of a Trotter term exactly once.

A. Error analysis
In our methods there exist two sources of error. Firstly, there is a Trotter error resulting from the Trotter product formula that is used to split the time evolution operator. Secondly, there is a variational error due to the limitations of our PQC Ansatz and optimization methods. Both errors can be quantified and systematically decreased.
The Trotter error is determined by the order of the Trotter product formula and can be decreased by using higher-order formulas [51]. A pth-order Trotter product formula has an error scaling as O(τ^{p+1}) per time step τ for sufficiently small values of τ. The total Trotter error for the complete time evolution over N = t/τ time steps then scales as O(τ^p).
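The per-step scaling can be observed directly on a single qubit with Hamiltonian X̂ + Ẑ (our toy example, not the paper's model), estimating the order p from how the one-step error shrinks when τ is halved:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def exp_pauli(a, P):  # e^{-i a P} for P with P^2 = 1
    return np.cos(a) * I2 - 1j * np.sin(a) * P

def exact(tau):  # e^{-i tau (X + Z)}; note (X + Z)^2 = 2 * 1
    s = np.sqrt(2.0)
    return np.cos(s * tau) * I2 - 1j * np.sin(s * tau) * (X + Z) / s

def err1(tau):  # first-order step: e^{-i tau X} e^{-i tau Z}
    return np.linalg.norm(exp_pauli(tau, X) @ exp_pauli(tau, Z) - exact(tau))

def err2(tau):  # second-order (symmetric) step
    U = exp_pauli(tau / 2, X) @ exp_pauli(tau, Z) @ exp_pauli(tau / 2, X)
    return np.linalg.norm(U - exact(tau))

tau = 1e-2
# error per step ~ tau^{p+1}, so halving tau shrinks it by ~2^{p+1}
p1 = np.log2(err1(tau) / err1(tau / 2)) - 1
p2 = np.log2(err2(tau) / err2(tau / 2)) - 1
print(round(float(p1)), round(float(p2)))
```

The estimated orders come out close to p = 1 for the plain product and p = 2 for the symmetric splitting, matching the stated O(τ^{p+1}) per-step scaling.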
The variational error is determined by the expressive power of the Ansatz and can be decreased by adding new blocks to the PQC and performing more optimization sweeps. A sweep consists of updating all parameters inside the causal cone of a Trotter term exactly once. For the kth Trotter term, the variational error is simply the objective function in Eq. (1).
We obtain numerical evidence that our method can benefit from a second-order Trotter product formula. We perform real time evolution of the |0⟩ state under the quantum Ising chain with J = 1, λ = 0.2, n = 6, and t = 2. In order to achieve a small variational error we use our most accurate algorithm, cone update, with six sweeps. Figure 5 shows the squared Euclidean distance between the variational state |ψ(θ)⟩ and the exact evolved state |ψ⟩ as a function of the time step τ used in the algorithm. The second-order method resulted in smaller errors for all values of τ.
Recently, machine learning heuristics have been employed to assist time evolution algorithms and reduce both the variational and Trotter errors [54,55].

B. Comparison of hardware requirements
We compare the hardware requirements of our update methods in Table I, where N_b is the number of blocks in the causal cone and N_p is the number of parameters per block, and we assume that every block has the same number of parameters. We conclude that angle update requires the smallest amount of resources per sweep and no matrix inversion, which makes it a stable and efficient algorithm for simulating time evolution. block update interpolates between angle and cone update in a natural way.

FIG. 5. Error as a function of the time step τ for hardware-efficient real time evolution with total time t = 2 using first- and second-order Trotter product formulas. We observe that for the larger values of τ, i.e., fewer time steps making up the total evolution, the Trotter error is larger than the variational error. For the smaller values of τ, i.e., more time steps in the entire evolution, the variational error accumulates over more time steps and rapidly becomes larger than the Trotter error. This observation is consistent with similar studies that were carried out for matrix product states [56].
Table I also shows the hardware requirements for the tdvp methods of Ref. [16]. Compared with these tdvp methods, our update procedures have the advantage that they do not need matrix inversion. Matrix inversion is numerically unstable when the condition number of the matrix is large, and small errors in the matrix can become large errors in the matrix inverse [57]. The tdvp methods of Ref. [16] compute the matrix elements from mean values over several measurements. For a desired accuracy ε per matrix element, this procedure requires O(1/ε²) measurements.

FIG. 7. All the causal cones enclose six qubits at most; thus we are attacking a problem that is larger than the quantum hardware. The y axis is the energy relative to the ground state, where E(θ) is the variational state energy and Egs is the ground state energy. The time step is a hyperparameter, which we set at τ = 0.1 in all cases. (a) The transverse field is set to λ = 1, where the corresponding infinite system is critical. (b) The transverse field is set to λ = 4, where the corresponding infinite system is noncritical.

To determine the accuracy of the time-evolved parameters after a tdvp update,
we need to take into account the condition number κ of the matrix because the tdvp update needs the matrix inverse. We show in Appendix F that for a desired accuracy ε in the time-evolved parameters the required number of measurements scales as O(κ²/ε²) in the worst case. Therefore, for ill-conditioned matrices, the tdvp methods of Ref. [16] may need to compute matrix elements very accurately and require many measurements.
We obtain some numerical evidence by computing the median condition number κ̃ of 100 random initializations of our Ansatz. To avoid instabilities, we ignore singular values smaller than 10^-7. With 15 parameters we have κ̃ ≈ 16, with 45 parameters we have κ̃ ≈ 5001, and with 75 parameters we have κ̃ ≈ 17085. This rapid increase of the condition number as a function of the number of variational parameters indicates a challenging scaling of the tdvp cost for our choice of Ansatz.
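A purely classical toy model (our own; the diagonal matrix and noise level are illustrative assumptions) shows the amplification effect: perturbing a matrix with entrywise noise of size ε before solving the linear system produces solution errors that grow with the condition number κ:

```python
import numpy as np

rng = np.random.default_rng(1)

def inversion_error(kappa, eps, trials=200):
    # 2x2 system with condition number kappa; eps models measurement noise
    M = np.diag([1.0, 1.0 / kappa])
    b = np.array([1.0, 1.0])
    x = np.linalg.solve(M, b)
    errs = []
    for _ in range(trials):
        x_noisy = np.linalg.solve(M + eps * rng.standard_normal((2, 2)), b)
        errs.append(np.linalg.norm(x_noisy - x) / np.linalg.norm(x))
    return float(np.median(errs))

print(inversion_error(10, 1e-4), inversion_error(1000, 1e-4))
```

With the same noise level, the median relative error of the solution is orders of magnitude larger for the ill-conditioned matrix, which is the mechanism behind the O(κ²/ε²) measurement cost discussed above.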

C. Numerical experiments
We present numerical experiments that validate our methods (details are provided in Appendix G). First, we look into the accuracy of cone, block, and angle update. This is important, in particular, for real time evolution where the variational state should closely track the true evolved state at all times. In other words, the objective in Eq. (1) should be very close to one for any Trotter term. It is interesting to analyze the accuracy as a function of the number of sweeps.
We use the Ansatz in Fig. 3 (a) with depth D = 2, periodic boundary conditions, and randomly initialized parameters. Regardless of the number of qubits n, there are only two possible causal cones: a four-qubit cone enclosing N_b = 3 blocks, if the term is located in front of a block, and a six-qubit cone enclosing N_b = 5 blocks, if the term is located in front of two blocks (the latter case is shown in Fig. 3 (b)). We select random Trotter terms of the form exp(−(i/10) σ̂_j ⊗ σ̂_{j+1}), where σ̂ is chosen randomly from {1, σ̂^x, σ̂^y, σ̂^z}, and we perform cone update with a time step of 0.1. Recall that for real time evolution angle and block update require a corrected time step. For block update we use a step of 0.1/(N_s N_b) and for angle update we use 0.1/(N_s N_b N_p), where N_s is the number of sweeps. Figure 6 (a) shows the mean objective and standard deviation for the case of four-qubit cones. cone update outperforms the others and converges to the optimal value of one, while block and angle update do not benefit from an increased number of sweeps. The six-qubit case is shown in Fig. 6 (b), where a similar behavior is observed. However, none of the methods converge to the optimal value of one, reflecting the limitations of the shallow PQC Ansatz.
Second, we verify that cone, block, and angle update can find ground states via imaginary time evolution. We use the 1d quantum Ising Hamiltonian

Ĥ = −J Σ_j σ̂^z_j σ̂^z_{j+1} − λ Σ_j σ̂^x_j.

For n = 8 qubits we use the PQC Ansatz in Fig. 3 (a) with depth D = 2 and open boundary conditions. Here the time step plays the role of a hyperparameter because we are interested in finding the ground state as quickly as possible. We use τ = 0.1 in all cases. Figure 7 (a) shows the mean energy and standard deviation obtained in 20 time steps for J = λ = 1 in the Hamiltonian, i.e., for the critical point of the corresponding infinite system. For our choice of hyperparameter τ, all methods reach similar low energies. Figure 7 (b) shows the results for J = 1 and λ = 4, i.e., where the corresponding infinite system is far from the critical point. All methods converge rapidly, producing states that are very close to the ground state. We emphasize that the circuits for angle update are much simpler than those for cone update.

FIG. 8. The state overlap can alternatively be computed using this shorter-depth circuit representing the destructive Swap test [58,59].
Finally, we test our algorithms on larger instances of up to 12 qubits. The experiments and results are shown in Fig. 1 and explained in its caption.

IV. DISCUSSION
Variational simulations of time evolution can be performed without inverting a possibly ill-conditioned matrix at each time step. To this end, we derived suitable algorithms whose hardware requirements can be adjusted to match the experimental capabilities.
One of the main applications of imaginary time evolution is to find ground states. Our most efficient algorithm, angle update, performed remarkably well at this task. In practice, once angle update converges one could switch to more demanding algorithms, such as block or cone update, in order to fine-tune the result.
For real time evolution, the task is to simulate the time-dependent Schrödinger equation. We presented numerical evidence that block and cone update achieve the high accuracy required. We also expect our angle update to be useful for specific applications, such as the computation of steady states, where accuracy per time step is not crucial.
A recent publication [21] contains simulations of real time evolution based on the maximization of the state overlap.
As illustrated in Fig. 8, combining this method with our cone strategy leads to a promising hardware-efficient algorithm that can additionally make use of hardware-efficient overlap computation [58,59]. Here the coordinatewise update rules are already known [36][37][38][39], as the overlap maximization is equivalent to the expectation minimization for the Hermitian operator M = −|0⟩⟨0|.
The last key aspect of this work is the use of causal cones to enable simulations of finite systems larger than the size of the underlying quantum hardware. Causal cones are also an ingredient in the construction of noise-resilient quantum circuits [34,63,64]. We envision that such techniques that detach the logical model from some of the hardware limitations will ultimately enable us to attack large problems and obtain a quantum advantage.

V. ACKNOWLEDGEMENTS
We thank Nathan Fitzpatrick, David Amaro, and Carlo Modica for helpful discussions. We gratefully acknowledge the cloud computing resources received from the 'Microsoft for Startups' program.

Appendix A: Variational simulation of time evolution
In this Appendix, we detail our method for the variational simulation of time evolution. To keep the discussion general, we consider an arbitrary complex time z ∈ C. Later we will specialize to purely real time and purely imaginary time evolution.
We want to simulate the time evolution operator e^{−izĤ} applied to an initial state |ψ⟩ of n qubits. We assume that the Hamiltonian is given in the general form Ĥ = Σ_{k=1}^K h_k Ĥ_k, where Ĥ_k ∈ {1, X̂, Ŷ, Ẑ}^{⊗n} is a tensor product of Pauli operators, h_k is a real number, and K ∼ O(poly(n)). There can be terms Ĥ_k in Ĥ that do not commute with each other.
A commonly used technique to simplify the problem consists of expanding the time evolution operator into a product. A product formula, such as the Trotter formula, produces a sequence of short-time evolutions which approximates the full evolution. Let us apply the first-order Trotter product formula for N ∼ O(poly(n)) time steps of size ζ = z/N:

e^{−izĤ} |ψ⟩ ≈ (e^{−iζ h_K Ĥ_K} ⋯ e^{−iζ h_1 Ĥ_1})^N |ψ⟩
            = e^{−iζ h_K Ĥ_{K,N}} ⋯ e^{−iζ h_1 Ĥ_{1,N}} ⋯ e^{−iζ h_K Ĥ_{K,1}} ⋯ e^{−iζ h_1 Ĥ_{1,1}} |ψ⟩.    (A1)
In the second line, Ĥ_{k,m} has an additional subscript indicating the mth application of the kth term. Now assume we are able to obtain a variational approximation to the first operation:

|ψ(θ*_{1,1})⟩ ≈ e^{−iζ h_1 Ĥ_{1,1}} |ψ⟩,    (A2)

where θ_{1,1} is the vector of variational parameters and θ*_{1,1} indicates their optimal value. Then we can substitute this in Eq. (A1) and proceed with a variational approximation to the second operation:

|ψ(θ*_{2,1})⟩ ≈ e^{−iζ h_2 Ĥ_{2,1}} |ψ(θ*_{1,1})⟩,    (A3)

where again we assumed the optimal value for the variational parameters. Iterating the above procedure for all the NK terms we obtain an approximation to the full time evolution, e^{−izĤ} |ψ⟩ ≈ |ψ(θ*_{K,N})⟩. To simplify the notation let us condense the indexes k and m into a single index l, and let us write |ψ(θ*_l)⟩ = |ψ_l⟩ whenever the parameters have been optimized.
In order to find the variational approximation at step l we use the squared Euclidean distance as a cost function:

C_l(θ_l) = ∥ |ψ(θ_l)⟩ − e^{−iζ h_l Ĥ_l} |ψ_{l−1}⟩ ∥²
         = ⟨ψ(θ_l)|ψ(θ_l)⟩ + ⟨ψ_{l−1}| e^{i(ζ̄−ζ) h_l Ĥ_l} |ψ_{l−1}⟩ − 2 Re(⟨ψ_{l−1}| e^{i ζ̄ h_l Ĥ_l} |ψ(θ_l)⟩).    (A4)

Here Re(·) denotes the real part of a complex number, and ζ̄ denotes the complex conjugate of ζ. We have assumed an Ansatz for which ⟨ψ(θ_l)|ψ(θ_l)⟩ is constant. This is the case, for example, if the Ansatz is implemented by a PQC. We have also used that Ĥ_l is Hermitian.
The minimization of C_l(θ_l) is equivalent to the maximization of the following objective function:

F_l(θ_l) = Re(⟨ψ_{l−1}| e^{i ζ̄ h_l Ĥ_l} |ψ(θ_l)⟩).    (A5)

Let us now specialize to the two cases of interest. For real time evolution we write z ≡ t, where t ∈ ℝ is the total time, and ζ ≡ τ = t/N is the time step. Since the terms Ĥ_l are tensor products of Pauli operators, we have Ĥ_l² = 1. Using this property and the definition of the matrix exponential, e^A = Σ_{n=0}^∞ A^n/n!, it can be verified that e^{iτ h_l Ĥ_l} = cos(τ h_l) 1 + i sin(τ h_l) Ĥ_l. Plugging this in the objective function, we obtain:

F_l(θ_l) = cos(τ h_l) Re(⟨ψ_{l−1}|ψ(θ_l)⟩) − sin(τ h_l) Im(⟨ψ_{l−1}| Ĥ_l |ψ(θ_l)⟩).    (A6)

For imaginary time, we write z ≡ −it, where t ∈ ℝ is the total time, and ζ ≡ −iτ = −it/N is the time step. Following the same argument as above, it can be verified that e^{−τ h_l Ĥ_l} = cosh(τ h_l) 1 − sinh(τ h_l) Ĥ_l. Plugging this in the objective function we obtain:

F_l(θ_l) = cosh(τ h_l) Re(⟨ψ_{l−1}|ψ(θ_l)⟩) − sinh(τ h_l) Re(⟨ψ_{l−1}| Ĥ_l |ψ(θ_l)⟩).    (A7)

In Appendix B, we show that the coordinatewise version of Eq. (A5) has a sinusoidal form. This fact is inherited by Eqs. (A6) and (A7), and is exploited to design the optimization algorithm in Appendix C. In Appendix D, we present quantum circuits for the estimation of the objectives.
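The two exponential identities used in this appendix can be verified numerically for a concrete Pauli string (X̂ ⊗ Ẑ here, an arbitrary choice), comparing them against a generic matrix exponential computed by eigendecomposition:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H_l = np.kron(X, Z)  # a Pauli string, so H_l^2 = 1
I4 = np.eye(4)

w, P = np.linalg.eigh(H_l)
def exp_c(c):  # e^{c H_l} via eigendecomposition, for any complex c
    return P @ np.diag(np.exp(c * w)) @ P.conj().T

tau, h = 0.3, 0.7
real_time = np.cos(tau * h) * I4 + 1j * np.sin(tau * h) * H_l  # e^{ i tau h H_l}
imag_time = np.cosh(tau * h) * I4 - np.sinh(tau * h) * H_l     # e^{-tau h H_l}
print(np.allclose(real_time, exp_c(1j * tau * h)),
      np.allclose(imag_time, exp_c(-tau * h)))
```

Both identities hold because every Pauli string squares to the identity, so the exponential series splits into even (cos/cosh) and odd (sin/sinh) parts.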

Appendix B: Sinusoidal form of the coordinatewise objective function
In Refs. [36][37][38][39], the authors showed that for certain standard PQCs the expectation tr[M U ρ U†] as a function of a single parameter has sinusoidal form. This yields an efficient coordinatewise optimization algorithm that does not require explicit computation of the gradient or the Hessian. Here we present a similar derivation for objective functions of the form Re(tr[M U ρ V†]), where U is a PQC and V is a fixed circuit. Note that there is nothing preventing us from parametrizing V and carrying out the same derivation.
Let us consider a circuit of the form U(θ) = U_D ⋯ U_1, where each gate is either fixed, e.g., a CNOT, or parameterized as U_d = exp(−iθ_d G_d), where θ_d ∈ (−π, π] and G_d is a Hermitian and unitary matrix such that G_d² = 1. For example, tensor products of Pauli matrices, G_d ∈ {1, X̂, Ŷ, Ẑ}^{⊗n}, are suitable choices. For the parameterized gates, we use the definition of the matrix exponential to get U_d = cos(θ_d) 1 − i sin(θ_d) G_d. Without loss of generality, let us consider a pure initial state ρ = |0⟩⟨0|, where |0⟩ ≡ |0⟩^{⊗n}. Expanding the objective function we have Re(⟨0| V† M U_D ⋯ U_d ⋯ U_1 |0⟩). To express this as a function of the single parameter θ_d, we simplify the notation by absorbing all gates before U_d into a unitary U_B, and all gates after U_d into a unitary U_A. Using this notation, we write:

f_d(x) = Re(⟨0| V† M U_A U_d(x) U_B |0⟩).    (B1)

Noting that U_d(0) = 1 and U_d(π/2) = −i G_d, we rewrite the above as:

f_d(x) = cos(x) f_d(0) + sin(x) f_d(π/2).    (B2)

We now use the harmonic addition theorem, a cos(x) + b sin(x) = √(a² + b²) sin(x + arctan(a/b)), to obtain:

f_d(x) = √(f_d(0)² + f_d(π/2)²) sin(x + arctan2(f_d(0), f_d(π/2))) ≡ A sin(x + B).    (B3)

The objective function as a function of a single parameter has sinusoidal form with amplitude A, phase B, and period 2π. Note that we must use the arctan2 function in order to correctly handle the signs of the numerator and denominator, as well as the case where the denominator is zero.
There is nothing special about the evaluations at 0 and π/2 in Eq. (B3). Indeed, we can estimate A and B from:

A = √(f_d(φ)² + f_d(φ + π/2)²),    (B4)
B = arctan2(f_d(φ), f_d(φ + π/2)) − φ,    (B5)

for any φ ∈ ℝ. From the graph of the sine function, it is easy to locate the maxima at θ*_d = π/2 − B + 2πk for all k ∈ ℤ. Taking B from Eq. (B5), we obtain:

θ*_d = φ + π/2 − arctan2(f_d(φ), f_d(φ + π/2)) + 2πk.    (B6)

In practice, we choose k such that θ*_d ∈ (−π, π]. A similar derivation can be done for objective functions of the form Im(tr[M U ρ V†]). For the dth parameter, we write f_d(x) = Im(⟨0| V† M U_A U_d U_B |0⟩) and we obtain the maxima exactly as in Eq. (B6). Figure 9 shows the sinusoidal forms for a random choice of U, V, and M on n = 4 qubits.
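The sinusoidal form derived above can be checked numerically with random unitaries U_A, U_B, V and a Pauli-string observable M (all randomly chosen assumptions, here on two qubits for brevity):

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 4  # two qubits

def rand_unitary():
    q, _ = np.linalg.qr(rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim)))
    return q

Z = np.array([[1, 0], [0, -1]], dtype=complex)
G = np.kron(Z, Z)            # Hermitian unitary generator, G^2 = 1
U_A, U_B, V = rand_unitary(), rand_unitary(), rand_unitary()
M = np.kron(Z, np.eye(2))    # observable
zero = np.zeros(dim); zero[0] = 1.0

def f(x):  # Re(<0| V† M U_A U_d(x) U_B |0>) with U_d(x) = cos(x) 1 - i sin(x) G
    U_d = np.cos(x) * np.eye(dim) - 1j * np.sin(x) * G
    return float(np.real(zero @ V.conj().T @ M @ U_A @ U_d @ U_B @ zero))

# sinusoidal form: f(x) = f(0) cos(x) + f(pi/2) sin(x) for every x
xs = rng.uniform(-np.pi, np.pi, size=5)
print(np.allclose([f(x) for x in xs], f(0) * np.cos(xs) + f(np.pi / 2) * np.sin(xs)))
```

The check passes for arbitrary random choices, confirming that two evaluations a quarter period apart fully determine the coordinatewise objective.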

Appendix C: The update rule with a single evaluation
In this Appendix, we take a closer look at the quantity of interest, Eq. (A5), and derive our coordinatewise update rule. Consider the lth term in the Trotter product formula. The coordinatewise objective for the dth parameter is sinusoidal:

f_{l,d}(x) = A_{l,d} sin(x + B_{l,d})
           = √(f_{l,d}(φ)² + f_{l,d}(φ + π/2)²) sin(x + B_{l,d})
           = √(f_{l,d}(φ)² + f_{l,d}(φ + π/2)²) sin(x + arctan2(f_{l,d}(φ), f_{l,d}(φ + π/2)) − φ),    (C1)

where A_{l,d} is the amplitude, B_{l,d} is the phase, and φ ∈ ℝ can be chosen at will. The third line is obtained by applying the results in Eqs. (B3), (B4), and (B5).
Optimizing this objective using Eq. (B6) would require evaluations of f_{l,d}(φ) and f_{l,d}(φ + π/2). However, we can recycle information from previous steps and use a single evaluation. The approach is as follows. Say we have found the maximum θ*_{d−1} for the (d−1)th parameter. At no additional cost, we calculate f_{l,d−1}(θ*_{d−1}) = A_{l,d−1}. Now we move to the dth parameter. Setting φ in Eq. (C1) to the current parameter value, φ = θ_d, we happen to know f_{l,d}(θ_d) = A_{l,d−1}, since the objective at the current parameters has not changed. Hence, we only need to evaluate f_{l,d}(φ + π/2) = f_{l,d}(θ_d + π/2). In summary, we obtain the update rule:

θ_d ← θ_d + π/2 − arctan2(f_{l,d}(θ_d), f_{l,d}(θ_d + π/2)),    (C2)

where θ_d is the current parameter value, and f_{l,d}(θ_d) is known from the previous parameter update. Equation (C2) is our coordinatewise update rule for the variational simulation of time evolution.
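A two-parameter toy circuit (our own construction; the generators and the target unitary V are random assumptions) illustrates the single-evaluation sweep: each coordinate update consumes one new objective evaluation and recycles the previous maximum value:

```python
import numpy as np

rng = np.random.default_rng(3)
dim = 4
Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
G1, G2 = np.kron(Z, np.eye(2)), np.kron(X, X)  # generators with G^2 = 1
V, _ = np.linalg.qr(rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim)))
zero = np.zeros(dim); zero[0] = 1.0

def gate(x, G):  # U_d(x) = cos(x) 1 - i sin(x) G
    return np.cos(x) * np.eye(dim) - 1j * np.sin(x) * G

def f(theta):  # Re(<0| V† U_2(theta_2) U_1(theta_1) |0>)
    return float(np.real(zero @ V.conj().T @ gate(theta[1], G2) @ gate(theta[0], G1) @ zero))

theta = rng.uniform(-np.pi, np.pi, size=2)
start = prev = f(theta)  # evaluated once up front; afterwards recycled
for d in (0, 1):
    shifted = theta.copy(); shifted[d] += np.pi / 2
    fs = f(shifted)                                # the single new evaluation
    theta[d] += np.pi / 2 - np.arctan2(prev, fs)   # update rule (C2)
    prev = np.hypot(prev, fs)                      # new maximum A_d, recycled next round
print(np.isclose(f(theta), prev), f(theta) >= start)
```

After each update the objective sits exactly at the amplitude of its coordinatewise sinusoid, so the sweep never decreases the objective.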
Appendix D: Hadamard test for cone update

In this appendix, we discuss the Hadamard test used in cone update. Recall that the proposed method initializes a PQC at each step and trains it to simulate the action of a short-time evolution operator on the previous variational state. Let us denote the state obtained at the (l − 1)th step as |ψ_{l−1}⟩ = V|0⟩ and the state for the lth step as |ψ(θ_l)⟩ = U|0⟩. Here V and U are PQCs acting on the easy-to-prepare state |0⟩ ≡ |0⟩^⊗n. The Hadamard test can be challenging to execute on existing hardware when U and V are unrelated quantum circuits, due to the potentially large number of controlled operations. cone update uses the fact that U and V differ only locally to simplify the Hadamard test: one circuit can be efficiently transformed into the other using adjoints of gates. To show this, we start from U|0⟩ and add one ancilla qubit in the state |0⟩, which is then acted upon by a Hadamard gate, so that we get (1/√2)(|0⟩ + |1⟩) ⊗ U|0⟩. Now we include the local transformations from U to V as gates controlled by the ancilla qubit. As an example, if U contains a rotation gate R_z(a) and V contains R_z(b) at the same location, then we only need to attach a controlled-R_z(b − a) rotation. The subspaces where the ancilla is |0⟩ and |1⟩ will contain U|0⟩ and V|0⟩, respectively. In other words, the result is

(1/√2)(|0⟩ ⊗ U|0⟩ + |1⟩ ⊗ V|0⟩).

Applying another Hadamard gate to the ancilla qubit gives

(1/2)[|0⟩ ⊗ (U + V)|0⟩ + |1⟩ ⊗ (U − V)|0⟩].

Measuring the expectation of Ẑ ⊗ Ĥ_l, one obtains the real part ⟨Ẑ ⊗ Ĥ_l⟩ = Re(⟨0|V† Ĥ_l U|0⟩). For the imaginary part, we follow the same procedure, but we include a phase gate after the first Hadamard gate. This yields ⟨Ẑ ⊗ Ĥ_l⟩ = Im(⟨0|V† Ĥ_l U|0⟩). With the two circuits just described, we can estimate the objective function in Eq. (A6) for real time evolution and in Eq. (A7) for imaginary time evolution. If the causal cone of Ĥ_l contains N_b blocks, each with N_p parameterized gates, the Hadamard test requires O(N_b N_p) controlled operations.
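The identity behind this measurement can be verified with a small statevector simulation. The sketch below uses random stand-ins for U and V and a Pauli string for Ĥ_l; it checks the algebra of the Hadamard test, not a hardware implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2; dim = 2 ** n

def haar_unitary(dim, rng):
    # Random unitary via QR decomposition of a complex Gaussian matrix.
    q, r = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

U = haar_unitary(dim, rng)
V = haar_unitary(dim, rng)
H_l = np.kron(np.diag([1.0, -1.0]), np.array([[0, 1], [1, 0]]))  # e.g. Z x X, Hermitian
psi0 = np.zeros(dim); psi0[0] = 1.0
Z = np.diag([1.0, -1.0])

# State after the second Hadamard on the ancilla:
# |Phi> = 1/2 [ |0> (U+V)|0> + |1> (U-V)|0> ]
phi = 0.5 * (np.kron(np.array([1.0, 0.0]), (U + V) @ psi0)
             + np.kron(np.array([0.0, 1.0]), (U - V) @ psi0))

# Measuring Z (ancilla) tensored with H_l recovers the real part of the overlap.
expval = np.real(phi.conj() @ np.kron(Z, H_l) @ phi)
target = np.real(psi0.conj() @ V.conj().T @ H_l @ U @ psi0)
assert abs(expval - target) < 1e-10
```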

Appendix E: Hadamard test for angle update
In this appendix, we present the implementation of our method that has the lowest hardware requirements. angle update is efficient in terms of circuit depth, but it still uses an ancilla qubit and a controlled operation, which may be challenging to realize on existing hardware. We can avoid both at the cost of executing additional circuits. This approach uses the methods presented in Refs. [52] and [53] to replace indirect measurements with direct ones.
Recall that angle update consists of replacing the variational state |ψ_{l−1}⟩ = V|0⟩ in the objective function with the variational state |ψ(θ)⟩ = U|0⟩ after each parameter update. This approximation guarantees that the two PQCs V and U differ by just one parameter at all times. Thus we can drop V and express everything in terms of U.
For real time evolution (ζ ≡ τ ∈ R), we start from Eq. (A6). The coordinatewise objective for the dth parameter is given in Eq. (E1). The variable x appears only in the gate denoted by U_d(x), while U_d† is fixed and uses the current parameter value θ_d. We maximize this objective using Eq. (C2). The first evaluation is at the current parameter value, which yields f_l,d,real(θ_d) = cos(τ h_l). The second evaluation is at the shifted parameter value and is slightly more involved. Using U_d(θ_d + π/2) = −iU_d(θ_d)G_d, we can apply the technique from Ref. [53] to write f_l,d,real(θ_d + π/2) as the difference of two expectations. Since G_d is a Pauli operator, we can define projective measurement operators (1/√2)(1 ± G_d). In turn, these can be used to define new operators, which are easily verified to be Hermitian, and a direct calculation expresses f_l,d,real(θ_d + π/2) through their expectation values. In practice, the projective measurement operators (1/√2)(1 ± G_d) correspond to measuring the qubit in the eigenbasis of G_d [53]. Putting everything together, Eq. (E1) is maximized in closed form using two expectations. All quantities are estimated by direct measurements, without using ancilla qubits or controlled gates.

For imaginary time evolution (ζ ≡ −iτ with τ ∈ R), we start from Eq. (A7). The coordinatewise objective for the dth parameter is given in Eq. (E5). Again we maximize this objective using Eq. (C2). The first evaluation is at the current parameter value and yields an expectation computed with the current parameters, as emphasized by its subscript. The second evaluation is at the shifted parameter value. Using U_d(θ_d + π/2) = −iU_d(θ_d)G_d in Eq. (E5), we see that the first term equals zero. For the second term, we use the technique from Ref. [52] to express it as the difference of two expectations; here the subscripts stress the use of a shifted value for the dth parameter. Putting everything together, Eq. (E5) is maximized in closed form using three expectations. The three expectations are estimated by direct measurements, without using ancilla qubits or controlled gates.

Recall that in cone update, we are able to recycle information from previous steps and reduce the circuit count. In angle update, this cannot be done exactly, as the objective function changes after each parameter update. For imaginary time, assuming the change in the objective function value is small, we can approximately recycle information from the previous steps, bringing the total number of circuits to two. For small values of τ, the error of this approximation is proportional to the modification δθ_{d−1} of the previous parameter. A smaller value of τ produces a smaller value of δθ_{d−1}, hence the error of this approximation can be systematically reduced by decreasing τ.
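The core algebraic step above, rewriting a cross term as the difference of two expectations of Hermitian operators built from (1 ± G_d)/√2, can be checked numerically. In this sketch, M is a generic Hermitian matrix standing in for the operator sandwiched between the circuits, and G is an arbitrary Pauli string; both are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2; dim = 2 ** n
X = np.array([[0, 1], [1, 0]]); Z = np.diag([1.0, -1.0])
G = np.kron(X, Z)                       # Pauli generator with G^2 = 1

# Random Hermitian observable and random normalized state.
R = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
M = R + R.conj().T
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)

# Operators built from (1 +/- G)/sqrt(2); the sandwiched operators are Hermitian.
Pp = (np.eye(dim) + G) / np.sqrt(2)
Pm = (np.eye(dim) - G) / np.sqrt(2)
Mp = Pp @ M @ Pp
Mm = Pm @ M @ Pm
assert np.allclose(Mp, Mp.conj().T) and np.allclose(Mm, Mm.conj().T)

# Difference of the two expectations recovers the cross term:
# <M_+> - <M_-> = <G M + M G> = 2 Re <psi| M G |psi>
lhs = np.real(psi.conj() @ (Mp - Mm) @ psi)
rhs = 2 * np.real(psi.conj() @ M @ G @ psi)
assert abs(lhs - rhs) < 1e-10
```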
Appendix F: The tdvp equations and their measurement cost

Our starting point is the Schrödinger equation for the variational state,

i (d/dt)|ψ(θ)⟩ = Ĥ|ψ(θ)⟩,   (F1)

where we have set ℏ = 1. The right-hand side of equation (F1) may leave the variational space of |ψ(θ)⟩, which is created by all possible choices for θ. To stay in the variational space, we minimize the squared Euclidean distance

dist_TDVP = ‖(d/dt)|ψ(θ)⟩ + iĤ|ψ(θ)⟩‖².   (F2)

Using the chain rule

(d/dt)|ψ(θ)⟩ = Σ_j (∂|ψ(θ)⟩/∂θ_j)(dθ_j/dt),   (F3)

and the definitions

A_{j,k} = ⟨∂_{θ_j}ψ(θ)|∂_{θ_k}ψ(θ)⟩,   B_j = dθ_j/dt,   C_j = ⟨∂_{θ_j}ψ(θ)|Ĥ|ψ(θ)⟩,   (F4)

we rewrite Eq. (F2) as

dist_TDVP = Σ_{j,k} B̄_j A_{j,k} B_k − 2 Σ_j Im(B̄_j C_j) + ⟨ψ(θ)|Ĥ²|ψ(θ)⟩.   (F5)

In a PQC Ansatz, the variational parameters are rotation angles. Thus, we now restrict θ_j to be real and obtain B_j that are also real, i.e., B̄_j = B_j in the equation above. The minimum of this expression in terms of the B_j can be determined by taking the derivatives and equating them to zero:

∂dist_TDVP/∂B_j = Σ_k 2 Re(A_{j,k})B_k − 2 Im(C_j) = 0.   (F6)
This is equivalent to

Σ_k Re(A_{j,k})B_k = Im(C_j).   (F7)
Notice that Eq. (F7) can be written as a matrix-vector equation by collecting the Re(A_{j,k}) into a matrix A, the B_k into a vector B, and the Im(C_j) into a vector C. This leads to the final tdvp equation

A B = C.   (F8)

This is an equation for the time dependence of the parameters in our PQC Ansatz, since B = dθ/dt. Therefore, tdvp replaces the original Schrödinger equation (F1) with a new equation (F8) that time-evolves the variational parameters directly. Although we focused here on real time evolution, other applications such as imaginary time evolution lead to similar systems of linear equations [16].
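A minimal classical sketch of Eq. (F8) for a hypothetical two-parameter, single-qubit Ansatz reads as follows; finite differences stand in for the circuit-based estimates of the elements of A and C:

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy Ansatz (assumed for illustration): |psi(theta)> = Rz(t2) Rx(t1) |0>
X = np.array([[0, 1], [1, 0]]); Z = np.diag([1.0, -1.0])
H = 0.3 * X + 0.7 * Z                  # toy Hamiltonian

def state(theta):
    Rx = np.cos(theta[0]) * np.eye(2) - 1j * np.sin(theta[0]) * X
    Rz = np.cos(theta[1]) * np.eye(2) - 1j * np.sin(theta[1]) * Z
    return Rz @ Rx @ np.array([1.0, 0.0])

def derivs(theta, eps=1e-6):
    # Central finite differences for d|psi>/d theta_j, a classical stand-in
    # for the quantum-circuit estimates of these derivatives.
    return [(state(theta + eps * e) - state(theta - eps * e)) / (2 * eps)
            for e in np.eye(len(theta))]

def tdvp_rhs(theta):
    psi = state(theta); d = derivs(theta)
    A = np.array([[np.real(di.conj() @ dk) for dk in d] for di in d])   # Re A_{j,k}
    C = np.array([np.imag(di.conj() @ H @ psi) for di in d])            # Im C_j
    return np.linalg.lstsq(A, C, rcond=None)[0]                         # B = dtheta/dt

# One explicit Euler step of size dt evolves the parameters directly (Eq. F8).
theta = rng.uniform(-1, 1, 2); dt = 0.01
theta_new = theta + dt * tdvp_rhs(theta)
```

Using least squares rather than a direct inverse is a common way to handle a (nearly) singular A; the measurement-noise sensitivity of this inversion is exactly the issue analyzed next.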
To quantify the accuracy of our solution vector B in Eq. (F8), it is important to emphasize that the matrix A and the vector C are constructed from a finite number of measurements on a quantum computer and therefore have a finite precision [15,16]. The tdvp algorithm determines the individual elements of A and C as mean values over m measurements of specific quantum circuits, and this mean-value computation carries the error ε_MC of classical Monte Carlo sampling, scaling like O(1/√m). Therefore, using a number of measurements m for each element of the matrix A and vector C, both are accurate only up to an error scaling like O(1/√m). To analyze the effect of this error in A on the solution of Eq. (F8), we replace A by A + δA, where now A represents the exact matrix and δA its error. Similarly, we replace B by B + δB, so that (A + δA)(B + δB) = C. Using AB = C and ignoring the second-order term δAδB, we obtain δB = −A⁻¹δA B and thus ‖δB‖/‖B‖ ≤ ‖A‖‖A⁻¹‖ · ‖δA‖/‖A‖, where ‖·‖ is a norm. We observe that ‖A‖‖A⁻¹‖ is the condition number, κ = ‖A‖‖A⁻¹‖ = σ_max/σ_min, where σ_max denotes the largest and σ_min the smallest singular value of A. We also observe that ‖δA‖/‖A‖ is the relative error of A, which scales like O(1/√m). Therefore, we have obtained an upper bound for the relative accuracy of B:

‖δB‖/‖B‖ ≤ κ O(1/√m).   (F10)

Equivalently, Eq. (F10) states that for a desired accuracy ε in B, we need the number of measurements m to scale like O(κ²/ε²). The same scaling is obtained for finite-precision C by repeating the above calculation with C + δC. We conclude that the matrix inversion required in the original tdvp methods [15,16] leads to a computational cost scaling like O(κ²/ε²) for accuracy ε. Compared with the standard measurement error O(1/ε²), the additional factor of κ² can be computationally challenging, especially for ill-conditioned matrices A.
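The κ amplification in Eq. (F10) is easy to reproduce classically. The sketch below builds an ill-conditioned symmetric A with condition number 10⁴ (an arbitrary test spectrum, not measured data), perturbs it at the shot-noise scale 1/√m, and compares the resulting error in B against the first-order bound:

```python
import numpy as np

rng = np.random.default_rng(5)
dim = 6

# Symmetric test matrix A with a prescribed, ill-conditioned spectrum.
q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
svals = np.array([1.0, 0.5, 0.2, 0.1, 0.01, 1e-4])
A = q @ np.diag(svals) @ q.T
kappa = svals.max() / svals.min()      # condition number = 10^4

B = rng.normal(size=dim)
C = A @ B                              # exact linear system A B = C

# Statistical noise of scale 1/sqrt(m) on each element of A,
# mimicking m measurements per matrix element.
m = 10 ** 12
dA = rng.normal(scale=1 / np.sqrt(m), size=(dim, dim))
B_noisy = np.linalg.solve(A + dA, C)

rel_err = np.linalg.norm(B_noisy - B) / np.linalg.norm(B)
bound = kappa * np.linalg.norm(dA, 2) / np.linalg.norm(A, 2)
# Eq. (F10): the relative error in B is amplified by kappa relative to the
# error in A (up to higher-order corrections in dA, hence the small slack).
assert rel_err <= 1.1 * bound
```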