Fast and differentiable simulation of driven quantum systems

The controls enacting logical operations on quantum systems are described by time-dependent Hamiltonians that often include rapid oscillations. In order to accurately capture the resulting time dynamics in numerical simulations, a very small integration time step is required, which can severely impact the simulation run-time. Here, we introduce a semi-analytic method based on the Dyson expansion that allows us to time-evolve driven quantum systems much faster than standard numerical integrators. This solver, which we name Dysolve, efficiently captures the effect of the highly oscillatory terms in the system Hamiltonian, significantly reducing the simulation's run time as well as its sensitivity to the time-step size. Furthermore, this solver provides the exact derivative of the time-evolution operator with respect to the drive amplitudes. This key feature allows for optimal control in the limit of strong drives and goes beyond common pulse-optimization approaches that rely on rotating-wave approximations. As an illustration of our method, we show results of the optimization of a two-qubit gate using transmon qubits in the circuit QED architecture.

I. INTRODUCTION
High-fidelity logical gates are paramount to the realization of useful quantum computation. It is important in the development of these gates that they be simulated with great precision to ensure that they meet the particularly strict requirements for fault-tolerant quantum computation. Several techniques are used for simulating the time dynamics of quantum devices, including dynamical solvers such as Runge-Kutta integrators and direct matrix exponentiation [1,2]. However, capturing the full time dynamics with the necessary accuracy requires integration methods with a sufficiently small time step. As a result, simulations can be computationally very expensive, even for relatively simple cases such as optimizing a two-qubit gate.
To ensure that simulations of quantum systems are feasible, approximations must be made. For example, a common approximation is to neglect counter-rotating terms within the rotating-wave approximation (RWA), which greatly reduces the complexity of the simulation whilst capturing the dominant dynamics. This approximation renders the Hamiltonian time independent, which can then simply be exponentiated to obtain the propagator. However, effects such as the Bloch-Siegert shift, which do not appear under a RWA, need to be taken into account to accurately model the system [3]. In addition, many gate-optimization methods, such as GRAPE, require the calculation of gradients, something which greatly adds to the complexity of the numerical calculations [4]. More specifically, when including the effects of the counter-rotating terms, there exists no simple derivative of time-ordered unitaries with respect to the drive amplitudes, and one must resort to approximating the gradients. Consequently, this approach may not converge to the optimal solution, which is problematic when targeting very high-fidelity gates.
In this work, we develop an algorithm based on a Dyson series expansion of the time-ordered problem that addresses all of the above difficulties. In this approach, which we call Dysolve, the time-ordered unitary evolution operator is written as a product of time-independent operators which are weighted by the drive amplitudes and dynamical phases. This algorithm captures the full fast-oscillatory dynamics irrespective of the integration step size, thereby decreasing the complexity of the numerical problem. This also greatly decreases the simulation time in comparison to traditional integration-based solvers without loss of numerical precision. Importantly, this approach trivializes the derivative with respect to the drive strength, which can be calculated to an accuracy equivalent to the order of the Dyson series. Moreover, this approach is compatible with non-Hermitian dynamics, allowing for the simulation of open quantum systems.
We begin by introducing our approach in Sec. II in the case of a single, sinusoidal drive with a constant amplitude, and then extend the formalism to the case of multiple drives, accounting for filtering effects on envelope functions. We then define the Dysolve algorithm in Sec. III, and demonstrate its application to driven quantum systems with random drive envelopes. We proceed to apply our algorithm to the GRAPE optimization routine in Sec. IV, and show as an example optimized two-qubit gates in the circuit QED architecture.

II. DYSON SERIES EXPANSION

A. Model

We consider a quantum system subject to a single sinusoidal drive term,

Ĥ(t) = Ĥ₀ + cos(ωt) X̂.    (1)

Here, Ĥ₀ = Σ_k λ_k |k⟩⟨k| is a generic system Hamiltonian expressed in its eigenbasis, while X̂ is a dipole operator that connects the eigenstates of Ĥ₀ and which we take to account for the amplitude of the drive. As will become important later, we note that we are not using the RWA. The propagator under Ĥ(t) for some time increment δt takes the usual form of a time-ordered integral,

Û(t, t + δt) = T exp[ −i ∫_t^{t+δt} dt′ Ĥ(t′) ],    (2)

with T the time-ordering operator. Due to the fast oscillatory cos(ωt) term, evaluating the propagator Û(t, t + δt) is a numerically challenging problem, and there exist few analytic solutions to even the simplest case of a driven two-level qubit [5]. In special cases, one can invoke the RWA to remove the explicit time dependence from the Hamiltonian, thereby allowing for calculation of the propagator from matrix exponentiation directly [6]. However, when the RWA cannot be used due to a breakdown of the approximation or because a greater degree of accuracy is needed, we propose using a truncated Dyson series. Consider Eq. (2), written using the definition of the time-ordering operator,

Û(t, t + δt) = Σ_{n=0}^∞ (−i)ⁿ ∫_t^{t+δt} dt_n ∫_t^{t_n} dt_{n−1} ⋯ ∫_t^{t_2} dt_1 Ĥ(t_n) Ĥ(t_{n−1}) ⋯ Ĥ(t_1).    (3)

It is useful to express this expansion in terms of powers of the drive operator V̂(t) = cos(ωt) X̂, forming the Dyson series

Û(t, t + δt) = Σ_{n=0}^∞ Σ_{ω_n} exp( i Σ_{p=1}^n ω_n[p] t ) Ŝ⁽ⁿ⁾(ω_n, δt),    (4)

where we have defined

Ŝ⁽ⁿ⁾(ω_n, δt) = (−i/2)ⁿ ∫_0^{δt} dτ_n ⋯ ∫_0^{τ_2} dτ_1 e^{−iĤ₀(δt−τ_n)} X̂ e^{−iĤ₀(τ_n−τ_{n−1})} ⋯ X̂ e^{−iĤ₀ τ_1} Π_{p=1}^n e^{i ω_n[p] τ_p}.    (5)

Here, ω_n is an n-vector whose entries are ±ω, originating from the decomposition of the cos(ωt) terms of the control into complex exponentials, and ω_n[p] is the p-th element of ω_n.
The sum over ω n implies a summation over all 2 n possible ω n vectors.
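For bookkeeping, the 2^n frequency vectors ω_n can be enumerated explicitly; a minimal Python sketch (function name is ours):

```python
import itertools
import numpy as np

def omega_vectors(n, omega):
    """Enumerate all 2**n vectors omega_n with entries +/- omega, one per
    choice of e^{+i w t} or e^{-i w t} from each of the n cos(w t) factors."""
    return [omega * np.array(signs)
            for signs in itertools.product((1, -1), repeat=n)]

vecs = omega_vectors(2, 2 * np.pi * 5.0)   # e.g. a 5 GHz drive at order n = 2
```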
Equation (5) also introduces the Dyson series operator Ŝ⁽ⁿ⁾(ω_n, δt), which takes the form

Ŝ⁽ⁿ⁾(ω_n, δt) = Σ_m Ŝ⁽ⁿ⁾_m(ω_n, δt),    (6)

and which corresponds to a summation over all Dyson path operators of n-th order, where m = [m_n, ..., m_0] with each index m_i ranging from 0 to infinity. These n-th order path operators are given in Eq. (7). Each Ŝ⁽ⁿ⁾_m(ω_n, δt) corresponds to a different way of having n contributions from the control X̂ (i.e. a different terminated branch of the tree diagram), with the m_i's labeling the number of applications of Ĥ₀ before the subsequent application of X̂. The total number of operators in the path operator is given by M. To simplify the notation, we introduce an indexing function ι(p), which can be interpreted as the total number of Ĥ₀ and X̂ operators before the p-th application of the control X̂. These definitions can be visualized as in Fig. 1, where the red and blue branches correspond to sets of path operators Ŝ⁽ⁿ⁾_m(ω_n, δt); for example, path operators such as Ŝ⁽¹⁾_{[2,0]}(ω₁, δt) terminate at the points in the branches marked by a star.
Below, we give explicit expressions for these operators to zeroth and first order in the control. Building on these results, we then construct expressions that are easily amenable to efficient numerical evaluation to arbitrary orders.

B. Evaluation to zeroth and first order
The zeroth order corresponds to the leftmost branch of the tree diagram of Fig. 1, for which it is straightforward to obtain an explicit expression. Indeed, the path operator simply takes the form

Ŝ⁽⁰⁾_{[m₀]}(δt) = (−i Ĥ₀ δt)^{m₀} / m₀! .

Summing all of the elements in the branch, we obtain

Ŝ⁽⁰⁾(δt) = e^{−i Ĥ₀ δt},

which corresponds, as expected, to the drift evolution of the Hamiltonian in the absence of drive. In a similar fashion, summing the path operators to first order gives

Ŝ⁽¹⁾(ω₁, δt) = −(i/2) ∫_0^{δt} dτ e^{−i Ĥ₀ (δt−τ)} X̂ e^{−i Ĥ₀ τ} e^{i ω₁[1] τ},

and therefore leads to the following Dyson series operator

Ŝ⁽¹⁾(ω₁, δt) = (1/2) Σ_{j,k} ⟨j| X̂ |k⟩ f(λ_j, λ_k − ω₁[1]) |j⟩⟨k|.    (12)

This operator weights the matrix elements of X̂ by a function f whose inputs are weighted eigenvalues {λ_k} of the free Hamiltonian Ĥ₀, with corresponding eigenstates {|k⟩}, where the second eigenvalue is shifted by the drive frequency ω₁[1] = ±ω. This function is defined as

f(x₀, x₁) = [ e^{−i x₀ δt} − e^{−i x₁ δt} ] / ( x₀ − x₁ ).    (13)

We refer to this function as the first-order weighting function. The above expressions are derived in the Appendix, and are unsurprisingly in exact agreement with first-order time-dependent perturbation theory. Crucially, the Dyson operators depend on the size δt of the time increment, but not on the current time t of the evolution. As a result, for a total evolution time T = P δt, where P is an integer, the set of P incremental evolution operators Û(pδt, (p+1)δt) can be evaluated simultaneously. As will become clearer in the next section, this holds true to arbitrary order.
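The first-order weighting function can be checked numerically. In the sketch below we take f to be the divided difference of e^{−ixδt} over the two shifted eigenvalues (an assumption consistent with the divided-difference connection noted in the conclusion) and compare it against the defining time-ordered integral:

```python
import numpy as np

def f1(x0, x1, dt, tol=1e-9):
    """First-order weighting function, taken here as the divided
    difference of exp(-i x dt) over the two (shifted) eigenvalues."""
    if abs(x0 - x1) < tol:                 # degenerate limit x1 -> x0
        return -1j * dt * np.exp(-1j * x0 * dt)
    return (np.exp(-1j * x0 * dt) - np.exp(-1j * x1 * dt)) / (x0 - x1)

# Cross-check: -i * int_0^dt e^{-i x0 (dt-t)} e^{-i x1 t} dt should equal f1
dt, x0, x1 = 0.1, 3.0, 7.5
ts = np.linspace(0.0, dt, 20001)
vals = np.exp(-1j * x0 * (dt - ts)) * np.exp(-1j * x1 * ts)
integral = np.sum((vals[1:] + vals[:-1]) / 2) * (ts[1] - ts[0])  # trapezoid rule
err = abs(-1j * integral - f1(x0, x1, dt))
```

The degenerate branch is the limit x₁ → x₀ of the divided difference, i.e. the derivative of e^{−ixδt}.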

C. Evaluation to n-th order
To model quantum systems with sufficient accuracy, it is necessary to consider second- and higher-order terms in the Dyson series. This can be done following a similar approach as described above. Indeed, we introduce the n-th order Dyson operator in a similar fashion to Eq. (12),

Ŝ⁽ⁿ⁾(ω_n, δt) = (1/2)ⁿ Σ_{k_n} ⟨k⁽ⁿ⁾| X̂ |k⁽ⁿ⁻¹⁾⟩ ⋯ ⟨k⁽¹⁾| X̂ |k⁽⁰⁾⟩ |k⁽ⁿ⁾⟩⟨k⁽⁰⁾| f( λ_n(k_n) − c(ω_n) ).    (14)

Here, each k_n = (k⁽⁰⁾, k⁽¹⁾, ⋯, k⁽ⁿ⁾) is a set of indices which specify a set of (n + 1) eigenstates {|k⁽ᵐ⁾⟩} of Ĥ₀ with corresponding eigenvalues λ_n(k_n), and we sum over all possible k_n. These eigenvalues are written in vector form:

λ_n(k_n) = ( λ_{k⁽⁰⁾}, λ_{k⁽¹⁾}, ⋯, λ_{k⁽ⁿ⁾} ).    (15)

The sum over k_n implies a summation over all sets of eigenstates which the dipole operator X̂ couples in Eq. (14). Additionally, we have introduced the cumulative vector

c(ω_n) = ( Σ_{p=1}^n ω_n[p], Σ_{p=2}^n ω_n[p], ⋯, ω_n[n], 0 ).    (16)

The n-th order weighting function f(λ_n) entering Eq. (14) can be obtained recursively using the relation (see Appendix A)

f(λ_n) = { f[ g²(λ_n) ∪ λ_n[n] ] − f[ g(λ_n) ] } / ( λ_n[n] − λ_n[n−1] ),    (17)

where g(v_n) returns v_n without its last element, g²(v_n) = g(g(v_n)), and the notation ∪ indicates appending an additional element to a vector such that

g(v_n) ∪ a = ( v_n[1], ⋯, v_n[n−1], a ).    (18)

In the case of degenerate eigenvalues, we simply define Eq. (17) by taking the limit λ_n[n−1] → λ_n[n]. Using Eq. (4) with the above results, we can now form the truncated Dyson series yielding an approximation to the evolution operator to n-th order in the perturbation,

Û_p ≡ Û⁽ⁿ⁾(pδt, (p+1)δt) = Σ_{j=0}^n Σ_{ω_j} exp( i Σ_{q=1}^j ω_j[q] pδt ) Ŝ⁽ʲ⁾(ω_j, δt),    (19)

thus yielding the total evolution operator for time T = P δt to the same order,

Û(0, T) ≈ Û_{P−1} Û_{P−2} ⋯ Û_1 Û_0.    (20)

In this formalism, the time-ordering operator T becomes a trivial operation, since we need only arrange the set of matrices {Û_p} in ascending order from right to left. Using this method, we can thus calculate an arbitrary number of terms in the Dyson series, with each subsequent order increasing the accuracy of the approximation to the propagator Û(0, T).
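The recursion of Eq. (17) is the standard divided-difference recursion, which can be verified against the closed-form product formula (sketch assuming distinct eigenvalues, so the degenerate limit is not exercised):

```python
import numpy as np

def dd_recursive(xs, phi):
    """Divided difference phi[x0,...,xn], computed with a recursion of the
    form of Eq. (17): the two sub-calls differ only in whether the last or
    the second-to-last point is dropped."""
    if len(xs) == 1:
        return phi(xs[0])
    head, a, b = xs[:-2], xs[-2], xs[-1]
    return (dd_recursive(head + [b], phi) - dd_recursive(head + [a], phi)) / (b - a)

def dd_product(xs, phi):
    """Reference formula: phi[x0..xn] = sum_j phi(x_j) / prod_{k != j} (x_j - x_k)."""
    return sum(phi(xj) / np.prod([xj - xk for k, xk in enumerate(xs) if k != j])
               for j, xj in enumerate(xs))

dt = 0.2
phi = lambda x: np.exp(-1j * x * dt)   # zeroth-order weighting function
xs = [1.0, 2.5, 4.0, 6.5]              # four shifted eigenvalues (n = 3)
```

Divided differences are symmetric in their arguments, so the ordering of the shifted eigenvalues does not matter; the test below also checks this.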

D. Generalization to more complex drives
So far, we have only considered a single, cosinusoidal drive. In practice, for applications such as DRAG [7], it is useful to consider more complicated drives of the form

V̂(t) = (1/2) [ Ω e^{iωt} + Ω* e^{−iωt} ] X̂,    (21)

where Ω = Ω_x + iΩ_y is the complex drive amplitude. This generalization requires only a minor modification to the result of Eq. (5), which now reads

Û(t, t + δt) = Σ_{n=0}^∞ Σ_{ω_n} Ω(ω_n) exp( i Σ_{p=1}^n ω_n[p] t ) Ŝ⁽ⁿ⁾(ω_n, δt),    (22)

and where we have introduced

Ω(ω_n) = Ω^{µ(ω_n)} (Ω*)^{n − µ(ω_n)}.    (23)

We see from Eq. (21) that Ω and Ω* appear according to the number of positive and negative frequency elements in the vector ω_n, respectively, leading to a simple expression for µ(ω_n) in Ω(ω_n):

µ(ω_n) = Σ_{p=1}^n [ 1 + sgn(ω_n[p]) ] / 2,    (24)

i.e. µ(ω_n) is the number of positive entries of ω_n. As shown in the Appendix, this formalism can be further extended to an arbitrary number of drives of different frequencies, acting on different system operators.
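Under this reading, the amplitude factor Ω(ω_n) attached to a frequency vector is a simple monomial in Ω and Ω*; a small sketch (helper name is ours):

```python
import numpy as np

def amp_factor(Omega, omega_vec):
    """Omega(omega_n) = Omega^mu * conj(Omega)^(n - mu), with mu counting
    the positive-frequency entries of omega_n."""
    omega_vec = np.asarray(omega_vec)
    mu = int(np.sum(omega_vec > 0))
    return Omega ** mu * np.conj(Omega) ** (len(omega_vec) - mu)

w = 2 * np.pi * 4.9
print(amp_factor(1 + 2j, [w, -w]))   # one of each sign: |Omega|^2 = 5
```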

E. Envelope Functions and Gaussian Filtering
Having established the necessary notation, we can now take into account control drives with time-dependent amplitudes Ω(t). To do so, we discretize the drive envelope of total time T = N_p ∆t into N_p increments, called pixels, each of duration ∆t. For a given pixel j, the drive is characterized by its complex amplitude u_j such that the envelope can be expressed as [4]

Ω(t) = Σ_{j=1}^{N_p} u_j ⊓( t − (j−1)∆t ),    (25)

where ⊓(t) = Θ(t) − Θ(t − ∆t), with Θ(t) the Heaviside function. Additionally, following Motzoi et al. [8], we take into account the finite bandwidth of the control by applying a Gaussian filter to the discretized envelope. To do so, each pixel is subdivided into N_s subpixels of width δt, with ∆t = N_s δt, see Fig. 2. The subpixel amplitudes s_l are then defined as

s_l = Σ_{j=1}^{N_p} T_{lj} u_j,    (26)

where the elements of the Gaussian filter matrix T are Gaussian in the separation between the center t_l of subpixel l and the center t_j of pixel j,

T_{lj} ∝ exp[ −(t_l − t_j)² / (2σ²) ].    (27)

Following Eq. (22), the n-th order evolution operator over the l-th subpixel takes the form

Û_l⁽ⁿ⁾ = Σ_{j=0}^n Σ_{ω_j} Ω_l(ω_j) exp( i Σ_{p=1}^j ω_j[p] lδt ) Ŝ⁽ʲ⁾(ω_j, δt),    (28)

where the drive function in Eq. (23) picks up an additional subscript l to denote the l-th subpixel,

Ω_l(ω_n) = s_l^{µ(ω_n)} (s_l*)^{n − µ(ω_n)}.    (29)

As can be seen in Fig. 2, the subpixels (red) will often overestimate or underestimate the filtered pulse (blue) amplitude depending on its gradient, something which can become a leading contribution to the error in simulations. In Appendix C, we generalize the amplitudes s_l to have a linear time dependence to compensate for the change in amplitude of the continuous pulse across a single subpixel, providing a more accurate approximation to the filtered pulse.
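The pixel-to-subpixel filtering is a single matrix product. A sketch follows; the exact normalization and width of the Gaussian filter are assumptions here, so the rows are simply normalized to sum to one:

```python
import numpy as np

def gaussian_filter_matrix(Np, Ns, dt, sigma):
    """Build the (Np*Ns x Np) matrix T mapping pixel amplitudes u to
    filtered subpixel amplitudes s = T @ u."""
    Dt = Ns * dt                                  # pixel duration
    t_sub = (np.arange(Np * Ns) + 0.5) * dt       # subpixel centres
    t_pix = (np.arange(Np) + 0.5) * Dt            # pixel centres
    T = np.exp(-(t_sub[:, None] - t_pix[None, :]) ** 2 / (2 * sigma ** 2))
    return T / T.sum(axis=1, keepdims=True)       # assumed normalization

u = np.array([0.04, 0.04, 0.02])                  # pixel amplitudes
T = gaussian_filter_matrix(Np=3, Ns=4, dt=1.0, sigma=2.0)
s = T @ u                                         # smoothed subpixel amplitudes
```

Because each row of T is a convex combination, the filtered amplitudes stay within the range of the original pixel amplitudes.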

III. THE DYSOLVE ALGORITHM
As already explained, the Dyson series operators Ŝ⁽ⁿ⁾(ω_n, δt), for which we have expressions at arbitrary order n, are functions of the time increment δt and, crucially, are independent of the total evolution time T. The Dysolve algorithm leverages this fact to parallelize the time evolution.
The algorithm operates in two parts: a preparation stage and a contraction stage. In the preparation stage, the Dyson operators Ŝ⁽ⁿ⁾(ω_n, δt) for a Hilbert space of size N are computed up to a chosen truncation order n, and arranged in a tensor. This tensor has dimensions (R × N × N), where R is the total number of Dyson operators. In the case of a single drive without linear interpolation, R = 2^{n+1} − 1, where n is the order of the expansion.
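The count R follows from the 2^j frequency vectors available at each order j ≤ n:

```python
def num_dyson_ops(n):
    """Total number of Dyson operators S^(j)(omega_j, dt) kept up to
    order n for a single drive: 2^j frequency vectors at each order j."""
    return sum(2 ** j for j in range(n + 1))   # geometric sum = 2**(n+1) - 1
```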
Once the preparation stage is completed, it is in principle possible to consider arbitrary gate times and drive envelopes. Suppose we wish to simulate a time evolution of length T = P δt, where P is an integer. We first generate a (P × R) tensor whose elements are the envelope and oscillatory terms exp( i Σ_{p=1}^n ω_n[p] lδt ) Ω_l(ω_n) in Eq. (28). We then multiply these tensors to obtain a (P × N × N) time-evolution tensor, where the p-th (N × N) matrix corresponds to a time-step operator Û_p. This multiplication constitutes the parallelized portion of the algorithm, with all P time-evolution operators calculated simultaneously. As in Eq. (20), we multiply all of the individual evolution operators in the time-evolution tensor to obtain the final evolution operator Û(0, T). This whole procedure forms the contraction stage of our algorithm.
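The contraction stage can be sketched with random stand-ins for the Dyson operators and coefficients; numpy's einsum performs the parallel step and an ordinary loop the time-ordered product:

```python
import numpy as np

rng = np.random.default_rng(0)
P, R, N = 6, 7, 4        # time steps, Dyson operators, Hilbert-space size

# Preparation-stage output: R operators of size N x N (random stand-ins here)
S = rng.normal(size=(R, N, N)) + 1j * rng.normal(size=(R, N, N))
# Envelope and oscillatory coefficients for every step and operator
C = rng.normal(size=(P, R)) + 1j * rng.normal(size=(P, R))

# Parallel step: all P step operators at once, U_p = sum_r C[p, r] * S[r]
U_steps = np.einsum('pr,rij->pij', C, S)

# Sequential step: time ordering is just a right-to-left matrix product
U_total = np.eye(N, dtype=complex)
for U_p in U_steps:      # U_steps[0] acts first
    U_total = U_p @ U_total
```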
Below, we will refer to the computation of the evolution operator Û(0, T) to n-th order following the above approach as Dysolve-n.

A. Benchmarking
Before turning to examples of application of Dysolve, we first benchmark this algorithm. To do so, there are a number of factors to consider: i) the size of the Hilbert space, ii) the drive amplitude, iii) the number of independent drive channels, and iv) the shape of the envelope function. To quantify the performance of our algorithm, we use two metrics. Given a number of subpixels N_s, we evaluate: 1) the computation time, and 2) the Frobenius norm distance between Û(0, T), the propagator calculated under the Dysolve algorithm with a chosen number of subpixels, and a reference Û_ref(0, T), the same unitary calculated with very high precision. As discussed further in Appendix D, we use Dysolve-4 with 10⁴ subpixels to compute Û_ref(0, T), since Dysolve is able to reach precisions on the benchmark setup that are up to three orders of magnitude greater than QuTiP's propagator, a comparable dynamical solver [9].
For our benchmarks, we use diagonal system Hamiltonians with a Hilbert space of size N = 25. As a concrete example, we take the eigenvalues to be normally distributed about 7 GHz, the typical operating frequency of superconducting qubits [10]. We consider between one and three input drive operators, at the frequencies of the |0⟩ ↔ |1⟩, |2⟩ ↔ |3⟩ and |4⟩ ↔ |5⟩ transitions, with |k⟩ the k-th lowest-lying system eigenstate. The matrix elements of each operator corresponding to these transitions are set to 1. To emulate an arbitrary drive operator with off-resonant terms, we populate 20% of the remaining matrix elements of the drive operators with complex numbers normally distributed about 0, after which Hermiticity is enforced by the addition of the complex conjugate. The envelope function associated with each drive operator is centered around an amplitude such that the duration of the simulation is equivalent, in the absence of the other drives, to a total of 20 Rabi oscillations. In the context of superconducting qubits with their microwave drives, this corresponds to the simulation of a 500 ns evolution with a drive amplitude of 40 MHz. We take the pixel amplitudes u_j of the envelope functions to be normally distributed about 40 MHz with a standard deviation of 1 MHz. To reduce statistical fluctuations, the results presented in Fig. 3 are averaged over 30 simulations, each with different random envelope functions and system eigenvalues.
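A benchmark Hamiltonian of the kind described here can be generated along the following lines (the eigenvalue spread and the scale of the off-resonant elements are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 25

# Diagonal system Hamiltonian with eigenvalues normally distributed
# about 7 GHz (angular-frequency units; the 0.5 GHz spread is assumed)
evals = 2 * np.pi * rng.normal(7.0, 0.5, size=N)
H0 = np.diag(evals)

# Drive operator for the |0> <-> |1> transition...
X = np.zeros((N, N), dtype=complex)
X[0, 1] = 1.0
# ...plus random off-resonant elements on ~20% of the upper-triangle entries
mask = np.triu(rng.random((N, N)) < 0.2, k=1)
X[mask] += rng.normal(0, 0.1, mask.sum()) + 1j * rng.normal(0, 0.1, mask.sum())
X = X + X.conj().T            # enforce Hermiticity
```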
The numerical simulations reported in Fig. 3 are performed on a 2.8 GHz, Intel Xeon Gold 6242 Processor (16 cores/32 threads) using Python. Since the preparation stage only needs to be performed once for a particular system Hamiltonian, the reported computation time accounts only for the contraction stage of the Dysolve algorithm. We report the preparation times in the figure caption for reference. The top three panels of Fig. 3 present the norm distance between Dysolve-n for n = 2, 3 and 4 as a function of the number of subpixels, and for increasing number of input drives. Even with large drive amplitudes and long evolution times, we obtain an excellent approximation to the final unitary operation with relatively few subpixels. For example, using a fourth-order Dyson expansion with 40 subpixels yields a norm distance error of less than 10⁻⁵ in all cases. Importantly, as shown in the bottom panels of Fig. 3, the computation time needed to reach this level of accuracy is only on the order of a few seconds (less than 3 s). In comparison, QuTiP's propagator function takes about 16 minutes to perform the calculation to an equivalent accuracy for a single input drive. Such a significant speedup proves to be particularly useful when, as discussed in Sec. IV, the contraction stage of the algorithm needs to be repeated many times in an optimization loop. Additional benchmarking results are provided in Appendix D.

IV. APPLICATION TO OPTIMAL CONTROL
Optimal control is an essential tool in the development of high-fidelity gates for quantum computation. Most optimization algorithms, such as GRAPE [4], rely on calculating the gradient of the gate fidelity with respect to the drive amplitude at each pixel. With most approaches, this requires recalculating the evolution operator at each time interval, something which can be as expensive as the original calculation of the unitaries. Moreover, this calculation is generally performed with the presumption that the Hamiltonian is time independent over the duration of a subpixel, thus invoking a form of the RWA. In  our expansion, such an assumption is not required.
Here, we show how Dysolve can be applied to optimizing control-pulse envelopes to maximize gate fidelity. More precisely, we consider optimizing the fidelity of an evolution Û(T) with respect to a target gate unitary Û_target. In general, Û(T) acts on the full Hilbert space of the system while Û_target is defined on its computational subspace. The objective is thus to maximize the performance function [8]

Φ = (1/d²) | Tr[ Û_target† P̂ Û(T) P̂ ] |²,    (30)

where d is the dimension of the computational subspace (d = 2 for a single qubit), and P̂ is the projector onto that subspace. Although our approach can in principle deal with an arbitrary number of drives, for simplicity here we consider control of a single set of complex drive amplitudes u_j.
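Our reading of this performance function, with the projector implemented by restricting Û(T) to the computational-subspace indices, can be sketched as:

```python
import numpy as np

def gate_fidelity(U, U_target, comp_idx):
    """Phi = |Tr(U_target^dag P U P)|^2 / d^2, with P projecting onto the
    computational-subspace indices comp_idx (our reading of the
    performance function; conventions are assumed)."""
    d = len(comp_idx)
    U_sub = U[np.ix_(comp_idx, comp_idx)]       # P U P, restricted to the subspace
    return abs(np.trace(U_target.conj().T @ U_sub)) ** 2 / d ** 2

# Example: a perfect X gate on a qubit embedded in a 3-level system
U_target = np.array([[0, 1], [1, 0]], dtype=complex)
U = np.eye(3, dtype=complex)
U[:2, :2] = U_target
print(gate_fidelity(U, U_target, [0, 1]))       # 1.0
```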
We use a GRAPE-like approach to maximize the gate fidelity, which requires the gradient of the cost function Φ with respect to the drive amplitude at each pixel u_j and subpixel s_l [8],

∂Φ/∂u_j = Σ_l (∂Φ/∂s_l) (∂s_l/∂u_j) = Σ_l T_{lj} ∂Φ/∂s_l.    (31)

Within the framework of the Dysolve algorithm, it is simple to evaluate the operator ∂Û/∂s_l. Indeed, using Eq. (20), we find that for the n-th order Dyson expansion, the derivatives of the unitaries take the form

∂Û(0, T)/∂s_l = Û_{P−1} ⋯ Û_{l+1} (∂Û_l/∂s_l) Û_{l−1} ⋯ Û_0.    (32)

Importantly, note that the Dyson operators Ŝ⁽ⁿ⁾(ω_n, δt) remain unchanged by the derivative. As such, we only need to perform the preparation stage of the Dysolve algorithm once, even when calculating ∂Û/∂s_l. Thus, the optimization iterations only require the contraction-stage computation to be performed. Further, these derivatives are exact. Consequently, the effects of off-resonant and counter-rotating terms are accounted for in the derivatives. This is the strength of the Dysolve algorithm for optimization.
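The product-rule structure of Eq. (32) is easy to exercise on toy matrices; here we assume, purely for illustration, that only step l depends on s_l and does so linearly, so its derivative is a fixed matrix A:

```python
import numpy as np

def dU_ds(U_steps, dU_l, l):
    """d U(0, T) / d s_l by the product rule: every step operator is left
    unchanged except step l, which is replaced by its derivative."""
    out = np.eye(U_steps[0].shape[0], dtype=complex)
    for p, U_p in enumerate(U_steps):
        out = (dU_l if p == l else U_p) @ out
    return out

rng = np.random.default_rng(2)
A, M0, M2 = (rng.normal(size=(2, 2)) for _ in range(3))
s = 0.7
U_steps = [M0, np.eye(2) + s * A, M2]    # toy: only step 1 depends on s
grad = dU_ds(U_steps, A, l=1)            # analytic result: M2 @ A @ M0
```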
Recall from Eq. (21) that the real and imaginary components of the drive amplitudes are associated with the magnitudes of the cosine and sine quadratures, respectively. Thus, to calculate the relevant amplitude derivatives for the cosine and sine drive envelopes, one simply calculates the appropriate sum or difference of the derivatives in Eq. (31),

∂Φ/∂u_{j,x} = ∂Φ/∂u_j + ∂Φ/∂u_j*,   ∂Φ/∂u_{j,y} = i ( ∂Φ/∂u_j − ∂Φ/∂u_j* ),

where u_j = u_{j,x} + i u_{j,y}. To perform the GRAPE algorithm, the set of drive amplitudes is simply updated by taking steps in the direction of the gradient of the fidelity [4],

u_{j,(x,y)} → u_{j,(x,y)} + ε ∂Φ/∂u_{j,(x,y)},

where ε is a small, positive number which need not be fixed during the optimization process. For example, one could use a backtracking line search to maximize the gain in fidelity [11,12].
In the examples presented below, we simply use a sufficiently small ε for the updating process to be stable.
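A gradient-ascent update with the backtracking line search mentioned above can be sketched on a toy fidelity landscape (all names and the landscape itself are hypothetical):

```python
import numpy as np

def grape_step(u, grad, phi, eps=0.4, shrink=0.5, max_tries=20):
    """One update u -> u + eps * grad, backtracking on eps until the
    fidelity phi improves (eps need not stay fixed between calls)."""
    phi0 = phi(u)
    for _ in range(max_tries):
        u_new = u + eps * grad
        if phi(u_new) > phi0:
            return u_new, eps
        eps *= shrink
    return u, 0.0          # no improving step found

# Toy landscape: Phi(u) = 1 - |u - u*|^2, maximized at u* = 0.3
u_star = 0.3
phi = lambda u: 1.0 - np.sum((u - u_star) ** 2)
u = np.array([0.0])
for _ in range(60):
    grad = -2.0 * (u - u_star)   # exact gradient of the toy Phi
    u, _ = grape_step(u, grad, phi)
```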

A. Example: Cross-Resonance Gate for Coupled Transmons
As an example of application of our implementation of the GRAPE algorithm with Dysolve, we present results of the optimization of a two-qubit gate. To this end, we consider the system of two fixed-frequency transmon qubits illustrated in Fig. 4. Each qubit is described by a Hamiltonian of the form [10]

Ĥ_j = 4 E_{Cj} n̂_j² − E_{Jj} cos φ̂_j,

where n̂_j and φ̂_j with j = c, t are the conjugate charge and phase operators of the transmon, while E_{Cj} and E_{Jj} are the charging and Josephson energies. More precisely, we consider the cross-resonance gate, which performs an X rotation on a target qubit conditional on the state of a control qubit which is driven at the target qubit's frequency [13,14]. Optimal control has been applied to this gate before in Refs. [15,16]. However, these approaches have made use of simplified models for the device and of the rotating-wave approximation. Taking advantage of the Dysolve algorithm, our approach generalizes previous work on the cross-resonance gate by considering the full circuit Hamiltonian and including all counter-rotating terms.

Figure 4. Superconducting circuit for cross-resonance gates between transmon qubits. The leftmost transmon, of frequency ωc/2π, plays the role of the control qubit and is strongly driven by the voltage source Vc at the frequency ωt/2π of the target qubit (rightmost transmon). The target qubit is also driven by Vt using a relatively weak tone that serves to give additional control. φ̂c and φ̂t correspond to the phase degrees of freedom associated with the control and target qubits, respectively.
Following the standard circuit-quantization procedure [17], the two-transmon Hamiltonian can be put in the form Ĥ(t) = Ĥ₀ + V̂(t), with

Ĥ₀ = Ĥ_c + Ĥ_t + g n̂_c n̂_t,
V̂(t) = Ω_c(t) n̂_c + Ω_t(t) n̂_t,

where Ĥ_c, Ĥ_t are the Hamiltonians of the control and target qubits, respectively, and g n̂_c n̂_t results from the capacitive interaction between the qubits. The amplitudes Ω_c(t), Ω_t(t) are the time-dependent drives

Ω_c(t) = Ω_cx(t) sin(ω_t t) + Ω_cy(t) cos(ω_t t),
Ω_t(t) = Ω_tx(t) sin(ω_t t) + Ω_ty(t) cos(ω_t t),

with {Ω_cx, Ω_cy, Ω_tx, Ω_ty} the drive envelope functions to be optimized by GRAPE; note that both qubits are driven at the target-qubit frequency ω_t. While activating the cross-resonance gate only requires driving the control qubit at the frequency of the target, the additional control Ω_t(t) over the target qubit is useful to eliminate single-qubit rotations and obtain higher fidelities to the target unitary [18,19]. We choose to operate this gate with control and target frequencies of ω_c/2π = 5.1 GHz and ω_t/2π = 4.9 GHz, and anharmonicities α_c/2π = −355 MHz and α_t/2π = −352 MHz, respectively. Moreover, the qubit-qubit coupling is set to g/2π = 4.29 MHz, and the target unitary is taken to be the ZX_90 gate [20,21]. Figure 5 shows the infidelity of the cross-resonance gate as a function of the gate time. We find that for a flat pulse a gate fidelity of 98.5% in a 300 ns gate time is possible (red symbols), while values approaching 99% can be reached when correcting for additional single-qubit rotations (orange symbols). In contrast, GRAPE reaches significantly higher fidelities, going beyond 99.99% in a shorter gate time, even when accounting for all off-resonant and counter-rotating terms (blue symbols).
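The single-transmon spectra entering Ĥc and Ĥt can be obtained by diagonalizing the transmon Hamiltonian in the charge basis; a standard sketch (the EC, EJ values are illustrative, chosen to give a frequency and anharmonicity of roughly the quoted magnitudes):

```python
import numpy as np

def transmon_levels(EC, EJ, ncut=30):
    """Eigenvalues of H = 4 EC n^2 - EJ cos(phi) in the charge basis,
    where cos(phi) hops between neighbouring charge states."""
    n = np.arange(-ncut, ncut + 1)
    H = np.diag(4.0 * EC * n ** 2)
    H += np.diag(np.full(2 * ncut, -EJ / 2), 1)   # -EJ/2 |n><n+1| terms
    H += np.diag(np.full(2 * ncut, -EJ / 2), -1)  # and their conjugates
    return np.linalg.eigvalsh(H)                  # sorted ascending

E = transmon_levels(EC=0.30, EJ=13.5)    # illustrative values, in GHz
w01 = E[1] - E[0]                        # qubit frequency ~ sqrt(8 EC EJ) - EC
alpha = (E[2] - E[1]) - w01              # anharmonicity ~ -EC (negative)
```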
Figure 6. Infidelity of the CR gate as a function of the detuning of the qubits when optimized under GRAPE for 5000 iterations. The control qubit frequency is fixed to ωc/2π = 5.10 GHz.

To further demonstrate that GRAPE under Dysolve can successfully optimize the cross-resonance gate more generally, Fig. 6 shows the infidelity of an optimized 300-ns gate as a function of the qubit-qubit detuning ∆/2π = (ω_c − ω_t)/2π. In the numerical simulations, this detuning is varied by changing the target-qubit frequency whilst keeping the qubit-qubit coupling and the control-qubit frequency fixed. We note that the performance of the gate at positive detunings is approximately in line with that predicted by Schrieffer-Wolff perturbation theory for an echoed cross-resonance gate performed on a similar device [22]. The results are excluded at ∆ = 0, ±α, where the qubits strongly hybridize due to resonances. The slight variations in the GRAPE fidelity across the wide detuning range are partially attributable to variations in higher-order two-qubit coupling amplitudes, alongside the performance of gradient ascent for different optimization landscapes. This also constitutes evidence for the robustness of our algorithm, as excellent fidelities are obtained in a broad range of parameters.

V. FUTURE WORK AND CONCLUSION
We have demonstrated that the Dysolve algorithm provides a means to quickly and accurately simulate driven systems whilst accounting for all of the effects of the counter-rotating and off-resonant terms. Analysis of ultrafast quantum gates and the quantum speed limit [23], where counter-rotating effects are particularly important, would also be possible with our algorithm. Additionally, this method trivializes the calculation of the gradient, allowing for rapid optimization without the need for additional approximations, and can be modified to include dissipative effects. Indeed, a simple extension of the optimization scheme would allow for the optimization of lossy quantum systems through an open GRAPE-like scheme [24], or for the simulation of larger lossy systems through the use of trajectories. We also note that, as the expressions depend explicitly on the drive amplitudes, second derivatives and the Hessian matrix can also be calculated exactly, opening the door to second-order optimizers such as Newton's method.
We anticipate that future iterations of the Dysolve algorithm will improve the efficiency of both the preparation and contraction stages. Given the parallelized nature of this solver, we also envision a direct extension to graphics processing units (GPUs), which could allow for fast simulation of significantly larger systems. We leave this for future work.
At the time of publication, the authors were made aware of a recent paper [25] which uses a different approach to obtain a result similar to that which we provide in Sec. II. Our derived weighting functions are equivalent to divided differences [26] utilised in their method.
Appendix A: Recursive Form of the Dyson Operators

We now consider the effect of a set of oscillatory terms. To this end, we first note that the Dyson path operators in Eq. (7), as well as the Dyson operators themselves, can be defined recursively; we denote the resulting recursively-defined operators R̂⁽ⁿ⁾(c(ω_n), δt). We wish to demonstrate that R̂⁽ⁿ⁾(c(ω_n), δt) = Ŝ⁽ⁿ⁾(ω_n, δt), where c(ω_n) is the cumulative vector introduced in Eq. (16). The equality is verified directly to first order. We then make the inductive step with the assumption that R̂⁽ⁿ⁻¹⁾(c(ω_{n−1}), δt) = Ŝ⁽ⁿ⁻¹⁾(ω_{n−1}, δt), which finishes the induction and gives the final desired form in Eq. (14).

Appendix B: Multiple Drive Inputs
Suppose there are q independent drive inputs, each with its own drive frequency and operator. For an n-th order Dyson expansion, we define an n-dimensional vector β with values ranging from 1 to q, each value referring to one of the drive inputs. To simplify the notation, we drop all subscripts n, which are implied. The new Dyson series operator corresponding to a particular vector β is

Ŝ⁽ⁿ⁾_β(ω_β, δt) = (1/2)ⁿ Σ_{k_n} ⟨k⁽ⁿ⁾| X̂_{β[n]} |k⁽ⁿ⁻¹⁾⟩ ⋯ ⟨k⁽¹⁾| X̂_{β[1]} |k⁽⁰⁾⟩ |k⁽ⁿ⁾⟩⟨k⁽⁰⁾| f( λ − c(ω_β) ),

with ω_β an n-vector whose p-th element is ±ω_{β[p]}, thus allowing us to define our generic n-th order Dyson series, which reads as a version of Eq. (22) with multiple drive operators and frequencies.

Appendix D: Additional Benchmarking

QuTiP's standard integration methods (adams and bdf) were unable to return solutions with a Frobenius norm error less than 5 × 10⁻⁶, as seen in Fig. 8. The shortcoming of this numerical method was confirmed by the fact that the norm distance between adams and bdf was found to be significantly greater than the norm distance between adams and the Dysolve algorithm for a sufficient subpixel number. This is in addition to a computational time at least two orders of magnitude larger with the standard QuTiP approach, as seen in the right panel of Fig. 8.