An Algorithm for Quantum Computation of Particle Decays

A quantum algorithm is developed to calculate decay rates and cross sections using quantum resources that scale polynomially in the system size, assuming similar scaling for state preparation and time evolution. This is done by computing finite-volume one- and two-particle Green's functions on the quantum hardware. Particle decay rates and two-particle scattering cross sections are extracted from the imaginary parts of these Green's functions. A $0+1$ dimensional implementation of this method is demonstrated on IBM's superconducting quantum hardware for the decay of a heavy scalar particle to a pair of light scalars.


I. INTRODUCTION
Quantum field theories describe three of the four fundamental forces of nature. In particular, quantum chromodynamics (QCD) describes the strong interactions that bind quarks into hadrons [1]. The predictions of QCD are often tested in experiments where unstable hadrons decay and their decay products are observed. For high energy phenomena, Feynman diagrams and other perturbative techniques provide an excellent description. At low energies, the QCD coupling constant becomes large and these methods fail. Nonperturbative approaches such as lattice QCD (LQCD), chiral perturbation theory and other effective field theories have enabled the calculation of some hadronic properties in this regime. For example, Lüscher's method [2,3] has allowed the computation of some decay rates and scattering cross sections using LQCD by relating them to finite volume energy shifts. It has been used to compute scattering phase shifts for several low energy processes [4][5][6][7], and the decay widths of the ρ and σ mesons [8,9]. The extraction of finite volume energy levels becomes difficult for excited states and for large lattices, which limits the applicability of the method.
Quantum computers have been proposed as a tool to avoid various problems present in simulations of QFTs. In particular, fault tolerant quantum computers are expected to be capable of simulating the time evolution of local QFTs using resources that scale polynomially in the system size [10][11][12]. The first steps towards simulating lattice gauge theories, such as the Schwinger model, have been made [13][14][15][16][17][18][19][20][21][22]. In this work, a method of extracting particle decay rates and scattering cross sections from a Green's function calculated on a quantum computer is demonstrated. This method only requires the ability to prepare initial particle states and perform real time evolution. It has been shown for scalar and fermionic field theories that state preparation and real time evolution can be performed on quantum computers using resources that scale polynomially with the system size [10][11][12]. The computational cost of classically performing real time evolution usually scales exponentially with the system size [23][24][25][26][27][28][29][30], so the use of quantum computers would represent an exponential speedup. A classical simulation of this quantum algorithm is explicitly demonstrated for a 1+1 dimensional QFT in which a heavy scalar decays to a pair of light scalars. A 0+1 dimensional demonstration is performed using IBM's superconducting hardware. Although this calculation is demonstrated for a specific model, the approach is based on general properties of Green's functions, and it is expected that it can be applied to particle decays or scattering in other theories.
The paper is organized as follows. The method of computing the decay rate from the Green's function is described in Section II, and the mathematical details are shown in Appendix A. The quantum circuit used to calculate the Green's function is described in Appendix B. The time truncation and discretization errors are analyzed in Appendix C 1. The systematic errors present in extracting a decay rate from a finite volume Green's function are analyzed in Appendices C 2 and C 3. The error due to the finite particle number truncation is analyzed in Appendix C 4. A classical simulation of this quantum algorithm is performed in Section III. IBM's quantum processor is used to implement this algorithm in Section IV. The Trotterization procedure used in this demonstration is described in Appendix E. The data from running on IBM's quantum processor are tabulated in Appendix G.

II. QUANTUM COMPUTATION OF GREEN'S FUNCTIONS
For a single particle state $|\psi\rangle$, the Green's function can be written as $\langle\psi|\frac{1}{\omega-\hat H+i\eta}|\psi\rangle = \frac{1}{\omega-E+i\eta-\langle\psi|\hat T(\omega+i\eta)|\psi\rangle}$, where $E$ is the energy of the state $|\psi\rangle$, $\hat H$ is the Hamiltonian and $\hat T$ is the scattering T matrix, as shown in Appendix A. The inclusive decay rate $\Gamma$ of a particle in $d$ spatial dimensions is given by a sum over all possible final states $X_f$ and an integral over all allowed final-state energy-momentum vectors $P_{X_f}$, with the total energy-momentum fixed to that of the initial particle, $P_\psi$. The optical theorem relates this sum to the forward matrix element of the T matrix by $\Gamma = -2\lim_{\eta\to 0}\operatorname{Im}\langle\psi|\hat T(E+i\eta)|\psi\rangle$ [1]. Therefore, if the Green's function can be computed in the $\eta \to 0$ limit, the inclusive decay rate can be extracted from it. For $\eta \neq 0$, the difference between $-2\operatorname{Im}\langle\psi|\hat T(E+i\eta)|\psi\rangle$ and $\Gamma$ is $O(\eta)$, as shown in Appendix C 3. Furthermore, if $|\psi\rangle$ is a two particle state, the same relationship between the Green's function and the T matrix holds, and the optical theorem can be used to find the inclusive scattering cross section for the two particles present in the state. To simplify the following discussion, the case of particle decays will be the focus. When the theory describing the particle is simulated inside a finite volume box of side length $L$ with periodic boundary conditions, $\operatorname{Im}\langle\psi|\hat T(E+i\eta)|\psi\rangle$ differs from its infinite volume value for a $1 \to N$ decay by a finite volume error bounded in Appendix C 2. It should be noted that the $L \to \infty$ and $\eta \to 0$ limits are not independent: for the finite volume errors to vanish in the $L \to \infty$ limit, $\eta$ must be chosen such that $\eta L \to \infty$.
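As a concrete illustration of extracting Γ from the imaginary part of the Green's function, the following Python sketch uses an invented toy Hamiltonian (one unstable level coupled uniformly to a dense band of final states; the coupling, band width and level density are assumptions, not the paper's model) and compares the extracted width with the golden-rule value:

```python
import numpy as np

# Toy model (not the paper's field theory): one unstable state coupled
# uniformly to a dense band of final states. All numbers are illustrative.
n_band = 400
E_psi = 1.0                               # energy of the unstable state
band = np.linspace(0.0, 2.0, n_band)      # quasi-continuum of final states
g = 0.008                                 # assumed coupling strength
H = np.diag(np.concatenate(([E_psi], band)))
H[0, 1:] = g
H[1:, 0] = g

psi = np.zeros(n_band + 1)
psi[0] = 1.0

def greens_function(omega, eta):
    """<psi| 1/(omega - H + i eta) |psi> by a direct linear solve."""
    A = (omega + 1j * eta) * np.eye(len(psi)) - H
    return psi @ np.linalg.solve(A, psi)

# At the resonance peak, -Im G = 1/(eta + Gamma/2), so
# Gamma = -2/Im G_peak - 2*eta.
eta = 0.02
omegas = np.linspace(0.8, 1.2, 801)
im_g = np.array([greens_function(w, eta).imag for w in omegas])
gamma_est = -2.0 / im_g.min() - 2.0 * eta

# Golden-rule reference for this toy model: 2*pi*g^2*rho(E_psi)
gamma_fgr = 2.0 * np.pi * g**2 * (n_band / 2.0)
```

With η a few times the finite-volume level spacing, the extracted width agrees with the golden-rule estimate at the few-percent level, mirroring the η versus L trade-off discussed above.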
To evaluate this Green's function, it is helpful to express it in integral form, $\langle\psi|\frac{1}{\omega-\hat H+i\eta}|\psi\rangle = -i\int_0^\infty dt\, e^{i(\omega+i\eta)t}\langle\psi|e^{-i\hat Ht}|\psi\rangle$. If this integral is truncated at a finite time $T$, a Riemann sum approximation to it can be evaluated on a quantum computer with the techniques described in Appendix B, provided that $|\psi\rangle$ has already been prepared. The resulting accuracy and gate count are analyzed in Appendix C 1; the gate count is controlled by $p(t,\delta)$, the number of gates required to evolve to time $t$ with accuracy $\delta$. Once the Green's function has been computed, the particle decay rate can be extracted from the imaginary part of its pole. It should be noted that in the $\eta \to 0$ limit, the imaginary part of the Green's function becomes the spectral density function, and other work has been done on using quantum computers to calculate the spectral density function [31][32][33]. $\Gamma$ can be extracted from the peak of the Green's function, which takes the value $2/\Gamma$. Therefore, to compute $\Gamma$ to within an accuracy $\delta\Gamma$, the Green's function must be computed to within an accuracy of $O(\delta\Gamma/\Gamma^2)$. Since the uncertainty in $\Gamma$ scales linearly with $\eta$, when the theory has a mass gap $\Gamma$ can be determined to an accuracy $\delta\Gamma$ using a lattice whose size scales as $O\!\left(\frac{1}{\delta\Gamma}\log\frac{1}{\delta\Gamma}\right)$.
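The integral representation $-i\int_0^\infty dt\, e^{i(\omega+i\eta)t}\langle\psi|e^{-i\hat Ht}|\psi\rangle$ and its Riemann sum approximation can be checked numerically. The sketch below (a random 6-dimensional Hermitian matrix standing in for the Hamiltonian; all parameters are illustrative) compares the truncated sum against the exact resolvent:

```python
import numpy as np

# Riemann-sum check of the time-integral representation of the resolvent
# for a random Hermitian "Hamiltonian" (parameters are illustrative).
rng = np.random.default_rng(7)
dim = 6
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)

evals, V = np.linalg.eigh(H)
c = V.conj().T @ psi                      # overlaps <E_n|psi>

def amplitude(t):
    """<psi| e^{-iHt} |psi> from the eigendecomposition of H."""
    return np.sum(np.abs(c)**2 * np.exp(-1j * evals * t))

omega = evals[np.argmax(np.abs(c))]       # probe near a well-overlapped level
eta, dt, T = 0.5, 0.01, 30.0
ts = np.arange(0.0, T, dt)
G_sum = -1j * dt * np.sum(np.exp(1j * (omega + 1j * eta) * ts)
                          * np.array([amplitude(t) for t in ts]))
G_exact = np.sum(np.abs(c)**2 / (omega - evals + 1j * eta))
```

Because the integrand is damped by $e^{-\eta t}$, the $T$ cutoff error falls like $e^{-\eta T}/\eta$, while the step-size error is $O(\Delta t)$, consistent with the scaling analysis in Appendix C 1.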

III. DECAY OF A HEAVY SCALAR
A demonstration of the algorithm discussed in the previous sections is provided by a classical simulation of the decay of a heavy scalar, φ, to a pair of light scalars, χ, in 1+1 dimensions. The Lagrangian contains a heavy scalar field φ with bare mass $M_0$, a light scalar field χ with bare mass $m_0$, a $\phi\chi^2$ interaction with coupling $g$ that mediates the decay, and a $\chi^4$ interaction with coupling $\lambda$. $M_0$ and $m_0$ have been chosen such that the heavy particle's mass is 2.01 times the light particle's mass (so that φ → 2χ is the only allowed decay channel), and $\lambda > 3g$ (to ensure a stable vacuum without spontaneous symmetry breaking in the infinite volume continuum theory). This theory was placed on a lattice with periodic boundary conditions and lattice spacing $a = 0.2\,m^{-1}$, where $m$ is the light particle's mass. This was done for lattices with three, five and seven sites. With these boundary conditions, the allowed momentum modes are $k_n = 2\pi n/L$ with $n$ an integer satisfying $|n| \le (n_s - 1)/2$, where $n_s$ is the number of sites and $L = n_s a$ is the length of the finite volume box. To simulate this theory on a classical computer, the φ occupation numbers were truncated at one for each momentum mode and the χ occupation numbers were truncated at two for each momentum mode. Since the mass of the heavy particle is only slightly larger than twice the light particle's mass, the arguments of Appendix C 4 indicate that the error in the decay rate calculation due to this particle number truncation should be negligible.
A classical computer was used to determine the renormalization parameters, and to simulate the quantum algorithm from the previous section. The renormalization conditions were that the vacuum has zero energy and the mass of the heavy scalar is 2.01 times the mass of the light scalar. For each lattice volume, η was chosen to minimize the sum of the finite volume and finite η error calculated using the methods in Appendices C 2 and C 3.
FIG. 1: Heavy particle decay rates calculated on different lattice volumes plotted as a function of the coupling constant. The blue points are the decay rates calculated in the classical simulations of the quantum algorithm and the red curves are the one loop infinite volume continuum calculation. The error bars on the finite lattice decay rates represent finite volume and finite η errors calculated using the methods in Appendix C. The icons appearing are defined in Ref. [34].
The heavy particle decay rates calculated classically in this example are displayed in Fig. 1. The finite volume and finite η uncertainties were calculated using the methods described in Appendix C. To improve the precision of this calculation, larger lattices must eventually be used. No matter what truncation is used, the dimension of the Hilbert space grows exponentially with the number of lattice sites. The Green's function can be computed on a classical computer using matrix inversion techniques, the fastest of which scale with the dimension of the Hilbert space [35,36]. Due to this exponential scaling, it is infeasible to use a classical computer to compute Green's functions on a large lattice. However, using previously developed techniques for simulating scalar field theories, the method described in the previous section can be used to compute the Green's function on a quantum computer using resources that scale polynomially [10,37].
IV. IMPLEMENTATION ON QUANTUM HARDWARE

The calculations in the previous section were performed on classical computers, but it is possible to use existing quantum computers to perform these calculations for a single lattice site with the truncations from the previous section. The Ourense quantum processor made available by IBM was used to implement this method for a one site calculation of the heavy particle decay rate.

FIG. 2: Green's functions computed with the Ourense quantum processor. The solid blue curve is a zero noise classical simulation of this calculation with Qiskit. The light blue points were computed using the error mitigated amplitudes from the Ourense quantum processor. The error bars represent uncertainties from the error mitigation extrapolation. The red curve is the Lorentzian fit to the error mitigated Green's functions.
The details of how the theory was discretized and how time evolution was implemented on the quantum computer are described in Appendix E. The Hadamard test method [38] was used to obtain $\langle\phi|e^{-i\hat H\Delta t k}|\phi\rangle$ for $\Delta t = 0.2\,m^{-1}$ and $k = 1, 2, \ldots, 96$, where $|\phi\rangle$ is a state describing a single heavy scalar at rest. Two Trotter steps were used to calculate each time slice, so the circuits used to calculate the real component of $\langle\phi|e^{-i\hat H\Delta t k}|\phi\rangle$ used 36 single qubit gates and 28 CNOT gates. The circuit used to estimate the imaginary component had one additional single qubit gate. Due to the length of the circuits used, the effect of imperfect gate implementation on the Ourense quantum processor is non-negligible. The contribution of imperfect gate implementation to the error in the computed amplitudes was estimated using the technique described in Appendix F. Each circuit used in the Hadamard test was sampled 8000 times, so the resulting statistical error was negligible relative to the systematic gate errors. To mitigate the effects of gate errors, the error mitigation technique described in Appendix D was used to extrapolate to the zero CNOT gate error limit. The Green's function was calculated classically from the error mitigated amplitudes, and the results for two different couplings are displayed in Fig. 2. The heavy particle decay rate was extracted from the Green's function by performing a least squares fit to a Lorentzian distribution. In Table I, the extracted decay rate is compared to the ideal decay rate, defined in terms of the eigenstates $|E_n\rangle$ of the Hamiltonian with energies $E_n$, that would be computed in the absence of any finite $T$ or $\Delta t$ errors.

g     Ideal Γ    Extracted Γ
0.5   0.070m     (0.099 ± 0.037)m
1.0   0.287m     (0.286 ± 0.047)m

TABLE I: Heavy particle decay rates calculated with the Ourense quantum processor. The first column is the coupling constant. The second column is the value of Γ that would be computed in the absence of any finite T or ∆t errors. The third column is the decay rate calculated with the Ourense quantum processor. The error represents uncertainties in the fit to the Green's function.
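The classical post-processing pipeline used here (time-sliced amplitudes, damped Fourier transform, width of the resulting Lorentzian peak) can be sketched end to end on a toy resonance model. The one-level-plus-band Hamiltonian and all parameters below are invented stand-ins, not the paper's lattice theory:

```python
import numpy as np

# Toy resonance: one state coupled to a band of final states; amplitudes
# stand in for the Hadamard-test data <phi|e^{-iHt}|phi>.
n_band = 400
band = np.linspace(0.0, 2.0, n_band)
g, M = 0.008, 1.0
H = np.diag(np.concatenate(([M], band)))
H[0, 1:] = g
H[1:, 0] = g
evals, V = np.linalg.eigh(H)
c2 = np.abs(V[0, :])**2                   # overlaps |<E_n|phi>|^2

eta, dt, n_t = 0.02, 0.25, 600
ts = dt * np.arange(n_t)
amps = np.array([np.sum(c2 * np.exp(-1j * evals * t)) for t in ts])

# Classical Fourier transform with e^{-eta t} damping gives G(omega)
omegas = np.linspace(0.8, 1.2, 401)
G = np.array([-1j * dt * np.sum(np.exp(1j * (w + 1j * eta) * ts) * amps)
              for w in omegas])
spec = -G.imag

# Full width at half maximum of the peak is Gamma + 2*eta
above = omegas[spec > spec.max() / 2]
gamma_est = (above[-1] - above[0]) - 2 * eta
gamma_ref = 2.0 * np.pi * g**2 * (n_band / 2.0)   # golden-rule reference
```

A half-width read-off is used here instead of the paper's least squares Lorentzian fit, purely to keep the sketch short; both extract the same $\Gamma + 2\eta$ width.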
The heavy particle decay rates calculated on the Ourense quantum processor are in agreement with the ideal calculation. However, even after using these error mitigation techniques, the error due to imperfect gates remained large.

V. CONCLUSION
In this work, a quantum algorithm to calculate the decay rates of unstable particles and scattering cross sections has been introduced. The resources required to implement this method scale polynomially with the system size, provided that state preparation and time evolution can be performed using resources that scale polynomially in the system size and field value truncations. It has been shown that this is possible for scalar and fermionic field theories [10,11,39]. To apply this method to LQCD, it will be necessary to develop techniques to prepare hadronic states and perform time evolution in lattice gauge theories. IBM's Ourense quantum processor was used to apply this algorithm to a scalar field theory defined on a single lattice site with truncated occupation numbers. Bounds on the finite volume error of 1 → N decay rates and 2 → N scattering cross sections computed with this method have been determined. More work will need to be done to understand how different truncations affect the error in the computed decay rate. The method presented here only requires preparation of the initial state and the ability to simulate the Hamiltonian. Classical methods of computing decay rates and cross sections from lattice calculations, such as Lüscher's method, rely on relating these observables to finite volume energy shifts. In general this is a difficult process, and it only allows the calculation of decay rates and cross sections for a limited set of processes. Due to the greater generality of this method, it is expected that quantum computers will be able to calculate decay rates and cross sections beyond the reach of classical computers.

VI. ACKNOWLEDGMENTS
We would like to thank Martin Savage for many useful discussions. We would like to thank Natalie Klco and Alessandro Roggero for feedback in preparing this manuscript. We thank the Institute for Nuclear Theory at the University of Washington for its kind hospitality and stimulating research environment. This research was supported in part by the INT's U.S. Department of Energy grant No. DE-FG02-00ER41132. This work was supported in part by the U.S. Department of Energy grant No. DE-SC0019478. We acknowledge use of the IBM Q experience for this work. This work was facilitated through the use of advanced computational, storage, and networking infrastructure provided by the Hyak supercomputer system at the University of Washington.

Appendix A: Green's Function Poles
The Green's function used in this method is $\langle\psi|\frac{1}{\omega-\hat H+i\eta}|\psi\rangle$, where $|\psi\rangle$ is a state describing the particle that will decay and $\hat H$ is the Hamiltonian of the system. This Green's function has poles whose real part is the energy of the state $|\psi\rangle$ and whose imaginary part is given by the imaginary part of the forward scattering amplitude. The manipulations used to show this are standard [3], but are reproduced here for the reader's convenience. The Hamiltonian can be split into a free term and an interaction term, $\hat H = \hat H_0 + \hat V$, with $\hat H_0|\psi\rangle = E_0|\psi\rangle$. Let $\hat P = |\psi\rangle\langle\psi|$ and $\hat Q = 1 - |\psi\rangle\langle\psi|$. Projecting the resolvent onto these subspaces allows the Green's function to be written in the form of Eq. (A4), which can then be expressed in terms of an effective T matrix $\hat T(\omega+i\eta)$. In the limit that $\eta$ goes to zero, $\hat T(\omega+i\eta)$ becomes the scattering T matrix $\hat T$, and according to the optical theorem $\Gamma/2 = -\operatorname{Im}\langle\psi|\hat T|\psi\rangle$ for a single particle state $|\psi\rangle$. From Eq. (A9) the decay rate can therefore be extracted, since it is proportional to the width of a Lorentzian distribution centered at the particle's energy.
Appendix B: Quantum Computation of the Green's Function

Fully Quantum Approach
In the previous section, it was shown that the imaginary part of the pole of the Green's function $\langle\psi|\frac{1}{\omega-\hat H+i\eta}|\psi\rangle$ is $\Gamma/2 + \eta$. Therefore, if this Green's function can be computed efficiently, the decay rate can be computed efficiently as well. This Green's function can be expressed as the integral $-i\int_0^\infty dt\, e^{i(\omega+i\eta)t}\langle\psi|e^{-i\hat Ht}|\psi\rangle$. If this integral is cut off at some large finite time $T$, it can be approximated with a Riemann sum over time slices of width $\Delta t$. If $T = (2^{n+1}-1)\Delta t$, this sum can be evaluated on a quantum computer using a register of $n$ ancilla qubits in addition to a register used to store the state of the system, with a number of gates that scales polynomially with $n$ and the size of the system. The circuit used to calculate the Green's function is displayed in Fig. 3. The calculation begins with the quantum computer in the state $|0\rangle^{\otimes n}|\psi\rangle$, where all qubits in the ancilla register are in the state $|0\rangle$ and the system register is in the state $|\psi\rangle$ describing the unstable particle that will decay. A rotation $R_k$ with angle $\theta_k = \arctan e^{-2^k\eta\Delta t}$ is applied to the $k$th ancilla qubit. Up to normalization factors, the quantum computer is then in the state $\sum_{k=0}^{2^n-1} e^{-\eta k\Delta t}|k\rangle|\psi\rangle$. Controlled on the $k$th ancilla qubit, a time evolution operator for time $2^k\Delta t$ is applied to the system register. Finally, the quantum Fourier transform is applied to the ancilla qubits. Performing a measurement on both registers, the probability $P(m,\psi)$ that the ancilla register is in the state $|m\rangle$ and the system register is in the state $|\psi\rangle$ is directly proportional to the squared magnitude of the Riemann sum that approximates the Green's function evaluated at $\omega_m = \frac{2\pi m}{(2^{n+1}-1)\Delta t}$, and by repeatedly running this circuit, estimates for $P(m,\psi)$ can be obtained.
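The ancilla preparation step can be verified in isolation: rotating qubit $k$ by $\theta_k = \arctan e^{-2^k\eta\Delta t}$ and tensoring the qubits together puts an amplitude proportional to $e^{-\eta k\Delta t}$ on each basis state $|k\rangle$. A minimal numerical check (illustrative $n$, $\eta$, $\Delta t$):

```python
import numpy as np

# Each qubit j is rotated to cos(theta_j)|0> + sin(theta_j)|1> with
# theta_j = arctan(exp(-2^j * eta * dt)); the tensor product then carries
# amplitude proportional to exp(-eta * k * dt) on basis state |k>.
n, eta, dt = 4, 0.3, 0.1
qubits = [np.array([np.cos(t), np.sin(t)])
          for t in (np.arctan(np.exp(-2**j * eta * dt)) for j in range(n))]

state = np.array([1.0])
for q in reversed(qubits):                # qubit j carries bit j of the index k
    state = np.kron(state, q)

k = np.arange(2**n)
expected = state[0] * np.exp(-eta * dt * k)   # state[0] fixes normalization
```

The product works because the $|1\rangle/|0\rangle$ amplitude ratio of qubit $j$ is $\tan\theta_j = e^{-2^j\eta\Delta t}$, and the bits of $k$ multiply these factors into $e^{-\eta\Delta t\sum_j b_j 2^j} = e^{-\eta k\Delta t}$.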

Hybrid Approach
The circuit described in the previous section allows the Green's function to be computed using only quantum resources. However, that circuit requires many CNOT gates and ancilla qubits, which makes implementation on a near term quantum computer difficult. Previous work has introduced variational methods to compute the Green's function on near term quantum computers [40]. In this section, a method of computing Green's functions using the Hadamard test is introduced. This method only requires a single ancilla qubit, which makes it more suitable for near term quantum computers than the method in the previous section. The Hadamard test method [38] can be used to compute $\langle\psi|e^{-i\hat Ht}|\psi\rangle$ on a quantum computer for several time slices. For the circuit in Fig. 4a, $P(0) - P(1) = \operatorname{Re}\langle\psi|e^{-i\hat Ht}|\psi\rangle$, where $P(0)$ is the probability that the ancilla qubit is measured to be in the state $|0\rangle$ and $P(1)$ is the probability that it is measured to be in the state $|1\rangle$. For the circuit in Fig. 4b, $P(0) - P(1) = \operatorname{Im}\langle\psi|e^{-i\hat Ht}|\psi\rangle$. By running each of these two circuits $n$ times, $\langle\psi|e^{-i\hat Ht}|\psi\rangle$ can be computed with a statistical error that scales as $1/\sqrt{n}$.

FIG. 4: Circuits used in the Hadamard Test
Once $\langle\psi|e^{-i\hat Ht}|\psi\rangle$ has been computed for several time slices, the Green's function can be computed by classically performing a discrete Fourier transform. This requires a separate quantum circuit for each time slice, but the circuits used are shorter than the circuit in the previous section, which makes them better suited for implementation on near term quantum computers. Implementing the Hadamard test requires at most a polynomial overhead over the cost of implementing $e^{-i\hat Ht}$. Therefore, the quantum and classical resources needed to compute the Green's function with this method scale polynomially, as long as $|\psi\rangle$ can be prepared and time evolution can be performed on the quantum computer using polynomially many resources.
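The Hadamard test identities quoted above can be checked with a small statevector simulation. The helper below builds the ancilla-plus-system circuit directly as matrices (a generic sketch, not hardware code; the random Hamiltonian and time are illustrative):

```python
import numpy as np

def hadamard_test(U, psi, imag_part=False):
    """Statevector simulation of the Hadamard test; returns P(0) - P(1)."""
    dim = len(psi)
    I = np.eye(dim)
    Hg = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard on the ancilla
    state = np.kron(np.array([1.0, 0.0]), psi).astype(complex)
    state = np.kron(Hg, I) @ state
    if imag_part:                                    # S^dagger turns Re into Im
        state = np.kron(np.diag([1.0, -1.0j]), I) @ state
    CU = np.block([[I, np.zeros((dim, dim))],        # controlled-U on the system
                   [np.zeros((dim, dim)), U]])
    state = CU @ state
    state = np.kron(Hg, I) @ state
    return np.sum(np.abs(state[:dim])**2) - np.sum(np.abs(state[dim:])**2)

# Example: U = e^{-iHt} for a random Hermitian H, compared with direct value
rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Hmat = (A + A.conj().T) / 2
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
evals, V = np.linalg.eigh(Hmat)
U = V @ np.diag(np.exp(-1j * evals * 0.7)) @ V.conj().T
direct = psi.conj() @ (U @ psi)
```

On hardware, $P(0)-P(1)$ is of course estimated from repeated shots rather than read off the statevector, which is the source of the $1/\sqrt{n}$ statistical error.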
Appendix C: Error Scaling

Finite T and ∆t
This calculation is based on a Riemann sum approximation to an integral, so errors due to the finite time cutoff $T$ and the finite step size $\Delta t$ must be estimated. Since the magnitude of the integrand is bounded by $e^{-\eta t}$, truncating the integral at time $T$ introduces an error of at most $e^{-\eta T}/\eta$. Therefore, to determine the Green's function to within an accuracy $\epsilon$, $T$ must be taken to be $O\!\left(\frac{1}{\eta}\log\frac{1}{\eta\epsilon}\right)$. Using integration by parts, the error of the Riemann sum relative to the truncated integral can be bounded in terms of the number of time slices $n_t$. $|\psi\rangle$ can be expanded in the eigenbasis of $\hat H$ as $|\psi\rangle = \sum_n c_n|E_n\rangle$. The Hamiltonians to which this method of computing the Green's function will be applied have been renormalized such that the lowest energy state has zero energy, so it may be assumed that $E_n \ge 0$ for all $n$. If $\langle\psi|\hat H|\psi\rangle = E$, this yields a bound on the derivative of the integrand, and therefore on the number of time slices needed to determine the Green's function evaluated at $\omega + i\eta$ with accuracy $\epsilon$. Many implementations of Hamiltonian simulation on quantum computers do not implement the time evolution operator exactly [41][42][43][44]. To calculate the Green's function with accuracy $\epsilon$, the error in the implementation of the time evolution operator must be $O(\eta\epsilon)$. If the gate cost required to evolve to a time $T$ with accuracy $\delta$ is given by $p(T,\delta)$, the gate cost required to calculate the Green's function follows directly.

Finite Volume Errors

When scattering calculations are done inside a finite volume, the $L \to \infty$ limit should be taken first, followed by the $\eta \to 0$ limit. The order of these limits matters, as can be seen from a rearrangement of $\Gamma = -2\operatorname{Im}\langle\phi|\hat T|\phi\rangle$ as a discrete sum of Lorentzians of width $\eta$ centered on the energies $E_n$, where $|E_n\rangle$ are eigenstates of the full Hamiltonian, $|\phi\rangle$ is the state describing the unstable particle, $\hat V$ is the interaction piece of the Hamiltonian, and $M$ is the mass of the decaying particle. If the $\eta \to 0$ limit is taken first, this discrete sum goes to zero.
Alternatively, if the $L \to \infty$ limit is taken first, the energy levels become continuous and the sum becomes an integral. If $\eta \to 0$ is then taken, the Lorentzian term becomes a delta function and the usual expression for the decay rate is recovered.
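This order-of-limits statement can be seen numerically. In the sketch below, Γ is written as a discrete sum of Lorentzians over evenly spaced "finite volume" levels; the level density plays the role of $L$, and the weights are invented stand-ins for the squared matrix elements:

```python
import numpy as np

# Gamma rearranged as a discrete sum of Lorentzians of width eta over
# finite-volume levels E_n. Level spacing ~ 1/L; the weights shrink with
# the spacing, as squared matrix elements do in a box. Numbers are invented.
M, vbar2 = 1.0, 0.01

def gamma_eta(eta, n_levels):
    levels = np.linspace(0.0, 2.0, n_levels)   # no level sits exactly at M
    spacing = levels[1] - levels[0]
    weights = vbar2 * spacing
    return 2.0 * np.sum(weights * eta / ((M - levels)**2 + eta**2))

gamma_small_eta = gamma_eta(1e-6, 50)     # eta -> 0 first: sum collapses to 0
gamma_continuum = gamma_eta(0.05, 4000)   # dense levels first: finite answer
gamma_expected = 2.0 * np.pi * vbar2      # continuum value as eta -> 0
```

With η far below the level spacing the sum vanishes, while taking the spacing to zero first reproduces the continuum (delta function) result, exactly as described above.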
Finite volume errors in a $1 \to N$ particle decay rate will first be calculated in the case where all of the decay products are massive. If the interaction energy between the decay products can be ignored (as in the $L \to \infty$ limit), then for a $1 \to N$ decay the calculated decay rate in the $a \to 0$ limit is given by Eq. (C13), where $M$ is the mass of the decaying heavy particle and $m_k$ is the mass of the $k$-th decay product. In this expression, $\mathcal{M}$ is the scattering amplitude, which is generically an analytic function of the decay products' momenta, and $x$ is integrated over the region $[-L/2, L/2]^d$. The decay rate computed inside a finite volume at finite $\eta$ is then given by Eq. (C15). If instead the goal is to calculate a cross section for $2 \to N$ scattering in the center of mass frame, an initial state with two particles, each with energy $K_i$, must be prepared. The scattering cross section, $\sigma$, is given by the decay rate divided by the incident flux, which is equal to $|\vec v_1 - \vec v_2|/L^d$, where $\vec v_1$ and $\vec v_2$ are the velocities of the particles in the initial state. The extracted value for the cross section at finite volume and finite $\eta$, Eq. (C17), takes the same form as Eq. (C15) with $M$ replaced by $2K_i$ and slightly different prefactors, so the finite volume error analysis for cross sections proceeds in the same way as the decay rate analysis. To simplify the following discussion, finite volume errors will be computed only for decay rates.
As $L \to \infty$, the computed decay rate approaches Eq. (C18). The Poisson resummation formula, $\frac{1}{L^d}\sum_{\vec k} f(\vec k) = \sum_{\vec n\in\mathbb{Z}^d}\int\frac{d^dp}{(2\pi)^d} f(\vec p)\, e^{iL\vec n\cdot\vec p}$, where the sum on the left runs over the allowed finite volume momenta, expresses the finite volume error in the calculation of $\Gamma_\eta$ as a sum over the nonzero image terms $\vec n \neq 0$, Eq. (C21). Using a contour representation of the Lorentzian factor, with $\gamma$ a contour enclosing the lower right quadrant of the complex plane, Eq. (C21) can be rewritten as Eq. (C23). The component of $\vec p_k$ parallel to $\vec n_k$ can be integrated by contour integration, yielding Eq. (C24), where $\vec p_k^T$ is a vector integrated over directions perpendicular to $\vec n_k$. The integral over $\vec p_k^T$ can be performed in the large $L$ limit using the saddle point approximation, giving Eq. (C26). The $E_k$ integrals over $\gamma$ can be written as a sum of an integral over the positive real axis and one over the negative imaginary axis. For $k \in \sigma$, the $E_k$ integrals can again be evaluated with the saddle point approximation. The final set of $E_k$ integrals is performed by substituting $E_k = E f_k$, where the $f_k$ are integrated over the region $0 \le f_k \le 1$ with $\sum_k f_k = 1$. The integral over $E$ can be performed by contour integration over a contour enclosing the upper quadrant of the complex plane; one contribution comes from integrating along the positive imaginary axis, and one from the pole located at $E = M + i\eta\frac{|\sigma|+1}{N+1}$. The integrals in Eq. (C31) can be evaluated using the saddle point approximation, and using the fact that $|n_k L + \hat n_k\cdot\vec x| \ge (n_k - \frac{1}{2})L$, it follows that the image term contributions $A_n(\sigma)$ are suppressed for large $\eta L$, so the finite volume error vanishes in the limit described in Section II. The bound above was calculated under the assumption that all decay products are massive; however, this is not always the case. When there are massless particles, the decay rate calculated in finite volume is again given by Eq. (C13), and the infinite volume limit is given by Eq. (C18).
$\Gamma_{FV,\eta}$ is a Riemann sum approximation to $\Gamma_\eta$, and using the multidimensional bound on the Riemann sum error for integration over a $k$ dimensional hypercube with side length $R$, it can be seen that $\delta\Gamma_{FV}$ is $O\!\left(\frac{1}{\eta^2 L}\right)$ for small $\eta$.

Finite η Errors
The value of $\Gamma$ calculated for a $1 \to N$ particle decay at finite $\eta$ in an infinite volume, $\Gamma_\eta$, is given by Eq. (C38) in terms of the forward scattering amplitude, where $M$ is the mass of the decaying particle, $m_k$ is the mass of the $k$-th decay product and $\mathcal{M}$ is the scattering amplitude for the given decay channel. The decay rate $\Gamma$ is given by $\lim_{\eta\to 0}\Gamma_\eta$. It will be shown in this section that $\delta\Gamma_\eta = \Gamma_\eta - \Gamma$ is $O(\eta)$ for small $\eta$. Changing to spherical coordinates and then substituting $E_k = f_k E$, the smeared rate can be expressed in the form of Eq. (C42), where the $f_k$ are integrated over the region $0 \le f_k \le 1$ with $\sum_k f_k = 1$. Integrating Eq. (C42) by parts, it suffices to show that the resulting integral of the derivative term is finite. Differentiating under the integral sign shows that this holds for $\eta > 0$: the integral in Eq. (C46) is finite due to the properties of the function $f(E)$ discussed above, and so $\delta\Gamma_\eta$ is $O(\eta)$ for small $\eta$.
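The $O(\eta)$ behavior can be illustrated directly: convolving a smooth stand-in for the decay integrand with the Lorentzian kernel $\frac{\eta/\pi}{(E-M)^2+\eta^2}$ produces an error at $E = M$ that shrinks roughly linearly as η is halved. The Gaussian profile below is an invented example, not the physical amplitude:

```python
import numpy as np

# Smear a smooth profile f(E) with a Lorentzian of width eta and compare
# the result at E = M with the unsmeared value f(M) = 1.
M = 1.0
E = np.linspace(-9.0, 11.0, 400001)
dE = E[1] - E[0]
f = np.exp(-(E - M)**2)                 # smooth, invented profile; f(M) = 1

def smeared(eta):
    kernel = (eta / np.pi) / ((E - M)**2 + eta**2)
    return np.sum(kernel * f) * dE      # simple Riemann sum on a fine grid

errors = [abs(smeared(eta) - 1.0) for eta in (0.2, 0.1, 0.05)]
```

Halving η roughly halves the error, consistent with $\delta\Gamma_\eta = O(\eta)$; a quadratically convergent kernel would instead quarter it.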

Particle Number Truncation
When a quantum field theory is simulated on a quantum computer, its degrees of freedom must be truncated. For the bosonic theories considered in this paper, this was done by simulating the theory on a finite lattice with truncated particle numbers. The calculations in the previous sections bounded the error in the computed decay rate due to the finite lattice; in this section, the error due to the particle number truncation is calculated. The scattering T matrix can be computed from the recurrence relation $\hat T(E) = \hat V + \hat V\frac{1}{E-\hat H_0+i\eta}\hat T(E)$, where $\hat H_0$ is the free part of the Hamiltonian describing the motion of free particles and $\hat V$ is the interaction part of the Hamiltonian. If $\hat P$ projects onto the finite particle subspace under consideration, the T matrix computed with this truncation satisfies the analogous recurrence relation with $\hat P$ inserted around the interaction. The difference $\delta\hat T$ between the actual T matrix and the T matrix computed with a particle number truncation can then be expanded in the neglected contributions, whose energy denominators involve the omitted states. Therefore, if the lightest particle in the theory has mass $m$ and the particle number is truncated at $n$, the error in $\hat T(E)$ due to the particle number truncation is $O\!\left(\frac{1}{E-m(n+1)}\right)$.
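A matrix-model caricature of this bound can be checked numerically: solve $\hat T = \hat V + \hat V\hat G_0\hat T$ as $\hat T = (1-\hat V\hat G_0)^{-1}\hat V$, with the basis index playing the role of particle number, and compare against solutions projected onto the lowest states. The free energies and coupling profile below are invented for illustration:

```python
import numpy as np

# Lippmann-Schwinger toy model: free energies 0, 1, 2, ... stand in for
# sectors of increasing particle number; couplings decay with "distance".
dim = 12
H0 = np.diag(np.arange(dim, dtype=float))
i = np.arange(dim)
V = 0.1 / (1.0 + np.abs(i[:, None] - i[None, :]))
E, eta = 0.5, 1e-3
G0 = np.linalg.inv((E + 1j * eta) * np.eye(dim) - H0)

def t_matrix(n_keep):
    """Forward T-matrix element with the space truncated to n_keep states."""
    P = np.zeros((dim, dim))
    P[:n_keep, :n_keep] = np.eye(n_keep)
    Vp = P @ V @ P
    return (np.linalg.inv(np.eye(dim) - Vp @ G0) @ Vp)[0, 0]

errs = [abs(t_matrix(n) - t_matrix(dim)) for n in (4, 6, 8, 10)]
```

The truncation error shrinks as the kept space grows, driven by the widening gap $|E - E_{\rm omitted}|$ in the energy denominators, in line with the bound above.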

Appendix D: Error Mitigation
While the Hadamard test enables the computation of matrix elements, it does not address errors due to imperfect gates on the device itself. To mitigate these errors, an extrapolation technique was used [45,46]. In each circuit, every CNOT was replaced with an odd number, $r$ (for $r = 3, 5, 7$), of CNOTs, and each amplitude was linearly extrapolated to $r = 0$. In the absence of noise, these additional CNOT gates would not change the outcome of the circuit.
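The extrapolation can be sketched with a simple noise model: the noisy expectation value decays roughly geometrically with the total CNOT count, and a linear fit in $r$ is carried back to $r = 0$. The ideal value, per-CNOT survival probability and circuit size below are all invented:

```python
import numpy as np

# Zero-noise extrapolation sketch: replace every CNOT by r CNOTs (r odd),
# model the signal decay, and extrapolate linearly back to r = 0.
ideal = 0.80                 # assumed noiseless expectation value
survival = 0.99              # assumed depolarizing survival per CNOT
n_cnot = 14                  # assumed CNOT count of the base circuit

rs = np.array([1, 3, 5, 7])
noisy = ideal * survival**(n_cnot * rs)

slope, intercept = np.polyfit(rs, noisy, 1)
mitigated = intercept        # linear extrapolation to zero CNOT error
```

The linear fit cannot remove the curvature of a truly geometric decay, but its $r = 0$ intercept still lands much closer to the ideal value than the unamplified $r = 1$ data point.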
This procedure reduces the error from imperfect implementation of CNOT gates on the quantum computer, but does not mitigate readout errors. To address readout errors, the default calibration matrix method included in the Qiskit Ignis package was used [47].
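The idea behind the calibration matrix method can be shown in a minimal one-qubit sketch. $A[i,j]$ is the probability of reading outcome $i$ when basis state $j$ was prepared; in practice $A$ is measured by preparing each computational basis state (Qiskit Ignis automates this), and the numbers below are invented:

```python
import numpy as np

# Readout mitigation: invert the measured confusion matrix A to recover the
# ideal outcome distribution from the raw one.
A = np.array([[0.97, 0.06],
              [0.03, 0.94]])
p_true = np.array([0.7, 0.3])        # ideal outcome distribution
p_raw = A @ p_true                   # distribution the noisy readout reports
p_mitigated = np.linalg.solve(A, p_raw)
```

With finite shot counts the inversion can produce slightly negative quasi-probabilities, which is why constrained least squares variants are sometimes used instead of a direct solve.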

Appendix E: Hamiltonian Simulation
The one site calculation done on IBM's quantum computer was performed in the momentum basis. While the gate cost of performing time evolution in the momentum basis does not scale to large lattices as well as in the position basis, it is suitable for small calculations [37]. With a single site, the Hamiltonian $H$ can be written explicitly in terms of the φ and χ occupation numbers, where $M$ is the mass of the heavy particle, $m$ is the mass of the light particle, $\Lambda$ is chosen to make the vacuum energy equal to zero, and $\delta M$ and $\delta m$ are the differences between the physical and bare masses. This Hamiltonian only couples states with the same parity in the number of χ particles, so only states with an even number of χ particles are needed. The mapping of basis states to qubit states is listed in Table II. Two qubits were used to store the state of the system and one ancilla qubit was used to implement the amplitude estimation procedure described in Appendix B 2.

The amplitude estimation procedure described in the previous section requires the implementation of a controlled time evolution operator, which was implemented using a Trotter-Suzuki decomposition $e^{-i\hat H\delta t} \approx \prod_k e^{-i\hat H_k\delta t}$, where the $\hat H_k$ are the individual terms of the one site Hamiltonian. Implementing a controlled version of this time evolution operator requires the ability to perform controlled unitary transformations of the form $e^{ic\hat 1\otimes\hat 1}$, $e^{i(c_1\hat X + c_2\hat Z)\otimes\hat 1}$ and $e^{i(c_1\hat X + c_2\hat Z)\otimes\hat X}$. A controlled $e^{ic\hat 1\otimes\hat 1}$ can be performed by applying the phase gate $\mathrm{diag}(1, e^{ic})$ to the control qubit. $c_1\hat X + c_2\hat Z$ is a 2×2 Hermitian matrix, and the matrix $U$ that maps the computational basis to the eigenbasis of this matrix can be found classically. Using this matrix $U$, it is straightforward to modify the textbook implementation of a controlled $\hat Z$ rotation [48] to a controlled rotation about $c_1\hat X + c_2\hat Z$, as shown in Fig. 5. A similar trick can be used to implement the $(c_1\hat X + c_2\hat Z)\otimes\hat X$ term, as shown in Fig. 6.

Appendix F: Gate Error Estimation

On NISQ era quantum computers, both the statistical error and the error due to imperfect implementation of logic gates on the quantum processor must be addressed. In general, the density matrix describing the state of the quantum computer is given by $\rho = (1-p)\rho_{\rm ideal} + p\sum_i E_i\rho_{\rm ideal}E_i^\dagger$, where $\rho_{\rm ideal}$ is the density matrix describing the state of the quantum computer if every gate were implemented perfectly, $p$ is the probability that there is an error anywhere in the circuit, and the $E_i$ are the Kraus operators describing the errors. The difference between the probability of a given measurement outcome observed on a real quantum computer and on an ideal quantum computer is $\mathrm{Tr}(\hat O\rho) - \mathrm{Tr}(\hat O\rho_{\rm ideal})$, where $\hat O$ is the projection operator corresponding to the measurement result. The term multiplying $p$ is a difference of probabilities, which must be bounded above by one. As a result, the difference between the probability of a given measurement observed on a real quantum computer and on an ideal quantum computer is bounded above by $p$.
For the calculation on IBM's Ourense quantum processor, p was calculated using the calibration data provided by IBM, and was used as an estimate of the error due to imperfect gate implementation.
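The bound $|P_{\rm noisy} - P_{\rm ideal}| \le p$ can be verified for a concrete mixed-state example. The pure state, error channel (maximally mixed, for simplicity) and value of $p$ below are invented for illustration:

```python
import numpy as np

# Mix an ideal pure state with an error state with probability p and check
# that the measurement probability shifts by at most p.
rng = np.random.default_rng(0)
dim = 4
v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
v /= np.linalg.norm(v)
rho_ideal = np.outer(v, v.conj())

p = 0.15
rho = (1 - p) * rho_ideal + p * np.eye(dim) / dim   # maximally mixed errors

O = np.zeros((dim, dim))
O[0, 0] = 1.0                         # projector onto one measurement outcome
diff = abs(np.trace(O @ rho).real - np.trace(O @ rho_ideal).real)
```

Because the shift is $p$ times a difference of two probabilities, the bound holds for any error channel, which is what justifies using the calibrated $p$ as an error estimate.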
Appendix G: Data from the Ourense Quantum Processor

The following tables contain the results of all computations run on the IBM Ourense quantum processor.