Digital quantum simulation of molecular dynamics and control

Optimally-shaped electromagnetic fields have the capacity to coherently control the dynamics of quantum systems and thus offer a promising means for controlling molecular transformations relevant to chemical, biological, and materials applications. Currently, advances in this area are hindered by the prohibitive cost of the quantum dynamics simulations needed to explore the principles and possibilities of molecular control. However, the emergence of nascent quantum-computing devices suggests that efficient simulations of quantum dynamics may be on the horizon. In this article, we study how quantum computers could be employed to design optimally-shaped fields to control molecular systems. We introduce a hybrid algorithm that utilizes a quantum computer for simulating the field-induced quantum dynamics of a molecular system in polynomial time, in combination with a classical optimization approach for updating the field. Qubit encoding methods relevant for molecular control problems are described, and procedures for simulating the quantum dynamics and obtaining the simulation results are discussed. Numerical illustrations are then presented that explicitly treat paradigmatic vibrational and rotational control problems, and also consider how optimally-shaped fields could be used to elucidate the mechanisms of energy transfer in light-harvesting complexes. Resource estimates, as well as a numerical assessment of the impact of hardware noise and the prospects of near-term hardware implementations, are provided for the latter task.


I. INTRODUCTION
The ability to accurately control the dynamics of quantum systems would have significant implications across the physical sciences. Shaped electromagnetic fields able to coherently interact with molecules on their natural length and time scales offer an unprecedented tool for realizing such control, and there is growing interest in using them to control quantum systems with chemical, biological, and materials applications [1][2][3][4]. One method for designing shaped fields capable of steering a quantum system towards a desired control target is quantum optimal control, whose original development in the 1980s was driven by the dream of tailoring laser fields to control the outcomes of chemical reactions, as depicted in Fig. 1. Optimal control has since been realized in numerous proof-of-concept experiments involving the control of branching ratios of chemical reactions [5], bond selective dissociation [6,7], molecular fragmentation [8], bond making [9], and isomerization in the liquid phase [10] as well as in the biologically-relevant retinal molecule bacteriorhodopsin [11]. Laboratory quantum optimal control demonstrations have also spanned numerous applications beyond the control of chemical reactions, including the control of molecular alignment and orientation [12], decoherence mitigation in gas-phase molecules [13], preparation of coherent superposition states in molecules at room temperature [14], molecular optimal dynamic discrimination [15], isotope selection [16], high harmonic generation [17], and energy flow in light-harvesting complexes [18]. Despite the promise of these experimental demonstrations, quantum optimal control has not yet found wide, practical application in molecular systems. A primary contributing factor is the lack of theoretical support.
That is, the prohibitive computational costs associated with performing accurate quantum control simulations limit our ability to identify new quantum control applications, design new quantum control experiments, and assess the feasibility of achieving desired control outcomes in a given experimental setting. These costs also limit the quality of the analyses that can be performed to probe the control mechanisms underlying quantum control experiments. The challenges arise from the fact that the computational memory and time costs associated with simulating quantum dynamical systems without approximations scale exponentially in the number of degrees of freedom in the system, termed the "curse of dimensionality." For molecular control simulations, this computational challenge is manifested in problems where the control of multiple coupled rotational, vibrational, and/or electronic degrees of freedom is sought.
A first solution to this challenge is to use a tractable, reduced model to simulate the field-induced quantum dynamics. Such models typically assume one or multiple approximations that can lead to deterioration of the solution accuracy, and while numerous approximate methods for quantum dynamics simulations have been developed, no method is suitable for every problem. For example, mean-field approaches such as time-dependent Hartree can be used to simulate controlled quantum molecular dynamics with costs polynomial in the system size, but these approaches often yield poor performance for systems with only a few degrees of freedom, or for systems whose degrees of freedom are strongly coupled [19]. Improvements in accuracy can be gained by using variants such as multi-configurational time-dependent Hartree, but these alternatives can have costs that scale exponentially in the system size and are therefore not suitable for large systems [20]. For simulations of controlled multielectron dynamics, time-dependent Hartree-Fock [21] and its variants can be used, but suffer from the same drawbacks. Alternatively, time-dependent density functional theory [22] can be used, but the choice of exchange-correlation functional yields approximations that are not well understood.

FIG. 1. The development of quantum optimal control theory was initially motivated by the goal of controlling selective dissociation reactions in molecules, as depicted, and it offered a flexible approach for realizing this goal. The development of femtosecond lasers and pulse-shaping technology in the 1990s subsequently provided the laboratory tools for the task, leading to several proof-of-principle demonstrations of control over selective dissociation using quantum optimal control [5][6][7][8]. Today, quantum optimal control has found applications across chemical, biological, and materials science [1,2].
Other approaches include tensor network methods such as the time-dependent density matrix renormalization group [23], which become prohibitively expensive as the simulation time increases.
A second solution is to seek alternative simulation settings that do not suffer from this exponential scaling problem: analog or digital quantum simulators show promise for this purpose [24]. Analog quantum simulation involves tuning a controllable quantum system such that it emulates the behavior or dynamics of another specific quantum system of interest. Analog quantum simulators could have high value for specific applications, and the use of analog quantum simulators for quantum control simulations has been explored recently. For example, in ref. [25], fields for preparing a particular correlated spin state were optimized using a gradient algorithm on an NMR quantum simulator, and in refs. [26,27], the design of bang-bang control protocols for one- and multi-qubit systems using noisy quantum devices was considered. Meanwhile, in ref. [28], quantum photonics was used for the analog simulation of quantum vibrational dynamics and control, where the initial state of ammonia was optimized to maximize the probability of molecular dissociation. However, due to their analog nature, there are concerns about the reliability of the solutions that such simulators produce when scaled to large systems [29,30].
A universal alternative to analog quantum simulation is digital quantum simulation, which can be used to simulate the dynamics of general quantum systems through a set of discrete operations. The most common model for digital quantum simulators is the circuit model of quantum computation, which can simulate arbitrary unitary evolutions using operations in the form of quantum circuits [31]. Research to develop circuit-model quantum computers has accelerated in recent years, and technologies based on superconducting qubits [32] and trapped ions [33] capable of implementing shallow quantum circuits on tens of qubits are currently available. To sustain this progress, there is a need to explore scientifically-relevant problems for which quantum computers offer clear advantages over their classical counterparts, and to define candidate problems for evaluating their performance.
In this article, we explore how a quantum computer could be used as a digital quantum simulator for the design of quantum optimal controls for molecular systems. To this end, we introduce a hybrid quantum-classical scheme combining (a) digital quantum simulation methods for simulating the molecular dynamics in polynomial time [34] with (b) classical optimization approaches to identify control fields for achieving a desired task. Our scheme offers a clear example of a scientific problem amenable to a quantum speedup, which we hope will serve to motivate current efforts advancing quantum computing devices. The first step of this scheme involves encoding the state and Hamiltonian of the molecular system under consideration into qubits for simulation on the quantum computer. To date, relatively little attention has been given to encoding procedures for simulating rotational and vibrational systems [35,36], which are common and important applications for quantum control. We address this by outlining a general encoding approach, and provide explicit details regarding applications to rotational and vibrational systems.
We also consider the applicability of this general scheme towards elucidating the mechanisms underpinning important light-matter interactions found in nature, such as the absorption of sunlight and transport of photoexcitations by pigments in light-harvesting complexes of photosynthetic organisms. This process marks the first stage of photosynthesis and is widely believed to involve quantum coherent excitonic dynamics at short timescales [37][38][39][40]. Numerical studies are needed for understanding this process, but require simulating the quantum dynamics of numerous coupled pigments interacting with a larger, thermal environment, which poses a significant computational challenge [41][42][43]. The mechanisms underlying photosynthesis can also be probed using two-dimensional electronic spectroscopy experiments, which produce maps of the energy transfer in light-harvesting complexes after a particular initial electronic excitation has been prepared [44,45]. Thus, in this latter setting, the ability to prepare the complex in a state that leads to the desired energy transfer dynamics is of paramount importance. We consider how optimally-shaped fields could be used to this effect, and estimate the qubit counts and circuit depths needed to perform the associated simulations on a quantum computer. Using this setting, we also consider the prospects of implementing our approach on current and near-term quantum hardware. In particular, we numerically analyze the performance of our algorithm in the presence of different levels of noise, using a model for quantum hardware based on trapped ions.
The remainder of the article is organized as follows. Section II provides an overview of quantum optimal control theory and the computational challenge associated with simulating the dynamics of molecular systems with multiple degrees of freedom. In Section III, we introduce a hybrid quantum algorithm that eliminates the computational challenge by performing the quantum dynamics simulation on a quantum computer, which can perform the simulation in polynomial time, while using a classical co-processor only to store and update the coefficients parameterizing the control field. Details regarding digital quantum simulation are given in Section III A, with the qubit encoding, Hamiltonian simulation, and qubit readout each discussed in subsections III A 1, III A 2, and III A 3, respectively. Details regarding the classical optimization are given in Section III B. A series of numerical illustrations are presented in Section IV. First, an illustration involving the control of bond stretching in hydrogen fluoride is described in Section IV A, followed by illustrations involving the controlled orientation of dipole-dipole coupled molecular rotors in Section IV B, the controlled state preparation in a light-harvesting complex in Section IV C, and a numerical analysis of the impacts of hardware noise on the algorithm performance in Section IV D. We conclude with a look to future research directions in Section V.

II. QUANTUM OPTIMAL CONTROL THEORY
An example of a general quantum optimal control problem relevant to this work consists of designing a control field f(t, {θ_i}) for t ∈ [0, T], parameterized by a set of coefficients θ_i, i = 1, · · · , K, that will achieve a desired control objective at the terminal time t = T. This can be posed as a minimization problem:

    min_{θ_i} J[T, {θ_i}],    (1)

where J[T, {θ_i}] is the objective function, which is formulated to include the control target, often along with additional criteria that can be defined to reflect the resources available in an associated laboratory implementation. One common choice is to seek low-energy fields for achieving a particular control target by including a term penalizing the field fluence [46]. For problems involving multiple control fields, the search is performed with respect to the set of coefficients {θ_i} which, taken together, parameterize the space of available controls. In vibrational control problems, the set {θ_i} often contains the amplitudes and phases of the frequency components of the laser field. For control problems involving longer time-scales, the pulse can often be modulated in the laboratory directly in the time-domain using an arbitrary waveform generator [47,48]. Quantum optimal control simulations are chiefly useful for identifying new quantum control applications, analyzing the feasibility of controlling new classes of quantum phenomena, and providing a basic understanding of controlled quantum dynamics. When numerically-designed control fields are applied to actual molecular systems in the laboratory, however, a significant loss of fidelity can occur. This can happen due to noise-based fluctuations in the applied field, uncontrolled interactions with the environment, and uncertainties in the molecular Hamiltonian, including uncertainties in the description of the molecular dipole moment, which couples the field and the molecular system.
Thus, it is important to identify fields that are robust to such errors and uncertainties if fields are sought for a direct laboratory implementation. This can be accomplished by including additional robustness criteria in J[T, {θ i }] [49,50], or by designing fields based on modeling uncertainties using a statistical distribution [51].
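The fluence-penalty term mentioned above can be sketched numerically. In the snippet below, the function names, the trapezoidal quadrature, and the penalty weight are illustrative assumptions on our part, not the paper's implementation:

```python
import numpy as np

def fluence_penalty(f_values, dt, weight):
    """Trapezoidal approximation of weight * integral of f(t)^2 over [0, T]."""
    f2 = np.asarray(f_values, dtype=float) ** 2
    return weight * dt * (np.sum(f2) - 0.5 * (f2[0] + f2[-1]))

def penalized_objective(target_term, f_values, dt, weight=1e-3):
    """J = control-target term plus a fluence penalty discouraging high-energy fields."""
    return target_term + fluence_penalty(f_values, dt, weight)
```

Minimizing the penalized objective then trades target fidelity against field energy, with the balance set by the weight.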
In simulations, the optimal control field parameters {θ_i} that minimize J[T, {θ_i}] are often sought iteratively. To evaluate J[T, {θ_i}] at each iteration, the dynamics of the quantum system under consideration, driven by the field f(t, {θ_i}) with a particular parameterization {θ_i}, must be simulated by solving the time-dependent Schrödinger equation:

    i (d/dt)|ψ(t)⟩ = H(t, {θ_i})|ψ(t)⟩,    (2)

where |ψ(t)⟩ is the system state at time t (we set ℏ = 1). The Hamiltonian H(t, {θ_i}) of the system is assumed to be "k-local", i.e., to contain interactions coupling up to k degrees of freedom (these are sometimes also called "k-body" interactions), for a constant k that does not scale with the total number of degrees of freedom M. In models of low-energy physics, e.g., those derived from quantum electrodynamics (QED), interactions are typically limited to two-body terms, such as the Coulomb interaction derived from the minimal coupling Hamiltonian in QED. There are some exotic settings where k-body interaction terms with k > 2 are present [52], but these are weak in comparison to the k = 2 terms, and moreover, k never scales with M, the number of elementary degrees of freedom. Thus, the k-local assumption is not a strong one. In general, H(t, {θ_i}) can be expressed in the dipole approximation as

    H(t, {θ_i}) = H_0 + H_c f(t, {θ_i}),    (3)

where H_0 is the time-independent molecular "drift" Hamiltonian, which contains all kinetic and field-free potential terms, including potentials due to fixed interactions between various degrees of freedom. The control Hamiltonian H_c describes the light-matter interaction underlying the coupling of the molecular dipole moment of the system to the applied field f(t, {θ_i}). In cases where multiple fields are applied, Eq. (3) also includes additional terms describing the coupling of each field to the system. Frontier applications of quantum optimal control often involve complex quantum molecular systems with multiple interacting degrees of freedom.
The Hilbert space dimension of such systems scales exponentially in the number of degrees of freedom present, leading to an explosion of the computational resources required for simulating the system dynamics, rapidly rendering quantum optimal control simulations intractable. As such, despite the breadth of theoretical research on quantum optimal control theory, applications of quantum optimal control to molecular systems with many degrees of freedom remain scarce.
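For small systems, the driven dynamics governed by Eqs. (2) and (3) can still be integrated classically, which serves as a useful reference point for the quantum simulations discussed below. A minimal numpy sketch, assuming a piecewise-constant field and an illustrative two-level Hamiltonian (the matrices and field are placeholders, not a molecular model):

```python
import numpy as np

def expm_hermitian(H, dt):
    """Unitary exp(-i H dt) for a Hermitian H, via eigendecomposition."""
    evals, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * evals * dt)) @ V.conj().T

def propagate(psi0, H0, Hc, field, dt):
    """Solve i d/dt |psi> = [H0 + f(t) Hc] |psi> for a piecewise-constant field."""
    psi = psi0.astype(complex)
    for f in field:  # one short-time propagator per constant field value
        psi = expm_hermitian(H0 + f * Hc, dt) @ psi
    return psi

# Illustrative two-level system: diagonal drift plus off-diagonal dipole coupling
H0 = np.diag([0.0, 1.0])
Hc = np.array([[0.0, 1.0], [1.0, 0.0]])
psi0 = np.array([1.0, 0.0])
field = 0.5 * np.sin(np.linspace(0.0, 10.0, 2000))
psiT = propagate(psi0, H0, Hc, field, dt=10.0 / 2000)
```

Because each step is an exact matrix exponential, the evolution is unitary and the state norm is conserved; the exponential memory cost discussed next arises because the matrices grow as d^M for M coupled degrees of freedom.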

III. HYBRID ALGORITHM FOR QUANTUM OPTIMAL CONTROL
A common goal of quantum optimal control simulations is the identification of a set of parameters {θ_i} describing a control field f(t, {θ_i}), t ∈ [0, T], that achieves a specified objective as well as possible at the terminal time T. Quantum optimal control simulations thus have two key components: the evaluation of the control objective function J[T, {θ_i}] for a particular set of control parameters {θ_i}, and the updates of {θ_i} according to a chosen optimization algorithm. We propose a hybrid quantum-classical scheme for these simulations, where a quantum computer is used for efficiently evaluating J[T, {θ_i}] by simulating the driven dynamics of a quantum system, while a classical coprocessor is used for updating the control parameters {θ_i} [53], as depicted in Fig. 2.
A. Digital quantum simulation

1. Qubit encoding
The initial task associated with simulating quantum dynamical systems is the choice of a finite representation for the system state and Hamiltonian. This is true for simulations on both quantum and classical hardware, as only finite computational resources are available in both settings. For continuous-variable systems such as molecules, whose Hilbert spaces are inherently infinite dimensional, simulations must be performed in a suitable truncated space. One approach is to represent the system state and associated operators in real space, using a finite mesh with differential operators represented using finite differences. Another approach is to represent the system state and associated operators using a particular finite set of basis functions, which are often chosen to be orthonormal.
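As a concrete example of the real-space approach, the kinetic-energy operator can be discretized on a uniform mesh with second-order central finite differences (atomic units, ℏ = 1; the grid sizes and mass below are illustrative choices):

```python
import numpy as np

def kinetic_fd(n_points, dx, mass):
    """Central-difference approximation of -1/(2m) d^2/dx^2 on a uniform mesh."""
    main = np.full(n_points, -2.0)
    off = np.ones(n_points - 1)
    laplacian = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / dx**2
    return -laplacian / (2.0 * mass)
```

Adding a diagonal potential matrix then yields a finite-dimensional Hamiltonian whose low-lying eigenstates converge as the mesh is refined; e.g., for V(x) = x²/2 the lowest eigenvalue approaches the harmonic-oscillator ground-state energy 1/2.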
After a finite representation is chosen, the corresponding quantum control problem must be encoded into qubits for implementation on a quantum computer. A variety of encodings have been developed for this purpose, e.g., refs. [36,54]; here, we focus on general basis set encodings that are relevant to quantum control problems [55]. For basis set encodings, the choice of basis set affects the cost of the initial qubit state preparation, the circuit depth and width required to simulate the dynamics of the molecular system, and the complexity of obtaining the value of J[T, {θ i }] at the terminal time. Consequently, the basis could be chosen with the goal of balancing all of these costs, or, if one task is particularly challenging, the basis could instead be chosen to minimize the complexity of this particular task.
In a basis set encoding, a set of d basis states {|q⟩}, q = 1, 2, · · · , d, can be mapped to qubit states using a standard binary mapping as

    |q⟩ → |b_1 b_2 · · · b_⌈log₂ d⌉⟩,    (4)

where b_1 b_2 · · · is the binary representation of q, while an arbitrary state can be represented as a superposition of the d basis states as |ψ(t)⟩ = Σ_{q=1}^{d} c(q, t)|q⟩, where c(q, t) is the probability amplitude associated with the basis state |q⟩ at time t. In this manner, basis set encodings can be used to encode the state of a quantum degree of freedom represented with d basis states in ⌈log₂ d⌉ qubits, where ⌈·⌉ is the ceiling function. The full 2^⌈log₂ d⌉-dimensional space associated with each degree of freedom is then spanned by the Pauli operator basis {B_ℓ}, where each

    B_ℓ = ⊗_{k=1}^{⌈log₂ d⌉} N_σ σ_k^{(ℓ)}    (5)

is a (normalized) Pauli string, N_σ = 1/√2 is a prefactor included for normalization, and σ_k^{(ℓ)} denotes one of the Pauli operators σ_x, σ_y, σ_z, or the identity. Thus, any operator A acting on the degree of freedom can be encoded into a weighted sum of Pauli strings by projecting it onto the Pauli basis as

    Ā = Σ_ℓ g_ℓ B_ℓ,    (6)

where Ā denotes the encoded version of the operator A and the coefficients g_ℓ = ⟨A, B_ℓ⟩_HS can be computed from the Hilbert-Schmidt inner product between A and each of the Pauli basis operators B_ℓ. If d is not an exact power of 2, then prior to the encoding, the d × d dimensional matrix A should be expanded to 2^⌈log₂ d⌉ × 2^⌈log₂ d⌉ dimensions by adding zeros.
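The projection onto the Pauli basis can be sketched directly in numpy. Below, the Hilbert-Schmidt inner product is taken against normalized Pauli strings (HS norm 1), so the coefficients are g_ℓ = Tr(B_ℓ† A); the helper names are ours:

```python
import numpy as np
from itertools import product

# One-qubit Pauli matrices (unnormalized); each factor carries N_sigma = 1/sqrt(2)
PAULIS = {"I": np.eye(2), "X": np.array([[0, 1], [1, 0]]),
          "Y": np.array([[0, -1j], [1j, 0]]), "Z": np.diag([1.0, -1.0])}

def pauli_string(label):
    """Tensor product of normalized Paulis, e.g. 'XZ' -> (X/sqrt2) ⊗ (Z/sqrt2)."""
    B = np.array([[1.0]])
    for s in label:
        B = np.kron(B, PAULIS[s] / np.sqrt(2))
    return B

def decompose(A):
    """Project a 2^n x 2^n operator onto the normalized Pauli basis:
    A = sum_l g_l B_l with g_l = Tr(B_l^dag A)."""
    n = int(np.log2(A.shape[0]))
    coeffs = {}
    for label in map("".join, product("IXYZ", repeat=n)):
        g = np.trace(pauli_string(label).conj().T @ A)
        if abs(g) > 1e-12:  # keep only nonzero terms
            coeffs[label] = g
    return coeffs

def reconstruct(coeffs):
    """Rebuild A = sum_l g_l B_l from its Pauli coefficients."""
    return sum(g * pauli_string(label) for label, g in coeffs.items())
```

For a Hermitian A the coefficients come out real, and summing the weighted strings recovers A exactly, since the normalized Pauli strings form an orthonormal operator basis.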
For molecular control problems, we are primarily concerned with qubit encodings relevant to systems consisting of multiple coupled degrees of freedom. The framework outlined above can be straightforwardly generalized to such cases. Namely, for quantum systems with M coupled degrees of freedom, each represented using d basis states,

    N = M ⌈log₂ d⌉    (7)

qubits can be used to represent the full system state in the associated 2^N-dimensional Hilbert space as |ψ(t)⟩ = Σ_{q_1,··· ,q_M} c(q_1, · · · , q_M, t)|q_1, · · · , q_M⟩, using M registers containing ⌈log₂ d⌉ qubits each. Eq. (7) states that the number of qubits N required to represent |ψ(t)⟩ on a quantum computer scales linearly in the number of degrees of freedom M. This can be contrasted with the memory resources needed to represent |ψ(t)⟩ on a classical computer, which scale exponentially as d^M.
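The linear-versus-exponential comparison is simple to tabulate; a trivial sketch:

```python
import math

def qubit_count(M, d):
    """Qubits needed for M degrees of freedom with d basis states each: M * ceil(log2 d)."""
    return M * math.ceil(math.log2(d))

def classical_amplitudes(M, d):
    """Complex amplitudes a classical state-vector simulation must store: d^M."""
    return d ** M
```

For example, M = 3 vibrational modes with d = 16 basis states each require 12 qubits, whereas a classical state vector holds 16³ = 4096 complex amplitudes, and the gap widens rapidly with M.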
The operator encoding for the associated k-local Hamiltonian can be performed according to Eq. (6) as

    H̄(t, {θ_i}) = Σ_{ℓ=1}^{L} g_ℓ(t, {θ_i}) B_ℓ,    (8)

where the number of Pauli strings L in the decomposition has the upper bound

    L ≤ C(M, k) 4^{k⌈log₂ d⌉} = O(M^k d^{2k}),    (9)

where C(M, k) is the binomial coefficient, and where each Pauli string acts nontrivially on a 2^{k⌈log₂ d⌉}-dimensional space (for further details, see Appendix A). These L operators can be computed by decomposing each local term in the Hamiltonian classically, which requires the classical resources to store and manipulate the associated 2^{k⌈log₂ d⌉} × 2^{k⌈log₂ d⌉} dimensional matrices. For systems with multiple electronic degrees of freedom, Fermi statistics must also be enforced (e.g., by using the Jordan-Wigner, parity, or Bravyi-Kitaev mappings, which automatically enforce Fermi statistics at the operator level [54]).

2. Hamiltonian simulation
At the outset, the qubits must be prepared in the state |ψ(0)⟩ encoding the initial condition of the molecular system. Then, the system's time evolution can be simulated by applying a quantum circuit to approximate the quantum time evolution operator U(T, 0), defined as the solution to Eq. (2), given by

    U(T, 0) = T exp(−i ∫₀^T H(t, {θ_i}) dt),

where T denotes the time-ordering operator. The control field is assumed to be piecewise-constant over a sequence of N_t time steps of length ∆t, and consequently, U(T, 0) can be computed as the time-ordered product U(T, 0) = U(T, T − ∆t) · · · U(2∆t, ∆t) U(∆t, 0), where each term in the product is generated by a time-independent Hamiltonian and can be approximated using product formulas in polynomial time [34].
After the actual molecular Hamiltonian has been encoded as H̄, it is expressed as a weighted sum of Pauli strings according to Eq. (8), where for k-local Hamiltonians, L grows polynomially with the number of qubits N as per Eq. (9). Then, for a single time step beginning at time t_j, over which the field is constant, the first-order product formula is given by

    U_PF1(t_j + ∆t, t_j) = [ ∏_{ℓ=1}^{L} e^{−i g_ℓ(t_j) B_ℓ ∆t/n} ]^n,    (10)

where n is the so-called Trotter number, which defines the accuracy of the approximation (i.e., for n → ∞, the first-order product formula is exact) [34]. The error incurred from using the first-order product formula over a single time step is bounded by

    ‖U_PF1(t_j + ∆t, t_j) − U(t_j + ∆t, t_j)‖ = O((L Λ(t_j) ∆t)²/n),    (11)

where Λ(t_j) = max_ℓ |g_ℓ(t_j)| and ‖·‖ denotes the spectral norm [56]. The total error ε_PF1(T, 0) can be bounded using the triangle inequality by the sum of errors over each time-step [31], to yield

    ε_PF1(T, 0) = O(N_t (L Λ_max ∆t)²/n),    (12)

where Λ_max = max_j Λ(t_j) and N_t = T/∆t. Higher-order product formulas can be used to improve the accuracy of the approximation [57], and can be defined recursively as

    U_PF(2p)(t_j + ∆t, t_j) = [U_PF(2p−2)(γ_p ∆t)]² U_PF(2p−2)((1 − 4γ_p)∆t) [U_PF(2p−2)(γ_p ∆t)]²,    (13)

where γ_p = (4 − 4^{1/(2p−1)})^{−1}, which can be seeded with the second-order (symmetrized) formula

    U_PF2(t_j + ∆t, t_j) = [ ∏_{ℓ=1}^{L} e^{−i g_ℓ(t_j) B_ℓ ∆t/(2n)} ∏_{ℓ=L}^{1} e^{−i g_ℓ(t_j) B_ℓ ∆t/(2n)} ]^n,    (14)

such that

    U_PF(2p)(T, 0) = U_PF(2p)(T, T − ∆t) · · · U_PF(2p)(∆t, 0)    (15)

and

    ε_PF(2p)(T, 0) = ‖U_PF(2p)(T, 0) − U(T, 0)‖.    (16)

The total error associated with 2p-th-order product formulas is [58]

    ε_PF(2p)(T, 0) = O(N_t (L Λ_max ∆t)^{2p+1}/n^{2p}).    (17)

The cost of approximating U(T, 0) on a quantum computer can be quantified by the number of qubits (i.e., memory) and the circuit depth (i.e., run time) required. Product formulas require no ancilla qubits, and so the number of qubits N needed is the same as the number of qubits required for encoding the state and Hamiltonian, and is given in Eq. (7). This stands in contrast to other quantum algorithms developed for simulating the time evolution of quantum systems, such as the Taylor series algorithm [59] and algorithms based on quantum walks, such as the quantum signal processing algorithm [60,61], which offer improved error scaling, but each require additional ancilla qubits. As such, product formulas may have greater utility for early devices with limited qubit counts. The asymptotic scaling of the circuit depth D, quantified by the number of applications of e^{−i B_ℓ τ} for arbitrary τ, is

    D_PF1 = O(L³ Λ_max² T²/ε)    (18)

for the first-order product formula and

    D_PF(2p) = O(5^{2p} L² Λ_max T (L Λ_max T/ε)^{1/(2p)})    (19)

for 2p-th-order product formulas [58].
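The 1/n convergence of the first-order formula can be checked numerically for a toy two-qubit Hamiltonian written as a sum of non-commuting terms; the term choices and evolution time below are illustrative:

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def expm_h(H, t):
    """exp(-i H t) for Hermitian H via eigendecomposition."""
    evals, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * evals * t)) @ V.conj().T

def trotter1(terms, t, n):
    """First-order product formula [prod_l exp(-i H_l t/n)]^n."""
    step = np.eye(terms[0].shape[0], dtype=complex)
    for H in terms:
        step = expm_h(H, t / n) @ step
    return np.linalg.matrix_power(step, n)

def trotter_error(terms, t, n):
    """Spectral-norm distance between the product formula and the exact evolution."""
    return np.linalg.norm(trotter1(terms, t, n) - expm_h(sum(terms), t), ord=2)

# Two non-commuting terms: X⊗X and 0.5 Z⊗I
terms = [np.kron(X, X), 0.5 * np.kron(Z, np.eye(2))]
err8 = trotter_error(terms, 1.0, 8)
err64 = trotter_error(terms, 1.0, 64)
```

Increasing the Trotter number n from 8 to 64 shrinks the error by roughly the factor 8 expected from the O(1/n) first-order scaling.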
The expressions given in Eqs. (12), (17), (18), and (19) are known to be very loose, and consequently, the circuit depths required in practice to achieve an error bounded by some ε can be expected to be far lower (e.g., orders of magnitude lower [56]) than the depths given by Eqs. (18) and (19). General quantum circuits for approximating U(T, 0) using product formulas can be designed using the observation that a quantum circuit able to implement e^{−i B_ℓ τ_ℓ(t)} for an arbitrary scalar τ_ℓ(t) is sufficient, as the full quantum algorithm can be constructed as a simple concatenation of circuits with this basic structure. Fig. 3 illustrates how these basic circuits can be formed, where for N qubits, the associated circuit depth required scales as O(N). Although the procedure outlined in Fig. 3 can always be used to form quantum circuits to implement the algorithm, and Eqs. (18) and (19) are useful for determining general bounds on circuit depth, significant gains can often be realized by using quantum compilers that seek to minimize the dominant costs (e.g., two-qubit gates for noisy devices, or T-gates for error-corrected devices) when translating product formula algorithms into quantum circuits for implementation on particular hardware.

FIG. 3. (a) Construction of product-formula circuits from the basic structure e^{−i B_ℓ τ_ℓ(t)}; an example circuit for the Pauli string σ_x σ_x σ_y σ_y is presented in (iv), where τ_{L−1}(k∆t) contains the normalization prefactor. (b) Details on composing the basic circuit structure e^{−i B_ℓ τ_ℓ(t)} from CNOT gates and one-qubit rotations R_σ(θ) ≡ e^{−iσθ/2}. In (i), the basic circuit associated with the two-qubit Pauli string σ_z σ_z (N = 2) is shown, while (ii) shows how to generalize this circuit structure to additional qubits. Circuits (iii) and (iv) illustrate how to account for σ_x and σ_y terms, by adding one-qubit rotations that transform the σ_z operations to σ_x or σ_y operations [62]. These structures can be straightforwardly generalized to Pauli strings on any number of qubits, with any combination of Pauli operators [63].
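The basic circuit identity underlying Fig. 3(b)(i), namely that an R_z rotation conjugated by CNOTs implements e^{−i(θ/2) σ_z ⊗ σ_z}, can be verified directly with matrices. A sketch (gate conventions assume the control is the first qubit):

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
# CNOT with control on the first qubit, target on the second (basis 00,01,10,11)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

def Rz(theta):
    """One-qubit rotation R_z(theta) = exp(-i Z theta / 2)."""
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

def zz_circuit(theta):
    """CNOT-conjugated Rz on the target implements exp(-i theta/2 Z⊗Z)."""
    return CNOT @ np.kron(I2, Rz(theta)) @ CNOT

def zz_exact(theta):
    ZZ = np.kron(Z, Z)
    evals, V = np.linalg.eigh(ZZ)
    return (V * np.exp(-1j * evals * theta / 2)) @ V.conj().T
```

The CNOT ladder collects the joint parity onto the target qubit, the rotation applies a parity-dependent phase, and the second CNOT uncomputes the parity; extending the ladder accounts for longer σ_z strings, and basis-change rotations handle σ_x and σ_y factors.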
3. Qubit readout

At the culmination of the quantum circuit, the qubits must be measured to estimate the value of J[T, {θ_i}]. Expectation values of σ_z operators can be obtained directly from measurements in the computational basis, while one-qubit rotations can be applied to rotate the computational basis into the desired σ_x or σ_y basis prior to measurements [43,64], and, similarly, to measure the expectation values of multiqubit operators such as σ_x σ_y σ_z, the results of one-qubit measurements (in the appropriate rotated bases) can be multiplied together.

B. Classical optimization
The control field optimization is accomplished iteratively using a classical optimization routine, which seeks to identify the set of control parameters that minimize J[T, {θ_i}]. Global evolutionary strategies such as genetic algorithms have often been employed for this purpose [5][6][7]. Gradient algorithms can also be used [65], although on quantum computers, obtaining the gradient information needed to implement these approaches requires additional measurements. Namely, if the value of J[T, {θ_i}] can be estimated in O(m) measurements, O(Km) additional measurements are required to estimate the K gradient components ∂J[T, {θ_i}]/∂θ_j, j = 1, 2, · · · , K, via finite differences. This substantially increases the per-iteration cost of optimization compared to gradient-free algorithms [66]. However, gradient algorithms may require fewer iterations to converge, suggesting that the choice of optimization method should be made with the balance between measurement costs and classical optimization effort in mind.
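The measurement-count argument can be illustrated with a classical stand-in for J[T, {θ_i}]: a forward-difference gradient needs one extra objective evaluation per parameter beyond the base point, i.e., K + 1 evaluations in total, each of which would cost O(m) measurements on hardware. The function names and the toy quadratic objective below are illustrative:

```python
import numpy as np

def fd_gradient(J, theta, eps=1e-6):
    """Forward-difference gradient of a black-box objective J(theta).
    Returns the gradient and the number of objective evaluations used."""
    theta = np.asarray(theta, dtype=float)
    J0 = J(theta)                       # 1 evaluation at the base point
    grad = np.zeros_like(theta)
    evaluations = 1
    for j in range(theta.size):         # K more evaluations, one per parameter
        shifted = theta.copy()
        shifted[j] += eps
        grad[j] = (J(shifted) - J0) / eps
        evaluations += 1
    return grad, evaluations

# Toy quadratic standing in for J[T, {theta_i}]
J = lambda th: float(np.sum((th - 1.0) ** 2))
g, n_eval = fd_gradient(J, [0.0, 2.0, 3.0])
```

On a quantum device, each evaluation of J is itself a statistical estimate, so eps must also be chosen with the shot noise in mind.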

IV. NUMERICAL ILLUSTRATIONS
In this section, we numerically investigate the performance of first-, second-, and fourth-order product formulas for simulating the controlled dynamics of three model systems. For each system, the qubit encoding method used for our numerical analyses is described, with full details of the Hamiltonian and objective function mappings provided in Appendix A. We quantify the product formula performance by the Trotter error ‖U_PF(k)(T, 0) − U(T, 0)‖ and by the objective function error |J_PF(k) − J|, where U(T, 0) and J denote the time evolution operator and objective function value in the numerically exact limit of n → ∞, respectively, while U_PF(k)(T, 0) and J_PF(k) are the corresponding values when a k-th order product formula is employed. In addition, possibilities for extensions towards more complex control applications are discussed for each case. We conclude this section with an analysis of the effects of hardware noise on the performance of the algorithm.

A. Controlled bond stretching in HF
We first consider controlling the bond displacement of the diatomic molecule hydrogen fluoride (HF), modeled as a nonrotating Morse oscillator on the ground electronic state in the Born-Oppenheimer approximation [67]. Morse oscillators have been used extensively in the literature as simple yet nontrivial proof-of-concept models for testing new quantum control ideas [46,[68][69][70]. The drift Hamiltonian is given by

    H_0 = p²/(2m) + V(r),    (20)

where r is the bond coordinate operator, p is the associated momentum operator, m = 1732 m_e is the reduced mass of HF, and V(r) = D(1 − e^{−α(r−r_0)})² is the anharmonic Morse potential with equilibrium bond position r_0 = 1.75 a_0, well depth D = 0.2101 E_h, and potential variation parameter α = 1.22 a_0^{−1}. We assume that the polarization of the field f(t) is aligned with the system's dipole moment. Then, the control Hamiltonian in the dipole approximation is H_c = −µ(r), modeled by the function

    µ(r) = µ_0 r e^{−βr⁴},    (21)

where µ_0 = 0.4541 a.u. specifies the strength of the dipole, and the parameter β = 0.0064 a_0^{−4} governs the bond length at which the dipole moment is maximal [71]. The form of this dipole moment function captures the fact that bonds of zero or infinite length do not contribute to the dipole moment, and its two parameters have been fitted to ab initio data [72].
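The model functions can be written down directly from the quoted parameter values. A sketch in atomic units, assuming the commonly used dipole form µ(r) = µ_0 r e^{−βr⁴} (consistent with the a_0^{−4} units of β, but an assumption on our part):

```python
import numpy as np

# HF Morse-oscillator model parameters from the text (atomic units)
R0 = 1.75        # equilibrium bond length, a0
DEPTH = 0.2101   # well depth D, Eh
ALPHA = 1.22     # potential variation parameter, 1/a0
MU0 = 0.4541     # dipole strength, a.u.
BETA = 0.0064    # dipole range parameter, 1/a0^4

def morse(r):
    """Anharmonic Morse potential V(r) = D (1 - exp(-alpha (r - r0)))^2."""
    return DEPTH * (1.0 - np.exp(-ALPHA * (r - R0))) ** 2

def dipole(r):
    """Model dipole moment mu(r) = mu0 * r * exp(-beta r^4): vanishes at r = 0
    and as r -> infinity (assumed functional form, see lead-in)."""
    return MU0 * r * np.exp(-BETA * r ** 4)
```

The potential vanishes at r_0 and saturates at the well depth D for large r, while the dipole rises from zero, peaks at an intermediate bond length, and decays again at large separation.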
We consider the task of driving the bond to a target bond length γ = 1.5 r_0 at the terminal time T, and formulate the associated control objective function as

    J_v[T, {θ_i}] = (⟨ψ(T)|r|ψ(T)⟩ − γ)².    (22)

This simple quantum control example can be extended to design fields for achieving controlled dissociation, e.g., by setting the target bond length γ to be sufficiently large. For dissociation, additional terms could also be added to J_v[T, {θ_i}] that require the energy to be greater than the dissociation energy, or for the momentum to be positive (i.e., such that the atoms are moving apart), at the terminal time T. The model could also be extended to simulate vibrational dynamics on multiple coupled electronic states, or to multiple coupled vibrational degrees of freedom, in order to simulate the vibrational dynamics of more complex systems.
To perform the qubit encoding, H(t) is represented in a basis truncated to d harmonic oscillator eigenfunctions, by evaluating its matrix elements in the harmonic oscillator basis,

⟨v|H(t)|v'⟩ = ∫ dr φ_v*(r) H(t) φ_{v'}(r),

where φ_v(r) denotes the v-th harmonic oscillator eigenfunction. To perform the quantum dynamics simulation, the initial condition for the oscillator is set as the harmonic ground state (in a truncated basis of size 2^N) |ψ(0)⟩ = |0⟩^{⊗N}, which well-approximates the true ground state of the Morse oscillator and whose encoded qubit state is simple to prepare. The dynamics can be simulated using product formulas, and at the culmination of the quantum circuit, the qubits can be read out to determine J_v[T, {θ_i}] by evaluating the expectation values of the d log_2(d)/2 Pauli strings in the qubit operator encoding for r. The explicit operator encodings for H_0, H_c, and r are given in Appendix A. Fig. 4a shows a field that achieves J_v[T, {θ_i}] = 0.01 in the numerically exact n → ∞ limit for an oscillator represented using d = 16 harmonic oscillator eigenfunctions (i.e., N = 4 qubits), where T = 290 fs and Δt = 0.024 fs. The field is parameterized as

f(t, {θ_i}) = ε(t) Σ_i a_i cos[(ω_i + Δ_i)t + φ_i],

where ε(t) = sin^{1/p}(πt/T) is an envelope function whose width is defined by the parameter p, the ω_i are fixed reference frequencies, and a_i, Δ_i, and φ_i denote the amplitude, detuning, and phase associated with the i-th frequency component, respectively. Optimal fields could be designed by optimizing over the set of control parameters {θ_i} = {p, a_i, Δ_i, φ_i}. The solid curves in Fig. 4b show the total Trotter error ϵ(T, 0) associated with using first-, second-, and fourth-order product formulas to simulate the field-induced dynamics from t = 0 to t = T, as a function of the Trotter number n.

B. Controlled orientation of dipole-dipole coupled rotors

We next consider the problem of controlling the orientations of two dipole-dipole coupled carbonyl sulfide (OCS) molecules, modeled as linear rigid rotors in a plane.
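The basis-truncation step described above can be sketched numerically. The fragment below (an illustrative sketch, not the authors' implementation) builds the d × d matrix of the Morse Hamiltonian in a harmonic oscillator basis matched to the well frequency ω = α√(2D/m), constructing the position and momentum operators from ladder-operator matrix elements and evaluating the potential spectrally within the truncated basis (a quadrature-style approximation):

```python
import numpy as np

# HF Morse parameters in atomic units, as in the text.
m, r0, D, alpha = 1732.0, 1.75, 0.2101, 1.22
omega = alpha * np.sqrt(2.0 * D / m)   # harmonic frequency of the Morse well

def morse_hamiltonian(d):
    """Drift Hamiltonian of the Morse oscillator in a truncated basis of
    d harmonic-oscillator eigenfunctions (illustrative sketch)."""
    v = np.arange(d)
    # Displacement x = r - r0: x_{v,v+1} = sqrt((v+1) / (2 m omega))
    x = np.zeros((d, d))
    off = np.sqrt((v[:-1] + 1) / (2.0 * m * omega))
    x[v[:-1], v[:-1] + 1] = off
    x[v[:-1] + 1, v[:-1]] = off
    # Momentum: p_{v,v+1} = -i sqrt(m omega (v+1) / 2), Hermitian conjugate below
    p = np.zeros((d, d), dtype=complex)
    offp = np.sqrt(m * omega * (v[:-1] + 1) / 2.0)
    p[v[:-1], v[:-1] + 1] = -1j * offp
    p[v[:-1] + 1, v[:-1]] = 1j * offp
    T = (p @ p).real / (2.0 * m)       # kinetic energy p^2 / (2m)
    # Potential: evaluate V on the eigenvalues of x and rotate back
    xe, U = np.linalg.eigh(x)
    V = U @ np.diag(D * (1.0 - np.exp(-alpha * xe))**2) @ U.T
    return T + V
```

As a sanity check, the lowest eigenvalue of the d = 16 matrix should approximate the exact Morse ground-state energy E_0 = ω/2 - ω^2/(16D) ≈ 0.0094 E_h.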
Experimentally, systems of planar molecular rotors could be formed by adsorbing cold molecules onto a surface or trapping them in an optical lattice, while shaped microwave control fields can be produced with an arbitrary waveform generator [47,48]. The controlled orientation of OCS molecules has been the subject of laboratory studies [73,74] due to the importance of molecular orientation in applications including chemical reactions [75-77] and high harmonic generation [78]. Furthermore, the controlled orientation of dipole-dipole coupled OCS rotors using quantum optimal control has been studied theoretically in [79,80].
The drift Hamiltonian of the coupled rotor system is given by

H_0 = H_rot,1 + H_rot,2 + H_dd,

where the field-free, single-rotor Hamiltonian for the i-th rotor is

H_rot,i = B L_i^2,

where B = 4.03 × 10^{-24} J is the rotational constant of OCS [81] and

L_i^2 = -∂^2/∂ϕ_i^2

is the (dimensionless) squared angular momentum operator of rotor i. Here, ϕ_i is the angular coordinate operator of the i-th rotor, and the angular coordinate represents the angle of the rotor's dipole moment with respect to the polarization direction of the field, assumed to be along the x̂-axis. The interaction describing the dipole-dipole coupling between the two rotors is

H_dd = (μ^2 / (4π ε_0 R_12^3)) [ (1 - 3 cos^2 θ_12) cos ϕ_1 cos ϕ_2 + (1 - 3 sin^2 θ_12) sin ϕ_1 sin ϕ_2 - 3 sin θ_12 cos θ_12 (cos ϕ_1 sin ϕ_2 + sin ϕ_1 cos ϕ_2) ],

where ε_0 is the vacuum permittivity, R_12 = |R_12| = 3 nm is the distance between the two rotors, and θ_12 = π/2 is the angle between the inter-rotor vector R_12 and the x̂-axis. A schematic of the coupled rotor system is provided in Fig. 5a.
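In the angular momentum basis introduced below, L_i^2 is diagonal and cos ϕ and sin ϕ act as shift operators between adjacent m values. The following sketch (an illustration under those standard basis conventions, not the authors' code) assembles the two-rotor drift Hamiltonian in SI units:

```python
import numpy as np

B = 4.03e-24            # rotational constant of OCS, J
mu = 2.36e-30           # OCS permanent dipole moment, C*m
eps0 = 8.8541878128e-12 # vacuum permittivity, F/m
R12 = 3e-9              # rotor separation, m
theta12 = np.pi / 2     # angle between R_12 and the x-axis

M = 3
d = 2 * M + 1           # 7 levels per rotor, m = -3, ..., 3
mvals = np.arange(-M, M + 1)

# L^2 is diagonal with eigenvalues m^2; exp(-i phi) shifts m+1 -> m, so
# cos(phi) and sin(phi) couple neighboring m values.
L2 = np.diag(mvals.astype(float)**2)
shift = np.diag(np.ones(d - 1), k=1)       # matrix of exp(-i phi)
cosphi = 0.5 * (shift + shift.T)
sinphi = (shift.T - shift) / 2j

I = np.eye(d)
H_rot = B * (np.kron(L2, I) + np.kron(I, L2))

g = mu**2 / (4 * np.pi * eps0 * R12**3)    # dipole-dipole prefactor, J
c2, s2 = np.cos(theta12)**2, np.sin(theta12)**2
H_dd = g * ((1 - 3 * c2) * np.kron(cosphi, cosphi)
            + (1 - 3 * s2) * np.kron(sinphi, sinphi)
            - 3 * np.sin(theta12) * np.cos(theta12)
              * (np.kron(cosphi, sinphi) + np.kron(sinphi, cosphi)))
H0 = H_rot + H_dd
```

Note that for θ_12 = π/2 the cross term vanishes, and the dipole-dipole prefactor g ≈ 1.9 × 10^{-24} J is comparable in magnitude to B, so the coupling is not a small perturbation.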
The control Hamiltonian is given by

H_c = -μ (cos ϕ_1 + cos ϕ_2),

where μ = 2.36 × 10^{-30} C·m is the magnitude of the permanent dipole moment of OCS [82]. The minimization of the control objective function

J_r[T, {θ_i}] = 1 - (1/N_r) ⟨ψ(T)| (cos ϕ_1 + cos ϕ_2) |ψ(T)⟩    (30)

then seeks to orient both rotors identically in the +x direction at the terminal time t = T, where N_r = ‖cos ϕ_1 + cos ϕ_2‖ is included for normalization and ‖·‖ denotes the spectral norm.
The state of each rotor is represented using a truncated basis composed of tensor products of the eigenstates |m_1⟩ of L_1^2 and |m_2⟩ of L_2^2, where m = -M, ..., -1, 0, 1, ..., M and M is set to 3, such that each rotor is represented using d = 2M + 1 = 7 levels, which can be encoded into ⌈log_2 d⌉ = 3 qubits. The eigenstates |m_i⟩, i = 1, 2, satisfy the eigenvalue equation L_i^2 |m_i⟩ = m_i^2 |m_i⟩ and can be expressed in terms of the angles ϕ_i as ⟨ϕ_i|m_i⟩ = (1/√(2π)) e^{i m_i ϕ_i}, where |ϕ_i⟩ are the eigenstates of the rotational coordinate operators. After representing H_0 and H_c in this basis, the resultant matrices are expanded as weighted sums of Pauli basis operators (see Appendix A). Product formulas can be used to perform the quantum dynamics simulation using N = 6 qubits, where each rotor is initialized in its ground state |m_i = 0⟩ → |1⟩|0⟩|0⟩, such that |ψ(0)⟩ = |1⟩|0⟩|0⟩ ⊗ |1⟩|0⟩|0⟩. At the culmination of the circuit, the qubits can be measured to determine J_r[T, {θ_i}], whose explicit qubit encoding is given in Appendix A. Fig. 5b shows a control field that achieves J_r[T, {θ_i}] = 0.02 in the n → ∞ limit for T = 1.31 ns and Δt = 1.87 ps. The field is taken to be a mix of ten frequency components with variable amplitudes a_i, detunings Δ_i, and phases φ_i, as per Eq. (25). The solid curves in Fig. 5c show the Trotter error ϵ(T, 0) associated with using first-, second-, and fourth-order product formulas to simulate the dynamics of the rotors from t = 0 to t = T, as a function of the Trotter number n. The dashed curves show the associated objective function error.
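The product-formula propagation underlying these error curves can be sketched generically. The function below (an illustrative classical emulation, not a quantum-circuit implementation) evolves a state under H(t) = H_0 + f(t) H_c with first- or second-order splitting, sampling the field at the midpoint of each time step:

```python
import numpy as np
from scipy.linalg import expm

def trotter_evolve(H0, Hc, f, psi0, T, n_steps, order=1):
    """Evolve psi0 under H(t) = H0 + f(t) Hc (hbar = 1) with a first-order
    product formula or a second-order (Strang) splitting. f is a callable
    field profile evaluated at the midpoint of each step."""
    dt = T / n_steps
    psi = psi0.astype(complex)
    U0 = expm(-1j * H0 * dt)
    U0h = expm(-1j * H0 * dt / 2)
    for k in range(n_steps):
        ft = f((k + 0.5) * dt)
        if order == 1:
            psi = U0 @ expm(-1j * ft * Hc * dt) @ psi
        else:  # symmetric second-order splitting
            psi = U0h @ expm(-1j * ft * Hc * dt) @ U0h @ psi
    return psi
```

For a constant (field-free) Hamiltonian the splitting is exact, which provides a convenient correctness check; for a time-dependent field, halving the step size exhibits the expected order of convergence.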
We remark that extensions to systems consisting of M > 2 coupled molecular rotors are straightforward, and the associated simulations can be performed using N = M ⌈log_2 d⌉ qubits. Such simulations could be used to design optimally-shaped microwave fields to orient systems of numerous molecules with high precision, e.g., to improve conversion in chemical reactions [83] or to improve the yield in high harmonic generation [17].

C. Controlled state preparation in light-harvesting complex
As a final example, we consider the task of excitonic state preparation in light-harvesting complexes. Understanding charge and energy flow in organic complexes such as photosynthetic light-harvesting complexes is a challenge due to the complexity of such systems and the confluence of the several energy scales and timescales involved in these dynamical processes [38,84,85]. While advanced spectroscopic probes such as multidimensional spectroscopies [44] provide insights into these dynamics, a challenge in such experiments is the preparation of localized initial excitonic states, such as those seen by these systems in vivo. Previous research has studied the potential of quantum optimal control for preparing such initial states [86], and for using shaped control pulses to control energy flow in light-harvesting complexes [18,87].
Due to the complexity of light-harvesting dynamics, and the immense practical importance of understanding charge and energy flow in complex materials, this example represents an important application of our algorithm for solving optimal control problems using quantum hardware. In this section, we show through a simple example how to map standard models of light-harvesting dynamics to a simulation circuit, and analyze the complexity of treating larger-scale examples.
We consider a portion of the Fenna-Matthews-Olson (FMO) complex of green sulfur bacteria. The structure of a full FMO monomer is known [88-90], and consists of seven chromophores (bacteriochlorophyll a (BChl a) molecules) responsible for transferring energy towards the reaction center. The electronic excitations of the FMO complex couple to molecular vibrations and solvent degrees of freedom that are usually modeled as a Gaussian bosonic reservoir. However, it has recently become clear that certain modes of this reservoir are long-lived and moderately strongly coupled to the chromophores, meaning that their coherent dynamics have a strong effect on the electronic excitation (exciton) dynamics, e.g., [91-96]. Thus, simulating transport in complexes coupled to underdamped vibrations has become important for understanding the efficiency of energy transfer in such systems. As a minimal model of such a setting, we consider just chromophores 3 and 4 of FMO and the dominant vibrational mode at 180 cm^{-1} coupled to chromophore 4 [92] (see Fig. 6a). We ignore the other vibrational degrees of freedom for simplicity, which corresponds to capturing dynamics only at short timescales. In addition to the chromophores and vibrational mode, we model a global, weak electromagnetic field, f(t), coupled to both chromophores.
The drift Hamiltonian in this example is given by

H_0 = E_3 b_3†b_3 + E_4 b_4†b_4 + J_{3,4}(b_3†b_4 + b_4†b_3) + ν a†a + J_{4,υ} b_4†b_4 (a + a†),

where the subscripts label the FMO chromophores, b_j denotes the annihilation operator for an electronic excitation on chromophore j, and a and a† are the harmonic oscillator lowering and raising operators, respectively. The parameters E_3 = 12,205 cm^{-1} and E_4 = 12,135 cm^{-1} denote the excitation energies of chromophores 3 and 4, while J_{3,4} = 53.5 cm^{-1} denotes the dipole-dipole coupling between the two chromophores. We work in settings where the electromagnetic field is weak, and hence restrict the above Hamiltonian to at most one excitation per chromophore. The vibrational degree of freedom is modeled as a harmonic oscillator at thermal equilibrium at some temperature T_vib, where ν = 180 cm^{-1} denotes the frequency of the vibrational mode and J_{4,υ} = 84.4 cm^{-1} denotes the magnitude of the coupling between chromophore 4 and the vibrational mode [89,92]. The control field is modeled as

E(t) = f(t, {θ_i}) cos(ω_0 t),

where ω_0 = 12,200 cm^{-1} is the carrier frequency and f(t, {θ_i}) is the dimensionless field profile, whose shape can be optimized. This field is coupled to the system through the control Hamiltonian

H_c = μ_3 (b_3 + b_3†) + μ_4 (b_4 + b_4†),

where μ_3 = 0.32|μ| and μ_4 = 0.92|μ| are the dipole couplings of each of the chromophores, |μ| = 6.3 Debye is the magnitude of the transition dipole for the relevant BChl a transition [90], and the numerical factors account for the alignment of the chromophores with the polarization of the control field. We assume that chromophores 3 and 4 are oriented at angles 109° and 23°, respectively, to the field's polarization direction [97]. We simulate the coupled chromophore subsystem in a frame rotating at ω_0 and make the rotating wave approximation as described in Ref. [86], such that H̃(t) = H̃_0 + H_c f̃(t), where f̃(t) is the field profile and, in H̃_0, the excitation energies are shifted, Ẽ_3 = E_3 - ω_0 and Ẽ_4 = E_4 - ω_0, while all other terms remain the same as in the original frame.
The coupled chromophore subsystem is initialized in the ground state |g_3 g_4⟩. Meanwhile, the vibrational degree of freedom is taken to be initially in the thermal state Σ_{υ=0}^{υ_max} c_υ(T_vib, υ_max) |υ⟩⟨υ|, with the coefficients given by

c_υ(T_vib, υ_max) = e^{-β(T_vib) ε_υ} / Z(T_vib, υ_max),

where Z(T_vib, υ_max) = Σ_{υ=0}^{υ_max} e^{-β(T_vib) ε_υ} is the partition function, ε_υ = νυ is the energy associated with the υ-th eigenstate |υ⟩, and β(T_vib) = 1/(k_B T_vib), where k_B is the Boltzmann constant, T_vib = 300 K is the temperature, and we select υ_max = 7, given that the higher vibrational states are not significantly occupied. The controlled preparation of an excited state localized on chromophore 4 can then be sought by minimizing

J_s[T, {θ_i}] = 1 - Tr[ρ(T) (P ⊗ I_υ)],

where ρ(T) is the state of the full system at the terminal time, the projector P ≡ |g_3 e_4⟩⟨g_3 e_4|, and I_υ denotes the identity operator on the vibrational degree of freedom. This objective function quantifies the state overlap of the chromophore subsystem with the target state |g_3 e_4⟩. To perform the qubit encoding for the vibrational subsystem, we represent it in a basis truncated to d harmonic oscillator eigenfunctions, using the well-known matrix element relations for a†a and (a + a†), and expand the resultant matrices as weighted sums of Pauli basis operators (see Appendix A). Fig. 6b shows a field that achieves J_s[T, {θ_i}] = 0.25 in the n → ∞ limit, for T = 508 fs and N_t = 300, and Fig. 6c shows the associated error ϵ(T, 0) when first-, second-, and fourth-order product formulas are used. For the field plotted in Fig. 6b, the field profile is taken to be a sum of ten Gaussian functions contained in the envelope ε(t) = sin^{1/2}(πt/T), where the control parameters could be selected as {a_j}, {b_j}, and {c_j}, which govern the relative amplitudes, means, and variances of the Gaussian functions, respectively.
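The rotating-frame model and the thermal initial occupations can be assembled in a few lines. The sketch below (an illustration under the two-level-per-chromophore and truncated-oscillator assumptions stated above, not the authors' code) works directly in cm^{-1} and uses k_B ≈ 0.695 cm^{-1}/K:

```python
import numpy as np

# Parameters in cm^-1, as quoted in the text.
E3, E4, J34 = 12205.0, 12135.0, 53.5
nu, J4v, omega0 = 180.0, 84.4, 12200.0
d = 8                                    # vibrational levels, v_max = 7

b = np.array([[0.0, 1.0], [0.0, 0.0]])   # |g><e|, with basis ordering (|g>, |e>)
n_ex = b.T @ b                           # excitation number operator |e><e|
I2 = np.eye(2)
a = np.diag(np.sqrt(np.arange(1, d)), k=1)   # oscillator lowering operator
Id = np.eye(d)

def kron3(A, B, C):
    return np.kron(np.kron(A, B), C)

# Rotating-frame drift Hamiltonian:
# H0 = E3~ n3 + E4~ n4 + J34 (b3† b4 + b4† b3) + nu a†a + J4v n4 (a + a†)
H0 = ((E3 - omega0) * kron3(n_ex, I2, Id)
      + (E4 - omega0) * kron3(I2, n_ex, Id)
      + J34 * (kron3(b.T, b, Id) + kron3(b, b.T, Id))
      + nu * kron3(I2, I2, a.T @ a)
      + J4v * kron3(I2, n_ex, a + a.T))

# Thermal occupations of the vibrational mode at T_vib = 300 K.
kB_cm = 0.6950348        # Boltzmann constant, cm^-1 per K
Tvib, vmax = 300.0, 7
evals = nu * np.arange(vmax + 1)
w = np.exp(-evals / (kB_cm * Tvib))
c = w / w.sum()          # thermal coefficients c_v
```

Since ν/(k_B T_vib) ≈ 0.86 at room temperature, several vibrational levels carry non-negligible weight, which is why the objective is evaluated as a thermal average over initial states |υ⟩.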
The procedure described here can be straightforwardly extended to quantum control simulations involving more complex models of light-harvesting complexes. Fig. 7a shows the qubit counts needed to simulate the dynamics of complexes composed of varying numbers of chromophores and vibrational modes. For example, simulating the complete model for an FMO monomer involving seven chromophores, each coupled to two vibrational modes modeled using eight levels each, would require 49 qubits. In general, simulating C chromophores with arbitrary dipole-dipole couplings, each coupled to M vibrational modes modeled using d levels, requires N = (log_2(d)M + 1)C qubits. Fig. 7b shows an upper bound for the circuit depth D, quantified by the number of applications of e^{-iB_ℓ τ} for arbitrary τ, required to simulate C chromophores each coupled to M = 0, 1, 2, or 3 vibrational modes, modeled using d = 8 levels each and simulated using a fourth-order product formula over a single time step of length Δt = 10 a.u., presuming Λ_max = 0.01 and L = (1 + C + 20M)C, with an error threshold of ϵ_{PF4}(t + Δt, t) ≤ 10^{-5}.
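The qubit-count formula quoted above is simple enough to capture directly; the helper below reproduces the 49-qubit figure for the full FMO monomer:

```python
import math

def qubit_count(C, M, d):
    """Qubits needed to simulate C chromophores, each coupled to M
    vibrational modes with d levels each: N = (log2(d) * M + 1) * C,
    i.e., one qubit per chromophore plus log2(d) qubits per mode."""
    return (int(math.log2(d)) * M + 1) * C
```

For example, `qubit_count(7, 2, 8)` gives the 49 qubits quoted in the text, while `qubit_count(2, 0, 8)` recovers the trivial two-qubit case with no explicit vibrational modes.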

D. Effects of hardware noise on algorithm performance
Until fault-tolerant quantum computers are available, the implementation of quantum algorithms will be restricted to noisy, intermediate-scale quantum (NISQ) computing devices with nonzero error rates. As such, in this section we analyze the NISQ applicability of the hybrid quantum-classical scheme proposed here. We note that, in this regard, very recent work [98] has demonstrated that present-day quantum hardware based on superconducting circuits remains too noisy to implement product formula-based time-dependent quantum simulation algorithms of the kind studied here, due to prohibitively high error rates.
In order to further study the NISQ relevance of the hybrid quantum-classical algorithm proposed here, we perform a numerical analysis of its performance on quantum hardware with different nonzero error rates. In particular, we simulate the implementation of a simplified version of our example from Sec. IV C on noisy hardware based on trapped ions. We consider the same drift and control Hamiltonians given above in Eqs. (31) and (33), respectively. However, for this analysis, we take υ_max = 3, and the control field profile f̃(t, {θ_i}) is modeled as a sum of two oscillatory components with adjustable phases φ_a and φ_b, contained in the envelope ε(t) = 7.5 × 10^{-4} sin(πt/T), where T = 169 fs; the phases φ_a and φ_b serve as the control parameters.
The system is initialized as before, with both chromophores in their respective ground states and with the vibrational mode in a thermal state. Then, the controlled preparation of an excited state localized on chromophore 4 is sought by minimizing Eq. (35) as before, i.e., by computing the full objective J_s as a weighted average of the results from a set of simulations in which the vibrational mode is initialized in each of the eigenstates |υ⟩, υ = 0, 1, 2, 3, as per Eq. (36). We remark that due to the form of Eq. (38), the objective function is symmetric with respect to φ_a and φ_b, and extrema occur when φ_a - φ_b is an integer multiple of π, i.e., when the two components interfere fully constructively (φ_a = φ_b) or fully destructively.
In order to simulate the implementation of this model on quantum hardware, the qubit encoding step is carried out as before. In this simplified setting, N = 4 qubits are required to represent the system. Then, a first-order product formula is used to simulate the field-induced dynamics over N_t = 28 time steps, using a Trotter number of n = 1. This simulation is first compiled into a quantum circuit formed by a gate set consisting of single-qubit rotations and CNOT gates, as per the recipe given in Fig. 3. Then, this circuit is recompiled into a circuit formed by the native gate set associated with the trapped-ion hardware model adapted from [99], containing R_x(±π/2), R_y(±π/2), R_z(θ) for arbitrary θ, and the two-qubit Mølmer-Sørensen gate. In order to simulate error in the implementation of the native gates, all single-qubit R_x(±π/2) and R_y(±π/2) gates are followed by a series of three quantum channels: an imprecise rotation about the rotation axis α = x, y, a depolarizing channel, and a dephasing channel. The R_z(θ) gates are assumed to be implemented virtually and incur no error. Meanwhile, the two-qubit Mølmer-Sørensen gate is followed by an imprecise rotation about the σ_x σ_x axis, a motional heating error (i.e., an imprecise rotation with a different error rate), independent depolarization channels on each of the two qubits, i.e., D_dep^p ⊗ D_dep^p(ρ), and independent dephasing channels on each of the two qubits, i.e., Z_d,2^p ⊗ Z_d,2^p(ρ). In addition, each idle gate is replaced with a depolarizing channel with depolarization error rate p_idle = p_dep/10. Meanwhile, for state preparation, an ideal preparation of the |0⟩ state is followed by a depolarizing channel with the same depolarization error rate p_dep.
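The three single-qubit error channels named above have standard Kraus-map forms. The following sketch shows generic density-matrix implementations (the specific error rates and the order of composition are from the hardware model of [99]; this fragment is an illustration, not that model's code):

```python
import numpy as np

def depolarize(rho, p):
    """Single-qubit depolarizing channel: rho -> (1 - p) rho + p I/2."""
    return (1.0 - p) * rho + p * np.eye(2) / 2.0

def dephase(rho, p):
    """Single-qubit dephasing channel: rho -> (1 - p) rho + p Z rho Z."""
    Z = np.diag([1.0, -1.0])
    return (1.0 - p) * rho + p * (Z @ rho @ Z)

def imprecise_rotation(rho, axis, delta):
    """Coherent over/under-rotation error by angle delta about the x or
    y axis, applied as a small extra unitary."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sigma = sx if axis == "x" else sy
    U = np.cos(delta / 2) * np.eye(2) - 1j * np.sin(delta / 2) * sigma
    return U @ rho @ U.conj().T
```

All three maps are trace-preserving, and full depolarization (p = 1) maps any state to the maximally mixed state, which provides a quick consistency check.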
For measurement, an ideal measurement of any qubit in the computational σ_z basis is preceded by a depolarizing channel, with the depolarization error rate equal to the largest imprecise rotation error rate (for further model details, see [99]). Within this noisy framework, we examine the algorithm performance using realistic error rates, optimistic error rates, and zero error rates. The realistic error rates are taken from [99], while the optimistic error rates are estimated based on projected improvements in ion heating rates and reductions in photon scattering (Raman scattering) during single- and two-qubit gates. These error rates are collected in Table I. For the noise-free, optimistic, and realistic cases, we then incorporate one additional source of error: the effect of using a finite number m of measurement samples to estimate the objective over the course of the control parameter optimization (here, we consider both m = 10^3 and m = 10^5). We denote the objective found with a first-order product formula, using this trapped-ion hardware model, by J_{s,PF1}^{(H)}, and reserve J_s to denote the ideal objective function, as per Sec. IV C.
We simulate the performance of the algorithm introduced in Sec. III in these different settings. In order to optimize J_{s,PF1}^{(H)}, a Nelder-Mead algorithm is used, with control parameters initialized asymmetrically, as φ_a = 0.2 and φ_b = 3. The results of these numerical studies are given in Fig. 8. Panel (a) shows the value of J_{s,PF1}^{(H)} over the course of the optimization for each of these settings. For the case of no circuit error, shown in orange, the only errors present are due to the use of a first-order product formula to simulate the dynamics, and the use of a finite number of measurements m to estimate the objective function at each iteration. As such, the associated optimization trajectory terminates very near to the true minimum for m = 10^5, as shown in panel (b), and relatively close for m = 10^3, as shown in panel (c), and the optimization proceeds in the correct direction with respect to φ_a and φ_b for both m values. When optimistic error rates are considered, the optimization trajectory terminates farther from the true minimum, but the optimization nevertheless proceeds in the correct direction with respect to φ_b for both m values. However, for m = 10^3, it moves in the wrong direction with respect to φ_a. Finally, when realistic error rates are used, we see that the performance deteriorates much further, and the optimization proceeds in the wrong direction with respect to φ_a for m = 10^5 and in the wrong direction with respect to φ_b for m = 10^3.
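The structure of this optimization loop, i.e., Nelder-Mead applied to a shot-noise-limited objective of two phase parameters, can be illustrated with a toy stand-in. The objective below is a hypothetical surrogate, chosen only to share the interference structure described above (minimized when φ_a = φ_b) and a sampling-noise scale of 1/√m; it is not the hardware-model objective J_{s,PF1}^{(H)}:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def noisy_objective(phi, m=10**5):
    """Toy surrogate for a shot-noise-limited objective: minimized when
    the two phases interfere constructively (phi_a = phi_b), with noise
    of scale 1/sqrt(m) mimicking m measurement samples."""
    phi_a, phi_b = phi
    ideal = 0.5 * (1.0 - np.cos(phi_a - phi_b))
    return ideal + rng.normal(scale=1.0 / np.sqrt(m))

# Nelder-Mead from the asymmetric start (0.2, 3.0) used in the text.
res = minimize(noisy_objective, x0=[0.2, 3.0], method="Nelder-Mead",
               options={"maxiter": 400, "xatol": 1e-3, "fatol": 1e-4})
```

Because the noise floor (≈ 3 × 10^{-3} for m = 10^5) exceeds `fatol`, the simplex typically keeps shrinking until `xatol` or `maxiter` is reached, which mirrors the way finite sampling limits the attainable objective precision in the hardware simulations.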
We also remark on the quantitative differences between J_{s,PF1}^{(H)} and the ideal objective J_s. These findings suggest that realistic trapped-ion error rates are too significant for a quantitatively-accurate implementation of the needed quantum circuits. However, as hardware improves, we can expect improvements in the quality of the simulations that are possible, as indicated by the positive trends in Fig. 8 as the error rate is reduced from realistic to optimistic to error-free. Furthermore, the results may be further improved if methods such as readout error mitigation and zero-noise extrapolation are used, as in [100].

V. DISCUSSION AND OUTLOOK
In this article we have explored how quantum computers could be used for simulations of molecular control, which can often be intractable on classical computers. We introduced an algorithm that utilizes a quantum computer to simulate the field-induced dynamics of the molecular system under consideration, and a classical coprocessor to optimize a set of control field parameters to achieve a desired objective. Three numerical illustrations were then presented; the first two examples considered vibrational and rotational control problems, while the third treated the problem of state preparation in light-harvesting complexes, which could serve as a potential benchmark problem. To this end, in Fig. 7, we analyzed the qubit counts and circuit depths required for its solution on a quantum computer.
One key difference between our approach and most other variational quantum algorithms is that the solution is encoded in the variational control parameters {θ i }, rather than the terminal state of the qubits. It should also be noted that unlike most other variational quantum algorithms, which are designed to use shallow quantum circuits compatible with noisy quantum devices, the depth of the quantum circuits associated with our algorithm can vary arbitrarily depending on factors such as the pulse length, time-step size, and the desired error tolerance.
Given this situation, we considered the prospects of a near-term implementation of our algorithm on NISQ hardware. In particular, we provided a numerical analysis of its performance in the presence of different levels of hardware noise, using a model for trapped ion hardware, with results presented in Fig. 8. Although we found that quantitatively-accurate calculations on NISQ hardware are likely infeasible with current hardware error rates, we wish to emphasize that there may exist important application settings where precise quantitative accuracy is not required. For example, in settings where the molecular systems under consideration become more complex, the study of trends may become increasingly important. Such situations are prevalent across the field of chemistry, where systematic theoretical, computational, and experimental studies allow for exploring trends governing the reactivity of chemical reagents in different conditions. In this spirit, an emerging goal in the domain of quantum control is to gain an understanding of the trends associated with controlled molecular dynamics. In this latter setting, control fields are akin to photonic reagents [101], and it is insights into their role in driving chemical processes that are sought [102,103]. Consequently, future studies using our algorithm to explore the systematics of molecular control may be useful in revealing important relationships and trends between the properties of control fields, and the dynamical behavior they induce in chemical systems. Such studies may tolerate certain levels of hardware noise as long as the correct trends remain present, thereby suggesting a potential application area of our algorithm in the NISQ era.
Furthermore, beyond the illustrative examples treated explicitly in this article, numerous additional applications of molecular control can be imagined. Many of these will surely involve more complex systems than the simple models we consider, such as polyatomic molecules with interacting vibrational, rotational, and electronic degrees of freedom, whose controlled dynamics can involve a breakdown of the Born-Oppenheimer approximation. It is expected that real space encodings outside of the Born-Oppenheimer approximation may be useful for simulating such systems [104]. In real space, the state of an electron or nucleus represented using d points on a 1D grid can be encoded in a binary fashion using log_2 d qubits, by treating each grid point analogously to how each basis function was treated in Sec. III A 1. This means that a total of N = 3M log_2 d qubits would be needed to simulate the controlled dynamics of a polyatomic molecule composed of M interacting electrons and nuclei represented this way. Simulations of molecular dynamics performed in this manner require computational resources far beyond the capabilities of today's classical computers. However, if such simulations could be performed on quantum hardware in the future, they could enable the identification of optimal fields for controlling the outcomes of chemical reactions, which is of fundamental interest and of potential utility for commercially relevant chemical synthesis applications. In the emerging area of attosecond control, such simulations could also be used to explore the possibility of designing fields to control charge-directed reactivity, where attosecond pulses are sought to control the electron dynamics such that the associated nuclei are driven towards a desired chemical reaction path [105].

This report was prepared as an account of work sponsored by an agency of the United States Government.
Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.
In this appendix, the full details regarding the qubit operator encodings are given for each of the numerical illustrations presented in the main text. The mappings for the Hamiltonians are performed as

H = Σ_ℓ g_ℓ B_ℓ,

where the B_ℓ are normalized tensor products of Pauli operators (here denoted using X = (1/√2)σ_x, Y = (1/√2)σ_y, Z = (1/√2)σ_z, and I, where I is the 2 × 2 identity matrix, normalized using a prefactor of 1/√2), such that the coefficients are given by g_ℓ = Tr(B_ℓ† H). The corresponding table lists the basis operators B_ℓ, ℓ = 1, ..., 30, and the associated nonzero coefficients g_{0,ℓ} and g_{c,ℓ} for encoding the Hamiltonian of the dipole-dipole coupled rotors in the Pauli operator basis. A further table lists the basis operators B_ℓ, ℓ = 1, ..., 4, and the associated nonzero coefficients J_ℓ for encoding the projector P ⊗ I_υ in the Pauli operator basis, as needed for evaluating the objective function.
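As an illustration of this encoding step, the expansion coefficients can be computed by exploiting the orthonormality of the normalized Pauli products under the trace inner product. The sketch below (an illustrative implementation of the generic expansion, not the paper's tabulated encodings) decomposes an arbitrary 2^N × 2^N Hermitian matrix:

```python
import numpy as np
from itertools import product

# Normalized single-qubit basis operators, as defined in the appendix:
# each has Tr(b† b) = 1, so their tensor products B_l are orthonormal.
s = 1.0 / np.sqrt(2.0)
basis = {"I": s * np.eye(2, dtype=complex),
         "X": s * np.array([[0, 1], [1, 0]], dtype=complex),
         "Y": s * np.array([[0, -1j], [1j, 0]]),
         "Z": s * np.array([[1, 0], [0, -1]], dtype=complex)}

def pauli_decompose(H):
    """Expand a 2^N x 2^N Hermitian matrix as H = sum_l g_l B_l with
    g_l = Tr(B_l† H); only nonzero coefficients are returned."""
    N = int(np.log2(H.shape[0]))
    coeffs = {}
    for labels in product("IXYZ", repeat=N):
        B = np.array([[1.0 + 0j]])
        for lab in labels:
            B = np.kron(B, basis[lab])
        g = np.trace(B.conj().T @ H)
        if abs(g) > 1e-12:
            coeffs["".join(labels)] = g.real
    return coeffs
```

For example, σ_z decomposes to the single term √2 Z in this normalized basis, since Tr((σ_z/√2) σ_z) = √2; the number of terms grows as 4^N in general, which is why the appendix tabulates only the nonzero coefficients.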