Optimal quantum parametric feedback cooling

We propose an optimal protocol using phase-preserving quantum measurements and phase-dependent modulations of the trapping potential at parametric resonance to cool a quantum oscillator to an occupation number of less than one quantum. We derive the optimal phase relationship and duration for the parametric modulations, and compute the lowest-possible occupation number in the steady state. The protocol is robust against moderate amounts of dissipation and phase errors in the feedback loop. Our work has implications for the cooling of levitated mechanical resonators in the quantum regime.

One such cooling method, known as parametric feedback cooling, has been especially successful in achieving ground-state cooling in levitated systems when used together with linear feedback techniques [38-41]. However, it is not clear whether parametric feedback cooling on its own can achieve ground-state cooling, or what its limits are in the quantum regime.

FIG. 1. Cooling a quantum particle in a harmonic trap by phase-preserving quantum measurements and phase-dependent parametric modulations of the trapping potential. In the feedback protocol, the phase information φ acquired from heterodyne measurements is used to update the phase φp of the parametric modulation of the trapping potential by requiring 2φ + φp = π/2.

arXiv:2204.00476v2 [quant-ph] 7 Mar 2023

Motivated by this, the present work investigates the quantum regime of parametric feedback cooling of a simple harmonic oscillator. Henceforth, by "cooling" we refer to reducing the mean quanta in an oscillator. Modulations of the harmonic potential at parametric resonance with a phase offset are modeled by Mathieu's equation [24,42], which was first discussed by Mathieu [43]. We show that parametric modulations with a definite phase reference relative to the oscillator state result in a reduction of the mean quanta in the oscillator. In order to cool down arbitrary quantum states of the oscillator which lack a fixed phase reference (such as thermal states), we introduce phase-preserving (heterodyne) quantum measurements into the protocol (see Fig. 1). We then derive conditions for an optimal modulation time based on the measurement outcome and compute the steady-state occupation number averaged over many cooling cycles. We find it to be below one quantum, thus achieving near-ground-state cooling. This article is organized as follows. In Sec. II we summarize the methods for studying the parametrically driven dynamics of the oscillator. In Sec. III we discuss the cooling protocol and derive the optimal driving phase and duration to achieve the near-quantum ground state by sequential cooling cycles which incorporate phase-preserving quantum measurements. We discuss the robustness to phase errors and possible experimental implementations in Sec. IV. We conclude by discussing some future directions in Sec. V.

II. DYNAMICS
In this section, we outline the solution of the quantum dynamics that results from the parametric modulations. In particular, we show how the quantum state of the oscillator undergoes single-mode squeezing.

A. Solution to the dynamics
The Hamiltonian describing the parametrically driven quantum oscillator has the form

Ĥ(t) = Ĥ0 + ħ f(t)(â + â†)²,    (1)

where Ĥ0 = p̂²/(2m) + mω0²x̂²/2 is the free Hamiltonian of the quantum oscillator, m is the mass of the oscillator, and ω0 is the frequency of the mode. In this work, we consider the sinusoidal driving profile f(t) = λ cos(ωp t + φp), where λ is the driving amplitude, ωp is the drive frequency, and φp is the phase. When ωp = 2ω0, the drive is said to be at parametric resonance.
We describe the dynamics governed by the Hamiltonian in Eq. (1) using the solutions developed in Refs. [24,42,44] (revisited in Appendix A); in what follows, we briefly summarize them. Since the Hamiltonian in Eq. (1) is quadratic in its operator arguments, the evolution of a Gaussian state is captured fully by the evolution of the first and second moments. Defining the vector of first moments as X̂ = (â, â†)ᵀ, the solution to the dynamics reads

X̂(t) = Û†(t) X̂ Û(t) = S(t) X̂,    (2)

where Û(t) is the time-evolution operator given by

Û(t) = T⃖ exp[−(i/ħ) ∫₀ᵗ dt′ Ĥ(t′)],    (3)

and S(t) is a 2 × 2 symplectic matrix given by

S(t) = T⃖ exp[(1/ħ) ∫₀ᵗ dt′ Ω H(t′)],    (4)

for which T⃖ indicates time ordering of the exponential, Ω is the symplectic form, defined in this basis as Ω = i diag(−1, 1), and H(t) is the Hamiltonian matrix, defined by Ĥ(t) = ½ X̂† H(t) X̂. The H matrix corresponding to the Hamiltonian in Eq. (1) reads

H(t) = ħ [ ω0 + 2f(t)    2f(t)
           2f(t)         ω0 + 2f(t) ].    (5)

The corresponding time evolution can be written as a Bogoliubov transformation of the first moments with

S(t) = [ α(t)    β(t)
         β*(t)   α*(t) ],    (6)

where α(t) and β(t) are Bogoliubov coefficients satisfying |α(t)|² − |β(t)|² = 1. The operator â(t) evolves as

â(t) = α(t) â + β(t) â†.    (7)

The coefficients α(t) and β(t) can be written as (see Appendix B in [42])

α(t) = ½[P(t) + Q̇(t)] + (i/2)[Ṗ(t) − Q(t)],
β(t) = ½[P(t) − Q̇(t)] + (i/2)[Ṗ(t) + Q(t)],    (8)

where overdots denote derivatives with respect to the dimensionless time ω0 t, and where the functions P(t) and Q(t) are both solutions to the differential equation

P̈(t) + [1 + 4f(t)/ω0] P(t) = 0,    (9)

where P(t) is obtained by using the initial conditions P(0) = 1 and Ṗ(0) = 0, and Q(t) by setting Q(0) = 0 and Q̇(0) = 1. These initial conditions follow from requiring that S(t = 0) = 1.
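As a cross-check of this symplectic formulation, the matrix equation dS/dt = ΩH(t)S can be integrated numerically and the Bogoliubov normalization verified directly. The sketch below is our own illustration (not code from this work); it sets ħ = m = 1, uses arbitrary parameter values, and steps the equation with fourth-order Runge-Kutta:

```python
import numpy as np

W0, LAM, PHI_P = 1.0, 0.01, np.pi / 2   # omega_0, drive strength, drive phase
OMEGA = 1j * np.diag([-1.0, 1.0])       # symplectic form in the (a, a-dagger) basis

def H(t):
    """Hamiltonian matrix H(t) for the parametric drive f(t)."""
    f = LAM * np.cos(2 * W0 * t + PHI_P)
    return np.array([[W0 + 2 * f, 2 * f],
                     [2 * f, W0 + 2 * f]], dtype=complex)

def propagate(t_max=20.0, dt=1e-3):
    """Integrate dS/dt = Omega H(t) S with RK4, starting from S(0) = identity."""
    S = np.eye(2, dtype=complex)
    rhs = lambda t, S: OMEGA @ H(t) @ S
    t = 0.0
    while t < t_max - 1e-12:
        k1 = rhs(t, S)
        k2 = rhs(t + dt / 2, S + dt * k1 / 2)
        k3 = rhs(t + dt / 2, S + dt * k2 / 2)
        k4 = rhs(t + dt, S + dt * k3)
        S = S + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += dt
    return S

S = propagate()
alpha, beta = S[0, 0], S[0, 1]
print(abs(alpha) ** 2 - abs(beta) ** 2)   # Bogoliubov normalization, stays near 1
```

Since the generator ΩH(t) preserves the symplectic form, |α|² − |β|² remains unity up to integration error.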
The Hilbert space representation of this symplectic transformation is a rotation followed by a squeezing operation of the form Ŝ(z) = e^{(z â†² − z* â²)/2}, that is,

Û(t) = Ŝ(z_sq) e^{−iϕ(t) â†â},    (10)

up to a global phase, where ϕ(t) and z_sq = r_sq(t) e^{iθ_sq(t)} are time-dependent functions given in Appendix A.

B. Modulations at parametric resonance
When the frequency modulation occurs at twice the free frequency, ωp = 2ω0, Eq. (9) takes the form of Mathieu's differential equation [43],

d²y/dx² + [a − 2q cos(2x + φp)] y = 0,    (11)

where in our case x = ω0 t, a = 1, and q = −2λ/ω0. Mathieu's equation is usually defined without the phase φp. When φp = 0, the solutions can be represented as Mathieu functions of the first kind, ce_n(t, λ) and se_n(t, λ). The solutions have no closed analytic form, but are periodic with period 2π. We also note that at a = 1 the solutions are fundamentally unstable [45]. As a result, the system can only be operated at parametric resonance for short time scales and with weak driving strengths λ. A similar issue arises in the classical treatment of parametric feedback cooling [46]. For nonzero φp, the solutions can be expressed as linear combinations of ce_n(t, λ) and se_n(t, λ). Mathieu's equation has been rigorously studied and describes the behavior of a diverse family of systems ranging from a child on a swing to the buckling of membranes [47], as well as the quantum pendulum [48]. While Eq. (11) does not admit an exact analytical solution, an approximate solution can be obtained when λ/ω0 ≪ 1 using a two-time-scale method (see Ref. [42] and Appendix B, which also includes a discussion of the errors in the approximation). Using this technique, the Bogoliubov coefficients in Eq. (8) can be approximated as

α(t) ≈ e^{−iω0 t} cosh(λt),   β(t) ≈ −i e^{−i(ω0 t + φp)} sinh(λt).    (12)

We see that when λ → 0, we are left with the free evolution e^{−iω0 t} encoded in α(t).
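The instability at a = 1 can be seen directly by integrating Mathieu's equation numerically. The sketch below (our own illustration, in rescaled units tω0 → t and λ/ω0 → λ, with an arbitrary drive strength) integrates P(t) with P(0) = 1, Ṗ(0) = 0 and exhibits the exponential growth of the envelope:

```python
import numpy as np

LAM, PHI_P = 0.05, 0.0   # drive strength and phase (rescaled units)

def mathieu(t_max=100.0, dt=2e-3):
    """Integrate P'' + [1 + 4*LAM*cos(2t + PHI_P)] P = 0 with RK4."""
    y = np.array([1.0, 0.0])    # (P, P') with P(0) = 1, P'(0) = 0
    def rhs(t, y):
        return np.array([y[1], -(1 + 4 * LAM * np.cos(2 * t + PHI_P)) * y[0]])
    ts = np.arange(0.0, t_max, dt)
    out = np.empty_like(ts)
    for i, t in enumerate(ts):
        out[i] = y[0]
        k1 = rhs(t, y); k2 = rhs(t + dt / 2, y + dt * k1 / 2)
        k3 = rhs(t + dt / 2, y + dt * k2 / 2); k4 = rhs(t + dt, y + dt * k3)
        y = y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return ts, out

ts, P = mathieu()
early = np.max(np.abs(P[ts < 10]))   # envelope near the start
late = np.max(np.abs(P[ts > 90]))    # envelope near the end
print(early, late)                   # growth roughly like e^{LAM * t}
```

The late-time envelope exceeds the early one by roughly e^{λΔt}, consistent with the instability (and with the squeezing growth r_sq ≈ λt discussed below).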

III. COOLING THROUGH PARAMETRIC MODULATIONS
In this section, we show that the dynamics resulting from the modulations at parametric resonance considered in Sec. II leads to cooling. We proceed to derive conditions for optimal cooling, which leads to the development of a cooling protocol.

A. Phase control for cooling
We now derive the necessary phase relationship for cooling. Given Eq. (7), the mean occupation number in the oscillator changes as

n(t) = (|α(t)|² + |β(t)|²)⟨â†â⟩ + |β(t)|² + 2 Re[α*(t)β(t)⟨â†²⟩].    (13)

Most experimental systems are initially found in thermal states, which lack a fixed phase reference. To obtain the phase relationship necessary for cooling, we instead start by considering the coherent-state basis. For an initial coherent state |ξ⟩ = |r e^{iφ}⟩, the mean occupation number evolves as

n(t) = |α(t)ξ + β(t)ξ*|² + |β(t)|².    (14)

Using the approximate Bogoliubov coefficients in Eq. (12), the number of quanta evolves, to first order in λ, as

n(t) ≈ r²[1 − 2λt sin(2φ + φp)].    (15)

The last term inside the square brackets in Eq. (15) grows linearly in t, which means that it either increases or decreases the mean occupation number in the oscillator over time, as determined by the phase relation 2φ + φp. When

2φ + φp = π/2,    (16)

the last term in Eq. (15) produces an initial cooling effect of rate 2λr² [49]. A proof that thermal states do not experience a reduction in their occupation number is presented in Appendix C.
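The phase condition can be checked numerically by scanning φp for a fixed coherent state and locating the minimum of n(t). The sketch below is our own illustration (ω0 = 1, arbitrary parameters), using leading-order Bogoliubov coefficients of the form α = e^{−it} cosh(λt), β = −i e^{−i(t+φp)} sinh(λt); this is one consistent phase convention, and conventions differ in the literature:

```python
import numpy as np

r, phi = 3.0, 0.4          # coherent state xi = r * exp(i * phi)
lam, t = 0.01, 30.0        # drive strength and duration (omega_0 = 1)

phi_p = np.linspace(0.0, 2 * np.pi, 20001)
alpha = np.exp(-1j * t) * np.cosh(lam * t)
beta = -1j * np.exp(-1j * (t + phi_p)) * np.sinh(lam * t)
xi = r * np.exp(1j * phi)

# Mean occupation n(t) = |alpha xi + beta xi*|^2 + |beta|^2 for each phi_p
n = np.abs(alpha * xi + beta * np.conj(xi)) ** 2 + np.abs(beta) ** 2
best = phi_p[np.argmin(n)]
print((2 * phi + best) % (2 * np.pi))   # cooling is maximal at 2*phi + phi_p = pi/2
```

The minimizing drive phase satisfies 2φ + φp = π/2 (mod 2π), independent of the coherent-state amplitude.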

B. Optimal cooling
In Sec. II A, we found that modulations at parametric resonance correspond to the single-mode squeezing operation Ŝ(z_sq) shown in Eq. (10). However, such modulations cannot be used to cool the initial coherent state indefinitely. For each initial coherent state, there exists an optimal squeezing value r_op that maximally cools the coherent state. Squeezing beyond this value instead adds quanta to the state.
For modulations at parametric resonance, the squeezing magnitude is given by r_sq(t) ≈ λt, where t is the duration of the modulations (see Appendix D for the derivation). Given the initial coherent state |ξ⟩, the optimal squeezing value is r_op = ln(1 + 4|ξ|²)/4. Thus, the optimal cooling time for a single cycle is

t_op = ln(1 + 4|ξ|²)/(4λ).    (17)

Furthermore, the minimum occupation number n_min that can be achieved in each cycle starting from the coherent state |ξ⟩ is n_min(ξ) = (√(1 + 4|ξ|²) − 1)/2; see Appendix D for detailed derivations.
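These expressions follow from minimizing the occupation of a squeezed coherent state, n(r) = |ξ|² e^{−2r} + sinh²(r) (for the amplitude aligned with the squeezed quadrature), over the squeezing magnitude r. A quick numerical check (our own sketch; the value |ξ|² = 10 is arbitrary):

```python
import numpy as np

xi2 = 10.0                                    # |xi|^2 of the initial coherent state
r = np.linspace(0.0, 3.0, 300001)
n = xi2 * np.exp(-2 * r) + np.sinh(r) ** 2    # occupation after squeezing by r

r_op = np.log(1 + 4 * xi2) / 4                # predicted optimal squeezing
n_min = (np.sqrt(1 + 4 * xi2) - 1) / 2        # predicted minimum occupation

print(r[np.argmin(n)], r_op)                  # both approx. 0.928
print(n.min(), n_min)                         # both approx. 2.702
```

The grid minimum coincides with the closed-form r_op and n_min.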

C. A single cooling cycle
We now generalize the above result to a two-step measurement-based feedback protocol for cooling arbitrary initial quantum states of the oscillator, including thermal states. Each cooling cycle consists of the following two steps: (1) a phase-preserving (heterodyne) measurement of the oscillator, yielding a coherent-state outcome ξ = r e^{iφ}; and (2) a parametric modulation of the trapping potential with phase φp chosen such that 2φ + φp = π/2, applied for the optimal duration t_op.
The measurements in the coherent-state basis, also known as heterodyne measurements, are required since parametric modulations alone cannot be used to decrease the mean quanta of thermal states. Furthermore, heterodyne measurements are optimal, since they only add a single quantum of noise on average [51] and because the phase-matching condition in Eq. (16) that leads to cooling does not depend on the coherent-state amplitude (see Appendix E). We demonstrate a single cycle for feedback cooling of a thermal state in Fig. 2(a). A similar protocol with linear feedback has also been discussed as an engine in [51].

FIG. 2. Single realization of the optimal cooling protocol for an isolated system, starting from a coherent state with mean quanta equal to r² = 80. The black dots indicate the average quanta in the coherent state obtained from measurements, and the red dots indicate the average quanta at the end of the cycle. The blue dashed line indicates the change of mean quanta as the drive is turned on, and the underlying green line is the corresponding prediction from solving the dynamics analytically. (c) Average cooling ⟨n⟩ as a function of the number of cycles Nc in the presence of a thermal environment, starting from a coherent state with r² = 80. Here, n_h is the thermal quantum of the ambient temperature, k_B T_B = 10ω0. In the absence of noise, the occupation number of the state can be successfully reduced to below one. (d) Effects of uncertainty in phase control for different values of the phase spread ∆φ, in the absence of dissipation. Each simulation starts with a coherent state with mean quanta r² = 10. The blue curves indicate the corresponding theory predictions. All plots use the drive strength λ/ω0 = 0.01.

D. Sequential cooling cycles
We now consider applying a sequence of cooling cycles to a quantum oscillator, such that steps 1 and 2 above are repeated several times in sequence. For each cycle, the initial phase-preserving quantum measurement is modeled by the Kraus operators K(ξ) = (1/√π)|ξ⟩⟨ξ|, sampling coherent states according to the corresponding Husimi Q function at the beginning of each cycle. Since the dynamics is Gaussian, we approximate the sampling distribution as (in units where ħ = m = 1) [52-54]

Q(ξ) ≈ MN[µ, Σ](Re ξ, Im ξ),

where MN[µ, Σ](Re ξ, Im ξ) is a multivariate normal distribution with mean µ = (⟨x̂⟩, ⟨p̂⟩)/√2 and variance matrix

Σ = ½(σ + ½ 1),

where σ_ij are the elements of the familiar covariance matrix of observables î and ĵ, defined as

σ_ij = ½⟨îĵ + ĵî⟩ − ⟨î⟩⟨ĵ⟩.

In the simulations, we assume that the measured coherent state is aligned to the φ = 0 axis via a unitary rotation, such that φp = π/2. In each cycle, we compute the optimal cooling time t_op as per the expressions above and truncate the modulations at the nearest multiple of π. In Fig. 2(b) we show an example of such a quantum trajectory. The oscillator is initialized in a coherent state with mean quanta r² = 80, and the optimal cycle duration is calculated for each subsequent measurement outcome. The occupation number n(ω0 t) quickly reduces to around unity. An added benefit of performing the measurements is that the instability of the oscillator due to the Mathieu equation in the long-time limit is mitigated by the sequential measurements. Similar techniques have been employed in the past to improve the stability of otherwise unstable quantum oscillator systems [55].
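The cycle-by-cycle protocol can be sketched as a simple Monte Carlo simulation for the closed system: squeeze the current coherent state by r_op, sample a heterodyne outcome from the Gaussian Q function, and repeat. The code below is our own illustrative implementation, not the authors' code; it sets ħ = m = 1, uses the Gaussian Q-function approximation with variances ½(σ_xx + ½) and ½(σ_pp + ½), and the seed and cycle counts are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(7)

def n_min(r2):
    """Minimum occupation reachable from a coherent state with |xi|^2 = r2."""
    return (np.sqrt(1 + 4 * r2) - 1) / 2

def cycle(r):
    """One cooling cycle: optimal squeezing, then a heterodyne measurement."""
    r_op = np.log(1 + 4 * r ** 2) / 4
    # Squeezed coherent state, amplitude aligned with the squeezed quadrature:
    mean = np.array([r * np.exp(-r_op), 0.0])              # (Re xi, Im xi) mean
    var = np.array([(0.5 * np.exp(-2 * r_op) + 0.5) / 2,   # Q-function variances
                    (0.5 * np.exp(+2 * r_op) + 0.5) / 2])
    out = rng.normal(mean, np.sqrt(var))                   # sampled outcome xi'
    return np.hypot(out[0], out[1])                        # new amplitude |xi'|

r, history = np.sqrt(80.0), []
for _ in range(3000):
    history.append(n_min(r ** 2))   # occupation right after the squeezing step
    r = cycle(r)

n_f = np.mean(history[500:])        # steady-state average over many cycles
print(n_f)                          # close to the n_f of about 0.83 quoted below
```

The steady-state average lands near the value obtained from the Fredholm equation in Sec. III E, and the heterodyne step adds roughly one quantum of noise per cycle, as expected.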

E. Near quantum ground-state cooling
The optimized cooling scheme achieves cooling to near the quantum ground state. The probability of obtaining the state |ξ_j⟩ from the jth measurement, given the previous outcome |ξ_{j−1}⟩ followed by modulations, is

P(ξ_j | ξ_{j−1}) = (1/π)|⟨ξ_j|Û(t_op)|ξ_{j−1}⟩|²,

where Û(t) is the time-evolution operator in Eq. (10). As mentioned before, if the parametric modulations are turned on for the optimal time t_op starting from an initial coherent state |ξ_j⟩, the achievable minimum occupation number in the oscillator at the end of the driving protocol is given by n_min(ξ_j) = (√(1 + 4|ξ_j|²) − 1)/2. By then requiring that a single cycle does not, on average, change the occupation number of the system in the steady state, we find that ⟨n_min(ξ_{j−1})⟩ for j ≫ 1, which we denote n_f, satisfies the requirement for an invariant cycle, namely,

n_f = ∫ d²ξ_j P(ξ_j | ξ_{j−1}) n_min(ξ_j),

which is a Fredholm integral equation of the first kind. We solve this equation numerically to find that, in the absence of noise, the minimum occupation in the steady state is n_f ≈ 0.83 (see Appendix F). This constitutes the optimal limit to parametric feedback cooling in the quantum regime in our protocol.

F. Cooling in a thermal environment
All experimental systems are affected by environmental noise. We therefore consider performing multiple cycles of cooling while the quantum oscillator undergoes collisional interactions with the modes of a thermal reservoir. Following [56], we model these interactions using an adiabatic Markovian master equation, resulting in dynamical equations for the first and second moments of the form

d⟨x̂⟩/dt = ω0⟨p̂⟩ − (γ/2)⟨x̂⟩,   d⟨p̂⟩/dt = −[ω(t)²/ω0]⟨x̂⟩ − (γ/2)⟨p̂⟩,
dσ/dt = A(t)σ + σA(t)ᵀ + γ(n̄_B + ½) 1,

with drift matrix A(t) = [−γ/2, ω0; −ω(t)²/ω0, −γ/2], where γ is the dissipation rate, n̄_B is the thermal occupation of a reservoir mode at frequency ω_B = ω0/2 at temperature k_B T_B = 10ω0, and ω(t) = ω0 √(1 + 4f(t)/ω0). We compute the optimal cooling time t_op numerically for each cycle by searching for the minimum of the occupation value. In Fig. 2(c), we demonstrate that our protocol is able to cool down the quantum oscillator below the ambient temperature on average, even for moderate amounts of dissipation. Here, we start from a coherent state with r² = 80 and average the results of 10³ runs. For negligible dissipation, we recover the occupation value of 0.83 derived in Sec. III E, which corresponds to cooling near the quantum ground state.
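As a sanity check on dissipative moment equations of this type, with the drive switched off (f = 0) the second moments should relax to the reservoir value, σ → (n̄_B + ½)1 and n → n̄_B. The sketch below is our own reconstruction of standard Lindblad-type moment equations, not necessarily the exact equations of Ref. [56]; ħ = m = 1 and all parameters are arbitrary:

```python
import numpy as np

W0, GAMMA, NBAR = 1.0, 0.02, 5.0            # frequency, damping, bath occupation

A = np.array([[-GAMMA / 2, W0],
              [-W0, -GAMMA / 2]])           # drift matrix with the drive off
D = GAMMA * (NBAR + 0.5) * np.eye(2)        # diffusion from the thermal bath

sigma = 0.5 * np.eye(2)                     # start in the vacuum state
dt, t_max = 0.01, 600.0
for _ in range(int(t_max / dt)):            # Euler integration of the Lyapunov eq.
    sigma = sigma + dt * (A @ sigma + sigma @ A.T + D)

n = 0.5 * (sigma[0, 0] + sigma[1, 1] - 1)   # occupation from the covariance
print(n)                                    # relaxes toward NBAR = 5
```

The thermal covariance (n̄_B + ½)1 is an exact fixed point of the update, so after many damping times the occupation sits at n̄_B.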

IV. DISCUSSION
Here we discuss the effects of phase noise on the cooling power, as well as the physical implementations of the protocol across different platforms.

A. Phase inaccuracy
The protocol relies on the ability to adjust the phase of the modulation such that 2φ + φp = π/2. However, latency in the feedback loop and other inaccuracies can introduce errors into the protocol. To model this scenario, we consider several realizations of an individual cooling cycle in which the applied driving phase is sampled around the ideal driving phase φp according to a normal distribution with standard deviation ∆φ. In Fig. 2(d), we demonstrate that our cooling protocol is robust against significant phase errors, up to 20% of the ideal phase.
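At first order, a phase error δ shifts the cooling term from sin(π/2) = 1 to sin(π/2 + δ) = cos δ, so the mean cooling rate is suppressed by ⟨cos δ⟩ = e^{−∆φ²/2} for Gaussian errors. A Monte Carlo sketch of this suppression (our own illustration; seed and sample size arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
for frac in (0.05, 0.10, 0.20):             # spread as a fraction of pi/2
    dphi = frac * np.pi / 2
    delta = rng.normal(0.0, dphi, 200_000)  # Gaussian phase errors
    eff = np.cos(delta).mean()              # mean cooling rate relative to ideal
    print(f"{frac:.2f} {eff:.4f} {np.exp(-dphi ** 2 / 2):.4f}")
```

Even at a spread of 20% of π/2, the mean cooling rate is suppressed by only about 5%, consistent with the robustness seen in Fig. 2(d).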

B. Physical realization
Modulations of the trapping potential can be realized by imposing an electrostatic force or a strong external optical field on the mechanical mode [57]. In levitated systems, the percentage change of the trapping potential is known as the modulation depth G [46]. In this work, G is related to the driving amplitude λ as G = 4λ/ω0, which for λ/ω0 = 0.01 gives G = 0.04, or 4%. In hybrid traps, modulation depths as high as 5% are possible [46], while in optical tweezers around 0.4% is more common [58]. Beyond optical and hybrid traps, candidate systems include magnetically levitated magnets [59,60], diamagnets [61], and superconducting spheres [62].
Phase-preserving measurements are a key ingredient in this protocol and can be implemented through joint homodyne detection of both quadratures of the oscillator [50,63], or by pulsing light through the cavity when the system is in the unresolved sideband regime [64,65].
Superconducting circuits also offer novel methodologies to perform such measurements dynamically in hybrid systems [63,66].

V. CONCLUSIONS
The optimal parametric feedback protocol proposed here leads to near quantum ground-state cooling, and appears to offer significant cooling even when feedback capabilities are limited. The protocol may also be combined with linear feedback cooling techniques [40,41] or various other quantum refrigerator schemes proposed based on fundamental thermodynamic principles [67][68][69][70][71], to further explore quantum enhanced cooling at the nanoscale. The methodologies we developed can be generalized to derive exact results for optimal cycles in the presence of added noise; we defer this analysis to future work.
Note added: Recently, the authors became aware of a related paper by Ghosh et al. [72], in which phase-adaptive quantum parametric feedback cooling is considered using a semi-classical approach. Assuming equipartition of noise between the phase-space quadratures, the authors of [72] demonstrate efficient, exponential cooling by deriving the same phase relation as that found here. In contrast, the present manuscript also highlights the role of the squeezing that results from the parametric modulations of the trapping potential, which implies an optimal duration for the cooling cycle.

ACKNOWLEDGMENTS
We thank Anthony Bonfils for helpful insights concerning the stability of Mathieu's equation, and for added insights on the child-on-a-swing problem. We also thank Lydia Kanari-Naish, Thomas Penny, Antonio Pontin, Anis Rahman, Ermes Scarano, Dhrubaditya Mitra, David Edward Bruschi, Alessio Serafini, and Witlef Wieczorek for helpful comments and discussions. The work of S.K.M. was supported by the Wallenberg Initiative on Networks and Quantum Information (WINQ). S.Q. was funded in part by the Wallenberg Initiative on Networks and Quantum Information (WINQ) and in part by the Marie Skłodowska-Curie Action IF programme "Nonlinear optomechanics for verification, utility, and sensing" (Grant No. 101027183). Nordita is partially supported by NordForsk.

DATA AVAILABILITY STATEMENT
The code used to generate the figures shown in this work can be found in a GitHub repository.

Appendix A: Solutions to the dynamics

In this appendix, we connect the derivation of the solutions for the dynamics, which was first presented in [42], with a more intuitive solution using a Lie algebra method [73] (see [74] for a pedagogical introduction). We identify a set of operators that is closed under commutation, which allows us to set up differential equations that, when solved, provide the exact solution to the dynamics. We then show that these solutions can be mapped to those derived in [42]. The solutions and derivations build on methods also developed in Ref. [75]. In addition, we note that dynamics of this form may also be treated using the exact Lewis-Riesenfeld solutions [76].

Phase space dynamics
We start by setting ħ = 1 in this section. We then identify the elements of the Lie algebra that generate the time evolution induced by the Hamiltonian in Eq. (1): the operators â†â + ½, â†², and â², which can be verified to close under commutation. The corresponding symplectic matrices in the (â, â†)ᵀ basis we call A0, A+, and A−. The symplectic matrix that encodes the evolution of the system is the time-ordered exponential S(t) = T⃖ exp[∫₀ᵗ dt′ ΩH(t′)]. Differentiating this matrix with respect to time t gives

(d/dt) S(t) = ΩH(t) S(t).
Multiplying this expression by S⁻¹(t) from the right gives Ṡ(t)S⁻¹(t) = ΩH(t). We then make the following ansatz for the solution S(t):

S(t) = e^{J0 A0} e^{J+ A+} e^{J− A−},    (A6)

where J0, J+, and J− are time-dependent coefficients that we wish to find. Differentiating the ansatz in Eq. (A6), and using the fact that symplectic matrices obey SΩS† = Ω, allows us to write Ṡ(t)S⁻¹(t) as a linear combination of A0, A+, and A−, with coefficients built from J0, J+, and J−. Equating this expression with the Hamiltonian matrix ΩH(t) term by term yields coupled differential equations for the coefficients, which can be manipulated to isolate J0, J+, and J− into three separate differential equations [44]. We note, however, that J− does not feature in the equations for J̇0 and J̇+, which means that it can be solved for entirely once the other two equations have been solved. This confirms that S(t) is fully determined by only two real parameters.

We now wish to relate J0, J+, and J− to the functions P(t) and Q(t), which were introduced in Eq. (8); for their derivation, see Appendix B in Ref. [42]. By rewriting S(t) in Eq. (A6) as a single symplectic matrix, the Bogoliubov coefficients α(t) and β(t) can be expressed in terms of J0, J+, and J− [44], with |α(t)|² − |β(t)|² = 1. Using Eq. (8), we can then write P(t) and Q(t) in terms of J0, J+, and J−. The second derivatives P̈(t) and Q̈(t) can be found similarly; they are long expressions, so we do not print them here. We then recall from the main text that P(t) and Q(t) are determined by the differential equation in Eq. (9). By inserting the expressions for P(t) and Q(t) and their derivatives into Eq. (9), and using the differential relations among the coefficients, it is possible to show that J0, J+, and J− are also consistent with these solutions.
Next, we note that it is also possible to define J0, J+, and J− in terms of P(t) and Q(t). Previously, it was shown that [44] cosh(4J+) = |α²(t) − β²(t)|, and with the help of the relations in Eq. (A19), we can identify J0, J+, and J− in terms of P(t), Q(t), and their derivatives (A20).
Finally, we note that the solutions P (t) and Q(t) are valid for any choice of driving function f (t). The case of parametric modulations explored in the main text leads to Mathieu's equation, but many other driving patterns can be considered using these methods.

Hilbert space solution
Here, we use the solutions derived in the preceding section to cast the dynamics into a rotation and a single squeezing operator in the Hilbert space representation. The Hilbert space ansatz equivalent to that in Eq. (A6) is a product of a phase rotation generated by â†â and the two exponentials e^{−iJ+(â†² + â²)} and e^{J−(â†² − â²)} (A21); the operators in Eq. (A21) are equivalent to single-mode squeezing and a phase rotation with â†â. The connection between the Hilbert space picture and the phase-space picture is

Û†(t) X̂ Û(t) = S(t) X̂,

where X̂ = (â, â†)ᵀ, as in the main text. We start by focusing on the two squeezing operators e^{−iJ+(â†² + â²)} and e^{J−(â†² − â²)}. It is possible to combine two squeezing operators by using the product theorem [77], which expresses the product Ŝ(z1)Ŝ(z2) as a single squeezing operator times a rotation; here the squeezing operators are defined as Ŝ(z_j) = e^{(z_j* â² − z_j â†²)/2} for z_j = r_j e^{iθ_j}, and the phase ϕ_sq of the rotation is determined by the quantities t_j = tanh(r_j)e^{iθ_j}. We wish to solve for the total squeezing value r3 and determine its behavior under the parametric modulations. Making the appropriate identifications for our case, we find [44] that tanh(r3)e^{iθ3} is given in terms of i tanh(2J+) and tanh(2J−) (A27). To find an expression for tanh(r3), we take the absolute value of Eq. (A27); decomposing the right-hand side into squares of real and imaginary parts and taking the square root yields Eq. (A28). Then, using Eqs. (A20), which relate J± to the functions P(t) and Q(t), and inverting Eq. (A28) for r3, which we rename r_sq as in the main text, we obtain an explicit expression for r_sq in terms of P(t), Q(t), and their derivatives (A29).

Let us analyze this expression for r_sq. The initial conditions P(0) = 1, Ṗ(0) = 0, Q(0) = 0, and Q̇(0) = 1 imply that there is zero squeezing, r_sq = 0, at t = 0, which is what we expect. Furthermore, since P(t) and Q(t) in Eq. (B1) grow exponentially with t, r_sq tends to infinity in the limit of large t.
To summarize, we have shown that the parametric modulation imposes a unitary transformation on the state (A33), where z_sq = r_sq e^{iθ_sq}. The squeezing magnitude r_sq is given in Eq. (A29), and the phase ϕ_sq is given by the argument of the right-hand side of Eq. (A27) [44]. If we ignore the global phase in Eq. (A33), we can write

Û(t) = Ŝ(z_sq) e^{−iϕ(t) â†â},

where ϕ(t) = J0 − ϕ_sq.

Appendix B: Approximate solutions to the dynamics
In this appendix we outline the derivation of the dynamics generated by the Hamiltonian in Eq. (1). We solve the dynamics perturbatively using two separate methods: first, a well-established two-time perturbative solution of the Mathieu equation, which underpins the dynamics, and second, a perturbative solution of the unitary time-evolution operator Û(t) in Eq. (3) to first order. We also examine the stability and error of the solutions, showing that the approximate solution for Û(t) produces an error in the Bogoliubov coefficients that grows in time, while the Mathieu-equation result has an error that oscillates in time.

Approximate solutions to Mathieu's equation
Mathieu's equation can be approximately solved using a standard two-time-scale method; see, e.g., Ref. [78]. The solutions were previously presented in Refs. [42] and [24] and are valid for 2λ/ω0 ≪ 1. The approximate solutions for P(t) and Q(t), where we have rescaled tω0 → t and λ/ω0 → λ [24,42], read

P(t) ≈ cos(t) cosh(λt) − sin(t + φp) sinh(λt),
Q(t) ≈ sin(t) cosh(λt) − cos(t + φp) sinh(λt),    (B1)

which are used to derive α(t) and β(t) in Eq. (12) after expanding in λ to first order, and with factors of ω0 restored.

Time-evolution perturbation theory
We now present an alternative method by which the dynamics can be solved. To treat the dynamics of the Hamiltonian in Eq. (1), we make use of time-dependent perturbation theory. We start by writing down the time-evolution operator Û(t) and dividing the Hamiltonian in Eq. (1) into two parts: a modified free-evolution term Ĥ0(t) proportional to â†â, and an interaction term ĤI(t), where we have ignored a scalar term since it results only in a global phase. We then consider the frame that rotates with Ĥ0(t). For the choice f(t) = λ cos(2ω0 t + φp) made in this paper, the evolution generated by Ĥ0(t) is

Û0(t) = e^{−iθ(t) â†â},

where we have defined θ(t) = ω0 t + 2(λ/ω0) cos(ω0 t + φp) sin(ω0 t). The interaction Hamiltonian evolves in this frame with Û0(t), where we have used the fact that e^{ix â†â} â e^{−ix â†â} = e^{−ix} â. The evolution operator in the interaction frame is therefore ÛI(t), and returning to the laboratory frame, the full evolution can be written as Û(t) = Û0(t)ÛI(t). When λt ≪ 1, we can expand the exponential in Eq. (B6) to first order in λ. We then examine the evolution of â, insert the approximate form of ÛI(t) shown in Eq. (B7) into Eq. (B8), and, expanding and evaluating the integral, identify the Bogoliubov coefficients α(t) and β(t) as per Eq. (7) to first order in λ (B10). As can be seen, both coefficients contain corrections linear in λ, but they differ slightly from those derived in the preceding section.

Error analysis of the perturbative solutions
The first step we perform in order to determine the error of the perturbative method in Appendix B 1 is to plot the solutions in Eq. (B1) against numerically obtained solutions of Mathieu's equation. We do so in Fig. 3, where we have defined ∆P (t) and ∆Q(t) as the deviations away from the numerical result. The phase is set to φ p = π/2. As can be seen, for short times the error remains similar in magnitude to the driving amplitude λ/ω 0 .
Another way in which we can quantify the errors of our approximate solutions is by considering the Bogoliubov normalization relation |α|² − |β|² = 1. Starting with the solutions obtained by expanding Û(t), shown in Eq. (B10), we find to second order in λ that the deviation from unity grows with t (B11), which means that the solutions derived in Appendix B 2 become increasingly inaccurate. In contrast, using the expressions for the Bogoliubov coefficients obtained from the perturbative solutions to the Mathieu equation, we find that the error oscillates in time (B12). We note that this solution is exact whenever φp = nπ/2 and 2ω0 t + φp = nπ/2, for integer n. For example, when φp = π/2, the solution is exact at ω0 t = π.
In Fig. 4 we compare the error of the approximate solutions shown in Eqs. (B10) and (B1) with a numerically obtained solution of Mathieu's equation for an initial coherent state |r e^{iφ}⟩. The parameters are set to φ = 0, φp = π/2, λ/ω0 = 0.01, and r = √10. We note a few things from this figure. First, the exact solutions (black solid lines) have a periodicity that is about twice that of the approximate solutions to Mathieu's equation (purple dotted line). The missed oscillations can also be observed as errors in Fig. 3. It might be possible to further improve the accuracy of the two-time-scale solutions by adding a third time scale, stretched by the square root of the small parameter; we leave such an analysis to future work. Second, the error of the solutions from expanding Û(t) (blue dashed line) grows in time, and thereby diverges from the numerical solutions to a greater extent than those obtained from Mathieu's equation. They are, however, more accurate on shorter time scales, since they reproduce the shorter oscillations of the numerically obtained solutions.

Mathieu equation stability analysis
The Mathieu equation is numerically unstable, which means that certain parameter combinations result in diverging solutions [78]. When a = 1, there are in fact no stable solutions. However, since we measure the state at the beginning of each cooling cycle, we effectively reset the instabilities that would otherwise have built up over the full running time of the protocol. In this way, the inclusion of measurements also prevents the buildup of instability from the modulations of the potential (see Fig. 4).

FIG. 4. As shown in Eq. (B12), the approximate solution is accurate whenever ω0 t is a multiple of π, given that φp = π/2.
The analytic extension to non-unitary dynamics is likely to change the stability of the equations of motion; however, such a stability analysis would require a full analytical solution of the open-system dynamics with the time-dependent frequency modulation. We leave this to future work.

Here n̄ = Tr{ρ̂_th n̂} = (e^{ω0/(k_B T)} − 1)⁻¹. By then averaging over all possible outcomes of r and φ, we can determine the average cooling power.
We find, to first order in λ/ω0, that the occupation number on average is ⟨n(t)⟩ = 1 + n̄(1 − 2λt) + O[(λ/ω0)²], where the angular brackets indicate averaging over many measurement outcomes. This means that the average cooling power at early times is 2λn̄. In Fig. 2(a) of the main text, we compare the analytical prediction for the average quanta ⟨n(t)⟩ for a single cooling cycle with numerical simulations and find excellent agreement.

The minimum occupation after squeezing is n_min = (√(1 + 4r²) − 1)/2, which is the lowest number of quanta a coherent state of magnitude r can be cooled to. If we squeeze beyond this value, quanta are added to the system rather than removed. For example, given a coherent state with r = 1, the optimal squeezing value is r_sq = ln(5)/4, which results in ⟨ξ|Ŝ†(r_sq) â†â Ŝ(r_sq)|ξ⟩ ≈ 0.6. It is not possible to reduce the number of quanta below this value by squeezing alone.
By knowing the optimal squeezing value and the approximate expression r_sq ≈ λt, it is possible to derive the optimal modulation time. We know from Eq. (D2) that r_sq ≈ λt. By equating this to the optimal squeezing value and solving for time, we find the optimal modulation time t_op ≈ ln(1 + 4r²)/(4λ). For an initial coherent state with r = 1 and a modulation strength λ/ω0 = 0.01, the optimal modulation time is t_op ≈ 40/ω0. We note that for low occupation numbers (small r), the optimal modulation time is short, which might make it challenging to cool the state optimally.
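The numbers quoted here are easy to reproduce (our own check; ω0 = 1):

```python
import numpy as np

lam, r = 0.01, 1.0
r_op = np.log(1 + 4 * r ** 2) / 4               # optimal squeezing, ln(5)/4
t_op = r_op / lam                               # optimal modulation time
n = lambda rs: r ** 2 * np.exp(-2 * rs) + np.sinh(rs) ** 2

print(t_op)                    # approx. 40.2
print(n(r_op))                 # approx. 0.618, i.e. (sqrt(5) - 1)/2
print(n(2 * r_op) > n(r_op))   # True: over-squeezing adds quanta back
```

This reproduces t_op ≈ 40/ω0 and the minimum occupation ≈ 0.6 for r = 1, and confirms that squeezing past r_op heats the state again.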
It should be noted that these results only apply to closed-system dynamics. In the presence of finite thermal dissipation, the optimal time occurs earlier than predicted here. For Fig. 2(c), we numerically computed the occupation value for dissipative dynamics and determined at what time the minimum value is achieved. We then terminated the modulation protocol at the closest multiple of π, which is where the potential returns to its original value. To analytically determine the optimal modulation time for dissipative dynamics, one would have to solve the master equation analytically. We leave this to future work.

Here, by numerically integrating the exact Q function, we plot the final occupation number n_f as a function of the initial coherent-state amplitude |ξ|, for a single modulation of optimal duration (red solid line), for one measurement-and-modulation cycle (Nc = 1, black dotted line), and similarly for up to four cycles. We see that as the number of cycles grows, the final occupation number achieved by our protocol becomes independent of the initial coherent-state parameter ξ and approaches n_f = 0.83, corresponding to an invariant cycle of cooling.