Continuous optimization by quantum adaptive distribution search

In this paper, we introduce quantum adaptive distribution search (QuADS), a quantum continuous optimization algorithm that integrates Grover adaptive search (GAS) with the covariance matrix adaptation evolution strategy (CMA-ES), a classical technique for continuous optimization. QuADS combines the quantum search capabilities of GAS with the distribution-adaptation principles of CMA-ES for more efficient optimization. It employs a multivariate normal distribution as the initial state of the quantum search and repeatedly updates it throughout the optimization process. Our numerical experiments show that QuADS outperforms both GAS and CMA-ES. This improvement stems from adaptively refining the initial state distribution rather than consistently using a uniform state, which results in fewer oracle calls. This study presents an important step toward exploiting the potential of quantum computing for continuous optimization.


I. INTRODUCTION
Continuous optimization has become a critical element across a wide range of fields [1,2]. However, the global optimization of non-convex functions remains an unresolved challenge that demands further exploration [3-5]. The potential benefits of conquering this challenge are immense: efficient optimization methods for non-convex functions would lead to breakthroughs in numerous fields.
In quantum computing, researchers have been investigating the potential of quantum computers for addressing continuous optimization problems. Grover Adaptive Search (GAS) [6,7], designed for fault-tolerant quantum computers (FTQC), has emerged as a notable optimizer for discrete problems. This algorithm utilizes Grover's algorithm [8,9] to mark regions where the function value is below the current optimum and iteratively finds superior solutions. Furthermore, extensions of GAS to continuous optimization problems have been proposed in Refs. [10,11]. Since Grover's algorithm assures a quadratic improvement in performance with respect to the number of oracle calls, every update step of GAS can also achieve such a speedup.
However, vanilla GAS is not the best choice for the optimization problems commonly found in practical situations, as it employs a uniform search across the entire domain of the function. Bulger [12,13] has proposed integrating local search within the oracle to concentrate the search points around the local optimum within each basin. While this achieves considerable speedup in terms of oracle calls for some cases, the coherent arithmetic needed to perform the local search generally increases the cost of implementing the oracle itself, and thus the overall speedup over vanilla GAS is not apparent. Additionally, this algorithm does not take into account the structure of the function outside of individual basins, although some optimization problems could benefit from considering their overall structure.
In this article, we propose an extension of GAS that modifies the initial state rather than the oracle, thus avoiding making the oracle complex, in contrast to Refs. [12,13]. Namely, we adaptively update the initial state of GAS during the optimization process, utilizing the information about the objective function obtained from the optimization history. In the updating rule, we incorporate the concept of a classical optimization method known as the covariance matrix adaptation evolution strategy (CMA-ES) [14]. CMA-ES is known for efficiently minimizing unimodal test functions and is also highly competitive in the global optimization of multimodal functions [15], making it widely applicable in various fields, including generative models [16] and tensor network models [17].
Inspired by the idea of adapting probability distributions in CMA-ES, we propose a natural extension of GAS, which we call quantum adaptive distribution search (QuADS), that combines Grover adaptive search with CMA-ES principles. Our algorithm aims to optimize continuous functions by adaptively changing the belief distribution while using the quantum search capabilities of amplitude amplification. Through numerical simulations in scenarios of up to 10 dimensions, we observe that GAS consistently needs high computational costs regardless of the type of function, in exchange for the assurance of finding the global optimum. CMA-ES typically converges faster than the other methods but frequently falls into local optima. QuADS consistently outperforms GAS and performs better than CMA-ES, especially in high-dimensional cases, in terms of the expected oracle count until the global optimal solution is obtained. We believe QuADS will be one of the promising tools for accelerating continuous optimization on quantum computers.
The rest of this paper is organized as follows. In Section II, we provide summaries of GAS and CMA-ES. In Section III, we detail the main concepts and algorithms of QuADS. In Section IV, we validate our method through numerical simulations and show that the proposed method surpasses traditional approaches. Finally, we conclude the paper in Section V. Code for our numerical simulations can be found at Ref. [18].

II. PRELIMINARIES

A. Grover Adaptive Search
Grover's algorithm [8,9] achieves a quadratic speedup in searching for specific items within unstructured tables. Specifically, given a set of elements A = {a_i} (i = 0, . . ., n − 1) and a subset of 'good' elements G ⊂ A, Grover's algorithm efficiently identifies the index of an element that belongs to G. Grover's algorithm is a special case of the more general concept of amplitude amplification [19], in which the initial state is uniform. The algorithm for amplitude amplification is shown in Algorithm 1. O represents an oracle gate, which selectively inverts the phase of the amplitudes of states corresponding to the 'good' elements. λ is selected in the range (1, 4/3) because a quadratic speedup is guaranteed within this range [9].
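As a concrete illustration, the amplitude-amplification loop of Algorithm 1 can be emulated classically on a toy search problem. The following sketch is our own illustrative code (not from the paper's repository): it applies the Grover iterate to a state vector and then runs the randomized rotation schedule with growth rate λ.

```python
import numpy as np

def grover_iterate(psi, good_mask, psi0):
    """One amplitude-amplification step: oracle phase flip on the 'good'
    elements, then reflection about the initial state |psi0>."""
    psi = np.where(good_mask, -psi, psi)           # oracle O
    return 2.0 * psi0 * np.vdot(psi0, psi) - psi   # 2|psi0><psi0| - I

def success_probability(psi, good_mask):
    return float(np.sum(np.abs(psi[good_mask]) ** 2))

# Toy search: one marked item among N = 256, uniform initial state.
N = 256
good = np.zeros(N, dtype=bool)
good[42] = True
psi0 = np.full(N, 1.0 / np.sqrt(N))

# After the optimal number of rotations, success is nearly certain.
psi = psi0.copy()
for _ in range(12):                  # ~ pi / (4 arcsin(1/16)) - 1/2 rotations
    psi = grover_iterate(psi, good, psi0)

# Randomized schedule of Algorithm 1: r ~ Uniform{0,...,ceil(m)-1}, m *= lam.
rng, lam, m, calls = np.random.default_rng(0), 6 / 5, 1.0, 0
for _ in range(100):
    r = int(rng.integers(0, max(1, int(np.ceil(m)))))
    phi = psi0.copy()
    for _ in range(r):
        phi = grover_iterate(phi, good, psi0)
    calls += r
    if rng.random() < success_probability(phi, good):
        break                        # a good item was measured
    m *= lam
```

With p = 1/256, the success probability after the optimal rotation count is essentially 1, while the randomized schedule finds the marked item without knowing p in advance.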
Grover Adaptive Search (GAS) [6,7] is a quantum optimization method that utilizes Grover's algorithm. GAS begins with a uniform superposition state and, using the current minimum value as a threshold, amplifies the amplitudes of states below this threshold through Grover's search. See Algorithm 2 for the pseudo-code. O_{f,θ} is an oracle for the objective function f that performs a phase flip in the region where f is less than the threshold θ, and H denotes the Hadamard gate.
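The GAS loop itself can be sketched classically by replacing Grover sampling with rejection sampling: each accepted draw stands in for one amplitude-amplified sample, so where this emulation spends roughly 1/p oracle calls per update, a quantum device would need only about 1/√p. All names below are illustrative.

```python
import numpy as np

def gas_minimize(f, D, rounds=30, budget=10_000, seed=0):
    """Classical emulation sketch of the GAS loop on [0, 1]^D.
    Each Grover sampling step is emulated by uniform rejection sampling."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(size=D)
    theta, calls = f(x), 0
    for _ in range(rounds):
        for _ in range(budget):       # emulate sampling a point below theta
            y = rng.uniform(size=D)
            calls += 1
            if f(y) < theta:
                x, theta = y, f(y)    # threshold update of GAS
                break
        else:
            break                     # no better point found within budget
    return x, theta, calls

x_best, f_best, calls = gas_minimize(lambda v: float(np.sum((v - 0.3) ** 2)), D=2)
```

The threshold decreases monotonically; as it approaches the optimum, the acceptance probability p shrinks and the per-update cost grows, which is exactly where the quadratic quantum speedup matters.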
GAS has been extended to continuous optimization problems in Refs. [10,11]. This is done by treating continuous optimization problems the same way as discrete ones, discretizing the continuous space using a fixed-point representation. For continuous problems, the number of discretization grid points is 2^{Dτ}, where D is the function's dimension and τ is the bit count per continuous variable. This technique is especially effective for optimization problems with highly discontinuous functions due to the inherent nature of Grover's algorithm, which accelerates the search in "needle in a haystack" scenarios. However, in practical situations, GAS is not the best choice because employing methods beyond uniform search is advantageous.

B. Covariance matrix adaptation evolution strategy

CMA-ES [14] is a black-box optimization algorithm based on the evolution strategy [20] for complex and non-convex functions. Compared to optimization methods using derivatives, this method is more robust to multimodal functions [15]. Also, the well-designed parameter update rule makes the algorithm invariant to affine coordinate transformations and able to handle bad scaling, where the scale of the function varies across dimensions.
CMA-ES maintains a multivariate normal search distribution N(µ, σ²C), where µ is the mean, C is the scaled covariance, and σ is the step size. While C and σ together define the covariance, C primarily determines the shape and orientation, whereas σ determines the scale of the search. The algorithm proceeds through the following iterative steps. Initially, M search points x_1, . . ., x_M are generated from the normal distribution N(µ, σ²C). Subsequently, the function values f(x_1), . . ., f(x_M) at the candidate points are evaluated, and the top K (< M) individuals with the lowest function values are selected. Then, the parameters of the normal distribution are updated accordingly: µ is adjusted toward the average of the best individuals; C is updated to reflect the direction in which the function value changes significantly; and σ increases when µ consistently moves in a similar direction over several iterations and decreases when the movement of µ appears more random. For updating C and σ, the evolution paths (moving averages) p_c and p_σ are utilized. The detailed parameter update rule is shown in Algorithm 4; its main steps are as follows (see Ref. [14] for further detail).

Update µ by the weighted mean of the selected samples X: µ' = Σ_{i=1}^{K} w_i X_i.

Update p_σ to track how much the samples have moved from the mean (µ_eff = 1/Σ_i w_i²):
p_σ ← (1 − c_σ) p_σ + √(c_σ(2 − c_σ) µ_eff) C^{−1/2} (µ' − µ)/σ.

Adjust the step size to keep the length of the evolution path close to E∥N(0, I)∥:
σ ← σ exp((c_σ/d_σ)(∥p_σ∥/E∥N(0, I)∥ − 1)).

Define h_σ^g, where g denotes the current iteration number: h_σ^g = 1 if ∥p_σ∥/√(1 − (1 − c_σ)^{2g}) < (1.4 + 2/(D + 1)) E∥N(0, I)∥ and 0 otherwise. h_σ^g is used to prevent C from increasing too fast. Update the evolution path p_c:
p_c ← (1 − c_c) p_c + h_σ^g √(c_c(2 − c_c) µ_eff) (µ' − µ)/σ.

Update C by the average direction and the one-step covariance:
C ← (1 − c_1 − c_µ) C + c_1 p_c p_c^T + c_µ Σ_{i=1}^{K} w_i y_i y_i^T, with y_i = (X_i − µ)/σ.

By repeating these operations, the normal distribution progressively approaches the optimal solution.
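A minimal CMA-ES sketch following these update steps is given below. This is an illustrative reimplementation, not the paper's code: the hyperparameter formulas follow the defaults of Hansen's tutorial [14], the inverse Cholesky factor is used as a whitening stand-in for C^{−1/2}, and the step-size damping d_σ is simplified.

```python
import numpy as np

def cma_es_minimize(f, mu0, sigma0=0.5, iters=300, seed=0):
    """Minimal CMA-ES sketch implementing the update steps above."""
    rng = np.random.default_rng(seed)
    D = len(mu0)
    M = 4 + int(3 * np.log(D))                 # default population size
    K = M // 2                                 # number of selected samples
    w = np.log(K + 0.5) - np.log(np.arange(1, K + 1))
    w /= w.sum()
    mu_eff = 1.0 / np.sum(w ** 2)
    c_sigma = (mu_eff + 2) / (D + mu_eff + 5)
    d_sigma = 1 + c_sigma                      # simplified damping
    c_c = (4 + mu_eff / D) / (D + 4 + 2 * mu_eff / D)
    c_1 = 2 / ((D + 1.3) ** 2 + mu_eff)
    c_mu = min(1 - c_1, 2 * (mu_eff - 2 + 1 / mu_eff) / ((D + 2) ** 2 + mu_eff))
    chi_D = np.sqrt(D) * (1 - 1 / (4 * D) + 1 / (21 * D ** 2))  # E||N(0,I)||

    mu, sigma = np.array(mu0, float), sigma0
    C, p_sigma, p_c = np.eye(D), np.zeros(D), np.zeros(D)
    for g in range(1, iters + 1):
        A = np.linalg.cholesky(C)
        X = mu + sigma * rng.standard_normal((M, D)) @ A.T   # ~ N(mu, s^2 C)
        order = np.argsort([f(x) for x in X])[:K]            # select K best
        y = (X[order] - mu) / sigma
        step = w @ y
        mu = mu + sigma * step                               # mean update
        p_sigma = (1 - c_sigma) * p_sigma + np.sqrt(
            c_sigma * (2 - c_sigma) * mu_eff) * np.linalg.inv(A) @ step
        sigma *= np.exp(c_sigma / d_sigma * (np.linalg.norm(p_sigma) / chi_D - 1))
        h = np.linalg.norm(p_sigma) / np.sqrt(
            1 - (1 - c_sigma) ** (2 * g)) < (1.4 + 2 / (D + 1)) * chi_D
        p_c = (1 - c_c) * p_c + h * np.sqrt(c_c * (2 - c_c) * mu_eff) * step
        C = ((1 - c_1 - c_mu) * C + c_1 * np.outer(p_c, p_c)
             + c_mu * (y.T * w) @ y)                         # rank-1 + rank-mu
        if sigma < 1e-8:
            break
    return mu

mu = cma_es_minimize(lambda x: float(np.sum((x - 0.3) ** 2)), [0.8, 0.8])
```

On a simple quadratic this sketch converges to the optimum; the invariance properties discussed in the text come from updating C and σ in the coordinate system of the sampled steps.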

III. QUANTUM ADAPTIVE DISTRIBUTION SEARCH
Algorithm 5: QuADS (proposed method)

The main concept of our method is to extend GAS by using an adaptive multivariate normal distribution for the initial state of the quantum search. Let p denote the probability of finding a point below a specified threshold under the distribution. Compared to the GAS approach, QuADS is expected to achieve a higher p by initiating the quantum search with an adaptive distribution instead of a uniform distribution. This is especially advantageous when the threshold is near the global optimum: while GAS consistently demands computational resources proportional to O(1/√p_uniform), where p_uniform is the typically very small probability that a uniform sample falls below the threshold, QuADS has the potential to reduce this cost when the adaptive distribution is properly updated toward the optima. From another perspective, our method enhances the efficiency of the sampling process of CMA-ES by using amplitude amplification to sample from regions where function values are below an adaptive threshold. Classically, the number of oracle calls required to find such a sample is O(1/p); amplitude amplification reduces this to O(1/√p). It should be noted that QuADS may occasionally converge to local optima due to its adaptive distribution updating strategy, in contrast to GAS, which always identifies the global optimum. Nonetheless, in terms of the expected number of oracle calls to locate the global optimum, we find that QuADS outperforms GAS, as will be discussed in detail in Section IV. Algorithm 5 describes QuADS, where G_{µ,Σ} is a preparation operator for a multivariate normal distribution with mean µ and covariance Σ, that is,

G_{µ,Σ}|0⟩ = (1/√Z) Σ_x exp(−(x − µ)^T Σ^{−1} (x − µ)/4) |x⟩,

where Z is the normalization constant. The construction of G_{µ,Σ} has been discussed in Refs. [21,22]; this operator can be implemented using a polynomial number of quantum gates with respect to D and τ. Our approach begins by sampling K superior points from the region below the adaptive threshold θ. In the sampling, amplitude amplification is used, with the normal distribution serving as the initial state. Figure 1 illustrates the
corresponding quantum circuit for the amplitude amplification. These sampled points are then used to update both the threshold and the normal distribution parameters. For updating the distribution parameters, we employ the same updating rule as in CMA-ES; this allows QuADS to incorporate insights about the structure of the function, leading to an efficient search. As for the threshold, QuADS uses a moving average of the q-quantile of the function values at the sampled points, i.e., approximately the ⌈qM⌉-th best value out of M samples, for some constant q. The reason for using this update rule, rather than the minimum of the sampled values as in GAS, is that if the threshold is updated prematurely, the distribution may not have sufficient weight on the region below the threshold, which causes inefficient sampling. Fig. 2 shows an example of the QuADS optimization process for the alpine02 function in Table I. The mean of the normal distribution progressively shifts toward the global optimum, while the distribution's covariance broadens in the direction of this movement. Each iteration's points are sampled through amplitude amplification using the normal distribution as a prior. Notably, more samples are drawn from areas where the normal distribution's weights are larger, within the regions with small function values.
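The QuADS iteration can be emulated classically by replacing the amplitude-amplified sampling with rejection sampling from the normal distribution. The sketch below is illustrative only: it uses a mean-only distribution update with a crude geometric covariance decay in place of the full CMA-ES rule, and shows the threshold update with α = 0.5 and q = 0.2 as in Section IV.

```python
import numpy as np

def sample_below_threshold(f, mu, cov, theta, K, rng, max_calls=50_000):
    """Classical stand-in for the amplitude-amplified sampling step:
    draw from N(mu, cov) until K samples with f(x) < theta are found.
    A quantum implementation would need only ~1/sqrt(p) oracle calls
    per sample instead of the ~1/p counted here."""
    good, calls = [], 0
    while len(good) < K and calls < max_calls:
        x = rng.multivariate_normal(mu, cov)
        calls += 1
        if f(x) < theta:
            good.append(x)
    return np.array(good), calls

def update_threshold(theta, fvals, alpha=0.5, q=0.2):
    # moving average of the q-quantile of the sampled function values
    return (1 - alpha) * theta + alpha * np.quantile(fvals, q)

rng = np.random.default_rng(1)
f = lambda x: float(np.sum(x ** 2))
mu, cov = np.array([0.5, 0.5]), 0.04 * np.eye(2)
theta, total_calls = f(mu), 0
for _ in range(20):
    X, calls = sample_below_threshold(f, mu, cov, theta, K=5, rng=rng)
    total_calls += calls
    if len(X) < 5:
        break                     # sampling budget exhausted
    mu = X.mean(axis=0)           # full QuADS uses the CMA-ES update here
    cov *= 0.64                   # crude stand-in for step-size adaptation
    theta = update_threshold(theta, [f(x) for x in X])
```

Because the quantile-based update keeps the threshold inside the bulk of the current distribution, the acceptance probability p stays moderate, which is the point of the rule described above.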

IV. NUMERICAL SIMULATIONS

A. Performance estimation by amplitude simulation
In this section, we compare the performance of QuADS, GAS, and CMA-ES. QuADS and GAS are simulated on a classical computer. A direct circuit simulation of these algorithms would need many ancilla qubits for performing G_{µ,Σ} and O_{f,θ}. To avoid this, we simulate the effect of each operation directly. More precisely, the simulation proceeds as follows. We first initialize the wave function as |ψ_0⟩ = G_{µ,Σ}|0⟩. Let the Grover iteration be G = (2|ψ_0⟩⟨ψ_0| − I) O_{f,θ}; then |ψ_n⟩ = G^n|ψ_0⟩ can be calculated through two steps. First, flip the signs of the amplitudes of the points whose function values are below the threshold θ: ψ(x) ← −ψ(x) for f(x) < θ. Subsequently, perform a reflection about the initial state: |ψ⟩ ← 2|ψ_0⟩⟨ψ_0|ψ⟩ − |ψ⟩. Using this strategy, we can simulate up to D = 3 and τ = 8 on a laptop.
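The two-step simulation described above can be written down directly. The following sketch applies the oracle sign flip and the reflection about the initial (here Gaussian) state on a small D = 1, τ = 6 grid; the toy function, threshold, and distribution parameters are our own illustrative choices.

```python
import numpy as np

def grover_step(psi, psi0, f_vals, theta):
    """One Grover iteration as simulated in the text: (i) flip signs where
    f < theta, (ii) reflect about the initial state |psi0>."""
    psi = np.where(f_vals < theta, -psi, psi)        # oracle O_{f,theta}
    return 2.0 * psi0 * np.vdot(psi0, psi) - psi     # 2|psi0><psi0| - I

# Toy setup: D = 1, tau = 6 grid on [0, 1); Gaussian initial state.
tau = 6
x = np.arange(2 ** tau) / 2 ** tau
f_vals = (x - 0.3) ** 2
dens = np.exp(-((x - 0.5) ** 2) / (2 * 0.2 ** 2))    # unnormalized N(0.5, 0.2^2)
psi0 = np.sqrt(dens / dens.sum())                    # amplitudes ~ sqrt(density)
theta = 0.001                                        # few grid points below it

p0 = float(np.sum(psi0[f_vals < theta] ** 2))        # initial success prob.
psi = psi0.copy()
for _ in range(3):
    psi = grover_step(psi, psi0, f_vals, theta)
p3 = float(np.sum(np.abs(psi[f_vals < theta]) ** 2))
```

A few iterations concentrate most of the amplitude on the sub-threshold region, illustrating why only O(1/√p) rotations are needed per sample.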
For the multiple test functions shown in Table I, we run each method 100 times. We chose these test functions for their multimodal characteristics and because they can be defined in arbitrary dimensions. We scaled the domain of the D-dimensional test functions to [0, 1]^D. Plots of the test functions are shown in Appendix C. In CMA-ES and QuADS, we uniformly sampled the initial mean µ_0 from the function domain for each trial. We initialized the covariance matrix C_0 as the identity matrix and set the initial step size σ_0 to 0.5. We used f(µ_0) as the initial threshold of QuADS. For GAS, we used f(x) as the initial threshold, where x is a sample from a uniform distribution on the domain. The termination conditions for CMA-ES and QuADS are local convergence or finding the global optimum. Finding the global optimum is defined as sampling a point in the ϵ = 0.01 neighborhood of the global optimum; note that the global optima of the test functions are known. We regard local convergence as achieved when σ becomes smaller than ϵ_σ = 0.01. For updating the normal distribution in QuADS and CMA-ES, we use the hyperparameters shown in Appendix A. For the threshold update in QuADS, we heuristically use α = 0.5 and q = 0.2. For the number of samples M and the number of selected samples K in CMA-ES, we use the default setting of Ref. [14], M = 4 + ⌊3 ln D⌋ and K = ⌊M/2⌋. For QuADS, we use the same K as CMA-ES. Finally, for amplitude amplification, we set the increase rate λ to 6/5, as suggested in Ref. [9].
First, we compare the methods by the expected number of oracle calls until global convergence is reached. The expected number of oracle calls in a single run of the algorithms can be written as

o_total = o_global + (1/p_global − 1) o_local,    (2)

where o_local and o_global are the average numbers of oracle calls used to reach local and global convergence, respectively, and p_global is the probability of finding the global optimum; the form follows since we need to run the algorithms 1/p_global times on average to achieve the global optimum. Fig. 3 shows o_total for each method. Additionally, the figure includes results for two well-established classical optimizers: particle swarm optimization (PSO) [23,24] and the basinhopping optimizer [25], for reference. The hyperparameters and implementations for the PSO and basinhopping optimizers are detailed in Appendix B. The data clearly show that QuADS consistently outperforms GAS in all cases. For all functions except the griewank function, GAS shows almost the same number of oracle calls. The increased oracle call count for the griewank function is due to its significant fluctuations around the global optimum (see Appendix C), which result in a smaller success region, i.e., the region where the distance from the global optimum is within ϵ_σ and the function value is below the threshold. Compared to the classical methods CMA-ES, PSO, and basinhopping, our approach demonstrates at least comparable performance and, notably, surpasses them on the rastrigin, schwefel, styblinski tang, alpine02, and wavy functions. These functions, other than the rastrigin function, are characterized by multiple big valleys within the search space, each containing promising local solutions, whose optimization is difficult for CMA-ES. On the other hand, for the ackley, alpine01, and griewank functions, characterized by numerous small local optima within a large valley, QuADS demonstrates performance nearly identical to that of CMA-ES in the three-dimensional setup. This structure makes optimization more straightforward for CMA-ES, leading to an increased p, which consequently reduces the influence of the quadratic acceleration. However, in scenarios with higher dimensions, where p is likely to be lower, the benefits of
quantum search would become more pronounced.
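Under the restart accounting above, the expected total can be sketched directly; the numbers below are illustrative, not results from the paper.

```python
def expected_total_oracle_calls(o_local, o_global, p_global):
    """Expected oracle calls until the global optimum is found: the
    algorithm must be run 1/p_global times on average; the failing runs
    cost o_local each and the final successful run costs o_global."""
    return (1.0 / p_global - 1.0) * o_local + o_global

# Example: 25 of 100 trials reached the global optimum (p_global = 0.25),
# with illustrative per-run costs o_local = 1e3 and o_global = 5e3.
o_total = expected_total_oracle_calls(o_local=1e3, o_global=5e3, p_global=0.25)
```

This makes explicit the trade-off discussed in the text: a method with cheap runs but small p_global (CMA-ES) can lose to one with more expensive runs but higher p_global (QuADS).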
In Figs. 4a and 4b, we show the optimization processes for the three-dimensional rastrigin and schwefel functions, for which QuADS performs significantly better than the other methods. Our results indicate that CMA-ES converges quickly but mostly fails to find a globally optimal solution; GAS finds a globally optimal solution in every trial, but its search is inefficient and requires a large number of oracle calls. We observe that QuADS is more likely to converge to a global solution than CMA-ES and performs its search more efficiently than GAS.

B. Performance estimation by classical algorithms
Next, we examine the performance of our method in higher-dimensional settings. As the dimensionality increases, the number of local optima grows exponentially, significantly complicating the optimization problem. Consequently, this leads to a reduction in p, enhancing the effectiveness of the quadratic acceleration inherent in quantum search. Therefore, it is anticipated that QuADS becomes increasingly advantageous under these
conditions. Since the size of the state vector in quantum simulation increases exponentially with the dimension, conducting simulations on a quantum simulator becomes difficult in more than 4 dimensions when τ = 8. To measure performance in higher dimensions, we replace the quantum sampling in QuADS and GAS with classical sampling and estimate the number of oracle calls required for the entire optimization in the quantum case from the optimization process of the corresponding classical algorithm.
For this, we utilize the optimal oracle call count of amplitude amplification, N_opt(p) = π/(4 arcsin √p) − 1/2, where p represents the probability of obtaining a correct sample in the classical sampling. When we set the number of rotations r in amplitude amplification to N_opt(p), then for θ satisfying sin²(θ) = p we find sin²((2N_opt(p) + 1)θ) = 1. This demonstrates that N_opt(p) indeed represents the optimal number of rotations. Thus, N_opt(p) serves as a lower bound on the expected oracle call count within each amplitude amplification step of GAS or QuADS. We estimate p for each step by classical sampling: if obtaining M_i samples under the threshold requires K_i function evaluations at the i-th iteration of the corresponding classical algorithm, we estimate the probability p_i as p̂_i = M_i/K_i. We then estimate a lower bound of o_local (o_global) as the average of Σ_i N_opt(p̂_i) over the optimization trials that achieved local (global) convergence. Then, a lower bound on o_total, which we denote as õ_lower, is estimated through Eq. (2) using the corresponding estimated lower bounds for o_local and o_global. Finally, o_total can be inferred from õ_lower by multiplying by a constant coefficient. This approach is valid because both o_total and õ_lower scale in a similar manner with respect to D. We conducted 100 individual optimizations to estimate õ_lower for each of the rastrigin, schwefel, styblinski tang, ackley, alpine01, and griewank functions up to the dimensions feasible for the classical algorithms. We chose these functions because QuADS surpasses GAS and CMA-ES greatly for the former two, while for the latter functions the differences between CMA-ES and QuADS are small (see Fig. 3).
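The quantities used in this estimation can be sketched as follows; N_opt here is the real-valued optimum implied by sin²((2r + 1)θ) = 1, and the sample counts are illustrative.

```python
import numpy as np

def n_opt(p):
    """Optimal (real-valued) rotation count of amplitude amplification,
    chosen so that (2*n_opt + 1) * arcsin(sqrt(p)) = pi/2."""
    return np.pi / (4.0 * np.arcsin(np.sqrt(p))) - 0.5

# Sanity check: rotating n_opt(p) times gives success probability 1.
p = 0.01
theta = np.arcsin(np.sqrt(p))
success = float(np.sin((2 * n_opt(p) + 1) * theta) ** 2)

# Per-step probability estimate from a classical run: M_i accepted
# samples out of K_i function evaluations at iteration i.
p_hat = 5 / 200                      # illustrative M_i = 5, K_i = 200
lower_bound_step = n_opt(p_hat)      # lower bound on that step's oracle calls
```

Summing `n_opt(p_hat)` over the iterations of a classical trial yields the per-trial lower bound used to build õ_lower.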
First, we checked the consistency of using õ_lower to estimate o_total. Figure 5 shows the relationship between õ_lower and o_total. Each data point in the plot corresponds to a specific function and dimension for which both õ_lower and o_total can be computed. It shows that o_total is correctly lower-bounded by õ_lower. Furthermore, the simulation shows that o_total is closely approximated by o_total = 2.3 õ_lower. Thus, we use õ_total := 2.3 õ_lower as an estimator for o_total of GAS and QuADS in the high-dimensional setting. A more detailed analysis of the consistency of this approximation in high dimensions is given in Appendix D. Figure 6 presents the estimated õ_total and o_total of GAS and QuADS and o_total of CMA-ES. We found that QuADS consistently outperforms the other methods in high-dimensional settings across all tested functions. Furthermore, compared to the other methods, QuADS exhibits a smaller increase in the number of oracle calls as the dimensionality increases. Although GAS exhibits a nearly constant slope regardless of the function, CMA-ES and QuADS show distinct slopes for different functions. Notably, for the rastrigin and schwefel functions, QuADS achieves quadratic acceleration compared to CMA-ES.

V. CONCLUSION
This paper proposes a method for continuous optimization on a quantum computer. In our method, we enhance GAS by integrating the efficient search capabilities of CMA-ES; through the CMA-ES updating rule, the search can efficiently exploit the structure of the function. Our numerical simulations show that our method surpasses both CMA-ES and GAS in terms of the expected oracle call count for finding the global optimum. QuADS also shows a smaller increase in oracle call count with respect to dimension, which suggests our method's significant potential for addressing high-dimensional problems commonly encountered in practical applications.
We list possible future studies in the following. First, although we neglected the cost of the initial state preparation, investigating the overall performance of QuADS including this cost is needed; we expect, however, that the state preparation cost can be negligible for complex objective functions f(x). Second, our approach can be extended to discrete variable cases; utilizing distributions in discrete optimization is generally referred to as "estimation of distribution algorithms" [26]. Third, a common challenge of continuous optimization on a quantum computer is to find practical oracles that are efficient in practice. Suitable candidates would be multimodal functions that are difficult to optimize on a classical computer; oracles that are more efficient when computed on a quantum computer, like the energy of materials, are also attractive candidates.

Appendix A: Hyperparameters

In this section, we describe the hyperparameters used in Section IV. For the CMA-ES update (Algorithm 4) in CMA-ES and QuADS, we employ the default hyperparameters of Ref. [14]. The hyperparameters for the CMA-ES update are enumerated in Table II.

Appendix B: Particle swarm optimization and basin-hopping optimizer

PSO [23,24] is a swarm-based optimizer that iteratively improves candidate solutions through communication among the swarm. PSO simulates the social behavior of birds within a flock: the position of each bird in the search space represents a potential solution, and its flight is guided by its own experience as well as the experience of the other birds. The update equations for the velocity and position of each particle can be written as

v_i ← w v_i + c_1 r_1 (p_i − x_i) + c_2 r_2 (p_g − x_i),
x_i ← x_i + v_i,

where v_i is the velocity and x_i is the current position of particle i, p_i is the best known position of particle i, p_g is the global best known position among all particles, w is the inertia weight, c_1 and c_2 are the personal and global coefficients, respectively, and r_1 and r_2 are random numbers between 0 and 1. For the implementation of PSO, the PySwarm library [27] was utilized, with the hyperparameters set as w = 0.9, c_1 = 0.5, and c_2 = 0.3.
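A minimal PSO sketch implementing these velocity/position updates is shown below. It is an illustrative reimplementation with the same w, c_1, c_2 values; the experiments in the text use the PySwarm library instead.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=20, iters=100,
                 w=0.9, c1=0.5, c2=0.3, seed=0):
    """Minimal particle swarm optimization sketch."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    p_best, p_val = x.copy(), np.array([f(xi) for xi in x])
    g_best = p_best[np.argmin(p_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # velocity update: inertia + personal pull + global pull
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)           # position update within bounds
        fx = np.array([f(xi) for xi in x])
        better = fx < p_val
        p_best[better], p_val[better] = x[better], fx[better]
        g_best = p_best[np.argmin(p_val)].copy()
    return g_best

best = pso_minimize(lambda z: float(np.sum((z - 0.3) ** 2)), ([0, 0], [1, 1]))
```

The clipping to the bounds mirrors the boundary handling used for the baselines in this appendix.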
Basin-hopping is a global optimization algorithm that enhances a local optimization method by integrating stochastic perturbations of the coordinates, enabling the algorithm to escape local minima. In our implementation, the basinhopping module from the SciPy optimization library [28] was employed. Because the default configuration does not handle bounds on the optimization domain, we modified the random perturbation process to ensure it does not exceed the defined bounds: if a perturbation exceeds the bounds, the solution is adjusted back to the nearest boundary. As a subroutine of the basinhopping algorithm, the L-BFGS-B method [29] was adopted.
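The bounded perturbation can be implemented as a custom step-taking callable; the sketch below reflects our reading of the modification. `BoundedStep` is an illustrative name, and an object of this form can be passed as the `take_step` argument of `scipy.optimize.basinhopping`.

```python
import numpy as np

class BoundedStep:
    """Random-displacement step that clips back to the nearest boundary."""
    def __init__(self, lo, hi, stepsize=0.25, seed=0):
        self.lo, self.hi = np.asarray(lo, float), np.asarray(hi, float)
        self.stepsize = stepsize
        self.rng = np.random.default_rng(seed)

    def __call__(self, x):
        x = x + self.rng.uniform(-self.stepsize, self.stepsize, size=len(x))
        return np.clip(x, self.lo, self.hi)   # adjust back to the boundary

step = BoundedStep([0.0, 0.0], [1.0, 1.0])
```

For example, `scipy.optimize.basinhopping(f, x0, take_step=step, minimizer_kwargs={"method": "L-BFGS-B"})` would then keep every perturbed iterate inside the domain.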
Appendix D: Consistency of the classical estimation

In Section IV B, we estimated the oracle counts of the quantum computation in high dimensions from the number of function calls obtained from classical computations, using the relation o_total = 2.3 õ_lower. Does this relation hold in higher dimensions as well? This section aims to demonstrate the validity of this conversion in higher dimensions. Specifically, we investigate the ratio of the expected number of oracle calls required for amplitude amplification to N_opt(p) when λ = 5/4, for each p.
First, let us derive the expression for the expected number of oracle calls in amplitude amplification when p is fixed. The maximum rotation count n_k in the k-th trial is given by n_k := ⌈λ^{k−1}⌉, and the acceptance probability a_r after performing r amplitude amplification steps is a_r := sin²((2r + 1) arcsin √p).
In the k-th trial, we uniformly select r from the integers 0, . . ., n_k − 1 and perform r amplitude amplification steps. Thus, the probability of rejection b_k in the k-th trial is

b_k = 1 − (1/n_k) Σ_{r=0}^{n_k−1} a_r.

Therefore, the expected total number of oracle calls S required until acceptance is

S(p) = Σ_{k=1}^{∞} (Π_{j=1}^{k−1} b_j) (n_k − 1)/2,

since the k-th trial is reached with probability Π_{j<k} b_j and draws (n_k − 1)/2 rotations on average. We compute this infinite series by a finite approximation with a sufficiently large K. Figure 9 shows the ratio S(p)/N_opt(p) for each p. This ratio converges to around 2.3 when p is small, which is consistent with Fig. 5.
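The finite-truncation evaluation of this series can be sketched as follows; the tolerance, trial cap, and size guard are our own illustrative choices.

```python
import numpy as np

def n_opt(p):
    """Optimal real-valued rotation count: (2 n_opt + 1) arcsin(sqrt(p)) = pi/2."""
    return np.pi / (4.0 * np.arcsin(np.sqrt(p))) - 0.5

def expected_calls(p, lam=5 / 4, tol=1e-12, max_trials=200):
    """Finite approximation of S(p): trial k is reached with probability
    prod_{j<k} b_j and draws (n_k - 1)/2 rotations on average."""
    theta = np.arcsin(np.sqrt(p))
    reach, S, k = 1.0, 0.0, 1
    while reach > tol and k <= max_trials:
        n_k = min(int(np.ceil(lam ** (k - 1))), 2_000_000)  # size guard
        a = np.sin((2 * np.arange(n_k) + 1) * theta) ** 2   # acceptance probs
        S += reach * (n_k - 1) / 2.0
        reach *= 1.0 - a.mean()                             # times b_k
        k += 1
    return S

ratio = expected_calls(1e-4) / n_opt(1e-4)   # an O(1) constant for small p
```

Under this accounting the ratio is a dimension-independent constant, which is what justifies converting the õ_lower estimates into o_total by a fixed multiplicative factor.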

FIG. 2. Example of the QuADS optimization process for the alpine02 function (described in Section III).
FIG. 1. Structure of the quantum circuit for the search part of QuADS. The G_{µ,Σ} gate represents the circuit for preparing a normal distribution. The dashed block is repeated r times, where r is the rotation number for amplitude amplification. In QuADS, we iteratively sample from this circuit using random r and subsequently update the parameters µ and Σ from the samples.

FIG. 3. Expected oracle call counts for finding the global optimum for each method (Eq. (2)) when D = 3. We utilized the bootstrap re-sampling method to generate the error bars in these figures: data are randomly selected from the pool of 100 simulations and Eq. (2) is recalculated for each of the selected data sets. We present the estimated 5th and 95th percentiles of the expected oracle evaluation count as confidence intervals. Confidence intervals outside the figure indicate that upper bounds cannot be estimated from the bootstrap.
FIG. 4. Optimization processes for the three-dimensional rastrigin function (4a) and schwefel function (4b). The upper figure presents the smallest function value obtained up to a given number of oracle calls in each trial. A circle marker at the end of a trial indicates that the trial found a global solution, and a cross marker indicates local convergence. The lower figure shows the proportion of trials that found a global solution (solid line) and that reached the termination condition (dashed line) up to the given number of oracle calls. The number of oracle calls required for local convergence can be larger than that for finding the global optimum. This arises from the algorithms' termination criteria: they terminate immediately upon locating the global optimum but may continue until convergence of σ if the global optimum remains unfound.

FIG. 5. Consistency verification between the classical estimation and the quantum simulation regarding the expected number of oracle calls across the functions in Fig. 6. The dotted line represents o_total = õ_lower. The classical estimation results (õ_lower) serve as a lower bound to the quantum simulation results (o_total), which is consistent with theoretical predictions. The quantum simulation results are approximately 2.3 times the classical estimations, as indicated by the solid line o_total = 2.3 õ_lower for both QuADS and GAS.

FIG. 6. Oracle call counts required to find the globally optimal solution, estimated by running the corresponding classical optimization process. The solid lines represent the regression lines of õ_total for GAS and QuADS and of o_total for CMA-ES, and the dotted line represents o_total for GAS and QuADS. The equation of each regression line is displayed in the same color, and r² indicates the coefficient of determination of the regression.

FIG. 8. The landscape of the one-dimensional griewank function. The red dots indicate the minimum points of the griewank function.
o_local and o_global are the average numbers of oracle calls used to reach local and global convergence, respectively, and p_global is the probability of finding the global optimum. Note that we can approximate it as p_global = n_global/100, where n_global is the number of simulations in which the global optimum was found. Then, the expected number of oracle calls until global convergence can be calculated as o_total = o_global + (1/p_global − 1) o_local (Eq. (2)).

TABLE I .
Function definitions and domains

TABLE II .
Hyperparameters setting for CMA-ES update