Improving Gradient Methods via Coordinate Transformations: Applications to Quantum Machine Learning

Machine learning algorithms, both in their classical and quantum versions, rely heavily on gradient-based optimization algorithms such as gradient descent. Their overall performance depends on the appearance of local minima and barren plateaus, which slow down calculations and lead to non-optimal solutions. In practice, this results in dramatic computational and energy costs for AI applications. In this paper we introduce a generic strategy to accelerate and improve the overall performance of such methods, alleviating the effect of barren plateaus and local minima. Our method is based on coordinate transformations, loosely akin to variational rotations, which add extra directions in parameter space that depend on the cost function itself and allow the configuration landscape to be explored more efficiently. We benchmark the validity of our method by boosting a number of quantum machine learning algorithms, obtaining a very significant improvement in their performance.


I. INTRODUCTION
Machine learning is revolutionizing society. We have witnessed it recently with the advent of game-changing applications such as ChatGPT, where Large Language Models [1] allow for unprecedented human-computer interaction. Such systems have a neural structure at their core, with weights that must be optimized so as to minimize some error cost function. This optimization is usually done via gradient methods such as gradient descent, stochastic gradient descent, and adaptive moment estimation, which are not free from problems such as barren plateaus and local minima. One important consequence is that current artificial intelligence (AI) systems suffer from long and complex training procedures, amounting to dramatic and unsustainable computational and energy costs [2]. And given the ever-increasing demand for AI systems, the situation is only getting worse: we are in dire need of more efficient machine learning.
Mathematically speaking, numerical algorithms used for optimization problems are mostly based on techniques that sweep the whole hyperspace of solutions. This landscape may present a tractable shape where the optimal solution can be found easily, as in convex problems. However, most interesting problems have an ill-defined landscape of solutions, with unpredictable shapes full of complexities that prevent analytical and numerical methods from acting on them properly. Gradient methods explore this landscape by computing local gradients of the cost function and updating the parameters according to the computed local slopes. More sophisticated optimization methods, such as the widely-used stochastic gradient descent and the Adam optimizer, are based on the same idea.
The work that we present here addresses the two main caveats of employing gradient methods for optimization purposes: local minima and barren plateaus. These features emerge inherently from the shape of the cost function. In fact, both limitations can be understood as coming from moving along certain directions in the landscape of solutions. Changes in the optimization variables are the result of changes in the cost function as the variables are modified. This is evidently an obstacle when we find a flat region in the landscape of the cost function.
In this paper we propose an alternative way to navigate the landscape of solutions, by introducing extra freedom in the directions of the parameter updates. This is done by using (i) changes of coordinates, and (ii) extra dimensions related to the cost itself. In our specific implementation, we consider changes to hyperspherical coordinates, as well as frame rotations. To do so, we treat the cost as an extra variable to be optimized, and implement a self-consistent variational method in which the cost axis not only represents the value to be optimized, but also serves as an extra dimension along which to escape from local minima and barren plateaus. Importantly, we stress that our approach aims not at lowering the cost function values per se, but specifically at moving out of barren plateaus.
The structure of this paper is as follows. In Sec. II we present our boosting methodology via coordinate transformations. In Sec. III we show a set of benchmarks to validate our idea, where we systematically improve the performance of a number of well-known quantum machine learning algorithms. Finally, in Sec. IV we present our conclusions and discuss future directions.

II. METHODOLOGY
An optimization problem can be stated as the minimization of a scalar real cost function f(⃗x) of n variables ⃗x ≡ (x_1, x_2, ..., x_n). Methods based on gradient descent update the parameters based mainly on the change of the function under a change ∆⃗x in the coordinate values, i.e.,

∆f ≈ ⃗∇f(⃗x_0) · ∆⃗x,    (1)

where ∆⃗x = ⃗x_1 − ⃗x_0 for some arbitrary ⃗x_1, and where ⃗x_0 is the point in parameter space where the gradient ⃗∇f(⃗x) of the function is computed. As is clear from this equation, a gradient close to zero corresponds to flat regions in the landscape of solutions, implying no updates of the parameters and therefore disabling further optimization steps, so that the optimization gets stuck.

FIG. 1: (a) The cost function f(x) presents a plateau as a function of the optimization variable x, so that gradient methods get stalled at point P due to a null gradient. (b) A change in the polar coordinates of point P leads to a point P′, which can then be "collapsed" back onto the landscape of the cost function, leading to point Pc. Optimization from point Pc is no longer stalled, since the gradient is non-zero. (c) A frame rotation leads to a description of point P with different cartesian coordinates fr(xr) and xr. In the new rotated frame, the gradient at P is non-zero, so that gradient methods are no longer stalled.
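As a minimal numerical illustration of this stalling (a toy example of our own, not one of the benchmarks of Sec. III), plain gradient descent makes essentially no progress on a one-dimensional plateau:

```python
import numpy as np

# Illustrative 1-D cost with a plateau: f(x) = tanh(x)^2 has a gradient
# that vanishes exponentially for large |x|.
f = lambda x: np.tanh(x) ** 2
grad = lambda x: 2.0 * np.tanh(x) / np.cosh(x) ** 2

def gradient_descent(grad, x0, lr=0.1, steps=100):
    # Plain update x <- x - lr * grad(x): it stalls wherever grad ~ 0.
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

x_final = gradient_descent(grad, x0=8.0)
print(abs(x_final - 8.0) < 1e-4)  # True: stuck on the plateau
print(f(x_final) > 0.99)          # True: cost still far from the minimum f(0) = 0
```

After 100 steps the iterate has moved by less than 10⁻⁴, even though the minimum at x = 0 is far away.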
To avoid that, we propose a new method based on a change of coordinates. Our idea can be well understood by considering the optimization of a one-dimensional cost function, represented by the landscape in Fig. 1. In Fig. 1a, the cost function has a plateau and therefore the gradient is zero, so that optimization is stalled. However, in Fig. 1b we see that we can avoid this by changing the polar coordinates of point P in the landscape, i.e., in the two-dimensional plane formed by the axis corresponding to the coordinate x to be optimized and the cost function f(x). The new point P′ can then be "collapsed" back onto the landscape of the cost function, so that we can escape from the null-gradient region. Alternatively, we can also rotate the frame, as in Fig. 1c. In the new rotated frame, the rotated cost function f_r(x_r) does not present a null gradient with respect to the rotated parameter x_r, so that moving by gradient methods is again possible in the new frame.
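The collapse of Fig. 1b can be sketched numerically in one dimension. The cost f(x) = tanh(x)² and the angular step size below are illustrative choices of ours, not the paper's settings:

```python
import numpy as np

f = lambda x: np.tanh(x) ** 2  # illustrative plateau cost

def polar_collapse_step(x, dtheta=0.05):
    """Sketch of Fig. 1b: lift P = (x, f(x)) to polar coordinates in the
    (x, f) plane, move along the angular direction (a direction unavailable
    to plain gradient descent), then collapse back onto the landscape at
    Pc = (x_new, f(x_new))."""
    r = np.hypot(x, f(x))
    theta = np.arctan2(f(x), x)
    x_new = r * np.cos(theta + dtheta)
    return x_new

x0 = 8.0                     # on the plateau, f'(x0) ~ 0
x1 = polar_collapse_step(x0)
print(x0 - x1 > 0.01)  # True: a finite move despite the null gradient
```

One angular step displaces the iterate by roughly 0.06, several orders of magnitude more than the stalled gradient update at the same point.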
While the above is a simplified picture, it captures the basic ingredients of our algorithm. Long story short: if the problem coordinates do not work, we change them, including the cost as an extra coordinate. This change of coordinates can be a rotation, or a more generic one.
Let us now explain how to implement our method in full generality. As sketched above, there are two versions of it: changing to hyperspherical coordinates, or rotating the frame. As we shall see, both approaches are similar, though not equivalent. Let us start with the change to hyperspherical coordinates.

A. Option 1: hyperspherical coordinates
Consider the cost function f(⃗x) and the original n cartesian coordinates ⃗x. In the landscape space, this defines a point P in the (n+1)-dimensional space described by the coordinates and the cost function:

P = (x_1, x_2, ..., x_n, f(⃗x)).    (2)

The procedure to follow in one iteration step is as follows:

1. Make a change of coordinates. Without loss of generality, we will use (n+1)-dimensional hyperspherical coordinates. Therefore, make the change

(x_1, ..., x_n, f(⃗x)) → {⃗θ, r},    (3)

with {⃗θ, r} being the n + 1 hyperspherical coordinates of point P.
2. Perform gradient descent on the new set {⃗θ, r} with reference cost value f(⃗x). In practice, we simply update the variables in the hyperspherical coordinates using changes of the cost function in the original cartesian description (the one we want to minimize) each time we update these parameters. We can do so by projecting each point in the hyperspherical description onto the cartesian cost function to obtain that reference value. This translates into an iterative switching process from one description to the other, in order to be able to move in the hyperspherical coordinates while the changes in the original cost function are the ones driving the parameter updates. This procedure ends up giving us a new point in the (n+1)-dimensional space, defined by the change

{⃗θ, r} → {⃗θ′, r′}    (4)

after all the parameters have been updated.

3. Once the gradient descent is completed, we make a final transformation back to cartesian coordinates, {⃗θ′, r′} → (⃗x′, f(⃗x′)), retrieving the point Pc, which is just defined by the projection of point P′ onto the original cost function:

Pc = (⃗x′, f(⃗x′)),    (5)

where c stands for "collapsed".
These steps are repeated until convergence, as in usual methods based on gradient descent, with the difference that the gradient is computed via the change of coordinates. Additionally, this change of coordinates can be implemented only when we fall into a local minimum or barren plateau, in order to move out of the region, or one can alternate between the two descriptions during the optimization.
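The iteration above can be sketched in code. This is a minimal illustration under our own assumptions: the coordinate maps, the finite-difference gradients, and all helper names are our choices, not the paper's implementation (which builds on PennyLane):

```python
import numpy as np

def to_spherical(p):
    """(n+1)-dim cartesian point -> hyperspherical coordinates (r, thetas)."""
    r = np.linalg.norm(p)
    th = np.zeros(len(p) - 1)
    for i in range(len(p) - 2):
        th[i] = np.arccos(p[i] / np.linalg.norm(p[i:]))
    th[-1] = np.arctan2(p[-1], p[-2])  # last angle keeps the sign information
    return r, th

def to_cartesian(r, th):
    """Inverse map: (r, thetas) -> (n+1)-dim cartesian point."""
    p = np.zeros(len(th) + 1)
    s = r
    for i in range(len(th)):
        p[i] = s * np.cos(th[i])
        s *= np.sin(th[i])
    p[-1] = s
    return p

def hyperspherical_step(f, x, lr=0.01, eps=1e-6):
    """One iteration of Sec. II A: lift P = (x, f(x)) to hyperspherical
    coordinates, take a gradient step there (the driving cost is always
    re-evaluated by collapsing back onto the original landscape), then
    return the collapsed parameter point."""
    p = np.append(x, f(x))
    r, th = to_spherical(p)

    def cost(r_, th_):
        # collapse: back to cartesian, drop the cost axis, evaluate f there
        return f(to_cartesian(r_, th_)[:-1])

    # finite-difference gradient in the (r, theta) description
    g_r = (cost(r + eps, th) - cost(r - eps, th)) / (2 * eps)
    g_th = np.zeros_like(th)
    for i in range(len(th)):
        d = np.zeros_like(th)
        d[i] = eps
        g_th[i] = (cost(r, th + d) - cost(r, th - d)) / (2 * eps)

    return to_cartesian(r - lr * g_r, th - lr * g_th)[:-1]

# One step on a simple quadratic cost lowers its value:
f = lambda x: float(np.sum(x ** 2))
x0 = np.array([1.0, 2.0])
x1 = hyperspherical_step(f, x0)
print(f(x1) < f(x0))  # True
```

Note that the learning rate acts on angles and on a radius, quantities whose scales differ from those of the original parameters, which connects with the learning-rate discussion below in this section.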

B. Option 2: plane rotations
An alternative to the change to hyperspherical coordinates is to rotate the frame. Consider again the cost function f(⃗x) and the original n cartesian coordinates ⃗x. Following Eq. (2), this generates a point P in the (n+1)-dimensional space defined by the coordinates and the cost function. This point can also be described as a vector ⃗P in the considered frame. The procedure to follow in one iteration step is then as follows:

1. Make a change of coordinates in the frame. Without loss of generality, we will use an (n+1)-dimensional rotation of the axes,

⃗P_r = R ⃗P,    (6)
with R the (n+1)-dimensional axis rotation, and the subscript r standing for "rotated". A convenient simplification is to implement a 2-dimensional rotation of the plane formed by some individual coordinate x_i and the function f(⃗x), i.e.,

(x_{i,r}, f_r) = R_2 (x_i, f(⃗x)),    (7)

where the subindex r indicates the rotated variables, and with R_2 ∈ SO(2) the 2 × 2 matrix corresponding to a rotation of the (x_i, f(⃗x)) plane. This is a good choice if we suspect that there may be a barren plateau or a local minimum in the direction of coordinate x_i. Additionally, the rotation angle need not be fixed, so that it can work as an extra hyperparameter in the optimization, which has been the case for most of our results. For simplicity, and without loss of generality, from now on we assume such a 2d rotation.
2. In the rotated frame, perform a procedure similar to the one explained in point 2 of subsection II A to compute the gradient descent. Now, instead of switching from one description to the other by means of projections from the updated parameters onto the original cost function, we simply move from one to the other by means of relative rotation matrices.
This leads to a final point, after all the updates have been performed, given by the inverse rotation R_2^{−1}, recovering a new updated cost value f_u(⃗x_u):

(x_{i,u}, f_u(⃗x_u)) = R_2^{−1} (x_{i,r}′, f_r′).    (8)

Notice that we use the subindex u to make explicit the fact that now the change of coordinates does not come from a projection, so there is no collapsed cost function; the new point is instead obtained directly from relative matrix rotations.
Again, these steps are repeated until convergence, as in usual methods based on gradient descent. Furthermore, one could implement the rotations only when falling into local minima or barren plateaus, in order to move out of the region, or alternate between the two frames during the optimization.
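A hedged one-dimensional sketch of a single rotated-frame step follows. The slope formula is obtained by rotating the landscape curve (x, f(x)) by an angle α; the angle, the learning rate, and the first-order back-rotation are illustrative choices of ours:

```python
import numpy as np

def rotated_step(f, fprime, x, alpha=0.3, lr=0.5):
    """One gradient step taken in a frame rotated by angle alpha in the
    (x, f) plane. Along the landscape curve (x, f(x)), the slope seen in
    the rotated frame is
        df_r/dx_r = (sin a + cos a * f'(x)) / (cos a - sin a * f'(x)),
    which equals tan(a) != 0 even on a plateau where f'(x) = 0."""
    c, s = np.cos(alpha), np.sin(alpha)
    slope_r = (s + c * fprime(x)) / (c - s * fprime(x))
    dx_r = -lr * slope_r                   # descent step on the rotated axis
    return x + dx_r / (c - s * fprime(x))  # rotate the update back to x

f = lambda x: np.tanh(x) ** 2
fp = lambda x: 2.0 * np.tanh(x) / np.cosh(x) ** 2

x_new = rotated_step(f, fp, 8.0)  # plain GD would move by ~1e-6 here
print(8.0 - x_new > 0.05)  # True: an O(0.1) move off the plateau
```

The sign of the move is set by the rotation angle, which is consistent with treating the angle as an extra hyperparameter, as described above.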
It is also worth noticing that placing the rotation point elsewhere does not affect the performance of the method. All in all, computing the gradients with respect to any other point in parameter space, while retaining the description of the parameters with respect to the original one, should not give any improvement either.
The two methods presented here make use of a higher-dimensional space, where the cost function is included as an extra dimension and is optimized self-consistently. One may be tempted to say that all the advantage of our method comes from extending the variables to a higher-dimensional space. While tempting, this is actually not true. We have also tested implementations where we just extended the coordinate space, without adding the cost function as an extra coordinate, in a way similar to kernel methods. In such implementations we have seen no significant advantage in convergence, in stark contrast with the two methods presented above. We therefore conclude that including the cost function as an extra coordinate to be optimized self-consistently is in fact fundamental.

We want to remark that, physically, we are not introducing any new dimension in the parameter space. The parameter landscape is built upon a number of variables which are combined in a certain manner to define a cost function, so that evaluating these parameters at a given point gives some cost value. We are proposing to use that same cost function, which has support on a given parameter space, to add one more dimension through which we can navigate to optimize the parameter search.
Concerning the learning rate: upon applying coordinate transformations to escape these plateaus, the optimization landscape undergoes a significant change. The scale of the gradients can be substantially different in the new coordinates, which implies that the previously optimal learning rate may no longer be effective. A higher learning rate can, in theory, introduce a greater degree of perturbation in the parameters, facilitating the escape from a plateau. However, this must be balanced carefully; too high a learning rate can lead to overshooting minima or increased oscillations in the training process, potentially destabilizing convergence. This is also aligned with the fact that the learning rate is a problem-dependent parameter: it usually requires a preliminary grid search to find the optimal value for each use case. Here we are altering the optimization process itself so, in the same way the learning rate is problem-dependent, one can expect that any modification of the optimization process will require further tuning of important hyperparameters such as the learning rate.
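The preliminary grid search mentioned above can be sketched generically as follows (a standard recipe, not the paper's exact tuning protocol; the candidate rates and the toy cost are our own illustrative choices):

```python
import numpy as np

def grid_search_lr(f, grad, x0, lrs, steps=200):
    """Return the learning rate from `lrs` that reaches the lowest final
    cost after a fixed budget of plain gradient-descent steps."""
    def final_cost(lr):
        x = x0
        for _ in range(steps):
            x = x - lr * grad(x)
        return f(x)
    return min(lrs, key=final_cost)

# Toy quadratic cost: among the candidates, the largest stable rate wins.
f = lambda x: (x - 3.0) ** 2
grad = lambda x: 2.0 * (x - 3.0)
best = grid_search_lr(f, grad, x0=0.0, lrs=[1e-3, 1e-2, 1e-1])
print(best)  # 0.1
```

After a coordinate transformation, the same search would be rerun in the new description, since the gradient scales (angles and radius, or rotated axes) differ from those of the original parameters.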
In what follows we show the benefits of our approach by implementing a number of benchmarks.

III. RESULTS
For the purpose of implementation and testing, a set of quantum machine learning algorithms has been used for comparison, with the original implementations done via PennyLane [8]. Some of these algorithms introduce new methods for solving optimization processes, as is the case of quantum natural gradient descent [9] or the construction of local cost functions in [3]. Other methods propose optimization algorithms for solving specific tasks.
Here we show how our algorithm actually boosts the performance of all the tested methods. For the purpose of the analysis, we have revisited 18 different algorithms, obtaining significant advantages (i.e., faster convergence or lower cost function values) for 15 of them. Let us also stress that this improvement is obtained against optimal, fine-tuned implementations of the original algorithms in PennyLane, not merely naive ones. This also includes using the most efficient optimizer for each case, which happens to be mostly the Adam optimizer.
Let us start by showing in Table I a summary of our results for the method using the change to hyperspherical coordinates. As we see in the table, the change to hyperspherical coordinates allows for substantially faster convergence in five quantum machine learning algorithms, with respect to their original optimal implementations. Let us now describe four of these examples.
In Ref. [3], it is shown that local cost functions with a shallow quantum circuit improve the trainability of variational quantum algorithms. Global cost functions show barren plateaus that cannot be overcome by usual means, whereas local cost functions improve the convergence. In this case, we implemented simulations for a set of 50 different initial configurations to solve a certain optimization problem. The original global approach showed a 30% success rate in overcoming barren plateaus, with more than 100 iterations on average for convergence. Local cost functions showed a success rate of 98% with an average number of iterations equal to 29. Using our approach with hyperspherical coordinates, we reach a success rate of 100% with 4.7 iterations on average, implying a very significant improvement, see Fig. 2.

Next, Ref. [5] considered the idea of quantum signal processing in order to perform an optimization process for function fitting. Again, we have observed that our implementation using hyperspherical coordinates provides a speed-up in optimization steps, with faster convergence to lower values of the cost. In particular, to reach a limiting cost value of 10⁻⁵, we were able to reduce the number of iterations for convergence from 90 in the original implementation to 54 in ours, that is, a reduction of 40% in iterations, see Fig. 3.

Additionally, in Ref. [6] the authors proposed a quantum neural network based on a variational quantum circuit to perform classification of a data set. Here we plot in Fig. 4 the convergence of the classification procedure for a specific quantum circuit configuration (namely, a fixed ansatz with the same number of qubits and layers) to classify the Iris dataset. The plot shows that our approach with hyperspherical coordinates significantly improves the convergence stability of the algorithm while reducing the number of steps needed to achieve convergence, both in the accuracy of the classification as well as in the cost function to be optimized. For instance, to achieve a similar cost value of ≈ 0.23, we reduce the number of steps from roughly 100 to roughly 30, implying a speedup of 70% with the new implementation. From this example we also see that our new implementation provides a smoother convergence. Avoiding randomness and drastic variations in the optimization curve can be a concern for certain applications, and we understand this smoothness as a sign that our implementation better fits the problem landscape.
As a last example of the hyperspherical approach, we consider the algorithm in Ref. [7]. Here the authors proposed a generalization of the variational quantum eigensolver to compute thermal states via a hybrid optimization procedure. Again, as shown in Fig. 5, we can reduce the number of optimization steps needed to find the thermal states. In particular, we reach convergence in almost half the number of optimization steps, reducing from 103 in the original implementation to just 51 in ours.
Next, let us consider our second approach, namely the frame rotations. Our results are summarized in Tables II and III, showing examples of improvement in the number of iterations and in the cost function value, respectively. The method of frame rotations allows for substantially faster convergence in ten quantum machine learning algorithms and lower cost functions in three, with respect to their original optimal implementations. Let us briefly sketch some of these examples.
In Ref. [14], the authors proposed a novel algorithm to solve linear systems of equations using a hybrid quantum-classical approach suitable for near-term quantum devices. We have compared this approach with two different configurations of our rotation method, where rotations were implemented for different planes. As shown in Fig. 6, depending on the rotated plane we can achieve faster convergence as well as lower values of the cost function, as compared to the original approach.
Next, the work in Ref. [13] describes an optimization process built to test the expressive power of specific quantum circuits and their ability to fit certain functions by learning the features defining their Fourier series. Our comparison is shown in Fig. 7. Even if the final cost is tiny in both cases, we can observe in the plot that the cost function computed with our approach is almost always lower than the one computed with the original method. Additionally, in Ref. [4] the authors use a variational quantum eigensolver to find the lowest-energy state of the hydrogen molecule, directly fixing three different spin sectors. In Fig. 8 we plot the results for the case of total spin S = 1, showing a significant advantage of our method both in smoothness and in the number of iterations needed to converge to a certain cost value. In order to find the ground state of the molecule, our implementation needs 34 iterations, in sharp contrast to the 54 needed by the original method. Also, in Ref. [11] the authors proposed to substitute the original Hamiltonian describing the quantum system/cost function in variational quantum algorithms with another one, called the "gadget" Hamiltonian. The idea is that the gadget Hamiltonian is built from local interactions, so that the landscape of solutions is less likely to present local minima and barren plateaus during the optimization. In this case, our implementation is able to reduce the number of optimization steps from 140 in the original method to 110 by using frame rotations, as shown in Fig. 9.
Last but not least, in Ref. [12] the authors introduced a variational quantum optimizer to tackle optimization problems with continuous variables, by encoding these variables in the 3 continuous parameters within each qubit's Bloch sphere. In Fig. 10 we show the error in computing the integral of the function e^{−x²}, implemented by encoding the solution in Fourier modes. We see that the new approach significantly reduces the number of optimization steps for an arbitrary error bound.

IV. CONCLUSIONS AND FURTHER WORK
In this paper we have proposed a boosting strategy for machine learning methods based on gradient descent, and benchmarked it on different quantum machine learning algorithms. Our idea is based on using rotation matrices involving the parameters of the hyperspace of solutions and the cost or, alternatively, on the use of hyperspherical coordinates inside the hyperspace of solutions in which the cost axis is also included. The procedure allows us to alleviate the effect of many local minima and barren plateaus, thus accelerating the convergence of machine learning methods.
The procedure discussed in this paper is completely general, and can be applied to both classical and quantum machine learning algorithms. According to our benchmarks, the improvement in convergence, stability, and efficiency can be very significant. As such, we believe that our boosting procedure has huge potential to improve the performance of AI systems, in turn reducing their computational cost and energy consumption.
There are different lines of work that can be explored in the future. For instance, it would be interesting to test different changes of coordinates, even dynamical and non-linear ones. Moreover, the performance of the boosting should be further tested in the context of complex deep learning algorithms such as convolutional neural networks, transformers, and generative models, which are computationally quite expensive. In addition, it would also be interesting to explore the performance of this boosting technique in the context of Tensor Network methods [15].

FIG. 3:
FIG. 2: [Color online] Improvement in the average number of iterations as a function of the learning rate, for the algorithm in Ref. [3] to alleviate barren plateaus with local cost functions. Comparison between the PennyLane and the Change of Coordinates (CC) implementations.

FIG. 5:
FIG. 4: [Color online] Convergence of the cost function and accuracy versus number of optimization steps, in the variational quantum classifier from Ref. [6] for the Iris dataset. Comparison between the PennyLane and the Change of Coordinates (CC) implementations.

FIG. 9:
FIG. 8: [Color online] Convergence of the cost function versus number of optimization steps, in the variational quantum eigensolver for different spin sectors from Ref. [4]. Comparison between the PennyLane and the rotation implementations.

FIG. 10: [Color online] Convergence of the cost function versus number of optimization steps, for the variational quantum continuous optimizer from Ref. [12]. Comparison between the PennyLane and the rotation implementations.

TABLE I: Advantage in the number of iterations for a number of algorithms, with respect to the original implementation using Xanadu's PennyLane, where the new implementation uses the change to hyperspherical coordinates. Numerical data are the average numbers of iterations.

TABLE II: Advantage in the number of iterations for a number of algorithms, with respect to the original implementation using Xanadu's PennyLane, where the new implementation uses frame rotations. Numerical data are the average numbers of iterations.

TABLE III: Advantage in cost function value for a number of algorithms, with respect to the original implementation using Xanadu's PennyLane, where the new implementation uses frame rotations. Numerical data are the final cost function values.