Guided quantum walk

We utilize the theory of local amplitude transfer (LAT) to gain insights into quantum walks (QWs) and quantum annealing (QA) beyond the adiabatic theorem. By representing the eigenspace of the problem Hamiltonian as a hypercube graph, we demonstrate that probability amplitude traverses the search space through a series of local Rabi oscillations. We argue that the amplitude movement can be systematically guided towards the ground state using a time-dependent hopping rate based solely on the problem's energy spectrum. Building upon these insights, we extend the concept of multi-stage QW by introducing the guided quantum walk (GQW) as a bridge between QW-like and QA-like procedures. We assess the performance of the GQW on exact cover, traveling salesperson, and garden optimization problems with 9 to 30 qubits. Our results provide evidence for the existence of optimal annealing schedules, beyond the requirement of adiabatic time evolutions. These schedules might be capable of solving large-scale combinatorial optimization problems within evolution times that scale linearly in the problem size.


I. INTRODUCTION
Combinatorial optimization is a fundamental problem in computer science that has a wide range of important applications in finance [1], scheduling [2], machine learning [3,4], database search [5], computational biology [6], and operations research [7]. However, finding optimal solutions to such problems can be challenging and computationally expensive, which often makes them intractable for classical computers today. Recent advances in quantum hardware are raising the expectations for demonstrating useful quantum computation in the upcoming years. In particular, solving large-scale combinatorial optimization problems is considered one of the great application areas of quantum computation, driving the need for quantum optimization algorithms suitable for near-term quantum devices [8].
Quantum walks (QW) [5,9-12] and quantum annealing (QA) [13-22] have emerged as two promising candidates for continuous-time quantum optimization algorithms in this context. QWs, introduced by Aharonov et al. [9], model the search space as a graph and govern the walker's transitions between vertices using a time-independent Hamiltonian. On the other hand, QA employs a time-dependent Hamiltonian to adiabatically evolve from an initial Hamiltonian to a problem Hamiltonian. Both algorithms have been extensively studied for various problems with tens of qubits, including Sherrington-Kirkpatrick spin glasses [11,23], Max-Cut [12], 2-SAT [24-28] and exact cover [2] problem instances. While these algorithms exhibit distinct dynamics, Morley et al. [29] have argued that they can be seen as extreme cases of annealing schedules in the context of search problems. However, the dynamics occurring in the intermediate region between QA and QW for combinatorial optimization problems have yet to be fully explored. We argue that these intermediate evolutions, which go beyond the scope of the adiabatic theorem, are a very promising regime for effective quantum computation.
In this paper, we utilize the theory of local amplitude transfer (LAT) to investigate continuous-time quantum algorithms. LAT theory focuses on the local energy structure of the problem Hamiltonian on the hypercube graph. It provides insights into the movement of probability amplitude between individual elements of the search space through a series of local Rabi oscillations.
Building upon these insights, we extend the concept of multi-stage QW [11,12] by introducing the guided quantum walk (GQW). The GQW combines multiple QWs through a time-dependent hopping rate, effectively guiding the transfer of probability amplitude throughout the graph. This approach relates to the problem of finding optimal annealing schedules [30-32], but goes beyond the requirement of adiabatic time evolutions, placing the GQW in between QW-like and QA-like procedures.
To evaluate the performance of the GQW, we numerically simulate its application to exact cover (EC) [2,33,34], traveling salesperson (TSP) [7,35-38], and garden optimization (GO) [39] problems using the JUWELS Booster supercomputer [40]. We extensively study the GQW on problem instances with up to 30 qubits. Our results provide evidence for the existence of optimal annealing schedules capable of solving large-scale combinatorial optimization problems with evolution times that scale linearly in the problem size.
The paper is organized as follows: In Sec. II, we introduce the types of combinatorial optimization problems that are used in our research. Section III provides a review of QWs and explores their dynamics on search and optimization problems using the LAT theory. We discuss how probability amplitude can be effectively guided through the hypercube graph, leading to the development of the GQW. Furthermore, we examine the relationship between the GQW, QA and QWs at different evolution times. In Sec. IV, we present a comprehensive performance analysis of the GQW. Finally, in Sec. V, we summarize our findings and their implications for future research.

II. COMBINATORIAL OPTIMIZATION PROBLEMS
This section presents the combinatorial optimization problems studied in our research. First, we describe the problem types and their encoding into a quantum setting in Sec. II A. Subsequently, in Sec. II B, we introduce a benchmarking metric, the solution quality, which ensures fair comparisons of quantum optimization algorithms across different problem types and sizes.

A. Definition
A broad class of real-world problems can be framed as combinatorial optimization problems [7]. These are problems defined on N-bit binary strings z = z_{N−1} . . . z_0 ∈ {0, 1}^N, with the objective of finding a string z_opt that minimizes a given classical cost function C(z) : {0, 1}^N → R≥0. A natural way to express C in a quantum setting is to encode it into the energy spectrum of the computational basis states |z⟩ of a quantum cost Hamiltonian

Ĥ_C = Σ_z C(z) |z⟩⟨z|. (1)

We consider the case in which C(z) consists only of linear and quadratic terms, C(z) = Σ_{ij} z_i Q_{ij} z_j. Such problems are known as quadratic unconstrained binary optimization (QUBO) problems, with {Q_{ij}} denoting the QUBO coefficients. For these problems, Ĥ_C can be written in the form of an Ising Hamiltonian by substituting z_i → (1 − σ̂^z_i)/2, where σ̂^z_i denotes the Pauli-Z operator applied to the i-th qubit with the identity operator acting on the remaining qubits. One obtains an Ising Hamiltonian composed of single-body σ̂^z_i and two-body σ̂^z_i σ̂^z_j terms (plus an irrelevant constant shift), with the coefficients h_i = Σ_j (Q_{ij} + Q_{ji})/2 and J_{ij} = Q_{ij}/4 representing the optimization problem. In our study, we investigate three types of combinatorial optimization problems, which can be expressed in Ising form and represent two categories of cost functions.
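As an illustration of the coefficient mapping above, the following Python sketch (our code, not part of the original study; the function name is ours) computes the Ising coefficients h_i and J_ij from a QUBO matrix Q:

```python
import numpy as np

def qubo_to_ising(Q):
    """Map QUBO coefficients Q_ij to Ising coefficients
    h_i = sum_j (Q_ij + Q_ji)/2 and J_ij = Q_ij/4, following the
    substitution z_i -> (1 - sigma^z_i)/2 described in the text."""
    Q = np.asarray(Q, dtype=float)
    h = (Q + Q.T).sum(axis=1) / 2.0   # linear coefficients
    J = Q / 4.0                        # quadratic coefficients
    return h, J

# Example: a small random QUBO instance (illustrative only)
rng = np.random.default_rng(0)
Q = rng.uniform(-1, 1, size=(4, 4))
h, J = qubo_to_ising(Q)
```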
The first category (constraint-only) concerns cost functions consisting entirely of constraining terms, meaning that all states describing valid solutions to the optimization problem (called valid states henceforth) are assigned the same cost value. Here, we focus on EC problem instances with N ∈ {12, 15, 18, 21, 24, 30}. Generically, these problem instances exhibit numerous distinct energy levels with low degeneracies and a non-degenerate ground state.
The second category (constraint+optimization) includes cost functions that involve both constraining and optimization terms, such that the set of valid states spans multiple energy levels. We consider TSP and GO problem instances with N ∈ {9, 16, 25} and N ∈ {12, 15, 18, 21, 24, 30}, respectively. It is worth noting that the GO instances generally exhibit higher degeneracies among their energy levels compared to the TSP instances.
The explicit cost functions for the three types of combinatorial optimization problems can be found in App. A. For each problem type and size, we investigate 10 randomly generated problem instances. Note that all cost functions have been rescaled and shifted, such that C(z_opt) = 0 and max_z C(z) = 100.

B. Solution quality
Quantum optimization algorithms are designed to find the optimal solution to a given combinatorial optimization problem (Eq. (1)) with high success probability P_zopt = |⟨z_opt|Ψ⟩|², where |Ψ⟩ = Σ_z ψ_z |z⟩ denotes the final quantum state. However, in many cases, approximate solutions, where not solely the solution state but also other valid states with a slightly larger cost function value are obtained with high probability, are also of interest, especially if they can be found significantly faster. Since P_zopt does not take these approximate solutions into account, a common approach is to evaluate the performance of quantum optimization algorithms based on the energy expectation value E_Ψ = ⟨Ψ| Ĥ_C |Ψ⟩, often in the form of the approximation ratio

r(Ψ) = E_Ψ / E_max, (3)

where E_max = max_z E_z. Note that smaller approximation ratios are supposed to represent better solutions.
We emphasize, however, that the approximation ratio r in Eq. (3) is often not able to capture the 'quality' of the produced quantum state |Ψ⟩, that is, the distribution of the measurement probabilities w.r.t. the energies of the valid states. This is because r considers not only the set of valid states (which we are generally interested in) but also the set of invalid states (i.e., states that violate at least one constraint and may shift E_max arbitrarily). For the problems under investigation, the latter correspond to the majority of states, covering ≥ 95% of the total energy range. Hence, comparing the approximation ratio between different quantum optimization algorithms does not necessarily compare their ability to produce 'good' approximate solutions, as a final state with a large value of r might still provide valid states near the optimal solution with higher probability than a different state of smaller r value.
In order to address this issue of the approximation ratio, we propose the use of a theoretical benchmarking metric termed the solution quality,

S_q = Σ_{z ∈ valid states} P_z · [1 − r_valid(z)], (4)

where P_z = |⟨Ψ|z⟩|² represents the measurement probability of the state |z⟩, and r_valid(z) = E_z / max_{k ∈ valid states}(E_k) is used to give a weight among all valid states in the energy spectrum (note that |z_opt⟩ is at E_zopt = 0 by definition). S_q is designed to capture both the characteristics of the success probability and the approximation ratio, restricted to the set of valid states. This provides a theoretical benchmarking metric that is comparable across different problem instances by focusing on the practicality of the solutions obtained. Note that S_q ∈ [0, 1], with S_q = 1 corresponding to the solution state |Ψ⟩ = |z_opt⟩, and S_q = 0 indicating a final state |Ψ⟩ that consists solely of invalid states and highest-energy valid states with r_valid = 1. The latter are effectively excluded from increasing the solution quality S_q, because obtaining any valid solution to the constraint+optimization problems (TSP and GO) can be done in polynomial time. Hence, we do not consider an information gain from these states. Furthermore, in the case of constraint-only problem instances (EC), we set r_valid(z) = 0, such that S_q = P_zopt.
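A minimal Python sketch of the solution-quality computation (our reconstruction, assuming the weighting P_z · [1 − r_valid(z)] over valid states, which reproduces the limiting cases stated in the text; function and variable names are illustrative):

```python
import numpy as np

def solution_quality(P, E, valid):
    """Solution quality restricted to valid states:
    S_q = sum_{z in valid} P_z * (1 - r_valid(z)),
    with r_valid(z) = E_z / max_{k in valid} E_k and E_{z_opt} = 0.
    For constraint-only problems all valid states sit at E = 0, so the
    weight is 1 and S_q reduces to the success probability."""
    P, E = np.asarray(P, float), np.asarray(E, float)
    E_valid_max = max(E[z] for z in valid)
    if E_valid_max == 0:   # constraint-only case: r_valid = 0
        return float(sum(P[z] for z in valid))
    return float(sum(P[z] * (1.0 - E[z] / E_valid_max) for z in valid))
```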

III. CONTINUOUS-TIME QUANTUM WALK
In this section, we provide a concise review of the continuous-time QW (Sec. III A) and investigate its dynamics on search (Sec. III B) and combinatorial optimization problems (Sec. III C) using LAT theory. Our analysis shows that the QW can be systematically guided on the hypercube graph based on the energy spectrum of the problem Hamiltonian. Building upon these insights, we introduce the GQW in Sec. III D. In Sec. III E we examine the relationship between the GQW, QA and QWs.

A. Definition
The continuous-time QW is a quantum algorithm that assigns the computational basis states |j⟩ of an N-qubit Hilbert space to the set of vertex labels V = {j} of an undirected graph G(V, T). In this framework, the vertices encode the walker's position, and the set of edges T indicates allowed transitions between label pairs (j, k). The latter is described through an adjacency matrix A, whose elements satisfy A_{j,k} = 1 if an edge in G connects vertices j and k, and A_{j,k} = 0 otherwise. As G is undirected, A is symmetric and can be used to define the quantum-walk Hamiltonian, given by

Ĥ_QW = Γ · Ĥ_D = −Γ · A, (5)

where Ĥ_D denotes the driver Hamiltonian, and Γ is the hopping rate. It is important to note that Eq. (5) is not the only possible Hamiltonian for a QW. In the literature, the Laplacian L = −Ĥ_D − D of G is often used instead of Ĥ_D, with the diagonal matrix D = Σ_j deg(j) |j⟩⟨j| encoding the degree deg(j) of each vertex j, i.e. the number of edges incident to j [5]. However, for the regular graphs G considered in this paper, where deg(j) is constant with respect to j, both formulations are equivalent up to an unobservable global phase factor. We chose to use the adjacency operator form of Ĥ_QW for consistency with other quantum optimization algorithms.
Given the QW Hamiltonian (Eq. (5)), the state of the walker evolves from some initial state |Ψ_0⟩ according to the time-dependent Schrödinger equation, which yields the state of the system at a time T as

|Ψ(T)⟩ = e^{−i Ĥ_QW T} |Ψ_0⟩, (6)

where we have used units with ℏ = 1. The quantum dynamics implemented by this evolution clearly depend on the connectivity in the graph G. In the past, QWs have been studied on a variety of graph layouts, including the complete graph, which couples every vertex to every other, and the N-dimensional hypercube, which connects only vertices of Hamming distance one [5]. In this paper, we focus on a hypercube, as it provides a natural encoding for a QW into qubits. Specifically, transitioning from one vertex to a neighboring vertex corresponds to flipping the computational state of one qubit. As such, the driving Hamiltonian Ĥ_D is composed of N single-body terms,

Ĥ_D = −Σ_{j=1}^{N} σ̂^x_j, (7)

where σ̂^x_j denotes the Pauli-X operator applied to the j-th qubit with the identity operator acting on the remaining qubits. The corresponding QW Hamiltonian for the hypercube is given by Ĥ_QW = −Γ · Σ_{j=1}^{N} σ̂^x_j. A primary feature of this Hamiltonian is its ability to rapidly explore the vertices in the graph G, thereby providing high dynamics in the computational basis. By introducing a secondary problem Hamiltonian that is diagonal in the computational basis, the graph G becomes directed, leading to a concentration of amplitude in the ground state of the problem Hamiltonian (see App. B for further information). In the following sections, we investigate this ability of QWs to find ground states, starting with the search problem, a well-studied toy problem that can be analytically solved using the QW.
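To make the hypercube dynamics concrete, a small self-contained Python sketch (our illustration, not from the paper) builds Ĥ_D on the N-dimensional hypercube and evolves the equal superposition state under Ĥ_QW = Γ·Ĥ_D. Since the equal superposition is an eigenstate of the driver alone, the measurement probabilities stay uniform until a problem Hamiltonian is added:

```python
import numpy as np
from scipy.linalg import expm

def hypercube_driver(N):
    """Driver H_D = -sum_j sigma^x_j: (negative) adjacency matrix of the
    N-dimensional hypercube, connecting states of Hamming distance one."""
    dim = 2**N
    H = np.zeros((dim, dim))
    for z in range(dim):
        for j in range(N):
            H[z, z ^ (1 << j)] = -1.0   # flip qubit j
    return H

def evolve(H, psi0, T):
    """|Psi(T)> = exp(-i H T) |Psi0>  (units with hbar = 1)."""
    return expm(-1j * H * T) @ psi0

N, Gamma, T = 3, 1.0, 0.7
H_QW = Gamma * hypercube_driver(N)
psi0 = np.full(2**N, 1 / np.sqrt(2**N))   # equal superposition state
psi_T = evolve(H_QW, psi0, T)
```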

B. Quantum walk search
In the search problem, we aim to find a specific bit string z_opt ∈ {0, 1}^N from a set of 2^N possible strings. The problem can be mapped to a quantum setting using an oracle Hamiltonian

Ĥ_O = −|z_opt⟩⟨z_opt|, (8)

which assigns one unit of energy less to the solution state |z_opt⟩ compared to the rest of the basis states. Solving the search problem is then equivalent to finding the ground state of Ĥ_O. The QW provides a means of solving the search problem by combining Ĥ_O with the driving Hamiltonian Ĥ_D (Eq. (7)), and adjusting their relative strength via the dimensionless hopping rate Γ. The computation is performed by evolving the quantum system, initialized in the equal superposition state

|Ψ_0⟩ = 2^{−N/2} Σ_z |z⟩, (9)

under the QW search Hamiltonian

Ĥ_QWS = Γ · Ĥ_D + Ĥ_O (10)

for a time T and measuring the qubit register in the computational basis afterward.
Childs and Goldstone have solved the QW search problem analytically for various graph layouts [5], including the complete and hypercube graphs. For each layout, they have calculated optimal values Γ = Γ_opt (see also [10]) for which the performance of the QW search achieves the same optimal quadratic speedup as Grover's search algorithm [41].
Figure 1 presents the application of the QW to an N = 12 qubit search problem, giving insights into the characteristics of the QW's dynamics. Specifically, the entire system evolves periodically, with the individual measurement probabilities of the vertices oscillating as a function of the evolution time T (see Fig. 1c). This behavior occurs because the QW performs Rabi oscillations between the initial state |Ψ_0⟩ and the solution state |z_opt⟩. Figure 1a shows the two lowest energy levels corresponding to the states |E_0(Γ)⟩ and |E_1(Γ)⟩ as a function of the hopping rate Γ. When Γ = Γ_opt, the relative strengths of the two contributing Hamiltonians in Ĥ_QWS are balanced equally, and the two energy levels undergo an avoided level crossing. If N is large enough, |E_{0,1}(Γ_opt)⟩ is approximately equal to the uniform superposition of the initial and the solution state, i.e. |E_{0,1}(Γ_opt)⟩ ≈ (|Ψ_0⟩ ± |z_opt⟩)/√2 (see Fig. 1b). Hence, Ĥ_QWS drives transitions between |Ψ_0⟩ and |z_opt⟩ with a frequency ∝ [E_1(Γ_opt) − E_0(Γ_opt)]. Consequently, the overlap with the solution state |⟨z_opt|Ψ(T)⟩| depends on the hopping rate Γ and the evolution time T, which both require high precision in order to obtain accurate results (see Fig. 1c).
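The Rabi-oscillation picture can be reproduced numerically. The following illustrative Python sketch (our code; Γ = 1/N is used as a rough balance point between the two Hamiltonians, not the paper's exact Γ_opt) evolves Ĥ_QWS for a small system and records the overlap with |z_opt⟩:

```python
import numpy as np
from scipy.linalg import expm

def qw_search_overlap(N, Gamma, times, z_opt=0):
    """Overlap |<z_opt|Psi(T)>|^2 for H_QWS = Gamma*H_D + H_O, where
    H_D = -sum_j sigma^x_j (hypercube driver) and the oracle H_O assigns
    energy -1 to |z_opt>. Starts from the equal superposition state."""
    dim = 2**N
    H = np.zeros((dim, dim))
    for z in range(dim):
        for j in range(N):
            H[z, z ^ (1 << j)] = -Gamma   # driver hopping terms
    H[z_opt, z_opt] = -1.0                # oracle marks the solution
    psi0 = np.full(dim, 1 / np.sqrt(dim))
    return [abs(expm(-1j * H * T) @ psi0)[z_opt]**2 for T in times]
```

Sweeping T shows the oscillating overlap: it starts at 2^{−N} and is periodically amplified when Γ·N ≈ 1 balances driver and oracle.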

C. Guiding a quantum walk
Combinatorial optimization problems can also benefit from the application of QWs. We can address optimization problems within the QW by augmenting the driver Hamiltonian Ĥ_D with the cost Hamiltonian Ĥ_C defined in Eq. (1). The latter induces complex phase gradients between connected vertices based on their assigned cost value, thereby defining a direction of propagation for the walker in the graph. The QW optimization Hamiltonian on the hypercube mapping is given by

Ĥ_QWO = Γ · Ĥ_D + Ĥ_C, (11)

with Γ balancing the relative strength of the two contributing parts. The QW is performed analogously to Sec. III B, by initializing the qubits in the equal superposition state |Ψ_0⟩ (see Eq. (9)), evolving the system under Ĥ_QWO for a time T, and then measuring it in the computational basis. However, unlike for the search problem, the evolution under Ĥ_QWO cannot be efficiently calculated analytically. This makes it impractical to predict optimal parameter sets Γ_opt and T_opt that maximize the final overlap with the solution state for an arbitrary optimization problem. This is because the energy spectrum of Ĥ_C typically features numerous distinct levels with unknown energy gaps, in contrast to the almost completely degenerate spectrum of Ĥ_O. Consequently, the energy levels of the combined Hamiltonian split as a function of Γ and thereby undergo numerous avoided level crossings (see Fig. 2a). Since the system is initialized in a superposition across multiple energy levels, Ĥ_QWO drives transitions between various eigenstates, and the simple two-level description of the walker's dynamics used previously is no longer applicable (see Fig. 2b). As a result, the oscillation of the solution quality S_q becomes highly complex, as multiple streams of amplitude transfers at different energy levels interfere with each other (see Fig. 2c).
Previous studies have explored heuristic approaches to obtain near-optimal hopping rates Γ for the QW within polynomial time. For instance, Callison et al. proposed estimating Γ_opt from the overall energy scale of Ĥ_C by matching the total energy spreads of the two Hamiltonians in Ĥ_QWO [23]. Later, the authors extended this strategy by sampling Γ_opt based on a maximization of the average dynamics on Sherrington-Kirkpatrick spin glass problems [11]. Recently, Banks et al. investigated the link between time-independent Hamiltonians and thermalization, leading to an estimate of Γ_opt through the eigenstate thermalization hypothesis on Max-Cut problem instances [12].
While these strategies demonstrate the general ability of QWs to solve combinatorial optimization problems, they necessitate additional adjustments for each problem type (e.g., estimating energy gaps) and focus solely on the average dynamics in the hypercube graph. However, local variations in these dynamics across different regions of the graph are the primary reason for the observed distortions in the oscillation of the solution quality in Fig. 2c. These distortions not only make it challenging to estimate T_opt from a few samples, but also limit the maximally achievable solution quality for any T as the system approaches a stationary state (cf. Fig. 4 below).
The optimal solution quality typically scales exponentially with the problem size N (cf. Fig. 8b below) because the QW can only drive amplitude transfers within a fixed energy range, neglecting the amplitude originating from exponentially many states outside this range. Consequently, the QW as defined in Eq. (11) is not well-suited for large combinatorial optimization problems.
Inspired by these strategies, but being interested in achieving practical quantum computation (as measured by a large solution quality) for any given evolution time T, problem size N and problem type, we investigate the dynamics of QWs by applying the LAT theory to the hypercube graph G shown in Fig. 3a. By analyzing the transfer of probability amplitude in local subspaces spanned by basis-state pairs with Hamming distance 1 (i.e., states connected by an edge in G), we aim to derive a mechanism to control the movement of the walker locally in the graph, such that backpropagation of amplitude into (undesired) high-energy states can be suppressed. Thus, instead of maximizing transitions in the entire graph collectively, as proposed in prior studies, our approach is to maximize them only locally at a time in order to guide the walker towards the solution state more effectively.
The LAT theory focuses on two-dimensional subspaces spanned by pairs of basis states, |j⟩ and |k⟩, that are connected in the hypercube graph G, i.e. ⟨j| Ĥ_D |k⟩ ≠ 0 (see inset of Fig. 3a). The effective two-level subspace Hamiltonian is given by

Ĥ^{(j,k)}_QWO = Γ · Ĥ^{(j,k)}_D + Ĥ^{(j,k)}_C, (12)

assuming the rest of the system remains in its initial state. Here, Ĥ^{(j,k)}_D and Ĥ^{(j,k)}_C denote the restrictions of the driver and cost Hamiltonians to the subspace spanned by |j⟩ and |k⟩. If the system starts in the local equal superposition state |+⟩^{(j,k)} = (|j⟩ + |k⟩)/√2, the measurement probability of the desired lower-energy state |k⟩ is given by Eq. (14). Equation (14) represents a sinusoidal oscillation with a Rabi frequency of ω = √(Γ² + δ²), where the detuning δ is set by the local energy gap of Ĥ_C between |j⟩ and |k⟩, similar to Sec. III B (note that this property holds for any initial state). Specifically, if Γ ≫ 1, the driving Hamiltonian dominates Ĥ^{(j,k)}_QWO, and the cost Hamiltonian has almost no influence on the system's evolution. Since |+⟩^{(j,k)} corresponds to the ground state of Ĥ^{(j,k)}_D, the system remains primarily in its initial state. Conversely, if Γ ≪ 1, Ĥ^{(j,k)}_C dominates, and transitions between the two basis states are suppressed.

[Figure caption fragment (Fig. 3): The dashed black curve illustrates the optimal relative strength for a QW. The bottom axis denotes the relative time in both algorithms, with time progressing from right to left, as the GQW progresses from high-energy to low-energy amplitude transfers.]
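The local Rabi dynamics can be checked numerically. The sketch below is ours and assumes one common convention for the effective two-level Hamiltonian, H = (δ σ̂^z − Γ σ̂^x)/2, which reproduces the level splitting ω = √(Γ² + δ²) quoted in the text; the paper's exact prefactors may differ:

```python
import numpy as np
from scipy.linalg import expm

def local_rabi(Gamma, delta, times):
    """Probability of the lower-energy state |k> (basis index 1) under the
    assumed two-level Hamiltonian H = (delta*sz - Gamma*sx)/2, starting
    from the local equal superposition |+> = (|j> + |k>)/sqrt(2).
    The splitting is omega = sqrt(Gamma^2 + delta^2), so the probability
    oscillates sinusoidally at that frequency."""
    sz = np.diag([1.0, -1.0])
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    H = 0.5 * delta * sz - 0.5 * Gamma * sx
    psi0 = np.array([1.0, 1.0]) / np.sqrt(2)
    return [abs(expm(-1j * H * t) @ psi0)[1]**2 for t in times]
```

With this convention the transfer is complete when Γ matches the detuning (Γ = δ), mirroring the resonance condition ∆C_{j,k} ≈ |Γ∆_D| discussed below.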
As a result, amplitude transfers can only occur efficiently among specific subsets of vertex pairs with an energy gap ∆C_{j,k} ≈ |Γ∆_D| in Ĥ_C. Transitions between states with significantly larger or smaller energy gaps are suppressed for fixed Γ. Since the energy gaps of vertex pairs typically vary throughout the graph, we can steer the walker's movement effectively by selecting Γ to activate only the desired transitions in the graph.
For all EC, TSP, and GO problems under investigation, we noticed empirically that the distribution of the energy levels acquires a characteristic 'onion shape' (see Fig. 3a) as soon as the problem instance is sufficiently complex (i.e., the number of qubits is sufficiently large). Additionally, the largest energy gap ∆C_{j,k} from a vertex |j⟩ to a lower-energy vertex |k⟩ increases approximately monotonically as a function of its energy level E_j (see Fig. 3b). Consequently, large values of Γ are usually optimal in the regime of high-energy states, while small values are preferable near the solution state. Choosing a fixed value for Γ has the disadvantage of limiting the maximally achievable success probability because not all edges can sufficiently contribute to the amplitude transport. In particular, only amplitude transfers within a fixed energy range can be addressed, generally prohibiting states located at high energies from transporting amplitude to the solution state.
The strategy we propose in this paper is based on an energy-dependent hopping rate, where |Γ(E) · ∆_D| corresponds to the average ⟨∆_C⟩(E) of the largest energy gaps of Ĥ_C at energy level E in the graph. The approach is to confine the walker to a gradually shrinking energy region around the solution state by progressing monotonically from high-energy to low-energy optimal hopping rates Γ. Therefore, we set Γ(t) = Γ(E(t)), where E(t) is a monotonic sweep, and define the GQW Hamiltonian

Ĥ_GQW = Γ(t) · Ĥ_D + Ĥ_C. (15)

Equation (14) shows that the frequency of the local Rabi oscillations depends on the size of the energy gaps, resulting in faster amplitude transfers at larger gaps (see Fig. 3b). Thus, the GQW needs to spend less time driving transitions at high energies than at small energies to avoid amplitude flowing back to high-energy states. Consequently, the energy sweep must be rescaled according to ⟨∆_C⟩(E), yielding a rescaled sweep that progresses from E_max down to E_min, where E_min and E_max denote the lowest and highest energy levels of Ĥ_C, respectively, and T is the total evolution time. At t = 0, the algorithm starts with a relatively high hopping rate (Γ = Γ(E_max)) that maximizes amplitude transfers only at high energy levels and suppresses dynamics at low and intermediate energies. As the hopping rate is decreased over time, the system starts to perform amplitude transfers at lower energy levels. Simultaneously, subspaces at higher energies become gradually detuned again, suppressing the action of the driving Hamiltonian. This is essential as it prevents the backpropagation of probability into high-energy states and enables us to actively guide the walker towards low-energy states. The primary advantage of this approach lies in the establishment of a continuous amplitude flow from all computational basis states towards the solution state, consequently overcoming the previous limitation on the maximally achievable solution quality. Furthermore, the latter is no longer subject to the complex oscillations, which required a precisely chosen evolution time T (cf. Fig. 2c). Instead, S_q(T) follows a collective sinusoidal oscillation of the form of Eq. (14), where the selection of Γ(t) ensures that active subspaces oscillate with similar Rabi frequencies while high-energy subspaces become gradually detuned, hence suppressing the backpropagation of probability amplitude. By varying the evolution time T, the speed at which Γ(E) is swept can be adjusted, thereby altering the duration that groups of subspaces remain active. As a result, S_q(T) depends solely on the order of magnitude of T, rather than its precise value.
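The scheduling logic above can be sketched as follows. This is our reconstruction of the idea, not the paper's exact equations: the hopping rate is set by Γ(E) = ⟨∆_C⟩(E)/|∆_D|, the sweep runs from high to low energy, and the time spent at each level is taken inversely proportional to ⟨∆_C⟩(E) so that slower (small-gap) transfers receive more time:

```python
import numpy as np

def hopping_schedule(E_levels, gap_avg, Delta_D, T):
    """Piecewise-constant GQW schedule sketch (our reconstruction):
    returns breakpoints t and hopping rates Gamma, sweeping from E_max
    down to E_min with dwell times proportional to 1/<Delta_C>(E)."""
    E_levels = np.asarray(E_levels, float)   # energy levels (any order)
    gap_avg = np.asarray(gap_avg, float)     # <Delta_C>(E) per level
    order = np.argsort(E_levels)[::-1]       # high energies first
    dt = 1.0 / gap_avg[order]                # slower transfers: more time
    dt *= T / dt.sum()                       # normalize to total time T
    t = np.concatenate([[0.0], np.cumsum(dt)])
    Gamma = gap_avg[order] / abs(Delta_D)    # Gamma(E) = <Delta_C>(E)/|Delta_D|
    return t, Gamma
```

Because ⟨∆_C⟩(E) grows with E for the problems studied here, the resulting schedule starts at a large Γ and decreases monotonically.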
To evaluate the efficiency of a GQW, we compare its performance to a conventional QW on an EC problem instance comprising N = 21 qubits. Figure 3b shows the average ⟨∆_C⟩(E) of the largest energy gaps between connected vertices at each energy level E. We obtain |Γ(E) · ∆_D| by fitting a polynomial of degree 6 to the data points (see the black arrows in Fig. 3b). The bottom axis indicates the non-linear sweep of E(t). In comparison, for the QW, we determined Γ_opt by sampling with a classical optimizer (see dashed black curve). Γ_opt corresponds to the optimal hopping rate that yields the highest success probability over T ∈ [0.1, 10.0].
Figure 4 compares the performance of the GQW and the QW as a function of the total evolution time T. For T ≤ 2, both strategies exhibit similar behavior, showing a rapid increase in solution quality, with the QW achieving its peak at T = 1.5. Notably, for T ≤ 2.5, the GQW yields lower solution qualities than the QW. This discrepancy arises from the relatively small optimal hopping rate of the QW, enabling it to focus its evolution on amplitude transitions near and into the solution state. In contrast, the GQW considers the entire graph and therefore spends the initial part of its evolution at high and intermediate energy levels (the right part of Fig. 3b). This approach leads to very short evolutions at each energy level for small T, causing only fractions of the probability amplitude to be transported towards z_opt. However, as T increases, the situation changes, and the GQW obtains superior solution qualities compared to the QW for T ≥ 2.5. Here, the QW exhibits oscillatory behavior between 10⁻⁵ and 10⁻². In contrast, the GQW follows a monotonic increase, where S_q(T) saturates at approximately 21%, outperforming the QW by approximately one order of magnitude.
The observed saturation results from imbalances within the local subspaces at small E. As a large amount of amplitude accumulates in the ground state, the system progressively deviates from the state of equal local superposition. Consequently, driving these subspaces with their respective optimal hopping rate Γ(E) eventually redirects amplitude back into the higher-energy state, thus limiting S_q(T) (see also App. B).

[Figure caption fragment (Fig. 4): In contrast, the QW employs a fixed hopping rate Γ that has been pre-optimized (see Fig. 3b). The performance is evaluated using the solution quality S_q, as defined in Eq. (4). Note that the results presented here are indicative and require knowledge of the complete energy spectrum.]

D. Practical guided quantum walk
The previous section has shown the potential of GQWs for solving combinatorial optimization problems by adjusting the relative strength of the two Hamiltonians in Ĥ_GQW based on the walker's position in the graph. The main benefits are an increased maximum solution quality and a suppression of complex oscillations in S_q(t), providing good results for arbitrary T and N.
Of course, obtaining the optimal function Γ(E(t)) is generally not possible in an efficient manner, because it requires knowledge of the entire energy spectrum of Ĥ_C. In order to still make use of the promising methodology described above, we propose a variational ansatz, in analogy to other quantum optimization algorithms [42-44]. The idea is to imitate the optimal distribution Γ(E) by a function Γ(E, λ), which is tuned using a set of M hyperparameters λ = (λ_1, . . ., λ_M). Note that henceforth we consider only linear sweeps of the energy spectrum, i.e. E(t) = (E_min − E_max) · t/T + E_max, thus encoding the sampling speed in the shape of Γ. The hope is that as long as Γ(E, λ) describes Γ(E) closely enough, similar dynamics to Sec. III C can be obtained. In fact, introducing hyperparameters into the algorithm even enables the guided quantum walk to overcome its limitations at small and large T. For instance, at small T, the GQW could selectively model Γ(E) only up to E < E_max, thereby operating solely on a subspace of the entire graph near |z_opt⟩. Conversely, at large T, the GQW can prevent the transfer of amplitude into higher energy states when operating at small E by compensating the imbalances within the local subspaces through a reduction of the final hopping rate, Γ(E = E_min).
The proposed algorithm employs a hybrid quantum-classical ansatz, in which a classical optimizer adjusts the set of hyperparameters λ based on the minimization of the energy expectation value E_Ψ (see Eq. (3)) of the final quantum state |Ψ⟩ obtained by a quantum device performing the GQW. Note that the number M of hyperparameters is fixed, and we investigate the impact of this optimization phase on the total run time in Sec. IV C.
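The hybrid loop can be sketched with SciPy's Nelder-Mead implementation. This is an illustrative skeleton (our code): `run_gqw` is a placeholder for the quantum (or simulated) GQW evolution that maps hyperparameters λ to a final state vector, and `H_C_diag` holds the diagonal of Ĥ_C:

```python
import numpy as np
from scipy.optimize import minimize

def energy_expectation(lmbda, run_gqw, H_C_diag):
    """Classical objective: E_Psi = <Psi|H_C|Psi> for the final state
    produced by a GQW with hyperparameters lmbda."""
    psi = run_gqw(lmbda)
    return float(np.real(np.vdot(psi, H_C_diag * psi)))

def optimize_schedule(run_gqw, H_C_diag, lmbda0, n_opt=100):
    """Tune the M hyperparameters with the Nelder-Mead optimizer,
    limited to n_opt optimization steps as in the text."""
    res = minimize(energy_expectation, lmbda0, args=(run_gqw, H_C_diag),
                   method="Nelder-Mead", options={"maxiter": n_opt})
    return res.x, res.fun
```

Each optimizer step requires one execution of the GQW, so the classical overhead is proportional to the number of optimization steps.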
We propose a function Γ(E, λ) based on cubic Bézier curves. We chose Bézier curves instead of simple polynomials because we expect the optimal hopping rate to be smooth and monotonically decreasing in E. Although polynomials can produce such functions for E ∈ [E_min, E_max], their parameters are generally hard to tune, as small changes can lead to substantially different functions. In contrast, Bézier curves are much easier to optimize since their general shape can be predetermined. Moreover, a Bézier curve varies continuously and sufficiently slowly in its parameters, resulting in a smooth search space for λ. Due to these properties, we strongly encourage their use in other fields of quantum computing, such as optimizing annealing schedules [30-32] or deriving optimal parameter sets for the quantum approximate optimization algorithm (QAOA) [2,42,45-47].
Cubic Bézier curves are based on Bernstein polynomials and are defined through four control points C_i = (x_i, y_i)^T in a two-dimensional plane as

B(u) = (1 − u)^3 C_0 + 3(1 − u)^2 u C_1 + 3(1 − u) u^2 C_2 + u^3 C_3,  u ∈ [0, 1].

[FIG. 5. The optimal hopping rate Γ(t, λ) (see Eq. (18)) is defined by four control points C_{0,1,2,3} based on the hyperparameters λ_{1−4}, along with the boundary conditions Γ(t = 0, λ) = 10^{2·λ_5} and Γ(t = T, λ) = 10^{−3·λ_6}. The curve is obtained by optimizing a set of six hyperparameters λ using the classical Nelder-Mead optimizer [48, 49], employing N_opt = 100 optimization steps starting from a random initial configuration.]

Since Γ(t, λ) can vary significantly between different problem instances, we chose an exponential rescaling of the boundary values to simplify the optimization landscape. Note that the parameters 2 and −3 were found suitable for the problems under investigation, but they generally depend on the energy scale of the cost Hamiltonian ĤC. In Fig. 5 we present Γ(t, λ) for the aforementioned EC problem and T = 10.
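To make the construction concrete, the following sketch evaluates such a Bézier-based schedule. The endpoint rescaling follows the boundary conditions Γ(t = 0, λ) = 10^{2·λ_5} and Γ(t = T, λ) = 10^{−3·λ_6} quoted above, while the placement of the two inner control points in terms of λ_1–λ_4 is a hypothetical choice for illustration only:

```python
import numpy as np

def bezier_hopping_rate(lmbda, T, num=101):
    """Sketch of a cubic-Bezier hopping-rate schedule Gamma(t, lambda).

    lmbda[0:4] shape the curve via the two inner control points (our own
    illustrative parametrization); lmbda[4:6] set the boundary values via
    Gamma(0) = 10**(2*l5) and Gamma(T) = 10**(-3*l6).
    """
    l1, l2, l3, l4, l5, l6 = lmbda
    g0, gT = 10.0 ** (2 * l5), 10.0 ** (-3 * l6)   # boundary conditions
    # Control points (x = time, y = hopping rate).
    C = np.array([[0.0, g0],
                  [l1 * T, g0 * l2],
                  [l3 * T, gT + (g0 - gT) * l4],
                  [T, gT]])
    u = np.linspace(0.0, 1.0, num)
    # Cubic Bernstein basis functions evaluated on u.
    B = np.stack([(1 - u) ** 3, 3 * (1 - u) ** 2 * u,
                  3 * (1 - u) * u ** 2, u ** 3], axis=1)
    curve = B @ C          # points (t, Gamma) along the curve
    return curve[:, 0], curve[:, 1]

t, gamma = bezier_hopping_rate([0.3, 0.5, 0.7, 0.2, 1.0, 1.0], T=10.0)
```

Because the curve is a convex combination of its control points, small changes in λ deform the schedule smoothly, which is precisely the property that makes the search space for λ well-behaved.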

E. From quantum walk to quantum annealing
The LAT theory, used in the derivation of the GQW (see Sec. III C), offers a new perspective on the working principle of quantum annealing (QA) and its relationship to QWs. QA belongs to the class of continuous-time quantum optimization algorithms that rely on an adiabatic transition from the driver Hamiltonian ĤD (i.e., Γ(t = 0) → ∞) to the problem Hamiltonian ĤC (i.e., Γ(t = T) = 0) throughout the time evolution. According to the adiabatic theorem of quantum mechanics, if this transition occurs sufficiently slowly and the system is initially prepared in the ground state of ĤD, it will remain in the instantaneous ground state of the combined Hamiltonian ĤGQW in Eq. (15), ultimately reaching the solution state |z_opt⟩.
To explore the relationship between QWs and QA, we use the GQW to examine the strategies governing optimal quantum evolutions across different time intervals T. Figure 7 presents simulation results of the GQW applied to an N = 15 qubit EC problem, covering short (T = 0.5), intermediate (T = 2.0), and long (T = 12) evolutions. Panels (a)-(c) depict the average measurement probabilities of the energy levels E_C within the problem Hamiltonian ĤC. Panels (d)-(f) showcase the evolution of the instantaneous energy levels E_GQW of the combined Hamiltonian ĤGQW. Furthermore, Fig. 6 presents the optimal range of Γ(t, λ) as a function of the total evolution time T for the same EC problem.
Short evolutions: In the case of short evolutions, the optimal hopping rate schedule Γ(t, λ) determined by the GQW maintains a near-constant relationship between ĤD and ĤC in Eq. (15). Consequently, the instantaneous energy levels mostly remain unchanged during the evolution, and the system starts and ends in a superposition of the instantaneous basis states (see the dashed line in Fig. 7d). For T = 0.5, Γ(t, λ) decreases from 0.6, approximately equivalent to Γ(E_min), to 0.3. This transition initially drives the subspaces involving |z_opt⟩, hence resembling a QW-like procedure, before progressively detuning them to prevent the back-propagation of amplitude into higher energy states (see Fig. 6). The latter is necessary as these subspaces gradually move away from a close-to-equal superposition state (see App. B). Figure 7a demonstrates that this strategy results in a guided movement of amplitude towards |z_opt⟩ for E_C ≤ 12, evident from the emerging gradient in the measurement probabilities. Probability amplitude originating from intermediate and high energy levels, on the other hand, fails to reach the solution state and instead becomes trapped at these energies within the graph (see horizontal dark and bright stripes in Fig. 7a). This highlights the inherent limitations of QWs.
Intermediate evolutions: In the case of intermediate evolution times, the optimal hopping rate schedule operates over a wider range of values, with the initial hopping rate Γ(t = 0, λ) increasing monotonically as a function of T (see Fig. 6). In doing so, the GQW extends the subset of the hypercube graph in which amplitude is actively guided through the local subspaces. This is evident from the increased final success probability and the gradual transport of amplitude from high to low energy states for t ≤ 0.5 (see Fig. 7b). The presence of bright stripes at E_C ≥ 10 indicates that the GQW is still operating on a subset of the hypercube graph, causing amplitude from high-energy states to become trapped at intermediate energy levels. This demonstrates that even for intermediate evolution times, it is more advantageous to neglect amplitude at high-energy states and focus on amplitude transfers near the solution state. Interestingly, the final hopping rate Γ(t = T, λ) also increases for longer evolution times T, up to a maximum at T ≈ 2. Subsequently, for T > 2, Γ(t = T, λ) declines, again detuning the local subspaces at small E. The shape of Γ(t = T, λ) is likely influenced by the density of states of ĤC. As Fig.
3a illustrates for an N = 21 qubit EC problem, the state density of ĤC for our EC instances rapidly increases for smaller E, reaching a peak at E_p, followed by an exponential decline. Consequently, as Γ(t = 0, λ) initially increases, the accessible amplitude likely grows faster than the transport of amplitude into |z_opt⟩ within T. This keeps the local subspaces surrounding |z_opt⟩ closer to an equal superposition state, requiring less final detuning. The peak of Γ(t = T, λ) aligns with the point where Γ(t = 0, λ) ≈ Γ(E_p). As Γ(t = 0, λ) further increases, the growth of accessible amplitude slows down, causing the local subspaces at lower E to move away from an equal superposition state. Consequently, a larger final detuning is required, resulting in a smaller Γ(t = T, λ).
Long evolutions: For long evolutions, the optimal hopping rate schedule determined by the GQW resembles a QA-like schedule by transitioning nearly entirely from ĤD to ĤC in Eq. (15) (see Fig. 6). Consequently, the system follows the instantaneous ground state of ĤGQW throughout the evolution, indicated by the dashed line in Fig. 7f. Throughout this process, the GQW effectively drives amplitude transfers across the entire graph, initiating from high energy levels and moving towards low energies, evident from the absence of horizontal stripes in Fig. 7c. Notably, the confinement of amplitude occurs exponentially in t, indicating a non-linear sweep through the energy spectrum (cf. Eq. (16)). Furthermore, the propagation of amplitude follows a wave-like pattern, as illustrated by the dashed line in Fig. 7c. This pattern arises due to the confinement of amplitude in a decreasing number of vertices, leading to deviations from the equal superposition states in the local subspaces. This results in a temporary back-propagation of amplitude into higher energy states (see App. B). Nevertheless, the continuous decrease of the hopping rate Γ(t, λ) ensures, on average, the transportation of amplitude into the solution state.
The hopping rate schedules derived from the GQW not only emphasize the intrinsic connection between QWs and QA but also highlight the existence of optimal quantum evolutions that extend beyond the scope of these two algorithms. While a QW-like strategy proves optimal for short evolutions and a QA-like procedure is favored for long evolutions, our investigations reveal that intermediate values of T necessitate a combination of both strategies to maximize the solution quality.
This observation can be explained through LAT theory (see Sec. III C and App. B), as QWs and QA can be viewed as two distinct formulations of the same underlying concept. Both approaches aim for the optimal transfer of probability amplitude within local subspaces of the graph. QWs achieve this by employing a constant, small hopping rate Γ, focusing exclusively on direct transfers into the solution state during short evolutions. However, this strategy becomes less effective for long evolutions, where sufficient time is available to guide amplitude at higher energy levels as well. Consequently, QA guides amplitude through the entire graph by linking multiple local QWs together, employing a continuously decreasing hopping rate Γ(t). Thus, QWs and QA represent the two extremes within the GQW framework, with one concentrating solely on subspaces around the solution state (i.e., Γ(t = 0) ≈ Γ(t = T) ≪ 1) and the other considering the entire graph (i.e., Γ(t = 0) ≫ 1 and Γ(t = T) ≪ 1, cf. Fig. 5).
The GQW operates in the transition region between these two extremes, striking a balance between the number of guided local subspaces (i.e., the amount of guided amplitude) and the time spent at each energy level (i.e., the amount of amplitude transferred within each local subspace) for a given T. As will be discussed in Sec. IV B, these intermediate evolutions, which surpass the limitations of adiabatic time evolutions, might be capable of effectively solving large-scale optimization problems.

IV. RESULTS
In this section, we present a comprehensive performance analysis of the GQW. We compare the solution quality S_q, both as a function of the evolution time T and the system size N, to the conventional QW and QA (see Sec. IV A). Section IV B provides scaling results of the three algorithms in the regime of large problem instances (i.e., N ≫ 1). Our analysis indicates that the GQW achieves significantly higher solution qualities within a linearly growing timespan, rendering it a promising candidate for near-term quantum devices. Finally, in Sec. IV C we address the impact of the classical optimization phase on the total run time, demonstrating that the average time-to-solution scales better by a factor of ≈ 2 (≈ 4) compared to linear QA (QW).

A. Comparison of GQW, QW and QA
We assess the effectiveness of the GQW through numerical simulations carried out on the JUWELS Booster supercomputer at the Jülich Supercomputing Centre of the Forschungszentrum Jülich [40]. We compare its performance against a conventional QW and QA. Our simulations consider EC and GO instances with problem sizes N ∈ {12, 15, 18, 21, 24, 30}, as well as TSP instances with N ∈ {9, 16, 25} qubits. To demonstrate the generality of our findings, we examine 10 randomly generated problem instances for each problem type and size. We remark that the energy spectrum of each problem has been obtained classically, providing the total energy range and the individual energies of the valid states for the calculation of S_q (see Eq. (4)). Note, however, that this is done only for the purpose of benchmarking; it is not required to apply the GQW in a practical scenario.
To obtain the final quantum state of the system at the end of each algorithm, we use the second-order Suzuki-Trotter product-formula algorithm [51, 52] to solve the time-dependent Schrödinger equation, i ∂_t |Ψ(t)⟩ = H(t) |Ψ(t)⟩, with a time step of 10^−5 and total evolution times T ranging between 0.1 and 12.0. Given the consistency of our findings, we present the results for the EC instances in Fig. 8, showing the obtained solution qualities averaged within each system size, along with the standard deviations. The analogous results for the TSP and GO problems are available in App. C and support the same conclusions.
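For readers who want to reproduce the numerics at toy scale, the following sketch implements the second-order Suzuki-Trotter step. It assumes the combined Hamiltonian takes the form H(t) = Γ(t)ĤD + ĤC, with ĤC diagonal in the computational basis and a transverse-field driver ĤD = −Σ_j σ_x^(j); the two-qubit spectrum and the linear schedule below are placeholders, not the paper's problem instances:

```python
import numpy as np
from scipy.linalg import expm

def sigma_x_sum(n):
    """Driver H_D = -sum_j sigma_x^(j) on n qubits (hypercube adjacency)."""
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    h = np.zeros((2 ** n, 2 ** n))
    for j in range(n):
        op = np.eye(1)
        for k in range(n):
            op = np.kron(op, sx if k == j else np.eye(2))
        h -= op
    return h

def trotter_evolve(h_c_diag, gamma, T, dt, psi0):
    """Second-order Suzuki-Trotter step:
       U(dt) ~ e^{-i H_C dt/2} e^{-i Gamma(t) H_D dt} e^{-i H_C dt/2}."""
    n = int(np.log2(len(h_c_diag)))
    h_d = sigma_x_sum(n)
    psi = psi0.astype(complex)
    half = np.exp(-0.5j * dt * h_c_diag)      # diagonal half-step phases
    for s in range(int(round(T / dt))):
        t_mid = (s + 0.5) * dt
        psi = half * psi                                   # e^{-i H_C dt/2}
        psi = expm(-1j * dt * gamma(t_mid) * h_d) @ psi    # driver step
        psi = half * psi                                   # e^{-i H_C dt/2}
    return psi

# Example: 2-qubit toy spectrum with a linearly decreasing hopping rate.
h_c = np.array([0.0, 1.0, 2.0, 3.0])
psi0 = np.full(4, 0.5)                        # equal superposition
psi = trotter_evolve(h_c, lambda t: 1.0 - t, T=1.0, dt=1e-3, psi0=psi0)
```

Since each factor is the exponential of a Hermitian matrix, every step is unitary, so the norm of |Ψ⟩ is preserved up to rounding errors.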

Guided quantum walk:
The GQW is optimized for each problem instance and evolution time T individually by tuning its six hyperparameters λ to minimize the energy expectation value E_Ψ (see Eq. (3)) of the final quantum state |Ψ⟩. Each parameter set is thereby selected from a pool of N_rep = 100 repetitions of the Nelder-Mead classical optimizer, where in each sample the six parameters are initialized randomly and adjusted a maximum of N_opt = 100 times.

[FIG. 8. The performance of all algorithms is assessed based on the solution quality S_q defined in Eq. (4). Each problem size N is evaluated using a set of 10 randomly generated problem instances, and the presented data corresponds to the geometric mean solution qualities (solid curves) along with the geometric standard deviations (shaded areas). Note that the GQW and the QW undergo a prior classical optimization phase. The dotted gray lines indicate horizontal cuts (S_q ∈ {1%, 10%, 90%}), which are detailed in Fig. 9. The inset in panel (a) provides a zoomed-in view of the region corresponding to large evolution times T. Additionally, the dashed lines in panel (b) indicate the stationary solution quality that the damped oscillation of S_q(T) approaches for the QW at large T.]

We choose this approach to ensure
that the algorithm converges to a near-optimal minimum in the parameter search space. However, we note that a sufficient set of parameters is typically found within the first N_rep = 20 repetitions. In Sec. IV C, we discuss the impact of this optimization phase on the total run time. Figure 8a presents the simulation results of the GQW, showing the scaling of the solution quality S_q(T) as a function of the evolution time T. Across various problem types and sizes, we observe consistent patterns in the scaling of S_q(T), corresponding to the three regimes of evolutions (see Sec. III E).
Initially, the solution quality exhibits rapid growth, matching the results obtained by the QW algorithm for T ≤ 0.5 (see Fig. 8b). At S_q ≈ 10%, however, the scaling decelerates, with solution qualities above 70% for all investigated problem instances at T = 12. This scaling behavior is primarily influenced by the energy range considered during the algorithm's evolution. Since the GQW cannot sufficiently transport amplitude from all states towards the solution state at short evolution times T, the algorithm focuses its efforts on a subset of the graph to maximize the accumulation of amplitude into |z_opt⟩. As T increases, the GQW accesses a larger number of states, and consequently, S_q(T) seems to scale according to the amount of accessed amplitude (cf. discussion in Sec. III E).
Quantum walk: For simulating the conventional QW, we employ a procedure similar to that of the GQW. Specifically, we determine the optimal hopping rate Γ for each problem instance and T separately, using the Nelder-Mead optimizer with N_rep = 100 repetitions and a maximum of N_opt = 100 parameter evaluations, aiming to minimize the final energy expectation value E_Ψ.
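The multi-start strategy used for both the GQW (six parameters) and the QW (a single hopping rate Γ) can be sketched as follows; scipy's Nelder-Mead is used, and the quadratic toy objective merely stands in for the energy expectation E_Ψ returned by the quantum routine:

```python
import numpy as np
from scipy.optimize import minimize

def multistart_nelder_mead(energy_expectation, n_params,
                           n_rep=100, n_opt=100, seed=0):
    """Keep the best of n_rep randomly initialized Nelder-Mead runs,
    each capped at n_opt iterations.  `energy_expectation` stands in
    for the quantum routine returning E_Psi for a parameter set."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_rep):
        x0 = rng.uniform(0.0, 1.0, n_params)    # random initial configuration
        res = minimize(energy_expectation, x0, method="Nelder-Mead",
                       options={"maxiter": n_opt})
        if best is None or res.fun < best.fun:
            best = res
    return best

# Toy stand-in for E_Psi with its minimum at x = 0.3.
best = multistart_nelder_mead(lambda x: float(np.sum((x - 0.3) ** 2)),
                              n_params=1, n_rep=5, n_opt=50)
```

In a real run, each objective evaluation would involve one execution of the quantum evolution, which is why the number of iterations N_opt enters the total run time (Sec. IV C).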
Figure 8b illustrates the evolution of S_q(T) as a function of T for the QW. Across all problem instances, S_q(T) exhibits a damped oscillation pattern, converging to a stationary solution quality (indicated by dashed lines) below 1 at T ≥ 12. Notably, this stationary solution quality decreases exponentially in the problem size N, because the QW, with a constant hopping rate Γ, can only drive a few local subspaces sufficiently (cf. discussion in Sec. III C). Consequently, the GQW surpasses the QW in terms of performance even for short evolution times (e.g., T = 0.5), highlighting the significance of local adjustments to Γ already at short time scales. Furthermore, it is noteworthy that the QW is the only algorithm investigated that fails to achieve solution qualities greater than 20% for any problem instance and T.
Quantum annealing: In the context of QA, we examine two annealing schemes: a linear annealing scheme represented by Γ(t) = (1 − s(t))/s(t), where s(t) = t/T, and an optimized locally adiabatic schedule [50] employing a rescaled time s_opt(t). The latter is determined by numerically computing the instantaneous energy gap Δ(s) between the ground and first excited state of ĤGQW across various values of s, followed by solving ds_opt/dt ∝ Δ²(s_opt) to derive s_opt(t). This approach yields an optimized schedule that decelerates the annealing process in regions with small energy gaps while accelerating it elsewhere. A comprehensive discussion of this approach is given in [50]. It is important to note that the linear scheme represents the baseline performance of QA, where no prior optimization phase is required. Conversely, the optimized locally adiabatic schedule mirrors the theoretical best performance of QA for T ≫ 1, assuming complete knowledge about the optimization problem.
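The locally adiabatic rescaling can be sketched by integrating ds_opt/dt ∝ Δ²(s_opt) in its inverted form, t(s) ∝ ∫ ds'/Δ²(s'), normalized such that t(1) = T. The toy gap function with an avoided crossing at s = 0.5 is an assumption for illustration; in practice Δ(s) is computed numerically from ĤGQW:

```python
import numpy as np

def locally_adiabatic_schedule(gap, T, num=1001):
    """Rescaled annealing schedule from ds/dt proportional to gap(s)^2.

    gap(s): callable giving the instantaneous gap between ground and
    first excited state at schedule parameter s in [0, 1].
    Returns arrays (t, s): the times t at which the schedule reaches s."""
    s = np.linspace(0.0, 1.0, num)
    integrand = 1.0 / gap(s) ** 2
    # Trapezoidal cumulative integral t(s) = c * int_0^s ds' / gap(s')^2.
    t_of_s = np.concatenate(
        [[0.0], np.cumsum((integrand[1:] + integrand[:-1]) / 2 * np.diff(s))])
    t_of_s *= T / t_of_s[-1]        # fix the constant c so that t(1) = T
    return t_of_s, s

# Toy gap with an avoided crossing at s = 0.5.
gap = lambda s: 0.1 + np.abs(s - 0.5)
t, s = locally_adiabatic_schedule(gap, T=10.0)
```

By construction, the schedule spends most of the total time T in the region around the minimum gap, which is exactly the deceleration behavior described above.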
Figure 8c presents the solution qualities achieved with linear QA (solid lines) and optimized locally adiabatic QA (dashed lines) as a function of T. As expected from the adiabatic theorem, S_q(T) increases with the evolution time T for both approaches, with optimized locally adiabatic QA achieving up to one order of magnitude higher solution qualities at T = 12 than linear QA. Interestingly, for the N ∈ {21, 24} qubit problems, linear QA beats the optimized locally adiabatic schedule for T ≤ 4. This is likely caused by diabatic transitions in the context of fast annealing. When comparing QA to both the GQW and the QW, we observe a significantly steeper increase in solution quality for the latter two, underscoring the importance of focused amplitude transfers near the solution state for short time scales. For intermediate and long evolution times, QA surpasses the QW. The GQW, however, outperforms both algorithms across all investigated problem instances and T. Interestingly, even in the case of long evolutions (e.g., T = 12), the optimized locally adiabatic QA fails to match the solution qualities achieved by the GQW, indicating the existence of optimal schedules beyond the adiabatic theorem.

B. Performance on large problem instances
The previous section has demonstrated the efficiency of the proposed guiding procedure for quantum walks in solving combinatorial optimization problems. The GQW outperforms both the QW and QA by providing significantly higher solution qualities across all studied problem instances and evolution times T. However, to determine how these algorithms compare for real-world problem sizes (N ≫ 1) that exceed the capabilities of our numerical simulations, we analyze the scaling of the evolution time T_{S_q}(N) required to reach a solution quality S_q ∈ {1%, 10%, 90%} as a function of the problem size N. The corresponding data is shown in Fig. 9a, together with linear (a + b · N) and exponential (a · 2^{b·N}) fits to the data points obtained by the GQW and QA, respectively. The QW was excluded from this analysis, as it could not reach the required solution qualities. It is worth noting that the QW is not designed as a single-shot algorithm, and we will discuss the multi-shot QW in the subsequent section.
The data demonstrates that quantum annealing (QA) exhibits exponential scaling for both linear and optimized locally adiabatic schedules, with scaling coefficients b ∈ [0.12, 0.26] for the three solution quality levels S_q. This is in line with the expectation derived from the adiabatic theorem, stating that the instantaneous energy gaps, which shrink exponentially in N, demand an exponentially slow annealing process for the system to remain in its instantaneous ground state [53, 54]. Notably, the optimized locally adiabatic QA [50] achieves the best scaling with b = 0.12 for S_q = 0.9, showing that local adjustments to the annealing speed can significantly reduce the annealing time needed to reach high solution qualities. In contrast, T_{S_q}(N) follows a linear scaling in N for the GQW. This can be explained by the fact that the depth of a hypercube graph scales linearly in the number of qubits N (i.e., the largest Hamming distance between any two states cannot be larger than N). Hence, the GQW must at most drive amplitude transfers within N local subspaces to transport amplitude from any state into the solution state. Since the Hamming distance from z_opt to each computational basis state seems to correlate positively with the energy gaps of these states (cf. Fig. 3b), these amplitude transfers are performed simultaneously for all states, yielding an evolution time T that is linear in N. In the regime of large T, the GQW can thus be seen as an optimized annealing schedule, where the algorithm selectively spends more time in critical parts of the graph, while progressing faster elsewhere.
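The two scaling hypotheses can be compared with simple least-squares fits; the exponential ansatz a · 2^{b·N} becomes linear in log₂-space. The data below is synthetic, chosen only to exercise the fitting procedure, and does not reproduce the paper's measurements:

```python
import numpy as np

# Hypothetical evolution times T_Sq(N), for illustration only:
# one set growing linearly, one exponentially.
N = np.array([12, 15, 18, 21, 24, 30], dtype=float)
t_gqw = 0.4 * N + 1.0           # linear model:      a + b*N
t_qa = 0.5 * 2 ** (0.2 * N)     # exponential model: a * 2^(b*N)

# Linear fit a + b*N via least squares.
b_lin, a_lin = np.polyfit(N, t_gqw, 1)

# Exponential fit a * 2^(b*N): take log2, then fit a straight line.
b_exp, log2_a = np.polyfit(N, np.log2(t_qa), 1)
a_exp = 2.0 ** log2_a
```

Comparing the residuals of both fits on measured T_{S_q}(N) data is what distinguishes the linear scaling observed for the GQW from the exponential scaling of QA.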
Although more extensive studies are required to verify the observation of a linear scaling, our findings identify the GQW as a highly efficient algorithm that achieves high solution qualities within short time scales. This makes the GQW an attractive choice for near-term quantum devices with limited coherence times.

C. Classical optimization phase
We have examined the performance of the GQW, showcasing its capability to achieve optimal quantum evolutions by guiding amplitude transfers locally in the hypercube graph. Our findings indicate a linear scaling of the total evolution time T_{S_q}(N) with respect to the problem size N, surpassing the performance of both the QW and QA when provided with an optimal set of hyperparameters λ. However, our investigation has primarily focused on the quantum aspect of this hybrid algorithm, without counting the classical optimization phase responsible for fine-tuning the six hyperparameters λ.
In Fig. 10, we investigate the influence of the classical optimization phase on the GQW by analyzing the scaling of the average solution quality S_q as a function of the total evolution time T and the number of parameter evaluations N_opt. The latter refers to the number of iterations performed by the Nelder-Mead algorithm during the initial optimization phase. Panels (a)-(e) present the averaged results for the EC problems with sizes N ∈ {12, 15, 18, 21, 24}, respectively.
The data reveals consistent characteristics across all investigated problem instances. Specifically, we observe that the number of parameter evaluations N_opt^{S_q} necessary to reach a minimum solution quality S_q decreases exponentially as a function of the total evolution time T (see dashed curves in panels (a)-(e)). Additionally, for a fixed value of T, N_opt^{S_q} scales exponentially in the problem size N. These observations indicate that for short evolutions, where the GQW prepares QW-like hopping rate schedules, the parameter search space tends to be more complex, thereby requiring longer optimization phases. This complexity arises because the GQW is considering only a few subspaces, making it crucial to precisely set the hyperparameters λ to effectively drive amplitude transfers within these subspaces. Notably, the performance of the GQW is particularly sensitive to the choice of λ_4 and λ_5 in this regime of T. On the other hand, in the case of long evolutions, high-quality solutions can be obtained with just a few iterations of the classical optimizer. This is because deviations from the optimal schedule have minimal impact on the overall evolution of the quantum system, as the optimal hopping rate schedules approach a QA-like evolution, and success is increasingly guaranteed by the adiabatic theorem.
To incorporate the classical optimization phase into our performance evaluations, we consider the time-to-solution

TTS_{P_target} = N_opt · T + T · ln(1 − P_target) / ln(1 − P_gs),  (19)

where P_gs and T denote the success probability and the evolution time of the algorithm, respectively. TTS_{P_target} represents the total run time required to measure the solution state |z_opt⟩ at least once, with a probability of P_target, over multiple runs of the algorithm. Note that the optimization phase is accounted for through the offset N_opt · T. In Fig. 10f, we show a comparison of the scaling of the optimal (smallest) TTS_{99.99%} achieved by the GQW, the QW, and linear QA as a function of the problem size N. Note that optimized locally adiabatic QA is excluded from this analysis, as it requires knowledge of the full spectral information about the optimization problem. Exponential fits (a · 2^{b·N}) are included as a reference. The data reveals that for small problem sizes, both the QW and linear QA demonstrate faster convergence to the solution state compared to the GQW, due to the GQW's initial optimization phase. However, for N ≥ 15 and N ≥ 21, respectively, the GQW surpasses both algorithms with a scaling factor of b = 0.12, which is approximately four (two) times better than b = 0.47 (b = 0.22) for the QW (linear QA). Although the initial optimization phase in the GQW leads to an exponential scaling of TTS_{99.99%}, the GQW's ability to focus solely on a subset of the graph for intermediate values of T enables a significantly more efficient utilization of computational resources compared to the other algorithms. Furthermore, the fact that the exponential scaling is shifted into the optimization phase, while the single run times T scale at most linearly with N (see Sec.
IV B), offers the opportunity to distribute the optimization phase across multiple quantum computing devices, thereby providing an option to parallelize the process. This is a feature generally not feasible for QA due to the exponential scaling of T, but it could potentially allow solving large optimization problems on near-term quantum devices.
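Combining the offset N_opt · T with the standard repeated-run factor, the time-to-solution used above can be sketched as follows; this form is our reading of the text around Eq. (19) and should be checked against the original:

```python
import math

def time_to_solution(p_gs, T, n_opt, p_target=0.9999):
    """Time-to-solution with the classical optimization phase counted
    as an offset n_opt * T; the repeated-run factor is the standard
    ln(1 - p_target) / ln(1 - p_gs)."""
    repeats = math.log(1.0 - p_target) / math.log(1.0 - p_gs)
    return n_opt * T + repeats * T
```

When the single-run success probability P_gs already equals P_target, the repeated-run factor is exactly one, and the optimization offset N_opt · T dominates the total run time.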

V. CONCLUSION
We have utilized the theory of local amplitude transfer (LAT), which offers a new perspective on the operational principles of quantum annealing (QA) and quantum walks (QWs) beyond the adiabatic theorem, while also providing insights into the design of optimal quantum evolutions. The theory is rooted in the description of a quantum evolution within the eigenspace of the problem Hamiltonian ĤC. In this context, the search space is represented as a graph G where the states are interconnected through the driving Hamiltonian ĤD (see Fig. 3a). By decomposing G into two-dimensional subspaces spanned by pairs of eigenstates, we have demonstrated that probability amplitude traverses the graph through a sequence of local Rabi oscillations occurring within these subspaces. The amplitude of these oscillations depends on the relative strengths of the local driving and problem Hamiltonians, controlled by the hopping rate Γ.
We have highlighted that for sufficiently complex problems, the average energy gap of the local problem Hamiltonian monotonically increases as a function of the energy level (see Fig. 3b), allowing one to selectively drive amplitude transfers within distinct regions of the graph. This property provides a new understanding of how probability amplitude propagates through the search space during continuous-time quantum algorithms.
In particular, we have identified QWs and QA as two formulations of the same underlying principle (see Fig. 7), with QA corresponding to a sequence of distinct QWs with gradually decreasing hopping rates Γ. We have shown that a QW-like approach employing a small and constant hopping rate is generally optimal for short evolutions, as it allows for localized dynamics near the solution state. However, it becomes suboptimal for long evolutions, as it fails to effectively transfer amplitude from higher energy states. Conversely, a QA-like strategy is preferred for long evolutions, as it guides amplitude throughout the entire graph, but it is suboptimal for short evolutions, since it spends insufficient time in subspaces near the solution state. Based on these insights, we have argued that optimal quantum evolutions must adapt to the total evolution time T by striking a balance between the number of guided local subspaces (representing the amount of guided amplitude) and the time spent at each energy level (reflecting the amount of amplitude transferred within each local subspace).
Within the LAT framework, we have introduced the guided quantum walk (GQW) as a promising approach for solving large-scale combinatorial optimization problems in the transition region between QWs and QA. The GQW progressively drives local subspaces at gradually decreasing energy levels by utilizing a monotonically decreasing hopping rate Γ(t). The hopping rate is controlled through a cubic Bézier curve (see Fig. 5) defined by six hyperparameters, which allows for fine-tuning the quantum evolution to each problem instance (i.e., the energy spectrum of ĤC) and evolution time T.
We assessed the performance of the GQW in comparison to QA and QWs on exact cover (EC), traveling salesperson (TSP), and garden optimization (GO) problems ranging from 9 to 30 qubits. Across all investigated problem instances and evolution times T, the GQW outperformed both the QW and QA significantly. Specifically, at intermediate timescales, our data reveals an up to four (three) orders of magnitude better performance on 30 qubit problems compared to QA (QW), see Fig. 8. This observation is further supported by the scaling of the minimal evolution time necessary to reach a fixed solution quality as a function of the problem size. In contrast to the exponential scaling observed for QA, the GQW demonstrates a linear scaling, strongly indicating the existence of optimal quantum evolutions that solve combinatorial optimization problems in linear time T, thus surpassing the limitations of adiabatic time evolutions.
It is worth noting that the achieved linear scaling is made possible by shifting the exponential scaling to the classical optimization phase of the hyperparameters. Nonetheless, even when considering the parameter tuning in the total run time, the GQW exhibits a time-to-solution scaling that is approximately two (four) times better than for QA (QWs), see Fig. 10f. This positions the GQW as a powerful tool for deriving optimal annealing schedules. Furthermore, the presence of the exponential scaling in the classical optimization phase, rather than in the single run times, offers the opportunity to distribute the optimization phase across multiple quantum computing devices, thereby enabling parallelization of the process. Moreover, short evolution times also suggest the possibility of discretizing the GQW into a few time steps, thereby adapting the Bézier curve parametrization into a QAOA-like scheme on gate-based quantum computers. These are features generally not feasible for QA due to the exponential scaling of T, but they could potentially allow solving large optimization problems on near-term quantum devices. While further investigation is needed to determine how these observations translate to real quantum devices in the presence of environmental noise and on problems with non-canonical energy spectra (e.g., with large degeneracies), our results strongly support the practicality of the GQW for real-world optimization problems, and we expect that our strategy is easily applicable to other types of optimization problems, beyond EC, TSP and GO instances.

It is important to note, however, that for a time-independent β (e.g.
in a QW) this process will eventually reverse, causing amplitude to flow back into the high energy state. This occurs because the fraction r_j/r_k grows as amplitude is exchanged between the two states. Thus, the higher energy state's second term eventually overwhelms the cos^N(β), resulting in an increased amplitude, while the lower energy state's second term becomes suppressed, causing an overall decrease in amplitude as cos^N(β) < 1. This leads to oscillations in the probability amplitude between the two states (see, e.g., Fig. 2). To prevent this, the GQW makes use of a time-dependent, monotonically decreasing hopping rate Γ(t), which counteracts the increasing fraction r_j/r_k through the sin(β) term. By doing so, the GQW is able to prohibit the back-propagation of amplitude even in the case of large detuning, by suppressing the amplitude transfer locally.
Figures 11 and 14 depict the scaling of the solution quality S_q as a function of the evolution time T and the problem size N for the three algorithms on GO and TSP problems, respectively. The data exhibit similar characteristics to the EC problems presented in Fig. 8 and are qualitatively consistent with the discussion in Sec. IV A. Notably, the GQW achieves higher solution qualities for some large GO problems (e.g., N = 30) compared to some smaller problem sizes (e.g., N = 18) for long evolutions (see inset in Fig. 11a). This indicates that the hardness of the combinatorial optimization problems varies throughout the problem sizes, such that some large problem instances are easier to solve than their smaller counterparts once the GQW considers the entire graph. Consequently, we expect similar characteristics in the scaling of S_q to appear for QA for long evolutions.
In Figures 12 and 15, we present the scaling of the minimum evolution time T_Sq(N) required to achieve a specified solution quality Sq as a function of the problem size N. Similar to the results obtained for the EC problems, T_Sq(N) exhibits linear scaling for the GQW, in contrast to the exponential scaling observed for QA. This further supports the existence of optimal hopping-rate schedules that can solve combinatorial optimization problems in linear time. Note, however, that for the GO problems, T_0.9(N) follows neither an exponential nor a linear scaling, due to the differences in the hardness of the optimization problems.
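The distinction between the two scaling regimes can be made concrete with least-squares fits of both models, as done for the solid and dashed lines in the figures. The sketch below fits a + b·N and a·2^(b·N) to hypothetical (N, T_Sq) data; the numerical values are illustrative placeholders, not results from the paper.

```python
import numpy as np

# Hypothetical data points (N, T_Sq); real values come from simulations.
N = np.array([12, 15, 18, 21, 24])
T_gqw = np.array([3.1, 4.0, 4.8, 5.9, 6.7])      # roughly linear in N
T_qa = np.array([8.0, 16.0, 32.0, 64.0, 128.0])  # doubles every 3 qubits

def linear_fit(N, T):
    """Least-squares fit of T ~ a + b*N; returns (a, b)."""
    b, a = np.polyfit(N, T, 1)
    return a, b

def exp_fit(N, T):
    """Least-squares fit of T ~ a * 2**(b*N), linearized via log2(T)."""
    b, log2a = np.polyfit(N, np.log2(T), 1)
    return 2.0**log2a, b

a_lin, b_lin = linear_fit(N, T_gqw)  # b_lin ~ 0.30: linear growth
a_exp, b_exp = exp_fit(N, T_qa)      # b_exp = 1/3: doubling every 3 qubits
```

Comparing the residuals of the two fits on each data set then decides which scaling model describes the measured evolution times better.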
Figures 13 and 16 show the scaling of the time-to-solution (see Eq. (19)) as a function of the problem size N. For both problem types, the GQW achieves a superior scaling compared to QA and the QW, by factors of ≈ 2 (QA on GO problems), ≈ 4 (QW on GO problems) and ≈ 2 (QA and QW on TSP problems).
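For reference, a common convention for the time-to-solution metric converts a single-run success probability into the expected runtime to reach a 99.99% cumulative success probability; whether this matches the paper's Eq. (19) in every detail is an assumption, so the sketch below is only indicative.

```python
import math

def time_to_solution(T, p_success, target=0.9999):
    """Runtime to reach the target cumulative success probability by
    repeating a run of length T with single-shot success probability
    p_success (a standard convention; Eq. (19) may differ in detail)."""
    if p_success >= target:
        return T  # a single run already suffices
    repetitions = math.log(1.0 - target) / math.log(1.0 - p_success)
    return T * repetitions

# A short run with modest success probability can beat a long,
# near-deterministic one: e.g. T = 10 at 50% success needs about 13.3
# repetitions, i.e. a total time of about 133.
```

Under this convention, minimizing the time-to-solution trades off the single-run evolution time T against the solution quality it achieves, which is exactly the optimization underlying Figs. 10, 13 and 16.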

FIG. 1. Application of the QW to an N = 12 qubit search problem using the Hamiltonian Ĥ^QW_S in Eq. (10). (a) Energy spectrum of Ĥ^QW_S as a function of the hopping rate Γ. The blue and red curves denote the ground and first excited state, respectively. The data is shifted by the ground state energy. (b) Overlaps of the ground state (blue) and the first excited state (red) with the solution state (solid) and the initial state (dashed) as a function of Γ. (c) Solution quality Sq as a function of the evolution time T for Γ = Γopt = 2^{-N} Σ_{r=1}^{N} …

FIG. 2. Application of the QW to solve an EC problem for N = 12 qubits using the Hamiltonian Ĥ^QW_O in Eq. (11). (a) Energy spectrum of Ĥ^QW_O as a function of the hopping rate Γ. The blue and red curves denote the ground and first excited state, respectively. The data is shifted by the ground state energy. (b) Overlaps of the ground state (blue) and the first excited state (red) with the solution state (solid) and the initial state (dashed) as a function of Γ. (c) Solution quality Sq as a function of the evolution time T for Γ = Γopt (solid blue), Γ = 1.2·Γopt (dashed red) and Γ = 1.5·Γopt (dash-dotted green). The value of Γopt has been computed numerically.

Ĥ^C_{(j,k)} governs the evolution. Since Ĥ^C_{(j,k)} is diagonal and only induces phase rotations in the computational basis, no amplitude transfer occurs. Only for |Γ Δ^D| ≈ Δ^C_{j,k} are the two Hamiltonians' relative strengths balanced and transitions in the local subspace maximized.

FIG. 3. Energy spectrum analysis of an N = 21 qubit EC problem. (a) Hypercube graph representation of ĤC, where each point corresponds to a computational basis state |j⟩, ordered by increasing energy Ej = ⟨j| ĤC |j⟩ from left to right. The color of each point represents the total Hamming distance to the ground state zopt. Two states |j⟩ and |k⟩ are connected by an edge if the bitstrings j and k have a Hamming distance of 1, meaning that the driver Hamiltonian ĤD drives transitions between them. The inset in panel (a) provides a zoomed-in view near zopt, highlighting the energy gaps Δ^C_{j,k} between connected states |j⟩ and |k⟩. (b) Distribution of the largest energy gaps Δ^C_{j,k} from a vertex |j⟩ to a lower-energy vertex |k⟩ as a function of Ej. The data reveal an increasing trend of Δ^C_{j,k} with respect to Ej. The solid black curve represents a fit to the average energy gap ⟨Δ^C⟩(E) at energy level E (using the scale of the top axis), used by the GQW to locally adjust the relative strength of the driving and problem Hamiltonian in Eq. (11). The dashed black curve illustrates the optimal relative strength for a QW. The bottom axis denotes the relative time in both algorithms, with time progressing from right to left, as the GQW progresses from high-energy to low-energy amplitude transfers.

FIG. 4. Performance comparison between the GQW (blue) and the QW (red) on the N = 21 qubit EC problem depicted in Fig. 3. The GQW dynamically adjusts the relative strength Γ of the driving and problem Hamiltonian in Eq. (15) based on the average ⟨Δ^C⟩(E) of the largest energy gaps in ĤC. In contrast, the QW employs a fixed hopping rate Γ that has been pre-optimized (see Fig. 3b). The performance is evaluated using the solution quality Sq, as defined in Eq. (4). Note that the results presented here are indicative and require knowledge of the complete energy spectrum.

FIG. 6. Ranges of the optimal hopping rate Γ(t, λ) (solid lines) for an N = 15 qubit EC problem as a function of the total evolution time T. Γ(t = 0, λ) and Γ(t = T, λ) denote the initial and final hopping rate, respectively, as determined by the hyperparameters λ5 and λ6 (see Sec. III D). Note that Γ(t, λ) is scaled by Δ^D to map to the energy gaps of the local subspaces in the hypercube graph. The heat map presents the evolution of the measurement probabilities averaged over states with equal maximum energy gaps Δ^C_{j,k} (see Fig. 3b). Dotted lines mark the range of energy gaps of local subspaces involving the solution state.

FIG. 7. Optimized quantum evolutions performed by the GQW on the N = 15 qubit EC problem shown in Fig. 6 for short (T = 0.5, (a) and (d)), intermediate (T = 2.0, (b) and (e)), and long (T = 12, (c) and (f)) evolution times. Panels (a)-(c) illustrate the average measurement probabilities of the energy levels EC of the problem Hamiltonian ĤC. Panels (d)-(f) display the evolution of the instantaneous energy levels EGQW of the combined Hamiltonian ĤGQW (Eq. (15)). The blue and red curves represent the instantaneous ground state and the first excited state, respectively. Dashed lines are included in all panels to indicate the instantaneous energy expectation values of the system with respect to ĤC (panels (a)-(c)) and ĤGQW (panels (d)-(f)). The final solution qualities are Sq = 0.7% (T = 0.5), Sq = 13.6% (T = 2.0), and Sq = 91.3% (T = 12.0).

FIG. 8. Performance comparison of (a) the GQW, (b) the QW, and (c) QA as a function of the total evolution time T on EC problems with problem sizes N ∈ {12, 15, 18, 21, 24, 30} (see legend). Panel (c) presents simulation results for both linear QA (solid lines) and optimized locally adiabatic QA [50] (dashed lines). The latter uses a rescaled time based on the size of the instantaneous energy gap during the annealing process, assuming complete knowledge about the optimization problem. The performance of all algorithms is assessed based on the solution quality Sq defined in Eq. (4). Each problem size N is evaluated using a set of 10 randomly generated problem instances, and the presented data corresponds to the geometric mean solution qualities (solid curves) along with the geometric standard deviations (shaded areas). Note that the GQW and the QW undergo a prior classical optimization phase. The dotted gray lines indicate horizontal cuts (Sq ∈ {1%, 10%, 90%}), which are detailed in Fig. 9. The inset in panel (a) provides a zoomed-in view of the region corresponding to large evolution times T. Additionally, the dashed lines in panel (b) indicate the stationary solution quality that the damped oscillation of Sq(T) approaches for the QW at large T.

FIG. 10. Scaling analysis of the solution quality Sq as a function of the total evolution time T and the number of parameter evaluations Nopt performed by the Nelder-Mead algorithm during the classical optimization phase of the hyperparameters λ in the GQW. Panels (a)-(e) depict the results for EC problems of sizes N ∈ {12, 15, 18, 21, 24}, respectively. The data is sampled with a granularity of ΔT = 1 and ΔNopt = 5. Each data point represents the average solution quality obtained from 100 repetitions of the optimization phase. Furthermore, the data is averaged across 10 random problem instances for each problem size N. The blue circles mark the configurations of T and Nopt that yield the lowest TTS_99.99% (see Eq. (19)). The dashed curves indicate exponential fits to the Sq = 0.5 contour. In panel (f) we show the scaling of the optimal (smallest) TTS_99.99% achieved by the GQW (blue), linear QA (red), and the QW (green) as a function of the problem size N. The solid lines correspond to exponential fits (a·2^(b·N)) applied to the data points.

FIG. 15. Scaling behavior of the total evolution time T_Sq(N) required to achieve a specified solution quality Sq on TSP problems as a function of the problem size N for the GQW (circles) and linear QA (triangles). Colors indicate results for various target solution qualities: Sq = 1% (blue), Sq = 10% (red), and Sq = 90% (green). The solid lines depict linear fits (a + b·N), while the dashed lines represent exponential fits (a·2^(b·N)) applied to the data points (see legend).
FIG. 16. Scaling analysis of the optimal (smallest) time-to-solution TTS_99.99% (see Eq. (19)) for the GQW (blue), QA (red), and the QW (green) on TSP problems as a function of the problem size N. The solid lines correspond to exponential fits (a·2^(b·N)) applied to the data points (see legend).
FIG. 9. Scaling of the total evolution time T_Sq(N) required to achieve a specified solution quality Sq on EC problems as a function of the problem size N for the GQW (circles and solid lines), linear QA (triangles and dotted lines) and optimized locally adiabatic QA [50] (squares and dashed lines). Colors indicate results for various target solution qualities: Sq = 1% (blue), Sq = 10% (red), and Sq = 90% (green). The solid lines depict linear fits (a + b·N), while the dashed and dotted lines represent exponential fits (a·2^(b·N)) applied to the data points (see legend).