Investigating Quantum Approximate Optimization Algorithms under Bang-bang Protocols

The Quantum Approximate Optimization Algorithm (QAOA) is widely seen as a promising use of Noisy Intermediate-Scale Quantum (NISQ) devices. In the standard version of the algorithm, two different Hamiltonians are applied in alternation. The Hamiltonians are applied for a variable amount of time after each switch, but with a fixed total number of switches. Here we take an alternative approach and view the algorithm as a bang-bang protocol. In the bang-bang formulation, the total amount of time is fixed and broken up into a number of equal-sized intervals, and the protocol chooses which of the two Hamiltonians to apply during each interval. Thus the number of switches is not predetermined and can become as large as the discretization allows. Using a randomized greedy optimizer of protocol performance called Stochastic Descent ($\mathrm{SD}$), we investigate the performance of bang-bang QAOA on MAX-2-SAT, finding the appearance of phase transitions with respect to the total time. As the total time increases, the optimal bang-bang protocol experiences a number of jumps and plateaus in performance, which match up with an increasing number of switches in the standard QAOA formulation. At large times it becomes more difficult to find a globally optimal bang-bang protocol, and performance suffers. We investigate the effects of changing the initial conditions of the $\mathrm{SD}$ algorithm, and see that better local optima can be found by using an adiabatic initialization.


I. INTRODUCTION
The use of quantum computation to solve problems deemed hard for classical computation is an area of massive interest in both the physics and computer science communities. One candidate algorithm for practical speedups on Noisy Intermediate-Scale Quantum (NISQ) devices is the Quantum Approximate Optimization Algorithm (QAOA) proposed by Farhi et al. [1]. The QAOA involves switching between two Hamiltonians, with the number of switches being defined by a parameter called $p$, as well as an optimization process to control how long each Hamiltonian should be applied.
As stated in the original QAOA paper, the common belief is that $p$ controls the approximation ratio of QAOA, and so $p$ should be as large as possible before the circuit becomes too deep and is overwhelmed by hardware noise [2]. Indeed, Farhi et al. [1] were able to show that as $p \to \infty$, QAOA is able to achieve a perfect approximation ratio, since in that limit QAOA is as powerful as Adiabatic Quantum Computation [3, 4].
However, a recent paper from Shaydulin and Alexeev [5] gave evidence to the contrary, stating that the optimization of variational parameters is difficult at large $p$, and performance improvements at large $p$ are marginal when dealing with bounded computation in the optimization process. We provide further evidence of this. Inspired by Day et al. [6], we give data from a large-scale classical simulation of a modification to QAOA, which we call bang-bang QAOA, applied to the problem of MAX-2-SAT. While not necessarily practical for NISQ devices, this modification acts as a thought experiment to show that even in the case where $p$ is allowed to be fairly large, while the total time is instead bounded, one does not see large improvements with greater values of $p$. Similar to Shaydulin and Alexeev [5], we assert that this is because of a proliferation of local optima, making it difficult to find optima that are close to the global optima.
While we know that as $p \to \infty$ one can choose the QAOA parameters to correspond to a Trotterized Adiabatic Quantum Computation and achieve a perfect approximation ratio [1], in the finite-$p$ regime it is not fully understood whether or not the optimal parameters for QAOA should appear adiabatic [7-10]. In the bang-bang QAOA model, we see that when the total time is small, the best protocols do not appear adiabatic, but rather correspond to finite-$p$ implementations of standard QAOA. When the total time is large, the proliferation of local optima means that our optimization procedure depends strongly on the initialization. Though we cannot say much about any global optima, we do see that adiabatic initialization provides a good heuristic for finding better protocols.

II. QAOA

As its name suggests, the QAOA is a quantum algorithm for combinatorial optimization designed to find the set of inputs that approximately optimizes an efficiently computable objective function. At a high level, it does so by encoding this objective function along the diagonal of a Hamiltonian. The algorithm then tries to find a circuit that efficiently brings the state $|0\rangle^{\otimes n}$ as close as possible to the ideal state by applying two different Hamiltonians. We will first describe the standard QAOA as given by Farhi et al. [1], followed by our bang-bang QAOA modification. Note that there exist a wide variety of other interesting modifications to the standard QAOA [11-13].
A. Standard QAOA

Let $E = \sum_{x \in \{0,1\}^n} f(x)\,|x\rangle\langle x|$ be the Hamiltonian that encodes the objective function $f$ along its diagonal. $E$ will be referred to as the constraint Hamiltonian, while $X^{\otimes n} = (|0\rangle\langle 1| + |1\rangle\langle 0|)^{\otimes n}$ will be referred to as the mixing Hamiltonian. Now let $\beta_1, \ldots, \beta_p, \gamma_1, \ldots, \gamma_p$ be positive real parameters for QAOA with depth $p \geq 1$.¹ The state produced by QAOA is then
$$|\psi\rangle = e^{i\beta_p X^{\otimes n}} e^{i\gamma_p E} \cdots e^{i\beta_1 X^{\otimes n}} e^{i\gamma_1 E} H^{\otimes n} |0\rangle^{\otimes n},$$
where $H$ is the Hadamard operator. The $2p$ parameters are optimized based on the expectation value $\langle\psi|E|\psi\rangle$ in order to increase the chance of measuring a good input when $|\psi\rangle$ is measured in the computational basis.
The protocols are read from left to right in order of applying the Hamiltonians.

¹ The QAOA parameters are not always restricted to be positive, but the $\beta$ parameters are naturally periodic, and so too are the $\gamma$ parameters when (as will be the case here) the objective function takes on integer values.
B. Bang-Bang QAOA

A bang-bang control scheme is a system that switches abruptly between two different modes, and is an important part of optimal control theory [14]. Here, the two modes will be the application of the Hamiltonians $E$ and $X^{\otimes n}$, respectively. In order to explore the space of protocols computationally, we break up the total time $T$ into $N_b$ blocks of duration $T/N_b$ each, with each block assigned to one or the other Hamiltonian. A bang-bang QAOA protocol then involves iterating through the blocks, applying the corresponding Hamiltonian for time $T/N_b$. This simply involves applying either $e^{iET/N_b}$ or $e^{iX^{\otimes n}T/N_b}$, respectively. See Figure 1 for an example.
If one were to translate a bang-bang QAOA protocol into the language of the standard QAOA, the $p$ value of said protocol could be as large as $N_b/2$. However, the total amount of time is at most $T$. In addition, in the large-$N_b$ limit this bang-bang QAOA model can approximate any standard QAOA protocol such that $\sum_{i=1}^{p}(\beta_i + \gamma_i) \approx T$. Later on we will also argue that it is not worthwhile to consider protocols with large $T$ or large $\sum_{i=1}^{p}(\beta_i + \gamma_i)$ in the bang-bang and standard QAOA, respectively, due to the difficulty of optimization.
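As an illustration of this block structure, the evolution under a bang-bang protocol can be sketched as below. This is a minimal sketch, not the authors' code: a protocol is assumed to be a list of $\pm 1$ values, one per block, with $+1$ applying the (diagonal) constraint Hamiltonian $E$ and $-1$ the mixing Hamiltonian.

```python
import numpy as np
from scipy.linalg import expm

def evolve(protocol, T, E_diag, X_full, psi0):
    """Apply e^{iHT/N_b} for the Hamiltonian chosen in each block."""
    N_b = len(protocol)
    dt = T / N_b
    # E is diagonal, so its matrix exponential is elementwise.
    U_E = np.exp(1j * dt * E_diag)
    U_X = expm(1j * dt * X_full)  # dense exponential of the mixing Hamiltonian
    psi = psi0
    for bang in protocol:
        psi = U_E * psi if bang == 1 else U_X @ psi
    return psi
```

Since each block reuses one of only two precomputed unitaries, simulating a protocol costs $N_b$ matrix-vector products regardless of how many switches it contains.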

III. MAX-2-SAT
For boolean expressions, a conjunction is a logical AND, typically represented as $\wedge$; a disjunction is a logical OR, represented as $\vee$; and $\neg$ as a unary operator represents logical negation. Given boolean values $x_0, \ldots, x_{n-1}$, a $k$-CNF (Conjunctive Normal Form) formula is a conjunction over disjunctive clauses of size $k$. More intuitively, a $k$-CNF is an AND-of-ORs where each OR involves $k$ boolean values. 2-SAT is then the problem of determining if there exists an assignment of $x_0, \ldots, x_{n-1}$ such that a given 2-CNF is satisfied. The natural optimization version of the problem, MAX-2-SAT, is the problem of determining the maximum number of clauses satisfiable by an assignment of $x_0, \ldots, x_{n-1}$.

It is important to make the distinction between 2-SAT and MAX-2-SAT; 2-SAT is in P [15-17] while MAX-2-SAT is NP-hard [18]. Hardness-of-approximation results have shown that no Polynomial-Time Approximation Schemes (PTAS) exist for MAX-2-SAT with approximation ratios better than $\frac{21}{22} \approx 0.955$ [19] assuming P $\neq$ NP, and $\sim 0.943$ [20] when also assuming the Unique Games Conjecture. Here the approximation ratio of an algorithm refers to a guarantee that an algorithm with approximation ratio $r$, on a problem instance with optimal solution $C_{\max}$, achieves a result of at least $rC_{\max}$ (potentially only with high probability if randomized or quantum). There does, however, exist an efficient algorithm based on Semidefinite Programming that achieves an approximation ratio of 0.94 [21]. It is worth noting that a uniformly random assignment of literals will satisfy $\frac{3}{4}$ of the clauses in expectation; by the probabilistic method this also ensures that at least $\frac{3}{4}$ of the clauses are always satisfiable. Finally, we will show how to encode MAX-2-SAT as a Hamiltonian $E$.
Given a disjunctive clause of 2 literals, there is exactly one assignment of its variables that does not satisfy the clause. We can then design a diagonal Hamiltonian for the clause that is 1 for every assignment where the clause is satisfied. For instance, given the clause $C = (x_i \vee \neg x_j)$ where $i < j$, the only assignment that does not satisfy $C$ is $x_i = 0, x_j = 1$, where 1 is True and 0 is False. We can then define the Hamiltonian for $C = (x_i \vee \neg x_j)$ as
$$E_C = I - |0\rangle\langle 0|_i \otimes |1\rangle\langle 1|_j.$$
The diagonal of this Hamiltonian is then 1 for every computational basis state $x$ such that $x_i = 1$ or $x_j = 0$. The Hamiltonian of MAX-2-SAT is then the sum over the Hamiltonians induced by the clauses $\{C\}$ in the 2-CNF,
$$E = \sum_{C} E_C.$$
One can see that the diagonal encodes the number of clauses satisfied by the assignment of literals. The objective function of QAOA is the expected number of satisfied clauses, $\langle\psi|E|\psi\rangle$.
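For small $n$ the diagonal of $E$ can be tabulated classically. The sketch below is an illustration under an assumed clause representation, a tuple `(i, j, neg_i, neg_j)` meaning the disjunction of $x_i$ (negated if `neg_i`) and $x_j$ (negated if `neg_j`); the entry for basis state $|x\rangle$ counts the clauses satisfied by the bit assignment $x$.

```python
import numpy as np

def max2sat_diagonal(n, clauses):
    """Diagonal of the MAX-2-SAT constraint Hamiltonian over n variables."""
    diag = np.zeros(2 ** n)
    for idx in range(2 ** n):
        # bit k of idx is the value of variable x_k
        bits = [(idx >> k) & 1 for k in range(n)]
        for (i, j, neg_i, neg_j) in clauses:
            lit_i = bits[i] ^ neg_i   # literal value after optional negation
            lit_j = bits[j] ^ neg_j
            if lit_i or lit_j:        # the disjunction is satisfied
                diag[idx] += 1
    return diag
```

For example, the single clause $(x_0 \vee \neg x_1)$ over two variables yields a diagonal that is 0 only on the one unsatisfying assignment $x_0 = 0, x_1 = 1$.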
If $C_{\max}$ is the maximum number of satisfiable clauses, by linearity of expectation this leads to the expected approximation ratio
$$f_{\mathrm{obj}} = \frac{\langle\psi|E|\psi\rangle}{C_{\max}}.$$
While we cannot normally compute the approximation ratio directly without knowing $C_{\max}$, maximizing $\langle\psi|E|\psi\rangle$ will also maximize $f_{\mathrm{obj}}$, since the two are related by a constant factor.

IV. METHODS
In this section we outline preliminary information needed to understand our results. We will henceforth set the number of variables in our MAX-2-SAT instances to be 10. This is due to the dimension of the Hilbert space growing exponentially with the number of variables (i.e., qubits), which means that adding a single variable doubles the amount of computation needed.

A. Stochastic Descent
QAOA optimizes the quantum circuit in order to increase the probability of a good measurement. Given a bang-bang QAOA protocol $P$ that produces state $|\psi_P\rangle$, our objective function $f_{\mathrm{obj}}(P)$ will be the expected approximation ratio of the resulting state, as defined above. In the bang-bang QAOA our protocols fall into a discrete space. As such, we use the greedy randomized optimization approach, Stochastic Descent ($\mathrm{SD}_k$), introduced by Day et al. [6], with $k = 1$. Viewing bang-bang protocols as bit strings, the algorithm starts from a randomly initialized protocol, randomly iterates through all protocols of Hamming distance at most $k$ away from the current one, and updates itself to the first protocol it finds that performs better. If no such protocol is better, then we say that the current protocol is a $k$-local optimum, and it is returned. Note that the number of protocols that need to be considered at each update grows with $k$ as $\sum_{i=1}^{k}\binom{N_b}{i}$, which is nearly exponential for $k \leq N_b/2$. Thus, increasing $k$ quickly becomes very computationally expensive.
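The update loop can be sketched as follows. This is a hypothetical implementation of the $k = 1$ case following the description above (not the authors' Cirq/Beam code); `f_obj` is assumed to be any function scoring a protocol.

```python
import random

def stochastic_descent(protocol, f_obj, rng=random):
    """SD with k = 1: greedily accept the first improving single-block flip."""
    protocol = list(protocol)
    best = f_obj(protocol)
    improved = True
    while improved:
        improved = False
        order = list(range(len(protocol)))
        rng.shuffle(order)           # visit single-block flips in random order
        for i in order:
            protocol[i] *= -1        # flip block i between E and X^n
            score = f_obj(protocol)
            if score > best:         # accept the first improvement found
                best = score
                improved = True
                break
            protocol[i] *= -1        # revert the flip
    return protocol, best            # a 1-local optimum
```

Each outer iteration evaluates at most $N_b$ neighboring protocols, which is what makes $k = 1$ tractable while larger $k$ is not.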

B. Random Protocol Initialization
An interesting aspect of SD is the distribution from which the initial random protocol is drawn. The original algorithm proposed by Day et al. [6] samples uniformly at random. In this paper, we propose two new initializations to study the relation between bang-bang QAOA and Adiabatic Quantum Computation.
We define $X_q$ to be the random variable taking value 1 with probability $q$ and value $-1$ with probability $1 - q$ (with 1 corresponding to $E$ and $-1$ to $X^{\otimes n}$). Varying $q$ as a function of the block index generates three different random initialization methods:

Adiabatic: $q$ increases with the block index, favoring $X^{\otimes n}$ in early blocks and $E$ in late blocks of the protocol.

Uniform: $\prod_{i=1}^{N_b} X_{0.5}$; the probabilities of $X^{\otimes n}$ and $E$ being sampled are equal. This is the default distribution used in this paper.

Anti-adiabatic: $q$ decreases with the block index, favoring $E$ in early blocks and $X^{\otimes n}$ in late blocks of the protocol.
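The three initializations can be sketched as below. The exact schedule for $q$ is an assumption (the extracted text does not preserve it); a linear ramp across the blocks is shown here purely for illustration.

```python
import random

def random_protocol(N_b, mode, rng=random):
    """Sample an initial protocol of +/-1 values (+1 = E, -1 = X^n)."""
    protocol = []
    for i in range(1, N_b + 1):
        if mode == "uniform":
            q = 0.5
        elif mode == "adiabatic":
            q = i / N_b            # assumed linear ramp toward E
        elif mode == "anti-adiabatic":
            q = 1 - i / N_b        # mirrored ramp toward X^n
        else:
            raise ValueError(mode)
        protocol.append(1 if rng.random() < q else -1)
    return protocol
```

Any monotone schedule for $q$ would produce the same qualitative bias toward $X^{\otimes n}$ early and $E$ late in the adiabatic case.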

C. Correlator 2
Given a set $S$ of protocols with $N_b$ blocks, we define the correlator of $S$ as follows. View a protocol as a collection of values $P \in \{-1, 1\}^{N_b}$, where $P_i$ refers to the value at block $i$. Let $\overline{P_i} = \frac{1}{|S|}\sum_{P \in S} P_i$ represent the empirical average of block $i$ over all protocols in the set $S$. The correlator is defined as the block-averaged variance of the protocol values,
$$\frac{1}{N_b}\sum_{i=1}^{N_b}\left(1 - \overline{P_i}^2\right).$$
A small correlator means the protocols in $S$ are similar to each other.
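The correlator is a one-line computation; the sketch below follows the reconstructed definition above (the original equation was lost in extraction, so the normalization is an assumption consistent with the limits discussed in the text: identical protocols give 0, uncorrelated protocols give values near 1).

```python
import numpy as np

def correlator(protocols):
    """Block-averaged variance of +/-1 protocol values over a set S."""
    P = np.asarray(protocols, dtype=float)   # shape (|S|, N_b)
    means = P.mean(axis=0)                   # empirical mean per block
    return float(np.mean(1.0 - means ** 2))
```

Since each $P_i$ is $\pm 1$, $1 - \overline{P_i}^2$ is exactly the empirical variance at block $i$.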

D. Protocol Smoothing
It is also important to analyze the actual structure of bang-bang QAOA protocols after $\mathrm{SD}_1$. While one could plot the protocols themselves as $\{-1, 1\}$ values along a time scale, this does little to show how a protocol may favor one Hamiltonian over the other at different points in time. As such, we also opt to smooth the protocols by taking a rolling average. In addition, this smoothing allows us to properly see how close many of these bang-bang QAOA protocols are to being standard QAOA protocols for small total time by smoothing over minor deviations.
More formally, let $w$ be a positive integer known as the window size. Similar to the correlator, we will view bang-bang QAOA protocols as $P \in \{-1, 1\}^{N_b}$, where $P_i$ refers to the value at block $i$. We then define the smoothed protocol $\tilde{P}$ by
$$\tilde{P}_i = \frac{1}{|W_i|}\sum_{j \in W_i} P_j,$$
where $W_i$ is the window of (at most) $w$ blocks centered at block $i$, truncated at the protocol boundaries. Note that $w = 1$ recovers the original protocol.
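The rolling average above can be sketched as follows. The boundary handling is an assumption (the window is truncated rather than padded), matching the reconstructed definition.

```python
import numpy as np

def smooth(protocol, w):
    """Rolling average of a +/-1 protocol with a centered window of size w."""
    P = np.asarray(protocol, dtype=float)
    half = (w - 1) // 2
    out = np.empty_like(P)
    for i in range(len(P)):
        lo, hi = max(0, i - half), min(len(P), i + half + 1)
        out[i] = P[lo:hi].mean()   # truncated window at the boundaries
    return out
```

With $w = 1$ the window contains only block $i$, so the protocol is returned unchanged, as used for the unsmoothed panels in the figures.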

E. Problem Instances
The 2-CNFs used were constructed by randomly generating a clause with two unique indices drawn uniformly, as well as whether or not to negate each variable. Several of these clauses are then independently created, with the number of random clauses $n_c$ being a parameter specified at runtime. Note that it is possible that two identical clauses are created, and by the Birthday Paradox we expect this to happen when $n_c \gtrsim n$. While this is the regime in which we create our problem instances, one can simply repeat the process an expected constant number of times until success. We ensure that there are no identical clauses in our problem instances. See Appendix A for the actual problem instances used, with 10, 20, and 30 clauses respectively. For clarity, we focus on the 10-clause problem instance in the following results section.

² Intuitively, the correlator is really an "anti-correlator," as its value is small when the protocols are similar. We choose to keep the same name as Day et al. [6] for consistency.
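The generation procedure can be sketched as below, a hypothetical implementation using the same assumed clause representation `(i, j, neg_i, neg_j)` as earlier; duplicates are rejected by set membership, matching the guarantee that no two clauses are identical.

```python
import random

def random_instance(n, n_c, rng=random):
    """Generate n_c distinct random 2-CNF clauses over n variables."""
    clauses = set()
    while len(clauses) < n_c:
        i, j = rng.sample(range(n), 2)   # two unique variable indices
        if i > j:
            i, j = j, i                  # canonical variable order
        clause = (i, j, rng.random() < 0.5, rng.random() < 0.5)
        clauses.add(clause)              # set membership rejects duplicates
    return sorted(clauses)
```

Because the number of possible clauses grows quadratically in $n$, the rejection loop terminates quickly even in the $n_c \gtrsim n$ regime used here.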

V. RESULTS
In Figure 2 we can see how the protocols drawn uniformly at random perform without being optimized with SD (in grey) as compared to $\mathrm{SD}_1$, with a substantial increase in expected approximation ratio. Even with $\mathrm{SD}_1$ it is easy to see the benefit of a greedy optimization strategy for bang-bang QAOA. Interestingly, without $\mathrm{SD}_1$, protocols perform worse than the naive classical algorithm of a uniformly random assignment of variables, which achieves at least a $\frac{3}{4}$ approximation ratio in expectation.
A. Small Time Regime

We will refer to the small time regime as $T \lesssim 6$, though this value is likely problem-instance specific. The important feature of Figure 2 in this regime is the rapid increase in median expected approximation ratio around $T \approx 1.5$ and $T \approx 3.5$, which we refer to as a phase transition in performance. We attribute this to the minimal total time needed for protocols to start enacting non-trivial behavior, corresponding to $\mathrm{SD}_1$ converging on a $p \approx 2$ and $p \approx 3$ protocol, respectively, when viewed as a standard QAOA protocol.
We can see from Figures 3a and 3b that at very small time the protocols only apply each Hamiltonian once. Then as the total time increases, the protocols transition into two switchbacks, as seen in Figures 3d and 3e, causing the median expected approximation ratio to increase substantially. This again repeats with $p \approx 3$, as in Figure 3f; however, the increase in median expected approximation ratio is not as great as before.
Looking at the correlator in Figure 2, at $T = 0$ it starts off around 1, since every starting protocol is a local optimum with a uniform probability of being selected. As $\mathrm{SD}_1$ begins to optimize towards specific protocols, the value quickly drops, as there are only a few, very similar local optima. When transitioning to a new local optimum, the correlator temporarily spikes, as there is a mixture of protocols, as with Figure 3f. Once the transition has finished, the correlator quickly decreases again. However, there is a general trend towards protocols becoming uncorrelated as the number of local optima increases with total time.

B. Large Time Regime
It is at large time that increasing the total time is no longer as beneficial for the global optima, and the number of local optima starts to increase rapidly. Here, despite the fact that the best protocols continue to do marginally better at large $T$, looking carefully at Figure 2 the median trends downward. We believe this is because the local optima are no longer close to the global optima, as it becomes more and more difficult to find better optima using $\mathrm{SD}_1$. This tells us that, within the realm of greedily optimized bang-bang QAOA, there is more than enough time for a near-optimal protocol, and any extra total time contributes extra degrees of freedom that make optimization more difficult. This becomes even more apparent as the number of clauses increases and the median protocol begins to quickly fall off as total time increases.
Looking at the protocols in Figure 4c without smoothing, there is no discernible profile in how the protocols relate to their expected approximation ratio. However, we do see in Figure 4d that the protocols tend to favor the constraint Hamiltonian and appear neither adiabatic nor anti-adiabatic. Together with Figures 4b and 4f, we find that protocols remain qualitatively similar to their initialization.

C. Summary
Below a certain total time $T$, no bang-bang QAOA protocol does well, since the resulting unitary of the circuit will still be close to the identity. After a certain point, a select few protocols start exhibiting non-trivial behavior, which is then found even by $\mathrm{SD}_1$, and the protocols transition to a much better expected approximation ratio. As $T$ increases, there begins to be excess time from which the protocols cannot benefit, leading to an increase in local optima. Then at some point $T$ becomes large enough to allow for another set of non-trivial behavior, and this process continues until the transition to large time. At this point $T$ becomes too large and it becomes too difficult to find a solution near the global optima. The protocols then start exhibiting less structure and become very different from each other quantitatively, based on the correlator, but qualitatively do not deviate far from their initialization.

D. 100 vs 200 blocks
In Figure 5 we illustrate how the number of blocks $N_b$ affects the expected approximation ratio of bang-bang QAOA. While the shapes of both graphs are very similar, one can see between Figures 5a and 5b that $N_b = 200$ tends to give better expected approximation ratios, especially when the total time $T$ becomes large. Additionally, the correlator dips lower around the phase transitions, indicating that the protocols actually concentrate better with larger $N_b$ around the phase transitions, despite the fact that there are exponentially more protocols available as $N_b$ increases.
E. Multiple Problem Instances

It is of course important to analyze more than a single problem instance. In Figure 6, we find that the overall behavior remains relatively consistent between the problem instances with 10, 20, and 30 clauses, respectively. More specifically, we see that the median expected approximation ratio increases in jumps in the small time regime, before trailing off at large time. This decay in the median expected approximation ratio is especially pronounced in Figure 6c. Similarly, the correlator moves up and down in the small time regime, though it increases to a value of nearly 1 as time increases.

F. Effects of Random Initialization
Looking at Figure 7, all initializations perform similarly at small $T$. This is to be expected, as there are few local optima, so the initialization only affects the starting distance to the global optimum. However, at large $T$, Figure 8 shows us that the uniform random initialization tends to do poorly with respect to the median protocol. The adiabatic initialization, however, tends to consistently do well at large $T$, while the anti-adiabatic initialization exhibits large variance in its median expected approximation ratio over time (see Section IV B for the definitions of adiabatic and anti-adiabatic initialization). As $T$ becomes large, the adiabatic theorem, the driving force behind Adiabatic Quantum Computation [3], starts becoming relevant. If intuition from the adiabatic theorem and Adiabatic Quantum Computation extends to the local optima found by $\mathrm{SD}_1$, the local optima found around the adiabatic initialization are then likely to perform better on average than those initialized from uniform or anti-adiabatic distributions.
Further evidence of this can be seen by once again examining the protocols themselves. Looking at Figure 4, we see that the randomly drawn protocols from each initialization are very similar to their expected starting protocol. Finally, if we instead look at the best protocols of each initialization, Figure 9 shows that the best protocols appear qualitatively more adiabatic: adiabatic initialization leads to strongly adiabatic protocols, anti-adiabatic becomes largely uniform, and uniform random initialization slightly favors adiabaticity. Thus even though the local optima themselves seem to be unbiased, the best protocols seem to be found in the space around qualitatively more adiabatic protocols than the initialization.

G. Iterations Plots
As a final demonstration of the difficulty of finding the globally optimal protocol as the number of local optima increases with $T$, we examine the average number of iterations $\mathrm{SD}_1$ needs to find a local optimum. Looking at Figure 10, we can see that the number of iterations needed increases at the first phase transitions, before decaying as $T$ increases. This is true for all three initializations. What this effectively means is that the distance from a random protocol to its nearby local optima decreases with $T$, regardless of the starting position.

VI. CONCLUSION
Ultimately, it is not clear that bang-bang QAOA should be used in practice with NISQ devices. As stated in Section II B, the depth of the circuit can potentially be as large as $N_b/2$, which is exactly what the choice $p = O(1)$ in standard QAOA is meant to avoid. However, as a thought experiment on the value of the $p$ parameter itself, this serves as further evidence that larger $p$ values are not necessary to achieve the best approximation ratios when the optimization process is limited to bounded computation. We see that while a minimal amount of time is needed for bang-bang QAOA protocols to achieve non-trivial approximation ratios, they fail to substantially improve the median expected approximation ratio at larger $T$. It is also not clear from the data alone how good an approximation ratio one can get using bang-bang QAOA efficiently.
Due to the nature of classically simulating quantum mechanics, collecting data is incredibly time intensive even with parallelization of sample collection. For example, because $\mathrm{SD}_k$ takes time exponential in $k$ for small $k$, we were restricted to $\mathrm{SD}_1$. Additionally, the number of variables was only set to 10, creating very small MAX-2-SAT instances. It will be interesting to see if these behaviors remain the same with larger problem instances and/or using $\mathrm{SD}_k$ for $k > 1$. Additionally, though we examine $N_b = 100$ and $N_b = 200$ in Figure 5, there is not currently enough data to draw strong conclusions about the relationship between $N_b$ and performance.
Another consideration is that various other modifications to QAOA, such as that of Li et al. [11], which modifies the objective function, can be combined with bang-bang QAOA. One compelling modification could involve the ability to apply a Hamiltonian for negative time, corresponding to negative $\{\beta_i\}$ and $\{\gamma_i\}$ parameters in standard QAOA, such that the total time becomes $T = \sum_i |\beta_i| + |\gamma_i|$ [22]. Changes to SD would be necessary, such as redefining the distance metric between protocols beyond Hamming distance, as well as preventing the cancellation of Hamiltonians. How these modifications work in tandem with bang-bang QAOA may lead to interesting phenomena that could potentially yield a more practical algorithm.

[…] family of companies, which includes Google, Verily, Waymo, and others (www.x.company). Quantum simulation and SD in this paper were implemented using Cirq [23] and Apache Beam [24].

Figure 2 :
Figure 2: The top panel shows aggregate statistics on the expected approximation ratio with respect to total time $T$; percentiles are given respectively. 10,000 protocols are sampled per time step with $N_b = 200$ blocks. The filled grey region represents protocols without $\mathrm{SD}_1$ applied. The initial protocols are sampled uniformly at random. The bottom panel contains the corresponding correlator. The vertical lines and thumbnails indicate 10 randomly sampled protocols found by $\mathrm{SD}_1$ at the corresponding total times. More details of the profiles of protocols are in Figures 3 and 4. In the text we discuss the features of this figure, such as the rapid increase in median expected approximation ratio at specific times and the increase in the correlator over time.

Figure 3 :
Figure 3: Illustrates how bang-bang QAOA protocols converge towards standard QAOA protocols with small $p$ values for $N_b = 200$ and 10 clauses with uniform random initialization, drawing 10 protocols uniformly at random with a window size $w = 1$. (a) $T = 0.5$ and (b) $T = 1.0$ are before the first transition point and are similar to standard QAOA with $p = 1$. (c) $T = 1.5$ is during the transition, where the rapid increase in the correlator is due to the mixture of two kinds of protocols. (d) $T = 2.2$ is after the transition, where protocols resemble $p = 2$. Likewise, (e) $T = 3.0$ is before the second transition, (f) $T = 3.5$ is during, and (g) $T = 4.2$ is after. (h) $T = 6.0$ is at the point where the median expected approximation ratio begins to plateau. The colormap is based on the expected approximation ratios of the protocols, and the numbers on the right indicate their corresponding values. See Figure 2 for the transition points.

Figure 4 :
Figure 4: Demonstrates the profiles of local optima at large time ($T = 9.5$) with (a)(b) adiabatic, (c)(d) uniform, and (e)(f) anti-adiabatic initialization for $N_b = 200$ and 10 clauses. Protocols are sampled randomly and smoothed with window sizes $w = 1$ and $w = 50$, respectively. Within each sub-figure, there is no qualitative difference in the profiles between the best and worst protocols, and no strong underlying standard-QAOA-like structure behind the protocols as in Figure 3. Additionally, the shape of the smoothed protocols being similar to the expected initial protocol even after $\mathrm{SD}_1$ indicates that local optima can be found with qualitatively different protocols. The colormap is based on the expected approximation ratios of the protocols, and the numbers on the right indicate their corresponding values. Note that the same 10 protocols are shown with different smoothing for each initialization. Initialization is explained in Section IV B.

Figure 5 :
Figure 5: (a) 100 vs (b) 200 blocks are compared side-by-side for the same problem instance containing 10 clauses.With 200 blocks one sees slightly better median expected approximation ratios.

Figure 6 :
Figure 6: Expected approximation ratio and the correlator when run on the (a) 10-clause, (b) 20-clause, and (c) 30-clause problem instances. 10,000 protocols are sampled per time step with $N_b = 200$ blocks. Grey plots represent protocols without $\mathrm{SD}_1$ applied. The initial protocols are sampled uniformly at random. Because the ratio of satisfiable clauses tends to decrease as the total number of clauses increases, the baseline of random guessing performs better, as it always satisfies $\frac{3}{4}$ of all clauses in expectation.

Figure 7 :
Figure 7: Expected approximation ratio and the correlator when using (a) adiabatic, (b) uniform, and (c) anti-adiabatic initialization. The probability distribution from which the initial protocol is drawn has minor effects at small $T$ and more pronounced effects at large $T$ on how the protocols perform, even after $\mathrm{SD}_1$. Here 10 clauses are used with $N_b = 200$.

Figure 8 :
Figure 8: Comparison of the median expected approximation ratio of protocols based on initialization. (a) 10-clause, (b) 20-clause, and (c) 30-clause problem instances are presented. Adiabatic initialization tends to consistently do well even at large total time, as opposed to uniform random, which tends to drop off. Anti-adiabatic initialization leads to high variance in the median expected approximation ratio.

Figure 9 :
Figure 9: Top ten bang-bang QAOA protocols using (a) adiabatic, (b) uniform, and (c) anti-adiabatic initialization at large time ($T = 9.5$), smoothed with window size $w = 50$ for $N_b = 200$ with 10 clauses. The colormap is based on the expected approximation ratios of the protocols, and the numbers on the right indicate their corresponding values. Since 1 corresponds to the objective Hamiltonian, an adiabatic protocol will gradually increase in value.

Figure 10 :
Figure 10: Shows the average number of iterations of $\mathrm{SD}_1$ needed before a local optimum is found on the (a) 10-clause, (b) 20-clause, and (c) 30-clause problem instances. As the number of local optima increases with total time, it becomes easier to find one, so the number of iterations quickly decreases.
Appendix A: Problem Instances Used in This Paper

Notation: $\vee$ represents a disjunction, $\wedge$ represents a conjunction, and $\neg$ represents logical negation.