A Grand Unification of Quantum Algorithms

Quantum algorithms offer significant speedups over their classical counterparts for a variety of problems. The strongest arguments for this advantage are borne by algorithms for quantum search, quantum phase estimation, and Hamiltonian simulation, which appear as subroutines for large families of composite quantum algorithms. A number of these quantum algorithms were recently tied together by a novel technique known as the quantum singular value transformation (QSVT), which enables one to perform a polynomial transformation of the singular values of a linear operator embedded in a unitary matrix. In the seminal GSLW'19 paper on QSVT [Gilyén, Su, Low, and Wiebe, ACM STOC 2019], many algorithms are encompassed, including amplitude amplification, methods for the quantum linear systems problem, and quantum simulation. Here, we provide a pedagogical tutorial through these developments, first illustrating how quantum signal processing may be generalized to the quantum eigenvalue transform, from which QSVT naturally emerges. Paralleling GSLW'19, we then employ QSVT to construct intuitive quantum algorithms for search, phase estimation, and Hamiltonian simulation, and also showcase algorithms for the eigenvalue threshold problem and matrix inversion. This overview illustrates how QSVT is a single framework comprising the three major quantum algorithms, thus suggesting a grand unification of quantum algorithms.


I. INTRODUCTION
Algorithms solve problems by presenting a process or set of rules to be followed, built from a provided set of basic building blocks. Computer science traditionally employs Boolean circuit components as these basic blocks, from which standard arithmetic operations may be composed as Boolean functions. Quantum computation employs a different set of basic blocks, typically unitary operations on one or more two-state systems (qubits), to realize quantum circuits. A fundamental challenge arises, however, when seeking to unite the world of quantum circuits with that of Boolean functions: in general, Boolean functions need not be reversible, whereas quantum circuits are manifestly unitary transforms, and must thus be invertible.
Early in the history of quantum computation, this barrier was transcended by seminal work [1] showing that all Boolean functions can be made reversible, with only a small overhead in space and time. Toffoli and Fredkin famously illustrated this idea by showing how the ideal Newtonian dynamics of finite-radii spheres (billiard balls) can be used to simulate reversible Boolean circuits, via their collisions [2]. Following this concept, simulation of arbitrary Boolean functions can also be accomplished using quantum circuits, by first embedding the desired function into a reversible Boolean circuit, then constructing a quantum circuit realizing this invertible transform. Such an embedding is a core part of Shor's quantum factoring algorithm [3], for example, as used in the modular exponentiation of an input number.
Intriguingly, however, the two other major "primordial" quantum algorithms, Grover's quantum search algorithm [4], and the Hamiltonian simulation algorithm [5,6], do not employ an embedding of a reversible Boolean function. In fact, a key part of the quantum factoring algorithm is its use of the quantum Fourier transform, which has no direct classical analogue, in the sense that it is not at all like a quantum embedding of a reversible Boolean function for the Fourier transform. And yet, all three of these algorithms provide solutions to problems with clear classical counterparts, and attain known speedups over the comparable classical "Boolean function" approaches. So wherein lies the ability of quantum algorithms to address and speed up the solution to a classically specified problem?
As illustrated in this tutorial, a key idea in uniting quantum and classical computation is not to first make classical computation reversible. Instead, observe that the dynamical behavior of a subsystem of a quantum system can be non-unitary, and thus can directly realize irreversible, non-linear functions. An extreme case of this is projective measurement: the billiard ball model can realize non-invertible gates simply by discarding balls, but this would be inefficient. More constructively, the recently developed framework of quantum signal processing (QSP) [7,8] provides a systematic method to make a quantum subsystem transform under nearly arbitrary polynomial functions of degree d, using O(d) elementary unitary quantum operations. Crucially, the polynomial describes not the output of the full quantum system, but only a very specific and clearly identified subsystem. And remarkably, the essential ideas behind QSP originate from the early days of practical control of two-level quantum systems, with nuclear magnetic resonance [9-11].

arXiv:2105.02859v5 [quant-ph] 10 Dec 2021
With this framework, we present in this tutorial a pedagogical overview of the modern approach to quantum search, factoring, and simulation, focusing on how all three of these central quantum algorithms may be unified as instances of the recently developed quantum singular value transform (QSVT) algorithm [12]. The QSVT algorithm generalizes QSP and efficiently applies a polynomial transformation to the singular values of a linear operator (governing a particular subsystem) embedded in a larger unitary operator. And more recently, such a singular value transformation has been generalized to apply to an operator embedded in a block of a Hamiltonian [13].
Singular values naturally arise in this context from the fact that the input and output spaces of the embedded linear operator may be of different sizes. The polynomial transformation of the singular values is achieved by applying a specific sequence of SU (2) rotations to the embedded subspace, where each rotation is parameterized by an angle φ k ∈ R. The QSVT algorithm is parametric in that the polynomial transformation is completely characterized by the choice of phase angles {φ k }. Moreover, given the desired polynomial transformation, the QSP phase angles which generate it may be classically efficiently and stably computed [7,12,14].
This seemingly simple parameterization endows QSVT with immense flexibility and power. Using QSVT as a subroutine, we present quantum algorithms for the search problem and phase estimation, and give simplified arguments for optimal Hamiltonian simulation by QSVT and matrix inversion by QSVT. We also present a QSVT-based algorithm for the eigenvalue threshold problem, wherein one wishes to know if a (normal) matrix has an eigenvalue above some threshold, and use this to simplify the presentation of the use of QSVT for phase estimation. Each algorithm is realized simply as an instance of QSVT with a natural oracle, adaptively repeated and interspersed in a common pattern with simple quantum gates and measurements.
This common pattern is particularly fascinating. At face value, the algorithms for the search problem, factoring (aka phase estimation), and Hamiltonian simulation appear to share no similar structures, owing their quantum speedups to different sources, and yet they can all be derived from a single algorithmic primitive, and interpolated between by a simple change of parameters. In addition, these three central algorithms form the foundation for most quantum algorithms currently known. For instance, the Harrow-Hassidim-Lloyd algorithm for linear systems [15] incorporates Hamiltonian simulation and phase estimation in order to invert a Hermitian matrix, and similarly the quantum counting algorithm integrates quantum search with phase estimation to count the number of marked elements in an unstructured set [16]. As shown in this tutorial, by simply adjusting the parameters of QSVT, one can construct nearly all known quantum algorithms. It is in this sense that QSVT provides a grand unification of quantum algorithms.
While some of these applications have been covered in recent works on QSVT [12,17], here we aim to present these constructions as pedagogically as possible, providing detailed procedures and intuition for each, and including explicit examples to support abstract ideas where appropriate. We also supply performance bounds and resource requirements for each of the algorithms presented here, which we anticipate will be helpful. It is our hope that this presentation will make QSVT and QSP more accessible, catalyzing future developments in quantum algorithms.
Throughout this tutorial, we will assume familiarity with basic concepts in quantum computing, such as unitary dynamics and measurement, as well as the conventional quantum algorithms for search, phase estimation, and Hamiltonian simulation. For a comprehensive review of these subjects, see [18-20]. Further, we aim to present this work in a manner accessible to readers without prior knowledge of QSP and QSVT, but if more information on these topics is needed, [7,12,21,22] may serve as helpful references.

A. Road Map
We share the story of this grand unification of quantum algorithms by first surveying the development of QSVT in Section II, beginning with quantum signal processing and then demonstrating how this technology leads to quantum eigenvalue transforms and ultimately the quantum singular value transformation. Thereafter, we detail a statement of grand unification for currently known quantum algorithms, presenting QSVT-based algorithms for the search problem and the eigenvalue threshold problem in Secs. III and IV, respectively. Further fulfilling the unification promise, we introduce an algorithm for phase estimation by QSVT in Section V, and show how the quantum Fourier transform emerges from cascading QSVT sequences. We then highlight in Section VI how QSVT can yield intuitive algorithms for Hamiltonian simulation and matrix inversion. Lastly, in Section VII, we explore the implications of these results and discuss areas of future research in the utility of QSVT.

II. FROM QSP TO QSVT
In this section, we develop the framework of QSVT, starting with quantum signal processing and then providing a concrete application of this technology to amplitude amplification. Building off this example, we establish quantum eigenvalue transforms and finally quantum singular value transforms. Overall, this section is meant to be pedagogical and easily accessible to readers with a background in basic quantum circuits and linear algebra, as the constructions of this section underlie the remainder of the work. For more details on QSP and QSVT, including alternate introductions to the topics, rigorous proofs, and applications not covered here, see [7,12,21,22].

A. Quantum Signal Processing
Quantum signal processing (QSP) generalizes the results of composite pulse sequences [7,8,23], and is built on the idea of interleaving two kinds of single qubit rotations: a signal rotation operator W , and a signal processing rotation operator S. These rotation operations are about different axes through the Bloch sphere, e.g. commonly W is an x-rotation, while S is a z-rotation. Moreover, the signal rotation always rotates through the same angle θ, whereas the signal processing rotation rotates through a variable angle according to some predetermined sequence.
For example, let the signal rotation operator be

  W(a) = (     a       i√(1−a²) )
         ( i√(1−a²)       a     ),      (1)

which is an x-rotation by angle θ = −2 cos⁻¹(a). And let the signal processing rotation operator be

  S(φ) = e^{iφZ},      (2)

which is a z-rotation by angle −2φ. For a tuple of phases φ = (φ₀, φ₁, ..., φ_d) ∈ ℝ^{d+1}, and using these conventions¹, the QSP operation sequence U_φ is defined as

  U_φ = e^{iφ₀Z} ∏_{k=1}^{d} W(a) e^{iφ_k Z}.      (3)
What is interesting is how QSP can modify the incoming signal. Suppose φ = (0, 0), i.e. there is no processing, such that U_φ = W(a) is just the unchanged signal. If we plot the probability of a |0⟩ qubit input staying unchanged under this operation, i.e. p = |⟨0|U_φ|0⟩|², as a function of θ, we find a nice cosinusoid (the lower plot in Figure 1), because for this case p = cos²(θ/2). Now if we do some signal processing, by letting φ = (π/2, −η, 2η, 0, −2η, η), where η = ½ cos⁻¹(−1/4), then for the new U_φ using these phases, we find the new probability p = |⟨0|U_φ|0⟩|² shown in the upper plot of Figure 1. This has the nice property that the qubit remains unflipped for a wide range of signal angles, but then a sharp transition happens around θ ≈ 2π/3. This increases sensitivity to specific values of the signal!

¹ The convention presented in Eq. (3) may be called the Wx convention. See Appendix A for other conventions.
FIG. 1. The transition probabilities for |0 → |0 after the application of exp(iθX/2) (uniform dashed) and the QSP sequence for the BB1 protocol (dotted dashed). The latter transition probability is seen to be near-one for a wider range of θ.
Thus, sequences like this are widely employed in magnetic resonance imaging to increase image contrast. This particular sequence is famous in the field of NMR, and is known as the "BB1" pulse sequence [11]. Many such "composite pulse" sequences are known [9,24,25], with a wide range of variations, and in experimental quantum computation, they serve to suppress specific kinds of errors and enhance sensitivity to specific kinds of signals, across implementations ranging from quantum dots [26] and nitrogen vacancy centers in diamonds [27], to trapped ions [28][29][30] and superconducting qubits [31].
That QSP sequences can have such signal transformation properties is well known, because in general the matrix element P(a) = ⟨0|U_φ|0⟩ becomes a polynomial in a, with the degree of the polynomial being at most the length of the sequence of QSP phases φ. Specifically, for example, for φ = (0, 0), P(a) = a; for φ = (0, 0, 0), P(a) = 2a² − 1; and for φ = (0, 0, 0, 0), P(a) = 4a³ − 3a. These are the Chebyshev polynomials of the first kind, T_d(a).
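These claims are easy to check by direct matrix multiplication. Below is a minimal numerical sketch in the Wx convention of Eq. (3); the helper names are ours, not from the accompanying code package:

```python
import numpy as np

def W(a):
    # Signal rotation W(a): an x-rotation with cos(theta/2) = a, as in Eq. (1)
    b = np.sqrt(1 - a**2)
    return np.array([[a, 1j*b], [1j*b, a]])

def rz(phi):
    # Signal processing rotation e^{i phi Z}, as in Eq. (2)
    return np.diag([np.exp(1j*phi), np.exp(-1j*phi)])

def qsp(phases, a):
    # U_phi = e^{i phi_0 Z} prod_{k=1}^{d} W(a) e^{i phi_k Z}, as in Eq. (3)
    U = rz(phases[0])
    for phi in phases[1:]:
        U = U @ W(a) @ rz(phi)
    return U

# Trivial phases reproduce the Chebyshev polynomials T_d(a) = cos(d arccos a):
for d in (1, 2, 3):
    for a in np.linspace(-1, 1, 9):
        assert np.isclose(qsp([0.0]*(d + 1), a)[0, 0],
                          np.cos(d*np.arccos(a)))

# The BB1 phases from the text produce a flat-topped response:
eta = 0.5*np.arccos(-0.25)
bb1 = [np.pi/2, -eta, 2*eta, 0.0, -2*eta, eta]
p = lambda theta: abs(qsp(bb1, np.cos(theta/2))[0, 0])**2
```

Here p(θ) reproduces the BB1 curve of Figure 1: p(0) = 1, p(π) = 0, and for small θ the response stays above the bare cos²(θ/2) cosinusoid.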
Perhaps the most remarkable property of the QSP sequence of Eq. (3), however, is the reverse of this statement: it turns out that for a given polynomial P(a) (subject to some reasonable constraints), there exists a set of QSP phase angles φ such that P(a) = ⟨0|U_φ|0⟩. Specifically [7]:

Theorem 1 (Quantum Signal Processing). The QSP sequence U_φ produces a matrix which may be expressed as a polynomial function of a:

  U_φ = e^{iφ₀Z} ∏_{k=1}^{d} W(a) e^{iφ_k Z}
      = (      P(a)          iQ(a)√(1−a²) )
        ( iQ*(a)√(1−a²)          P*(a)    ),      (4)

for a ∈ [−1, 1], and a φ exists for any polynomials P, Q in a such that:

  (i) deg(P) ≤ d and deg(Q) ≤ d − 1,
  (ii) P has parity d mod 2 and Q has parity (d − 1) mod 2,
  (iii) |P(a)|² + (1 − a²)|Q(a)|² = 1 for all a ∈ [−1, 1].

The forward part of this theorem is easily proven by induction, starting from the d = 0 case, for which P = e^{iφ₀} and Q = 0. The reverse direction of this theorem is more involved, and can be proven in a number of ways, including an elegant interpretation involving Laurent polynomial algebras [32,33].
Often, however, we are interested not in the unitaries that can be constructed with QSP but rather in the achievable polynomial transformations of the input, Poly(a), in a subsystem. If, as above, we choose Poly(a) = ⟨0|U_φ|0⟩ = P(a), we are limited to P for which there exists a polynomial Q satisfying the conditions of Theorem 1. This can be quite a limiting condition for some applications. For example, for a = ±1, W(±1) is proportional to the identity matrix and the entire QSP sequence collapses to a single z-rotation, limiting us to polynomials Poly(a) such that |Poly(±1)| = 1. This limitation can be overcome by instead defining Poly(a) = ⟨+|U_φ|+⟩ = Re(P(a)) + i Re(Q(a)) √(1−a²). In this case, it can be shown that we can accurately approximate any real polynomial with parity d mod 2 such that deg(Poly) ≤ d and |Poly(a)| ≤ 1 for all a ∈ [−1, 1]. This can be achieved by selecting an appropriate P whose real part approximates the desired function, and a Q with small real component. With this convention, the set of achievable polynomials is sufficiently expressive for all of the applications described in this tutorial. In general, the basis employed for the desired polynomial may be called the signal basis, and unless otherwise specified, below we take this basis to be {|+⟩, |−⟩}. Appendix A elaborates on this, and discusses the different conventional statements of the QSP Theorem 1.
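Two of these properties, the boundedness |⟨+|U_φ|+⟩| ≤ 1 and the parity d mod 2 of its real part, can be spot-checked numerically for random phase choices. A small sketch under the same Wx convention (helper names are ours):

```python
import numpy as np

def W(a):
    # Signal rotation of Eq. (1)
    b = np.sqrt(1 - a**2)
    return np.array([[a, 1j*b], [1j*b, a]])

def rz(phi):
    # e^{i phi Z}
    return np.diag([np.exp(1j*phi), np.exp(-1j*phi)])

def qsp(phases, a):
    # The QSP sequence U_phi of Eq. (3)
    U = rz(phases[0])
    for phi in phases[1:]:
        U = U @ W(a) @ rz(phi)
    return U

plus = np.array([1.0, 1.0])/np.sqrt(2)

def poly(phases, a):
    # Poly(a) = <+|U_phi|+> in the signal basis {|+>, |->}
    return plus @ qsp(phases, a) @ plus

rng = np.random.default_rng(7)
d = 5
phases = rng.uniform(-np.pi, np.pi, d + 1)
for a in np.linspace(-0.99, 0.99, 11):
    f = poly(phases, a)
    assert abs(f) <= 1 + 1e-12                                  # bounded on [-1, 1]
    assert np.isclose(f.real, (-1)**d * poly(phases, -a).real)  # parity d mod 2
```

The boundedness holds because ⟨+|U_φ|+⟩ is a matrix element of a unitary; the parity check uses the exact identity W(−a) = −Z W(a) Z.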
While Theorem 1 guarantees the existence of such a φ, it does not provide a method for determining it. Fortunately, Remez-type exchange algorithms can efficiently compute a φ that produces a good approximation to any feasible polynomials P and Q [7]. Further, more efficient and numerically stable algorithms have also been found [32-34], and novel optimization techniques are currently under development [35]. Appendix D gives explicit examples of φ for a wide family of polynomials used in realizing quantum algorithms, and illustrates an open-source code package accompanying this tutorial for generating QSP phase angle coefficients.

B. An Application to Amplitude Amplification and Search
Toward motivating QSVT from QSP, we introduce an illustrative example of how multi-qubit problems may be simplified by identifying qubit-like subsystems, to which the ideas of QSP may be applied. The concept demonstrated in this subsection is essentially that of qubitization [8], which forms a core tenet of QSVT.
Specifically, we discuss the problem of amplitude amplification; a similar construction will be discussed in Section III employing the major theorems of QSVT, but here we begin from basic principles. It is hoped that the geometric intuition behind the argument which follows, and the expediency of the fully-developed construction in Section III, will complement each other.
The utility of Theorem 1 is nicely demonstrated in solving the following problem. Suppose you are given a unitary U (which may act on some Hilbert space of large dimension; i.e., larger than just a qubit) and its inverse, U†, as well as two operators A_φ and B_φ, each of which rotates the phase of a specific, privileged state, namely:

  A_φ = e^{iφ|A₀⟩⟨A₀|},   B_φ = e^{iφ|B₀⟩⟨B₀|}.      (5)

The challenge is to construct a circuit Q using U, U†, A_φ, and B_φ such that |⟨A₀|Q|B₀⟩| → 1 in the limit of a sufficiently large circuit, assuming that the original matrix element ⟨A₀|U|B₀⟩ of U is nonzero.
This problem is known as amplitude amplification, and remarkably it can be solved without knowing the specific initial value ⟨A₀|U|B₀⟩, by using the oblivious fixed-point amplitude amplification quantum algorithm. We will show that such an algorithm arises solely from Theorem 1, even in the multiple-qubit setting, by recognizing that there are two concentric Bloch spheres (qubit-like spaces) in this problem.
Specifically, one can recognize that U|B₀⟩ is a quantum state which has a non-zero component along |A₀⟩, and another component perpendicular to |A₀⟩. So we define

  |A_⊥⟩ := (U|B₀⟩ − ⟨A₀|U|B₀⟩ |A₀⟩)/N,      (6)

where N is the normalization factor needed to make |A_⊥⟩ a unit vector. Then

  U|B₀⟩ = a|A₀⟩ + √(1−a²) |A_⊥⟩      (7)

for a = ⟨A₀|U|B₀⟩ (we may assume that a is real, because any phase may be absorbed into |B₀⟩). Similarly, we may define some |B_⊥⟩ such that

  U|B_⊥⟩ = √(1−a²) |A₀⟩ − a|A_⊥⟩.      (8)

These ideas are illustrated in the diagram of Figure 2, using the familiar Bloch sphere.
FIG. 2. An illustration of amplitude amplification, where one desires to prepare the state |A₀⟩ (here the north pole of some Bloch sphere) obliviously to this state, given only |B₀⟩, an operator U whose ⟨A₀|U|B₀⟩ = a component is non-zero, and the ability to rotate about the states |B₀⟩ (blue) and |A₀⟩ (black) through chosen angles. The standard Grover iterate can be recovered in this model if one desires, by producing a simple rotation by θ = 2 cos⁻¹ √(1−a²) = 2 sin⁻¹(a) (red) toward |A₀⟩.

Thus, on the two-dimensional Hilbert space spanned by {|A₀⟩, |A_⊥⟩}, the action of U is that of a 2×2 unitary, which is convenient to represent in matrix form as

        |B₀⟩       |B_⊥⟩
  U = (   a       √(1−a²) )  |A₀⟩
      ( √(1−a²)     −a    )  |A_⊥⟩,

where the labels on the matrix indicate that the columns act on |B₀⟩, |B_⊥⟩ (from left to right), and the rows act on |A₀⟩, |A_⊥⟩ (from top to bottom), such that U brings a state in the B basis into a state in the A basis. These two bases are the two Bloch spheres (or qubit bases) encoded in the problem. Moreover, U is a reflection operation, which we may represent in this qubit-like abstraction as

  R(a) = X R_y(θ),  with θ = 2 cos⁻¹ √(1−a²) = 2 sin⁻¹(a).      (12)
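This two-basis reduction holds for any unitary and any pair of states with nonzero overlap. A numerical sketch, with a random matrix standing in for U and arbitrary basis states playing the roles of |A₀⟩ and |B₀⟩:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
# A random unitary U (via QR decomposition), standing in for the problem's unitary
M = rng.normal(size=(N, N)) + 1j*rng.normal(size=(N, N))
U, _ = np.linalg.qr(M)

A0 = np.zeros(N, dtype=complex); A0[0] = 1     # target state |A0>
B0 = np.zeros(N, dtype=complex); B0[1] = 1     # start state  |B0>
B0 = B0*np.exp(-1j*np.angle(A0.conj() @ U @ B0))  # absorb phase so a is real
a = (A0.conj() @ U @ B0).real

# |A_perp>: normalized component of U|B0> orthogonal to |A0>
Aperp = U @ B0 - a*A0; Aperp = Aperp/np.linalg.norm(Aperp)
# |B_perp>: chosen so that U maps span{|B0>, |B_perp>} onto span{|A0>, |A_perp>}
Bperp = U.conj().T @ A0 - a*B0; Bperp = Bperp/np.linalg.norm(Bperp)

# The 2x2 representation of U in the (B0, B_perp) -> (A0, A_perp) bases
R = np.array([[A0.conj() @ U @ B0,    A0.conj() @ U @ Bperp],
              [Aperp.conj() @ U @ B0, Aperp.conj() @ U @ Bperp]])
b = np.sqrt(1 - a**2)
```

The matrix R comes out as the reflection [[a, √(1−a²)], [√(1−a²), −a]] regardless of the random seed, and U|B_⊥⟩ stays inside span{|A₀⟩, |A_⊥⟩}.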
This two-Bloch-sphere picture provides the intuitive basis for the following theorem:

Theorem 2 (Amplitude Amplification). Given unitary U, its inverse U†, and operators A_φ = e^{iφ|A₀⟩⟨A₀|}, B_φ = e^{iφ|B₀⟩⟨B₀|},

  ⟨A₀| [ ∏_{k=1}^{(d−1)/2} U B_{φ_{2k}} U† A_{φ_{2k−1}} ] U |B₀⟩ = Poly(a),      (13)

where d is odd, and Poly(a) is a polynomial in a = ⟨A₀|U|B₀⟩ of degree at most d that obeys the conditions on P from Theorem 1.
Why does this work? Note first that U |B 0 lives in the A-qubit space, spanned by |A 0 and |A ⊥ . This qubit then gets rotated around its "z" axis by A φ . The U † then rotates this around the y axis (and also does a reflection around Z, but we can disregard that for this intuition). U † also maps the state from the A-qubit space back into the B-qubit space, spanned by |B 0 and |B ⊥ . Next, B φ rotates the state around the B-qubit space's z-axis. Then the leftmost U does another y-rotation (and reflection), and maps the state back into the A-qubit space. The sequence in the square brackets maps the state back and forth between the two qubit bases, sandwiching y-axis rotations with z-axis rotations. This sandwich of rotations is thus just doing quantum signal processing as in Sec. II A; we understand this behavior well!
The formal proof of this theorem begins by noting that on the subspaces defined by the two Bloch spheres defined above, U = U†. Moreover, note that R(a), the 2×2 unitary representation of U (in the qubit bases relevant to our problem), is related to the W(a) of Eq. (1) by

  R(a) = −i e^{iπZ/4} W(a) e^{iπZ/4}.      (14)

Substituting this into Eq. (13), and recognizing that A_φ and B_φ simply become z-axis rotations, we obtain

  ⟨A₀| (−i)^d ( e^{iφ'₀Z} ∏_{k=1}^{d} W(a) e^{iφ'_k Z} ) |B₀⟩ = Poly(a),      (15)

where the {φ'_k} are linear combinations of the original phases {φ_k}. By Theorem 1, the term in parentheses is a matrix of related polynomials evaluated at a. The above matrix element thus evaluates to a polynomial Poly(a) of degree at most d (which can be seen by counting the instances of U and U† in Eq. (13)), completing the proof of Theorem 2. Theorem 2 takes on the meaning of performing "oblivious amplitude amplification" when the phases {φ_k} are chosen such that the polynomial constructed approximates the sign function for small values of a. The technical aspects of generating the proper polynomials are further discussed in Sec. III and Appendix D 1.
Grover's celebrated quantum search algorithm [4], which is similar in character to the above, can easily be derived from the construction of amplitude amplification. In the search problem, some computational basis state |A₀⟩ is unknown, but an oracle is given that implements

  A_π = e^{iπ|A₀⟩⟨A₀|} = I − 2|A₀⟩⟨A₀|,      (16)

and the goal is to create a quantum state close to |A₀⟩ using as few queries to the oracle as possible. The search algorithm solves this problem by choosing U = H^⊗n (Hadamards on all the qubits in the search space), and starting with |B₀⟩ = |0⟩. Note that since |A₀⟩ is a computational basis state, ⟨A₀|U|B₀⟩ = 2^{−n/2} = 1/√N, because U|B₀⟩ = |ψ₀⟩ is an equal superposition over all N basis states in the search space. This means that the amplitude amplification algorithm can be applied, with the specific case of a = 1/√N being known, meaning we choose φ_k = π for all k. This choice implies that

  U B_π U† = H^⊗n (I − 2|0⟩⟨0|) H^⊗n = I − 2|ψ₀⟩⟨ψ₀|,      (17)

which can be recognized (up to a global phase) as Grover's "inversion about the mean" iterate. This choice also produces an oscillating polynomial that monotonically increases the matrix element up to d ≈ π/(2 sin⁻¹ a) steps [36]. This may be seen by noting that each application of the signal rotation operator rotates through an angle 2 sin⁻¹(a), and the target state is at most an angle π away, similar to the argument used to derive the query complexity of Grover's algorithm in [18]. The amplitude amplification algorithm thus gives Grover's quantum search algorithm in this limit, and the number of oracle calls required is O(√N), the known performance of Grover's algorithm. Once more, a similar result will be discussed in Section III, where the flow from qubitization to QSVT to fixed-point amplitude amplification will be applied more generally.
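The all-phases-π choice can be simulated directly. A small sketch for n = 4 qubits (N = 16), with an arbitrarily chosen marked state:

```python
import numpy as np

n = 4; N = 2**n
H1 = np.array([[1.0, 1.0], [1.0, -1.0]])/np.sqrt(2)
U = np.array([[1.0]])
for _ in range(n):
    U = np.kron(U, H1)                         # U = H^{(x) n}

marked = 5                                     # the unknown basis state |A0>
A_pi = np.eye(N); A_pi[marked, marked] = -1    # oracle A_pi = I - 2|A0><A0|
B_pi = np.eye(N); B_pi[0, 0] = -1              # B_pi = I - 2|0><0|
# One amplitude-amplification step with phases pi (U is real, so U^dag = U.T):
G = U @ B_pi @ U.T @ A_pi

state = U[:, 0]                                # U|B0> = uniform superposition
steps = int(np.round(np.pi/(4*np.arcsin(1/np.sqrt(N))) - 0.5))  # ~ (pi/4) sqrt(N)
for _ in range(steps):
    state = G @ state
p_success = abs(state[marked])**2              # probability of measuring |A0>
```

For N = 16 this gives three Grover iterations and a success probability sin²(7 sin⁻¹(1/4)) ≈ 0.96, matching the rotation-by-2 sin⁻¹(a) picture above.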

C. Quantum Eigenvalue Transforms
Theorem 2 is known as amplitude amplification because it accomplishes a polynomial transform of one specific amplitude, namely the matrix element of U at |A 0 B 0 |. However, this polynomial transform can actually be performed over an entire vector space, not just a one-dimensional matrix element. In particular, we now show that this technology can be used to polynomially transform all the eigenvalues of a Hamiltonian H that has been embedded into a block of a unitary matrix U .
Specifically, suppose we have the unitary

        0    1
  U = ( H    · )  0
      ( ·    · )  1,      (18)

where H is some N×N (possibly very large) Hamiltonian operator, located in the upper left block of U, labeled by an index qubit being in the state |0⟩ (and it is said that H has thus been "qubitized"). We have included the indices 0 and 1 adjacent to the matrix representation of U to schematically indicate how H is encoded in U. At the cost of some generality, let us make some assumptions for simplicity of exposition. In particular, assume the operator norm of H is sufficiently small that this block embedding can be achieved, i.e. ‖H‖ ≤ 1 (if not, then one can instead embed some rescaled version of the Hamiltonian, H/α, but for now we will neglect this case for the sake of expository clarity). In addition, suppose that the eigenvectors and eigenvalues of H are given as

  H|λ⟩ = λ|λ⟩.      (19)

Then, specializing to a specific block encoding for clarity, the missing blocks of U may be completed as

  U = (    H       √(I−H²) )
      ( √(I−H²)      −H    ),      (20)

where

  √(I−H²) := Σ_λ √(1−λ²) |λ⟩⟨λ|,      (21)

and it can be seen by inspection that U†U = I, as required for the unitarity of U, as long as the eigenvalues satisfy |λ| ≤ 1. While a general block encoding need not take this specialized form, this choice of encoding will be sufficient for our illustrative purposes. Moreover, the treatment of general block encodings is presented in [8,12], wherein it is shown that a general block encoding takes a form similar to Eq. (20) in a special basis related to the eigenbasis of H. Our specialized choice of block encoding means that U may be expressed as a sum of two tensor products,

  U = Z ⊗ H + X ⊗ √(I−H²)      (22)

(with the first factor acting on the index qubit), and therefore acts as

  U|0⟩|λ⟩ = λ|0⟩|λ⟩ + √(1−λ²) |1⟩|λ⟩,      (23)
  U|1⟩|λ⟩ = √(1−λ²) |0⟩|λ⟩ − λ|1⟩|λ⟩,      (24)

which indicates that U contains a Bloch sphere (i.e. a qubit basis) for each eigenspace² of H corresponding to a certain eigenvalue.
In particular, U may be expressed as a sum over N separate Bloch spheres:

  U = Σ_λ [ λ(|0⟩⟨0| − |1⟩⟨1|) + √(1−λ²)(|0⟩⟨1| + |1⟩⟨0|) ] ⊗ |λ⟩⟨λ|
    = Σ_λ R(λ) ⊗ |λ⟩⟨λ|,      (25)

where R(λ) is defined as the operator in square brackets in the first line above, and may be interpreted as a reflection and rotation about the Bloch sphere's y-axis, exactly the same as we found for the R(a) operator that appeared in the amplitude amplification construction as per Eq. (12). We thus have a form for U which parallels that of the amplitude amplification scenario. However, unlike amplitude amplification, we do not have one-dimensional phase rotation operators A_φ and B_φ, because we have N Bloch spheres, and not just two! Nevertheless, there are still distinct vector spaces in which the input and output of H exist: these are, respectively, the column space and row space of the matrix H within U. In the scenario of Eq. (18), these vector spaces are defined by the projector Π := |0⟩⟨0| acting on the auxiliary qubit. Generalizing the way amplitude amplification employs the phase shift A_φ of Eq. (5) acting on a single element |A₀⟩⟨A₀|, we may now define a projector controlled phase shift operation Π_φ,

  Π_φ := e^{i2φΠ},      (26)

which imparts a phase of e^{i2φ} to the entire subspace determined by the projector Π. Note that if we want to be more precise, we may instead define this operation as

  Π_φ := e^{iφ(2Π−I)},      (27)

which is a proper unitary transform and acts as a z-rotation, like S(φ) from Eq. (2); these two definitions differ only by a global phase, which may be neglected. From a quantum circuit standpoint, Π_φ may be realized by employing two instances of projector controlled-not gates (which we will refer to as Π-controlled-NOT, or C_Π NOT for short) around a single-qubit z-rotation by angle φ, as shown in Figure 3.

FIG. 3. The circuit used to realize the projector controlled phase shift Π_φ, also termed the "phased iterate", in analogy to the bare exp(iφZ) operation used in QSP as the signal processing rotation operator.
We note that this circuit is valid for an arbitrary projector Π, although it simplifies to a single rotation for the simple projector Π = |0⟩⟨0| that we've used in this section.
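The equivalence between this circuit and Π_φ is easy to verify on a small example. A sketch with two system qubits, Π projecting onto the first system qubit being |0⟩, and the ancilla appended as the last qubit (the sign of the ancilla z-rotation is a convention; here it is chosen to match Π_φ = e^{iφ(2Π−I)}):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

# Two system qubits; Pi projects onto the first qubit being |0>
Pi = np.kron(np.diag([1.0, 0.0]), I2)
phi = 0.7

# Target operation Pi_phi = e^{i phi (2 Pi - I)} (Pi is diagonal here)
target = np.diag(np.exp(1j*phi*np.diag(2*Pi - np.eye(4))))

# Circuit: C_Pi-NOT onto the ancilla, z-rotation on the ancilla, C_Pi-NOT again
CPiNOT = np.kron(Pi, X) + np.kron(np.eye(4) - Pi, I2)  # flip ancilla iff state in Pi
RZ = np.kron(np.eye(4), np.diag([np.exp(-1j*phi), np.exp(1j*phi)]))  # e^{-i phi Z} on ancilla
circuit = CPiNOT @ RZ @ CPiNOT

# With the ancilla initialized to |0>, the circuit enacts Pi_phi on the system:
anc0 = [0, 2, 4, 6]    # basis states with the (last) ancilla qubit in |0>
anc1 = [1, 3, 5, 7]
```

Restricted to the ancilla-|0⟩ sector, the circuit matrix equals Π_φ exactly, and the ancilla always returns to |0⟩.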
The main relevant observation is that on the subspace of each of the N Bloch spheres of Eq. (25), Π_φ acts as a z-axis rotation, with |λ⟩ as an eigenvector. This picture of N separate Bloch spheres evolving under z- and y-rotations provides the intuitive basis for the following theorem:

Theorem 3 (Eigenvalue transformation). Given a block encoding of a Hamiltonian H = Σ_λ λ|λ⟩⟨λ| in a unitary matrix U, with the location of H determined by projector Π, and given the ability to perform Π-controlled NOT operations to realize projector controlled phase shift operations Π_φ, then, for even d,

  ∏_{k=1}^{d/2} Π_{φ_{2k−1}} U† Π_{φ_{2k}} U  =  ( Poly(H)   · )
                                                  (    ·      · ),      (30)

where

  Poly(H) := Σ_λ Poly(λ) |λ⟩⟨λ|      (31)

is a polynomial transform of the eigenvalues of H. The polynomial is of degree at most d, and obeys the conditions on P from Theorem 1.
Similarly, for odd d,

  Π_{φ_1} U ∏_{k=1}^{(d−1)/2} Π_{φ_{2k}} U† Π_{φ_{2k+1}} U  =  ( Poly(H)   · )
                                                                (    ·      · ),      (32)

where Poly(H) has an analogous interpretation.
Why does this work? For the same reason that Theorem 2 does! To see this, let us use the specific encoding of Eq. (20) to rewrite Eq. (30) as an action on N separate Bloch spheres (the odd-d case in Eq. (32) being analogous):

  ∏_{k=1}^{d/2} Π_{φ_{2k−1}} U† Π_{φ_{2k}} U
    = Σ_λ [ ∏_{k=1}^{d/2} e^{iφ_{2k−1}Z} R(λ) e^{iφ_{2k}Z} R(λ) ] ⊗ |λ⟩⟨λ|,      (33)

recognizing that we have chosen conventions for R(λ) in Eq. (25) such that R†(λ) = R(λ). This allows us to relabel the sum and the phases to put Eq. (33) into a standard form of quantum signal processing (similar to how we obtained Eq. (15)):

    = Σ_λ (−i)^d [ e^{iφ'₀Z} ∏_{k=1}^{d} W(λ) e^{iφ'_k Z} ] ⊗ |λ⟩⟨λ|.      (34)

By Theorem 1, the term in brackets is a matrix of polynomials in λ, verifying Theorem 3. Finally, although we have specialized to the specific block encoding of Eq. (20), the proof for more general block encodings is contained in [8,12].
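The collapse of the full sequence into per-eigenvalue QSP can be checked numerically for the specialized encoding of Eq. (20). A sketch with arbitrary phases and d = 4 (here U is real symmetric, so U† = Uᵀ):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6
H = rng.normal(size=(N, N)); H = (H + H.T)/2
H = H/(2*np.linalg.norm(H, 2))                  # ensure ||H|| <= 1
lam, V = np.linalg.eigh(H)
sq = V @ np.diag(np.sqrt(1 - lam**2)) @ V.T     # sqrt(I - H^2)

Zq = np.diag([1.0, -1.0]); Xq = np.array([[0.0, 1.0], [1.0, 0.0]])
U = np.kron(Zq, H) + np.kron(Xq, sq)            # the encoding of Eq. (20)
Pi = np.kron(np.diag([1.0, 0.0]), np.eye(N))    # Pi = |0><0| on the index qubit

def pi_phi(phi):
    # Projector controlled phase shift e^{i phi (2 Pi - I)}
    return np.diag(np.exp(1j*phi*np.diag(2*Pi - np.eye(2*N))))

phis = [0.3, -1.1, 0.8, 0.4]                    # arbitrary phases, d = 4
seq = (pi_phi(phis[0]) @ U.T @ pi_phi(phis[1]) @ U
       @ pi_phi(phis[2]) @ U.T @ pi_phi(phis[3]) @ U)
block = seq[:N, :N]                             # the Pi block of the sequence

# Per-eigenvalue prediction: the same phases applied to 2x2 rotations R(lambda)
def R(l):
    s = np.sqrt(1 - l**2)
    return np.array([[l, s], [s, -l]])

def rz(phi):
    return np.diag([np.exp(1j*phi), np.exp(-1j*phi)])

vals = [(rz(phis[0]) @ R(l) @ rz(phis[1]) @ R(l)
         @ rz(phis[2]) @ R(l) @ rz(phis[3]) @ R(l))[0, 0] for l in lam]
predicted = V @ np.diag(vals) @ V.T
```

The Π block of the full (2N)-dimensional sequence agrees, eigenspace by eigenspace, with the 2×2 QSP computation, for any choice of phases.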
One immediately apparent utility of this eigenvalue transformation theorem is its usefulness in filtering eigenvalues. For example, we saw previously in Section II A that the BB1 sequence of phases can be used to increase sensitivity to the signal. In this eigenvalue transformation scenario, the signal is the eigenvalue λ, and a sequence of QSP phases can be used to selectively filter a range of eigenvalues, e.g. those below a threshold of λ ≈ 2π/3 in the case of BB1 (for now this is an imprecise statement, but a similar question is a major concern of Section IV). Measuring the projector Π will show that the QSP sequence flips the index qubit with higher probability if the Hamiltonian has an eigenvalue larger than this threshold, versus when all the eigenvalues of H are below the threshold. This result also assumes the ability to prepare a state which has some overlap with the relevant eigenstates of H, as input into U , but the specific amplitudes of the overlap are not crucial, because amplitude amplification can be employed to boost the signal.

D. Quantum Singular Value Transforms
Theorem 3 is known as the eigenvalue transformation because it accomplishes a polynomial transform of the eigenvalues of a matrix embedded within a larger unitary matrix. However, in general the embedded matrix need not have a well defined set of eigenvalues; for example, instead of being a square matrix, it could be rectangular. Remarkably, the same idea of transforming the embedded matrix using quantum signal processing can still apply. Specifically, we now show that QSP sequences can be used to polynomially transform all the singular values of a (possibly non-square) matrix A which has been encoded into a block of a unitary matrix U .

Such a general matrix A can be decomposed as

  A = W_Σ Σ V_Σ†,      (35)

where W_Σ and V_Σ are unitary and Σ is diagonal, containing along its diagonal the set of non-negative real numbers {σ_k}, known as the singular values of A, of which there are r = rank(A) nonzero values. All matrices have such a singular value decomposition (SVD).
As W_Σ and V_Σ are unitary, their columns form orthonormal bases, which we denote by {|w_k⟩} and {|v_k⟩}, respectively. The {|w_k⟩} are the left singular vectors, which span the left singular vector space, and the {|v_k⟩} are the right singular vectors, which span the right singular vector space. Using this notation, we may conveniently rewrite the singular value decomposition of A in a form analogous to the eigenvalue decomposition:

  A = Σ_k σ_k |w_k⟩⟨v_k|.      (36)

Now, suppose that we are given a unitary matrix U such that A is encoded in a block of U, i.e.

        Π
  U = ( A    · )  Π̃
      ( ·    · ),      (37)
where Π̃ := Σ_k |w_k⟩⟨w_k| and Π := Σ_k |v_k⟩⟨v_k| are projectors which locate A within U, such that A = Π̃ U Π.
We have again included Π and Π̃ indices adjacent to the matrix representation of U to schematically indicate how A is encoded in U: Π selects the columns and Π̃ the rows in which A is encoded. Moreover, the projectors Π̃ and Π also identify the left and right singular vector spaces, respectively.
For pedagogical simplicity, let us assume for now that A is a square matrix (this assumption is dropped in the next section). Again specializing to a specific block encoding, we may complete the missing blocks of the unitary U by writing

U = [ A           √(I − A²)
      √(I − A²)   −A        ],

where the 0 and 1 are index qubit labels for the block matrices within U, and where √(I − A²) is formally defined in terms of the SVD of A as

√(I − A²) := Σ_k √(1 − σ_k²) |w_k⟩⟨v_k|.

It can then be verified straightforwardly that U†U = I, as long as the singular values {σ_k} are less than or equal to 1 (which may always be achieved by rescaling A to some A/α). Again, while a general block encoding need not take this specialized form, this choice will be sufficient for our illustrative purposes. Moreover, the treatment of general block encodings is presented in [12], wherein it is shown that a general block encoding takes a form similar to Eq. (38) in a special basis related to the left and right singular vectors of A.
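The claim that this completion is unitary whenever all σ_k ≤ 1 can be checked numerically. The sketch below builds U from a randomly chosen, rescaled square A (illustrative values only), using the SVD-based definition of √(I − A²):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
A /= 2 * np.linalg.norm(A, 2)            # rescale so every singular value is < 1

W, s, Vh = np.linalg.svd(A)
sqrt_block = W @ np.diag(np.sqrt(1 - s**2)) @ Vh   # √(I − A²), defined via the SVD

U = np.block([[A, sqrt_block],
              [sqrt_block, -A]])
assert np.allclose(U.T @ U, np.eye(8))   # U is unitary (real case, so U† = Uᵀ)
```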
Just as with the analysis of the block-encoded Hamiltonian H, there are multiple Bloch spheres in U. Specifically, note that

U (|0⟩|v_k⟩) = σ_k |0⟩|w_k⟩ + √(1 − σ_k²) |1⟩|w_k⟩,
U (|1⟩|v_k⟩) = √(1 − σ_k²) |0⟩|w_k⟩ − σ_k |1⟩|w_k⟩,

so that U may be expressed as a direct sum over a number of separate Bloch spheres equal to the rank of A:

U = ⊕_{k=1}^{rank(A)} R(σ_k),  with  R(σ_k) := [ σ_k          √(1 − σ_k²)
                                                 √(1 − σ_k²)   −σ_k        ],

where we have defined R(σ_k) analogously to R(λ) in Eq. (25), now mapping the basis {|0⟩|v_k⟩, |1⟩|v_k⟩} to {|0⟩|w_k⟩, |1⟩|w_k⟩}. We now have a form for U which directly parallels that of the eigenvalue transform scenario, and for exactly the same reasons, the following theorem holds:

Theorem 4 (Singular value transformation). Given a block encoding of a matrix A = Σ_k σ_k |w_k⟩⟨v_k| in a unitary matrix U, with the location of A determined by projectors Π̃ and Π, and given the ability to perform Π̃- and Π-controlled NOT operations to realize projector-controlled phase shift operations Π_φ and Π̃_φ (defined as in Sec. II C), then, for odd d,

Π̃ ( Π̃_{φ_1} U ∏_{k=1}^{(d−1)/2} [ Π_{φ_{2k}} U† Π̃_{φ_{2k+1}} U ] ) Π = Poly^(SV)(A),

where Poly^(SV)(A) is defined for an odd polynomial as

Poly^(SV)(A) := Σ_k Poly(σ_k) |w_k⟩⟨v_k|,

which applies a polynomial transform to the singular values of A. The polynomial is of degree at most d and obeys the conditions of P from Theorem 1.
Similarly, for d even,

Π ( ∏_{k=1}^{d/2} [ Π_{φ_{2k−1}} U† Π̃_{φ_{2k}} U ] ) Π = Poly^(SV)(A),

where Poly^(SV)(A) is defined for an even polynomial as

Poly^(SV)(A) := Σ_k Poly(σ_k) |v_k⟩⟨v_k|,

which is also a polynomial transform of the singular values of A, but with the modification that the input and output spaces are both the right singular vector space, spanned by {|v_k⟩}. Analogously, the polynomial is of degree at most d and obeys the conditions of P from Theorem 1.
The main difference between this theorem and the eigenvalue transform of Theorem 3 is that here, the Bloch sphere transformations of U also switch between the {|v_k⟩} and the {|w_k⟩} bases, similar to how the sequence in Theorem 2 (Eq. (13)) flips between bases. Thus, we should now carefully keep track of which vector space the system occupies at each stage of operation. In particular, in each of the terms in square brackets in Eqs. (46) and (44), the U on the right moves the system from the {|v_k⟩} basis into the {|w_k⟩} basis, and Π̃_φ then rotates the system around the z-axis of each left singular vector space's Bloch sphere. Similarly, the U† on the left then moves the system back from the {|w_k⟩} basis into the {|v_k⟩} basis, and finally Π_φ rotates the system around the z-axis of each right singular vector space's Bloch sphere. Therefore, the odd-d sequence starts in the right singular vector space and ends in the left singular vector space, such that Poly^(SV)(A) is accessed with Π and Π̃, whereas the even-d sequence starts and ends in the right singular vector space, such that Poly^(SV)(A) is accessed with just Π.
With this crucial modification, Theorem 4 may be verified by following the same logic as that of Theorem 3, while the proof for general block encodings is presented in [12]. Theorem 4 is the essence of the quantum singular value transform (QSVT) algorithm, and we elaborate in the next sections upon what can be accomplished with it.
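As a purely classical sanity check of the definition of Poly^(SV) (not a simulation of the quantum circuit), one can apply an odd polynomial directly to the singular values and confirm that the result's singular values are the absolute values of the transformed ones:

```python
import numpy as np

def poly_sv(A, poly):
    """Classically compute Poly^(SV)(A) = Σ_k poly(σ_k) |w_k⟩⟨v_k|."""
    W, s, Vh = np.linalg.svd(A, full_matrices=False)
    return W @ np.diag(poly(s)) @ Vh

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4))
A /= np.linalg.norm(A, 2)                 # singular values now lie in [0, 1]

T3 = lambda x: 4 * x**3 - 3 * x           # Chebyshev T₃: an odd, degree-3 polynomial
B = poly_sv(A, T3)

# Singular values of B equal |T₃(σ_k)| (an SVD always returns non-negative values)
sB = np.linalg.svd(B, compute_uv=False)
sA = np.linalg.svd(A, compute_uv=False)
assert np.allclose(np.sort(sB), np.sort(np.abs(T3(sA))))
```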

E. Block Encodings
A key idea behind the generalization of QSP from single-qubit dynamics to multi-qubit transforms is the use of a block-encoded operator, as we have just seen in the last two subsections (and as is discussed in the following sections). While the embedding of one matrix inside a larger one is well known in mathematics as a dilation [37], block-encoded operators as used here are a quantum concept, and their construction comes with some caveats. We elaborate on these constructions in this subsection.
The starting point for many applications of QSP and QSVT is the availability of the desired signal as a linear operator, a matrix A, encoded as a block inside a larger unitary matrix U, as described in Eq. (37). What is such a block-encoded operator, physically? Some limited intuition comes from the case when A is a unitary matrix, in which case U is simply a controlled-A operator, as depicted in Figure 4.
FIG. 5. Construction of |1⟩-controlled-A (for A a single-qubit operator) given access to |λ⟩, a (+1)-eigenvector of A, |ξ⟩, the control qubit, and |ψ⟩, the target state. Note that A may in general act on a large Hilbert space, in which case the controlled-SWAP operator is suitably generalized. The swaps can be made |0⟩-controlled to emulate Figure 4. We note that this is a similar construction to the circuits employed in [38] for matrix-element measurement.

When A is a large operator, it can be challenging to
envision how a controlled-A operation may be physically realized. In particular, given only A, constructing a controlled-A operation is generally not possible without further information. For example, if a +1 eigenstate |λ⟩ of A is provided, then a controlled-A quantum circuit can be constructed using |λ⟩ together with two controlled-SWAP gates; see Figure 5. In general, however, much more work must be done to achieve a block encoding, typically in ways specific to the relevant physical system [12,38].

Related to the issue of the block encoding is accessing the desired polynomial transform of the encoded block. As described in Section II A, after performing a QSP/QSVT sequence, a measurement may be done in the signal basis to determine whether the desired polynomial transform was realized. In particular, in many of the algorithms, we wish to apply the transformed operator Poly^(SV)(A) to some state |ψ⟩, which may be done using the block encoding U_φ. However, this computation succeeds only if we apply to |ψ⟩ the correct block (the one that encodes Poly^(SV)(A)), so we must project the final state into this block by performing a projective measurement with the projectors Π and Π̃, which are assumed accessible. Consequently, the relevant projector for single-auxiliary-qubit QSVT is often |+⟩⟨+| ⊗ Π, which both (1) isolates the real part of the transforming polynomial P discussed in Section II A, and (2) determines whether we have applied the desired block of the overall unitary. Since this projective measurement is a one-time projection performed at the end of the algorithm, its cost is not large; moreover, the probability of the projective measurement yielding the desired state can be amplified using amplitude amplification or classical repetition. Depending on the construction, this probability can also be either the desired output or an algorithmically useful output. For more about the signal basis and QSP conventions, see Appendix A.
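The eigenstate-assisted construction of Figure 5 can be verified numerically. The sketch below takes A = Z, whose +1 eigenvector is |0⟩, and checks that two controlled-SWAPs sandwiching an uncontrolled A on the ancilla implement controlled-A on the target (register order here is control ⊗ target ⊗ ancilla; the choice A = Z is illustrative):

```python
import numpy as np

I2 = np.eye(2)
A = np.diag([1.0, -1.0])                        # A = Z; |0⟩ is its +1 eigenvector
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1.0]])
P0, P1 = np.diag([1.0, 0]), np.diag([0, 1.0])

cswap = np.kron(P0, np.eye(4)) + np.kron(P1, SWAP)   # control on first qubit
A_on_ancilla = np.kron(np.eye(4), A)                 # uncontrolled A on the ancilla

circuit = cswap @ A_on_ancilla @ cswap               # rightmost factor acts first

# Fix the ancilla in |0⟩ (= |λ⟩) at input and output; the induced map on
# control ⊗ target should be exactly controlled-A
ket0 = np.array([[1.0], [0]])
embed = np.kron(np.eye(4), ket0)
effective = embed.T @ circuit @ embed
controlled_A = np.kron(P0, I2) + np.kron(P1, A)
assert np.allclose(effective, controlled_A)
```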

III. SEARCH BY QSVT
Now that we have an overall perspective on the basic theorems of quantum signal processing and quantum singular value transformations, let us return to explicitly constructing the three "primordial" quantum algorithms using QSVT. Specifically, for each algorithm we indicate (a) the signal rotation operator, and (b) the appropriate QSP polynomial. This is not just a problem of reconstruction, but will in certain cases lead to slight improvements in algorithmic performance. We begin with the search algorithm.
A ubiquitous problem that admits a quantum speedup is unstructured search, the goal of which is to determine a single marked element m ∈ {0, 1, · · · , N − 1} =: [N ] from an unstructured database of size N . While the search problem can be solved in O(N ) time classically, Grover's quantum algorithm requires only O( √ N ) time, providing a provably optimal quadratic speedup. However, Grover's algorithm in its traditionally stated form suffers various shortcomings, including divergence from the proper solution if its underlying iterate is repeated too many times. Fortunately, by the parallels between this problem and amplitude amplification as discussed in Section II B, it is straightforward to recover an improved version of unstructured search using QSVT.
In quantum search, one is given access to a unitary operation U whose application flags the marked element by a phase flip, i.e.,

U |x⟩ = (−1)^{δ_{xm}} |x⟩  (equivalently, U = I − 2|m⟩⟨m|),

where m ∈ {0, 1, · · · , N − 1} is the (single) marked element.
As is conventional in quantum search, we will take our initial state to be the uniform superposition over all N states, which may be expressed as

|ψ_0⟩ = (1/√N) Σ_{j=0}^{N−1} |j⟩,

and is easily prepared by applying n Hadamards to |0⟩^{⊗n} (taking N = 2^n). With this choice of initialization, our goal is to map |ψ_0⟩ to the marked state |m⟩.
In this context, how might we solve the search problem with QSVT? The standard method, which includes the construction of the Grover iterate, will not be necessary. Instead, we rephrase the problem. In the most general setting, we are given access to some initial state |ψ_0⟩ such that its projection onto the desired final state |m⟩ is nonzero. Denoting the projector onto |m⟩ by Π̃ = |m⟩⟨m|, we may express this condition as

⟨m| V |ψ_0⟩ = a,

for some known unitary V and |a| > 0. In our case, as illustrated in Eq. (49), we can simply take V to be the identity, and a = 1/√N by our choice of initial state. In general for amplitude amplification problems, V need not be trivial, and is often viewed as a preparation oracle.
The search problem as framed above is precisely the statement of amplitude amplification as in Sec. II B, as we desire a circuit 𝒰 (which will depend on U) such that |⟨m| 𝒰 |ψ_0⟩| ≥ 1 − ε for some small ε > 0. For notational convenience we will refer to |ψ_0⟩⟨ψ_0| as Π. Note that this induces a block encoding of the scalar a, that is,

Π̃ V Π = a |m⟩⟨ψ_0|.

We now have a problem goal stated in Eq. (51), and a block encoding of a single singular value a = 1/√N stated in Eq. (52). What remains to solve unstructured search with QSVT is a choice of QSVT polynomial, and an analysis of the minimum required QSVT sequence length.
Fortunately, given that there is only one relevant singular value, a, and one relevant left (and right) singular vector, the entire problem is reduced to mapping a as close to 1 in magnitude as possible, as under this condition the circuit will induce, with high probability, the desired block-encoded map |m⟩⟨ψ_0|. Luckily, there already exists a theorem which defines the conditions under which such transformations are possible: the major theorem of QSVT!
The requirements to perform QSVT as desired include the application of V, V†, C_Π NOT, C_Π̃ NOT, and e^{−iφZ} gates. Note that here we have been allowed to choose V = I, as all Grover oracle information is encoded in the projector-controlled NOTs. Moreover, these projector-controlled operators are easy to construct: C_Π NOT, because Π depends on a fixed, known state. Similarly, C_Π̃ NOT may be crafted by first creating a controlled-U, which is easily achieved, and conjugating the control qubit by Hadamard gates, the result of which is a C_Π̃ NOT, where the control qubit now becomes the target of the projector-controlled NOT.
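This Hadamard-conjugation trick can be made concrete numerically. With the phase oracle U = I − 2|m⟩⟨m| for N = 4 and m = 2 (illustrative values), conjugating the control qubit of controlled-U by Hadamards yields an operator that flips the former control qubit exactly when the system lies in the range of Π̃ = |m⟩⟨m|:

```python
import numpy as np

N, m = 4, 2
U = np.eye(N); U[m, m] = -1.0                # Grover phase oracle U = I − 2|m⟩⟨m|
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0.0]])
P0, P1 = np.diag([1.0, 0]), np.diag([0, 1.0])

controlled_U = np.kron(P0, np.eye(N)) + np.kron(P1, U)
conjugated = np.kron(H, np.eye(N)) @ controlled_U @ np.kron(H, np.eye(N))

# Expected: X on the qubit when the system is in |m⟩, identity otherwise
Pm = np.zeros((N, N)); Pm[m, m] = 1.0
C_Pi_NOT = np.kron(X, Pm) + np.kron(np.eye(2), np.eye(N) - Pm)
assert np.allclose(conjugated, C_Pi_NOT)
```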
Having now achieved a suitable block encoding and the necessary conditions for QSVT, we need only choose an appropriate QSVT polynomial. As our goal is to map a to a value close to 1, a sensible choice for such a polynomial can be derived if we start with the simple sign function Θ(x − c), equal to +1 for x > c and −1 for x < c. As discussed in [12,21], one can approximate the sign function with arbitrary precision by finding a polynomial approximation to erf(k[x − c]) for large enough k. In particular, one can efficiently compute a polynomial P_Θ^{ε,∆}(x − c) of degree O((1/∆) log(1/ε)) that is bounded in magnitude by 1 and ε-approximates the sign function outside of the region (c − ∆/2, c + ∆/2); within this region, P_Θ^{ε,∆}(x − c) is not in general a good approximation to the sign function (see [21] for the explicit construction of P_Θ^{ε,∆}(x − c)). For visual intuition, we illustrate this polynomial approximation below in Figure 6.

FIG. 6. An illustration of Θ(x) (solid, "ideal function") and its polynomial approximation P_Θ^{ε,∆}(x) (dashed, "poly. approx."). Note how P_Θ^{ε,∆}(x) stays within ε of the sign function outside of the region (−∆/2, ∆/2), while deviating from Θ(x) within this region. The shifted version of this function is constructed in much the same way.
Note that the polynomial P_Θ^{ε,∆}(x) (i.e., with c = 0) has odd parity and is bounded in magnitude by 1, and so it may be implemented via QSVT (i.e., it obeys the conditions on Poly(a) = ⟨m|U_φ|ψ_0⟩ discussed in Section II A and Appendix A). We will use this scheme to polynomially approximate the sign function, and subsequently apply it to the initial uniform superposition over N quantum states. Then, upon applying the pre-determined QSVT sequence, this yields |m⟩ in the register with probability p = |P_Θ^{ε,∆}(a)|². We can now ask: which values of ∆ and ε should we choose for our intended behavior? Because we desire that a = 1/√N be mapped to a value greater than 1 − ε, we require that 1/√N ≥ ∆/2, or equivalently ∆ = O(1/√N). Next, note that upon applying (P_Θ^{ε,∆})^(SV)(V) to the uniform superposition, the amplitude of the desired state |m⟩ is at least 1 − ε. Thus, upon measuring these qubits in the computational basis, the probability of the resultant state being |m⟩ is at least (1 − ε)² ≥ 1 − 2ε. Therefore, if we want this procedure to succeed with probability at least 1 − δ, we set ε = δ/2.
Under these choices, the polynomial has degree O((1/∆) log(1/ε)) = O(√N log(1/δ)). Note that we again have O(√N) scaling and thus maintain the quadratic speedup for the search problem, while circumventing Grover search's convergence issues. We summarize this algorithm for search by QSVT in Algorithm 1.
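The scaling of these parameter choices can be sketched with some simple arithmetic; the constant factors below are illustrative, not those of [21]:

```python
import math

def search_degree(N, delta):
    """Illustrative degree estimate d ~ (1/Δ) log(1/ε), with Δ = 2/√N and ε = δ/2."""
    Delta = 2 / math.sqrt(N)     # width chosen so that a = 1/√N sits at the edge of the Δ-region
    eps = delta / 2              # per-run error budget
    return (1 / Delta) * math.log(1 / eps)

d1 = search_degree(2**10, 0.01)
d2 = search_degree(2**12, 0.01)
# Quadrupling N doubles the degree: the O(√N log(1/δ)) scaling of quantum search
assert abs(d2 / d1 - 2.0) < 1e-9
```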
Finally, we remark that this algorithm employs a simple block encoding (i.e., with V the identity) of a = 1/√N with nontrivial singular vectors (i.e., |ψ_0⟩ and |m⟩). In this setting, our desired transformation would not have been possible had we used the quantum eigenvalue transformation of Sec. II C, as the eigenvalues of V are necessarily 1 and the input and output spaces identical. Hence, this instance illustrates the advantage of QSVT over the quantum eigenvalue transformation: QSVT can achieve polynomial transformations between different input and output spaces.

IV. THE EIGENVALUE THRESHOLD PROBLEM BY QSVT
Beyond the preparation of a desired state, which was the objective in the unstructured search of Algorithm 1 in Section III, a separate major aim of many quantum algorithms is determining whether a certain property holds for a given quantum system. Through the lens of QSVT, we show in this section that these two algorithmic goals are similar: here, rather than wishing to prepare a special state (e.g., the marked state of unstructured search), we now wish to encode some underlying unknown information about a quantum system into a state which we can then measure.
This linkage between search and property testing is made apparent by considering the eigenvalue threshold problem. In the setup of this problem, we are given access to a Hamiltonian H and would like to determine whether H has any eigenvalues λ that are less than some chosen threshold λ_th. Realistically, this must be determined to within some precision ∆_λ, so we are guaranteed that either the Hamiltonian has at least one eigenvalue λ ≤ λ_th − ∆_λ, or that all of its eigenvalues obey λ ≥ λ_th + ∆_λ. The eigenvalue threshold problem seeks to distinguish these cases.

Algorithm 1: Unstructured Search by QSVT
Input: Access to a controlled version of the oracle U which bit-flips an auxiliary qubit when given an unknown target state |m⟩ ∈ {|j⟩, j ∈ [N]} and acts trivially otherwise, an error tolerance δ, and a ∆ ≤ 2/√N.
Output: The flagged state |m⟩.
Runtime: O(√N log(1/δ)) queries to controlled-U (equivalently, C_Π NOT, C_Π̃ NOT, and e^{−iφZ} gates), and a single auxiliary qubit. Succeeds with probability at least 1 − δ.
Procedure:
1 Use QSVT to construct the operator (P_Θ^{ε,∆})^(SV)(V) and apply it to the uniform superposition. With high probability, |m⟩ remains in the register and can be determined by measuring in the computational basis.
To solve this problem, one is also provided a state |ψ⟩ that has reasonable overlap with the low-energy subspace (spanned by the eigenvectors with eigenvalues λ ≤ λ_th − ∆_λ), if it exists. Mathematically, we represent this condition as

‖ Π_{≤λ_th−∆_λ} |ψ⟩ ‖ ≥ ζ > 0,

where Π_{≤λ_th−∆_λ} is a projector onto the (possibly nonexistent) low-eigenvalue subspace mentioned above.
To solve this problem with QSVT, we will need to construct a block encoding of the Hamiltonian H. However, such a block encoding can only be constructed if ‖H‖ ≤ 1, so we instead must determine an α ≥ ‖H‖ and construct a unitary block encoding of H/α. In general, doing so may be nontrivial, because determining such an α requires prior knowledge about H. We discuss this drawback in Sec. VII. Fortunately, however, for a large class of Hamiltonians, such as sparse Hamiltonians and linear combinations of unitaries, one can determine a sufficient α and construct a block encoding of H/α [8,12]. With this rescaled block encoding, one can equivalently imagine that our goal is to determine whether H/α has any eigenvalues less than λ_th/α, to within a precision ∆_λ/α.
Moreover, as the eigenvalue threshold problem deals with eigenvalues, it is most straightforwardly approached with a quantum eigenvalue transform. However, in accordance with the theme of this paper, we may equivalently solve this problem with QSVT if the eigenvalues of H are positive, in which case the singular values are equal to the eigenvalues. If this is not the case, we may instead use the block encoding of H/α and the circuit in Figure 7 to construct a block encoding of the positive semidefinite matrix ½(H/α + I). Under this modification, the new goal would be to determine whether ½(H/α + I) has eigenvalues less than ½(λ_th/α + 1), to within a precision ∆_λ/α, which may be achieved with QSVT. In the remainder of this section, we will assume that this issue has been alleviated and specialize to the case of distinguishing eigenvalues about the threshold λ_th/α.

With those concerns out of the way, how might this problem be solved with QSVT? A fruitful approach, which we will follow, is to apply QSVT to the Hamiltonian with a function that filters out large eigenvalues. While one might suspect that we need a function that locates eigenvalues below the threshold λ_th/α, such as the eigenstate filtering function of [39], we can actually employ a much simpler construction. All we need is a function that behaves markedly differently on eigenvalues greater than and less than λ_th/α, and such a function may be constructed using the technology from Section III.
A good choice for such a function is Θ(λ_th/α − x), which maps eigenvalues less than λ_th/α to 1 and larger eigenvalues to −1. One could imagine approximating this function by P_Θ^{ε,∆}(λ_th/α − x), but there is a catch: this polynomial does not have definite parity and thus does not satisfy the constraints of Theorem 1. Instead, we need to consider a symmetrized version with even parity, the eigenvalue threshold polynomial P_ET^{ε,∆,λ_th/α}(x), built from suitably weighted copies of P_Θ^{ε,∆} evaluated at λ_th/α − x and λ_th/α + x; the weights ensure the following desired properties, which are easily seen via the triangle inequality: P_ET^{ε,∆,λ_th/α}(x) behaves as P_Θ^{ε,∆}(λ_th/α − x) for x ≥ 0 (the only relevant range for singular values). In addition, P_ET^{ε,∆,λ_th/α}(x) has definite parity and magnitude bounded by 1, so it may be implemented through QSVT (again, it obeys the conditions on Poly(a) = ⟨+|U_φ|+⟩ discussed in Section II A and Appendix A). Thus, this polynomial may be used to distinguish between eigenvalues less than λ_th/α − ∆/2 and greater than λ_th/α + ∆/2, which naturally corresponds to a precision ∆_λ/α = ∆/2.
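The qualitative behavior of such an even threshold function can be previewed with a smooth stand-in: below, erf(k·) plays the role of P_Θ^{ε,∆}, and the symmetrization f(x) = g(c − x) + g(c + x) − 1 is an illustrative choice (not the paper's exact weighting) that is even, bounded by 1 in magnitude, and behaves as g(c − x) for x ≥ 0:

```python
from math import erf

c, k = 0.5, 50.0                       # threshold λ_th/α and sharpness (illustrative)
g = lambda y: erf(k * y)               # smooth surrogate for the sign polynomial

def f(x):
    """Even symmetrization: ≈ +1 for |x| < c, ≈ −1 for |x| > c."""
    return g(c - x) + g(c + x) - 1.0

assert abs(f(0.2) - 1.0) < 1e-9        # eigenvalue below threshold → ≈ +1
assert abs(f(0.8) + 1.0) < 1e-9        # eigenvalue above threshold → ≈ −1
assert f(0.3) == f(-0.3)               # even parity
assert all(abs(f(x / 100)) <= 1.0 + 1e-12 for x in range(-100, 101))
```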
To see how we may employ this technology, note that we may express |ψ⟩ in the eigenbasis of H as

|ψ⟩ = Σ_λ c_λ |λ⟩,

where Σ_{λ≤λ_th} |c_λ|² ≥ ζ² by the guarantee mentioned above. Next, consider using QSVT to develop (P_ET^{ε,∆})^(SV)(H/α) and apply it to |ψ⟩, controlled by an ancilla qubit in the state |+⟩, as in the circuit of Figure 8. After applying a Hadamard to this ancilla qubit, we can solve the eigenvalue threshold problem by measuring the ancilla qubit in the computational basis. Observe that the probability of measuring the ancilla qubit in |0⟩ is

P(0) = Σ_λ |c_λ|² |1 + P_ET^{ε,∆,λ_th/α}(λ/α)|² / 4.

If all eigenvalues obey λ > λ_th + ∆_λ, then −1 ≤ P_ET^{ε,∆,λ_th/α}(λ/α) ≤ −1 + ε, and so P(0) ≤ ε²/4 ≤ ½ε². This implies that it is unlikely to measure |0⟩ if all eigenvalues obey λ > λ_th + ∆_λ. On the other hand, if there exists an eigenvalue that obeys λ < λ_th − ∆_λ, then P(0) ≥ ζ²(1 − ε). This indicates that, for reasonably sized ζ, we are more likely to measure |0⟩ if there exists an eigenvalue λ < λ_th − ∆_λ.

With this construction, the eigenvalue threshold problem is reduced to distinguishing between two Bernoulli distributions: one with mean ≤ ½ε² and one with mean ≥ ζ²(1 − ε). To ensure that these means are well separated, we desire ζ²(1 − ε) ≫ ½ε², which may be enforced by choosing an ε < ζ (i.e., ε = O(ζ)); for instance, ε = ζ/4 is a sufficient choice, which we will make use of in the following. We may then distinguish between these distributions with high probability by performing repeated measurements. This is quantified by the following lemma:

Lemma 5. Let X be one of two Bernoulli distributions, X_0 with mean a or X_1 with mean b > a. Then one can correctly identify X with probability at least 1 − δ from n ≥ 6(a + b)/(b − a)² · log(1/δ) samples of X.

Proof. To distinguish between these distributions, let us take n samples X_i and guess that X is the distribution whose mean is closest to the empirical mean of the samples. Suppose that X = X_0 and thus has mean a (the case X = X_1 may be treated analogously).
We are interested in computing the probability of mistaking this distribution for X_1, namely the probability that the empirical mean lands closer to b than to a. To bound this error probability, we may invoke the well-known multiplicative Chernoff bound: with mean µ = E(Σ_{i=1}^n X_i) = na and relative deviation ε' > 0,

Pr[ Σ_{i=1}^n X_i ≥ (1 + ε')µ ] ≤ exp(−ε'² µ / 3).

Employing the multiplicative Chernoff bound with (1 + ε')a = (a + b)/2, we may ensure that the error probability is less than some chosen δ by demanding exp(−ε'² na / 3) ≤ δ, which in turn requires n ≥ (3/(a ε'²)) log(1/δ).

Performing a similar calculation for the hypothesis X = X_1 yields an analogous bound, with a replaced by b.
Summing these results over equal priors on X ∈ {X_0, X_1} provides the scaling n ≥ 6(a + b)/(b − a)² · log(1/δ). Applying this lemma to our scenario in the eigenvalue threshold problem, we may take a = ½ε² = ζ²/32 and b = ζ²(1 − ε) = ζ²(1 − ζ/4), for which 6(a + b)/(b − a)² = O(1/ζ²). Therefore, if we repeat the aforementioned measurement procedure O((1/ζ²) log(1/δ)) times, we can correctly distinguish between the two distributions and solve the eigenvalue threshold problem with probability at least 1 − δ. Each repetition performs QSVT with a polynomial of degree O((1/∆) log(1/ε)) = O((α/∆_λ) log(1/ζ)), so the total cost is the product of this degree and the repetition count. For completeness, we note that this 1/ζ² dependence is suboptimal, and can be improved to 1/ζ using Lemma 7 of [40]. We summarize the eigenvalue threshold problem by QSVT in Algorithm 2. Here, we note that the upper limit on the index of the for loop comes from the condition n ≥ 6(a + b)/(b − a)² · log(1/δ) according to Lemma 5, with the choice ε = ζ/4.
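The repetition count prescribed by Lemma 5 can be sketched numerically; the values ζ = 0.2 and δ = 0.01 below are hypothetical, chosen only to exhibit the O(1/ζ²) scaling:

```python
import math

def samples_needed(a, b, delta):
    """n ≥ 6(a + b)/(b − a)² · log(1/δ), per Lemma 5."""
    return math.ceil(6 * (a + b) / (b - a) ** 2 * math.log(1 / delta))

zeta, delta = 0.2, 0.01            # hypothetical overlap and failure tolerance
eps = zeta / 4                     # the choice ε = ζ/4 from the text
a = eps**2 / 2                     # Bernoulli mean when all eigenvalues are large
b = zeta**2 * (1 - eps)            # Bernoulli mean when a low eigenvalue exists

n = samples_needed(a, b, delta)
assert n > 0
# O(1/ζ²) scaling: halving ζ roughly quadruples the repetition count
n_half = samples_needed((zeta / 8) ** 2 / 2, (zeta / 2) ** 2 * (1 - zeta / 8), delta)
assert 3 < n_half / n < 5
```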

V. PHASE ESTIMATION BY QSVT
Another salient problem that admits a quantum advantage is phase estimation, which is as follows: given access to a unitary oracle U and an eigenvector |u⟩ such that U |u⟩ = e^{2πiϕ} |u⟩, determine ϕ. Here, we approach this problem by integrating a feedback procedure inspired by Kitaev's algorithm [20,41] with the technology developed in solving the eigenvalue threshold problem in Sec. IV. We first provide a sketch of the algorithm, and then address a few caveats in its construction. Finally, we present a concrete formulation of QSVT-based phase estimation, analyze its capabilities, and demonstrate how the quantum Fourier transform naturally emerges from this construction.

A. Intuition
[Algorithm 2, concluding steps, continued from Sec. IV: 3 Use QSVT to apply (P_ET^{ε,∆})^(SV)(H/α) to |ψ⟩, controlled by an ancilla qubit in the state |+⟩ (see Fig. 8). 4 Apply a Hadamard to the ancilla qubit and measure it in the computational basis. 5 If the fraction of |0⟩'s measured is closer to ζ²(1 − ζ/4) than to ζ²/32, then there exists an eigenvalue λ ≤ λ_th − ∆_λ with high probability. Otherwise, all eigenvalues obey λ ≥ λ_th + ∆_λ with high probability.]

To motivate iterative phase estimation by QSVT, recall that Kitaev's algorithm for phase estimation [20,41] may be reinterpreted as a semi-classical feedback process [42,43]. One may view this feedback process as incorporating a classical parameter θ ∈ [0, 1], initialized at θ = 0, such that through a series of controlled-U^{2^j} calls and single-qubit measurements, each of which updates θ, one obtains a value θ ≈ ϕ at the end of the algorithm.
Our approach to phase estimation by QSVT is heavily inspired by this feedback procedure. Just as in Kitaev's algorithm, we introduce a parameter θ ∈ [0, 1], initialized at 0, that is updated throughout the algorithm and by the end of the procedure, the final value of θ will approximate ϕ. We sketch our algorithm in the following section.
Before proceeding, we note that a very similar QSVT-based phase estimation algorithm was independently discovered in [17]. A primary goal of their work is maintaining coherence, and accordingly their paper presents a thorough performance analysis of their algorithm, including the derivation of constant factors as well as precise use of the diamond norm. Ref. [17] also extends their QSVT-based phase estimation algorithm to coherent energy and amplitude estimation.

Sketch of the Algorithm
To understand our algorithm, let us use the binary fraction representation of ϕ and assume that ϕ is an m-bit number: ϕ = 0.ϕ_1ϕ_2...ϕ_m for some finite m (in this representation, ϕ = Σ_j 2^{−j} ϕ_j). We will show how to deal with the m = ∞ case later.
In the QSVT phase estimation algorithm, ϕ is determined bit by bit through a series of m iterations, beginning with the least significant bit, ϕ m , and proceeding down to the most significant bit, ϕ 1 . In each iteration, we construct a matrix whose singular values encode the least significant bits of ϕ as well as the current value of θ. By applying QSVT to this matrix, conditioned on an ancilla qubit, and subsequently measuring the ancilla qubit, we determine a single bit of ϕ and update θ accordingly. If each iteration succeeds, then θ = ϕ at the end of the algorithm.
To see how this may be achieved, consider the matrix A_j(θ) := ½(I + e^{−2πiθ} U^{2^j}), which has a singular value

σ_j = |cos(π(2^j ϕ − θ))|,

with corresponding right singular vector |v_j⟩ = |u⟩ and left singular vector |w_j⟩ = e^{iπ(2^j ϕ−θ)} |u⟩. Because exp(πi 2^j ϕ) = exp(2πi 2^{j−1} 0.ϕ_1...ϕ_m) = ± exp(πi 0.ϕ_{j+1}...ϕ_m), σ_j may be re-expressed as

σ_j = |cos(π(0.ϕ_{j+1}...ϕ_m − θ))|.

This singular value is used to extract the bits of ϕ as follows. Consider the first iteration of the QSVT phase estimation algorithm, where we begin with j = m − 1 (and will decrement j down to j = 0). Because θ is initialized at 0, the singular value of interest is

σ_{m−1} = |cos(π 0.ϕ_m)| = |cos(π ϕ_m / 2)|,

from which ϕ_m may be evaluated: ϕ_m = 0 if σ_{m−1} > 1/√2, and ϕ_m = 1 if σ_{m−1} < 1/√2. This expression provides a pathway to extract ϕ_m, akin to the procedure used in the eigenvalue threshold problem: apply QSVT to A_{m−1}(θ) with the target function Θ(1/√2 − x), and employ the circuit in Figure 9, the measurement result of which is ϕ_m. After measurement, we update θ by setting the first bit of θ to θ_1 = ϕ_m (again using the binary fraction representation θ = 0.θ_1θ_2...), such that θ = 0.ϕ_m, which completes the j = m − 1 iteration.
We then move to the next iteration, wherein we decrement j from m − 1 to m − 2, map θ ← θ/2 = 0.0ϕ_m, and construct A_j(θ) = A_{m−2}(0.0ϕ_m), which has singular value

σ_{m−2} = |cos(π(0.ϕ_{m−1}ϕ_m − 0.0ϕ_m))| = |cos(π 0.ϕ_{m−1})|.

Note that the updated value of θ allowed for a convenient cancellation of ϕ_m. As this relation is identical to Eq. (66), ϕ_{m−1} may be determined by employing the QSVT procedure mentioned above and again executing the circuit in Fig. 9. Upon obtaining the measurement result of this circuit, we set θ_1 = ϕ_{m−1}. It is clear that this procedure may be repeated to determine each bit of ϕ, such that at the end of the j = 0 iteration, one obtains θ = ϕ, which solves the phase estimation problem. Ultimately, this procedure succeeds because θ encodes the least significant bits of ϕ, from which the next bit of ϕ may be extracted by cleverly transforming the singular values of A_j(θ). Accordingly, this algorithm may be seen as a binary search through the bits of ϕ, a key idea that we return to in Section V D.
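The classical skeleton of this bit-by-bit feedback, assuming ideal sign-function thresholding of σ_j = |cos(π(2^j ϕ − θ))| (the function name below is hypothetical), can be simulated directly:

```python
import math

def qsvt_phase_feedback(phi, m):
    """Classically simulate the ideal feedback loop: extract bits ϕ_m, ..., ϕ_1."""
    theta = 0.0
    for j in range(m - 1, -1, -1):
        theta /= 2                                   # θ ← θ/2 (trivial on the first pass)
        sigma = abs(math.cos(math.pi * (2**j * phi - theta)))
        bit = 0 if sigma > 1 / math.sqrt(2) else 1   # ideal Θ(1/√2 − σ) thresholding
        theta += bit / 2                             # set θ₁ to the measured bit
    return theta

phi = 0.6875                 # 0.1011 in binary: an m = 4 bit phase
assert abs(qsvt_phase_feedback(phi, 4) - phi) < 1e-12

# Running only n = 1 iteration on ϕ = 0.011 (= 3/8) yields θ = 0.1 (= 1/2),
# the best 1-bit approximation, within 2^{−n−1} of ϕ as in Theorem 7
assert abs(qsvt_phase_feedback(3 / 8, 1) - 3 / 8) <= 0.25
```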
At this stage, however, one may object to this algorithm sketch, as it requires m iterations, and it could be the case that m is unknown or prohibitively large. This is not an issue: as we prove in Appendix B 1, if we start at j = n − 1 for some positive integer n, then we have (with a minor procedural modification needed for n < m):

Theorem 6. If n ≥ m, and one can exactly implement the sign function through QSVT, then at the end of this procedure, θ = ϕ.

Theorem 7. If n < m (including the case m = ∞), and one can exactly implement the sign function through QSVT, then at the end of this procedure, |θ − ϕ| ≤ 2^{−n−1}.
As evidenced by these theorems, which are easily proven by induction, this procedure necessarily outputs an n-bit approximation to ϕ and is the essence of the algorithm for phase estimation by QSVT. However, in spite of the simple appearance of this algorithm, a few issues must be addressed before presenting its concrete formulation.
FIG. 9. The quantum circuit used to evaluate ϕj+1 at each iteration of the phase estimation by QSVT procedure.

Caveats
Here we address a few caveats in the above algorithm sketch. First, as described thus far, this procedure applies QSVT to the matrix A_j(θ), which requires a unitary block encoding of A_j(θ). Fortunately, this encoding can be easily constructed as the circuit in Figure 10, which we denote by W_j(θ). A simple calculation of Figure 10 indicates that W_j(θ) has the desired form:

W_j(θ) = (H ⊗ I) ( |0⟩⟨0| ⊗ I + |1⟩⟨1| ⊗ e^{−2πiθ} U^{2^j} ) (H ⊗ I)
       = [ A_j(θ)                        ½(I − e^{−2πiθ} U^{2^j})
           ½(I − e^{−2πiθ} U^{2^j})      A_j(θ)                   ],

so that A_j(θ) is encoded in the upper-left block. Another concern with the aforementioned procedure is that the sign function Θ(1/√2 − x) may seem overcomplicated and unnecessary, as the same results could seemingly be attained with a simpler function, such as 1 − x. However, the sign function is crucial when one desires merely an n-bit approximation to ϕ, which is necessary when m is unknown or prohibitively large, as is often the case in practice. In this situation, the use of the sign function (the exact sign function, rather than an approximation) will ultimately set θ equal to the n-bit number closest to ϕ. For example, if 0.ϕ_m ϕ_{m+1}... = 0.011... at the first iteration, then despite the fact that ϕ_m = 0, the use of the sign function will set θ_1 = 1, because 0.1 is the best 1-bit approximation to 0.011....
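The block-encoding circuit W_j(θ) is just a two-term linear combination of unitaries, and its claimed form can be confirmed numerically; the single-qubit U and the values of θ and j below are arbitrary illustrative choices:

```python
import numpy as np

phi1, phi2 = 0.3, -0.1                   # arbitrary eigenphases of U (illustrative)
U = np.diag(np.exp(2j * np.pi * np.array([phi1, phi2])))
theta, p = 0.25, 2                       # θ, and the power j = 2 (named p here)

V = np.exp(-2j * np.pi * theta) * np.linalg.matrix_power(U, 2**p)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
P0, P1 = np.diag([1.0, 0]), np.diag([0, 1.0])

# W_j(θ) = (H ⊗ I) · controlled-V · (H ⊗ I)
W = np.kron(H, np.eye(2)) @ (np.kron(P0, np.eye(2)) + np.kron(P1, V)) @ np.kron(H, np.eye(2))

A_j = 0.5 * (np.eye(2) + V)              # the desired block A_j(θ) = ½(I + e^{−2πiθ}U^{2^j})
assert np.allclose(W[:2, :2], A_j)       # encoded in the upper-left block
assert np.allclose(W.conj().T @ W, np.eye(4))
```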
Lastly, the most significant problem with this procedure as it currently stands is that the sign function is not a polynomial and therefore cannot be implemented exactly in QSVT. Instead, Θ(1/√2 − x) must be approximated by a polynomial, a good choice for which is P_Θ^{ε,∆}(1/√2 − x) from Sec. III. However, as in Sec. IV, in order to ensure that the target polynomial has definite parity, we must use a symmetrized polynomial, the phase estimation polynomial P_PE^{ε,∆}(x). Like the eigenvalue threshold polynomial, P_PE^{ε,∆}(x) is even, is bounded in magnitude by 1, and behaves as P_Θ^{ε,∆}(1/√2 − x) for x ≥ 0.

This approximation comes with the potential for error, as an iteration of this procedure may now fail with a nonzero probability, where failure is defined as measuring an incorrect bit of an approximation to ϕ (an approximation that suffers error ≤ 2^{−n−1}), which will result in an inaccurate θ such that |θ − ϕ| ≥ 2^{−n−1}. Therefore, we reformulate our goal probabilistically: obtain an n-bit approximation to ϕ with probability at least 1 − δ.
The probability of failure is dictated by the values of ∆ and ε. In particular, for a nonzero ∆, note that if σ_j falls in the region where P_PE^{ε,∆} is not a good approximation to the sign function, namely 1/√2 − ∆/2 < σ_j < 1/√2 + ∆/2, then an error may occur. Fortunately, if ∆ is made sufficiently small, this error mode does not induce significant errors:

Theorem 8. If ∆ is below a constant threshold (the bound on ∆ quoted in Algorithm 3 suffices), then σ_j ∈ (1/√2 − ∆/2, 1/√2 + ∆/2) is only possible at the j = n − 1 iteration, such that an error due to ∆ can only occur at this iteration. If an error is made at this iteration, then at the end of the algorithm, |θ − ϕ| < 2^{−n}, assuming no errors are made at later iterations.
This theorem, proven in Appendix B 2, indicates that the error caused by ∆ is at most 2^{−n} when this bound is obeyed, which we assume in the following.
Next, a nonzero ϵ can also induce errors.
For a nonzero ϵ, the failure probability of the measurement in Figure 9 goes as O(ϵ²), and this type of error may occur at any iteration. To study this quantitatively, consider an iteration j < n − 1, and focus on the scenario σ_j < 1/√2 − ∆/2, in which case P_PE^{ϵ,∆}(σ_j) ∈ [−1, −1 + ϵ], and θ_1 = 0 is the ideal measurement result (the case in which σ_j > 1/√2 + ∆/2 is analogous). By evaluating the circuit in Figure 9, with P_PE^{ϵ,∆}(σ_j) in place of the sign function, we find that the final (unnormalized) state of the ancilla qubit is such that the probability of failure (measuring |1⟩) at this iteration is O(ϵ²).

What ϵ is sufficient for our purposes? As our procedure consists of n iterations, we desire that each iteration fail with probability no greater than δ/n, such that the overall algorithm succeeds with probability at least 1 − δ by the union bound. From the above bound, it suffices to choose ϵ ≤ 2δ/n. Therefore, with these conditions on ∆ and ϵ, this algorithm sketch succeeds with probability ≥ 1 − δ at determining a θ such that |θ − ϕ| < 1/2^n, as desired.
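To build intuition for this iterative structure, the classical logic of the algorithm sketch can be simulated directly: replace the QSVT measurement with an exact evaluation of the singular value σ_j = |cos(π(0.ϕ_{j+1}ϕ_{j+2}... − θ))| and threshold it at 1/√2, exactly as the ideal sign function would. This is a minimal sketch of our own (the function name and bit-update bookkeeping are illustrative conventions), not a quantum simulation:

```python
import numpy as np

def pe_by_thresholding(phi, n):
    """Classical simulation of the iterative logic behind phase estimation
    by QSVT, with the exact sign function standing in for P_PE.

    At iteration j, the block encoding of A_j(theta) has singular value
    sigma_j = |cos(pi * (frac(2^j * phi) - theta))|; sigma_j < 1/sqrt(2)
    signals that the current least significant bit of phi is 1.
    """
    theta = 0.0
    for j in range(n - 1, -1, -1):
        theta /= 2.0                      # bit shift: theta <- theta/2
        x_j = (2**j * phi) % 1.0          # the fraction 0.phi_{j+1} phi_{j+2} ...
        sigma_j = abs(np.cos(np.pi * (x_j - theta)))
        bit = 1 if sigma_j < 1 / np.sqrt(2) else 0
        theta += bit / 2.0                # set theta_1 to the measured bit
    return theta

# theta converges to the best n-bit approximation of phi
print(pe_by_thresholding(0.3, 4))   # 0.3125 = round(0.3 * 16) / 16
```

Running the loop for ϕ = 0.7 with n = 3 returns 0.75, the closest 3-bit fraction, illustrating the rounding behavior of the sign function described above.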

B. The Complete Algorithm
After supplying intuition and addressing potential issues, we now present the complete algorithm for phase estimation by QSVT as Algorithm 3. We depict this algorithm's circuit in Figure 11, which illustrates its resemblance to the inverse quantum Fourier transform, a connection we elaborate on in Section V D. We further unpack the details of the phase estimation by QSVT circuit in Figure 12; here, the operator R is defined in Eq. (73) and is a z-rotation, up to a global phase.

Algorithm 3: Phase Estimation by QSVT
Input: An oracle that performs a controlled-U^{2^j} operation, an eigenstate |u⟩ of U with eigenvalue e^{2πiϕ}, and n + 1 ancilla qubits (or 1 ancilla qubit that is reused n + 1 times). Also, an ϵ ≤ 2δ/(n + 1) and a ∆ < 2 cos(3π/16) − 1/√2.
7 Apply a Hadamard gate to the ancilla qubit, and measure it in the computational basis.
8 Set the first bit of θ (i.e., θ_1) equal to the result of this measurement.
9 With j = 0, repeat lines 5-8 of the above loop.
10 Set the bit θ_0 equal to the result of the measurement.
FIG. 11. An abstract overview of the circuit performing phase estimation through QSVT (Algorithm 3). The double lines indicate that measurement results are fed forward (via the parameter θ) to all future controlled QSVT operations; this adaptive process is unpacked in Figure 12. In essence, the circuit systematically computes the least significant bit of an unknown quantum phase, adaptively bit-shifts this phase, and repeats. Note the similarity of this operation to the inverse quantum Fourier transform, a connection we elaborate on in Section V D.

Note that we have included the additional lines 9-10 in the algorithm to account for the bit in the one's place of θ (i.e., θ_0), which is needed when n < m, such that θ is an approximation to ϕ. Typically θ_0 = 0, but θ_0 = 1 is possible in the scenario that 1.0 is a good approximation to ϕ (for example, rounding 0.11111 to two binary decimals yields 1.00). As a result of this additional iteration, we must now require ϵ ≤ 2δ/(n + 1) to ensure that the algorithm succeeds with probability at least 1 − δ.
Also observe that Algorithm 3 consists of O(n) iterations, each of which performs an instance of QSVT with a polynomial of degree O((1/∆) log(1/ϵ)) = O(log(n/δ)), so phase estimation by QSVT queries the controlled-U^{2^j} oracle O(n log(n/δ)) times. In addition, the multiplicative factor 1/∆ is not particularly prohibitive because ∆ = O(1) need only satisfy ∆ < 2 cos(3π/16) − 1/√2 ≈ 0.25. While this O(n log n) query complexity may appear to provide a speedup over conventional phase estimation, whose gate count scales as O(n²), we note that phase estimation by QSVT requires O(n log n) queries to the controlled-U^{2^j} oracle, whereas conventional phase estimation requires only O(n) queries [18]. Nonetheless, we believe that this logarithmic factor could be removed by integrating phase estimation by QSVT with a more sophisticated phase estimation protocol, such as the inference procedures of [44,45], which already attain speedups over conventional phase estimation. Indeed, the QSVT-based phase estimation algorithm of [17] requires only O(n) queries to the oracle, achieved by a procedure very similar to Algorithm 3 that varies the degree of the QSVT polynomial at each iteration (schematically, by increasing ∆ at each iteration).
Likewise, phase estimation by QSVT may be modified to better suit specialized scenarios, such as when one is restricted in the polynomials that may be implemented through QSVT, or when one has prior knowledge about ϕ. For example, suppose that one cannot implement ϵ ≤ 2δ/(n + 1) with certainty, and so instead must choose ϵ = O(1). To alleviate this difficulty, one could repeat the measurement in line 7 O(log(n/δ)) times and set θ_1 equal to the majority vote of the measurement results, which will yield an accurate value of θ with probability ≥ 1 − δ. Similarly, suppose that the constraint on ∆ cannot be implemented. Then, again using repeated measurements, if σ_{n−1} is very close to 1/√2, the majority of the measurement results may not reflect the correct choice of θ_1. But with high probability, the measurement results will be ambiguous, signaling that σ_{n−1} is near 1/√2. In this event, one could move to an even higher iteration, say some j > n − 1, where it is likely that σ_j is not near 1/√2 and ϕ_{j+1} is easily determined (and if σ_j is again near 1/√2, one could again move to a higher iteration j′ > j). With this bit correctly determined, one may proceed through the rest of the iterations and attain a good estimate of ϕ.
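The majority-vote repair just described can be checked with a short exact computation. Below, a per-measurement failure probability p < 1/2 (p = 0.3 is a hypothetical stand-in for a constant-ϵ polynomial) is fed into the exact binomial tail; the majority is wrong with probability decaying exponentially in the number of repetitions k, so k = O(log(n/δ)) repetitions suffice:

```python
from math import comb

def majority_failure(p, k):
    """Probability that at least ceil(k/2) of k independent measurements are
    wrong, given per-measurement failure probability p (k odd)."""
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range((k + 1) // 2, k + 1))

p = 0.3   # hypothetical per-shot error from a constant-epsilon polynomial
for k in (3, 11, 31):
    print(k, majority_failure(p, k))
```

For p = 0.3 the failure probability falls to roughly the 1% level by k = 31, consistent with the O(log(n/δ)) repetition overhead quoted above.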

C. Applications to Factoring and Beyond
Here, we discuss how phase estimation by QSVT may be applied to the factoring problem and used for robust phase estimation.

Factoring
Phase estimation by QSVT may be straightforwardly applied to the factoring problem. Recall that in the factoring problem, the oracle U behaves as U|u⟩ = |xu (mod N)⟩ for some x < N and has eigenvalues e^{2πis/r}, where r is the order of U (i.e., x^r = 1 (mod N)) and s ∈ {0, 1, ..., r − 1} [18]. The goal is to determine some eigenphase s/r, from which r may be determined by the continued fractions algorithm and a factor of N may be extracted. In addition, in the factoring problem, one does not have direct access to an eigenstate |u_s⟩, but instead can prepare a uniform superposition of eigenstates, (1/√r) Σ_s |u_s⟩.
First, consider employing the phase estimation by QSVT algorithm, but replacing |u⟩ with the superposition (1/√r) Σ_s |u_s⟩. With this modification, the phase estimation algorithm begins in this superposition state and converges to a single eigenstate by the end of the algorithm. To see this, note that each measurement of an ancilla qubit restricts the state to a superposition over eigenstates whose eigenvalues are consistent with the measurement results thus far (i.e., the eigenphases whose least significant bits agree with the measurement results). Because the eigenvalues are not degenerate, the state at the end of the algorithm will be an eigenstate, say |u_s⟩, whose eigenphase is θ ≈ s/r, to which the continued fractions algorithm may be applied to determine r.
How accurate must the approximation θ ≈ s/r be? If our estimate θ obeys |s/r − θ| ≤ 1/(2r²), then we can determine the fraction s/r by applying the continued fractions algorithm to θ [18]. We can ensure that this condition is met by selecting n = 2 log₂(N) + 1, such that |s/r − θ| ≤ 2^{−n} = 1/(2N²) ≤ 1/(2r²), as desired. Thus, subject to these conditions, phase estimation by QSVT may be used to factor N in time O(n log(n/δ)) = O(log N log(log N/δ)).
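The classical post-processing step can be illustrated with Python's fractions module. The numbers below are a worked example of our own choosing (N = 21 and x = 2, which has order r = 6 modulo 21), with θ taken to be the n-bit rounding of s/r that phase estimation would provide:

```python
from fractions import Fraction
from math import ceil, gcd, log2

N, x = 21, 2                         # 2^6 = 64 = 1 (mod 21), so r = 6
s, r = 1, 6                          # eigenphase s/r targeted by phase estimation

n = 2 * ceil(log2(N)) + 1            # enough bits: 2^-n <= 1/(2 N^2)
theta = round(s / r * 2**n) / 2**n   # n-bit estimate, |theta - s/r| <= 2^-n

# Continued fractions recover s/r from theta, since |theta - s/r| < 1/(2 r^2)
frac = Fraction(theta).limit_denominator(N)
r_found = frac.denominator
print(r_found)                       # 6

# Extract factors of N from the (even) order r
factors = {gcd(x**(r_found // 2) - 1, N), gcd(x**(r_found // 2) + 1, N)}
print(sorted(factors))               # [3, 7]
```

Here `limit_denominator` performs the continued fractions step; the final gcd computations succeed because r is even and x^{r/2} ≠ −1 (mod N) in this example.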

Robust Phase Estimation
Phase estimation by QSVT also comes in handy when U cannot be implemented reliably. For instance, suppose that U^{2^j} can only be implemented approximately, such that the error in U^{2^j} may be interpreted as an additive error in the quantity 0.ϕ_{j+1}ϕ_{j+2}...ϕ_m of Eq. (65) (i.e., the erroneous singular value is σ̃_j = |cos(π(0.ϕ_{j+1}ϕ_{j+2}... + ε − θ))|, where ε is the additive error). If ∆ is made sufficiently small, then these errors are not large enough to induce an incorrect measurement result with high probability, and so they can be tolerated.
This robustness to error can be quantified via the analysis in Appendix B 2. Formally, if we choose a ∆ small enough that 1/4 − (1/π) arccos(1/√2 + ∆/2) < γ for some γ > 0, then we can tolerate additive errors in 0.ϕ_{j+1}ϕ_{j+2}... of magnitude < 1/8 − 3γ/2. In this sense, phase estimation by QSVT provides protection against noise of this form, a nice improvement over conventional phase estimation.
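A quick numerical check of the bound just quoted shows how the tolerable additive error shrinks as ∆ grows (the specific ∆ values are arbitrary illustrations):

```python
import numpy as np

def gamma(delta):
    # Smallest gamma satisfying 1/4 - (1/pi) * arccos(1/sqrt(2) + delta/2) < gamma
    return 0.25 - np.arccos(1 / np.sqrt(2) + delta / 2) / np.pi

for delta in (0.0, 0.1, 0.2):
    g = gamma(delta)
    tol = 1 / 8 - 3 * g / 2   # tolerable additive error in 0.phi_{j+1}phi_{j+2}...
    print(f"delta={delta:.1f}  gamma={g:.4f}  tolerance={tol:.4f}")
```

At ∆ = 0 the tolerance is exactly 1/8, and it remains positive over the full allowed range ∆ < 2 cos(3π/16) − 1/√2.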

D. Emergent Quantum Fourier Transform
An integral component of Algorithm 3 is the cascading of multiple QSVT sequences, using the results from one instance to control subsequent instances. In its simplest case, this cascading structure reduces to the celebrated quantum Fourier transform (QFT), and generalizes the QFT in more sophisticated constructions. We elaborate on this key connection in this subsection.
To spot this connection, observe that the circuit of Figure 11 resembles that of the (inverse) quantum Fourier transform, which becomes especially apparent in the limit of a length-one QSVT sequence. From a high level, this arises because Algorithm 3 may be viewed as a binary search through the bits of ϕ, where one bit of ϕ is determined at each iteration. Similarly, the (inverse) quantum Fourier transform may be viewed as a binary search through the bits defining an input state, where each qubit stores a bit of the input state at the end of the computation. Denoting the input state by |x⟩ = |x_1 x_2 ... x_n⟩ where x = 0.x_1 x_2 ... x_n, recall that the action of the inverse quantum Fourier transform is to map the Fourier state (1/√(2^n)) Σ_{y=0}^{2^n − 1} e^{2πixy} |y⟩ to |x_1 x_2 ... x_n⟩. Similar to Algorithm 3, this mapping is achieved by applying Hadamards and controlled rotations to extract a bit of x from each qubit, leading to the correspondence seen here.

Let us explicitly show how the QFT naturally emerges from this construction. For the sake of pedagogical clarity, we will switch to performing QSVT by projecting onto the |0⟩⟨0| block of the QSVT sequence, as in the (W_x, S_z, ⟨0|·|0⟩)-QSP convention of Appendix A, which makes the connection to the QFT more apparent. This change is permissible because the threshold function, an even extension of Θ(1/√2 − x), can be well approximated by a polynomial in the more restricted class of polynomials of this convention, as presented in Appendix D 7.
Under these conditions, let us make use of the block encoding of Eq. (68) (Figure 10), where Π = Π̃ = |0⟩⟨0| ⊗ I. Then a single instance of the signal rotation operator (the block encoding) followed by the signal processing rotation operator (the projector-controlled phase shift), which is iterated in QSVT, may be fully realized as in the circuit on the left side of Figure 13. In this depiction, the top qubit is used to implement the projector-controlled phase shift as per Figure 3, and the middle qubit is the block encoding qubit used to access the encoding of A_j(θ) as per Figure 10. Evaluating this circuit allows simplification to the right side of the figure, where we are now able to ignore the top qubit in favor of just the block encoding qubit, an identity that follows from the simple choice of projector.

A block encoding qubit can also double as one of the ancilla qubits used to read out a bit of ϕ; let us refer to this type of qubit as a phase readout qubit. To make this identification explicit and draw a parallel with the QFT, let us also assume that ϕ is an m = 3 bit number. Then, employing a length-d QSVT sequence, the resulting phase estimation circuit may be schematically expressed as in Figure 14. Here, each phase readout qubit doubles as a block encoding qubit, which is crucial to the emergence of the QFT, and is indeed a valid restructuring. In particular, instead of measuring each phase readout qubit, updating θ, and then applying a controlled θ-rotation at the next iteration, as in Algorithm 3, here we directly apply rotations controlled by the phase readout qubit, rendering intermediate QSP measurement steps unnecessary.
We implement this with the rotation operations R_ℓ defined in Eq. (73). Similar to Figure 12, our construction employs controlled-R_ℓ's to implement the following behavior: if a phase readout qubit is |0⟩, then it does not contribute to θ and no phase shift is applied; alternatively, if the phase readout qubit is |1⟩, then it contributes to θ and the corresponding phase shift is applied. Moreover, we arrange the R_ℓ's in a pattern that implicitly performs the θ ← θ/2 operation after each iteration; specifically, if ancilla qubit k controls an R_ℓ at one iteration, then at the next iteration ancilla qubit k controls an R_{ℓ+1}. It may be easily verified that this procedure correctly encodes A_j(θ) = ½(I + e^{−2πiθ} U^{2^j}) at each iteration and is entirely equivalent to the formalism of Algorithm 3, thus justifying the recycling of phase readout qubits as block encoding qubits.

Further, let the QSVT sequence length be simply d = 1, giving the quantum circuit depicted in Figure 15. This allows us to further simplify the circuit, without changing any circuit elements, by observing that the controlled-U^{2^j} operations commute with the R_ℓ operations, which are z-rotations. Slide the first three Hadamard gates over to the far left, and gather all the remaining gates on the control qubits on the far right, dropping the signal processing phases φ, which are inconsequential at this level of simplification. This systematic simplification results in the quantum circuit shown in Figure 16, where the inverse QFT circuit emerges naturally as the set of gates following the controlled-U^{2^j} gates. This is the standard quantum phase estimation circuit.

FIG. 16. Illustration of three-qubit quantum phase estimation using QSP, in the simplified case when d = 1, with gates slid along wires to highlight the three-qubit inverse QFT circuit which emerges (gates in dotted rectangular box).
As this pattern persists to higher orders, we see that quantum phase estimation emerges from cascaded QSVT (or QSP) sequences employed to iteratively determine the bits of an eigenphase. The QSVT construction of this transform allows for a richer variety of transformations, however, including trade-offs between the accuracy of each iterative step (by increasing d), the number of exponentiated powers of U to employ, the number of qubits to use in the transform, and more.
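The claimed mapping at the heart of this correspondence can be verified directly in numpy for a small register; here the inverse QFT is simply the conjugate transpose of the discrete Fourier transform matrix (a numerical check, not a circuit-level construction):

```python
import numpy as np

n = 3
dim = 2**n

# QFT matrix: F[x, y] = exp(2*pi*i*x*y / 2^n) / sqrt(2^n)
xs = np.arange(dim)
F = np.exp(2j * np.pi * np.outer(xs, xs) / dim) / np.sqrt(dim)
inv_qft = F.conj().T

# The inverse QFT maps the Fourier state of x back to the basis state |x>
for x in range(dim):
    fourier_state = np.exp(2j * np.pi * x * xs / dim) / np.sqrt(dim)
    out = inv_qft @ fourier_state
    assert np.allclose(out, np.eye(dim)[x])
```

The loop confirms that each Fourier state is sent exactly to the corresponding computational basis state, the same binary-search-through-bits behavior exhibited by the cascaded QSVT circuit.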

VI. FUNCTION EVALUATION PROBLEMS BY QSVT
Another useful application of QSVT is to evaluate a function of a matrix; we term such tasks function evaluation problems. Schematically, if f(x) is the function of interest, such that we wish to evaluate f(A), then we may solve this problem by employing QSVT with a polynomial P(x) that approximates f(x). While these problems are generally more approachable with a quantum eigenvalue transform, they are still straightforward to solve with QSVT.
Here we summarize prominent function evaluation problems, most notably Hamiltonian simulation and matrix inversion. Our discussion summarizes results from [12], wherein the full details of these procedures can be found.

A. Hamiltonian Simulation by QSVT
A motivating goal of quantum computation is to simulate the time evolution of a state under a Hamiltonian, a problem known as Hamiltonian simulation. That is, for a Hamiltonian H and some time t, we would like to approximate the time evolution operator e −iHt , which is evidently a function evaluation problem with the function f (x) = e −ixt .
In the setup of this problem, we assume access to H, of which we desire a unitary block encoding so that we may solve this problem with QSVT. However, as we discussed in Sec. IV, such a unitary block encoding is only realizable if ‖H‖ ≤ 1. In general, then, we instead determine an α ≥ ‖H‖ and construct a unitary block encoding of H/α. Again, this requires some prior knowledge about H, a drawback we elaborate on in Sec. VII, but fortunately such a block encoding can be achieved for a large class of Hamiltonians [8,12].
With this rescaled block encoding, one can equivalently imagine that our goal is to simulate the time evolution of a system under the rescaled Hamiltonian H/α for a time αt. This equivalence holds because the corresponding time evolution operators are identical: e^{−iHt} = e^{−i(H/α)(αt)}.

How might this problem be solved with QSVT? Naively, one may try to employ QSVT with a polynomial approximation to e^{−ixt} (here, we view t as a parameter, not a variable). However, because the exponential function does not have definite parity, it does not satisfy the constraints on Poly(a) = ⟨+|U_φ|+⟩ discussed in Section II A and Appendix A. To circumvent this issue, one can instead apply QSVT twice: once with an even polynomial approximation to cos(xt), and once with an odd polynomial approximation to sin(xt), both of which have definite parity. Then, using the circuit illustrated in Figure 17, one can sum together the results of these two QSVT executions to obtain cos^{(SV)}(Ht) − i sin^{(SV)}(Ht) = e^{−iHt}, as desired.
However, note that the above relation only holds if the eigenvalues of H are positive, such that the singular values equal the eigenvalues. As we discussed in Sec. IV, if this is not the case, we may instead use the block encoding of H/α and a circuit analogous to Figure 10 to construct a block encoding of the positive definite matrix ½(H/α + I). Applying the aforementioned QSVT procedure to this matrix for a time 2αt produces a block encoding of e^{−iHt}, up to a global phase. In the remainder of this section, we assume that this issue has been alleviated.

Returning to the QSVT scheme, we note that in [12], Gilyén et al. approximate the functions cos(xt) and sin(xt) by polynomials using the Jacobi-Anger expansion:

cos(xt) = J_0(t) + 2 Σ_{k=1}^{∞} (−1)^k J_{2k}(t) T_{2k}(x),
sin(xt) = 2 Σ_{k=0}^{∞} (−1)^k J_{2k+1}(t) T_{2k+1}(x),

where J_i(x) is a Bessel function of order i, and T_i(x) is a Chebyshev polynomial of order i. One can attain ϵ-approximations to cos(xt) and sin(xt) by truncating these expressions at a sufficiently large index k_ϵ. The necessary truncation index k_ϵ may be determined by a function r(t, ϵ), which is defined implicitly by ϵ = (|t|/r(t, ϵ))^{r(t, ϵ)} and scales asymptotically as r(t, ϵ) = Θ(|t| + log(1/ϵ)/log(e + log(1/ϵ)/|t|)). In particular, truncating the above series at index k_ϵ yields ϵ-approximations to cos(xt) and sin(xt), respectively, where 0 < ϵ < 1/e. Because T_i(x) is a polynomial of degree i with definite parity, these approximations are polynomials of degree 2k_ϵ and 2k_ϵ + 1, respectively, with the correct even/odd parity. Let us denote these polynomials by P_cos(x; t) and P_sin(x; t).
Lastly, because cosine and sine are bounded in magnitude by 1, these ϵ-approximations only obey |P_cos(x; t)|, |P_sin(x; t)| ≤ 1 + ϵ. However, the target polynomials must be bounded in magnitude by 1 in order to be implemented through QSVT. As in [23], we can fix this by rescaling the polynomials by a factor of 1/(1 + ϵ), at the expense of increasing the error of these approximations to 2ϵ. This can be seen with the triangle inequality as |P_cos(x; t)/(1 + ϵ) − cos(xt)| ≤ (1/(1 + ϵ))(|P_cos(x; t) − cos(xt)| + ϵ|cos(xt)|) ≤ (1/(1 + ϵ))(ϵ + ϵ) ≤ 2ϵ, and similarly for P_sin(x; t).
To determine the complexity of this Hamiltonian simulation algorithm, first recall that our effective goal is to simulate the rescaled Hamiltonian H/α for time αt.
In addition, note that the truncations of the Jacobi-Anger expansion used in this procedure should be ϵ/4-approximate such that, when rescaled by 1/(1 + ϵ/4), they are ϵ/2-approximations to cos(xt) and sin(xt). With this choice, the sum of these approximations, which is the approximation to e^{−ixt}, is ϵ-approximate by the triangle inequality. Incorporating these conditions, we see that this QSVT-based Hamiltonian simulation algorithm prepares an ϵ-approximate block encoding of e^{−iHt} and queries U a total of O(α|t| + log(1/ϵ)/log(e + log(1/ϵ)/(α|t|))) times. In comparing this query complexity with results quoted in the literature, α may be replaced with ‖H‖. This complexity has state-of-the-art scaling in t and ϵ for Hamiltonian simulation: it is linear in t, logarithmic in 1/ϵ, and additive in these two terms. As such, it provides a significant improvement over other algorithms [8,12,23]. We summarize Hamiltonian simulation by QSVT in Algorithm 4.
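The overall structure of this algorithm (rescale H, build even and odd approximations, and recombine them into e^{−iHt}) can be mimicked numerically. The sketch below applies the polynomials directly to the eigenvalues of H/α, which for Hermitian H reproduces what the combined QSVT circuits compute; Chebyshev interpolation stands in for the truncated Jacobi-Anger series, and the degree 30 is an arbitrary illustrative choice:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (M + M.conj().T) / 2              # a random Hermitian "Hamiltonian"
alpha = np.linalg.norm(H, 2)          # alpha >= ||H||, so ||H/alpha|| <= 1
t = 1.5

# Even/odd polynomial approximations to cos(alpha*t*x), sin(alpha*t*x) on [-1, 1]
c_cos = C.chebinterpolate(lambda x: np.cos(alpha * t * x), 30)
c_sin = C.chebinterpolate(lambda x: np.sin(alpha * t * x), 31)

# Apply the polynomials to the spectrum of H/alpha and recombine:
# cos(Ht) - i sin(Ht) = e^{-iHt}
evals, V = np.linalg.eigh(H / alpha)
f = C.chebval(evals, c_cos) - 1j * C.chebval(evals, c_sin)
approx = V @ np.diag(f) @ V.conj().T

exact = V @ np.diag(np.exp(-1j * alpha * t * evals)) @ V.conj().T   # e^{-iHt}
print(np.max(np.abs(approx - exact)))
```

Because the Chebyshev coefficients of cos and sin decay like Bessel functions, even modest degrees reproduce e^{−iHt} to near machine precision, mirroring the log(1/ϵ) degree scaling quoted above.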

B. Matrix Inversion by QSVT
Another straightforward yet widely applicable function evaluation problem is that of matrix inversion. That is, given access to a square matrix A, one wishes to construct an approximation to A^{−1}. Harrow, Hassidim, and Lloyd presented a quantum algorithm for this problem in the case that A is Hermitian [15]. In their eponymous algorithm, they prepare the state A^{−1}|b⟩, which provides a quantum solution to the linear system A|x⟩ = |b⟩.
Algorithm 4: Hamiltonian Simulation by QSVT
1 Prepare a unitary block encoding of H/α.
2 Apply QSVT to this encoding twice, using the polynomials P_cos^{ϵ/4}(x; t)/(1 + ϵ/4) and P_sin^{ϵ/4}(x; t)/(1 + ϵ/4), where P_cos^{ϵ/4}(x; t) and P_sin^{ϵ/4}(x; t) are obtained by truncating the series in Eqs. (76) and (77), respectively, at index k = ⌈½ r((e/2)α|t|, (5/4)(ϵ/4))⌉.
3 With the results of the above QSVT executions, which approximate cos^{(SV)}(Ht) and sin^{(SV)}(Ht), respectively, run the circuit in Figure 17.

Let's now look at this problem through the lens of QSVT. Suppose that we have an N × N matrix A with singular value decomposition A = W Σ V†, where Σ contains the singular values along its diagonal. As per the setup of the HHL algorithm, we also assume that the singular values of A obey σ_i ∈ [1/κ, 1] for some finite condition number κ ≥ 1 (if not, A may be rescaled to obey this condition). As the singular values are nonzero, the inverse of A exists and may be obtained as A^{−1} = V Σ^{−1} W†. Upon this realization, it is clear how to apply QSVT to matrix inversion: find an odd polynomial P(x) that approximates f(x) = 1/x over the range of singular values of A, and employ QSVT to construct P^{(SV)}(A†), which approximates A^{−1}. Finding a good polynomial P(x) is tricky because of the discontinuity of 1/x at x = 0, but it can indeed be done. In addition, this procedure requires that one can construct a unitary block encoding of A, which is feasible because ‖A‖ ≤ 1 as per the assumption that σ_i ≤ 1. Such a block encoding can indeed be achieved for a variety of matrices relevant to physics [8,12] (but again, we discuss some caveats of doing so in Sec. VII).
Moreover, because we require that the polynomial P(x) be bounded in magnitude by 1 so that it can be implemented through QSVT, we cannot directly use P(x) ≈ 1/x as our target function, since 1/σ_i ≥ 1 in general. To overcome this challenge, let us instead seek a polynomial approximation to a function that behaves as (1/(2κ))(1/x) over the range [−1, 1] \ (−1/κ, 1/κ), which will invert each singular value and is bounded in magnitude by 1/2 over this range (we use the multiplicative factor 1/(2κ) instead of 1/κ to avoid the need to rescale by 1/(1 + ϵ) when we ϵ-approximate this function, as was done in Section VI A). This procedure will output an approximation of (1/(2κ))A^{−1}, and because κ is known a priori, this multiplicative factor is not prohibitive to calculations. However, due to this multiplicative factor, we now desire an ϵ/(2κ)-approximation to (1/(2κ))A^{−1}, from which we effectively attain an ϵ-approximation to A^{−1}.
The appropriate polynomial for matrix inversion is thus an ϵ/(2κ)-approximation to (1/(2κ))(1/x). In Appendix C, we demonstrate how to construct such a polynomial. While the construction is a bit involved, in essence it is a product of a polynomial approximation to 1/x over a restricted range (constructed as a sum of Chebyshev polynomials) and a polynomial approximation to a rectangular function (constructed as a sum of the sign function approximations, P_Θ^{ϵ,∆}(x)).
We term this polynomial the matrix inversion polynomial, denoted by P_MI^{ϵ,κ}(x), and defer its rigorous definition to Appendix C. In addition to being an ϵ/(2κ)-approximation to (1/(2κ))(1/x), P_MI^{ϵ,κ}(x) has odd parity and is bounded in magnitude by 1 for x ∈ [−1, 1]. Hence, the matrix inversion polynomial may be implemented through QSVT.
Moreover, we also show in Appendix C that P_MI^{ϵ,κ}(x) has degree d = O(κ log(κ/ϵ)).
As QSVT requires O(d) calls to the block encoding, we see that matrix inversion by QSVT has complexity O(κ log(κ/ϵ)). This is an improvement over the conventional HHL algorithm, which has runtime O(κ² log(N)/ϵ). It is quite impressive that the QSVT algorithm provides a large improvement in the scaling with κ and 1/ϵ, although similar results have been achieved with non-QSVT methods [46]. In addition, the HHL algorithm uses a sparse Hamiltonian simulation subroutine with target Hamiltonian A, resulting in the log(N) term in its complexity, whereas the QSVT algorithm does not use such a subroutine, so N dependence is absent from its complexity (however, constructing the necessary block encoding of A may scale with N). We summarize matrix inversion by QSVT in Algorithm 5.
Moreover, like the HHL algorithm, matrix inversion by QSVT may be used to solve the linear system A|x⟩ = |b⟩ by applying the block encoding of (1/(2κ))A^{−1} to |b⟩, which yields an ϵ-approximation to A^{−1}|b⟩ upon rescaling by 2κ. As discussed at the end of Section II D, this procedure requires that we project into the desired block of the QSVT sequence operator, which may be performed with little overhead.
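The singular-value mechanics can be demonstrated numerically: fit an odd polynomial to 1/(2κx) on [1/κ, 1], apply it to the singular values of A†, and compare against A^{−1}. A least-squares fit in an odd Chebyshev basis stands in here for the matrix inversion polynomial P_MI of Appendix C (the degree and matrix sizes are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
kappa, n = 4.0, 4

# Random real A = W Sigma V^T with singular values in [1/kappa, 1]
W, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
sigma = np.linspace(1 / kappa, 1.0, n)
A = W @ np.diag(sigma) @ V.T

# Odd polynomial approximating 1/(2*kappa*x) on [1/kappa, 1]:
# least-squares fit over the odd Chebyshev polynomials T_1, T_3, ..., T_51
xs = np.linspace(1 / kappa, 1.0, 400)
ks = np.arange(1, 52, 2)
G = np.cos(ks[None, :] * np.arccos(xs)[:, None])
coeffs, *_ = np.linalg.lstsq(G, 1 / (2 * kappa * xs), rcond=None)

def P(x):
    return np.cos(ks[None, :] * np.arccos(x)[:, None]) @ coeffs

# QSVT applies P to the singular values of A^T: P^{(SV)}(A^T) = V P(Sigma) W^T
approx_inv = 2 * kappa * (V @ np.diag(P(sigma)) @ W.T)
print(np.max(np.abs(approx_inv - np.linalg.inv(A))))
```

Odd parity is guaranteed by using only odd-index Chebyshev terms, mirroring the parity constraint QSVT imposes on implementable polynomials.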
Lastly, we note that this result can be further extended. With some minor adjustments, this QSVT-based algorithm can be adapted to prepare the pseudo-inverse of a rectangular matrix, which is useful in various machine learning contexts [12].
Algorithm 5: Matrix Inversion by QSVT
Input: Access to A, an error tolerance ϵ, and a condition number κ ≥ 1/(min_i σ_i).
Output: A block-encoded ϵ/(2κ)-approximation of (1/(2κ))A^{−1}, which is effectively equivalent to an ϵ-approximation to A^{−1}.
Runtime: O(κ log(κ/ϵ)) queries to (a block encoding of) A†.
Procedure:
1 Prepare a unitary block encoding of A†.
2 Apply QSVT to this block encoding to compute (P_MI^{ϵ,κ})^{(SV)}(A†), where the polynomial P_MI^{ϵ,κ}(x) is defined in Eq. (C6) of Appendix C.

VII. DISCUSSION
In this paper, we have shown how the quantum singular value transformation encapsulates the three major quantum algorithms. Paralleling [12], we have constructed QSVT-based algorithms for search, Hamiltonian simulation, and the quantum linear systems problem. Toward this end, we have also derived QSVT-based algorithms for the eigenvalue threshold problem and phase estimation. Moreover, the utility of QSVT is not entirely enumerated here; further applications of QSVT to the quantum OR lemma, quantum machine learning, quantum walks, fractional query implementation, and Gibbs state preparation appear in the literature [12].
It is insightful that QSVT encompasses such a broad spectrum of problems. Effectively, QSVT provides a series of dials (i.e., a well-defined parameterization) that can be turned to transform from one algorithm to another. In addition, when there is sufficient structure inherent to the problem of interest, the resulting algorithm often becomes more efficient. Consequently, QSVT provides one lens through which to analyze the source of quantum advantage, and make concrete the somewhat vague tradeoff conjectured between problem structure and quantum algorithmic efficiency (while maintaining a significant gap between optimal classical and quantum performance). In aggregate, these constructions suggest that it is wise to continue to search for new quantum algorithms in the setting of QSVT.
There is ample room for future research in this area. Notably, various quantum algorithms have not yet been constructed from QSVT-based subroutines, such as variational algorithms like the variational quantum eigensolver [47] or the quantum approximate optimization algorithm [48]. It would be fascinating to see if QSVT can encompass, or perhaps even enhance, these hybrid quantum algorithms as well. This work also begets the question of how else one might tweak the parameters of QSVT to create novel algorithms, or extend previously known algorithms to novel noise settings. As QSVT is intuitive and flexible, there is likely a large class of problems that are amenable to analysis by QSVT and admit a significant quantum advantage.
Moreover, note that there is a significant caveat in the use of QSVT, arising from the requirement of block encodings. In a typical implementation of QSVT on a quantum computer, we may imagine that the matrix to which we would like to apply a transform, say A, is provided in a quantum random-access memory ("QRAM"), from which we may straightforwardly construct a block encoding of A/‖A‖_F [12]. One could then apply QSVT to this encoding with a runtime linear in ‖A‖_F, similar to how the runtimes of the eigenvalue threshold algorithm and the Hamiltonian simulation algorithm of this paper are linear in α. However, assuming that one has a classical computer with sampling and query access to A, as a classical analog of having A loaded into QRAM, one can acquire access to a singular value transformation of A by executing a classical (not quantum!) algorithm [49]; this classical algorithm impressively has a runtime polynomial in ‖A‖_F. This polynomial runtime suggests that QRAM-based QSVT cannot always achieve an exponential speedup over classical algorithms, but can still attain a significant polynomial speedup. Although this challenge could be an Achilles' heel for applications of QSVT to general and unstructured problems, QSVT is clearly still of interest for speeding up the solution of problems with structure, as illustrated by quantum factoring. Also, while every QRAM essentially provides a block-encoded matrix, there are ways to block-encode matrices that do not require a QRAM; this is good, given that QRAM implementations generally face a number of practical challenges in their realization, e.g., requiring a number of ancillary qubits that grows linearly with the number of items stored (see, e.g., [50] and references therein).
It is also interesting to note that while physics has developed significant insight into the role of eigenvalues, appreciation of singular values has lagged. For example, eigenvalues are the bread and butter of quantum systems, as energies for eigenstates of the Hamiltonian, and as stability points for stochastic systems. In contrast, singular values apparently play few starring roles in physics. One of the few examples arises in the study of entanglement, where the singular value decomposition is the underlying construct behind the Schmidt decomposition of a bipartite quantum state.
Why are there so few prominent roles for singular values in physics? Maybe it is because physics is drawn to closed, Hamiltonian systems (think: square matrices), whereas singular value decompositions arise mostly in studies of subsystems (think: non-square matrices), where the input and output dimensions may be different. As discussed in the introduction, such disparate dimensions also arise naturally in computation. And indeed, singular value decompositions play a prominent role in modern computation, especially in machine learning, where they form the basis for principal component analysis, model reduction, collaborative filtering, and more.
As quantum information and computer science continue to grow into a unified field, perhaps it is not surprising that singular value decompositions -and singular value transformations -are emerging as a unifying bridge.
Acknowledgements: The numerical phase computation and algorithms work by AKT was supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Co-Design Center for Quantum Advantage under contract DE-SC0012704 and the Natural Sciences and Engineering Research Council of Canada (NSERC) [PGSD3-545841-2020]. The algorithm analysis work by ZMR was supported in part by the NSF EPIQC program. ILC was supported in part by the NSF Center for Ultracold Atoms. ZMR and ILC were also supported in part by ARO contract W911NF-17-1-0433.

Wx convention for QSP
One common convention is to choose the signal operator W to be an x-rotation in the Bloch sphere, W_x(a) = [[a, i√(1−a²)], [i√(1−a²), a]], where, compared with Equation 1, we introduce the additional x subscript for clarity. Under this convention, one also chooses the signal processing rotation to be a z-rotation, S_z(φ) = e^{iφZ}. Theorem 1 describes the family of unitaries achievable in this convention. Typically, we are not concerned with the achievable unitaries, but rather the achievable functions that can be computed in a subsystem. If we choose to project out the ⟨0|·|0⟩ element, the output is Poly(a) = ⟨0|U_φ|0⟩ = P(a). Here, the choice of Poly is restricted not only by the conditions on P of Theorem 1, but also by the additional constraints below, which ensure that a polynomial Q exists satisfying the conditions of Theorem 1: (v) if d is even, then ∀a ∈ R, Poly(ia) Poly*(ia) ≥ 1. The family of achievable polynomials in this case is significantly limited, since the projective measurement is performed in the same basis M = {|0⟩, |1⟩} as the signal processing operations. For example, this immediately limits us to polynomial functions such that |Poly(±1)| = 1.
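As a concrete numerical check of this convention, the following numpy sketch (function names are ours, not part of any library) builds U_φ and extracts ⟨0|U_φ|0⟩. With all phases set to zero, U_φ reduces to W_x(a)^d, whose ⟨0|·|0⟩ element is the Chebyshev polynomial T_d(a).

```python
import numpy as np

def signal_wx(a):
    """Signal operator W_x(a): an x-rotation with cos(theta/2) = a."""
    b = np.sqrt(1 - a**2)
    return np.array([[a, 1j * b], [1j * b, a]])

def s_z(phi):
    """Signal-processing z-rotation S_z(phi) = exp(i * phi * Z)."""
    return np.diag([np.exp(1j * phi), np.exp(-1j * phi)])

def qsp_poly(phases, a):
    """Return <0|U_phi|0> for U_phi = S_z(phi_0) prod_k [ W_x(a) S_z(phi_k) ]."""
    U = s_z(phases[0])
    for phi in phases[1:]:
        U = U @ signal_wx(a) @ s_z(phi)
    return U[0, 0]
```

For instance, `qsp_poly([0, 0, 0, 0], a)` (degree d = 3) returns T_3(a) = 4a³ − 3a, illustrating how the QSP phases carve out a polynomial in the signal a.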
Properties (iv) and (v) are quite restrictive, especially when approximating real functions. In fact, none of the polynomial approximations described in this paper satisfy these last two constraints. Allowing a small but nonzero imaginary part, it is possible to find QSP phases in the (W_x, S_z, ⟨0|·|0⟩) convention approximating real functions that satisfy |f(±1)| = 1; an example of this is the phase estimation function plotted in Section D 7.
Since we are often interested in constructing real polynomials, choosing a different signal basis ends up being much more useful. In particular, when M = {|+⟩, |−⟩}, we may employ the following theorem for real polynomials: Theorem 10 ((W_x, S_z, ⟨+|·|+⟩)-QSP). For any real polynomial Poly ∈ R[a], a QSP phase sequence φ ∈ R^{d+1} exists such that Poly(a) = ⟨+|U_φ|+⟩ for a ∈ [−1, 1] if and only if the following conditions hold: This QSP convention is expressive enough for all of the polynomials considered in this paper and is used in [7, 12]. The proofs of Theorem 9 and Theorem 10 are given in [12].

Reflection convention for QSP
Another common convention is to choose the signal operator W to be a reflection (as in Eq. (12)).
The reflection operator is preferred in some cases, as it has the added benefit of being Hermitian, which can simplify proof constructions, in particular equations such as Eq. (34). Given a phase sequence φ ∈ R^{d+1} for the rotation convention, we can find a φ' ∈ R^{d+1} implementing the same transformation in the reflection convention. Using the relationship of Eq. (14), this can be accomplished by choosing φ'_0 = φ_0 + (2d − 1)π/4, φ'_d = φ_d − π/4, and φ'_k = φ_k − π/2 for k ∈ {1, …, d − 1}. Therefore, these two conventions are equivalent regardless of the final measurement basis.
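The phase remapping above is mechanical, so it can be captured in a small helper (the function name is ours, and we assume d ≥ 1):

```python
import numpy as np

def to_reflection_convention(phases):
    """Map phases phi for the rotation signal operator to phases phi' for the
    reflection signal operator, via
        phi'_0 = phi_0 + (2d - 1) * pi / 4,
        phi'_d = phi_d - pi / 4,
        phi'_k = phi_k - pi / 2   for 0 < k < d,
    where d = len(phases) - 1 (assumed >= 1)."""
    out = np.asarray(phases, dtype=float).copy()
    d = len(out) - 1
    out[0] += (2 * d - 1) * np.pi / 4
    out[d] -= np.pi / 4
    out[1:d] -= np.pi / 2
    return out
```

For example, the all-zero sequence of degree d = 2 maps to (3π/4, −π/2, −π/4).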

Wz convention for QSP
Theorem 10 can also be understood through its relationship to the convention used in [33]. Here the authors define QSP with the signal operator W being a z-rotation, W_z(θ) = diag(w, w⁻¹) with w := e^{iθ/2}, and the signal processing operator an x-rotation, S_x(φ) = e^{iφX}. Furthermore, in this convention, it is typical to choose the signal basis as M = {|0⟩, |1⟩}.
In our notation, this convention is written as (W_z, S_x, ⟨0|·|0⟩)-QSP. This convention is equivalent to (W_x, S_z, ⟨+|·|+⟩)-QSP, and is equally expressive; this follows since conjugation by the Hadamard transform maps W_x to W_z, S_z to S_x, and |±⟩ to |0⟩, |1⟩. The Laurent polynomial formulation of (W_z, S_x, ⟨0|·|0⟩)-QSP of [33] can be related to our formulation, for complex polynomials P, Q ∈ C[a] and real Laurent polynomials F, G ∈ R[w, w⁻¹] with parity d mod 2, where the coefficients of P and Q are given in the Chebyshev basis and the coefficients of F and G are given in the standard basis. Requiring unitarity in Eq. (A9) is then equivalent to a corresponding normalization condition on F and G. A numerically stable method for computing phases for a given F(w) in the (W_z, S_x, ⟨0|·|0⟩) convention is discussed further in [33], and can also be seen as a constructive proof of Theorem 10.
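The claimed equivalence can be checked numerically: conjugating by the Hadamard H maps W_x to W_z and S_z to S_x, so ⟨+|U|+⟩ in one convention equals ⟨0|U'|0⟩ in the other. A short sketch, in our own notation, parametrizing the signal by θ with a = cos(θ/2):

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def w_x(theta):
    # x-rotation signal operator, a = cos(theta/2)
    return np.cos(theta / 2) * np.eye(2) + 1j * np.sin(theta / 2) * X

def w_z(theta):
    # z-rotation signal operator, w = exp(i*theta/2)
    return np.diag([np.exp(1j * theta / 2), np.exp(-1j * theta / 2)])

def s_z(phi):
    return np.diag([np.exp(1j * phi), np.exp(-1j * phi)])

def s_x(phi):
    return np.cos(phi) * np.eye(2) + 1j * np.sin(phi) * X

def qsp(w, s, phases, theta):
    U = s(phases[0])
    for phi in phases[1:]:
        U = U @ w(theta) @ s(phi)
    return U

theta, phases = 0.7, [0.1, -0.4, 1.3]
Ux = qsp(w_x, s_z, phases, theta)   # (W_x, S_z) convention
Uz = qsp(w_z, s_x, phases, theta)   # (W_z, S_x) convention
plus = np.array([1.0, 1.0]) / np.sqrt(2)
```

Since H w_x H = w_z and H s_z H = s_x, we have Uz = H Ux H, and hence ⟨+|Ux|+⟩ = ⟨0|Uz|0⟩ for the same phase sequence.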

Appendix B: Proofs about Phase Estimation by QSVT
In this appendix, we prove the theorems used in the development of phase estimation by QSVT in Section V.

Theorems 6 and 7
Here, we prove Theorems 6 and 7 from Section V A 1. Theorem 6. If n ≥ m, and one has the ability to implement the sign function exactly through QSVT, then at the end of the algorithm sketch of Section V A 1, θ = ϕ. Theorem 7. If n < m (including the case m = ∞), and one has the ability to implement the sign function exactly through QSVT, then at the end of the algorithm sketch of Section V A 1 (with a minor modification discussed below), |θ − ϕ| ≤ 2^{−n−1}.
The minor modification needed for n < m is an additional j = 0 iteration used in the complete phase estimation by QSVT algorithm.
In the setup of these proofs, we suppose that ϕ is an m-bit number and use the notation 0.ϕ_{[j:]} := 0.ϕ_j ϕ_{j+1} … ϕ_m to denote a contiguous string of binary digits (this string being infinite in the case m = ∞). We also assume that we can implement the sign function exactly in QSVT (which is unrealistic, but was addressed in Section V A 2). Hence, the procedure that we discuss in these proofs is effectively Algorithm 3 modulo the error analysis.
Proof. The proof proceeds by induction.
Considering the j = 0 iteration, we see that the final output of the algorithm is θ = 0.ϕ_1 ϕ_2 … ϕ_n = ϕ, which completes the proof of Theorem 6.

b. n < m
If n < m, this algorithm will produce an n-bit approximation to ϕ that suffers error at most 1/2^{n+1}. To show this, we first prove the following lemma.
Proof. As before, the proof proceeds by induction, albeit longer than the n ≥ m case, though no more complicated. One minor difference is that we will still write expressions like ϕ = 0.ϕ_1 ϕ_2 … ϕ_m to indicate the binary expansion of ϕ, where it is to be understood that this will really be an infinite sequence in the case m = ∞.
General Case: Proceeding now to the general case, assume that the claim is true at the end of iteration j + 1 and all prior iterations. That is, θ = 0.ϕ̃_{j+2} ϕ̃_{j+3} … ϕ̃_n at the end of iteration j + 1. Then, at the start of iteration j, θ ← θ/2 = 0.0 ϕ̃_{j+2} ϕ̃_{j+3} … ϕ̃_n. Again using Eq. (65), we see that A_j(θ) has a corresponding singular value σ_j. As before, we are interested in Θ(1/√2 − σ_j), which, as we will see below, is dictated by the values of ϕ_{j+1}, ϕ_{j+2}, and ϕ̃_{j+2}. Let us now look at the possible cases of these values.
Finally, we mention one last caveat. It is possible that σ_j = 1/√2, in which case the sign function is 0, and we are equally likely to measure 0 or 1. However, this is not a problem, as the condition σ_j = 1/√2 implies that |0.ϕ_{[j+1:]} − 0.0ϕ̃_{[j+2:]}| = 0.1 = 1/2, such that both ϕ_{j+1} = 0 and ϕ_{j+1} = 1 are equally accurate approximations, and so the inequality in Theorem 7 is still obeyed. This is analogous to conventional rounding, wherein one could round 0.5 to either 0 or 1 without changing the accuracy of the rounding. With these concerns alleviated, the proof of Theorem 7 is complete.
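The induction above can be mirrored by a purely classical simulation of the bit-extraction loop, assuming (as in the theorems) that the threshold sign(1/√2 − σ_j) is evaluated exactly. The function name and structure below are ours, a sketch rather than the quantum algorithm itself:

```python
import numpy as np

def simulate_phase_bits(phi, n):
    """Classically mimic the phase-estimation sketch: extract n bits of
    phi in [0, 1), least significant first, by thresholding
    sigma_j = |cos(pi * (0.phi_[j+1:] - theta))| against 1/sqrt(2)."""
    theta = 0.0
    for j in range(n - 1, -1, -1):
        theta = theta / 2.0                # shift previously found bits down
        frac = (phi * 2.0**j) % 1.0        # binary fraction 0.phi_[j+1:]
        sigma = abs(np.cos(np.pi * (frac - theta)))
        bit = 0 if sigma > 1.0 / np.sqrt(2.0) else 1
        theta += bit / 2.0                 # prepend the newly found bit
    return theta
```

For an n-bit ϕ the loop recovers ϕ exactly, matching Theorem 6; for longer expansions the result lands within 2^{−n−1} of ϕ, matching Theorem 7.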

Theorem 8
Here, we prove Theorem 8 from Section V A 2, which states that an error due to ∆ can only occur at the j = n − 1 iteration, and that if an error is made at this iteration, then at the end of the algorithm |θ − ϕ| < 1/2^n, assuming no errors are made at later iterations.
As discussed in Section V A 2, we assume that we perform QSVT with a function that behaves as P^Θ_{ε,∆}(1/√2 − x) (for x ≥ 0), which approximates the sign function. Recall that this approximation fails in a narrow window around 1/√2, which we dub "the ∆-region". We show that the adverse effects of a finite-sized ∆-region can be mitigated by choosing a sufficiently small ∆. Throughout this section, we again use the notation 0.ϕ_{[j:]} := 0.ϕ_j ϕ_{j+1} … to denote a string of contiguous binary digits. We also use the definition of r̃_j from Section B 1 b.
To demonstrate our claim, recall that σ_j = |cos(π(0.ϕ_{[j+1:]} − θ))| as per Eq. (65). Whether or not σ_j is inside of the ∆-region is dictated by the value of |0.ϕ_{[j+1:]} − θ|: for σ_j to be inside of the ∆-region, |cos(π(0.ϕ_{[j+1:]} − θ))| must fall within the ∆-region around 1/√2. We depict this condition graphically in Figure 18, which is a useful illustration for understanding the proof of this theorem.
As we show below, if we choose ∆ sufficiently small, then it is only possible for σ_j to be within the ∆-region at iteration j = n − 1. In order to enforce this constraint on the ∆-region, we require conditions on ∆ from both the + and − sides. As it turns out, the + condition is actually more stringent, so we have the following lemma: Lemma 13. If we choose ∆ to satisfy the more stringent of these conditions, then σ_j can only be inside of the ∆-region at iteration j = n − 1.
Proof. To prove this, we will begin by analyzing the possible values of σ_j. We assume that, if σ_j is outside of the ∆-region, then we can correctly determine θ_1 with high probability using an appropriate value of ε, as in Section V A 2. In addition, note that our restriction on ∆ implies that, in order for σ_j to be inside of the ∆-region, we must have |0.ϕ_{[j+1:]} − θ| ∈ (1/4 − 1/16, 1/4 + 1/16) ∪ (3/4 − 1/16, 3/4 + 1/16). First, let j = n − 1 and suppose that we can correctly determine θ_1 (either σ_j is outside of the ∆-region, or σ_j is inside of the ∆-region and we get lucky). Next, proceed to iteration j = n − 2. If r̃_{n−1} = 0 at the previous iteration, then, again because 0.011 = 1/4 + 1/8 > 1/4 + 1/16, the corresponding value of σ_{n−2} is not within the ∆-region for either possible value of ϕ_{n−1}, and so we can correctly determine θ_1 with high probability at this iteration.
By inductively following this logic to further iterations, we see that, if we can correctly determine θ_1 at iteration j = n − 1, then the subsequent values of σ_j will not be in the ∆-region, and the corresponding values of θ_1 can be correctly determined with high probability.

Appendix C: The matrix inversion polynomial

To make the candidate polynomial be bounded, we may multiply it by an even function that is close to 1 for x ∈ [−1, 1] \ [−1/κ, 1/κ], and close to 0 for x ∈ [−1/(2κ), 1/(2κ)], letting the range x ∈ [−1/κ, −1/(2κ)] ∪ [1/(2κ), 1/κ] be a transition region between these two values. Such a rectangular function may be polynomially approximated by a linear combination of the step function approximations of Section III; the resulting polynomial has even degree O(κ log(1/ε)). In the in-between region, it transitions between values close to 0 and close to 1, remaining bounded in magnitude by 1 throughout. We illustrate the behavior of this polynomial in Fig. 19. Therefore, our target polynomial is the matrix inversion polynomial P^{MI}_{ε,κ}(x), which we also illustrate in Fig. 19. In defining P^{MI}_{ε,κ}(x), the additional term ensures that this polynomial is an ε/(2κ)-approximation to the (rescaled) inverse function. Finally, it is easy to compute the degree of P^{MI}_{ε,κ}(x), which is the sum of the degrees of its two polynomial factors.

Appendix D: Explicit polynomials and QSP phase angles

Presented in this appendix are some explicit polynomials and corresponding QSP phase angles for functions which are useful in quantum signal processing applications. These are not necessarily the optimal polynomials, nor the best QSP phase angles, but they are pedagogically clear starting points. Unless otherwise specified, all QSP phases are given in the (W_x, S_z, ⟨+|·|+⟩)-QSP convention. All the code for generating these phase angles is available in the pyqsp open source repository on GitHub.

Oblivious amplitude amplification
For fixed-point search or oblivious amplitude amplification, it is desirable to construct a polynomial which maps a as close as possible to 1 for a wide range of small values of a, starting as close to a = 0 as possible.

Sign function
The sign function Θ(a) has a number of applications in quantum signal processing. As discussed in Section III, a robust polynomial approximation of it can be obtained using the error function, Θ(a) ≈ erf(k a), where k is a (large) scaling factor.
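A minimal scalar sketch of this erf-based surrogate, Θ(a) ≈ erf(k a), with a function name of our own choosing:

```python
import math

def sign_approx(a, k=20.0):
    """Smooth surrogate for the sign function: erf(k * a).
    Increasing k sharpens the transition around a = 0."""
    return math.erf(k * a)
```

With k = 20, the output is within 10⁻⁶ of ±1 once |a| exceeds roughly 0.2, while passing exactly through 0 at a = 0.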

Matrix inversion using 1/a
The 1/a function is useful for computing Moore-Penrose pseudoinverses of matrices using the quantum singular value transform. As discussed in Section VI B, Chebyshev polynomials may be employed to approximate this function [46].
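A quick way to see this numerically is to least-squares-fit a Chebyshev series to the rescaled inverse on the domain where it must be accurate, |a| ∈ [1/κ, 1]. This is a sketch of the idea only, not the analytic construction of [46]; the variable names and the rescaling 1/(2κa) are our choices.

```python
import numpy as np

kappa, deg = 3, 25
# sample the odd target 1/(2*kappa*a) away from the singularity at a = 0
a = np.concatenate([np.linspace(-1.0, -1.0 / kappa, 400),
                    np.linspace(1.0 / kappa, 1.0, 400)])
target = 1.0 / (2.0 * kappa * a)       # bounded by 1/2 on this domain
coef = np.polynomial.chebyshev.chebfit(a, target, deg)
approx = np.polynomial.chebyshev.chebval(a, coef)
max_err = np.max(np.abs(approx - target))
```

Because the target is odd and the sample points are symmetric, the even Chebyshev coefficients vanish, as required for a fixed-parity QSP polynomial. Note that inside the gap |a| < 1/κ the fit is unconstrained and must be tamed separately, e.g. by the rectangular window described in the appendix above.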
For example, with κ = 3 and ε = 0.3, a set of QSP phase angles for this polynomial is:

Cosine and sine functions for Hamiltonian simulation
For Hamiltonian simulation, we seek an approximation to e^{−iat}. As discussed in Section VI A, this can be accomplished using the Jacobi-Anger approximations of cos(at) and sin(at) of Eqs. (76) and (77). The approximation is chosen of sufficiently high degree so as to bound the error by ε > 0 in the region a ∈ [−1, 1]. For example, with t = 5 and ε = 0.1, a set of QSP phase angles for the approximation to cos(at) is
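The expansion can be checked numerically. The sketch below uses the standard Jacobi-Anger series cos(ta) = J_0(t) + 2 Σ_{k≥1} (−1)^k J_{2k}(t) T_{2k}(a), computing the Bessel values J_n(t) from their integral representation so that only numpy is needed; function names are ours.

```python
import numpy as np

def bessel_j(n, t, m=4001):
    """J_n(t) = (1/pi) * integral_0^pi cos(n*tau - t*sin(tau)) d(tau),
    evaluated by the trapezoid rule on m points."""
    tau = np.linspace(0.0, np.pi, m)
    f = np.cos(n * tau - t * np.sin(tau))
    h = np.pi / (m - 1)
    return (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1]) * h / np.pi

def cos_jacobi_anger(a, t, K):
    """Truncated Jacobi-Anger approximation to cos(t*a), degree 2K in a,
    using T_{2k}(a) = cos(2k * arccos(a))."""
    a = np.asarray(a, dtype=float)
    acos = np.arccos(a)
    total = bessel_j(0, t) * np.ones_like(a)
    for k in range(1, K + 1):
        total += 2.0 * (-1) ** k * bessel_j(2 * k, t) * np.cos(2 * k * acos)
    return total
```

For t = 5, truncating at K = 10 (degree 20) already leaves an error far below the ε = 0.1 quoted in the example, reflecting the super-exponential decay of J_{2k}(t) once 2k > t.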

Threshold function
Distinguishing eigenvalues and singular values may be performed using a step function, which we will take to be step(a − 1/2) for illustrative purposes. This may be polynomially approximated using a Taylor series expansion of step(a − 1/2) ≈ (1/2)[erf(k(a + 1/2)) − erf(k(a − 1/2))], which becomes a good approximation for large k.
For example, with k = 10 and using a degree d = 18 Taylor series, a set of QSP phase angles for this polynomial is generated by:

pyqsp --plot-real-only --plot-npts=400 --seqargs=18,10 --plot poly_thresh

Note that this is an even function of a, but it may be used just in the region a ≥ 0, e.g. to distinguish singular values that are above or below 1/2. It can be made as sharp as desired by increasing k and the degree of the polynomial.
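As a numerical sketch (function name ours), the erf-difference form acts as a smooth, even indicator of |a| < 1/2:

```python
import math

def threshold_approx(a, k=10.0):
    """Even approximation that is ~1 for |a| < 1/2 and ~0 for |a| > 1/2:
    0.5 * (erf(k*(a + 1/2)) - erf(k*(a - 1/2)))."""
    return 0.5 * (math.erf(k * (a + 0.5)) - math.erf(k * (a - 0.5)))
```

At a = 0 the value is essentially 1, and well beyond the threshold (e.g. a = 0.9) it is essentially 0; evenness in a is manifest from the formula.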

Linear amplitude amplification
Linear amplitude amplification is a subroutine useful for a number of quantum algorithms, including simulation. The goal is to multiply inputs by a constant 1/(2Γ) for Γ ∈ (0, 1/2]. As is usual for QSP, the absolute value of the output must be bounded by 1, and therefore we seek a polynomial approximation that performs the linear amplification only for small inputs. We can obtain a suitable approximation by truncating the Taylor expansion of linear_amplification(a, Γ) ≈ (a/(2Γ)) × (1/2)[erf(k(a + 2Γ)) − erf(k(a − 2Γ))], where k is chosen to obtain the desired accuracy within the region [−Γ, Γ]. This approximation is described in further detail in [21]. For example, with Γ = 0.25 and using a degree d = 19 Taylor series, a set of QSP phase angles for this polynomial is generated by:

pyqsp --plot-real-only --plot-npts=400 --seqargs=19,0.25 --plot poly_linear_amp
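The windowed-linear behavior is easy to verify directly; the sketch below (our naming, k chosen for illustration) evaluates the pre-truncation target function:

```python
import math

def linear_amp(a, gamma=0.25, k=20.0):
    """Approximate linear amplification a -> a/(2*gamma) for |a| <~ gamma,
    windowed by an erf pair so the output rolls off for larger |a|."""
    window = 0.5 * (math.erf(k * (a + 2.0 * gamma))
                    - math.erf(k * (a - 2.0 * gamma)))
    return (a / (2.0 * gamma)) * window
```

Inside the linear region the slope is 1/(2Γ) (e.g. a = 0.1 maps to 0.2 for Γ = 0.25), while well outside it the window suppresses the output toward 0.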

Phase estimation polynomial
Similar to the threshold function is the phase estimation polynomial of Eq. (69) used in Section V.
For example, with ∆ = 10 and using a degree d = 18 Taylor series, a set of QSP phase angles for this polynomial in the (W_x, S_z, ⟨+|·|+⟩)-QSP convention is generated by:

pyqsp --plot-real-only --plot-npts=400 --seqargs=18,10 --plot poly_phase

A set of QSP phases for this polynomial can also be given in the (W_x, S_z, ⟨0|·|0⟩)-QSP convention; use of this convention clarifies the reduction to the quantum Fourier transform presented in Section V D.
pyqsp --polydeg 16 --measurement="z" --func="-1+np.sign(1/np.sqrt(2)-x)+np.sign(1/np.sqrt(2)

The corresponding response function is shown in Figure 28. Note that we are no longer using the real polynomial approximation of Eq. (69), since it does not satisfy the conditions of Theorem 9; because of this, there is a small nonzero imaginary response. The QSP phase angles for this example are generated using an optimization algorithm.

Eigenstate filtering
As in the eigenvalue threshold problem of Section IV, suppose H is a Hermitian matrix with an eigenvalue λ which is known to be separated from other eigenvalues by a gap ∆_λ > 0, and the problem is to create, using QSP, a projection operator onto the eigenspace corresponding to λ. Lin and Tong [39] show that the degree d = 2k polynomial f_k(x, ∆_λ) = T_k(−1 + 2(x² − ∆_λ²)/(1 − ∆_λ²)) / T_k(−1 − 2∆_λ²/(1 − ∆_λ²)), known as the "eigenstate filtering function," is an optimal polynomial for filtering out the unwanted information from all other eigenstates. For example, with δ = 0.3 and using a degree d = 30 Taylor series, a set of QSP phase angles for this polynomial produces the response function for a > 0 shown in Figure 29. This is a better threshold function than the one presented in Section D 5, and the threshold can be located where desired by changing ∆_λ.
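Numerically, the filtering polynomial can be evaluated for any argument with numpy's Chebyshev basis (the Clenshaw recurrence is valid outside [−1, 1] as well). The function name is ours; the closed form follows Lin and Tong [39]:

```python
import numpy as np

def eigenstate_filter(x, k, delta):
    """Eigenstate filtering polynomial of degree d = 2k:
    f_k(x; delta) = T_k(-1 + 2*(x^2 - delta^2)/(1 - delta^2))
                    / T_k(-1 - 2*delta^2/(1 - delta^2)),
    which equals 1 at x = 0 and is exponentially small for |x| >= delta."""
    Tk = np.polynomial.Chebyshev.basis(k)
    num = Tk(-1.0 + 2.0 * (np.asarray(x) ** 2 - delta**2) / (1.0 - delta**2))
    den = Tk(-1.0 - 2.0 * delta**2 / (1.0 - delta**2))
    return num / den
```

For k = 15 the degree is d = 30; with delta = 0.3 the filter passes x = 0 with unit weight while suppressing everything beyond the gap by many orders of magnitude.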

ReLU
The "rectified linear unit" activation function, ReLU(x) := max(0, x), is popular in machine learning, and QSP is a natural framework for realizing such nonlinear activation functions for quantum machine learning. A common differentiable approximation of the ReLU function is the softplus function, here made into an even function, where ∆ parameterizes the steepness and δ the offset of the threshold from 0. For example, with δ = 0.6 and ∆ = 15 and using a degree 20 Taylor series, a set of QSP phase angles for this polynomial is:
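Since the exact even-softplus expression is not reproduced in the text, the sketch below shows one plausible even variant, built by applying softplus to |x| − δ; the function name and this particular form are our assumptions, not the paper's definition.

```python
import math

def even_softplus(x, delta=0.6, steepness=15.0):
    """An assumed even softplus variant (illustrative only):
    log(1 + exp(steepness * (|x| - delta))) / steepness.
    Even in x; ~0 for |x| << delta and ~|x| - delta for |x| >> delta."""
    return math.log1p(math.exp(steepness * (abs(x) - delta))) / steepness
```

With δ = 0.6 and steepness ∆ = 15, the function is near 0 at x = 0 and tracks |x| − δ above the threshold, giving a smooth, even stand-in for ReLU on [−1, 1].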