Theoretical and Experimental Perspectives of Quantum Verification

In this perspective we discuss verification of quantum devices in the context of specific examples, formulated as proposed experiments. Our first example is verification of analog quantum simulators via Hamiltonian learning, where the Hamiltonian specified as the design goal is compared with the parent Hamiltonian of the quantum states actually prepared on the device. The second example discusses cross-device verification on the quantum level, i.e. by comparing quantum states prepared on different quantum devices. We focus in particular on protocols using randomized measurements, and we propose establishing a central data repository, where existing experimental devices and platforms can be compared. In our final example, we address verification of the output of a quantum device from a computer science perspective, addressing the question of how a user of a quantum processor can be certain of the correctness of its output, and propose minimal demonstrations on present-day devices.


I. INTRODUCTION
The dream and vision, now more than two decades old, of building quantum computers and quantum simulators has materialized as nascent programmable quantum devices in today's laboratories [1][2][3]. While first-generation experiments focused on basic demonstrations of the building blocks of quantum information processing, quantum laboratories now host programmable intermediate-scale quantum devices which, while still imperfect and noisy, open the perspective of building quantum machines that fulfill the promise of becoming more powerful than their classical counterparts. Significant advances in building small-scale quantum computers and quantum simulators have been reported with various physical platforms, from atomic and photonic systems to solid-state devices. A central aspect in further developments is verification of the proper functioning of these quantum devices, including cross-device and cross-platform verification. Quantum verification is particularly challenging in regimes where comparison with classical simulation of quantum devices is no longer feasible.
Quantum characterization, validation and verification (QCVV) is a well-developed field in quantum information theory, and we refer to reviews [4][5][6] and tutorials [7] on this topic. The challenge in designing practical techniques to characterize quantum processes on intermediate- and large-scale quantum devices is the (in general) exponential scaling of the number of experiments and of the digital post-processing resources with system size, as is manifest in quantum process tomography or state tomography. Exponential resources can be circumvented by extracting partial information about quantum processes that provides a figure of merit, such as a process fidelity. However, such protocols also face the requirement of decoupling state preparation and measurement errors from the process fidelity. Applications of well-established protocols in experimental settings, for example randomized or cycle benchmarking of quantum computers [8] or verifiable measurement-based quantum computation [9], have been reported.
In this 'perspective' we wish to look forward to possible near-future experiments addressing verification of quantum computers and quantum simulators, in particular venturing into less explored territories. We illustrate aspects of verification which are physically relevant and conceptually complementary to previous work by describing three experimental scenarios as 'proposed experiments'. Our discussion aims at connecting recent theoretical results with possible implementations of verification protocols in existing experimental settings. Clearly, the different communities, from experimentalists to theorists and computer scientists, look at perspectives on verification from quite different angles, and our examples are chosen to reflect this diversity.
Our first example illustrates verification of analog quantum simulators [3,10] via Hamiltonian learning [11][12][13]. The central idea is to verify the analog quantum simulator by comparing the desired many-body Hamiltonian, i.e. the Hamiltonian to be implemented, with the actual, physically realized Hamiltonian, which can be efficiently reconstructed from measurements of quantum states prepared on the quantum device. This is applicable to, and immediately relevant for present analog quantum simulation experiments for spin and Hubbard models with atoms and ions, and superconducting qubits [14][15][16][17][18][19][20][21][22][23][24].
In our second example we address cross-device and cross-platform verification as applicable to quantum computers and quantum simulators. Here the goal is the pairwise comparison of quantum states implemented on different quantum devices, on the level of the full many-qubit wave function or of reduced density matrices of subsystems. To this end, results of randomized measurements, performed on each device separately, can be classically correlated to estimate the fidelity of two quantum states, with an efficiency scaling better with (sub)system size than what is achieved in quantum state tomography [25,26]. We envision a community effort where data from randomized measurements are uploaded to a central data repository, enabling the direct comparison of multiple quantum devices for a defined set of quantum problems, specified either as quantum circuits and algorithms or as Hamiltonian evolution.
Finally, in our third example we move on to verification from a computer science perspective, and address the question of how a user of a quantum processor can be certain of the correctness of its output. This question becomes particularly important when the user of a quantum device does not have direct access to it (e.g. cloud computing). Is it even possible for a user to rely on the result if they cannot verify it efficiently themselves? This question has been answered in the affirmative in case the user has access to a limited amount of quantum resources [27][28][29][30][31][32][33][34]. Interestingly, such a verification of the output is feasible even via purely classical means [35]. However, not very surprisingly, the resources required to implement such a verification protocol are beyond the reach of current technology. Due to the rapid technological developments and the accompanying need for the ability to verify the output of a computation, we propose here a proof-of-principle experiment implementing such a verification protocol that is feasible with current technologies.

II. VERIFICATION OF ANALOG QUANTUM SIMULATORS VIA HAMILTONIAN LEARNING
The goal of quantum simulation is to solve the quantum many-body problem [10], from strongly correlated quantum materials in condensed matter physics [15] to quantum field theories in high-energy physics [36], or the modeling of complex molecules and their dynamics in quantum chemistry [20,37]. Building an analog quantum simulator amounts to realizing in the laboratory synthetic, programmable quantum matter as an isolated quantum system. Here, first of all, a specified many-body Hamiltonian H must be implemented faithfully in a highly controllable quantum system with the given physical resources. Furthermore, quantum states of matter must be prepared on the physical quantum device, corresponding to equilibrium phases, e.g. as ground states, or representing non-equilibrium phenomena as in quench dynamics.
Remarkable progress has been made recently in building analog quantum simulators to emulate quantum many-body systems. Examples are the realization of lattice spin models with trapped ions [22,23], Rydberg tweezer arrays [16][17][18], and superconducting devices [24], or of Hubbard models with ultracold bosonic or fermionic atoms in optical lattices [15,19,21]. While analog quantum simulation can be viewed as special-purpose quantum computing with the rather focused task of emulating a many-body system via a specified H, its unique experimental feature is the ability to scale to rather large particle numbers. This is in contrast to present-day quantum computers, which provide a high-fidelity universal gate set for a small number of qubits.
Today's ability of analog quantum simulators to prepare and store on a scalable quantum device a highly entangled many-body state, while solving a quantum problem of physical relevance, fulfills one of the original visions of Feynman's proposal of quantum simulation. However, it also raises the question of verification in regimes where comparison with classical computations with controlled error, such as tensor-network techniques, is no longer available. This includes higher-dimensional lattice models, models with fermionic particles, and quench dynamics.
The proper functioning of a quantum simulator can be assured by comparing experiment vs. theory [38], or predictions from two different experimental quantum devices. This can be done on the level of comparing expectation values of relevant observables, e.g. on the most elementary level by comparing phase diagrams [38], or the increasingly complex hierarchies of correlation functions [39]. We return to approaches of directly comparing quantum states in Sec. III below.
Verification by Hamiltonian Learning: Instead, we will rephrase here verification of an analog quantum simulator as comparing the 'input' Hamiltonian, specified as the design goal for the quantum simulator, with the actual Hamiltonian realized on the physical device. This latter, experimental Hamiltonian can be determined via 'Hamiltonian tomography', or 'Hamiltonian learning', i.e. inferring from measurements under certain conditions the parent Hamiltonian underlying the experimentally prepared quantum state [11,12].
Hamiltonians of many-body physics consist of a small set of terms which are (quasi-)local and involve few-body interactions, i.e. $H = \sum_i h_i$ with $h_i$ quasi-local terms. Thus, for a given $H$, only a small set of physical parameters determines the accessible quantum states and their entanglement structure: for example, as a ground state, $H|\Psi_G\rangle = E_G|\Psi_G\rangle$; as a finite-temperature state in the form of a Gibbs ensemble $\sim \exp(-\beta H)$; or as the generator of quench dynamics, with an initial (pure) state $|\Psi_0\rangle$ evolving in time as $|\Psi_t\rangle = \exp(-iHt)|\Psi_0\rangle$.
Remarkably, as shown in recent work [11][12][13], it is the local and few-body structure of physical Hamiltonians in operator space which allows efficient Hamiltonian tomography via measurements from experimentally prepared (single) quantum states on the quantum simulator. These states include the ground state, a Gibbs state, or states produced in quench dynamics. It is thus the restricted operator content of Hamiltonians, which promises scalable Hamiltonian learning with system size, i.e. makes Hamiltonian tomography efficient.
Here we wish to outline 'Hamiltonian verification' for a Fermi-Hubbard model. This can be implemented with atoms in an optical lattice, and observed with a quantum gas microscope [15,19]. To be specific, we apply the protocol of Ref. [11] for reconstruction of the parent Hamiltonian from an experimentally prepared ground state. Similar results apply to energy eigenstates, thermal states, or any stationary state. We simulate experimental runs of the protocols including the measurement budget, thus assessing accuracy and convergence [40].
The protocol of Ref. [11] describes learning of local Hamiltonians from local measurements. The starting point is the assumption of an experimentally prepared stationary state $\rho$, as described above. The protocol finds the parent Hamiltonian $H$ from $\rho$ via the steady-state condition $[H, \rho] = 0$. As $\rho$ is stationary under $H$, so is the expectation value of any observable $A$:

$\frac{d}{dt}\langle A \rangle = i\,\mathrm{Tr}\left(\rho\,[H, A]\right) = 0.$

The latter equation can be used to obtain a set of linear constraints from which $H$ can be reconstructed. Consequently, for lattice systems the algorithm can be summarized as follows: the Hamiltonian is expanded in a basis of quasi-local operators, $H = \sum_{m=1}^{M} c_m S_m$, and the stationarity condition for a set of constraint operators $A_n$ yields the homogeneous linear system $\sum_m K_{nm} c_m = 0$ with $K_{nm} = \langle -i[A_n, S_m]\rangle$, whose null vector determines the coefficients $c = (c_1, \dots, c_M)$. As stated in Ref. [12], the locality of $H$ implies that such a Hamiltonian reconstruction will be unique. The reconstructed parameters $\hat{c}$ can be cross-checked against the parameters of an input Hamiltonian and serve as a quantifier for the verification of the quantum simulator. The required number of experimental runs is controlled by the gap of the correlation matrix $M = K^T K$, which strongly depends on the type and number of constraints [11]. In the limit of all possible constraints, the matrix $M$ coincides with the correlation matrix defined by Qi and Ranard [12]. The lowest eigenvalue of this matrix corresponds to the Hamiltonian variance measured on the input state, which has been used previously for the experimental verification of variationally prepared many-body states [22].

In Fig. 1(b) and (c) we illustrate Hamiltonian learning for a Fermi-Hubbard model

$H = -J \sum_{\langle i,j\rangle, \sigma} \left(c^\dagger_{i\sigma} c_{j\sigma} + \mathrm{H.c.}\right) + U \sum_i n_{i\uparrow} n_{i\downarrow}$

on a 2D square lattice [40]. Here $c^\dagger_{i\sigma}$ ($c_{i\sigma}$) denote creation (annihilation) operators of spin-1/2 fermions at lattice sites $i$, and $n_{i\sigma} = c^\dagger_{i\sigma} c_{i\sigma}$. Consequently, in this example the local basis $\{S_m\}_{m=1}^M$ consists of hopping operators $(c^\dagger_{i\sigma} c_{j\sigma} + \mathrm{H.c.})$ for all bonds $(i,j)$ and each spin component $\sigma$, and of operators counting double occupancies on the individual sites $i$: $n_{i\uparrow} n_{i\downarrow}$. In the case of the $3 \times 4$ lattice studied in Fig. 1, the operator basis therefore includes $M = 46$ elements. As an input state for the protocol we take the ground state in the strongly repulsive regime ($J = 1$, $U = 8$) and introduce a small hole doping, corresponding to a filling of $n = 0.83$. As a set of constraints we adopt the operators $A_{ijk} = i(c^\dagger_{i\sigma} c_{j\sigma} - \mathrm{H.c.})\, n_{k\sigma}$, in which $i$, $j$ and $k$ are nearest-neighbor sites [41]. The particular combinations of sites $\{i,j,k\}$ are chosen in such a way that the rows of the matrix $K$ are linearly independent. Note that obtaining the matrix elements $K_{nm} = \langle -i[A_n, S_m]\rangle$ requires the measurement of locally resolved atomic currents $i(c^\dagger_{i\sigma} c_{j\sigma} - \mathrm{H.c.})$, where $j$ can be located within two lattice constants around $i$. In experiments with atoms in optical lattices, these currents can be accessed by inducing superexchange oscillations accompanied by spin-resolved measurements in a quantum gas microscope [42,43]. Fig. 1(b) shows the relation between the distance $\Delta\hat{c}$ of the exact vs the reconstructed Hamiltonian parameters and the number of measurements per constraint on a $3 \times 4$ Hubbard lattice. Panel (c) displays the improvement in the quality of the Hamiltonian reconstruction as additional constraints $A_{ijk}$ are added to the system of equations $Kc = 0$. As can be seen, the Hamiltonian can be recovered exactly as the number of constraints $N_C$ approaches the number of elements $M$ in the operator basis $\{S_m\}_{m=1}^M$. We note that the total measurement budget can be optimized by arranging the operators $[A_n, S_m]$ into commuting groups, such that they can be evaluated from the same measurement outcomes [40].
In the Hamiltonian learning protocol outlined above, the number of measurements required to obtain a fixed parameter distance $\Delta\hat{c}$ scales polynomially with the system size [11]. Recent work demonstrates that the method can be extended to recovering Lindbladians from steady states, potentially allowing an efficient recovery of dissipative processes [44]. Future investigations will have to include the relation of the type and number of constraints to the gap of the correlation matrix, which determines the total number of required experimental runs, as well as the role of measurement errors and decoherence (see for instance Ref. [45]).
An entirely different verification protocol, which can also be applied to quantum simulation, is Cross-Device Verification described in the following Section. There, verification is achieved by cross-checking the results from two quantum simulators simulating the same physics by measuring overlaps of quantum states on the level of reduced density operators for various subsystem sizes.

III. CROSS-DEVICE VERIFICATION OF QUANTUM COMPUTATIONS AND QUANTUM SIMULATIONS
In the previous section, we presented the verification of an analog quantum simulator by comparing the Hamiltonian actually realized in the device with the input or target Hamiltonian. A different approach to verification, aiming to gain confidence in the output of a quantum simulation or quantum computation, is to run the simulation or computation on several different quantum devices and compare the outcomes with each other, and, if available, with an idealized theoretical simulation [see Fig. 2(a) for an illustration]. Such a cross-comparison can be implemented at different levels of sophistication. While quantum simulations have been compared on the level of low-order observables [38], for instance order parameters characterizing phase diagrams, recent protocols aim to compare full quantum states [25,[46][47][48].
To measure quantum fidelities, various approaches exist. A pure quantum protocol would establish a quantum link, teleport quantum states and compare states locally, for instance via a SWAP-test [52][53][54]. While such overlap measurements have been demonstrated locally in seminal experiments [55][56][57], a quantum link teleporting large quantum states of many particles with high accuracy between two quantum devices is not expected to be available in the near future.
Today, protocols relying on classical communication between many-body quantum devices are thus required. Here, the ultimate brute-force tests are quantum state and quantum process tomography, which aim for a full classical reconstruction, allowing for a classical comparison of quantum states or processes. Even incorporating recent advances, such as compressed sensing for density matrices with low rank [58], such an approach requires, however, at least $3^N$ measurements to accurately determine an arbitrary $N$-qubit state. Efficient methods, such as tensor-network [59,60] or neural-network tomography [61], have been developed, but they rely on a special structure of the states of interest.
Here, a randomized measurement on an $N$-qubit quantum state $\rho$ is performed by the application of a unitary $U$, chosen at random from a tomographically complete set, and a subsequent measurement in the computational basis $\{|s\rangle\}$. Statistical correlations of such randomized measurements, performed sequentially on a single quantum device, allow for tomographic reconstruction of the quantum state [48,64,67], but also give direct access to non-local and non-linear (polynomial) functionals of density matrices such as Rényi entropies [48,62,63]. In particular, recent work [48] combined randomized measurements with the notion of shadow tomography [68], which aims to predict expectation values of arbitrary observables directly, instead of reconstructing the full density matrix. Using insights from the stabilizer formalism [69], Ref. [48] devised an efficient implementation of shadow tomography via randomized measurements, which enables the estimation of expectation values of arbitrary (multi-copy) observables with high precision and rigorous performance guarantees [48]. This allows in particular to estimate the fidelity between the quantum state $\rho$ and a known theoretical target. It complements methods such as direct fidelity estimation [46,47] and randomized benchmarking [8,[70][71][72][73][74], which utilize the absolute knowledge of the theoretical target to be efficient for certain target states and processes.

[Caption of Fig. 2, panels (b) and (c): (b) In regimes where a classical simulation is possible, the implemented states can additionally be compared to a theoretical target state $\rho_T$. (c) Experiment-theory fidelities between quantum states prepared in a trapped-ion quantum simulator and their classical simulation as a function of the subsystem size $N_A$ (the total system consists of 10 qubits) for various evolution times (different colors) after a quantum quench in a long-range Ising model [51]; reprinted from Ref. [25].]
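The snapshot estimator behind this shadow-tomography approach can be made concrete in a few lines. The sketch below is our own toy example (Haar-random single-qubit unitaries, a two-qubit example state, and Pauli observables; not the circuit-level protocol of Ref. [48]): it averages single-shot snapshots $\hat{\rho} = \bigotimes_k \left(3\, U_k^\dagger |s_k\rangle\langle s_k| U_k - \mathbb{1}\right)$, whose ensemble mean reproduces $\rho$:

```python
import numpy as np

rng = np.random.default_rng(7)

def haar_u2():
    """Haar-random 2x2 unitary via QR decomposition with phase fix."""
    z = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q @ np.diag(np.diag(r) / np.abs(np.diag(r)))

# Example 2-qubit state: cos(t)|00> + sin(t)|11>
t = np.pi / 8
psi = np.zeros(4, dtype=complex)
psi[0], psi[3] = np.cos(t), np.sin(t)
rho = np.outer(psi, psi.conj())

Z = np.diag([1.0, -1.0]).astype(complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
ZZ, XX = np.kron(Z, Z), np.kron(X, X)

I2 = np.eye(2)
kets = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
n_shots = 30000
est_zz = est_xx = 0.0
for _ in range(n_shots):
    U1, U2 = haar_u2(), haar_u2()
    U = np.kron(U1, U2)
    # sample one computational-basis outcome after the random rotation
    p = np.real(np.diag(U @ rho @ U.conj().T))
    s = rng.choice(4, p=p / p.sum())
    s1, s2 = s >> 1, s & 1
    # single-qubit classical-shadow snapshots: 3 U^dag |s><s| U - I
    snap1 = 3 * (U1.conj().T @ np.outer(kets[s1], kets[s1]) @ U1) - I2
    snap2 = 3 * (U2.conj().T @ np.outer(kets[s2], kets[s2]) @ U2) - I2
    snap = np.kron(snap1, snap2)
    est_zz += np.real(np.trace(ZZ @ snap))
    est_xx += np.real(np.trace(XX @ snap))
est_zz /= n_shots
est_xx /= n_shots

print(f"<ZZ>: exact {np.real(psi.conj() @ ZZ @ psi):.3f}, shadow estimate {est_zz:.3f}")
print(f"<XX>: exact {np.real(psi.conj() @ XX @ psi):.3f}, shadow estimate {est_xx:.3f}")
```

For Pauli observables of weight $k$, the single-shot variance of this estimator grows as $3^k$, which is why local observables can be predicted from a modest number of snapshots.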
Cross-device verification with randomized measurements: In a very general setting, one faces the situation where two unknown quantum states have been prepared on two separate quantum devices, potentially at very different points in space and time [Fig. 2(a)]. In Ref. [25] (see also Ref. [26]), it has been proposed to measure the cross-device fidelity

$F_{\max}(\rho_1, \rho_2) = \mathrm{Tr}(\rho_1 \rho_2)\,/\max\{\mathrm{Tr}(\rho_1^2), \mathrm{Tr}(\rho_2^2)\}$

of two unknown quantum states, described by (reduced) density matrices $\rho_1$ and $\rho_2$ and prepared on two separate devices. To this end, randomized measurements are implemented with the same random unitaries $U$ on both devices. Facilitating the direct experimental realization, these unitaries $U$ can be local, $U = \bigotimes_{k=1}^N U_k$, with $U_k$ acting on qubit $k$ and sampled from a unitary 2-design [71,75] defined on the local Hilbert space $\mathbb{C}^2$. From statistical cross- and auto-correlations of the outcome probabilities $P_U^{(i)}(s)$, the overlaps $\mathrm{Tr}(\rho_i \rho_j)$, and thus $F_{\max}(\rho_1, \rho_2)$, are estimated via

$\mathrm{Tr}(\rho_i \rho_j) = 2^N \sum_{s, s'} (-2)^{-D[s, s']}\; \overline{P_U^{(i)}(s)\, P_U^{(j)}(s')}$

for $i, j = 1, 2$. Here, $\overline{\cdots}$ denotes the ensemble average over local random unitaries, and the Hamming distance $D[s, s']$ between two bit strings $s$ and $s'$ is defined as $D[s, s'] \equiv |\{k \in \{1, \dots, N\} \mid s_k \neq s'_k\}|$.
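A minimal numerical sketch of this estimator (our own illustration: two slightly different two-qubit pure states standing in for the two devices, Haar-random single-qubit unitaries as the local 2-design, and exact Born probabilities in place of finite-shot estimates):

```python
import numpy as np

rng = np.random.default_rng(3)

N = 2          # qubits per device
DIM = 2 ** N
N_UNITARIES = 3000

def haar_u2():
    """Haar-random 2x2 unitary via QR decomposition with phase fix."""
    z = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q @ np.diag(np.diag(r) / np.abs(np.diag(r)))

def random_pure_state(dim):
    v = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
    return v / np.linalg.norm(v)

# Two (slightly different) states "prepared on two devices"
psi1 = random_pure_state(DIM)
psi2 = psi1 + 0.2 * random_pure_state(DIM)
psi2 /= np.linalg.norm(psi2)
rho1, rho2 = np.outer(psi1, psi1.conj()), np.outer(psi2, psi2.conj())

# Weights (-2)^(-D[s,s']) with the Hamming distance D on bit strings
D = np.array([[bin(s ^ t).count("1") for t in range(DIM)] for s in range(DIM)])
W = (-2.0) ** (-D)

def overlaps():
    """Estimate Tr(rho_i rho_j) from correlated randomized measurements."""
    acc = np.zeros((2, 2))
    for _ in range(N_UNITARIES):
        U = haar_u2()
        for _ in range(N - 1):
            U = np.kron(U, haar_u2())
        # Born probabilities on both devices for the *same* unitary U
        p = [np.real(np.diag(U @ r @ U.conj().T)) for r in (rho1, rho2)]
        for i in range(2):
            for j in range(2):
                acc[i, j] += DIM * p[i] @ W @ p[j]
    return acc / N_UNITARIES

T = overlaps()                       # T[i, j] estimates Tr(rho_i rho_j)
f_max = T[0, 1] / max(T[0, 0], T[1, 1])
exact = abs(np.vdot(psi1, psi2)) ** 2
print(f"estimated F_max: {f_max:.3f}, exact |<psi1|psi2>|^2: {exact:.3f}")
```

With a few thousand unitaries the estimate converges to the exact overlap; in an experiment, each probability would itself be estimated from a finite number of projective measurements, which sets the measurement-budget scaling discussed below.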
In the regime where a classical simulation of the output is possible, this protocol can also be used for an experiment-theory comparison (cf. direct fidelity estimation [46,47] and classical shadow tomography [48]). In Fig. 2(c), experiment-theory fidelities between highly entangled quantum states prepared via quench dynamics in a trapped-ion quantum simulator [51] and their theoretical simulation are shown [25]. We note that such experiment-theory comparisons to simple (product) states can also be used to identify and mitigate errors resulting from imperfect measurements [25,76,77].
Based on numerical simulations, it was found in Ref. [25] that the number of experimental runs necessary to estimate the fidelity $F_{\max}$ up to a fixed statistical error scales exponentially with the subsystem size, $\sim 2^{bN}$. The exponents $b \lesssim 1$ are, however, favorable compared to quantum state tomography, enabling fidelity estimation for (sub)systems consisting of a few tens of qubits with state-of-the-art quantum devices. For two very large quantum devices, consisting of several tens to a few hundred qubits, the present protocol thus allows only the estimation of fidelities of possibly disconnected subsystems up to a given size, determined by the available measurement budget. These data represent very fine-grained local information on fidelities of subsystems. It remains an open question whether this information can be combined with additional knowledge of a few global properties to obtain (at least bounds on) the total system fidelity.
While we have outlined above protocols to cross-check two individual devices, we envision a community effort where specific quantum problems, either as quantum circuits and algorithms, or for quantum simulation are defined, and data from theoretical simulations as well as measurement data from quantum devices are uploaded to a central data repository [see for an illustration Fig. 2  b)]. In regimes, where a classical simulation is possible, an ultimate reference could here be represented by a theory target state. For larger quantum devices, reference operations and circuits could be executed, and density matrices of (sub-)systems could be compared with each other. This would allow for a standardized, pairwise cross-check of multiple quantum devices representing various platforms.
The outlined protocols rely on classical communication of randomized measurement results, and are restricted, due to the exponential scaling of the number of required experimental runs, to (sub)systems of a few tens of qubits. To overcome this challenge, we expect that quantum state transfer protocols will become available in the future, allowing the development of efficient, fully quantum protocols, in addition to hybrid quantum-classical ones, for cross-checking quantum devices.

IV. VERIFICATION OF THE OUTPUT OF AN UNTRUSTED QUANTUM DEVICE
In the validation procedures considered above, the person testing the quantum processor (the user) has either direct access to the device or trusts the person operating it. Computer scientists are often concerned with a very different notion of verification: the verification of the output of a computation performed by an untrusted device. Such a demand for verifiability will become particularly relevant once quantum devices that reliably process hundreds of qubits become usable as cloud computers.
To demonstrate the need for such verification protocols, let us consider the various kinds of problems such cloud computers could be utilized for. If the user employs a quantum computer to solve a problem within NP, such as factoring a large number into its prime factors, the situation is simple: knowing the factors, the output can be efficiently verified with a classical computer. However, it is believed that quantum computers are capable of efficiently solving problems that can no longer be efficiently verified classically, such as simulating quantum many-body systems. How can one then rely on the output, given that the quantum computer (or the person operating it) might be malicious and want to convince the user that the answer to e.g. a decision problem is "yes" when it is actually "no"? Hence, harnessing the full power of a quantum device which is not directly accessible to the user brings with it the necessity of deriving protocols for verifying its output. The aim here is to derive quantum verification protocols that allow a computationally limited (e.g. a classical) user to verify the output of a (powerful) quantum computer. Complicating matters is the need to ensure that an honest prover (the quantum computer) can convince the user of the correct outcome efficiently [78]. To simplify the exposition, we will from now on refer to the user (called the verifier) as Alice (A) and to the prover as Bob (B).
Verification protocols [79] where A has access to limited quantum resources [27][28][29][30][31][32][33][34], or is able to interact with two non-communicating provers [80], have been derived. In a recent breakthrough, Mahadev [35] showed that even a purely classical user can verify the output of a quantum processor. In contrast to the verification protocols mentioned before, this protocol relies on a computational assumption: the existence of trapdoor functions which are post-quantum secure [81]. These functions are hard to invert even for a quantum computer; however, the possession of additional information (the trapdoor) enables one to compute the preimages of the function efficiently. Using the notion of post-quantum secure trapdoor functions in combination with powerful previously derived results led to the surprising conclusion that a classical user can indeed verify the output of a quantum computer, as we will briefly explain below. The notions and techniques developed in [35,82] have recently been utilized to put forward protocols with e.g. zero-knowledge polynomial-time verifiers [83] and non-interactive classical verification [84].
At first glance, it seems simply impossible to efficiently verify the output of a much more powerful device (even if it were classical) if one is just given that output and is prevented from testing the device. The key idea here is to use interactive proofs. The exchange of messages allows A to test B and to eventually become convinced that B's claim is indeed correct, or to mistrust him and reject the answer. The Graph Non-Isomorphism problem is a simple example of a task where the output (of a powerful classical device) can be verified with an interactive proof [85].
To explain the general idea of how to verify the output of a quantum device, we assume that B possesses a quantum computer, whereas A only has classical computational power. A asks B to solve a decision problem (within BQP, i.e. a problem which can be solved efficiently by a quantum computer) and wants to verify the answer. Of particular importance here is that the outcome of such a decision problem can be encoded in the ground-state energy of a suitable, efficiently computable, local Hamiltonian H [86]. This implies that in case B claims that the answer to the decision problem is "yes" [87], he can convince A of this fact by preparing a state with energy (w.r.t. H) below a certain value, which would be impossible in case the correct answer were "no". An instance of such a state is the so-called clock state, $|\eta\rangle$ [88,89], which can be prepared efficiently by a quantum computer. Hence, the output of the quantum computer can be verified by determining the energy of the state prepared by B. This can be achieved by performing only measurements in the X- as well as the Z-basis [90][91][92]. It remains to ensure that A can delegate these measurements to B without revealing the measurement basis. The important contribution of Mahadev [35] is the derivation of such a measurement protocol (see Fig. 3). The properties of post-quantum secure trapdoor functions are exploited precisely at this point to ensure that B cannot learn whether a qubit is measured in the Z- or X-basis, which prevents him from cheating.
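The energy estimation underlying this step is itself elementary: any Hamiltonian written as a sum of Z-type and X-type terms can be measured by collecting statistics in only these two bases. A minimal sketch (with an arbitrary example two-qubit Hamiltonian of our own choosing, not the clock-state Hamiltonian):

```python
import numpy as np

rng = np.random.default_rng(5)

Z = np.diag([1.0, -1.0]).astype(complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2)
# Example Hamiltonian: one Z-type and two X-type terms
H_mat = 1.0 * np.kron(Z, Z) + 0.5 * (np.kron(X, I2) + np.kron(I2, X))

# State whose energy the verifier wants: here, the exact ground state
evals, evecs = np.linalg.eigh(H_mat)
psi = evecs[:, 0]

HAD = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard: rotates to the X basis

def sample_bits(state, shots):
    """Sample computational-basis outcomes (s1, s2) from a 2-qubit state."""
    p = np.abs(state) ** 2
    s = rng.choice(4, size=shots, p=p / p.sum())
    return (s >> 1) & 1, s & 1

shots = 20000
# Z-basis rounds: estimate <ZZ> from outcome parities
s1, s2 = sample_bits(psi, shots)
zz = np.mean((-1.0) ** (s1 + s2))
# X-basis rounds: rotate with Hadamards, estimate <XI> and <IX>
s1, s2 = sample_bits(np.kron(HAD, HAD) @ psi, shots)
x1, x2 = np.mean((-1.0) ** s1), np.mean((-1.0) ** s2)

energy = 1.0 * zz + 0.5 * (x1 + x2)
print(f"estimated energy: {energy:.3f}, exact ground energy: {evals[0]:.3f}")
```

In the delegated setting discussed here, the point is that B performs exactly these Z- and X-basis measurements, while the protocol below hides from him which of the two bases A has actually requested.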
For reasonable choices of the security parameters, the realization of such a verification protocol is, even without considering faulty devices, not feasible with current technology (on B's side). Already the number of auxiliary qubits required in the measurement protocol would be too demanding ([93] and [94]). Nevertheless, due to the rapid technological development and the accompanying need for this kind of verification protocol, we present here a proposal for a proof-of-principle experiment. The minimal example explained here can already be carried out with a total of seven qubits.
First, the original decision problem is mapped to a local Hamiltonian problem. B prepares the corresponding state $|\eta\rangle$ (consisting of 4 qubits in this example), whose energy needs to be determined. Due to the linearity of the measurement protocol, it is sufficient to demonstrate how A can delegate the measurement in the X- or Z-basis on a single-qubit state $|\psi\rangle = \alpha_0 |0\rangle + \alpha_1 |1\rangle$ (belonging to $|\eta\rangle$) without revealing the measurement basis to B. The measurement protocol has the following high-level structure (see Fig. 3):

1. A computes a family of post-quantum secure trapdoor functions $\{y_k\}$, labeled by an index $k$, together with the associated trapdoor information $t_k$. The functions $y_k$ are of one of two types, either one-to-one or two-to-one [95]. If A wants to measure $|\psi\rangle$ in the Z-basis (X-basis), she chooses a label $k$ such that $y_k$ is one-to-one (two-to-one), respectively. A keeps $t_k$ private (this is precisely the leverage A has over B) and sends $k$ to B. Knowing $k$, B can efficiently evaluate the function $y_k$ on any input. However, it is computationally hard for him to determine which type $y_k$ is. Furthermore, A can compute the preimages of $y_k$ efficiently using $t_k$, while B cannot.
2. B is asked to prepare the state $|\phi_k\rangle \propto \sum_{b,x} \alpha_b |b\rangle |x\rangle |y_k(b,x)\rangle$. This can be done efficiently by a quantum computer.
3. B is asked to measure the last register (qubits 3 and 4 in our example) of $|\phi_k\rangle$ in the Z-basis and to send the measurement outcome $y$ to A. The state of the first and second register (qubits 1 and 2 in our example) is then, depending on the type of $y_k$, either: (i) the product state $|b\rangle |x\rangle$ (with probability $|\alpha_b|^2$), where $y_k(b,x) = y$; or (ii) the entangled state $\alpha_0 |0\rangle |x_0\rangle + \alpha_1 |1\rangle |x_1\rangle$, where $y_k(0, x_0) = y_k(1, x_1) = y$.

4. A randomly chooses to run either a "test" or a "measurement" round. In a "test" ("measurement") round, B is asked to measure the qubits in the first and second register in the Z-basis (X-basis), respectively, and to send the outcome to A. The "test" rounds allow A to gain confidence that B has indeed prepared $|\phi_k\rangle$ by checking that $y_k(b,x) = y$. In a "measurement" round, the first qubit has effectively been measured in either the Z- or the X-basis, depending on the type of $y_k$. Using the trapdoor information $t_k$, A can classically post-process the outputs to obtain the corresponding measurement outcome.
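The four steps above can be mimicked in a purely classical toy simulation. The sketch below is our own construction on 2-bit inputs (the "trapdoor" functions here are trivially invertible and of course not cryptographically secure); it checks that the decoded outcomes reproduce the Z- and X-basis statistics of $|\psi\rangle = \alpha_0 |0\rangle + \alpha_1 |1\rangle$:

```python
import numpy as np

rng = np.random.default_rng(11)

# Single-qubit state |psi> = a0|0> + a1|1> that A wants measured remotely
a0, a1 = 0.6, 0.8

perm = rng.permutation(4)  # toy one-to-one function on inputs (b, x)
claw = rng.permutation(2)  # toy two-to-one function: y(0, x) = y(1, claw[x])

def run_round(kind):
    """Simulate one protocol round; return the decoded single-qubit outcome."""
    if kind == "Z":
        # one-to-one y: measuring the y-register collapses (b, x) to a
        # definite pair, with b distributed as |a_b|^2
        b = rng.choice(2, p=[a0**2, a1**2])
        x = rng.integers(2)
        y = perm[2 * b + x]                               # B reports y
        b_dec, _ = divmod(int(np.argmax(perm == y)), 2)   # A inverts via trapdoor
        return b_dec
    # two-to-one y: post-measurement state a0|0,x0> + a1|1,x1>,
    # then both qubits are measured in the X basis
    x0 = rng.integers(2)
    x1 = int(claw[x0])
    amps = {}
    for db in (0, 1):
        for dx in (0, 1):
            amps[(db, dx)] = 0.5 * (a0 * (-1) ** (dx * x0)
                                    + a1 * (-1) ** (db + dx * x1))
    outcomes = list(amps)
    p = np.array([abs(amps[o]) ** 2 for o in outcomes])
    db, dx = outcomes[rng.choice(4, p=p / p.sum())]
    # A decodes using the trapdoor (her knowledge of x0 and x1)
    return db ^ (dx * (x0 ^ x1))

n = 20000
z_freq = np.mean([run_round("Z") for _ in range(n)])
x_freq = np.mean([run_round("X") for _ in range(n)])
print(f"P(Z-outcome = 1): {z_freq:.3f}  (ideal {a1**2:.3f})")
print(f"P(X-outcome = 1): {x_freq:.3f}  (ideal {(a0 - a1)**2 / 2:.3f})")
```

In the real protocol B never learns which round type was chosen, since distinguishing the one-to-one from the two-to-one functions is computationally hard; in this toy model that indistinguishability is, of course, absent.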
As mentioned above, a minimal, non-trivial example, which can be realized with an ion-trap quantum computer [96], requires only 7 qubits in total and some tens of single- and two-qubit gates [97]. In this case the clock state $|\eta\rangle$ is a 4-qubit state and, for this minimal example, one can choose the second and third register to have 1 and 2 qubits, respectively (as displayed in Fig. 3). Here, $y_k : \{0,1\}^2 \to \{0,1\}^2$, and $k$ labels either one of the 24 one-to-one functions or one of the 24 two-to-one functions.
Let us finally mention that protocols allowing the verification of the output of imperfect quantum computers have recently been put forward for the case in which the verifier has limited access to quantum resources [98]. Similar ideas can also be utilized in the purely classical verification protocol [35], ensuring that the measurements can still be performed without jeopardizing its security [96].

V. CONCLUSION AND OUTLOOK
In an era where we build noisy intermediate-scale quantum devices, with the effort to scale them towards larger system sizes and to optimize their performance, verification of quantum devices becomes a main motif in theoretical and experimental quantum information science. In this perspective on theoretical and experimental aspects of quantum verification we have taken the approach of discussing three examples, formulated as proposed experiments: verification of quantum simulation via Hamiltonian learning (Sec. II), cross-checking of quantum states prepared on different quantum devices (Sec. III), and the question of how a user of a quantum processor can be certain about the correctness of its output (Sec. IV). While our choice of examples is subjective and guided by personal interests, the common theme is that these 'proposed experiments' can be performed with quantum devices existing in today's laboratories or with near-future devices. In addition, our examples illustrate the diversity of questions in quantum verification, and of the tools and techniques to address them, with emphasis on what we identify as problems of high relevance.
Of course, by its very nature as forward looking, a perspective identifies interesting topics and outlines possible avenues, while putting the finger on open issues for future theoretical and experimental work. These open problems range from technical to conceptual, and we summarize some of them within the various sections. The overarching challenge is, of course, to develop efficient and quantitative verification protocols and techniques which eventually scale to the large system sizes we envision for useful quantum devices. In Sec. II on verification of analog quantum simulation via Hamiltonian learning, the local Hamiltonian ansatz scales, by construction, with the system size and leads, in principle, to a quantified error assessment. While one may raise issues of imperfect state preparation and measurement errors in experiments, and of the measurement budget available in a specific experiment, we emphasize that these protocols also involve heavy classical post-processing of data, which may impose limits from a practical and conceptual perspective. While this might not pose serious limitations for near-term devices, we may ask here, and also in a broader context, whether some of this post-processing can be replaced by more efficient quantum post-processing on the device. The cross-device check of quantum states in Sec. III provides another example of this type. There, the protocol underlying the comparison of quantum states via a central data repository involves classical communication. The protocol described is much more efficient than tomography and scales with a 'friendly exponential' in system size, allowing experimental implementation today for tens of qubits. A future development of quantum state transfer as quantum communication between the devices promises to overcome these limitations. Finally, our discussion in Sec. IV on verification of the output of an untrusted quantum device presents a minimal example which can be run on present quantum computers, leaving as challenges the verification of outputs of imperfect quantum devices and more advanced experimental demonstrations.
Verification of quantum processors becomes particularly challenging and relevant in the regimes of quantum advantage, where quantum devices outperform their classical counterparts [99,100]. As solving a "useful" computational task (such as factoring a large number) would neither be feasible with noisy intermediate-scale quantum computers nor necessary to demonstrate quantum superiority, one focuses in this context on sampling problems [101][102][103]. However, these approaches entail difficulties in demonstrating quantum superiority: on the one hand, the fact that the sampling was performed faithfully needs to be verified; on the other hand, one needs to show that the task is computationally hard for any classical device (taking into account that the quantum computer is imperfect). In this context, both strong complexity-theoretic evidence of classical intractability and new proposals for experimental realizations in various setups are desirable.
Acknowledgment -Work at Innsbruck is supported by the European Union program Horizon 2020 under Grants