Optimal verification of entangled states with local measurements

Consider the task of verifying that a given quantum device, designed to produce a particular entangled state, does indeed produce that state. One natural approach would be to characterise the output state by quantum state tomography, or alternatively to perform some kind of Bell test tailored to the state of interest. We show here that neither approach is optimal amongst local verification strategies for two-qubit states. We find the optimal strategy in this case and show that quadratically fewer total measurements are needed to verify to within a given fidelity than in published results for quantum state tomography, Bell test, or fidelity estimation protocols. We also give efficient verification protocols for any stabilizer state. Additionally, we show that requiring that the strategy be constructed from local, non-adaptive and non-collective measurements incurs only a constant-factor penalty over a strategy without these restrictions.

Efficient and reliable quantum state preparation is a necessary step for all quantum technologies. However, characterisation and verification of such devices is typically a time-consuming and computationally difficult process. For example, tomographic reconstruction of a state of 8 ions required taking ∼650,000 measurements over 10 hours, and a statistical analysis that took far longer [1]; verification of a few-qubit photonic state is similarly challenging [2,3]. This is also the case in tomography of continuous-variable systems [4][5][6]. One may instead resort to non-tomographic methods to verify that a device reliably outputs a particular state, but such methods typically either: (a) assume that the output state is within some special family of states, for example in compressed sensing [7,8] or matrix product state tomography [9]; or (b) extract only partial information about the state, such as when estimating entanglement witnesses [10,11].
Here, we derive the optimal local verification strategy for common entangled states and compare its performance to bounds for non-adaptive quantum state tomography in [12] and the fidelity estimation protocol in [13]. Specifically, we demonstrate non-adaptive verification strategies for arbitrary two-qubit states and stabilizer states of N qubits that are constructed from local measurements, and require quadratically fewer copies to verify to within a given fidelity than for these previous protocols. Moreover, the requirement that the measurements be local incurs only a constant-factor penalty over the best non-local strategy, even if collective and adaptive measurements are allowed.
Premise. Colloquially, a quantum state verification protocol is a procedure for gaining confidence that the output of some device is a particular state over any other. However, for any scheme involving measurements on a finite number of copies of the output state, one can always find an alternative state within some sufficiently small distance that is guaranteed to fool the verifier. Furthermore, the outcomes of measurements are, in general, probabilistic and a verification protocol collects a finite amount of data; and so any statement about verification can only be made up to some finite statistical confidence. The only meaningful statement to make in this context is the statistical inference that the state output from a device sits within a ball of a certain small radius (given some metric) of the correct state, with some statistical confidence. Thus the outcome of a state verification protocol is a statement like: "the device outputs copies of a state that has 99% fidelity with the target, with 90% probability". Note that this is different to the setting of state tomography; a verification protocol answers the question: "Is the state |ψ⟩?" rather than the more involved tomographic question: "Which state do I have?". Hence, unlike tomography, a verification protocol may give no information about the true state if the protocol fails.
We now outline the framework for verification protocols that we consider. Take a verifier with access to some set of allowed measurements, and a device that produces states σ_1, σ_2, …, σ_n which are supposed to all be |ψ⟩, but may in practice be different from |ψ⟩ or each other. We have the promise that either σ_i = |ψ⟩⟨ψ| for all i, or ⟨ψ|σ_i|ψ⟩ ≤ 1 − ε for all i. The verifier must determine which is the case with worst-case failure probability δ.
The protocol proceeds as follows. For each σ_i, the verifier randomly draws a binary-outcome projective measurement {P_j, 1 − P_j} from a prespecified set S with some probability μ_j^i. Label the outcomes "pass" and "fail"; in a "pass" instance the verifier continues to state σ_{i+1}, otherwise the protocol ends and the verifier concludes that the state was not |ψ⟩. If the protocol passes on all n states, then the verifier concludes that the state was |ψ⟩. We impose the constraint that every P_j ∈ S always accepts when σ_i = |ψ⟩⟨ψ| for all i (i.e. that |ψ⟩ is in the "pass" eigenspace of every projector P_j ∈ S). This may seem a prohibitively strong constraint, but we later demonstrate that it is both achievable for the sets of states we consider and is always asymptotically favourable to the verifier.
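The accept/reject loop above can be sketched in a few lines (Python with NumPy; the function and variable names here are ours, purely illustrative): draw a projector at random for each copy and stop at the first "fail".

```python
import numpy as np

def verify(states, projectors, probs, rng):
    """Sequential verification: for each copy sigma_i, draw a binary
    projective measurement {P, 1 - P} with the prescribed probabilities
    and stop at the first "fail" outcome."""
    for sigma in states:
        P = projectors[rng.choice(len(projectors), p=probs)]
        p_pass = float(np.real(np.trace(P @ sigma)))  # Born rule: tr(P sigma)
        if rng.random() >= p_pass:                    # "fail" w.p. 1 - p_pass
            return "reject"
    return "accept"

rng = np.random.default_rng(0)
P0 = np.array([[1.0, 0.0], [0.0, 0.0]])          # projector onto |0><0|
good = [P0] * 10                                 # device emits |0> every time
bad = [np.array([[0.0, 0.0], [0.0, 1.0]])] * 10  # device emits the orthogonal |1>
```

With the single allowed measurement {|0⟩⟨0|, 1 − |0⟩⟨0|}, the good device passes every round with certainty, while the orthogonal bad state fails on the first copy.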
The maximal probability that the verifier passes on copy i is max_{σ_i} tr(Ω_i σ_i), where Ω_i = Σ_j μ_j^i P_j. However, the verifier seeks to minimise this quantity for each Ω_i and hence it suffices to take a fixed set of probabilities and projectors {μ_j, P_j}, independent of i. Then the verifier-adversary optimisation is

∆_ε = 1 − min_Ω max_{σ: ⟨ψ|σ|ψ⟩ ≤ 1−ε} tr(Ωσ),

where Ω = Σ_j μ_j P_j. We call Ω a strategy. ∆_ε is the worst-case probability that a bad state σ fails a single measurement. Then the maximal worst-case probability that the verifier fails to detect that we are in the "bad" case that ⟨ψ|σ|ψ⟩ ≤ 1 − ε is (1 − ∆_ε)^n; requiring this to be at most δ gives n ≥ ln δ⁻¹ / ln[(1 − ∆_ε)⁻¹] ≈ ∆_ε⁻¹ ln δ⁻¹. Protocols of this form satisfy some useful operational properties: A. Non-adaptivity. The strategy is fixed from the outset and depends only on the mathematical description of |ψ⟩, rather than the choices of any prior measurements or their measurement outcomes.
B. Future-proofing. The strategy is independent of the infidelity ε, and gives a viable strategy for any choice of ε. Thus an experimentalist is able to arbitrarily decrease the infidelity within which verification succeeds by simply taking more total measurements following the strategy prescription, rather than modifying the prescription itself. The experimentalist is free to choose an arbitrary ε > 0 and be guaranteed that the strategy still works in verifying |ψ⟩.
One may consider more general non-adaptive verification protocols given S and {σ_i}, where measurements do not output "pass" with certainty given input |ψ⟩, and the overall determination of whether to accept or reject is based on a more complicated estimator built from the relative frequency of "pass" and "fail" outcomes. However, we show in the Supplemental Material that these strategies require, asymptotically, quadratically more measurements in ε than those where |ψ⟩ is always accepted. We will also see that the protocol outlined above achieves the same scaling with ε and δ as the globally optimal strategy, up to a constant factor, and so any other strategy (even based on non-local, adaptive or collective measurements) would yield at most constant-factor improvements.
Given no constraints on the verifier's measurement prescription, the optimal strategy is simply to project onto |ψ⟩. In this case, the fewest number of measurements needed to verify to confidence 1 − δ and fidelity 1 − ε is n_opt = ln δ⁻¹ / ln[(1 − ε)⁻¹] ≈ ε⁻¹ ln δ⁻¹ (see the Supplemental Material). However, in general the projector |ψ⟩⟨ψ| will be non-local, which has the disadvantage of being harder to implement experimentally. This is particularly problematic in quantum optics, for example, where deterministic, unambiguous discrimination of a complete set of Bell states is impossible [14][15][16]. Thus, for each copy there is a fixed probability of the measurement returning a "null" outcome; hence, regardless of the optimality of the verification strategy, the probability of its successful operation decreases exponentially with the number of measurements. Instead, we seek optimal measurement strategies that satisfy some natural properties that make them both physically realisable and useful to a real-world verifier. We impose the following properties: 1. Locality. S contains only measurements corresponding to local observables, acting on a single copy of the output state.

2. Projective measurement. Every measurement in S is a binary-outcome projective measurement.

3. Trust. The physical operation of each measurement device is faithful to its mathematical description; it behaves as expected, without experimental error.
Thus for multipartite states we only consider strategies where each party locally performs a projective measurement on a single copy, and the parties accept or reject based on their collective measurement outcomes. We also highlight the trust requirement to distinguish from self-testing protocols [17][18][19].
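The count n_opt for the unrestricted strategy can be checked numerically (a Python/NumPy sketch; the helper name is ours): a state with fidelity at most 1 − ε passes each projection onto |ψ⟩ with probability at most 1 − ε, so n rounds suffice once (1 − ε)^n ≤ δ.

```python
import numpy as np

def n_opt(eps, delta):
    # Globally optimal strategy: project onto |psi> each round. A state with
    # fidelity at most 1 - eps passes all n rounds with probability at most
    # (1 - eps)^n, so we need (1 - eps)^n <= delta.
    return np.log(1 / delta) / np.log(1 / (1 - eps))

eps, delta = 0.01, 0.1
exact = n_opt(eps, delta)
approx = np.log(1 / delta) / eps  # small-eps approximation: eps^{-1} ln(1/delta)
```

For small ε the exact count and the ε⁻¹ ln δ⁻¹ approximation agree to within a percent.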
Given this prescription and the set of physically-motivated restrictions, we now derive the optimal verification strategy for some important classes of states. To illustrate our approach, we start with the case of a Bell state before generalising to larger classes of states.
Bell state verification. Consider the case of verifying the Bell state |Φ+⟩ = (1/√2)(|00⟩ + |11⟩). If we maintain a strategy where all measurements accept |Φ+⟩ with certainty, then it must be the case that Ω|Φ+⟩ = |Φ+⟩. The optimisation problem for the verifier-adversary pair is then given by

∆_ε = 1 − min_Ω max_{σ: ⟨Φ+|σ|Φ+⟩ ≤ 1−ε} tr(Ωσ).

However, we show in the Supplemental Material that it is never beneficial for the adversary to: (a) choose a non-pure σ; or (b) pick a σ with fidelity strictly smaller than 1 − ε. Then the adversary may be taken to choose a pure state |φ⟩ = √(1−ε)|Φ+⟩ + √ε|ψ⊥⟩ for some state |ψ⊥⟩ such that ⟨Φ+|ψ⊥⟩ = 0. Given that Ω|Φ+⟩ = |Φ+⟩, we can simplify by noting that ⟨Φ+|Ω|Φ+⟩ = 1 and ⟨Φ+|Ω|ψ⊥⟩ = 0. Thus

∆_ε = ε · max_Ω [1 − max_{|ψ⊥⟩} ⟨ψ⊥|Ω|ψ⊥⟩],

where the verifier controls Ω and the adversary controls |ψ⊥⟩. Given that |Φ+⟩ is itself an eigenstate of Ω, the worst-case scenario for the verifier is for the adversary to choose |ψ⊥⟩ as the eigenstate of Ω with the next largest eigenvalue. If we diagonalise Ω we can write Ω = |Φ+⟩⟨Φ+| + Σ_j ν_j |ψ⊥_j⟩⟨ψ⊥_j|, where ⟨Φ+|ψ⊥_j⟩ = 0 ∀j. The adversary picks the state |ψ⊥_max⟩ with corresponding eigenvalue ν_max = max_j ν_j. Now, consider the trace of Ω: if tr(Ω) < 2 then the strategy must be a convex combination of local projectors, at least one of which is rank 1. However, the only rank-1 projector that satisfies P|Φ+⟩ = |Φ+⟩ is |Φ+⟩⟨Φ+| itself, which is not local, and therefore tr(Ω) ≥ 2. Combining this with the expression for Ω above gives tr(Ω) = 1 + Σ_j ν_j ≥ 2.
It is always beneficial to the verifier to saturate this inequality, as any extra weight on the subspace orthogonal to |Φ+⟩ can only increase the chance of being fooled by the adversary. Thus the verifier is left with the optimisation min{ν_max : ν_j ≥ 0, Σ_j ν_j = 1}. This expression is optimised for ν_j = 1/3, j = 1, 2, 3. In this case, Ω = (1/3)·1 on the subspace orthogonal to the state |Φ+⟩. Then we can rewrite Ω as

Ω = (1/3)(P⁺_XX + P⁺_−YY + P⁺_ZZ),

where P⁺_XX is the projector onto the positive eigensubspace of the tensor product of Pauli matrices XX (and likewise for −YY and ZZ). The operational interpretation of this optimal strategy is then explicit: for each copy of the state, the verifier randomly chooses a measurement setting from the set {XX, −YY, ZZ}, each with probability 1/3, and accepts only on receipt of outcome "+1" on all n measurements. Note that we could expand Ω differently, for example by conjugating each term in the above expression by any local operator that leaves |Φ+⟩ invariant; the decomposition above is only one of a family of optimal strategies. As for scaling, we know that ∆_ε = (1 − ν_max)ε = 2ε/3, and the number of measurements needed to verify the Bell state |Φ+⟩ is then n ≈ (3/2) ε⁻¹ ln δ⁻¹. Note that this is only worse than the optimal non-local strategy by a factor of 1.5.
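The optimal Bell strategy is easy to check numerically (a Python/NumPy sketch; names are ours): the averaged stabilizer projectors give an Ω with |Φ+⟩ as the unique eigenvalue-1 eigenvector, eigenvalue 1/3 on the orthogonal subspace, and hence roughly a 1.5× measurement overhead relative to the global strategy.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])

def pos_proj(M):
    # projector onto the +1 eigenspace of an involution (M^2 = I)
    return (np.eye(M.shape[0]) + M) / 2

# measure XX, -YY, ZZ, each with probability 1/3
Omega = (pos_proj(np.kron(X, X)) + pos_proj(-np.kron(Y, Y))
         + pos_proj(np.kron(Z, Z))) / 3
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
evals = np.sort(np.linalg.eigvalsh(Omega))      # expect {1/3, 1/3, 1/3, 1}

# measurement counts: Delta_eps = 2*eps/3 locally, eps globally
eps, delta = 0.01, 0.1
n_local = np.log(1 / delta) / np.log(1 / (1 - 2 * eps / 3))
n_global = np.log(1 / delta) / np.log(1 / (1 - eps))
ratio = n_local / n_global                      # approaches 3/2 for small eps
```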
In comparison, consider instead verifying a Bell state by performing a CHSH test. Then even in the case of trusted measurements, the total number of measurements scales like O(1/ε²) [20], which is quadratically worse than the case of measuring the stabilizers {XX, −YY, ZZ}. This suboptimal scaling is shared by the known bounds for non-adaptive quantum state tomography with single-copy measurements in [12] and fidelity estimation in [13]. See [21][22][23] for further discussion of this scaling in tomography. Additionally, two-qubit tomography potentially requires five times as many measurement settings. We also note that a similar quadratic improvement was derived in adaptive quantum state tomography in [24], in the sample-optimal tomographic scheme in [25] and in the quantum state certification scheme in [26]; however, the schemes therein assume access to either non-local or collective measurements.
Arbitrary states of two qubits. The goal is unchanged for other pure states of two qubits: we seek strategies that accept the target state with certainty, and hence achieve the asymptotic advantage outlined for Bell states above. It is not clear a priori that such a strategy exists for general states, in a way that is as straightforward as the previous construction. However, we show that for any two-qubit state not only does such a strategy exist, but we can optimise within the family of allowable strategies and give an analytic expression with optimal constant factors.
We first remark that we can restrict to states of the form |ψ_θ⟩ = sin θ|00⟩ + cos θ|11⟩ without loss of generality, as any state is locally equivalent to a state of this form, for some θ. Specifically, given any two-qubit state |ψ⟩ with optimal strategy Ω_opt, a locally equivalent state (U ⊗ V)|ψ⟩ has optimal strategy (U ⊗ V)Ω_opt(U ⊗ V)†. The proof of this statement can be found in the Supplemental Material. Given the restriction to this family of states, we can now write down an optimal verification protocol.
Theorem 1. Any optimal strategy for verifying a state of the form |ψ_θ⟩ = sin θ|00⟩ + cos θ|11⟩ for 0 < θ < π/2, θ ≠ π/4, that accepts |ψ_θ⟩ with certainty and satisfies the properties of locality, trust and projective measurement, can be expressed as a strategy involving four measurement settings, one of which is {P⁺_ZZ, 1 − P⁺_ZZ}, where P⁺_ZZ is the projector onto the positive eigenspace of the Pauli operator ZZ, and the remaining three of which are built from the sets of states {|u_k⟩} and {|v_k⟩} written explicitly in the Supplemental Material. The number of measurements needed to verify to within infidelity ε and with power 1 − δ satisfies n = O(ε⁻¹ ln δ⁻¹).

FIG. 2. A comparison of the total number of measurements required to verify to fidelity 1 − ε for the strategy derived here, versus the known bounds for estimation up to fidelity 1 − ε using non-adaptive tomography in [12] and the fidelity estimation protocol in [13], and the globally optimal strategy given by projecting onto |ψ⟩. Here, 1 − δ = 0.9 and θ = π/8.
The proof of this theorem is included in the Supplemental Material. Note that the special cases for |ψ_θ⟩ where θ = 0, θ = π/2 and θ = π/4 are omitted from this theorem. In these cases, |ψ_θ⟩ admits a wider choice of measurements that accept with certainty. We have already treated the Bell state case θ = π/4 above. In the other two cases, the state |ψ_θ⟩ is product and hence the globally optimal measurement, just projecting onto |ψ_θ⟩, is a valid local strategy. We note that this leads to a discontinuity in the number of measurements needed as a function of θ, for fixed ε (as seen in Fig. 1). This arises since our strategies are designed to have the optimal scaling O(1/ε) for fixed θ, achieved by having strategies that accept |ψ⟩ with probability 1.
As for scaling, in Fig. 2 the number of measurements required to verify a particular two-qubit state of this form is shown for three protocols. The optimal protocol derived here gives a marked improvement over the previously published bounds for both tomography [12] and fidelity estimation [13] for the full range of ε, for the given values of θ and δ. The asymptotic nature of the advantage for the protocol described here implies that the gap between the optimal scheme and tomography only grows as the requirement on ε becomes more stringent. Note also that the optimal local strategy is only marginally worse than the best possible strategy of just projecting onto |ψ⟩.
Stabilizer states. Additionally, it is shown in the Supplemental Material that we can construct a strategy with the same asymptotic advantage for any stabilizer state, by drawing measurements from the stabilizer group (where now we only claim optimality up to constant factors). The derivation is analogous to that for the Bell state above, and given that the Bell state is itself a stabilizer state, the strategy above is a special case of the stabilizer strategy discussed below. For a state of N qubits, a viable strategy constructed from stabilizers must consist of at least the N stabilizer generators of |ψ⟩. This is because a set of k < N stabilizers stabilizes a subspace of dimension at least 2^{N−k}, and so in this case there always exists at least one state orthogonal to |ψ⟩ accessible to the adversary that fools the verifier with certainty. In this minimal case, the number of required measurements is n ≈ N ε⁻¹ ln δ⁻¹, with this bound saturated by measuring all stabilizer generators with equal weight. Conversely, constructing a measurement strategy from the full set of 2^N − 1 linearly independent stabilizers requires a number of measurements n ≈ 2(1 − 2^{−N}) ε⁻¹ ln δ⁻¹, again with this bound saturated by measuring each stabilizer with equal weight. For growing N, the latter expression for the number of measurements is bounded from above by 2 ε⁻¹ ln δ⁻¹, which implies that there is a local strategy for any stabilizer state, of an arbitrary number of qubits, which requires at most twice as many measurements as the optimal non-local strategy. Note that this strategy may not be exactly optimal; for example, the state |00⟩ is also a stabilizer state, and in this case applying the measurement |00⟩⟨00| is both locally implementable and provably optimal. Thus, the exactly optimal strategy may depend more precisely on the structure of the individual state itself. However, the stabilizer strategy is only inferior by a small constant factor. In comparison to the latter strategy constructed
from every stabilizer, the former strategy constructed from only the N stabilizer generators of |ψ has scaling that grows linearly with N .Thus there is ultimately a trade-off between number of measurement settings and total number of measurements required to verify within a fixed fidelity.
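Both stabilizer strategies can be checked numerically for the 3-qubit GHZ state (a Python/NumPy sketch; names are ours): the generator-only strategy has second-largest eigenvalue 1 − 1/N, giving n ≈ N ε⁻¹ ln δ⁻¹, while the full-group strategy has second-largest eigenvalue (2^N − 2)/(2(2^N − 1)), giving n ≈ 2(1 − 2^{−N}) ε⁻¹ ln δ⁻¹.

```python
import numpy as np
from functools import reduce
from itertools import product

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
kron_all = lambda ops: reduce(np.kron, ops)

N = 3
# stabilizer generators of the GHZ state (|000> + |111>)/sqrt(2): XXX, ZZI, IZZ
gens = [kron_all([X, X, X]), kron_all([Z, Z, I2]), kron_all([I2, Z, Z])]

# full stabilizer group = products of all subsets of generators
subsets = list(product([0, 1], repeat=N))
group = [reduce(np.matmul, [g for g, b in zip(gens, bits) if b], np.eye(2 ** N))
         for bits in subsets]
nontrivial = [S for S, bits in zip(group, subsets) if any(bits)]  # 2^N - 1 of them

# equal-weight strategies built from "pass" projectors (I + S)/2
Omega_gen = sum((np.eye(2 ** N) + g) / 2 for g in gens) / len(gens)
Omega_full = sum((np.eye(2 ** N) + S) / 2 for S in nontrivial) / len(nontrivial)
ev_gen = np.sort(np.linalg.eigvalsh(Omega_gen))
ev_full = np.sort(np.linalg.eigvalsh(Omega_full))
```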
In principle, the recipe derived here to extract the optimal strategy for a state of two qubits can be applied to any pure state.However, we anticipate that deriving this strategy, including correct constants, may be somewhat involved (both analytically and numerically) for states of greater numbers of qubits.
Following the completion of this work, we became aware of [27] which, among other results, applies a similar protocol to the Bell state verification strategy in the context of entanglement detection.

Supplemental Material: Optimal verification of entangled states with local measurements
The contents of the following supplemental material are as follows: in Appendix A, we set up a formal framework for state verification protocols. In Appendix B we simplify the form of the protocol using the set of physically-motivated strategy requirements outlined in the main body. Appendix C is concerned with deriving the optimal strategy for states of two qubits, in particular proving Theorem 1; and in Appendix D we derive efficient verification strategies for stabilizer states. Finally, Appendix E outlines the hypothesis testing framework necessary for this paper.

We first set up a formal framework for general state verification protocols. We assume that we have access to a device D that is supposed to produce copies of a state |ψ⟩. However, D might not work correctly, and actually produces (potentially mixed) states σ_1, σ_2, … such that σ_i might not be equal to |ψ⟩⟨ψ|. In order to distinguish this from the case where the device works correctly by making a reasonable number of uses of D, we need to have a promise that these states are sufficiently far from |ψ⟩. So we are led to the following formulation of our task: Distinguish between the following two cases: (i) σ_i = |ψ⟩⟨ψ| for all i; (ii) ⟨ψ|σ_i|ψ⟩ ≤ 1 − ε for all i. Given a verifier with access to a set of available measurements S, the protocols we consider for completing this task are of the following form:

Protocol: Quantum state verification
1: for i = 1 to n do
2:   Perform a two-outcome measurement M_i ∈ S on σ_i, where M_i's outcomes are associated with "pass" and "fail"
3:   if "fail" is returned then
4:     Output "reject"
5: Output "accept"

We impose the conditions that in the good case, the protocol accepts with certainty, whereas in the bad case, the protocol accepts with probability at most δ; we call 1 − δ the statistical power of the protocol. We then aim to find a protocol that minimises n for a given choice of |ψ⟩, ε and S, such that these constraints are satisfied. Insisting that the protocol accepts in the good case with certainty implies that all measurements in S are guaranteed to pass in this
case. This is a desirable property in itself, but one could consider more general non-adaptive protocols where measurements do not output "pass" with certainty on |ψ⟩, and the protocol determines whether to accept based on an estimator constructed from the relative frequency of "pass" and "fail" outcomes across all n copies. We show in Appendix E that this class of protocols has quadratically worse scaling in ε than protocols where each measurement passes with certainty on |ψ⟩.
We make the following observations about this framework: 1. Given no restrictions on M_i, the optimal protocol is simply for each measurement to project onto |ψ⟩. In fact, this remains optimal even over the class of more general protocols making use of adaptivity or collective measurements. One can see this as follows: if a two-outcome measurement M (corresponding to the whole protocol) is described by measurement operators P (accept) and I − P (reject), then if M accepts |ψ⟩^⊗n with certainty, we must have P = (|ψ⟩⟨ψ|)^⊗n + P′ for some residual positive semidefinite operator P′. Then replacing P with (|ψ⟩⟨ψ|)^⊗n gives at least as good a protocol, as the probability of accepting |ψ⟩ remains 1, while the probability of accepting other states cannot increase.
The probability of acceptance in the bad case after n trials is then at most (1 − ε)^n, so it is sufficient to take

n = ⌈ln δ⁻¹ / ln[(1 − ε)⁻¹]⌉ ≈ ε⁻¹ ln δ⁻¹  (S1)

to achieve statistical power 1 − δ. This will be the yardstick against which we will compare our more restricted protocols below.
2. We assume that the states σ_i are independently and adversarially chosen. This implies that if (as we will consider below) S contains only projective measurements and does not contain the measurement projecting onto |ψ⟩⟨ψ|, it is necessary to choose the measurement M_i at random from S, unknown to the adversary. Otherwise, we could be fooled with certainty by the adversary choosing σ_i to have support only in the "pass" eigenspace of M_i for each copy i.
3. We can be explicit about the optimisation needed to derive the optimal protocol in this adversarial setting.
As protocols of the above form reject whenever a measurement fails, the adversary's goal at the i'th step is to maximise the probability that the measurement M_i at that step passes on σ_i. If the j'th measurement setting in S, M_j, is picked from S at step i with probability μ_j^i, the largest possible overall probability of passing for copy i is

Pr[pass on copy i] = max_{σ_i: ⟨ψ|σ_i|ψ⟩ ≤ 1−ε} Σ_j μ_j^i tr(P_j σ_i),  (S2)

where we denote the corresponding "pass" projectors P_j. We can write Ω_i = Σ_j μ_j^i P_j, and then

Pr[pass on copy i] = max_{σ_i: ⟨ψ|σ_i|ψ⟩ ≤ 1−ε} tr(Ω_i σ_i).  (S3)

As the verifier, we wish to minimise this expression over all Ω_i, so we end up with a final expression that does not depend on i. This leads us to infer that optimal protocols of this form can be assumed to be non-adaptive in two senses: they do not depend on the outcome of previous measurements (which is clear, as the protocol rejects if it ever sees a "fail" outcome); and they also do not depend on the measurement choices made previously.
Therefore, in order to find an optimal verification protocol, our task is to determine

min_Ω max_{σ: ⟨ψ|σ|ψ⟩ ≤ 1−ε} tr(Ωσ),

where Ω is an operator of the form Ω = Σ_j μ_j P_j for P_j ∈ S and some probabilities μ_j. We call such operators strategies. If S contained all measurement operators (or even all projectors), Ω would be an arbitrary operator satisfying 0 ≤ Ω ≤ I. However, this notion becomes nontrivial when one considers restrictions on S. Here, we focus on the experimentally motivated case where S contains only projective measurements that can be implemented via local operations and classical postprocessing.
4. In a non-adversarial scenario, it may be acceptable to fix the measurements in Ω in advance, with appropriate frequencies μ_j. Then, given n, a strategy Ω = Σ_j μ_j P_j corresponds to a protocol where for each j we deterministically make μ_j n measurements {P_j, I − P_j}. For large n, and fixed σ_i = σ, this will achieve similar performance to the above protocol.
5. More complicated protocols with adaptive or collective measurements, or measurements with more than two outcomes, cannot markedly improve on the strategies derived here. We do not treat these more general strategies explicitly, but note that the protocols we will describe based on local projective measurements already achieve the globally optimal bound (S1) up to constant factors, so any gain from these more complex approaches would be minor.
We have asserted that Ω accepts |ψ⟩ with certainty: ⟨ψ|Ω|ψ⟩ = 1. However, for this to be the case Ω must have |ψ⟩ as an eigenstate with eigenvalue 1; thus we can write

Ω = |ψ⟩⟨ψ| + Σ_j c_j |ψ⊥_j⟩⟨ψ⊥_j|,

where the states {|ψ⊥_j⟩} are a set of mutually orthogonal states orthogonal to |ψ⟩. Then

tr(Ωσ) = ⟨ψ|σ|ψ⟩ + Σ_j c_j ⟨ψ⊥_j|σ|ψ⊥_j⟩.

We can write

σ = a|ψ⟩⟨ψ| + b σ⊥ + (|ψ⟩⟨Φ⊥| + |Φ⊥⟩⟨ψ|),

where σ⊥ is a density matrix entirely supported in the subspace spanned by the states |ψ⊥_j⟩, and |Φ⊥⟩ is a vector in the subspace spanned by the |ψ⊥_j⟩. We know that a = r as ⟨ψ|σ|ψ⟩ = r, and b = 1 − r as tr(σ) = 1. Now, note that the probability of accepting σ does not depend on the choice of |Φ⊥⟩. Thus tr(Ωσ) is maximised when σ⊥ = |ψ⊥_max⟩⟨ψ⊥_max|, where |ψ⊥_max⟩ is the orthogonal state in the spectral decomposition of Ω with largest eigenvalue, c_max. Thus

max_σ tr(Ωσ) = r + (1 − r) c_max,

which is achieved by any density matrix of the form σ = r|ψ⟩⟨ψ| + (1 − r)|ψ⊥_max⟩⟨ψ⊥_max| + (|ψ⟩⟨Φ⊥| + |Φ⊥⟩⟨ψ|). Note that the pure state σ = |φ⟩⟨φ| for |φ⟩ = √r|ψ⟩ + √(1−r)|ψ⊥_max⟩ is of this form, and so we can assume that the adversary makes this choice.
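The closed form max_σ tr(Ωσ) = r + (1 − r)c_max can be checked on the Bell-state strategy from the main text, for which c_max = 1/3 (a Python/NumPy sketch; names are ours):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])
# Bell strategy (3*I + XX - YY + ZZ)/6, the average of P+_XX, P+_-YY, P+_ZZ
Omega = (3 * np.eye(4) + np.kron(X, X) - np.kron(Y, Y) + np.kron(Z, Z)) / 6

psi = np.array([1, 0, 0, 1]) / np.sqrt(2)        # |Phi+>, eigenvalue 1
psi_perp = np.array([1, 0, 0, -1]) / np.sqrt(2)  # |Phi->, eigenvalue c_max = 1/3

r = 0.95                                  # adversary's fidelity <psi|sigma|psi>
phi = np.sqrt(r) * psi + np.sqrt(1 - r) * psi_perp
accept = float(np.real(phi.conj() @ Omega @ phi))
```

The pure superposition |φ⟩ = √r|ψ⟩ + √(1−r)|ψ⊥_max⟩ indeed attains acceptance probability r + (1 − r)/3.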
Given that the state σ can be taken to be pure, and that the fidelity ⟨ψ|σ|ψ⟩ can be taken to be exactly r = 1 − ε̄ for some ε̄ ≥ ε, the optimisation problem becomes to determine ∆_ε, where

∆_ε = max_Ω min_{ε̄ ≥ ε} ε̄ [1 − max_{|ψ⊥⟩} ⟨ψ⊥|Ω|ψ⊥⟩]

and Ω|ψ⟩ = |ψ⟩.
As for the optimisation over ε̄, note that it is the goal of the adversary to make ∆ as small as possible, and so they are obliged to set ε̄ = ε. Then the optimisation becomes

∆_ε = ε · max_Ω [1 − max_{|ψ⊥⟩} ⟨ψ⊥|Ω|ψ⊥⟩],  (S15)

where Ω|ψ⟩ = |ψ⟩.
Note that this expression implies that any Ω with Ω|ψ⟩ = |ψ⟩ automatically satisfies the future-proofing property: firstly, Ω is independent of ε, but also the strategy must be viable for any choice of ε (i.e. there must not be a choice of ε where ∆_ε = 0). For an initial choice ε′ with ∆_ε′ > 0, we have that 1 − ⟨ψ⊥|Ω|ψ⊥⟩ > 0 and so ∆_ε > 0 for any 0 < ε < ε′. Thus the verifier is free to decrease ε arbitrarily without fear of the strategy failing. Note also that this condition may not be automatically guaranteed if the verifier chooses an Ω such that Ω|ψ⟩ ≠ |ψ⟩.
Regarding the optimisation problem in (S15), for an arbitrary state |ψ⟩ on n qubits it is far from clear how to: (a) construct families of viable Ω (built from local projective measurements) that accept |ψ⟩ with certainty; and (b) solve this optimisation problem over those families of Ω. For the remainder of this work, we focus on states of particular experimental interest where we can solve the problem: arbitrary states of 2 qubits, and stabilizer states.

Appendix C: States of two qubits
We now derive the optimal verification strategy for an arbitrary pure state of two qubits. We first give the proof of the statement in the main text that optimal strategies for locally equivalent states are easily derived by conjugating the strategy with the local map that takes one state to the other. Hence, we can restrict our consideration to verifying states of the form |ψ⟩ = sin θ|00⟩ + cos θ|11⟩ without loss of generality. Specifically:

Lemma 3. Given any two-qubit state |ψ⟩ with optimal strategy Ω_opt, a locally equivalent state |ψ′⟩ = (U ⊗ V)|ψ⟩ has optimal strategy Ω′ = (U ⊗ V)Ω_opt(U ⊗ V)†.

Proof. We must show that the strategy Ω′ = (U ⊗ V)Ω_opt(U ⊗ V)† is both a valid strategy, and is optimal for verifying |ψ′⟩ = (U ⊗ V)|ψ⟩. Validity: If Ω_opt = Σ_j μ_j P_j is a convex combination of local projectors, then so is Ω′ = Σ_j μ_j (U ⊗ V)P_j(U ⊗ V)†, since each conjugated projector is itself local. Optimality: The performance of a strategy is determined by the maximum probability of accepting an orthogonal state |ψ⊥⟩. For the strategy-state pairs (Ω_opt, |ψ⟩) and (Ω′, |ψ′⟩), we denote this parameter q_opt and q′, respectively. Then

q′ = max_{⟨ψ′⊥|ψ′⟩=0} ⟨ψ′⊥|Ω′|ψ′⊥⟩ = max_{⟨ψ⊥|ψ⟩=0} ⟨ψ⊥|Ω_opt|ψ⊥⟩ = q_opt.

So applying the same local rotation to the strategy and the state results in no change in the performance of the strategy. Thus the following simple proof by contradiction holds: assume that there is a better strategy for verifying |ψ′⟩, denoted Ω″. But then the strategy (U ⊗ V)†Ω″(U ⊗ V) must have a better performance for verifying |ψ⟩ than Ω_opt, which is a contradiction. Thus Ω′ must be the optimal strategy for verifying |ψ′⟩.
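Lemma 3 can be illustrated numerically (a Python/NumPy sketch; the random-unitary helper is ours): conjugating the Bell-state strategy by a random U ⊗ V yields a strategy that accepts (U ⊗ V)|ψ⟩ with certainty and has an identical spectrum, hence identical performance.

```python
import numpy as np

rng = np.random.default_rng(7)

def random_unitary(rng):
    # Haar-random 2x2 unitary via QR with phase fixing (illustrative helper)
    A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    Q, R = np.linalg.qr(A)
    return Q @ np.diag(np.diag(R) / np.abs(np.diag(R)))

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])
Omega = (3 * np.eye(4) + np.kron(X, X) - np.kron(Y, Y) + np.kron(Z, Z)) / 6
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)

UV = np.kron(random_unitary(rng), random_unitary(rng))
Omega_rot = UV @ Omega @ UV.conj().T   # conjugated strategy
psi_rot = UV @ psi                     # locally equivalent state
```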
We will now prove Theorem 1 from the main body. However, we first prove a useful lemma: that no optimal strategy can contain the identity measurement (where the verifier always accepts regardless of the tested state). In the following discussion, we denote the projector Π := 1 − |ψ⟩⟨ψ|. For a strategy Ω with Ω|ψ⟩ = |ψ⟩, the quantity of interest which determines ∆_ε in (S15) is the maximum probability of accepting an orthogonal state |ψ⊥⟩:

q := max_{⟨ψ⊥|ψ⟩=0} ⟨ψ⊥|Ω|ψ⊥⟩.

If a strategy is augmented with an accent or subscript, the parameter q inherits that accent or subscript.

Lemma 4. Consider an operator Ω = η 1 + (1 − η)Ω̃ for some η > 0, where Ω̃ is a strategy satisfying Ω̃|ψ⟩ = |ψ⟩. Then q ≥ q̃; that is, the strategy Ω̃ obtained by removing the identity term performs at least as well as Ω.

Proof. For arbitrary |ψ⊥⟩ such that ⟨ψ|ψ⊥⟩ = 0, we have ⟨ψ⊥|Ω|ψ⊥⟩ = η + (1 − η)⟨ψ⊥|Ω̃|ψ⊥⟩ ≥ ⟨ψ⊥|Ω̃|ψ⊥⟩, and hence q ≥ q̃.

We are now in a position to prove Theorem 1. Note that the special cases where |ψ⟩ is a product state (θ = 0 or π/2) or a Bell state (θ = π/4) are treated separately.
Theorem 1 (restated). Any optimal strategy for verifying a state of the form |ψ⟩ = sin θ|00⟩ + cos θ|11⟩ for 0 < θ < π/2, θ ≠ π/4, that accepts |ψ_θ⟩ with certainty and satisfies the properties of locality, trust and projective measurement, can be expressed as a strategy involving four measurement settings, one of which is {P⁺_ZZ, 1 − P⁺_ZZ} and the remaining three of which are built from product states |φ_k⟩. The number of measurements needed to verify to within infidelity ε and statistical power 1 − δ is n = O(ε⁻¹ ln δ⁻¹).

Proof. The strategy Ω can be written as a convex combination of local projectors. We can group the projectors by their action according to two local parties, Alice and Bob, and then it must be expressible as a convex combination of five types of terms, grouped by trace:

(1) ρ ⊗ σ;  (2) ρ ⊗ σ + ρ⊥ ⊗ σ⊥;  (3) 1 ⊗ 1 − ρ ⊗ σ;  (4) ρ ⊗ 1 or 1 ⊗ σ;  (5) 1 ⊗ 1,

where ρ and σ denote single-qubit pure states (with indices per term suppressed). The state ρ⊥ is the density matrix defined by tr(ρρ⊥) = 0. Qualitatively, given two local parties Alice and Bob with access to one qubit each, and projectors with outcomes {λ, λ̄}, the terms above correspond to the following strategies: (1) Alice and Bob both apply a projective measurement and accept if both outcomes are λ; (2) Alice and Bob both apply a projective measurement and accept if both outcomes agree; (3) Alice and Bob both apply a projective measurement and accept unless both outcomes are λ; (4) Alice or Bob applies a projective measurement and accepts on outcome λ, and the other party abstains; and (5) both Alice and Bob accept without applying a measurement.
We show in Appendix E that strategies that accept |ψ⟩ with certainty have a quadratic advantage in scaling in terms of ε. Given this, we enforce this constraint from the outset and then show that a viable strategy can still be constructed. For the general strategy in Eq. (S27) to accept |ψ⟩ with certainty, each term in its expansion must accept |ψ⟩ with certainty. However, this is impossible to achieve for some of the terms in the above expansion. In particular, we show that the terms (ρ ⊗ σ), (ρ ⊗ 1) and (1 ⊗ σ) cannot accept |ψ⟩ with certainty, and the form of the term (ρ ⊗ σ + ρ⊥ ⊗ σ⊥) is restricted.
(ρ ⊗ σ + ρ⊥ ⊗ σ⊥): for this term, we can expand both ρ and σ in terms of Pauli operators:

ρ = (1 + αX + βY + γZ)/2,  σ = (1 + α′X + β′Y + γ′Z)/2.

Inserting these expressions and the definition of |ψ⟩ into the condition that p = 1 gives the constraint

sin(2θ)(αα′ − ββ′) + γγ′ = 1. (S31)

Now, we know from the Cauchy–Schwarz inequality that

sin(2θ)(αα′ − ββ′) + γγ′ ≤ sin(2θ)√((α² + β²)(α′² + β′²)) + γγ′ ≤ sin(2θ)√((1 − γ²)(1 − γ′²)) + γγ′ ≤ 1,

where the second inequality is derived from the fact that {α, β, γ} and {α′, β′, γ′} are the parameterisations of a pair of density matrices, so α² + β² + γ² ≤ 1 and likewise for the primed variables. There are two ways that this chain of inequalities can be saturated: (a) sin(2θ) = 1; (b) αα′ − ββ′ = 0 and γγ′ = 1. In all other cases the inequality is strict. Thus the constraint in Eq. S31 cannot be satisfied in general. Exception (a) corresponds to θ = π/4, which is omitted from this proof and treated separately. In exception (b), we have that γγ′ = 1 and so either γ = γ′ = 1 or γ = γ′ = −1. In both cases we have

ρ ⊗ σ + ρ⊥ ⊗ σ⊥ = |00⟩⟨00| + |11⟩⟨11| = P⁺_ZZ,

where P⁺_ZZ is the projector onto the positive eigenspace of ZZ. This is the only possible choice for this particular term that accepts |ψ⟩ with certainty.
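The constraint above is easy to verify numerically. A minimal sketch (not part of the original derivation; it assumes NumPy, with the Bloch vectors drawn at random) checks the Pauli-expansion formula p = ½[1 + sin(2θ)(αα′ − ββ′) + γγ′] for this term, and that the saturating choice γ = γ′ = 1 reproduces P⁺_ZZ, which accepts |ψ⟩ with certainty:

```python
import numpy as np

theta = 0.7
psi = np.array([np.sin(theta), 0, 0, np.cos(theta)])
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0 + 0j])
I2 = np.eye(2, dtype=complex)

def bloch(n):
    # single-qubit state with Bloch vector n = (alpha, beta, gamma)
    return 0.5 * (I2 + n[0] * X + n[1] * Y + n[2] * Z)

rng = np.random.default_rng(0)
u = rng.normal(size=3); u /= np.linalg.norm(u)   # pure state: |u| = 1
v = rng.normal(size=3); v /= np.linalg.norm(v)

term = np.kron(bloch(u), bloch(v)) + np.kron(bloch(-u), bloch(-v))
p = np.real(psi @ term @ psi)
# Pauli-expansion prediction for the acceptance probability
p_formula = 0.5 * (1 + np.sin(2 * theta) * (u[0] * v[0] - u[1] * v[1]) + u[2] * v[2])

# saturating choice gamma = gamma' = 1: the term becomes P+_ZZ
pzz = np.kron(bloch([0, 0, 1]), bloch([0, 0, 1])) + np.kron(bloch([0, 0, -1]), bloch([0, 0, -1]))
p_certain = np.real(psi @ pzz @ psi)
```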
We can also make use of Lemma 4 to remove the term 1 ⊗ 1. Given this and the restrictions above from enforcing that p = 1, the measurement strategy can be written

Ω = α P⁺_ZZ + (1 − α) Σ_k η_k (1 − ρ_k ⊗ σ_k),

where Σ_k η_k = 1 and 0 ≤ α ≤ 1.
We'll try to further narrow down the form of this strategy by averaging; i.e. by noting that, as |ψ⟩ is an eigenstate of the matrix M_ζ ⊗ M_{−ζ}, with M_ζ = e^{iζZ} = diag(e^{iζ}, e^{−iζ}), conjugating the strategy by M_ζ ⊗ M_{−ζ} and integrating over all possible ζ cannot make the strategy worse; if we consider an averaged strategy Ω̄ such that

Ω̄ = (1/2π) ∫ dζ (M_ζ ⊗ M_{−ζ}) Ω (M_ζ ⊗ M_{−ζ})†,

then necessarily the performance of Ω̄ cannot be worse than that of Ω. To see this, note that the averaging procedure does not affect the probability of accepting the state |ψ⟩. However, for each particular value of ζ the optimisation for the adversary may lead to different choices for the orthogonal state |ψ⊥(ζ)⟩, and so averaging over ζ cannot be better for the adversary than choosing the optimal |ψ⊥⟩ at ζ = 0. We can also consider discrete symmetries of the state |ψ⟩. In particular, |ψ⟩ is invariant under both swapping the two qubits, and complex conjugation (with respect to the standard basis); by the same argument, averaging over these symmetries (i.e. by considering Ω̄ = ½(Ω + SWAP Ω SWAP†) and Ω̄ = ½(Ω + Ω*)) cannot produce strategies inferior to the original Ω. Therefore we can consider a strategy averaged over these families of symmetries of |ψ⟩, without any loss in performance.
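The invariance underlying this averaging argument can be checked directly. A minimal sketch (not part of the original derivation; it assumes NumPy, and Ω here is an arbitrary positive operator rather than an actual strategy) verifies that |ψ⟩ is fixed by M_ζ ⊗ M_{−ζ} and that averaging over ζ leaves the acceptance probability of |ψ⟩ unchanged:

```python
import numpy as np

theta = 0.5
psi = np.array([np.sin(theta), 0, 0, np.cos(theta)], dtype=complex)

def M(zeta):
    # M_zeta = exp(i zeta Z) = diag(e^{i zeta}, e^{-i zeta})
    return np.diag([np.exp(1j * zeta), np.exp(-1j * zeta)])

rng = np.random.default_rng(1)
zeta = rng.uniform(0, 2 * np.pi)
U = np.kron(M(zeta), M(-zeta))
deviation = np.linalg.norm(U @ psi - psi)   # |psi> is invariant

# averaging an arbitrary positive operator over zeta leaves <psi|Omega|psi> unchanged
A = rng.normal(size=(4, 4))
Omega = A @ A.T
zs = np.linspace(0, 2 * np.pi, 400, endpoint=False)
Us = [np.kron(M(z), M(-z)) for z in zs]
Omega_bar = sum(u @ Omega @ u.conj().T for u in Us) / len(zs)
p_before = np.real(psi.conj() @ Omega @ psi)
p_after = np.real(psi.conj() @ Omega_bar @ psi)
```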
This averaging process is useful for three reasons. Firstly, it heavily restricts the number of free parameters in Ω requiring optimisation. Secondly, it allows us to be explicit about the general form of Ω̄. Thirdly, the averaging procedures are distributive over addition, and so we can make the replacement term by term, averaging each operator 1 − ρ_k ⊗ σ_k in the sum separately. Note that a single term 1 − ρ_k ⊗ σ_k may, after averaging, be a convex combination of multiple terms of the form 1 − ρ ⊗ σ. To proceed, we will use this averaging procedure to show that it suffices to include only a single, post-averaging term of the form 1 − ρ_k ⊗ σ_k in the strategy Ω, and that the resulting operator can be explicitly decomposed into exactly three measurement settings.
Consider a general operator Ω, expressed as a 4 × 4 matrix. First, take the discrete symmetries of |ψ⟩. Averaging over complex conjugation in the standard basis implies that the coefficients of Ω are real; and averaging over qubit swapping implies that Ω is symmetric with respect to swapping of the two qubits. Denote the operator after averaging these discrete symmetries as Ω̃. Then consider averaging over the continuous symmetry of |ψ⟩:

Ω̄ = (1/2π) ∫ dζ (M_ζ ⊗ M_{−ζ}) Ω̃ (M_ζ ⊗ M_{−ζ})† = [[a, 0, 0, b], [0, c, 0, 0], [0, 0, c, 0], [b, 0, 0, d]]

in the ordered basis {|00⟩, |01⟩, |10⟩, |11⟩}, with the condition Ω̄|ψ⟩ = |ψ⟩ imposing a sin θ + b cos θ = sin θ and b sin θ + d cos θ = cos θ. The eigensystem of this operator is then completely specified; besides |ψ⟩, it has the following eigenvectors:

|v_1⟩ = cos θ|00⟩ − sin θ|11⟩, |v_2⟩ = |01⟩, |v_3⟩ = |10⟩, (S43)

with corresponding eigenvalues λ_1 = 1 − b csc θ sec θ and λ_2 = λ_3 = c. The maximum probability of accepting a state orthogonal to |ψ⟩, q, can then be written

q = max(λ_1, λ_2).

Then, taking just the ρ ⊗ σ part of a trace-3 term and expressing it as a matrix in the computational basis gives an operator whose |01⟩⟨01| and |10⟩⟨10| populations are 1 − λ_2 and whose only off-diagonal elements are the |00⟩⟨11| coherences, of magnitude (1 − λ_1) sin θ cos θ. To enforce separability it is necessary and sufficient to check positivity under partial transposition, yielding the constraint

2(1 − λ_2) − (1 − λ_1) sin(2θ) ≥ 0. (S48)

Simple rearrangement, using the fact that a trace-3 term satisfies λ_1 + 2λ_2 = 2, gives a lower bound that must be satisfied for the strategy to remain separable:

λ_1 ≥ sin(2θ)/(1 + sin(2θ)) ≡ λ_LB.

This additional locality constraint rules out any point on the black line to the left of the red point in Fig. S1. The term P⁺_ZZ has (λ_1, λ_2) = (1, 0) and so represents a single point in the (λ_1, λ_2) plane. Thus the parameters (λ_1, λ_2) for the full strategy Ω must be represented by a point in the convex hull of the single point representing the P⁺_ZZ term and the locus of points representing the trace-3 part, i.e. in the unshaded region in Fig. S1.
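The eigensystem claimed here can be confirmed numerically. A minimal sketch (not part of the original derivation; it assumes NumPy, with the free parameters b and c = λ₂ chosen arbitrarily) builds the averaged operator, imposes Ω̄|ψ⟩ = |ψ⟩, and checks the eigenvalue λ₁ = 1 − b csc θ sec θ on the eigenvector cos θ|00⟩ − sin θ|11⟩:

```python
import numpy as np

theta = 0.9
s, c = np.sin(theta), np.cos(theta)
psi = np.array([s, 0, 0, c])
v1 = np.array([c, 0, 0, -s])   # cos(theta)|00> - sin(theta)|11>

# generic averaged operator: real, swap-symmetric, with Omega|psi> = |psi>
b, lam2 = 0.2, 0.35            # free parameters (arbitrary here)
a = 1 - b * c / s              # from a sin(theta) + b cos(theta) = sin(theta)
d = 1 - b * s / c              # from b sin(theta) + d cos(theta) = cos(theta)
Omega = np.array([[a,    0,    0,    b],
                  [0,    lam2, 0,    0],
                  [0,    0,    lam2, 0],
                  [b,    0,    0,    d]])

fixed = np.linalg.norm(Omega @ psi - psi)   # Omega fixes |psi>
lam1_pred = 1 - b / (s * c)                 # 1 - b csc(theta) sec(theta)
eig_resid = np.linalg.norm(Omega @ v1 - lam1_pred * v1)
```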
We now show that a strategy that includes more trace-3 terms cannot improve on the performance of the strategy above. Write this expanded strategy as

Ω = α P⁺_ZZ + (1 − α) Σ_k η_k (1 − ρ_k ⊗ σ_k), (S49)

for Σ_k η_k = 1. Firstly, we note again that the averaging operations (SWAP, conjugation via M_ζ and complex conjugation in the standard basis) are distributive over addition, and so we can average each trace-3 term in the sum individually; denote the averaged sum Ω_comp. Note that each term in Ω_comp satisfies both the constraint from the trace and the constraint from PPT in S48, and hence so does Ω_comp. Now, each operator in this sum shares the same eigenbasis (namely, the set of states {|v_i⟩} in S43). Thus we know that λ_1^comp = Σ_k η_k λ_{1,k}, and likewise for λ_2^comp; i.e. the strategy parameters for this composite term are just a convex combination of those for its constituent parts. A term Ω_comp is then specified in the (λ_1, λ_2) plane by a point P_comp = (λ_1^comp, λ_2^comp) ∈ Conv({(λ_{1,k}, λ_{2,k})}) (i.e. the point P_comp must lie on the thick black line bounding the unshaded region in Fig. S1).
Thus we know that Conv(Ω′) ⊆ Conv(Ω), and so any strategy writeable in the form S49 can be replaced by a strategy of the form S45 with identical parameters (λ_1, λ_2), and hence identical performance. Thus, we need only consider strategies of the form

Ω = α P⁺_ZZ + (1 − α)(1 − ρ ⊗ σ)‾,

where the bar denotes averaging over the symmetries of |ψ⟩ as above. We can now be explicit about the form of the above strategy. For Ω to accept |ψ⟩ with certainty, ρ ⊗ σ must annihilate |ψ⟩, and so we make the replacement ρ ⊗ σ = |τ⟩⟨τ|, where |τ⟩ is the most general pure product state that annihilates |ψ⟩. To be explicit about the form of the state |τ⟩, write a general two-qubit separable (product) state as

|τ⟩ = (cos φ|0⟩ + e^{iη} sin φ|1⟩) ⊗ (cos ξ|0⟩ + e^{iζ} sin ξ|1⟩), (S52)

where we take 0 ≤ φ, ξ ≤ π/2 without loss of generality. The constraint that this state annihilates |ψ⟩ = sin θ|00⟩ + cos θ|11⟩ is

cos φ cos ξ sin θ + e^{−i(η+ζ)} sin φ sin ξ cos θ = 0. (S53)

If either φ = 0 or ξ = 0, the constraint reduces to cos φ cos ξ sin θ = 0, implying that ξ = π/2 or φ = π/2, respectively. This yields the annihilating states |τ⟩ = |01⟩ and |τ⟩ = |10⟩, respectively. If φ, ξ ≠ 0, then from the imaginary part of Eq. S53 we find that e^{−i(η+ζ)} = −1. Then we can rearrange to give

tan φ tan ξ = tan θ. (S54)
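The family of annihilating product states can be sanity-checked numerically. A minimal sketch (not part of the original derivation; it assumes NumPy, with arbitrary φ values) confirms that any |τ⟩ obeying tan φ tan ξ = tan θ with relative phase e^{−i(η+ζ)} = −1 is orthogonal to |ψ⟩:

```python
import numpy as np

theta = 0.8
psi = np.array([np.sin(theta), 0, 0, np.cos(theta)], dtype=complex)

def tau(phi):
    # product state obeying tan(phi) tan(xi) = tan(theta), with the
    # relative phase e^{-i(eta+zeta)} = -1 realised by eta = pi, zeta = 0
    xi = np.arctan(np.tan(theta) / np.tan(phi))
    a = np.array([np.cos(phi), -np.sin(phi)])
    b = np.array([np.cos(xi), np.sin(xi)])
    return np.kron(a, b)

overlaps = [abs(np.vdot(psi, tau(phi))) for phi in (0.3, 0.7, 1.2)]
```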
Using this constraint and the identities

cos ξ = tan φ/√(tan²φ + tan²θ), sin ξ = tan θ/√(tan²φ + tan²θ),

we can eliminate ξ to yield

|τ⟩ = (cos φ|0⟩ − sin φ|1⟩) ⊗ (tan φ|0⟩ + tan θ|1⟩)/√(tan²φ + tan²θ), (S56)

where the phases have been fixed by taking η = π and ζ = 0. Note that, for 0 < θ < π/2, taking the limits φ → 0 and φ → π/2 we recover the cases |τ⟩ = |01⟩ and |τ⟩ = |10⟩, up to irrelevant global phases. Thus we can proceed without loss of generality by assuming that ρ ⊗ σ = |τ⟩⟨τ|, where |τ⟩ is given by Eq. S56. Averaging over the symmetries of |ψ⟩ outlined above then yields the following expression:

(1 − |τ⟩⟨τ|)‾ = 1 − [[c²φ c²ξ, 0, 0, −cφ cξ sφ sξ], [0, m, 0, 0], [0, 0, m, 0], [−cφ cξ sφ sξ, 0, 0, s²φ s²ξ]], with m = (c²φ s²ξ + s²φ c²ξ)/2, (S57)

using the shorthand s, c, t for sin, cos and tan, respectively, and with ξ fixed by Eq. S54. Given this explicit parameterisation we can extract the eigenvalue λ_1:

λ_1 = (t²θ + t⁴φ)/((1 + t²φ)(t²φ + t²θ)). (S58)

It can be shown by simple differentiation w.r.t. φ that, for fixed θ, this expression has a minimum at λ_1 = λ_LB. Also, this expression is a continuous function of φ and therefore can take any value up to its maximum (namely, 1). Hence a single trace-3 term is enough to achieve any point in the allowable convex hull in Fig. S1. For convenience we will denote tan²φ = P and tan²θ = T, for 0 ≤ P ≤ ∞ and 0 < T < ∞. The explicit form for the whole strategy is then

Ω = α P⁺_ZZ + (1 − α)(1 − |τ⟩⟨τ|)‾. (S59)

We now optimise over the two remaining free parameters {α, φ} (or alternatively, {α, P}) for fixed θ (or fixed T). This optimisation is rather straightforward from inspection (see Fig. S2), and the reader may wish to skip to the answer in Eq. S66. However, we include an analytic proof for the sake of completeness. We have shown that it suffices to consider the eigenvalues λ_1 and λ_2, given in this case by the expressions

λ_1(α, P, T) = 1 − (1 − α) P(1 + T)/((1 + P)(P + T)),
λ_2(α, P, T) = (1 − α)[1 − (T + P²)/(2(1 + P)(P + T))]. (S60)

The parameter q is given by the maximum of these two eigenvalues. Note that, if P = 0, then λ_1(α, 0, T) = 1, which implies that the adversary can pick a state that the verifier always accepts, and hence the strategy fails. Likewise, lim_{P→∞} λ_1(α, P, T) = 1. Thus we must restrict to the range 0 < P < ∞ to construct a viable strategy for the verifier. The quantity q is minimised for fixed T when the derivatives with respect to P and α vanish. First, we calculate the derivatives w.r.t. α:

∂_α λ_1 = P(1 + T)/((1 + P)(P + T)), ∂_α λ_2 = −1 + (T + P²)/(2(1 + P)(P + T)). (S61)

Given that P > 0 and T > 0, we have that, for any choice of T, ∂_α λ_1 > 0 and ∂_α λ_2 < 0. Thus, one of three cases can occur: (a) for a given choice of T and P, the lines given by λ_1 and λ_2 intersect in the range 0 ≤ α ≤ 1, and hence there is a valid α such that q is minimised when λ_1 = λ_2; (b) for a given choice of T and P, λ_1 > λ_2 in the range 0 ≤ α ≤ 1, and hence q is minimised when α = 0; (c) for a given choice of T and P, λ_1 < λ_2 in the range 0 ≤ α ≤ 1, and hence q is minimised when α = 1. However, we note that this final case cannot occur; it suffices to check that λ_1(α = 1) > λ_2(α = 1), and from the expressions in S60 we have that λ_1(α = 1) = 1 and λ_2(α = 1) = 0. As a visual aid for the remaining two cases, see Fig. S2. In case (a), solving λ_1 = λ_2 for α gives

q = (T + P² + 2P(1 + T))/(T + P² + 4P(1 + T)). (S62)

In case (b), we have that

q = λ_1(0, P, T) = (T + P²)/((1 + P)(P + T)). (S63)

We must also minimise w.r.t. φ; however, we can safely minimise w.r.t. P instead, as ∂_φ P > 0 (unless φ = 0, but in this case q = 1 and the strategy fails). In case (b), we have

∂q/∂P = (P² − T)(1 + T)/((1 + P)²(P + T)²). (S64)
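The optimisation can be cross-checked by brute force. A minimal sketch (not part of the original derivation; it assumes NumPy, with an arbitrary grid resolution) minimises q = max(λ₁, λ₂) over (α, P) numerically and recovers the stationary point P = √T together with the optimal error probability q_opt = (2 + sin 2θ)/(4 + sin 2θ) derived below:

```python
import numpy as np

theta = 0.6
T = np.tan(theta) ** 2

# grid over the two free parameters
alphas = np.linspace(0.0, 1.0, 801)
Ps = np.linspace(0.01, 5.0, 2000)
A, P = np.meshgrid(alphas, Ps, indexing="ij")

lam1 = 1 - (1 - A) * P * (1 + T) / ((1 + P) * (P + T))
lam2 = (1 - A) * (1 - (T + P**2) / (2 * (1 + P) * (P + T)))
q = np.maximum(lam1, lam2)

i, j = np.unravel_index(np.argmin(q), q.shape)
q_num, P_num = q[i, j], Ps[j]

P_opt = np.sqrt(T)   # stationary point P = sqrt(T)
q_opt = (2 + np.sin(2 * theta)) / (4 + np.sin(2 * theta))
```

The grid minimum cannot beat the analytic optimum, so q_num approaches q_opt from above as the grid is refined.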
In this case, consider the two points implicitly defined by the constraint λ_1(0, P, T) = λ_2(0, P, T) (drawn as the black points in Fig. S2). Denote these points f_±(T). It can be readily checked that in case (b), ∂_P q < 0 for any q < f_−(T), and ∂_P q > 0 for any q > f_+(T). Thus the minimum w.r.t. P must occur when λ_1(0, P, T) = λ_2(0, P, T), and hence we can restrict our attention to case (a) (note Fig. S2). In this case, ∂_P q becomes

∂q/∂P = −2(1 + T)(T − P²)/[T + 4PT + P(4 + P)]² = 0, (S65)

which implies that P = √T. Substituting in the optimal choices for the parameters {α, P} and re-expressing solely in terms of θ gives the optimal strategy

Ω_opt = [(2 − sin(2θ))/(4 + sin(2θ))] P⁺_ZZ + [2(1 + sin(2θ))/(4 + sin(2θ))] Ω_3^opt, (S66)

where Ω_3^opt is the averaged trace-3 term (1 − |τ⟩⟨τ|)‾ evaluated at P = √T, i.e. at tan φ = √(tan θ). (S67) This strategy accepts an orthogonal state with probability

q_opt = (2 + sin(2θ))/(4 + sin(2θ)), (S68)

implying that the number of measurements needed to verify to within accuracy ε and with statistical power 1 − δ under this test is

n ≥ ln δ⁻¹/ln[(1 − (1 − q_opt)ε)⁻¹] ≈ (2 + sin θ cos θ) ε⁻¹ ln δ⁻¹, (S69)

which is in agreement with the scaling previously derived in Eq. S1. The final step is to show that the operator Ω_3^opt can be decomposed into a small set of locally implementable, projective measurements. We can do so with a strategy involving only three terms:

Ω_3^opt = (1/3) Σ_{k=1}^{3} (1 − |φ_k⟩⟨φ_k|),

where the set of separable states {|φ_k⟩} is the following:

|φ_k⟩ = (cos φ|0⟩ − e^{2πik/3} sin φ|1⟩) ⊗ (cos φ|0⟩ + e^{−2πik/3} sin φ|1⟩), k = 1, 2, 3, with tan φ = √(tan θ).

These are the limiting cases of the scaling of n with ε: in the worst case, n scales quadratically in ε⁻¹; however, for any strategy where the state |ψ⟩ to be tested is accepted with certainty, only a total number of measurements linear in ε⁻¹ is required. Thus, asymptotically, a strategy where p = 1 is always favourable (i.e. gives a quadratic improvement in scaling with ε⁻¹) for any ε > 0.
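The sample-count claim can be made concrete. A minimal sketch (not part of the original derivation; it assumes NumPy, with arbitrary values of θ, ε and δ) compares the exact number of copies obtained from the worst-case pass probability (1 − (1 − q_opt)ε)ⁿ ≤ δ with the first-order approximation n ≈ (2 + sin θ cos θ) ε⁻¹ ln δ⁻¹:

```python
import numpy as np

theta, eps, delta = 0.6, 1e-3, 1e-6
q_opt = (2 + np.sin(2 * theta)) / (4 + np.sin(2 * theta))

# a state with fidelity at most 1 - eps passes one round with probability
# at most 1 - (1 - q_opt) * eps, so n rounds give a false-acceptance
# probability of at most (1 - (1 - q_opt) * eps) ** n <= delta
n_exact = np.log(1 / delta) / -np.log(1 - (1 - q_opt) * eps)
n_approx = (2 + np.sin(theta) * np.cos(theta)) / eps * np.log(1 / delta)
rel_err = abs(n_exact - n_approx) / n_approx
```

The approximation uses the identity 1/(1 − q_opt) = (4 + sin 2θ)/2 = 2 + sin θ cos θ, and overestimates n only by a factor 1 + O(ε).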
Appendix A: Quantum state verification

for some ε̄ ≥ ε chosen by the adversary, to be optimised later.