Contracting projected entangled pair states is average-case hard

An accurate calculation of the properties of quantum many-body systems is one of the most important yet intricate challenges of modern physics and computer science. In recent years, the tensor network ansatz has established itself as one of the most promising approaches, enabling strikingly efficient simulation of static properties of one-dimensional systems and a wealth of numerical applications in condensed matter theory. In higher dimensions, however, a connection to the field of computational complexity theory has shown that accurately computing the normalization of the two-dimensional tensor networks called projected entangled pair states (PEPS) is #P-complete. Therefore, an efficient algorithm for PEPS contraction would make it possible to solve exceedingly difficult combinatorial counting problems, which is considered highly unlikely. Given the importance of understanding two- and three-dimensional systems, the question remains: Are the known hard constructions typical of states relevant for quantum many-body systems? In this work, we show that an accurate evaluation of the normalization or of expectation values of PEPS is as hard for typical instances as for the special configurations of highest computational hardness. We discuss the structural property of average-case hardness in relation to current research on efficient algorithms for tensor network contraction, hinting at a wealth of possible further insights into the average-case hardness of important problems in quantum many-body theory.

Determining the properties of quantum many-body systems is of paramount importance to our efforts to understand the conductance and thermodynamics of solid-state materials [1,2], to design new sensors and novel quantum technologies [3], and to infer nuclear processes in stars or the early universe [4,5]. However, it is often not possible to find degrees of freedom enabling a concise description of a given system in terms of an effective model featuring essentially no interactions. In such a case there is usually no easy way out but to numerically calculate the observables of interest from a Hamiltonian description [6][7][8][9][10][11][12]. Here, however, we face a particular challenge, namely that the state space of quantum many-body systems demands a number of parameters that grows exponentially with the number of constituents of the system. Then even storing the state of the system on a computer becomes impossible, and hence one seeks efficient variational families of states. Tensor networks are a prime example of such an ansatz class [10,[13][14][15][16][17]. Despite their spectacular success in one dimension [18][19][20][21][22][23][24][25][26][27][28][29] as so-called matrix-product states [14,20,30], the most natural tensor network ansatz in two dimensions, called projected entangled pair states (PEPS) [31], turned out to be burdened by a peculiar difficulty: even calculating the normalization of a PEPS is computationally intractable, as has been shown by Schuch et al. [32].
More precisely, the normalization or the evaluation of a local expectation value within the PEPS ansatz class is a computational task which is complete for the complexity class #P, i.e., it is as hard as any other problem in this class [33][34][35]. A paradigmatic #P problem consists in counting the solutions of the traveling salesman problem, which is an optimization problem complete for the class NP. Intuitively, counting the solutions to a hard problem can only be harder. Within the current state of knowledge in computer science, the optimal runtime for NP-complete problems is unknown. However, the exponential-time hypothesis [36] conjectures that for any algorithm attempting to solve these problems there exist instances demanding an exponential runtime.
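To make the counting flavour of #P concrete, here is a minimal brute-force counter for the satisfying assignments of a Boolean formula; the formula below is an arbitrary illustrative example, and the point is the exhaustive sweep over all assignments:

```python
from itertools import product

def count_sat(formula, n_vars):
    # Exhaustively sweep all 2^n_vars assignments -- the exponential cost
    # of this sweep is exactly what makes #P-style counting hard.
    return sum(1 for bits in product([False, True], repeat=n_vars)
               if formula(bits))

# Example formula: (x1 OR x2) AND (NOT x1 OR x3).
f = lambda b: (b[0] or b[1]) and ((not b[0]) or b[2])
print(count_sat(f, 3))  # 4 of the 8 assignments satisfy the formula
```

Deciding whether at least one satisfying assignment exists is the NP-complete problem; returning the count itself is the #P version.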
Physically, one can invoke the Church-Turing-Deutsch principle [37], which interprets computations as physical processes. For example, NP has been established to correspond to the cooling of spin glasses [38]. These materials are known to sometimes take an extremely long time to cool down. On the other hand, a great many solid-state materials seem to cool down much faster. Indeed, insights in computer science suggest that the hardness of NP-complete problems lies in a few tough instances with particularly rugged energy landscapes. Phenomena like this are described in the framework of average-case complexity. While NP-complete problems are unlikely to be hard on average [39], average-case hard problems are ubiquitous in the class #P. Recently, first examples directly relevant to demonstrating a computational separation between classical and quantum devices have been pointed out [40,41].
There are several approaches to a rigorous theory of average-case complexity. Arguably the most natural is random self-reducibility: the idea is that a machine powerful enough to solve, say, three quarters of the instances could be used to solve all instances. Under such a structure it becomes implausible to find heuristic algorithms that solve a significant fraction of the instances, as self-reducibility would imply efficiency even for those instances that are particularly hard. Note that #P-hardness by itself, while a very strong statement, does not preclude the existence of efficient practical algorithms capable of solving the relevant instances.
In this work, we provide strong complexity-theoretic indications that the latter is not possible for generic PEPS, due to a random self-reducibility structure that we uncover. This extends the worst-case #P-hardness result [32] to the average case and constitutes an even more challenging obstruction to overcome. Technically, we make extensive use of the recent insightful work by Bouland et al. [41], where average-case hardness has been established in the context of quantum circuits, and we also employ some of the results established by Aaronson and Arkhipov [40].
In certain special instances fast algorithms might still be feasible. For example, it is known that matrix-product states admit a polynomial-time deterministic contraction algorithm [42]. Even in two dimensions, this can happen under strong physical assumptions forcing the problem to admit a local structure [43,44]. Additionally, for certain subclasses some heuristic algorithms (see Refs. [45,64] for reviews) yield results of practical importance [65][66][67][68][69][70][71][72][73][74]. Our average-case hardness result, however, suggests that these approaches could break down even for relevant PEPS instances, as otherwise difficult computational problems would admit (quasi-)polynomial algorithms.
Physically, for disordered systems one would expect any accurate ground-state approximation by a PEPS to inherit the randomness of the Hamiltonian [75]. Hence, in this setting, we provide evidence of intractability. Oftentimes, however, further physical assumptions are justified: while completely generic PEPS are relevant for the study of strongly disordered systems, in many practically meaningful settings (in particular in the study of topological order) the relevant PEPS are translation-invariant. Remarkably, a worst-to-average-case reduction as described in this letter works just as well for translation-invariant systems, but we are unaware of a worst-case hardness result for such systems.
Projected entangled pair states. Here we recall the definition of PEPS [76] and review the computational problem from Ref. [32] concerning the contraction of PEPS. We consider a family of graphs G = (V, E) with |V| = N. Every vertex v stands for a local spin system described by a Hilbert space $\mathbb{C}^d$, so that the total Hilbert space is $\mathcal{H} = (\mathbb{C}^d)^{\otimes N}$. In the projective construction of PEPS one thinks of every edge $e \in E$ as a maximally entangled state $\sum_{i=1}^{D} |i\rangle|i\rangle$ shared between two virtual $D$-dimensional spin systems. A specific PEPS is then described by linear operators $P^{[v]}: \mathbb{C}^D \otimes \cdots \otimes \mathbb{C}^D \to \mathbb{C}^d$. It is defined as the state vector in $\mathcal{H}$ resulting from the application of all $P^{[v]}$ for all $v \in V$. Note that the PEPS obtained in this way is not necessarily normalized. The virtual dimension is assumed to satisfy D = poly(N) and is called the bond dimension. In our discussion, it will be crucial to discriminate between the PEPS, which is a state vector in $\mathcal{H}$, and its specification $(P^{[v]})_v$. We will refer to the latter as the PEPS-data. A PEPS is called translation-invariant if the local tensors satisfy $P^{[v]} = P^{[w]} = P$ for all $v, w \in V$. These states have already proven to be immensely useful in condensed matter research, but their full regime of applicability is still open. Here, we assume open boundary conditions, but our results carry over to the periodic case too.
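As a concrete illustration of the construction, the following sketch builds random PEPS-data on a 2×2 open-boundary grid (every corner vertex has two virtual legs) and contracts the maximally entangled bonds explicitly; the dimensions and the Gaussian ensemble are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 2, 3  # physical and bond dimension (illustrative values)

# PEPS-data: one (d, D, D) tensor per corner vertex of the 2x2 grid,
# with i.i.d. complex Gaussian entries as in the generic ensemble.
P = [rng.normal(size=(d, D, D)) + 1j * rng.normal(size=(d, D, D))
     for _ in range(4)]

# Each edge carries a maximally entangled pair sum_i |i>|i>, which simply
# identifies the two virtual indices meeting at that edge:
# a = edge(0,1), b = edge(2,3), c = edge(0,2), d = edge(1,3).
psi = np.einsum('pac,qad,rbc,sbd->pqrs', P[0], P[1], P[2], P[3])

norm_sq = np.vdot(psi, psi).real  # <psi|psi>, the output of Problem 1
print(norm_sq > 0)  # True for generic data
```

For an N-vertex lattice this brute-force contraction scales exponentially; the point of the hardness results discussed here is that no generic shortcut is expected.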
PEPS evaluation. It strikes one as a tremendous advantage that PEPS are described by only polynomially many data. However, the physical problem we want to tackle remains notoriously difficult in that the contraction of PEPS is computationally hard. Contraction is needed for obtaining physical quantities of interest such as expectation values of local observables. Specifically, the following computational task is the essential ingredient of PEPS contraction algorithms: Problem 1 (PEPS-contraction). Input: A graph G and corresponding finite PEPS-data $(P^{[v]})_v$ describing an unnormalized state $|\psi\rangle$, with bond dimension D = poly(N). Output: $\langle\psi|\psi\rangle$.
It is one of the key insights in Ref. [32] that this problem is in fact #P-complete for the case that G is a square lattice.
In the following, we recall the arguments leading to this observation. The construction uses measurement-based quantum computing [77][78][79]. Measurement-based quantum computing performs a computation by initializing the cluster state on a square lattice and successively applying local sharp (projective) measurements to the individual qubits. This is a universal model of quantum computation, and we can use it to encode any quantum circuit in a PEPS with polynomially bounded bond dimension. Notice first that the cluster state is a PEPS with bond dimension D = 2. However, the outcome of the computation performed by the measurements depends on the random measurement outcomes. This is dealt with by correcting the result with Pauli operators conditioned on those outcomes. The PEPS encoding the quantum circuit is now obtained by applying an additional projector $|a\rangle\langle a|$, where a is the outcome string that does not give rise to a non-trivial Pauli correction. Hardness follows from encoding the problem of counting the solutions of a Boolean formula, a canonical #P-complete task, into the overlap of such a PEPS.

Main result. Let us consider a generic PEPS in the sense that all entries of the tensors $P^{[v]}$ are drawn independently at random from a finite-precision approximation of the complex normal distribution centered around 0 and with standard deviation $\sigma$. We will denote this Gaussian product distribution by $\mathcal{P} := \mathcal{N}_{\mathbb{C}}(0, \sigma)^{\otimes D^4 d N}$. Our main result is the following theorem:

Theorem 1 (Average-case hardness of PEPS-contraction). Suppose there exists a machine O that solves Problem 1 exactly for square lattices in polynomial time with probability $\frac{3}{4} + \frac{1}{\mathrm{poly}(N)}$ over the instance drawn from $\mathcal{P}$. Then there exists a machine O′ that solves every instance of Problem 1 in randomized polynomial time with exponentially high probability.

This rigorous statement can be interpreted in several intuitive ways. Firstly, it rules out the possibility that the computational hardness is hidden in a few particular intractable instances, as it says that one could use the algorithm O to construct an algorithm O′ that is efficient for all inputs. Colloquially, assuming that most instances are easy for a known heuristic O, the full problem would be equally easy.
Secondly, it is important to note that Problem 1 requires exact computation [32], but a different variant of Theorem 1 that we prove in the appendix shows the following: approximation to exponential precision is also intractable on average, albeit under stronger requirements on the algorithm O. Hence, structurally, we see that if #P-problems are non-trivial, then this cannot be due to very rare instances. Our choice of the probability distribution is similar to that of Ref. [40, Section 9.1], where the evaluation of the so-called permanent is considered, which is also a #P-complete computational problem. Therefore, Theorem 1 shows not only that both of these problems are in the same complexity class, but also that they have the same complexity-theoretic structure. Note that the result holds for arbitrary graphs as well, though the statement is trivial in one dimension [42].
Random self-reducibility. There are several precise mathematical candidates for a definition of average-case hardness. We find that PEPS-contraction is average-case hard in the same sense as canonical combinatorial problems [40,80]: both problems admit random self-reducibility. A problem is randomly self-reducible if the evaluation of any instance x can be reduced to the evaluation of random instances $y_1, \ldots, y_k$ with a success probability bounded independently of the input. We sketch how this is done for the permanent and for PEPS, giving the essential proof idea; see Ref. [41] for a particularly clear exposition in the context of quantum circuits. The complete argument can be found in Appendix A.
In a seminal result, Lipton [80] proved random self-reducibility for the evaluation of the permanent, a function that takes as input a square matrix and outputs a number. The permanent of a matrix $A \in \mathbb{C}^{n \times n}$ is defined as the 'determinant without signs',

$\mathrm{perm}(A) = \sum_{\pi \in S_n} \prod_{i=1}^{n} A_{i, \pi(i)},$

where $S_n$ is the symmetric group. However, very unlike the determinant, the permanent turns out to yield a difficult combinatorial problem: its evaluation has been proven to be #P-complete by Valiant [81]. The proof of random self-reducibility is rooted in the algebraic fact that the permanent defines a polynomial of degree n in the entries of its input matrix A.
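The definition just given translates directly into code; the following brute-force evaluator (O(n·n!), usable only for tiny n) makes the combinatorial character of the permanent explicit:

```python
import numpy as np
from itertools import permutations

def permanent(A):
    # Sum over all permutations, without the sign of the determinant --
    # the #P-complete quantity of Valiant's theorem [81].
    n = A.shape[0]
    return sum(np.prod([A[i, pi[i]] for i in range(n)])
               for pi in permutations(range(n)))

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(permanent(A))  # 1*4 + 2*3 = 10
```

For the determinant, Gaussian elimination removes the factorial cost; no analogous cancellation structure is known (or expected) for the permanent.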
More precisely, the strategy is to take any (hard) instance A that we want to compute, draw a uniformly random matrix B, and define

$E(t) := A + tB$

for $t \in \mathbb{R}$. Notice that E(t) is uniformly random for any fixed $t \neq 0$ because B is, even though E(t) and E(t′) are correlated. The permanent of these matrices is a polynomial $q(t) := \mathrm{perm}(E(t))$ of degree n. Even if the algorithm O fails to accurately output $\mathrm{perm}(A) = q(0)$, it will, by assumption, likely evaluate $q(t_i)$ correctly for most choices of $t_i$. The idea is then to infer q(0) from the values at $\{t_i\}$ via polynomial interpolation. We explain this step in more detail below in the setting of PEPS.

Sketch of proof for Theorem 1. Let us sketch how the worst-to-average-case reduction works for PEPS contractions. For a detailed and formal proof we refer to Appendix A. First, notice that for a given bond dimension D, the set of possible PEPS-data admits a canonical vector space structure, defined entry-wise by $(P^{[v]})_v + (Q^{[v]})_v := (P^{[v]} + Q^{[v]})_v$. Notice that already in this step it is crucial to discriminate between PEPS-data and PEPS, since this addition has very little to do with the addition of the corresponding state vectors. Intuitively, we scramble the individual tensors independently. Given a hard instance $(P^{[v]})_v$, we draw random Gaussian PEPS-data $(Q^{[v]})_v$ and define

$R(t)^{[v]} := (1 - t)\, Q^{[v]} + t\, P^{[v]}.$

This choice of scrambling is suitable for us because it allows us to deal with a subtlety arising from the fact that the PEPS-data $(R(t)^{[v]})_v$ is not Gauss-random even though $(Q^{[v]})_v$ is. This differs from the setting of Lipton [80], but has been worked out for boson sampling [40], where it was shown that the difference is immaterial for small t. This carries over to our case, as we discuss in Appendix A. We choose k = poly(N) sampling points $t_i \in [0, \varepsilon)$, where $\varepsilon$ is polynomially small. With these sampling points we perform polynomial interpolation.
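For the permanent, the whole reduction fits in a few lines. In this toy version (over the reals rather than a finite field, with the brute-force permanent standing in for the oracle O), the degree-n polynomial q(t) = perm(A + tB) is fixed by n+1 evaluations at nonzero points and interpolated back to q(0) = perm(A):

```python
import numpy as np
from itertools import permutations

def permanent(A):
    n = A.shape[0]
    return sum(np.prod([A[i, pi[i]] for i in range(n)])
               for pi in permutations(range(n)))

rng = np.random.default_rng(1)
n = 3
A = rng.integers(0, 5, size=(n, n)).astype(float)  # the "hard" instance
B = rng.normal(size=(n, n))                        # uniformizing mask

# E(t) = A + tB looks random for each fixed t != 0; q(t) = perm(E(t))
# is a degree-n polynomial, so n+1 oracle calls determine it completely.
ts = np.arange(1.0, n + 2.0)
qs = [permanent(A + t * B) for t in ts]   # stand-in for the oracle O

coeffs = np.polyfit(ts, qs, deg=n)        # exact fit: n+1 points, degree n
q0 = np.polyval(coeffs, 0.0)              # extrapolate back to t = 0
print(np.isclose(q0, permanent(A)))       # recovers perm(A)
```

Note that the oracle is never queried at the hard instance itself, only at the randomized points E(t_i); this is the essence of random self-reducibility.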
Let $|\psi(t_i)\rangle$ denote the PEPS corresponding to the data $(R(t_i)^{[v]})_v$. In analogy to the discussion of the permanent, we define the function $q(t) := \langle\psi(t)|\psi(t)\rangle$, which is a polynomial in t of degree r = 2N. For each sampling point, the machine O performs the exact contraction with probability $\frac{3}{4} + \frac{1}{\mathrm{poly}(N)}$. Using the Markov inequality, we obtain that out of the k sampling points, O outputs the correct value of the contraction q for at least $\frac{k+r}{2}$ of them with probability $\frac{1}{2} + \frac{1}{\mathrm{poly}(N)}$. Provided k > r, we can use polynomial interpolation to reconstruct the coefficients of the polynomial q, such that q(1) is the desired PEPS contraction value. This is achieved by the so-called Berlekamp-Welch algorithm, a result in computer science which outputs the coefficients of q in polynomial runtime even in the presence of faulty values. Thus, with a small computational overhead, we obtain q(1). Repeating this procedure and taking the majority vote, i.e., choosing the most frequent outcome, the probability of success can be amplified to $1 - 2^{-\mathrm{poly}(N)}$. We define this final outcome to be the output of the algorithm O′.
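The error-tolerant interpolation step can be made concrete. The sketch below implements the Berlekamp-Welch algorithm over a small prime field (a minimal didactic version, not the finite-precision variant needed in the appendix): given n = k + 2e + 1 evaluations of a degree-≤k polynomial of which up to e are corrupted, one solves a linear system for a polynomial Q and a monic error-locator E with Q(x_i) = y_i E(x_i), and recovers P = Q/E:

```python
def solve_mod(A, b, p):
    # One solution of A x = b over F_p by Gauss-Jordan elimination
    # (free variables set to zero; assumes the system is consistent).
    rows = [list(r) + [v] for r, v in zip(A, b)]
    n_rows, n_vars = len(rows), len(A[0])
    pivot_of, r = {}, 0
    for c in range(n_vars):
        piv = next((i for i in range(r, n_rows) if rows[i][c] % p), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        inv = pow(rows[r][c], p - 2, p)
        rows[r] = [v * inv % p for v in rows[r]]
        for i in range(n_rows):
            if i != r and rows[i][c] % p:
                f = rows[i][c]
                rows[i] = [(v - f * w) % p for v, w in zip(rows[i], rows[r])]
        pivot_of[c], r = r, r + 1
    return [rows[pivot_of[c]][-1] if c in pivot_of else 0
            for c in range(n_vars)]

def poly_div(Q, E, p):
    # Exact long division of coefficient lists (low to high) over F_p.
    Q, de = Q[:], len(E) - 1
    out = [0] * (len(Q) - de)
    inv_lead = pow(E[-1], p - 2, p)
    for i in range(len(out) - 1, -1, -1):
        c = Q[i + de] * inv_lead % p
        out[i] = c
        for j, ec in enumerate(E):
            Q[i + j] = (Q[i + j] - c * ec) % p
    return out

def berlekamp_welch(xs, ys, k, e, p):
    # Solve Q(x_i) = y_i * E(x_i) with deg(Q) <= k+e and monic deg(E) = e;
    # any solution satisfies Q = P * E, so P is recovered by division.
    n_q = k + e + 1
    A = [[pow(x, j, p) for j in range(n_q)]
         + [(-y * pow(x, j, p)) % p for j in range(e)]
         for x, y in zip(xs, ys)]
    b = [y * pow(x, e, p) % p for x, y in zip(xs, ys)]
    sol = solve_mod(A, b, p)
    return poly_div(sol[:n_q], sol[n_q:] + [1], p)

p = 101
P_true = [3, 1, 4]                       # P(x) = 3 + x + 4x^2 over F_101
xs = list(range(7))                      # k=2, e=2 -> k + 2e + 1 points
ys = [sum(c * pow(x, j, p) for j, c in enumerate(P_true)) % p for x in xs]
ys[1], ys[5] = (ys[1] + 17) % p, (ys[5] + 60) % p   # corrupt two values
print(berlekamp_welch(xs, ys, k=2, e=2, p=p))        # [3, 1, 4]
```

In the reduction, the y_i play the role of the oracle's outputs at the sampling points; as long as more than (k+r)/2 of them are correct, the polynomial q is recovered exactly.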
Translation invariance. In many physical applications, e.g. in solid-state materials or systems admitting topological order, the system of interest is translation-invariant. Hence, the PEPS-data should reflect this symmetry, and one would naturally set all local tensors to be equal. In this case we do not know the corresponding computational problem to be #P-hard; for example, the #P-hard instances in Ref. [32] are not translation-invariant. However, our worst-to-average-case reduction works just as well in this special case, simply by choosing the random PEPS-data translation-invariant as well, i.e., $Q^{[v]} = Q$ for all $v \in V$. The same argument and statement of the main theorem then go through. This leaves us with the following alternative: if the translation-invariant problem is hard for a complexity class C, then it follows that the problem is C-hard on average in the sense of our main theorem; in particular, if C = #P, then even the translation-invariant PEPS contraction problem would be average-case intractable. If, on the other hand, the problem is merely in P, then it is enough to find a heuristic for about $\frac{3}{4}$ of the inputs to obtain a full randomized algorithm. We are unaware of random self-reducibility results for complexity classes other than #P. We thus expect a dichotomy: either the translation-invariant problem is in P or it is #P-complete.

Evaluation precision. As far as we know, it is state of the art in computer science to prove random self-reducibility structures for problems under the promise that O works with at least exponential precision. In fact, we can extend our main theorem to this case too, at the cost of requiring O to succeed with probability $1 - \frac{1}{12N}$. The reason for this trade-off is that subtleties arise in the technical steps, where the Berlekamp-Welch algorithm has to be replaced with a noise-resistant method. In the bigger picture, however, it does not seem possible to extend the seminal idea of Lipton to an O working with only polynomial precision.
Intuitively, we interpolate around small $t_i$ and want to evaluate at t = 1. We consider it unlikely that it is possible to devise an extrapolation method which accurately outputs q(1) from polynomially precise values. However, if this turned out to be possible, e.g., through future results in computer science, then our worst-to-average-case reduction would work for the case of practical interest, i.e., polynomial precision of O. Related questions of precision relaxation are of interest in quantum information theory in the context of the search for quantum speed-ups. Here, certain precision relaxations are conjectured to be average-case hard as well [40,41].
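The obstruction to working with polynomial precision can be seen numerically: extrapolating a degree-r polynomial from equidistant samples in [0, ε] out to t = 1 amplifies any sample error by the factor Σ_j |L_j(1)| of the Lagrange basis polynomials, which grows rapidly with r. A small sanity check (not part of the proof):

```python
import numpy as np

def amplification(r, eps):
    # Worst-case noise amplification of Lagrange extrapolation to t = 1
    # from r+1 equidistant sample points in [0, eps].
    ts = np.linspace(0.0, eps, r + 1)
    amp = 0.0
    for j in range(r + 1):
        L_j_at_1 = np.prod([(1.0 - ts[m]) / (ts[j] - ts[m])
                            for m in range(r + 1) if m != j])
        amp += abs(L_j_at_1)
    return amp

for r in (5, 10, 20):
    print(r, amplification(r, eps=0.1))  # grows roughly like (1/eps)^r
```

Since the degree here is r = 2N, errors of inverse-polynomial size in the samples would be blown up exponentially at t = 1, which is why exponential precision of O is demanded.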
Expectation values. The computational Problem 1 is concerned with PEPS contractions, i.e., the quantity one computes is the norm of the respective PEPS. However, in most physical applications the quantities of interest are expectation values of a local observable $\hat A$, that is, $\langle \hat A \rangle = \langle\psi|\hat A|\psi\rangle / \langle\psi|\psi\rangle$. Notice that this problem and its unnormalized version have both been proven to be #P-complete in Ref. [32] as well. For any algorithm that uses PEPS normalization as an intermediate step, our main theorem is directly of interest and reflects the fundamental structure of the problem at hand. In the general case, we can prove a worst-to-average-case result for this quantity as well. It is easy to see that our discussion of PEPS contraction carries over to unnormalized expectation values, and we show that a close analogue of Theorem 1 holds for this quantity. The normalized expectation value is slightly more subtle in the following sense: the analogue of the function q is not a polynomial but a rational function $q/p$, where the degrees of both polynomials q and p are bounded by 2N. We can simply solve for the coefficients using sufficiently many sampling points. This, however, requires a stronger machine O. This result might be further improved by the use of more sophisticated algorithms for the reconstruction of rational functions.

Implications for practical tensor network algorithms. The results found here have interesting implications for the performance of PEPS contraction algorithms aimed at solving condensed-matter problems [10,14,15]. There are three insights that are important in this respect. Firstly, the results laid out here relate average-case to worst-case complexity.
As such, they apply to any tensor network contraction algorithm: the structure of random self-reducibility shows that if a given method O fails on less than a quarter of the instances, those can in principle also be treated with a small polynomial runtime overhead via our construction of the randomized algorithm O′ (and, for that matter, our results also pertain to algorithms in P). Secondly, it is known that PEPS contraction algorithms often work well in practice for reasonable condensed-matter systems [45,64], which may at first sight seem at odds with the results presented here and in Ref. [32]. Here, one has to acknowledge that many important problems have additional structure that may render the PEPS contraction feasible. Specifically, it was proven in Ref. [44] that local normalized expectation values of injective PEPS with uniformly gapped parent Hamiltonians can be evaluated in quasi-polynomial time, i.e., faster than conjectured by the exponential-time hypothesis. Following up on this observation, it seems conceivable that one can devise PEPS algorithms that provide ground states of systems in a trivial phase (possibly even with convergence proofs) by making use of techniques of quasi-adiabatic evolution [82,83], applying short circuits to product states as ground states of trivial parent Hamiltonians. Having said that, any such approach would require keeping track of ground states of entire families of Hamiltonians. Thirdly, most algorithms used in practice, in contrast, choose some initial condition for the PEPS, which is then iteratively refined via sweeps until good convergence to the ground state is encountered. In fact, in practice the PEPS-data are often initially chosen at random and then refined in sweeps by iteratively minimizing the energy evaluated from a local Hamiltonian. The results laid out here show that it is crucial to devise meaningful schemes for making reasonable choices of these initial conditions.
Indeed, our average-case hardness results for PEPS contraction indicate that one should be particularly cautious when choosing such initial states.
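Returning to the normalized expectation values discussed above, the coefficient-solving step for the rational function q/p is itself elementary: writing f = q/p with the normalization p(0) = 1 (an illustrative convention for this toy version), the relation q(t_i) − f(t_i) p(t_i) = 0 is linear in the unknown coefficients, so 2r+1 samples suffice for degrees ≤ r. A toy instance with arbitrary illustrative q and p:

```python
import numpy as np

r = 2
q_true = np.array([1.0, 2.0, 3.0])    # q(t) = 1 + 2t + 3t^2
p_true = np.array([1.0, 0.5, 0.25])   # p(t) = 1 + t/2 + t^2/4, p(0) = 1

ts = np.linspace(0.1, 1.0, 2 * r + 1)            # 2r+1 sample points
fs = np.polyval(q_true[::-1], ts) / np.polyval(p_true[::-1], ts)

# Unknowns (q_0..q_r, p_1..p_r): each sample gives the linear equation
#   sum_j q_j t^j - f(t) * sum_{j>=1} p_j t^j = f(t).
A = np.array([[*(t ** np.arange(r + 1)),
               *(-f * t ** np.arange(1, r + 1))] for t, f in zip(ts, fs)])
sol = np.linalg.solve(A, fs)
print(np.allclose(sol[:r + 1], q_true), np.allclose(sol[r + 1:], p_true[1:]))
```

Uniqueness follows from the same root-counting argument as for Berlekamp-Welch: two degree-≤r representations agreeing at 2r+1 points define the same rational function.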
Outlook. In this work we presented the first average-case complexity result in the context of quantum many-body systems, specifically tensor network states. Our main result is structural: we prove that the hard instances of PEPS-contraction make up a significant fraction of all instances. Physically, this means that the contraction of PEPS with random tensors is likely to be computationally hard to evaluate accurately. Conceptually, we establish structural similarities to the evaluation of the permanent. Our results hold under the assumption of exact or exponentially precise evaluation. In Appendix C, we stress that also on physical grounds demanding exponential precision is very reasonable. However, in a physical context it is often sufficient to evaluate observables up to polynomial precision. The major open problem is thus to extend the presented analysis to this case; for PEPS contractions, establishing such a result would have direct practical implications. Furthermore, we are not aware of any #P-completeness result for translation-invariant PEPS. The general open question thus remains: what are the instances of PEPS for which known contraction methods have convergence guarantees? It is our hope that further research at the interface between computer science and quantum many-body physics will provide exciting insights into this question.
As explained in the main text, our result comes in different flavors. Here, we present our results in full technical detail. We formalize the problem of evaluating expectation values of local observables in the following two problems:

Problem 2 (Unnormalized expectation values). Input: A graph G and PEPS-data $(P^{[v]})_v$ as in Problem 1, together with a local observable $\hat A$. Output: $\langle\psi|\hat A|\psi\rangle$.

Problem 3 (Normalized expectation values). Input: As in Problem 2. Output: $\langle\psi|\hat A|\psi\rangle / \langle\psi|\psi\rangle$.

We prove all results for two canonical choices of the distribution. The first is to draw entry-wise from a uniform distribution centered around zero and truncated at some chosen threshold $\sigma$, which we will denote by $\mathcal{U} = \mathcal{U}_{\mathbb{C}}(0, \sigma)$, with product distribution $\mathcal{P}_1 := \mathcal{U}^{\otimes D^4 d N}$. Almost equivalently, we can draw from a Gaussian distribution; we denote this Gaussian product distribution in this appendix by $\mathcal{P}_2 := \mathcal{G}^{\otimes D^4 d N} := \mathcal{N}_{\mathbb{C}}(0, \sigma)^{\otimes D^4 d N}$. This is reminiscent of the discussion of the permanent with complex entries in Ref. [40, Section 9.1]. More precisely, we prove the following technical theorems:

Theorem 2 (Worst-to-average reduction). Suppose there exists a machine O that solves Problem 1 or Problem 2 within precision $2^{-\mathrm{poly}(N)}$ for square lattices in polynomial time with probability $1 - \frac{1}{12N}$ over the instance drawn from $\mathcal{P}_i$ for i = 1, 2. Then, there exists a machine O′ that solves any instance of the respective problem within precision $2^{-\mathrm{poly}(N)}$ in randomized polynomial time with exponentially high probability.
We will prove this theorem first, as it requires the most technical work. If we do not relax to exponential precision but require exact arithmetic evaluation by the machine O, we obtain a much stronger worst-to-average reduction:

Theorem 3 (Stronger worst-to-average reduction). Suppose there exists a machine O that solves Problem 1 or 2 exactly for square lattices in polynomial time with probability $\frac{3}{4} + \frac{1}{\mathrm{poly}(N)}$ over the instance drawn from $\mathcal{P}_i$, with i = 1, 2. Then, there exists a machine that solves any instance of the respective problem in randomized polynomial time with exponentially high probability.
Notice that Theorem 1 is a special case of the above: namely, it corresponds to the choice of Problem 1 and the probability distribution $\mathcal{P} = \mathcal{P}_2$. Finally, again requiring exact evaluation, we obtain a worst-to-average reduction for the normalized expectation value problem as well:

Theorem 4 (Normalized expectation values). Suppose there exists a machine O that solves Problem 3 exactly for square lattices in polynomial time with probability $1 - \frac{1}{24N}$ over the instance drawn from $\mathcal{P}_i$, with i = 1, 2. Then, there exists a machine that solves any instance of the problem in randomized polynomial time with exponentially high probability.

Proof of Theorem 2
Before we turn to presenting the proof, we state a lemma which resembles Lemma 48 in Ref. [40]. Let us denote by $\mathcal{N}_{\mathbb{C}}(\mu, \sigma)$ the normal distribution over the complex numbers with mean $\mu$ and standard deviation $\sigma$. The lemma establishes that products of normal distributions with small means are close to a product of standard normal distributions with zero mean.

Lemma 5 (Gaussian distributions). For $v \in \mathbb{C}^M$, the product distributions satisfy

$\big\| \mathcal{N}_{\mathbb{C}}(v_1, 1) \times \cdots \times \mathcal{N}_{\mathbb{C}}(v_M, 1) - \mathcal{N}_{\mathbb{C}}(0, 1)^{\times M} \big\| \le 2 \sum_{i=1}^{M} |v_i|,$

where $\|\cdot\|$ denotes the total variation distance. The same result holds if we substitute $\mathcal{N}$ by $\mathcal{U}$.
Proof of Lemma 5. We prove the lemma for the Gaussian case; the uniform case is obtained similarly. Using the triangle inequality for the total variation distance, the distance between the product distributions is bounded by the sum of the distances of the individual factors. With the relation between the total variation distance and the $L_1$-norm, each factor contributes the $L_1$-distance between a shifted and a centered Gaussian, which is bounded by a further application of the triangle inequality and a straightforward calculation.
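The mechanism behind Lemma 5 can be checked numerically in one real dimension: the total variation distance between N(μ, 1) and N(0, 1) vanishes linearly in the mean shift μ. Here we compare a direct numerical integration against the closed form 2Φ(μ/2) − 1 = erf(μ/(2√2)) valid for this special case:

```python
import numpy as np
from math import erf, sqrt

def tv_shifted_gaussian(mu):
    # Numerical 0.5 * integral |phi(x - mu) - phi(x)| dx on a fine grid.
    x = np.linspace(-12.0, 12.0, 200001)
    dx = x[1] - x[0]
    phi = lambda y: np.exp(-y**2 / 2) / np.sqrt(2 * np.pi)
    return 0.5 * np.sum(np.abs(phi(x - mu) - phi(x))) * dx

for mu in (0.2, 0.1, 0.05):
    exact = erf(mu / (2 * sqrt(2)))   # closed form in one real dimension
    print(mu, tv_shifted_gaussian(mu), exact)
```

Per-entry mean shifts of order ε therefore cost only O(ε) in total variation, and the union over all D^4 d N entries gives the polynomially small bound used in (A10).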
Proof of Theorem 2. For simplicity, we set $\sigma = 1$. Furthermore, we restrict to the case of Problem 1, as the proof for Problem 2 is completely analogous. Consider Problem 1 and a hard instance defined by the data $(P^{[v]})_v$, e.g. the encoding of a Boolean function as in Ref. [32]. It suffices to consider a $(P^{[v]})_v$ with all matrix entries bounded by 1, as all instances constructed in Ref. [32] admit this form. Furthermore, we draw PEPS-data $(Q^{[v]})_v$ entry-wise from the standard Gaussian distribution, $(Q^{[v]})_v \sim \mathcal{G}^{\otimes D^4 d N}$. Analogously to Lipton [80], we define

$R(t)^{[v]} := (1 - t)\, Q^{[v]} + t\, P^{[v]}.$

Now, let $|\psi(t)\rangle$ denote the PEPS corresponding to this data. In analogy to the discussion of the permanent, we define the function $q(t) := \langle\psi(t)|\psi(t)\rangle$. Notice that this function is a polynomial in t of degree r = 2N, which scales polynomially in the input length. Before we can apply Theorem 8, we have to deal with the fact that the $(R(t)^{[v]})_v$ are not distributed according to the Gaussian distribution. We will need only very small t, bounded by some $\varepsilon > 0$, such that the difference between the respective distributions is immaterial. Specifically, the $(R(t)^{[v]})_v$ tensors are distributed according to a product distribution $\mathcal{D}_t$ of Gaussians whose means are the entries of $t P^{[v]}$. Thus, from a triangle inequality and Lemma 5, we obtain

$\| \mathcal{D}_t - \mathcal{G}^{\otimes D^4 d N} \| \le 4 D^4 d N \varepsilon + 2 D^4 d N \varepsilon = 6 D^4 d N \varepsilon \qquad (A10)$

for $|t| \le \varepsilon$, by identifying $\mathbb{C}$ with $\mathbb{R}^2$. It will suffice to set $\varepsilon$ to a sufficiently small inverse polynomial in N and $\delta := \frac{1}{12N}$; this makes the total variation distance polynomially small. Let $\{t_i\}_{i \in [r+1]}$ be the set of r + 1 equidistant points in $[0, \varepsilon]$. We now use the assumption from the theorem's statement that the machine O works for a $1 - \delta$ fraction of the instances drawn from $\mathcal{G}^{\otimes D^4 d N}$. Using (A10), we obtain for the probability of the machine evaluating accurately at a point $t_i$, up to precision $2^{-\mathrm{poly}(N)}$,

$\Pr[\text{success at } t_i] \ge 1 - \delta - 6 D^4 d N \varepsilon,$

where we used that the total variation distance is an upper bound on the difference in probability that the two distributions can assign to any event.
Finally, we obtain for the probability of r + 1 consecutive successful evaluations

$\Pr[\text{success at all } t_i] \ge 1 - (r + 1)\big(\delta + 6 D^4 d N \varepsilon\big)$

by Bernoulli's inequality.
Here, we abbreviated $O\big((R(t_i)^{[v]})_v\big)$ by $O(t_i)$. Given the evaluated values at the $t_i$, we can solve for the coefficients and obtain a polynomial $\tilde q$ which satisfies $|\tilde q(t_i) - q(t_i)| \le 2^{-\mathrm{poly}(N)}$ for all $t_i$ with high probability. The machine O′ then evaluates $\tilde q(1)$, which is an estimate for $q(1) = \langle\psi|\psi\rangle$.
To bound the error of this estimate we will use two powerful results: the first on noisy extrapolation and the second on noisy interpolation of polynomials. A version of the following lemma was proven in Ref. [85]; see also Ref. [40, Section 9.1].

Lemma 6 (Paturi). Let p be a polynomial of degree r and suppose that $|p(x)| \le \delta$ for all $|x| \le \varepsilon$. Then $|p(1)| \le \delta\, e^{2r(1 + 1/\varepsilon)}$.
The following theorem was proven by Rakhmanov [84].
Theorem 7 (Rakhmanov). Let $E_k$ denote the set of k equidistant points in (−1, 1). Then, for a polynomial $p: \mathbb{R} \to \mathbb{R}$ of degree r such that $|p(y)| \le 1$ for all $y \in E_k$, it holds that

$|p(x)| \le C \log \frac{\pi}{\arctan\big(\frac{k}{r}\sqrt{R^2 - x^2}\big)} \qquad (A14)$

for

$|x| \le R := 1 - \frac{r^2}{k^2}. \qquad (A15)$

We will use the second result to bound the error between the sampling points and then use the first result to bound the error of $\tilde q(1)$. For the proof, we shift the polynomial such that the interval of interest is centered around the origin; furthermore, we can straightforwardly pass to a smaller interval. With k = r + 1 we obtain

$R = 1 - \frac{r^2}{(r+1)^2}.$

Restricting to the strict subinterval $[-\frac{R}{2}, \frac{R}{2}]$, we can apply Theorem 7 and obtain that $|\tilde q(t) - q(t)| \le 2^{-\mathrm{poly}(N)}$, up to a constant factor, for all $t \in [-\frac{R}{2}, \frac{R}{2}]$. Finally, we can apply Lemma 6, which yields the desired bound on the difference between the estimate $\tilde q(1)$ and the actual value q(1),

$|\tilde q(1) - q(1)| \le 2^{-\mathrm{poly}(N)},$

for a sufficiently large polynomial. Finally, we remark that the success probability can be amplified exponentially by repeating the above procedure polynomially many times, by the Chernoff bound.
Theorem 8 (Lipton [80]). The ability to compute $\mathrm{perm}(M)$ with probability $\ge \frac{3}{4} + \frac{1}{\mathrm{poly}(n)}$ for a uniformly random matrix $M \in \mathbb{F}_q^{n \times n}$ implies the capacity to determine the permanent of any given matrix A with probability $1 - \delta$ for an exponentially small $\delta$.
A variant of this theorem for the field $\mathbb{C}$ was proven in Ref. [40, Section 9.1].