Strong Resilience of Topological Codes to Depolarization

The inevitable presence of decoherence effects in systems suitable for quantum computation necessitates effective error-correction schemes to protect information from noise. We compute the stability of the toric code to depolarization by mapping the quantum problem onto a classical disordered eight-vertex Ising model. By studying the stability of the related ferromagnetic phase both via large-scale Monte Carlo simulations and via the duality method, we are able to demonstrate an increased error threshold of 18.9(3)% when noise correlations are taken into account. Remarkably, this agrees within error bars with the result for a different class of codes, topological color codes, where the mapping yields interesting new types of interacting eight-vertex models.


I. INTRODUCTION
Moore's law has accurately described the speedup of computer technologies for half a century, yet this speedup is slowly coming to an end due to transistor limitations. A promising alternative is given by quantum computers. However, the qubit manipulations required for information processing and communication are prone to errors, because qubits are more sensitive to noise than their classical counterparts. Consequently, protecting qubits has become an issue of paramount importance for the success of quantum computation. The effects of single-qubit operations can be decomposed into three processes: bit flips, phase flips, and a combination thereof, represented by the three Pauli matrices σ^x, σ^z, and σ^y, respectively. This is in contrast to classical bits, which can only suffer from a single type of error, namely bit flips.
More generally, the notion of a noisy channel is instrumental in characterizing the disturbing effects on physical qubits. Such a quantum channel can be described by specifying the probability (or "qubit error rate") p for each of the aforementioned noise types. For instance, if only σ^x occurs, we have a bit-flip channel. Here we are interested in channels of the form

$$\rho \;\mapsto\; (1-p)\,\rho \;+\; \sum_{w=x,y,z} p_w\, \sigma^w \rho\, \sigma^w, \qquad (1)$$

where the density matrix ρ fully describes the quantum state and the probability for each type of error to occur is p_w ∈ [0, 1], with p := p_x + p_y + p_z. The depolarizing channel exhibits equal probability p_w = p/3 for each error type to arise. As such, the depolarizing channel is more general than the bit-flip channel, because it allows for the unified, correlated effect of all three basic types of errors. The implications of this error model for the performance of a quantum code remain an open problem. In addition, the depolarizing channel plays a fundamental role in quantum-information protocols where noise has to be taken into account, including quantum cryptography [1,2], quantum distillation of entanglement [3], and even quantum teleportation [4]. Experimentally, controllable depolarization has been realized recently in photonic quantum-information channels [5] with a Ti:sapphire pulsed laser and nonlinear crystals, as well as with 2-qubit Bell states [6]. Here we compute the effects of depolarization on a set of entangled qubits.
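As an illustration, the single-qubit Pauli channel of Eq. (1) acts on a density matrix as follows (a minimal numpy sketch; the function names are our own):

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_channel(rho, px, py, pz):
    """Apply the single-qubit Pauli channel of Eq. (1):
    rho -> (1 - p) rho + sum_w p_w sigma_w rho sigma_w, with p = px + py + pz."""
    p = px + py + pz
    return ((1 - p) * rho
            + px * X @ rho @ X
            + py * Y @ rho @ Y
            + pz * Z @ rho @ Z)

def depolarizing_channel(rho, p):
    """Depolarizing channel: equal probability p/3 for each Pauli error."""
    return pauli_channel(rho, p / 3, p / 3, p / 3)
```

At p = 3/4 the depolarizing channel maps any input state to the maximally mixed state, since ρ + XρX + YρY + ZρZ = 2 tr(ρ) I for a single qubit.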

A. Topological codes
The goal of quantum error correction [7,8] is to protect quantum information from decoherence. One topological approach is based on encoding (few) logical qubits in a particular state subspace of (typically many) physical qubits that is not disturbed directly by noise. Such a subspace can be defined in terms of a set of commuting observables, called check operators, each a projective measurement with respect to the code subspace (i.e., the measured eigenvalue signals errors on the participating qubits). Measuring all check operators S_i allows one to limit the set of possible errors to those compatible with the measured error syndrome. The best strategy then is to classify the remaining, indistinguishable errors according to their effect on the encoded logical information and undo the effects of the most probable equivalence class E.
A hallmark of topological quantum error-correction codes [9][10][11][12][13][14] is the geometric locality of these check operators: Physical qubits are placed on a lattice and check operators depend only on a few neighboring sites. The logical information, which is encoded globally in a subspace of all physical qubits, is preserved as long as local errors can be successfully detected and corrected. If errors on the physical qubits occur with probability p, the error threshold p_c, a key figure of merit of any quantum code, is defined as the maximum error probability p such that error classification is achievable. For error rates larger than p_c, the complexity of the error syndrome inhibits unambiguous error recovery. It is therefore of current interest to find codes where p_c is large.

B. Error threshold as a phase transition
The process of error correction resembles a phase transition and, indeed, error correction can be connected directly to an order-disorder phase transition in a suitable classical statistical-mechanical model [12,15,16]. One can derive a Hamiltonian H_E of interacting Ising spins s_i, labeled by a Pauli error E that controls the sign of the couplings, such that the probability of each equivalence class E is proportional to the partition function

$$p(\bar E) \;\propto\; Z[E] = \sum_{\{s_i\}} e^{-\beta H_E(\{s_i\})}. \qquad (3)$$

Equation (3) has to be interpreted as describing a random statistical model with quenched couplings and two parameters: the error probability p, governing the fraction of negative interaction constants J_σ ∈ {±1}, and the inverse temperature β = 1/T. For low enough T and p the system orders into a ferromagnetic state (see Fig. 1). Along the Nishimori line [17], where Eq. (3) holds, the ordered [disordered] phase corresponds to the topological code being effective [ineffective]. The intersection of the Nishimori line and the phase boundary identifies the error threshold p_c. The first topological codes studied were toric codes [9], still under intense investigation and scrutiny mainly due to their simplicity and elegance. To determine their error threshold, we show that toric codes under the depolarizing channel connect to the celebrated eight-vertex model (see Fig. 2), introduced by Sutherland [18] as well as Fan and Wu [19], and whose general solution by Baxter [20][21][22] stands as the culmination of a series of breakthroughs in the theory of phase transitions and critical phenomena.
The aforementioned mapping onto a statistical-mechanical model to compute the error tolerance of quantum codes was first applied to toric codes with bit-flip errors [15], connecting them to the random-bond Ising model. In general, for individual bit flips the error threshold is p_c ≈ 10.9%, and the same is true for phase flips alone. Therefore, under depolarizing noise and correcting bit flips and phase flips separately, the threshold is reduced to approximately (3/2) × 10.9% ≈ 16.4%. Here, instead, we perform the mapping for the full depolarizing channel and optimal error correction, such that, in particular, correlations are taken into account. We find p_c = 18.9(3)%. Remarkably, the error threshold increases significantly when correlation effects are taken into account; they should thus not be neglected by recovery algorithms. A recent advance in this regard is the renormalization approach of Duclos et al. [23], where p_c ≃ 16.4% was obtained, still leaving room for further improvement [24]. Note also that p_c is very close to the hashing bound p ≃ 0.1893 [25], as is also the case for uncorrelated bit-flip and phase-flip noise [15,26].

II. TOPOLOGICAL STABILIZER CODES
A. Error correction in stabilizer codes

Both toric codes [9] and color codes [10] are topological stabilizer codes. A stabilizer code is described by a set of check operators S_i in the Pauli group, i.e., tensor products of the Pauli operators σ^x, σ^y, and σ^z. These check operators S_i form a commuting set of observables with eigenvalues ±1 that generates an Abelian group S := ⟨S_i⟩ not containing −1, called the stabilizer group. Encoded states |ψ⟩ are those for which all check operators satisfy S_i|ψ⟩ = +|ψ⟩. If errors affect the state, they will typically change the values of the check operators, leaving a trace that can be used to recover the original state. Note that some errors are undetectable because they commute with all check operators and thus leave no trace.
We are interested in noisy channels of the form

$$\rho \;\mapsto\; \sum_E p(E)\, E \rho E^\dagger, \qquad (4)$$

where the sum is over all Pauli group elements E, and p(E) denotes the probability for E to occur. Several different Pauli errors E have the same effect on the encoded state. Therefore, it is convenient to group them into equivalence classes E, such that E is equivalent to E′ when EρE† = E′ρE′† on an encoded state ρ or, equivalently, when EE′ is proportional to a product of check operators. The total probability for a given class of errors is then

$$p(\bar E) = \sum_{E' \in \bar E} p(E'). \qquad (5)$$

One can choose a set of undetectable errors D_i and use them to label the error classes compatible with any given syndrome: if E is compatible with the syndrome, then the possible error classes are the class of E itself and the classes of the D_i E.
The error-correction process starts with the measurement of the check operators S_i. Measuring each S_i yields an eigenvalue s_i = ±1, and only certain errors are compatible with these eigenvalues: E is compatible with the error syndrome if ES_i = s_i S_i E. Ideally, given a syndrome s = {s_i}, one can compute the relative probabilities P(E|s) of the different error classes E compatible with s. If E_s is the class that maximizes this probability, the best guess is that this error occurred, and it should be corrected. The net effect of such ideal error correction is

$$\rho \;\mapsto\; p_0\, \rho + \sum_i p_i\, D_i \rho D_i^\dagger, \qquad (6)$$

where the success probability p_0 and the probability p_i for an effective error D_i are

$$p_0 = \sum_s p(\bar E_s), \qquad p_i = \sum_s p(\overline{D_i E_s}). \qquad (7)$$

Note that in Eq. (7) the sum is over possible syndromes. Furthermore, the success probability can be written in terms of the relative weights of the competing classes,

$$p_0 = \Big[\Big(1 + \sum_i e^{-\Delta_i}\Big)^{-1}\Big]_{\mathrm{av}}, \qquad (8)$$

where e^{−Δ_i} is the probability of the class D_i E_s relative to the chosen class E_s (the Δ_i reappear below as free-energy differences); in particular, 1/D ≤ p_0 ≤ 1, where D is the number of error classes per error syndrome. In practice this ideal error correction might be too costly from a computational perspective. Therefore, approximations are needed.

B. Toric codes and color codes
Topological codes have two interesting features: First, they can be defined for different system sizes in such a way that check operators remain local, involving only a few neighboring qubits, while nontrivial undetectable errors are global and thus involve a number of qubits that grows with the system size. Second, they exhibit an error threshold: For error rates below the threshold, the success probability [Eq. (7)] approaches 1 with increasing system size, with 1 − p_0 decreasing exponentially.
In toric codes [9] physical qubits are placed on the edges of a square lattice. Notice that for each edge in the direct lattice there is an edge in the dual lattice.
Check operators S_f are attached to faces f, either in the direct or in the dual lattice. Toric codes can thus be defined in two similar but distinct ways. In the original definition by Kitaev, if f is a face in the direct [dual] lattice composed of the edges r, s, t, and u, then the corresponding check operator is

$$S_f := \sigma^x_r \otimes \sigma^x_s \otimes \sigma^x_t \otimes \sigma^x_u \quad \big[\, S_f := \sigma^z_r \otimes \sigma^z_s \otimes \sigma^z_t \otimes \sigma^z_u \,\big].$$

The second definition is due to Wen [27] and does not distinguish between direct and dual faces: If f has a top edge r, a bottom edge s, and side edges t and u, then we take S_f := σ^z_r ⊗ σ^z_s ⊗ σ^x_t ⊗ σ^x_u. Both definitions are equivalent up to a rotation of half of the qubits. However, for the depolarizing channel Kitaev's definition is related to the alternating eight-vertex model and Wen's definition to the standard eight-vertex model.
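As a sanity check on such definitions, one can represent Pauli operators in binary symplectic form, where an operator is a pair of bit vectors (x | z) and two operators commute iff x₁·z₂ + z₁·x₂ = 0 (mod 2). The sketch below builds the standard star/plaquette check operators of a toric code on an L × L torus and lets one verify that they all commute; the edge-indexing convention is our own:

```python
import numpy as np

def kitaev_toric_checks(L):
    """Star (X-type) and plaquette (Z-type) check operators of a toric
    code on an L x L torus, in binary symplectic form (x | z).
    Qubits live on edges: horizontal edge east of vertex (i, j) has
    index 2*(i*L + j); vertical edge south of (i, j) has 2*(i*L + j)+1
    (our own indexing convention)."""
    n = 2 * L * L
    def h(i, j):
        return 2 * ((i % L) * L + (j % L))
    def v(i, j):
        return 2 * ((i % L) * L + (j % L)) + 1
    stars, plaqs = [], []
    for i in range(L):
        for j in range(L):
            # star at vertex (i, j): X on the four incident edges
            sx = np.zeros(n, dtype=int)
            for q in (h(i, j), h(i, j - 1), v(i, j), v(i - 1, j)):
                sx[q] = 1
            stars.append((sx, np.zeros(n, dtype=int)))
            # plaquette with top-left corner (i, j): Z on its boundary
            sz = np.zeros(n, dtype=int)
            for q in (h(i, j), h(i + 1, j), v(i, j), v(i, j + 1)):
                sz[q] = 1
            plaqs.append((np.zeros(n, dtype=int), sz))
    return stars, plaqs

def commute(a, b):
    """Symplectic criterion: commute iff x1.z2 + z1.x2 = 0 mod 2."""
    (x1, z1), (x2, z2) = a, b
    return (x1 @ z2 + z1 @ x2) % 2 == 0
```

Every star and plaquette overlap on either zero or two edges, so all pairs commute, as required of a stabilizer group.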
In color codes [10] physical qubits are placed on the vertices of a trivalent lattice with three-colorable faces, such as, for example, the honeycomb lattice. There are two check operators S^x_f and S^z_f attached to each face f, taking the form S^x_f := ∏_i σ^x_i and S^z_f := ∏_i σ^z_i, respectively, with i running over the qubits on the vertices of f.
Because the computing capabilities of color codes depend on the underlying lattice on which the qubits are placed, we study two different scenarios: the honeycomb lattice, for its simplicity, and a lattice of octagons and squares, which allows for the implementation of additional types of quantum gates. In the mapping onto a statistical-mechanical model to compute the error threshold, these two arrangements correspond to the triangular and Union Jack lattices, respectively (see Fig. 3).

III. RANDOM EIGHT-VERTEX ISING MODELS
To determine the error threshold, we show that topological codes under the depolarizing channel connect to certain random classical spin models.
For the toric code, the error-correction process maps onto a statistical-mechanical interacting eight-vertex model [18][19][20][21]. Remarkably, this class of models exhibits critical exponents that depend on the coupling constants, thus challenging the very notion of universality. Eight-vertex models were originally formulated in the 'electric picture,' where the degrees of freedom are electric dipoles placed on the bonds surrounding each vertex of a square lattice [22], i.e., there are eight independent dipole configurations per vertex. In addition, a mapping to a 'magnetic picture' was found by Wu [28], as well as Kadanoff and Wegner [29]: Consider two independent Ising systems, each on a square lattice, with classical spin variables s_i and s'_k taking on values ±1, and bonds J_ij and J'_kℓ, respectively. The lattices are stacked as shown in Fig. 2, such that the vertices (spin sites) of one lattice are at the centers of the plaquettes of the other. The Hamiltonian takes the explicit form

$$H = -\sum_{\langle ij\rangle} J_{ij}\, s_i s_j \;-\; \sum_{\langle k\ell\rangle} J'_{k\ell}\, s'_k s'_\ell \;-\; \sum_{+} J_{+}\, s_i s_j s'_k s'_\ell,$$

where the last sum runs over the crossings "+" of the two lattices. This can be thought of as two Ising models coupled by a four-spin interaction (denoted by the symbol +) between the original and dual lattices.
In fact, two types of eight-vertex models can be related to error correction in toric codes: the standard eight-vertex model, in which the two-spin couplings are uniform over each lattice, and the alternating eight-vertex model, in which the two-spin couplings alternate across the lattice. In both cases, we set the four-spin interaction to J_+ = J_4, as depicted in Fig. 2(c). Thus, both types of eight-vertex models share the same lattice structure but differ in the pattern of coupling constants. This difference has fundamental consequences for the exact solvability of the model: While the standard eight-vertex model is exactly solved for arbitrary couplings, the alternating eight-vertex model is generally not exactly solvable. In fact, the latter corresponds to the Ashkin-Teller model [30]. Notice that when J_4 = 0 the eight-vertex model reduces to two decoupled Ising models, while for J_4 ≠ 0 the model has two critical temperatures.
The error threshold for correction in quantum codes corresponds to the critical line separating the ordered from the disordered phase; the former represents a situation where quantum error correction can be performed with arbitrary precision. Determining the location of this critical line in eight-vertex models is facilitated by a self-duality symmetry of the partition function: a duality transformation relating a high-temperature eight-vertex model to a low-temperature one on the same lattice. Self-duality requires the coupling constants for the two-spin interactions to be isotropic, i.e., J = J′. Altogether, an isotropic self-dual eight-vertex model has a critical line given by [22]

$$\sinh(2\beta J) = e^{-2\beta J_4},$$

with the restriction that J_4 ≤ J. The point in the plane (J_4 = J_c, J = J_c) at which the self-dual line ceases to be critical is given by

$$\beta J_c = \tfrac{1}{4}\ln 3 \approx 0.2747.$$

This is already a remarkable and encouraging result, because it yields a critical temperature approximately 60% larger than in the standard two-dimensional square-lattice Ising model. Note that the error threshold for bit-flip or phase-flip errors in the Kitaev model is computed via a mapping to the aforementioned two-dimensional Ising model. In that case, the critical point follows from sinh(2βJ_c) = 1, i.e., βJ_c ≈ 0.4407. Recall that the critical exponents depend continuously on the value of J_4. In this work we extend the standard eight-vertex model by adding quenched disorder to the couplings between the spins. Given that the eight-vertex value βJ_c ≈ 0.2747 is smaller than that of the square-lattice Ising model, we can expect a larger error threshold for the depolarizing channel than for bit-flip or phase-flip errors.
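Both critical couplings quoted above can be checked numerically from the self-dual criticality condition sinh(2βJ) = exp(−2βJ₄); the small solver below (bisection details are ours) recovers βJ_c = (1/4) ln 3 at J₄ = J and the Ising value sinh(2βJ_c) = 1 at J₄ = 0:

```python
import math

def beta_c(J4_over_J):
    """Solve the self-dual criticality condition of the isotropic
    eight-vertex model, sinh(2*b) = exp(-2*b*J4/J), for b = beta*J
    (J = 1), by bisection. Valid for J4 <= J."""
    f = lambda b: math.sinh(2 * b) - math.exp(-2 * b * J4_over_J)
    lo, hi = 1e-9, 10.0  # f(lo) < 0 and f(hi) > 0; f is increasing
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At J₄ = J this gives βJ_c ≈ 0.2747 ≈ (1/4) ln 3, about 60% below the Ising value βJ_c ≈ 0.4407 obtained at J₄ = 0, i.e., a critical temperature about 60% higher.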
In addition to depolarizing errors in the toric code, where the problem maps onto the eight-vertex Hamiltonian above, we also study color codes (see Fig. 3). In this case the underlying statistical-mechanical model for the stability against depolarizing errors is defined on a triangular lattice. There are two Ising variables, s_i and s'_i, per site. For convenience, we introduce a third variable s''_i = s_i s'_i. The Hamiltonian is then given by

$$H_2 = -\sum_{\langle ijk\rangle} \big( J_{ijk}\, s_i s_j s_k + J'_{ijk}\, s'_i s'_j s'_k + J''_{ijk}\, s''_i s''_j s''_k \big), \qquad (12)$$

where the sum runs over the elementary triangles of the lattice. Equation (12) is illustrated in Fig. 4, where the top [bottom] layer corresponds to the s_i [s'_i] Ising variables with the corresponding three-spin interaction term shown in Fig. 4(a) [4(b)]. The third term in the Hamiltonian, with six-spin interactions, is represented in Fig. 4(c).
When J''_ijk = 0 in Eq. (12), we obtain two independent triangular three-body Ising models. Interestingly, this model can be mapped onto an eight-vertex model on a Kagomé lattice [31]. Therefore, the color-code Hamiltonian H_2 in Eq. (12) can be thought of as an interacting eight-vertex model (or coupled eight-vertex models). In this work we consider two different lattice geometries, triangular and Union Jack (see Fig. 3).

A. Spin models for depolarizing noise
The goal is to relate the stability of a topological stabilizer code against depolarizing noise to the ordered phase of a suitably chosen classical spin model. However, here we consider the more general qubit channel of Eq. (1); this adds transparency to the mapping and reveals the differences between Kitaev's and Wen's versions of the toric code with respect to error correction. When Eq. (1) is applied to each qubit in a code, the net result is a channel of the form presented in Eq. (4). In particular, the probability for each Pauli error is

$$p(E) = (1-p)^{\,n - E_x - E_y - E_z}\; p_x^{E_x}\, p_y^{E_y}\, p_z^{E_z}, \qquad (13)$$

where n is the total number of qubits and E_w is the number of appearances of σ^w in the tensor product forming E. The classical spin Hamiltonian is constructed as follows:
1. Attach a classical spin s_i to each check operator S_i.
2. Associate with each single-qubit Pauli operator σ an interaction term J_σ s^σ_1 s^σ_2 · · · s^σ_{N_σ}, such that the spins s_i correspond to the check operators S_i affected by σ, i.e., such that S_i σ = −σ S_i.
3. Attach to each coupling a sign τ_σ = ±1 dictated by the Pauli error E labeling the Hamiltonian, through the condition σE = τ_σ Eσ.
The resulting Hamiltonian takes the general form

$$H_E(\{s_i\}) = -\sum_\sigma \tau_\sigma\, J_\sigma\, s^\sigma_1 s^\sigma_2 \cdots s^\sigma_{N_\sigma}, \qquad (14)$$

where the sum is over all single-qubit Pauli operators σ, and there are only three different couplings J_σ, since we set J_{σ^w_k} := J_w, with w = x, y, z and k the qubit label.
For the mapping, we require the interaction constants to satisfy

$$\frac{p_w}{1-p} = e^{2\beta\,(J_w - J_x - J_y - J_z)}, \qquad w = x, y, z. \qquad (15)$$

This relates the error probability in Eq. (13) to the Boltzmann factor for the ordered state, {s_i = 1}, given the interactions generated by E:

$$p(E) \propto e^{-\beta H_E(\{s_i = 1\})}. \qquad (16)$$

To recover Eq. (3), notice that replacing the error E → E′ = S_j E is equivalent to considering a different spin configuration in the original Hamiltonian: the couplings affected by S_j change sign, which amounts to flipping the spin s_j,

$$H_{S_j E}(\{s_i\}) = H_E(\{\tilde s_i\}), \qquad \tilde s_j = -s_j, \quad \tilde s_i = s_i \;\; (i \neq j). \qquad (17)$$

Finally, to complete the mapping, the label E in the Hamiltonian must describe quenched randomness. In particular, the coupling configuration dictated by E appears with probability p(E). Equivalently, for every qubit k the probability of each configuration of coupling signs is

$$p(\tau^x_k, \tau^y_k, \tau^z_k) = \begin{cases} 1-p, & \text{no error on qubit } k,\\ p_w, & \text{sign pattern generated by } \sigma^w. \end{cases} \qquad (18)$$

In the case of the depolarizing channel,

$$p_x = p_y = p_z = p/3 \quad \text{and} \quad J_x = J_y = J_z = J, \qquad (19)$$

and Eq. (15) reduces to the Nishimori condition

$$3\, e^{-4\beta J} = \frac{p}{1-p}. \qquad (20)$$

The resulting model has two parameters, p and the inverse temperature β = 1/T (we set J = 1). For low p and T the model orders ferromagnetically, and along the Nishimori line this order is equivalent to the noise being correctable. Indeed, comparing error-class probabilities amounts to computing free-energy differences

$$\Delta_i(\beta, E) = \ln\frac{Z[E]}{Z[D_i E]}. \qquad (21)$$

In topological codes we expect the existence of an error threshold p_c (or several, for different error types, but we do not need such generality). When p < p_c, in the limit of large systems the success probability is expected to approach unity, p_0 → 1, and thus, due to Eq. (8), along the Nishimori line the free-energy differences are asymptotically infinite: for any real t, P(Δ_i(β, E) > t) → 1. Similarly, when p > p_c, the success probability is expected to become minimal (p_0 → 1/D), and thus the free-energy differences converge in probability to zero, so that for any t > 0 we have P(|Δ_i(β, E)| < t) → 1. This shows that the free-energy differences Δ_i are order parameters and p_c is the critical value of p along the Nishimori line. In the models of interest here, these are domain-wall free energies.
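For the depolarizing channel these relations are easy to code: the Nishimori condition 3 exp(−4βJ) = p/(1 − p), used again in the duality analysis later in the text, fixes the temperature, and the quenched coupling signs per qubit follow from which Pauli error (if any) hit that qubit. A sketch (helper names are ours):

```python
import math, random

def beta_nishimori(p, J=1.0):
    """Inverse temperature on the Nishimori line for depolarizing
    noise: 3 * exp(-4 * beta * J) = p / (1 - p)."""
    return -math.log(p / (3.0 * (1.0 - p))) / (4.0 * J)

def sample_coupling_signs(p, n, rng=random):
    """Quenched signs (tau_x, tau_y, tau_z) for n qubits. A sigma_w
    error flips the sign tau_sigma of every Pauli sigma that
    anticommutes with sigma_w; note tau_y = tau_x * tau_z always."""
    signs = []
    for _ in range(n):
        r = rng.random()
        if r < 1 - p:            # no error
            tx, tz = +1, +1
        elif r < 1 - 2 * p / 3:  # sigma_x error: flips Z- and Y-type signs
            tx, tz = +1, -1
        elif r < 1 - p / 3:      # sigma_y error: flips X- and Z-type signs
            tx, tz = -1, -1
        else:                    # sigma_z error: flips X- and Y-type signs
            tx, tz = -1, +1
        signs.append((tx, tx * tz, tz))
    return signs
```

The identity τ^y = τ^x τ^z reflects the fact that σ^y is, up to a phase, the product σ^x σ^z.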

B. Models for toric and color codes
Let us now study what the above mapping, Eq. (14), produces when applied to toric codes and to topological color codes.
For the toric code, the single-qubit operators σ^x and σ^z produce two-body interactions, because each bit flip [phase flip] affects the check operators on the two neighboring dual [direct] faces. The σ^y operators, which combine correlated bit-flip and phase-flip errors, introduce four-body interactions (see Fig. 2). The result is an alternating eight-vertex model with coupling signs parametrized by a Pauli error E, namely,

$$H_E = -J \sum_k \big( \tau^z_k\, s_i s_j + \tau^x_k\, s'_l s'_m + \tau^y_k\, s_i s_j s'_l s'_m \big), \qquad (22)$$

where k runs over the qubits, s_i, s_j (s'_l, s'_m) are the spins on the two direct (dual) faces adjacent to qubit k, and the signs satisfy τ^y_k = τ^x_k τ^z_k (see Fig. 2). The two layers are shifted by half a lattice spacing, and the third term in Eq. (22) describes the combined four-spin interaction at each of the crossings "+". Note that in Eq. (22) there is one qubit per cross +. For Wen's toric code one recovers the standard eight-vertex model. In either case, there is a global Z_2 × Z_2 symmetry, because one can flip all spins in each lattice separately without affecting the total energy.
In the case of color codes there is one spin per face. The σ^x and σ^z single-qubit operators produce three-body interactions in Eq. (14), whereas σ^y operators produce six-body interactions. The Hamiltonian is then given by Eq. (12), but with coupling signs parametrized by a Pauli error E, namely, for the triangle t hosting qubit t,

$$J_{ijk} = \tau^z_t\, J, \qquad J'_{ijk} = \tau^x_t\, J, \qquad J''_{ijk} = \tau^y_t\, J, \qquad (23)$$

with τ^x_t τ^y_t τ^z_t = 1. Therefore, we obtain two stacked triangular lattices with three- and six-body interactions (see Fig. 4), the six-body interactions introducing correlations between the layers. In this case the global symmetry is Z_2 × Z_2 × Z_2 × Z_2. Indeed, the sites can be colored with three colors in such a way that each triangle has a site of each color; one can then flip all spins of two given colors in either of the two lattices separately without affecting the total energy, since every interaction term contains an even number of flipped spins.
For p = 0 in Eqs. (22) and (23), self-duality predicts a critical temperature of T_c = 1/(βJ)_c ≃ 3.641, a value that we confirm numerically in our Monte Carlo simulations.

V. MONTE CARLO SIMULATIONS
We investigate the classical statistical spin models obtained from the mapping, Eqs. (22) and (23), via large-scale classical Monte Carlo simulations using the parallel tempering Monte Carlo technique [32].
In the parallel tempering Monte Carlo method, several identical copies of the system at different temperatures are simulated. In addition to local simple Monte Carlo (Metropolis) spin updates [33], one performs global moves in which the temperatures of two neighboring copies are exchanged. It is important to select the positions of the individual temperatures carefully, such that the acceptance probabilities for the global moves are large enough [34] and each copy performs a random walk in temperature space. This, in turn, allows each copy to efficiently sample the rough energy landscape, thereby speeding up the simulation enormously.
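As an illustration, here is a minimal parallel tempering loop for a plain two-dimensional Ising model with periodic boundaries, a stand-in for the disordered Hamiltonians (22) and (23) actually simulated; all parameters are arbitrary and the structure (Metropolis sweeps plus replica-exchange moves) is the point:

```python
import math, random

def parallel_tempering(L=8, temps=(1.5, 2.0, 2.5, 3.0), sweeps=200, seed=1):
    """Minimal parallel tempering for a 2D Ising model with periodic
    boundaries. Each replica performs Metropolis sweeps at its own
    temperature; neighboring temperatures then attempt a replica
    exchange with acceptance min(1, exp[(b1 - b2)(E1 - E2)])."""
    rng = random.Random(seed)
    n = L * L
    def neighbors(i):
        r, c = divmod(i, L)
        return (((r + 1) % L) * L + c, ((r - 1) % L) * L + c,
                r * L + (c + 1) % L, r * L + (c - 1) % L)
    def energy(s):  # -sum over bonds of s_i s_j (each bond counted once)
        return -sum(s[i] * s[j] for i in range(n) for j in neighbors(i)) / 2
    reps = [[rng.choice((-1, 1)) for _ in range(n)] for _ in temps]
    E = [energy(s) for s in reps]
    for _ in range(sweeps):
        for k, T in enumerate(temps):
            s = reps[k]
            for _ in range(n):  # one Metropolis sweep
                i = rng.randrange(n)
                dE = 2 * s[i] * sum(s[j] for j in neighbors(i))
                if dE <= 0 or rng.random() < math.exp(-dE / T):
                    s[i] = -s[i]
                    E[k] += dE
        for k in range(len(temps) - 1):  # replica-exchange moves
            d = (1 / temps[k] - 1 / temps[k + 1]) * (E[k] - E[k + 1])
            if d >= 0 or rng.random() < math.exp(d):
                reps[k], reps[k + 1] = reps[k + 1], reps[k]
                E[k], E[k + 1] = E[k + 1], E[k]
    return reps, E
```

The exchange acceptance rule preserves the joint Boltzmann distribution over all replicas, which is what lets low-temperature copies escape local minima by diffusing up and down in temperature.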
Detecting the transition temperature T_c(p) for different fixed amounts of disorder allows us to pinpoint the phase boundary in the p-T phase diagram. The error threshold p_c is then given by the intersection of the phase boundary with the Nishimori line.

A. Observables and Simulation Details
For the toric code it is expedient to partition the lattice into two sublattices such that the only interconnection is given by the four-body interactions of the Hamiltonian in Eq. (22). The ground state of the pure system is realized when the spins of each sublattice are aligned (although the alignment may differ between the sublattices, as the sign would cancel in both the two- and four-spin terms). In this case the sublattice magnetization

$$m = \frac{1}{|P|} \sum_{i \in P} s_i \qquad (24)$$

is a good order parameter, where P denotes one of the sublattices. Similarly, each layer of the triangular lattice for color codes is tripartite, with spins aligned in each sublattice for all realizations of the pure system's ground state. Hence, we can define an order parameter analogous to Eq. (24) for a suitable sublattice P′. Note that particular caution is required when implementing the periodic boundary conditions to ensure that these distinct sublattices are well defined. From the magnetization defined in Eq. (24) we construct the wave-vector-dependent magnetic susceptibility

$$\chi(\mathbf{k}) = \frac{1}{|P|} \Big\langle \Big| \sum_{i \in P} s_i\, e^{i \mathbf{k} \cdot \mathbf{R}_i} \Big|^2 \Big\rangle_T, \qquad (25)$$

where ⟨· · ·⟩_T denotes a thermal average and R_i is the spatial location of spin s_i. From Eq. (25) we construct the two-point finite-size correlation function [35]

$$\xi_L = \frac{1}{2 \sin(|\mathbf{k}_{\min}|/2)} \sqrt{\frac{[\chi(0)]_{\mathrm{av}}}{[\chi(\mathbf{k}_{\min})]_{\mathrm{av}}} - 1}, \qquad (26)$$

where [· · ·]_av denotes an average over disorder and k_min = (2π/L, 0) is the smallest nonzero wave vector. Near the transition, ξ_L is expected to scale as

$$\xi_L / L = \tilde X\big( L^{1/\nu} [T - T_c] \big), \qquad (27)$$

where X̃ is a dimensionless scaling function. Because at the transition temperature, T = T_c, the argument of Eq. (27) becomes zero (and hence independent of L), we expect the curves for different system sizes to cross at this point. If, however, the curves do not meet, no transition occurs in the studied temperature range. In all simulations, equilibration is tested using a logarithmic binning of the data: We verify that all observables averaged over logarithmically increasing amounts of Monte Carlo time are, on average, time independent.
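The observables of Eqs. (25) and (26) are straightforward to code; a sketch assuming spin configurations stored as flat lists on a square sublattice of linear size L (function names are our own, and the disorder average of the full analysis is replaced here by a single-sample thermal average):

```python
import cmath, math

def susceptibility(configs, k, positions):
    """Wave-vector-dependent susceptibility, Eq. (25): average of
    |sum_i s_i exp(i k . R_i)|^2 / N over the sampled configurations."""
    N = len(positions)
    total = 0.0
    for s in configs:
        amp = sum(si * cmath.exp(1j * (k[0] * R[0] + k[1] * R[1]))
                  for si, R in zip(s, positions))
        total += abs(amp) ** 2 / N
    return total / len(configs)

def correlation_length(configs, L, positions):
    """Two-point finite-size correlation length, Eq. (26), using the
    smallest nonzero wave vector k_min = (2*pi/L, 0)."""
    kmin = (2 * math.pi / L, 0.0)
    chi0 = susceptibility(configs, (0.0, 0.0), positions)
    chik = susceptibility(configs, kmin, positions)
    # guard against a vanishing ratio for pathological samples
    return math.sqrt(max(chi0 / chik - 1.0, 0.0)) / (2 * math.sin(math.pi / L))
```

Strongly ordered samples yield ξ_L well above L, while uncorrelated samples yield ξ_L of order one, which is what makes the crossings of ξ_L/L a sensitive transition locator.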
Once the data for all observables agree for three logarithmic bins, we deem the Monte Carlo simulation for that system size to be in thermal equilibrium. The simulation parameters are listed in Table I.

B. Sample results
For the pure system (p = 0) there is a sharp transition visible directly in the sublattice magnetization. The transition temperature T_c,pure ≈ 3.64 coincides with the value found analytically. For larger amounts of disorder, a transition can still be located precisely by means of the crossings of the two-point finite-size correlation function [Eq. (26)] for different system sizes. Sample data for a disorder strength of p = 0.170 (i.e., on average 17% of the physical qubits are "broken") are shown in Fig. 5, indicating a transition temperature of T_c(p) = 2.14(2). The error bars are calculated using a bootstrap analysis with 500 samples. There are small finite-size effects, which we address by analyzing the intersections T*_c(L, 2L) of pairs of system sizes. We estimate the limiting value for L → ∞ by means of a linear fit in 1/L; this is our estimate for the physically relevant thermodynamic limit. For disorder rates approaching the error threshold, corrections to scaling increase and a careful finite-size scaling analysis has to be performed to determine T_c [36]. At p = 0.189 the lines only touch marginally, such that both a crossing and the absence of a transition are compatible within error bars. This gives rise to the large error bars in the phase diagram (Fig. 1). For error rates p > p_c the lines do not meet, indicating that there is no transition in the temperature range studied. The crossing of the critical line T_c(p) with the Nishimori line [Eq. (20)] determines the error threshold to depolarization. Our (conservative) estimate is p_c = 0.189(3). Our results are summarized in Fig. 1, which shows the estimated phase diagram.
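The pairwise-crossing extrapolation described above can be sketched as follows; the numbers are synthetic (made up for illustration), not simulation data:

```python
import numpy as np

def extrapolate_tc(Ls, T_star):
    """Linear fit of the pairwise crossing temperatures T*(L, 2L)
    against 1/L; the intercept estimates T_c in the thermodynamic
    limit L -> infinity."""
    inv_L = 1.0 / np.asarray(Ls, dtype=float)
    slope, intercept = np.polyfit(inv_L, np.asarray(T_star, dtype=float), 1)
    return intercept

# synthetic example: crossings drift as T*(L) = T_c + a/L with T_c = 2.14
Ls = [8, 12, 16, 24, 32]
T_star = [2.14 + 0.8 / L for L in Ls]
```

For data that follow the leading 1/L correction, the fit recovers the true T_c exactly; in practice, higher-order corrections near p_c are what necessitate the careful scaling analysis mentioned in the text.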

VI. DUALITY METHOD
An alternative approach to estimating the critical value p_c is the duality method [37], originally developed in the context of spin glasses.
The critical point of a statistical model expressed only through local interactions between degrees of freedom can be analyzed with the duality method under the assumption of a unique transition temperature. The partition function Z[A] is then built from the local Boltzmann factor A_φ = exp(βJ cos πφ), where φ ∈ {0, 1} is the difference between adjacent spins, so that cos(πφ) = ±1. We define the principal Boltzmann factor A_0 as the case where all spins are parallel. The partition function has to be invariant under a binary Fourier transformation of the local Boltzmann factor, i.e., Z[A] = Z[A*], where A* is the dual Boltzmann factor obtained from the transform. The critical point is then determined via the equality A_0 = A*_0; for self-dual models such as the standard Ising model, this condition reproduces the exact critical point. Although this self-duality argument does not hold a priori for systems with quenched disorder, the method can be applied in a special subspace, the Nishimori line [37]. The results can be improved by considering extended local Boltzmann factors over a restricted range of interactions [37,38] (see Fig. 6, which illustrates the clusters used). Because the resulting statistical-mechanical Hamiltonians for both the toric code and topological color codes are self-dual, we can apply this efficient technique to obtain estimates (up to systematic deviations that depend on the clusters used) of the error threshold.
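For the pure Ising model this condition can be checked numerically: the local Boltzmann factor has the two components A_0 = e^{βJ} and A_1 = e^{−βJ}, the binary Fourier (Hadamard) transform gives A*_k = (A_0 ± A_1)/√2, and solving A_0 = A*_0 recovers the known critical coupling sinh(2βJ_c) = 1 (solver details are ours):

```python
import math

def dual_components(betaJ):
    """Binary Fourier (Hadamard) transform of the two-component local
    Boltzmann factor A_phi = exp(betaJ * cos(pi * phi)) of the pure
    Ising model: A*_k = (1/sqrt(2)) * sum_phi A_phi * (-1)^(k*phi)."""
    A = [math.exp(betaJ), math.exp(-betaJ)]
    return [(A[0] + A[1]) / math.sqrt(2), (A[0] - A[1]) / math.sqrt(2)]

def selfdual_point():
    """Solve A_0 = A*_0 for betaJ by bisection."""
    f = lambda b: math.exp(b) - dual_components(b)[0]
    lo, hi = 0.1, 1.0  # f(lo) < 0 and f(hi) > 0 in this bracket
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The root is βJ_c = (1/2) ln(1 + √2) ≈ 0.4407, the exact Onsager value, which is what makes the duality method an attractive nonperturbative estimator.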

A. Zeroth-order approximation
The effects of the depolarizing channel on topological codes can be expressed by a spin-glass model with a partition function [39] built from the local Boltzmann factor

$$A_{\phi\bar\phi} = \exp\{\beta J \tau^x \cos\pi\phi + \beta J \tau^z \cos\pi\bar\phi + \beta J \tau^x \tau^z \cos\pi\phi\,\cos\pi\bar\phi\},$$

where φ, φ̄ ∈ {0, 1} are the differences between adjacent spins in the two Ising layers.
Here τ^x_ij ∈ {±1} and τ^z_ij ∈ {±1} are quenched random variables chosen, for the depolarizing channel, from the distribution

$$P(\tau^x_{ij}, \tau^z_{ij}) = \begin{cases} 1 - p, & (\tau^x_{ij}, \tau^z_{ij}) = (+1, +1),\\ p/3, & \text{otherwise}. \end{cases}$$

This model has a gauge symmetry in the subspace J = J_p, which corresponds to the Nishimori line.
To determine the multicritical point we replicate the partition function to take into account the quenched randomness of the variables τ^x_ij and τ^z_ij, i.e., we consider the configurational average [Z^n]_av, where the brackets denote an average over the disorder. The local Boltzmann factor is then given by A_{n,k}, where k distinguishes the specific replica configuration (φ^α, φ̄^α). The n-binary Fourier transformation gives the dual Boltzmann factor A*_{n,k}. It follows [37,38] that A_{n,0} = A*_{n,0} determines the critical point along the Nishimori line. Taking the leading term in n, we obtain the error threshold for the depolarizing channel of the toric code as p_c = 0.189..., under the conditions J = J_p and 3 exp(−4βJ) = p/(1 − p) for the Nishimori line. Because the local Boltzmann factors for the topological color codes on both the hexagonal and square-octagonal lattices are the same, we obtain the same estimate for the error threshold.
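The hashing-bound figure that the text compares p_c to can be reproduced numerically: for the depolarizing channel it is the root of 1 − H₂(p) − p log₂3 = 0, with H₂ the binary entropy (this is the standard zero-rate condition for the hashing rate, stated here only as a cross-check of the quoted value p ≃ 0.1893):

```python
import math

def hashing_bound_depolarizing():
    """Solve 1 - H2(p) - p*log2(3) = 0 by bisection, where H2 is the
    binary entropy; the root is the hashing bound of the depolarizing
    channel."""
    H2 = lambda p: -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    f = lambda p: 1 - H2(p) - p * math.log2(3)
    lo, hi = 0.05, 0.25  # f(lo) > 0 and f(hi) < 0; f is decreasing here
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The root, ≈ 0.1893, agrees with the duality estimate p_c = 0.189... quoted above to the digits shown.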

B. First-order approximation using finite clusters
To reduce systematic errors we consider finite-size clusters: four bonds on each square lattice for the toric code, six triangles from each triangular lattice for the color codes on the hexagonal lattice, and four triangles from each Union Jack lattice for the color codes on the square-octagonal lattice (see Fig. 6). We compute the principal Boltzmann factors A^(1)_{n,0} on the clusters, as well as their duals A*^(1)_{n,0}, via an n-binary Fourier transformation. As before, the critical point along the Nishimori line is determined via A^(1)_{n,0} = A*^(1)_{n,0}. There are small variations in the estimates; however, they are all of the order of approximately 19% and in agreement with our results from the Monte Carlo simulations.

VII. RESULTS AND CONCLUSION
We have shown that the stability of toric codes under depolarizing noise can be related to the existence of a magnetic phase in a random eight-vertex model. Similarly, color codes turn out to be related to a class of 'interacting' eight-vertex models. We analyze the models resulting from the mapping via both large-scale parallel tempering Monte Carlo simulations [16,36] and the duality method [37,38]. By determining T_c(p) for different error probabilities p, we are able to determine the phase boundary in the p-T plane (Fig. 1). Both approaches confirm the existence of a stable ordered phase, and by locating the intersection of the phase boundary with the Nishimori line, we compute, in a nonperturbative way, the disturbing effects of noise on topological codes. The external noise considered in this work is more realistic than in previous studies, because it includes bit-flip errors, phase-flip errors, and, more importantly, a nontrivial combination thereof.
The error threshold to depolarization errors for the different classes of topological codes studied is approximately 19%, which is larger than the threshold for uncorrelated errors. This is very encouraging and shows that topological codes are more resilient to depolarization effects than previously thought. The profound relationship between complex statistical-mechanical models, such as the eight-vertex model, and topological quantum error correction promises to deliver a plethora of new paradigms to be studied in both fields in the coming years.