A decoder for the triangular color code by matching on a Möbius strip

The color code is remarkable for its ability to perform fault-tolerant logic gates. This motivates the design of practical decoders that minimise the resource cost of color-code quantum computation. Here we propose a decoder for the planar color code with a triangular boundary where we match syndrome defects on a nontrivial manifold that has the topology of a Möbius strip. A basic implementation of our decoder used on the color code with hexagonal lattice geometry demonstrates a logical failure rate that is competitive with the optimal performance of the surface code, $\sim p^{\alpha \sqrt{n}}$, with $\alpha \approx 6/(7\sqrt{3}) \approx 0.5$, error rate $p$, and $n$ the code length. Furthermore, by exhaustively testing over five billion error configurations, we find that a modification of our decoder that manually compares inequivalent recovery operators can correct all errors of weight $\le (d-1)/2$ for codes with distance $d \le 13$. Our decoder is derived using relations among the stabilizers that preserve global conservation laws at the lattice boundary. We present generalisations of our method to depolarising noise and fault-tolerant error correction, as well as to Majorana surface codes, higher-dimensional color codes and single-shot error correction.


I. INTRODUCTION
A quantum computer must be able to perform information-processing tasks with near noiseless logical qubits. To deal with the noise that physical qubits will experience, we imagine protecting and processing quantum information using quantum error-correcting codes [1][2][3][4][5][6][7][8][9]. As such we seek codes that can perform logical operations efficiently, while dealing with the significant number of errors that physical qubits will suffer. Ideally, we will find resource-efficient codes that can be realised with a relatively small number of the qubits currently available in laboratories, and that respect the technical constraints imposed by modern hardware [10][11][12].
We aim to reach a very low logical failure rate with a minimal number of physical qubits [6,[26][27][28][29]. At low error rates, and neglecting entropic factors, we expect the logical failure rate to decay like P_fail ∼ p^t, where p is the error rate of the physical qubits and t ≤ (d − 1)/2 is the number of errors the code can tolerate, with d the code distance. Maximising t will optimise the performance of the code far below threshold.
In addition to finding high-performance decoders it is also important for them to be practical. That is, they should have a fast runtime and they should be versatile to realistic laboratory settings. To this end we turn to the minimum-weight perfect-matching algorithm [6,[30][31][32]. Decoders based on matching generalise naturally to the fault-tolerant setting where stabilizer measurements are unreliable [6,[33][34][35]. Moreover, the matching subroutines can be replaced with almost linear-time algorithms that demonstrate comparable performance [36,37].
Here we propose an efficient matching decoder for the color code with boundaries that corrects high-weight errors. We find that a basic implementation of our decoder on the hexagonal lattice demonstrates a logical failure rate competitive with the square-lattice surface code [6,26,28,29] at low error rates using an equivalent number of qubits. We report a logical failure rate that scales as ∼ p^t with t ≈ 0.42d ≈ 3d/7 using an independent and identically distributed noise model. Given that this instance of the color code requires n ≈ 3d^2/4 physical qubits for its realisation, we have that t ≈ 6√n/(7√3) ≈ 0.5√n, which is competitive with the optimal performance of the surface code at low p, which demonstrates t = √n/2 [6,26,28,29]. Our decoder also demonstrates a threshold p_c ∼ 9.0%, exceeding that of other matching decoders on the hexagonal lattice.
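As a quick sanity check on these scalings, the sketch below verifies numerically that t ≈ 3d/7 together with n ≈ 3d^2/4 reproduces the quoted prefactor 6/(7√3) ≈ 0.5; the value d = 21 is an arbitrary illustrative choice, not one of the paper's simulated sizes.

```python
import math

# Illustrative check: t ≈ 3d/7 with n ≈ 3d^2/4 gives t ≈ 6*sqrt(n)/(7*sqrt(3)),
# i.e. t ≈ 0.5*sqrt(n). The distance d = 21 is an arbitrary odd example.
d = 21
n = 3 * d**2 / 4
t_from_d = 3 * d / 7
t_from_n = 6 * math.sqrt(n) / (7 * math.sqrt(3))
alpha = 6 / (7 * math.sqrt(3))
print(round(alpha, 3))  # 0.495
```

The two expressions for t agree exactly, since √n = (√3/2)d at leading order.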
Of course, we should strive to find decoders that maximise the number of errors a code can tolerate, with t = (d − 1)/2. We improve our decoder by developing a method for obtaining two inequivalent low-weight corrections, see also [38]. This method enables us to manually compare different corrections returned from the matching subroutines to make a better choice of output. We find that our improved decoder corrects errors up to its distance for system sizes d ≤ 13. We obtain this result by exhaustively testing all errors of weight ≤ (d − 1)/2 for each system size. At d = 13 we check over five billion error configurations.
Fundamentally, the surface code permits the use of matching decoders due to its materialized symmetries [5,35] where relations among the local elements of the stabilizer group give rise to a defect parity conservation law. Given that errors always produce defects in pairs, we can locally match nearby defects to successfully correct the code with high probability. In [35] it was proposed that the symmetries of more general stabilizer codes offer a unifying picture to find matching decoders for other codes, see e.g. [58][59][60]. At a basic level, we find that this perspective reproduces the aforementioned strategies of decoding the color code. Further, a more careful examination of the symmetries at the boundary of the color code allows us to find a matching graph that is associated to a global symmetry that is embedded on a Möbius strip. It is this observation that enables us to produce our results.
To elaborate on some of the principles we use to derive our matching decoder, in addition to our numerical results, we also give an extended discussion on how the ideas we have used can be generalised for other decoding problems with the color code and its variants. We look at color codes with different boundary conditions, Majorana surface codes [44,[61][62][63][64], higher-dimensional color codes [15,16,65], and we discuss single-shot error correction with the gauge color code [8,25,66]. We also examine the depolarising noise model together with other types of unfolding [22,35], as well as fault-tolerant error correction [24,25,52].
In what follows, we briefly introduce the color code and describe our decoder from the perspective of symmetries in Sec. II. In Sec. III we argue that our decoder will be capable of decoding high-weight errors. We evaluate the performance of our decoder in Sec. IV using several numerical experiments before offering some concluding remarks. We go into further detail about the matching subroutines used by our decoder in Appendices A, B, C and D, and how we analyse our data in Appendix E. We give an extended discussion on the generalisations of our decoding methods in Appendix F.

II. THE COLOR CODE
We define the color code [13] on a two-dimensional lattice with three-colorable faces. That is, the faces, indexed by f, can be assigned one of three colors, red, green and blue, such that no two faces of the same color touch. We will focus on the hexagonal lattice shown in Fig. 1 for simulations, but we remark that the discussions we give are agnostic to the underlying geometry of the three-colorable lattice. Let us label the colors with bold-face symbols from the set C = {r, g, b}, and we define the function col : o → C that specifies the color of an object o of the lattice. It will also be helpful to assign a color to each of the edges of the lattice. We say that an edge has color u ∈ C if it connects two distinct faces of color u. We will also say that an edge has color u if it connects a face of color u and the u-colored boundary, where we define our convention for coloring the boundaries below.
The lattice we are interested in, shown in Fig. 1, is embedded on a triangle. The three sides of the triangle support three distinct boundaries that are also specified by colors of the set C, where the qubits of the boundary of color u ∈ C touch no faces of color u. In Fig. 1 we outline the boundaries with their respective colors. Let us also assign a color to each of the three corners of the lattice. We say that a corner of the lattice is colored u if its vertex supports only one face of color u. We find the u-colored corner at the point where the two boundaries of colors v and w overlap, where u, v and w are all distinct.
The color code is such that a qubit is placed on each vertex v of the three-colorable lattice. Quantum error-correcting codes are designed to protect a subspace of the Hilbert space of the total system from common errors. We call this subspace the code subspace, or just the codespace for short. We specify the codespace using the stabilizer formalism. The stabilizer group S is an Abelian subgroup of the Pauli group acting on n qubits. The code subspace is the +1 eigenvalue eigenspace of all of its elements, i.e., s|ψ⟩ = |ψ⟩ for all s ∈ S, where the code subspace is spanned by state vectors |ψ⟩. The stabilizer generators of the color code are associated to the faces of the lattice. Each face supports two stabilizers,

S^X_f = ∏_{v∈∂f} X_v and S^Z_f = ∏_{v∈∂f} Z_v,

for all f, where ∂f is the set of qubits that lie on the boundary of face f, and X_v and Z_v are the standard Pauli matrices that act on the qubit on vertex v. We will only be interested in the Pauli-Z stabilizers, S^Z_f, in this work. As such we will omit the superscripts used for the complete definition and write the relevant stabilizers more simply as

S_f = ∏_{v∈∂f} Z_v,   (1)

where again ∂f is the set of qubits that touch f. We show an example stabilizer S_f = S^Z_f in Fig. 1(a).
The color code as defined above encodes a single qubit with an odd code distance d using n = 3(d−1)(d+1)/4 + 1 physical qubits. Its low-weight logical operators have string-like support that terminates at each of the three distinct boundaries of the code. Moreover, these string-like logical operators may also branch; see an example of a logical operator in Fig. 1(b). It will also be convenient to define the logical operators

Z_u = ∏_{v∈δu} Z_v and X_u = ∏_{v∈δu} X_v,   (2)

where we take the product over the d vertices δu that lie on the boundary of color u. Note that each of the logical operators Z_r, Z_g and Z_b are equivalent up to multiplication by an element of the stabilizer group, and so too are the logical Pauli-X operators X_u. Of course, X_u Z_v = −Z_v X_u for all colors u and v.
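The qubit-count formula can be checked in one line; the small-distance values below follow directly from the formula, with d = 3 recovering the familiar 7-qubit instance.

```python
def color_code_qubits(d):
    """Qubit count n = 3(d - 1)(d + 1)/4 + 1 of the triangular color code
    for odd distance d, as quoted in the text."""
    assert d % 2 == 1, "the triangular color code has odd distance"
    return 3 * (d - 1) * (d + 1) // 4 + 1

print([color_code_qubits(d) for d in (3, 5, 7)])  # [7, 19, 37]
```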
A quantum error-correcting code is designed to protect the state encoded in the code space. Let us briefly look at how the color code responds to errors. We will focus on bit-flip errors throughout this work. We write Pauli errors E = ∏_{v∈E} X_v where, by abuse of notation, E denotes both the subset of vertices that support error E, as well as the Pauli operator E itself. We measure the stabilizer generators to obtain information about E to find a correction C such that CE ∈ S. By the definition of the stabilizer group, this correction will recover the encoded state |ψ⟩ that has suffered error E, i.e., the state E|ψ⟩ that does not necessarily lie in the code subspace. We say that there is a defect at f if S_f E = −E S_f, and the error syndrome is the list of faces that support a defect for error E. We also assign each defect a color from C according to the color of the face on which it lies. A decoding algorithm is designed to determine a correction operator C such that CE ∈ S with high probability by taking the syndrome data and prior information about the error model as input.
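This response to bit-flip errors can be sketched in a few lines, with each stabilizer represented as the set of qubits it acts on; a defect sits at face f exactly when the error overlaps ∂f an odd number of times. The face names and qubit supports below are toy placeholders, not a real color-code lattice.

```python
# Toy sketch of syndrome extraction for bit-flip noise. A stabilizer S_f is
# the set of qubits on the boundary of face f; a defect appears at f exactly
# when the error flips an odd number of those qubits.
stabilizers = {
    "f_red":   {0, 1, 2, 3, 4, 5},
    "f_green": {4, 5, 6, 7, 8, 9},
}
error = {4}  # a single bit flip on a qubit shared by both faces

syndrome = {f for f, support in stabilizers.items()
            if len(support & error) % 2 == 1}
print(sorted(syndrome))  # ['f_green', 'f_red']
```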
Let us finally look at the syndrome produced by some small errors acting on the color code. In Fig. 1(c) we show a single bit-flip error that creates a single defect on each of its three neighbouring faces. The structure of the code is such that the three defects are all differently colored. Errors can be combined to create longer strings with defects at their endpoints. Fig. 1(d) shows a string of two errors that lie on the two vertices of a blue edge. This error has created two blue defects at either end of the string. In general, we can say that a string-like error has color u if it is supported on a sequence of u-colored edges.
The error syndrome appears differently at the boundary of the lattice. Strings of color u ∈ C can terminate at the u-boundary or the u-colored corner without producing a defect. We show a blue string terminating at the blue boundary in Fig. 1(e). Errors can compound further to create defects that are separated over a longer distance. Fig. 1(f) shows a string-like error where a red, green and blue string all meet at a branching point. The error has created a red and a green defect at the endpoints of their respectively colored strings. The blue string has terminated at the blue boundary.

A. Symmetries and decoding
Here we discuss the symmetries of the color code. The symmetries of the code give us a natural way for a decoder to interpret the syndrome data.
We define a symmetry [35,58] as a subset of stabilizers Σ whereby

∏_{s∈Σ} s = 𝟙.   (3)

This definition of a symmetry reveals a structure among the defects of the error syndrome that allows us to employ minimum-weight perfect matching for decoding. To see why, let us write the eigenvalue of s as σ_s = ±1. Given that (∏_{s∈Σ} s)|ψ⟩ = |ψ⟩ due to Eqn. (3), it follows that ∏_{s∈Σ} σ_s = 1.
A direct consequence of this relationship is that there must be an even number of defects, that is, stabilizers with σ s = −1, detected among the stabilizers s ∈ Σ. More explicitly, this means that every error will give rise to an even number of defects if we restrict our attention to the stabilizers of a symmetry. We can therefore predict the locations of errors by pairing defects that are likely caused by errors drawn from the given error model. We can regard this as a defect parity conservation law of the error syndrome. This observation is particularly intuitive in topological codes [5,6,13,14,35,60] where we have a local structure among the stabilizer operators. In such cases, errors can be interpreted as strings where defects appear at their endpoints.
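The defect parity conservation law is easy to verify in code. The sketch below uses a hypothetical miniature symmetry, far simpler than the color code's: four weight-two stabilizers covering every qubit exactly twice, so their product is the identity, and every bit-flip error flips an even number of them.

```python
from itertools import combinations

# Toy symmetry: four weight-two stabilizers on four qubits. Every qubit is
# covered exactly twice, so the product of the stabilizers is the identity
# (Eqn. (3)); hence any bit-flip error creates an even number of defects.
symmetry = [{0, 1}, {1, 2}, {2, 3}, {3, 0}]
for q in range(4):
    assert sum(q in s for s in symmetry) % 2 == 0  # even cover -> identity

for r in range(1, 5):
    for err in combinations(range(4), r):  # every possible bit-flip error
        defects = sum(len(s & set(err)) % 2 for s in symmetry)
        assert defects % 2 == 0            # even defect parity, always
print("defect parity is conserved")
```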
Let us now look at the symmetries of the color code. For now we will consider either infinite or periodic boundary conditions for simplicity. Focusing on just the bit-flip noise model, the symmetries of interest consist of stabilizer operators associated to faces of two specific colors.
FIG. 2. Decomposing a symmetry of the color code. We show four errors (a-d). We color the faces of the lattice that are members of Σ_r = Σ_r^A ∪ Σ_r^B, where we separate the faces of the symmetry into disjoint subsets such that Σ_r^A (Σ_r^B) are the faces of Σ_r that lie above (below) the red line. Red faces are excluded from the symmetry. The red line that extends from the left to the right of the figure shows the qubits that support a logical operator of the color code. Errors (a), (c) and (d) each give rise to a single defect on either side of the red line, on either blue or green faces. Error (b) creates only red defects, and as such they do not appear on the faces of the symmetry.
Let us define the red, green and blue symmetries, Σ_r, Σ_g and Σ_b, where

Σ_u = { S_f | col(f) ≠ u }.   (4)

For instance, Σ_r contains stabilizer operators S_f for all green and blue faces, see Fig. 2. Indeed, it is readily checked that the product of all of the colored hexagons on the lattice shown in the figure multiply to give 𝟙.
In addition to the stabilizers that are members of Σ_r, Fig. 2(a-d) shows four errors that, of course, must respect the symmetry of the code. As such, they all give rise to an even number of defects on the faces of Σ_r. Fig. 2 also shows the support of a logical operator Z by the red line that runs from left to right along red edges of the lattice.
We now consider how we can use symmetries to find a correction operator. The problem of decoding can be reduced to estimating the commutator of E and the logical operators of the code. Recall that we seek a correction operator C such that CE ∈ S. We begin by proposing a trivial correction operator C′ that restores the code to any state in the code space. Such an operator is easy to evaluate for topological codes by, say, finding a collection of string-like operators that move all of the defects to some common point on the lattice. We can then ask if C′ has the same commutator as E with respect to the logical operator Z. If we estimate that their commutators are the same, then we can choose C = C′ to recover the encoded state. Otherwise, we choose C = X C′ to recover the encoded state. It therefore remains to determine the commutator of E with Z.
Using the setup presented in Fig. 2 we see that errors that anticommute with Z produce a single defect on either side of the logical operator. In fact, we find that errors create an odd parity of defects on either side of the logical operator if and only if the error anticommutes with the logical operator. We make this claim rigorous in Appendix A. We therefore find that we can determine the commutator of E with Z by pairing nearby defects over the entire lattice and then counting the number of pairs of defects that are matched across the red line that supports the logical operator. The number of pairs that straddle this line gives us the parity of the number of qubits that the logical operator shares with E. This parity gives us the commutator between E and Z, whereby an even (odd) parity of edges crossing the support of the logical operator implies that E commutes (anticommutes) with Z, thus allowing us to evaluate C and complete the decoding problem. It remains to explain how we use matching to determine a likely error that produced the syndrome.
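The final counting step admits a very small sketch, under the assumption that each matched pair of defects is reported by the coordinate of each defect perpendicular to the line supporting the logical operator (here, hypothetically, the line y = 0).

```python
# Sketch: given a matching as pairs of defect coordinates measured
# perpendicular to the logical operator's line (hypothetically y = 0), the
# parity of matched pairs straddling the line gives the commutator of the
# error with the logical Z.
def commutes_with_logical(matched_pairs):
    crossings = sum(1 for y1, y2 in matched_pairs if (y1 < 0) != (y2 < 0))
    return crossings % 2 == 0  # even crossings -> E commutes with Z

# two pairs matched on a single side each, one pair matched across the line
print(commutes_with_logical([(-2.0, -1.0), (1.0, 3.0), (-1.0, 2.0)]))  # False
```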

B. A minimum-weight perfect-matching decoder and the restricted lattice
We can use minimum-weight perfect matching to find the commutator between some logical operator and an error that was likely to have caused the syndrome. As we have explained in the previous subsection, this is sufficient to find a correction operator. The minimum-weight perfect-matching algorithm [30,31] takes as input a graph with weighted edges and returns a perfect matching, i.e., a subset of the edges such that every vertex of the input graph is incident to exactly one edge of the subset, where the sum of the weights of the edges of the matching is minimal. Its complexity is O(V^3), where V is the number of vertices of the input graph. On the two-dimensional lattice we expect V = O(pd^2), giving a worst-case runtime of roughly O(d^6). See Ref. [6] where this idea was first employed for decoding topological codes. Let us also remark on recent work detailing a Python implementation of the algorithm [32]. Here we explain how we use minimum-weight perfect matching to decode the color code using the symmetries we have illustrated in the previous subsection.
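For illustration only, the toy routine below finds a minimum-weight perfect matching by brute-force search over all pairings. This exponential-time search stands in for the polynomial-time blossom algorithm [30,31] used in practice, and the Manhattan metric in the example is an arbitrary stand-in for distance on the lattice.

```python
# Brute-force minimum-weight perfect matching over all pairings of defects.
# Exponential time; a toy stand-in for the blossom algorithm.
def min_weight_perfect_matching(defects, dist):
    best = [float("inf"), None]  # [weight, pairing]

    def search(rest, pairs, weight):
        if not rest:
            if weight < best[0]:
                best[0], best[1] = weight, pairs
            return
        a, tail = rest[0], rest[1:]
        for k in range(len(tail)):  # pair the first vertex with each other one
            b = tail[k]
            search(tail[:k] + tail[k + 1:], pairs + [(a, b)],
                   weight + dist(defects[a], defects[b]))

    search(list(range(len(defects))), [], 0)
    return best[1]

defects = [(0, 0), (0, 1), (5, 5), (5, 7)]
manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
print(min_weight_perfect_matching(defects, manhattan))  # [(0, 1), (2, 3)]
```

Nearby defects are paired, as we expect from a least-weight solution.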
We perform minimum-weight perfect matching to pair defects on a restricted lattice; a concept first introduced in Ref. [51]. Let us see now how the symmetries of the color code give rise to the restricted lattices proposed in Ref. [51], see Fig. 3. The figure shows the color code model on the dual lattice, where qubits lie on the triangles of the lattice, and the stabilizers are represented by vertices. The restricted lattice is obtained by projecting each of the qubits supported on a triangle onto an edge that connects, say, the blue and the green vertices that are adjacent to the respective qubit. The restricted lattice is shown by red edges in Fig. 3. Let us now see how the symmetry relates to the restricted lattice. To the left of Fig. 3 we show the dual lattice overlaid with the primal lattice that we have already defined. The figure highlights the blue and green vertex stabilizers that correspond to the stabilizers of the Σ_r symmetry. Let us then say that the restricted lattice corresponding to this symmetry is the gb-restricted lattice. In general, we say that the restricted lattice that corresponds to the symmetry Σ_u is the vw-restricted lattice, where u, v and w are all distinct.

FIG. 3. The gb-restricted lattice from the symmetries of the color code. We draw the color code on the dual lattice where faces are replaced by vertices and qubits are replaced by triangles. We highlight the stabilizer vertices associated to the symmetry with blue and green vertices. We overlay the dual lattice with the primal lattice in the large circle to the left of the figure. Two adjacent stabilizers are connected by an edge. Each edge has two adjacent qubits that can be flipped to create defects at the endpoints of the edge, see e.g., q1 and q2 to the right of the figure. If exactly one of these two qubits is flipped then two defects will be created on the bold vertices of the restricted lattice.

Let us now consider how errors appear on the restricted lattice. Every qubit is adjacent to exactly two highlighted vertices; one green and one blue. Therefore, a single-qubit error will produce two defects separated by a single edge. More generally, an error that creates a pair of defects separated by a distance w along the edges of the restricted lattice must have weight at least w. It is also worth remarking that there are two qubits projected onto each edge; for instance, both qubits q1 and q2 are projected onto the highlighted thick red edge. An error on either q1 or q2 will create a pair of defects that lie at the endpoints of the highlighted edge of the restricted lattice. The restricted lattice we have obtained has been likened [67] to the surface code where qubits lie on the edges of the lattice [5]. The surface code on the hexagonal lattice has been considered explicitly in Refs. [48,67,68].
We can estimate a least-weight correction with respect to a symmetry using minimum-weight perfect-matching. We produce a graph where we assign each defect on the restricted lattice a vertex. We then produce a complete graph where edges are assigned a weight that is proportional to the separation of the defects along the shortest path over the restricted lattice. The output of the minimum-weight perfect-matching algorithm indicates a set of low-weight error strings that are likely to have created the error syndrome with respect to the symmetry. Counting the number of edges that pair defects over the support of the logical operator gives us an estimate of the parity of errors supported on the logical operator of interest. One can prove that the probability of estimating the support of the error on the logical operator incorrectly will decay exponentially quickly with d for a sufficiently low error rate using similar arguments to those presented in [6]. In Appendix B we justify why the solution to the minimum-weight perfect matching algorithm will propose a likely error, and we give details on how we can evaluate the separation between two points on the hexagonal restricted lattice in Appendix C.
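The edge weights of the input graph can be computed with a shortest-path search over the restricted lattice. The sketch below uses breadth-first search on a toy adjacency dict (a path graph rather than the hexagonal restricted lattice treated in Appendix C) to weight the edges between defects.

```python
from collections import deque

# Sketch: edge weights of the complete matching graph are shortest-path
# distances between defects over the restricted lattice, modelled here as a
# toy adjacency dict.
def bfs_distances(adj, start):
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # a path graph 0-1-2-3
defects = [0, 3]
weights = {(a, b): bfs_distances(adj, a)[b]
           for a in defects for b in defects if a < b}
print(weights)  # {(0, 3): 3}
```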

C. Prior work
We conclude this preliminary section by discussing earlier work that has used variations of the decoding strategy above [24,25,36,43,44,[50][51][52][53]. In [51] it was shown that the edges obtained from matching on the three restricted lattices we have defined above can be used to find the border of a correction that is consistent with the error syndrome. This decoder produced a threshold of ∼ 8.7% on the hexagonal lattice with periodic boundary conditions, which is consistent with the earlier work in Ref. [50] where a decoder was proposed by consideration of the fusion rules of the anyon model of the color code [13]. This number is also aligned with the work of [68], where a threshold of ∼ 15.9% is obtained by matching Pauli-Z errors on the surface code on a hexagonal lattice. Indeed, equivalent values are obtained if we equate this threshold with 2p(1 − p) = 0.159, given that, in the color code picture, there are two distinct qubits that can cause an error that will create a pair of defects.
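The claimed equivalence between the two thresholds can be checked directly by inverting 2p(1 − p) = 0.159 for the smaller root p:

```python
import math

# Solving 2p(1 - p) = 0.159 for the smaller root recovers the ~8.7%
# color-code threshold from the ~15.9% surface-code value.
p = (1 - math.sqrt(1 - 2 * 0.159)) / 2
print(round(p, 3))  # 0.087
```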
The matching decoder on the restricted lattice was simplified in [53] where it was shown that a local correction can be found using the result of the matching on two of the three restricted lattices. Ref. [53] obtained a threshold of ∼ 10% by focusing on two specific restricted lattices on an alternative color code lattice. In a sense, the alternative perspective we have provided here offers another simplification, where we can decode individual logical operators separately by concentrating on the matching found from a single restricted lattice. Generalisations of this decoder have been obtained for the color code undergoing circuit noise that occurs as stabilizer readout is performed [24,25,44,52] by extending the error syndrome in the temporal direction [6,35]. These examples also generalise the restricted lattice by considering the case where the color code has boundaries. See also [44] where the problem of decoding the color code undergoing bit-flip noise is likened to decoding errors on a Majorana code.
Let us also comment on thresholds obtained with maximum-likelihood decoding. Using a statistical-mechanical mapping that determines the performance of a maximum-likelihood decoder, a threshold for bit-flip noise has been obtained as ∼ 10.9% [69,70]. Thresholds of ∼ 18.9% have been obtained for the color code undergoing depolarising noise [40]. These results have been reproduced with a tensor-network decoder [45,71] that approximates maximum-likelihood decoding. Fault-tolerant thresholds of ∼ 4.8% have also been obtained for a phenomenological noise model using statistical-mechanical modelling [72][73][74]. These remarkably high numbers motivate the development of efficient fault-tolerant decoders for the color code. We summarise threshold results for different color code lattices undergoing a bit-flip noise model in Table I.

III. DECODING ON THE MÖBIUS STRIP
Let us now describe our decoder. We find that we can decode the color code using a single minimum-weight perfect-matching subroutine on a lattice that is embedded on a Möbius strip. In what follows we will examine the symmetries of the color code to show how we arrive at the decoder we present. We will go on to explain how the decoder overcomes the challenges of decoding the color code on a triangular lattice. We also explain how our decoder deals with the issue of degeneracy that arises when looking at the syndrome on a restricted lattice.

A. Symmetries of the color code with boundaries
Here we look more closely at the color code symmetries at the boundaries of the lattice to show the construction of the single Möbius symmetry. In Subsec. II A we found the restricted lattices used for a minimum-weight perfect-matching decoder by identifying that the product of all the faces of two of the three colors of the lattice gives rise to a symmetry. However, this is not true on the lattice with boundaries. We define a boundary operator b_u ∈ S such that

b_u = ∏_{col(f)≠u} S_f,   (5)

where we take the product of all the faces that are not colored u. As an example we show b_g to the right-hand side of Fig. 4. We note also that

b_u = Z_v Z_w,   (6)

where u, v and w are all distinct, and where we defined these instantiations of the logical operators in Eqn. (2).

FIG. 4. The Möbius symmetry. An image of the error syndrome is shown three times on three different restricted lattices. The restricted lattices are connected at their boundaries to reconstruct a single unified lattice that is embedded on a Möbius strip. We number unified qubits to the left and right of the lattice. Note that the qubits are oppositely aligned on each side of the lattice, thus giving the restricted lattice Möbius topology. We show the image of the error syndrome shown to the right of the figure on the Möbius symmetry. We also show the operator b_g, which is the product of all the stabilizers on the red and blue faces.
Before explaining the construction of the Möbius strip, it will be helpful to first show how we can recover the standard restricted lattices of the color code with boundaries. Indeed, the inclusion of the operator b_u in the symmetry Σ_u, together with the stabilizers S_f on faces with color col(f) ≠ u, gives us a symmetry that enables us to decode with matching. Specifically, we have that ∏_{s∈Σ_u} s = 𝟙 if we take

Σ_u = { b_u } ∪ { S_f | col(f) ≠ u }   (7)

as symmetries for the color code with boundaries. In practice, the addition of this operator means that we can match defects on the faces of Σ_u onto one of the two boundaries with color not equal to u. This strategy is commonly adopted elsewhere in the literature, see for instance [24,25,52,76]. Explicitly considering the boundary operators reveals additional structure between these restricted lattices. We find that

b_r b_g b_b = 𝟙,   (8)

which follows because each stabilizer S_f appears in exactly two of the three boundary operators.
Let us look at these correlations from a physical perspective. This will motivate the method of decoding we propose. As we have already discussed, the symmetries of the color code correspond to a Z_2 × Z_2 conservation law among the defects of the color code in the bulk. To see this, one can check that there is no single-qubit error acting on the bulk of the lattice that will violate the relation

|#r|_2 = |#g|_2 = |#b|_2,   (9)

where |#u|_2 denotes the number of defects of color u modulo 2. Equivalently, we can write the conservation law in terms of stabilizer operators such that

b_r |φ⟩ = b_g |φ⟩ = b_b |φ⟩   (10)

for states |φ⟩ = E_bulk |ψ⟩, where Pauli errors E_bulk act on qubits in the bulk of the lattice on code states |ψ⟩ of the color code. This is clear because the boundary operators b_u are not supported in the bulk of the lattice. Errors on the boundary of the lattice effectively violate the global defect conservation law shown in Eqn. (9). The boundary operators record these violations. Let us take, for example, the error shown in Fig. 4. A single error on the green boundary means that |#r|_2 = |#b|_2, but |#r|_2 ≠ |#g|_2 and |#b|_2 ≠ |#g|_2.
Likewise, and correspondingly, we have that b_r |ξ⟩ = b_b |ξ⟩, but that b_r |ξ⟩ ≠ b_g |ξ⟩ and b_b |ξ⟩ ≠ b_g |ξ⟩, where now |ξ⟩ = E_bdry. |ψ⟩ for the boundary error E_bdry. shown in Fig. 4.
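The behaviour of these conservation laws under bulk and boundary errors can be summarised in a small sketch, where defects are represented simply by their colors: a bulk bit flip creates one defect of each color and preserves all three parities, while an error on the green boundary (as in Fig. 4) creates only a red and a blue defect.

```python
# Toy sketch of the conservation law |#r|_2 = |#g|_2 = |#b|_2 and how
# boundary errors violate it.
def parities(defect_colors):
    return {c: sum(1 for d in defect_colors if d == c) % 2 for c in "rgb"}

bulk = parities(["r", "g", "b"])   # a single bulk error: one defect per color
boundary = parities(["r", "b"])    # a single green-boundary error

assert bulk["r"] == bulk["g"] == bulk["b"]
assert boundary["r"] == boundary["b"] != boundary["g"]
print("only the boundary error breaks the three-way parity agreement")
```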
We have thus seen that the defect parity conservation laws on the gb- and rg-restricted lattices are violated if and only if a green defect is created at the green boundary or at the green corner. Similarly, a red (blue) defect created at the red (blue) boundary or corner qubit will simultaneously violate the defect parity conservation law on the rb- and rg- (gb- and rb-) restricted lattices.
It is important to find a correction that respects global defect conservation at the boundaries of the color code. However, a problem we find by considering the defect configuration on each restricted lattice independently is that we can obtain corrections that do not respect Eqn. (8).
We find that we can obtain a correction that respects the boundary operators by combining the restricted lattices to produce a single unified lattice. We show the construction of the new lattice in Fig. 4. We show all three restricted lattices combined along their boundaries. We call this join between two boundaries of the restricted lattice a crease, and we give each crease a color, u = r, g, b, according to the color of the logical operator Z_u its qubits support; see Eqn. (2). For instance, we see that the central rg-restricted lattice is combined with the gb-restricted lattice at the right of the figure along the green crease. Note also that the gb-restricted lattice is a reflection of the rg-restricted lattice over the green crease. The qubits at the green corner of these two lattices are unified.
In the same way, we unify the blue boundary and the blue corner of the rb- and gb-restricted lattices, and we unify the red boundary and the red corner of the rg- and rb-restricted lattices to obtain the single restricted lattice shown in the figure. The resulting lattice gives rise to a symmetry that respects defect conservation among its boundary terms. We call this lattice the unified lattice, as it combines all three restricted lattices. For the triangular lattice we have introduced, we find that our unified lattice is supported on a Möbius strip. This unified lattice is an interesting example where its corresponding symmetry includes all of the stabilizer generators S_f twice. A similar idea was used in Ref. [58] to find a decoder for the tailored surface code undergoing biased errors.

FIG. 5. Single-qubit errors and their image on the Möbius strip: one in the bulk that gives rise to three defects, one at the boundary that creates two defects, and an error at the corner where only one defect is produced. They are circled with solid, dashed and dotted lines, respectively. Arrows point to the image of each error and its syndrome on the Möbius strip. We observe that the single-qubit error in the bulk produces three edges on the Möbius strip. A single-qubit error on the boundary produces two edges; one in the bulk of the lattice, and one that crosses the crease that represents the logical operator supported on the green boundary. An error at the corner of the color code produces a single edge on the Möbius strip.
B. Assigning weights to the edges of the matching graph on the unified lattice

Here we explain how to decode the color code using the new symmetry. We look at how errors, and their syndromes, map onto the unified lattice. We will look at different single-qubit errors to explain how we assign weights to the edges of the input graph, and we will explain how we determine the support of the error on a logical operator using the intuition we presented in SubSecs. II A and II B.
Errors map differently onto the unified lattice depending on whether the error occurred in the bulk, on the boundary, or at a corner of the lattice; see Fig. 5. We propose that a good strategy is to assign weights to unit edges, i.e., edges created by single-qubit errors, such that the sum of the weights of all the edges associated to each single-qubit error is the same. Let us begin by looking at a single-qubit error in the bulk of the lattice. As shown by the solid circle in Fig. 5, this error produces three separate edges on the unified lattice. Without loss of generality, then, let us assign a weight of 1 to a single edge in the bulk of the unified lattice. The sum of the weights of all the edges that identify this single-qubit error is then 3.
We next consider an error at the boundary of the lattice, such as that circled by the dashed line in Fig. 5. We see that this error produces two edges on the unified lattice. These two edges are distinct in the following sense: the edge to the left of the Möbius strip can also be created by an error in the bulk. On the other hand, the edge to the right that crosses the crease is not consistent with any bulk error. We have already assigned a weight of 1 to all edges created by a single-qubit error in the bulk. We therefore give edges that cross the crease a weight of 2. This choice is such that the sum of the weights of the edges associated to the error at the boundary is equal to the sum of the weights of the edges associated to the error in the bulk: 3.
We finally observe that the error at the corner creates a single edge; see the error circled by the dotted line in Fig. 5. We must then assign a weight of 3 to this edge that passes through the corner to ensure consistency with the other single-qubit errors. As errors compound to make longer strings, we assign to the edge that pairs two well-separated defects a weight equal to the sum of the weights of the unit edges along the shortest path connecting the two defects, where the unit edges along this path take weights according to the assignment we have proposed above.
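The weight assignment above can be sketched with a shortest-path computation. In this minimal sketch the graph, vertex names and edge labels are hypothetical stand-ins for the unified-lattice matching graph; only the unit weights 1 (bulk), 2 (crease) and 3 (corner) come from the text.

```python
# Sketch: weight of a matching-graph edge between two well-separated defects,
# computed as the least-weight path over unit edges. Unit-edge weights follow
# the assignment in the text: bulk = 1, crease-crossing = 2, corner = 3.
import heapq

UNIT_WEIGHT = {"bulk": 1, "crease": 2, "corner": 3}

def edge_weight(adjacency, source, target):
    """Dijkstra over labeled unit edges; returns the least path weight."""
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        d, v = heapq.heappop(queue)
        if v == target:
            return d
        if d > dist.get(v, float("inf")):
            continue
        for u, label in adjacency[v]:
            nd = d + UNIT_WEIGHT[label]
            if nd < dist.get(u, float("inf")):
                dist[u] = nd
                heapq.heappush(queue, (nd, u))
    return float("inf")

# Toy path: two bulk steps, one crease crossing, one corner edge.
adjacency = {
    "a": [("b", "bulk")],
    "b": [("a", "bulk"), ("c", "bulk")],
    "c": [("b", "bulk"), ("d", "crease")],
    "d": [("c", "crease"), ("e", "corner")],
    "e": [("d", "corner")],
}
print(edge_weight(adjacency, "a", "e"))  # 1 + 1 + 2 + 3 = 7
```

In practice this shortest-path weight would be precomputed for every pair of defects before being handed to the minimum-weight perfect-matching subroutine.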
We use the resulting matching to find the commutator with some specified logical operator. Following the arguments we have given in the previous section, the parity of the number of edges that cross the dashed line on the green crease is consistent with the commutator of the error with the logical operator supported on the green boundary, Z_g. One can easily check that all the single-qubit errors that lie on the green boundary, including the red and blue corners, each create a single edge on the Möbius strip that crosses this line, whereas no other single-qubit errors produce any edges that cross this line. We therefore count all the edges that cross this line and thereafter propose a correction operator that is consistent with the commutator learned from the matching.
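The parity count described here is a one-line computation once each matched edge records how often its path crosses the green crease. The edge representation below is a hypothetical sketch, not the authors' data structure.

```python
# Sketch: infer the commutator with the logical Z_g from a matching as the
# parity of crease crossings accumulated along all matched edges.
def commutator_with_Zg(matched_edges):
    """Return 0 if the inferred error commutes with Z_g, 1 otherwise."""
    crossings = sum(edge["green_crease_crossings"] for edge in matched_edges)
    return crossings % 2

matching = [
    {"green_crease_crossings": 1},  # e.g. an edge from a green-boundary error
    {"green_crease_crossings": 0},  # a bulk edge
]
print(commutator_with_Zg(matching))  # 1: the inferred error anticommutes
```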

C. Correcting high-weight errors
In this SubSection and that which follows we motivate our choice to decode using the symmetry on the unified lattice. To do so, we compare our decoder to a naïve implementation of a decoder that finds a correction using a matching on two restricted lattices, to identify the challenges that arise when designing a decoder for a color code with boundaries. Without loss of generality we assume the decoder finds a correction using the rb- and rg-restricted lattices, such that the decoder does not identify correlations between green and blue defects.

FIG. 6. A difficult error for a decoder that does not account for correlations between green and blue defects. For a sufficiently large code, with d ≥ 11, one can find errors of weight w where, if we neglect the position of the blue defect, the green defect is paired to the green boundary with a correction of weight w_g < w. Likewise, a decoder that matches on the rb lattice will suggest a correction where the blue defect is paired to the blue boundary with an operator of weight w_b < w. These choices will lead the decoder to fail.
We consider the error shown in Fig. 6. It shows an error E of weight w = O(d/3), where w ≤ (d − 1)/2 for lattices with d ≥ 11, that we might hope to be able to correct. The error extends from the red boundary such that a blue and a green defect are created at the end of the string E, near to the centre of the triangle. We also consider operators C_g and C_b that pair the green and blue defects to the green and blue boundaries, respectively, and we have that E C_g C_b is a logical operator. The weights of the operators E, C_g and C_b are w, w_g and w_b, respectively.
The decoder will match on the rb- and rg-restricted lattices. In the case of the rg-restricted lattice, the decoder must decide to pair the green defect onto either the red boundary or the green boundary. By consideration of the geometry of a triangle, one can find errors of weight w = d/3 + const., with const. > 0 a small integer, such that the matching subroutine will pair the green defect onto the green boundary provided w > w_g. Likewise, if w > w_b, the decoder will incorrectly pair the blue defect onto the blue boundary. We therefore see that low-weight errors with w = O(d/3) can lead to a logical failure if both w > w_g and w > w_b. Nevertheless, this is a bad choice of correction given that it can be that w < w_g + w_b. Ideally, we can find a decoder that considers all of the defects of the lattice in unison to account for this.
In Fig. 7 we consider the same error on the unified lattice. The image of the error effectively doubles its weight to ∼ 2w. However, to pair the defects incorrectly, the decoder must produce edges of weight 2w_g to connect the green defects that appear on both the rg- and gb-restricted lattices, together with an edge of weight 2w_b to pair the blue defects on the rb- and gb-restricted lattices. As such, the decoder will only fail if 2w + 1 ≥ 2(w_g + w_b), where we have added a unit on the left-hand side of the inequality to account for the single edge that pairs the green and blue defect together on the gb-restricted lattice. As such, we see that matching on the unified lattice enables us to correct high-weight strings that extend from some boundary. Moreover, the unified lattice is clearly invariant under color exchange, as it accounts for all three restricted lattices equally; as such, there is no color dependence on the choice of correction.

FIG. 7. The image of the error shown in Fig. 6 on the unified lattice. The error, shown in grey, has weight ∼ 2w + 1. However, the distance between the two green defects is 2w_g on the Möbius strip. Likewise, the distance between the two blue defects is 2w_b. We therefore see that the matching decoder will only find the incorrect outcome if 2w + 1 ≥ 2(w_g + w_b). Given that w + w_g + w_b = d, we find that the decoder can tolerate errors of the type shown in Fig. 6, i.e., long strings that extend from the boundary, provided w ≲ d/2.
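The tolerance w ≲ d/2 follows from a short rearrangement of the failure condition, using the relation w + w_g + w_b = d for the logical operator E C_g C_b:

```latex
% Failure condition for matching on the unified lattice (Fig. 7):
2w + 1 \geq 2(w_g + w_b), \qquad w + w_g + w_b = d .
% Substituting w_g + w_b = d - w gives
2w + 1 \geq 2(d - w)
\;\Longrightarrow\;
w \geq \frac{2d - 1}{4} \sim \frac{d}{2} .
```

That is, the unified-lattice decoder only fails on such string errors once their weight approaches half the code distance, rather than d/3 as for the naïve two-lattice decoder.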

D. Accounting for the degeneracy of errors
Let us now consider issues that arise due to the degeneracy of errors when we consider syndromes on the restricted lattices. As we have already alluded to, a problem that can arise when we make a decoder based on matching on a restricted lattice is that some syndrome information is disregarded. Indeed, we can find many different errors that produce different syndromes on the color code, yet give rise to the same syndrome on the restricted lattice. Let us consider the error shown in Fig. 8. Again considering the naïve decoder, where we find a correction by pairing only on the rg- and gb-restricted lattices, the decoder is likely to pair the green defect to the top corner instead of the green boundary. However, this correction has a weight of 5, whereas the error itself has a weight of 4. As such, we might expect that a better strategy would account for this degeneracy in the syndrome.
In Fig. 8 we show the image of the error on the unified lattice. Again, as the decoder considers all the syndrome information equally, in this example we find that the decoder will find the correct solution. In the figure we compare the weights of the edges of both the correct and incorrect matchings. We find that the correct matching, shown by solid lines, has a weight of 8, whereas the incorrect matching has a weight of 11, where we recall the assignment of weights to edges that pass over boundaries and corners as explained in SubSec. III B.

FIG. 8. The left of the figure shows the error on the color code lattice, together with the correction a naïve decoder will choose. Specifically, matching on the rb-restricted lattice will pair the two blue defects correctly. However, matching on the rg-restricted lattice will incorrectly pair the green defect along the shortest path to the green corner of the lattice. On the right we show the image of the same error on the unified lattice. Given that the decoder accounts for information from all three restricted lattices, we find that the decoder will find the right correction by pairing defects along the solid lines. The incorrect matching, shown by the dashed lines, has a higher weight. As such, in this example, a decoder that matches on the unified lattice will be successful.

E. Matching with branching errors and finding a low-weight correction
Unlike for the surface code [6], we find that the sum of the lengths of all the edges returned from minimum-weight perfect matching is not necessarily proportional to the weight of the least-weight correction. In fact, we find that the sum of the lengths of the edges that indicate a branching error has occurred is a rather complicated function of the error. Without intervention, a minimum-weight perfect-matching decoder will be biased towards a locally minimal solution whose weight is greater than that of the least-weight correction. Let us look at some errors to illustrate this problem before proposing a solution; see Fig. 9.
A typical string-like error of weight w will create two defects, one at each of its endpoints, each of which appears twice on the unified lattice. As such, the total length of the edges from the matching associated to this error will be ∼ 2w; see Fig. 9(a). In contrast, the sum total of the lengths of the edges that identify a branching error may be larger. For instance, Fig. 9(b) shows a single error that is identified with three edges. We can find branching points of three qubits where the sum total of the weights of the edges that match the defects of the branch is nine; see Fig. 9(c). As such, we observe that the sum of the weights of the edges may be ∼ 3w around a branching error.

FIG. 9. Matchings found for the syndromes of small errors using the matching algorithm. (a) Typical string-like errors with weight w are paired by two edges from the matching. The sum of the lengths of the edges that pair these two red defects is ℓ ∼ 2w. (b) and (c) show small errors with weight w = 1 and w = 3, respectively, that are identified by the matching algorithm with three edges in the matching that have total length ℓ = 3w. (d) and (e) show branching errors of weight w = 9 whose matchings have lengths ℓ = 2w + 3 and ℓ = 3(w + 1)/2, respectively.
Let us now look at how the sums of the weights of the edges that identify an error can misalign with the weight of the error. We will consider an error E_B with weight w_B that includes a branch of three qubits, each contributing three edges. The error lies on the support of a least-weight logical operator, such that there is an alternate correction E_A of weight w_A = d − w_B, matched with edges of total length ℓ_A ∼ 2w_A, where E_A E_B is a logical operator of weight d. We first consider an error with a branch such that the sum total of the lengths of the edges that identify the branch is ℓ_B ∼ 2w_B + 3; see Fig. 9(d). The minimum-weight perfect-matching algorithm will identify E_A as the error if ℓ_A < ℓ_B. With the relations proposed above, we find that this holds if w_B > (2d − 3)/4 ∼ d/2. We therefore see that there are branching errors with weight w_B = (d − 1)/2 that can lead the decoder to choose the alternate correction. However, these errors should be correctable.

We also find that there are branching points that are matched with edges of very low weight. For instance, we find that there are branching errors that are identified by the minimum-weight perfect-matching algorithm with edges of length ℓ_B ∼ 3w_B/2. We show one such error in Fig. 9(e), with ℓ_B ∼ 3w_B/2 + 3/2. We find that this can compromise the performance of the decoder significantly. Let us find the largest error E_A such that ℓ_A < ℓ_B where, again, the weight of E_A is w_A, E_A is matched with edges of length ℓ_A ∼ 2w_A, and E_A E_B is a logical operator of weight d. We find that

w_A < 3d/7.    (13)

Once again, we find that there are errors with weight w_A ∼ 3d/7, where clearly w_A ≤ (d − 1)/2 for large d, that are misidentified due to the low weight of the edges that match branches similar to those shown in Fig. 9(e).
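The bound in Eqn. (13) can be recovered in two lines, writing ℓ_A ∼ 2 w_A for the string-like alternate correction and ℓ_B ∼ 3 w_B / 2 for the branch of Fig. 9(e), with w_A + w_B = d:

```latex
\ell_A < \ell_B
\;\Longrightarrow\;
2 w_A < \tfrac{3}{2}\, w_B = \tfrac{3}{2}\,(d - w_A)
\;\Longrightarrow\;
4 w_A < 3d - 3 w_A
\;\Longrightarrow\;
w_A < \tfrac{3d}{7} .
```

The same substitution applied to ℓ_B ∼ 2 w_B + 3 gives the weaker condition w_B > (2d − 3)/4 ∼ d/2 quoted for the branch of Fig. 9(d).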
Having identified that branching errors have a weight in the range ℓ/3 ≤ w ≤ 2ℓ/3, with ℓ the sum of the lengths of the edges that identify the branch, we need to perform additional analysis to evaluate the weight of errors that include a branch. The general problem of finding a least-weight correction will require a least-weight hypergraph matching algorithm.
In lieu of hypergraph matching, we propose another solution to find the weight of large branches that may contribute to a bad choice of correction, where we use minimum-weight perfect matching multiple times to find alternative corrections. Our method is based on a similar idea proposed in Ref. [38], used to find an alternative correction for the surface code with boundaries. We explain the details of this in Appendix D. This results in two inequivalent low-weight corrections; see Fig. 10. We can thus use the alternative correction to correct instances where a single use of the minimum-weight matching algorithm is biased towards a higher-weight solution. We discuss how we compare alternative corrections in SubSec. IV C.

IV. NUMERICAL RESULTS
In this Section we simulate the error-correction procedures we have proposed to evaluate their performance. We use an independent and identically distributed noise model to test the low-error-rate behaviour of the code, as well as its threshold. We begin by using a basic implementation of our decoder, which we shall henceforth call the Möbius decoder. This decoder uses the correction that is obtained from a single matching subroutine on the unified lattice. At low error rates, we show that the decoder is able to correct errors with a logical failure rate ∼ p^{αd} with α ≈ 3/7, as predicted in Eqn. (13), where p is the probability of a bit-flip error occurring on any given qubit. We also evaluate the error-tolerance threshold to be 9.0% using the Möbius decoder.
The value of α we obtain indicates that, as expected, there are errors of weight ≤ (d − 1)/2 that lead the decoder to fail. As such, we improve the Möbius decoder by introducing a variant that we term the comparative decoder. This variant uses an additional subroutine to find an alternative correction that is compared with the result obtained by the Möbius decoder. We verify the comparative decoder's ability to correct errors of weight up to (d − 1)/2 for code distances d ≤ 13 by conducting an exhaustive search through all possible weight w ≤ (d − 1)/2 error configurations.

FIG. 10. Obtaining two low-weight corrections that are logically inequivalent. We perform minimum-weight perfect matching multiple times: the first time with a single use of the minimum-weight perfect-matching algorithm that we discussed above; we show this matching in blue. In the second use of the subroutine we change the topology of the manifold to force the algorithm to find an alternative low-weight correction, shown in red.
A. Logical failure probability at low error rates

At low physical error rates, we expect the logical failure rate to decay rapidly with code distance. We show our data in Fig. 11. The major contribution to P_fail will be errors of minimal weight t ≤ (d − 1)/2 that lead to a logical failure. We compare the decoder performance at low p to the ansatz [78]

P_fail ≈ β N^d p^{αd + γ}.

By fitting to the data collected, we obtain β = 0.148, the entropy term N = 12.49, γ = 0.488, and α = 0.422. We explain how we obtain these values in Appendix E.
Remarkably, the value of α we observe is close to 3/7 (≈ 0.428). We predicted that the Möbius decoder may fail for errors of weight t = 3d/7 from our analysis of branching errors in Fig. 9; specifically, see the argument that gives rise to Eqn. (13). We find that a color code decoder that can correct t = αd errors with α = 3/7 is competitive with a decoder for the square-lattice surface code that decodes up to its distance [26,28,29], for codes with an equal number of qubits. Let n = d_s.c.^2 be the number of qubits of the distance-d_s.c. surface code. Decoding the surface code up to its distance means we have t_s.c. = √n/2. We compare this with a color code with an equal number of qubits, n = 3(d − 1)(d + 1)/4 + 1 ≈ 3d^2/4. Rearranging, we have that d = 2√n/√3. Substituting this expression into our value of t = 3d/7, we find

t = 6√n/(7√3) ≈ 0.49√n,

almost reaching the optimal logical-failure-rate scaling of the surface code. Together with its capability of performing fault-tolerant logical operations with low overhead, we might expect the color code to require fewer resources for quantum-logic operations at low error rates and modest system sizes. Moreover, given that we have not reached the capacity of the color code to correct up to weight-(d − 1)/2 errors, improved decoders have the potential to outperform the logical-failure-rate scaling of the surface code. In SubSec. IV C we show that an improved version of our decoder can correct all errors of weight (d − 1)/2 for d ≤ 13. We first evaluate the threshold of the basic implementation of our decoder.
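The comparison above is easy to check numerically. This short sketch assumes only the qubit counts quoted in the text, n = d_s.c.^2 for the surface code and n ≈ 3d^2/4 for the color code; the qubit budget n is an arbitrary illustrative value.

```python
# Numerical check: weight of correctable errors at equal qubit number, for a
# Mobius-decoded color code (t = 3d/7) versus a surface code decoded up to
# its distance (t = sqrt(n)/2).
import math

n = 10_000                    # an arbitrary qubit budget (hypothetical)
t_sc = math.sqrt(n) / 2       # surface code, t = sqrt(n)/2
d_cc = 2 * math.sqrt(n / 3)   # color code distance, from n ~ 3 d^2 / 4
t_cc = 3 * d_cc / 7           # Mobius-decoder correctable weight, t = 3d/7

print(t_cc / t_sc)  # 12 / (7 * sqrt(3)) ~ 0.99
```

The ratio 12/(7√3) ≈ 0.99 is independent of n, which is why the two codes exhibit near-identical failure-rate scaling at equal qubit counts.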

B. Thresholds
The threshold is the critical physical error rate p_c below which the logical failure rate P_fail can be suppressed by increasing the system size. The threshold is indicated by the intersection of the curves recorded for different system sizes, as seen in Fig. 12. We calculate that p_c = 9.0% using Monte Carlo simulations, with 50 000 samples collected for each data point. We fit the data close to the crossing to a Taylor expansion truncated at the quadratic term, f = Ax^2 + Bx + C, where the function f is expressed in terms of the rescaled error rate x = (p − p_c) d^{1/v_0}. This method is explained in greater detail in Ref. [33]. We obtain the value of the critical exponent v_0 = 1.422 and the constants A = 1.215, B = 0.783 and C = 0.122.

FIG. 12. Logical failure rates near threshold for different system sizes. Each dashed line indicates this fitting for a given system size.
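The fitting procedure just described can be sketched as follows. The data here are synthetic points placed exactly on the quadratic, standing in for Monte Carlo samples; the values of p_c, v_0, A, B and C are those quoted in the text, and the (p, d) grid is illustrative only.

```python
# Sketch: finite-size-scaling fit f = A x^2 + B x + C in the rescaled
# variable x = (p - p_c) * d^(1/v0), via least squares (normal equations).
p_c, v0 = 0.090, 1.422
A, B, C = 1.215, 0.783, 0.122

def rescale(p, d):
    return (p - p_c) * d ** (1 / v0)

# Synthetic failure rates lying exactly on the quadratic ansatz.
data = [(p, d) for d in (9, 11, 13) for p in (0.085, 0.090, 0.095)]
xs = [rescale(p, d) for p, d in data]
fs = [A * x**2 + B * x + C for x in xs]

# Normal equations for a degree-2 polynomial fit, solved by Cramer's rule.
S = [sum(x**k for x in xs) for k in range(5)]                 # power sums
T = [sum(f * x**k for f, x in zip(fs, xs)) for k in range(3)]
M = [[S[4], S[3], S[2]], [S[3], S[2], S[1]], [S[2], S[1], S[0]]]
rhs = [T[2], T[1], T[0]]

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

D = det3(M)
fit = []
for col in range(3):
    Mc = [row[:] for row in M]
    for row, t in zip(Mc, rhs):
        row[col] = t
    fit.append(det3(Mc) / D)

print(fit)  # recovers [A, B, C] on this noiseless synthetic data
```

With real Monte Carlo data the fit would of course carry statistical error bars, and p_c and v_0 would themselves be fit parameters rather than fixed inputs.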
The threshold we find demonstrates a modest improvement over the value ∼ 8.7% obtained with other matching decoders for the hexagonal-lattice color code [51]. However, we note that our decoder is unable to match the performance of a maximum-likelihood decoder [45,69,70] or even a neural-network decoder [75]. It may be interesting to determine the types of errors that lead the decoder to fail for error rates 9% ≤ p ≤ 10.9%. This information may give us new insights into ways we can improve the Möbius decoder. In what follows we develop this decoder by finding and comparing two alternative corrections to find a better result at very low error rates, i.e., the comparative decoder. Surprisingly, we found that the threshold we obtained, ∼ 9.0%, was insensitive to this improvement.

FIG. 13. We observe an error of weight w = 4, shown by the qubits in white, on the distance d = 9 lattice. The minimum-weight matching decoder will choose the (left) set of edges such that ℓ_or. = 11. However, we note that the error configuration in dark gray traced out by these edges is logically inequivalent to the least-weight solution and is of weight w = 5. The alternate correction (right), which forces an inequivalent path around the Möbius strip, finds a set of edges with ℓ_alt. = 12 that correctly traces out the true least-weight error. This phenomenon is an instance of the branches we have discussed in Fig. 9.

C. Exhaustive simulations
We exhaustively test the comparative decoder over system sizes d ≤ 13 for errors of weight w ≤ (d − 1)/2. With the change introduced to our original Möbius decoder as described below, no logical failures were detected for any of the system sizes evaluated in this search. At d = 13 we test ∼ 5 × 10^9 error configurations.
In SubSec. III E we motivated the need to compare different low-weight corrections to make a better decision to recover the code state, and in Appendix D we gave details on how to obtain an alternative correction. In what follows we describe how we compared the original and alternative corrections to obtain this result. We also discuss other methods of comparing different choices of correction operator.
To determine whether we should change our correction to the alternative correction, we simply look at the total lengths of the edges returned by the matchings carried out to find the two different corrections. We refer the reader back to SubSec. III B to see how we evaluate the lengths of the edges between defects for a matching. We denote the total lengths of the edges returned by the original and alternative matchings as ℓ_or. and ℓ_alt., respectively. In this decoder, the logically inequivalent solution replaces the original decoder solution if and only if the following two conditions are met:

ℓ_alt. − ℓ_or. ≤ Υ,    (16)

and

[ℓ_or. − (d − 1)/2] mod 2 = 1,    (17)

where, for all the cases we consider in this exhaustive test, we take Υ = 1.
We motivate the conditions given in Eqns. (16) and (17) using the examples we have discussed in SubSec. III E as follows. Eqn. (16) requires that the matchings for the original and alternate corrections, with lengths ℓ_or. and ℓ_alt., differ in length by at most Υ = 1. Let us look at a case of concern given by the example in Fig. 9(d). In the example, if a branching error of this type has weight w = (d − 1)/2, we argued that the decoder would identify an incorrect matching, of length ℓ_or. ∼ 2(d − w) = d + 1, where the decoder matches the defects to the boundaries. On the other hand, we find ℓ_alt. ∼ 2w + 3 = 2 × [(d − 1)/2] + 3 = d + 2. As we see, the difference between ℓ_or. and ℓ_alt. is 1, but the alternative solution is the correct one.
For the system sizes of interest in the exhaustive search, we find that in all the cases where the alternative correction provides the correct solution, ℓ_or. is always one unit smaller than ℓ_alt.. Let us also justify Eqn. (17). For d ≤ 13, while the condition given in Eqn. (16) is necessarily satisfied when we consider choosing the alternative correction, this criterion is not sufficient to make any such change. A more careful examination shows us that, in the cases where the original matching gives the wrong correction for a weight w = (d − 1)/2 error, we find an unphysical solution. We find that we can test the physicality of the error: we state that ℓ mod 2 = w mod 2 [1]. For close cases then, in the sense of the condition given in Eqn. (16), we can test whether the matching corresponds to an error of weight w = (d − 1)/2. Substituting w = (d − 1)/2 into the unphysical solution, (ℓ − w) mod 2 = 1, gives the condition of Eqn. (17) with a simple rearrangement of the expression.
Let us examine two specific error configurations that are representative of the types of errors we encounter in the exhaustive search, and that are corrected using the conditions we have proposed. In Fig. 13 we show an error of weight w = 4 on a d = 9 lattice that is representative of the error shown in Fig. 9(d). The initial use of minimum-weight matching finds a set of edges with ℓ_or. = 11 that traces out a string-like error that is in fact logically inequivalent to the least-weight solution. In contrast, with the comparative decoder, we find an alternative matching that gives a solution with ℓ_alt. = 12. We check that these values of ℓ_or., ℓ_alt. and d satisfy both Eqns. (16) and (17), thus enabling us to identify the true least-weight error with a branching point. At d = 11, we also find errors where minimum-weight perfect matching misidentifies a correction, whereby a high-weight branching error demonstrates a matching with a disproportionately low value of ℓ_or. = 12; see Fig. 14.

[1] Sketch of proof: A single error introduces an odd parity of edges to the unified lattice on the boundary of the single error, where edges are weighted according to SubSec. III B. Therefore, introducing multiple errors of w bit flips means that the parity of ℓ is equal to the parity of w, where cancellations due to the intersection of the boundaries of adjacent errors change ℓ locally by an even amount, and thus do not affect the overall parity of ℓ. The matching may follow a path that does not track the boundary of the error, in which case the path is deformed from the boundary by a trivial cycle on the unified lattice. Given that all trivial cycles have an even length (where we consider that trivial cycles may cross creases and corners), changing the correction by a trivial deformation does not change the parity of ℓ, where, again, the difference due to the intersection of an edge with a trivial cycle will not change the overall parity of ℓ.
This error is representative of branching errors similar to those in Fig. 9(e), where a high-weight branch gives rise to a low-weight solution for the original matching subroutine. Once again, we have that ℓ_alt. = 13. Here, the original correction isolates a branch-like error and the alternate correction reflects a string-like error. Once again, we find that these values of ℓ_or., ℓ_alt. and d satisfy our conditions, indicating that we should choose the alternative correction to find a lower-weight solution.
Thus, in conjunction with the observations we have previously stated, under the condition of Eqn. (16), constraining the corrections to arise from errors such that w_or. + w_alt. = d and w_or. − w_alt. = ±1, Eqn. (17) is a simple odd-even rule that ensures that the switch to the alternate correction is only made if the original correction derives from the higher-weight error of the two, with w = (d − 1)/2 + 1.
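The decision rule can be sketched in a few lines. Note that the precise algebraic form of Eqns. (16) and (17) used here is our reconstruction from the surrounding discussion; the parity test encodes the "unphysical solution" criterion (ℓ − w) mod 2 = 1 evaluated at w = (d − 1)/2.

```python
# Sketch of the comparative decoder's decision rule: replace the original
# correction with the logically inequivalent alternative iff the two matching
# lengths are within Upsilon of one another, Eqn. (16), and the parity of
# l_or is inconsistent with a weight-(d-1)/2 error, Eqn. (17).
def choose_alternative(l_or, l_alt, d, upsilon=1):
    condition_16 = (l_alt - l_or) <= upsilon
    condition_17 = (l_or - (d - 1) // 2) % 2 == 1  # unphysical parity
    return condition_16 and condition_17

# The d = 9 example of Fig. 13: l_or = 11, l_alt = 12.
print(choose_alternative(11, 12, 9))  # True: take the alternative correction
```

The rule also reproduces the d = 11 example (ℓ_or. = 12, ℓ_alt. = 13): the difference is one unit and ℓ_or. has the wrong parity for a weight-5 error, so the alternative is chosen.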
It will be valuable to continue to test these conditions for larger system sizes and finite error rates. We do not expect that the conditions we have used here will continue to be successful for codes of larger distance. Indeed, due to Eqn. (13), and the discussion around it, we can expect that there will be a larger discrepancy between ℓ_or. and ℓ_alt. than the unit difference we check for in the condition given in Eqn. (16). Indeed, based on our discussion in SubSec. III E, we conjecture that a general condition will have Υ scaling like d/14 ∼ d/2 − 3d/7. However, we do not yet observe error configurations that give rise to even lower matching lengths, as we might expect for errors such as those in Fig. 9(e), justifying our choice of Υ = 1 for the system sizes we test. Furthermore, it will be interesting to see if we can generalise the condition in Eqn. (17) to errors of weight greater than w = (d − 1)/2, and to determine if, in fact, there are additional conditions that can help us determine whether we have found the correct solution following the initial matching subroutine. Discovering more general conditions will require a deeper understanding of the geometrical intricacies of the color code lattice.
To continue this analysis, we will need better methods for testing errors at low error rates. At d = 15 there are over half a trillion errors of weight (d − 1)/2; over one hundred times more samples than we have tested at d = 13. As such, this test is impractical to run, even with a high-performance computing cluster. We might consider other diagnostics to test the performance of different conditions at larger system sizes, for instance, the methods proposed in Ref. [26]. Perhaps we can even find analytic expressions that explain the performance that can be reached by the decoder.
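The configuration counts quoted here can be checked directly, assuming bit-flip error configurations are counted as subsets of the n qubits, with n = 3(d − 1)(d + 1)/4 + 1 as given earlier in this Section:

```python
# Check of the exhaustive-search sample counts: the number of weight-(d-1)/2
# bit-flip configurations on the triangular color code with
# n = 3(d - 1)(d + 1)/4 + 1 qubits.
import math

def num_qubits(d):
    return 3 * (d - 1) * (d + 1) // 4 + 1

count_d13 = math.comb(num_qubits(13), (13 - 1) // 2)  # weight-6 errors, n = 127
count_d15 = math.comb(num_qubits(15), (15 - 1) // 2)  # weight-7 errors, n = 169

print(f"{count_d13:.2e}")      # ~5e9 configurations at d = 13
print(f"{count_d15:.2e}")      # over half a trillion at d = 15
print(count_d15 / count_d13)   # over one hundred times more samples
```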

D. Estimating the weight of a least-weight correction
To generalise our result, it will be interesting to find better ways of estimating the weight of a correction for a given matching, in order to accurately compare the results of inequivalent matching subroutines. Let us remark on some other methods we have tested by exhaustive search to find a decoder that can correct all errors of weight (d − 1)/2. The tests we are about to describe were carried out for system sizes d ≤ 11.
Refs. [51] and [53] have both considered methods of estimating the weight of a correction given the output of matching subroutines on the different restricted lattices of the color code. Roughly speaking, Ref. [51] explains that a correction operator is supported on all of the qubits inside the boundary marked by the set of edges returned from the matchings on all three restricted lattices. Ref. [53] builds on this idea, and shows that we can find a correction whose weight is proportional to the lengths of the edges returned from two matching subroutines on two of the three different restricted lattices, up to some local corrections.
We found, with a basic implementation of minimum-weight perfect matching, that we were not able to correct all weight-(d − 1)/2 errors using either of the methods in Refs. [51,53] to estimate the weight of a correction for a given matching. A problem we encountered is that there are multiple solutions to the matching problem for a given pattern of defects. We give an example in Fig. 15, together with an explanation in the figure caption. Roughly speaking, while some solutions to minimum-weight perfect matching enable us to evaluate the correct weight for the least-weight correction, other degenerate solutions lead the decoder to overestimate the weight of the correction, which can cause a comparative decoder to fail.
FIG. 15. A weight (d − 1)/2 = 4 error on a d = 9 color code. A correction of weight five can introduce a logical error. As such, it is important to determine that the weight of the correction associated to this matching is w = 4. A basic implementation of a matching subroutine is equally likely to choose between the (left) and (right) outputs, as both have ℓ = 12. However, only the matching and choice of paths of (left) will predict a correction of weight four using the methods of Refs. [51,53]. For the matching given on the left-hand side, we see that the edges bound the low-weight error exactly, thereby showing that the correct weight will be obtained using the method of Ref. [51], for instance. Likewise, the set of edges (left) will give the correct weight using the method of Ref. [53]. Both of these methods will overestimate the weight of the correction given the matching shown on the opposite lattice (right).

We add that we considered a number of ways to improve the solution returned by the minimum-weight perfect-matching algorithm, to steer the matching subroutine toward a favourable configuration that finds the correct weight for the correction using the methods proposed in Refs. [51,53]. Specifically, we considered adding small adjustments to the edge weights of the input graph to bias the output toward a more appropriate set of edges, and we were also selective about the paths we chose for the edges of the returned matching. We also tried comparing different subsets of edges according to their colors to find lower-weight corrections with the method proposed in Ref. [53]. However, we were unable to find a strategy that would consistently correct all weight-(d − 1)/2 errors.
It is interesting that the diagnostic we have found for identifying the correct solution depends on global features of the two alternative solutions found by the different matching subroutines. In contrast, the methods in Refs. [51,53] are able to estimate the weight of a small error cluster locally. In future work it will be interesting to learn how to generalise the conditions we have found for determining the best correction among those obtained with the original and alternative matchings, to go beyond the special case we have considered of errors of weight (d − 1)/2 and codes of small distance. We may find that the global diagnostics we have proposed, which take three integer values, ℓ_or., ℓ_alt. and d, can be combined with local methods for evaluating the weight of an error.

V. DISCUSSION
In summary, we have shown how we can reduce logical failure rates for the color code with a matching decoder. We have demonstrated two innovations to correct high-weight errors: we introduced a unified lattice for matching, which manifests as a Möbius strip for the triangular code, and we developed a method to arrive at lower-weight corrections using additional matching subroutines that produce inequivalent solutions to the decoding problem. To demonstrate the performance of our decoder we have evaluated logical failure rates at low p, and we have also conducted exhaustive analyses. Notably, our results generalise readily to the fault-tolerant setting where measurements are unreliable. This will be important for developing practical decoders for use with real physical hardware.
Our analysis suggests that there are ways to optimise our decoding strategy further. It may be fruitful to explore avenues by which soft information [79] can be integrated into the correction procedure via message passing in order to improve its performance. Let us also remark that there are now a number of different approaches to decoding the color code, including approximate maximum-likelihood decoders [45]. It may be interesting to find new ways of comparing our decoder side by side with an optimal decoder to determine the classes of errors that are potentially correctable but where our decoder currently fails.
Ultimately, it may be valuable to generalise the matching decoder to more general classes of codes. The notion of stabilizer group symmetries offers us a natural route towards one such generalisation. A question that arises when discussing the use of symmetries to find matching decoders is how, in general, we should choose an overcomplete generating set of the stabilizer group to obtain useful symmetries for decoding. For topological codes we typically use physical insights to find the symmetries that give rise to high-performance matching decoders. Specifically, the stabilizer relations for topological codes correspond to conservation laws among the low-energy excitations of their underlying phase [5,35]. Nevertheless, the example we have presented has shown us that we should even reexamine the symmetries we use to implement decoders for topological codes. To this end, in Appendix F, we discuss a number of ways that the methods we have developed here generalise to other decoding problems associated with the color code and its variants. Our generalisations include a discussion of alternative symmetries that may be useful for decoding depolarising noise, and for fault-tolerant error correction, where we make use of the connection between the color code and the three-fermion model [22,80,81]. We also discuss how we can improve error correction with Majorana surface codes and higher-dimensional color codes by examining the symmetries of the boundaries of these codes. Finally, we discuss how these methods apply to single-shot error correction with the gauge color code. It is our hope that the ideas and tools we have developed here may inspire generalisations of the minimum-weight perfect-matching decoder to other codes in the future.

ACKNOWLEDGMENTS

We are particularly grateful to M. Newman for showing us the low-weight errors that inspired this work, to C. Jones for showing us some challenges associated with correcting branching errors, to P. Bonilla Ataides for conversations and for critically reading our manuscript, and to A. Doherty for pushing us to complete this project. BJB is also thankful for a great number of ideas and insights, as well as encouragement, over many conversations with David Poulin. KS is grateful for the hospitality of the School of Physics and the Quantum Theory Group at the University of Sydney. This work is supported by the Australian Research Council via the Centre of Excellence in Engineered Quantum Systems (EQUS) project number CE170100009. BJB also received support from the University of Sydney Fellowship Programme. The authors acknowledge the facilities of the Sydney Informatics Hub at the University of Sydney and, in particular, access to the high performance computing facility Artemis.

Appendix A: Interpreting edges from the matching subroutine
Here we explain why an error that anticommutes with the logical operator shown in Fig. 2 necessarily produces an odd parity of defects on either side of the logical operator of interest. To do so, let us divide the symmetry Σ_r = Σ_r^A ∪ Σ_r^B into two disjoint subsets, where Σ_r^A (Σ_r^B) is the subset of faces above (below) the horizontal red line in Fig. 2.
Let us examine the subset Σ_r^A in Fig. 16(top). In the picture, we bring the logical operator L supported on the red line to the foreground of the image. The figure also illustrates the relation

L ∼ ∏_{s∈Σ_r^A} s, (A1)

where we use a ∼ symbol to indicate that we are only interested in the support of the operator ∏_{s∈Σ_r^A} s on the restriction of the lattice shown in the figure panel.
To be more precise, we have implicitly assumed that we have stabilizers, ∏_{s∈Σ_r^A} s and ∏_{s∈Σ_r^B} s, that can clean [82] the logical operator L far away from its own support.
We can use Eqn. (A1) to determine the commutator between E and L. Indeed, the relationship in Eqn. (A1) shows us that errors that anticommute with L must give rise to an odd number of defects on the faces of Σ_r^A. Operators (a), (c) and (d) are examples of errors that anticommute with the logical operator. As expected, all three of these errors give rise to a single defect on Σ_r^A. The figure also shows that the support of these errors overlaps with the support of the logical operator at a single site. In contrast, error (b) in the figure has no common support with the logical operator. Neither does it produce any defects on Σ_r^A; as such, this error is also consistent with Eqn. (A1). For completeness, we show the same errors for a third time in Fig. 16(bottom), with their defects now shown only on the subset of faces Σ_r^B. Again, errors (a), (c) and (d) all create a single defect on Σ_r^B, whereas the error at (b) creates no defects on this subset. This is necessarily true given the definitions of Σ_r and Σ_r^A.

The method we have given to find a correction operator is presented in a way that is readily generalised to other stabilizer codes. We have assumed that we can find an operator that recovers a code state of C, and that we can find suitable elements of the stabilizer group that clean the logical operator L far away from its support. It will be interesting to learn if minimum-weight perfect-matching decoders will give a good performance for other codes where, perhaps, single-qubit errors give rise to a large number of defects with respect to some well-chosen symmetry. Examples of decoding algorithms for stabilizer codes beyond two-dimensional topological codes are presented in Refs. [35,60]. Further work is required to determine the success of minimum-weight perfect-matching decoders in the general case.
Appendix B: Using matching to find a likely error

Here we justify weighting edges connecting two defects on the restricted lattice according to their separation. We propose a straightforward way of calculating this separation on the hexagonal lattice in Appendix C.
The negative logarithm of the probability that the independent and identically distributed noise model caused a given error should be proportional to the weight of the error. We obtain a lower bound on the weight of the error by approximating the error as a series of strings that connect pairs of defects on the restricted lattice. Let us express an error as a product of strings, i.e.,

E = ∏_{α∈A} α,

where α = ∏_{v∈e(α)} X_v are string-like operators that create an even number of defects at their endpoints with respect to a symmetry, and A denotes the set of string-like operators that produce the error syndrome.
We show examples of string-like errors that make up A in Fig. 2(a), (c) and (d).
Let us assume that bit-flip errors occur with probability p, such that the probability that error E occurs is

P(E) = (1 − p)^{n−|E|} p^{|E|}, (B1)

where |E| denotes the weight of E. With the definition in Eqn. (B1) in place, we can bound the negative logarithm of the probability that error E occurred as

−log P(E) ≥ log[(1 − p)/p] ∑_{α∈A} length(α) − n log(1 − p), (B2)

where length(α) measures the separation between the defects created by α, such that |E| ≥ ∑_{α∈A} length(α).
The decoder looks for an error E′ whose syndrome is consistent with that produced by E and where |E′| is minimal, as this will correspond to the most probable error that caused the syndrome according to Eqn. (B2). To estimate a least-weight correction with respect to a symmetry, we look for a solution that minimises ∑_{α∈A′} length(α), where A′ is the set of strings that give rise to the error E′ whose syndrome is equal to that of the error E.
We use the minimum-weight perfect-matching algorithm to find an error that produces the error syndrome with high probability. Given an independent and identically distributed noise model where bit flips occur at a low rate, we look for a low-weight correction operator. We find a low-weight correction by using minimum-weight perfect matching, where the defects of a symmetry of the code are the vertices of the input graph and we weight edges according to their separation. The edges of the matching returned by the algorithm correspond to the strings α of E′.
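To make the objective concrete, the following sketch pairs defects by exhaustive search. It is an illustration of the quantity being minimised, not the blossom-algorithm implementation a practical decoder would use, and the Manhattan metric here is merely a stand-in for the lattice separation of Appendix C; all names are ours.

```python
def min_weight_perfect_matching(defects, dist):
    """Brute-force minimum-weight perfect matching over a small, even-sized
    list of defects, with edge weights given by the separation dist(u, v).
    Exponential in the number of defects; real decoders use the blossom
    algorithm to solve the same problem in polynomial time."""
    if not defects:
        return 0, []
    first, rest = defects[0], defects[1:]
    best_weight, best_pairs = float("inf"), []
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        weight, pairs = min_weight_perfect_matching(remaining, dist)
        weight += dist(first, partner)
        if weight < best_weight:
            best_weight, best_pairs = weight, [(first, partner)] + pairs
    return best_weight, best_pairs

# Manhattan separation stands in for the lattice distance of Appendix C.
manhattan = lambda u, v: abs(u[0] - v[0]) + abs(u[1] - v[1])
```

On four defects forming two nearby pairs, the matcher returns the short edges rather than any of the heavier pairings, mirroring how low edge weight favours the likely low-weight error.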

Appendix C: Measuring the weights of edges on a restricted lattice
Here we show how to evaluate the separation between two defects on the hexagonal restricted lattice. Let us give each vertex a coordinate with three integer values x = (x_0, x_2, x_4). We show them in Figs. 17(left), (middle) and (right), respectively, where vertices with a common coordinate value lie on a bold line overlaying the lattice. We call them x_0, x_2 and x_4 as they align along the perpendiculars to the clock hands pointing at 12 o'clock, 2 o'clock and 4 o'clock, respectively. Given this coordinate system it is easy to find the separation between any two defects. Suppose we have two defects at locations x = (x_0, x_2, x_4) and y = (y_0, y_2, y_4); we find that the smallest number of edges w between x and y is obtained by a formula in terms of ∆_j = |y_j − x_j|. In Fig. 17(bottom) we show contours marking the number of edges a given vertex lies from the central vertex, denoted 0. Lastly, let us remark that this coordinate system allows us to count the number of paths of length w between two vertices of the hexagonal lattice. We simply state the result and leave it to the reader to check its validity; the count is a function of ∆_max. ≥ ∆_med. ≥ ∆_min., the maximum, median and minimum values of the set {∆_0, ∆_2, ∆_4}, respectively. One can use this formula to determine the degeneracy of a string-like error [83]. We note that we do not include this term in the edge-weight function for our implementation of the decoder, where we match defects on the unified lattice. Indeed, this expression would require modification to deal with edges that pass over creases on the Möbius strip. We expect that including an adaptation of this term in the edge weights of the input graph to the matching subroutine may improve the performance of the decoder.
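As a check on any closed-form implementation of this separation, a breadth-first search over an explicit patch of the lattice gives a ground truth. The sketch below assumes a 'brick-wall' embedding of the hexagonal lattice, which is our own choice of representation rather than the paper's figure, and all names are ours.

```python
from collections import deque

def honeycomb_neighbours(v, rows, cols):
    """Neighbours of vertex v = (r, c) in a 'brick-wall' drawing of the
    hexagonal lattice: each vertex has left/right neighbours, plus one
    vertical edge whose direction alternates with the parity of r + c."""
    r, c = v
    out = []
    if c > 0:
        out.append((r, c - 1))
    if c < cols - 1:
        out.append((r, c + 1))
    if (r + c) % 2 == 0 and r < rows - 1:
        out.append((r + 1, c))
    if (r + c) % 2 == 1 and r > 0:
        out.append((r - 1, c))
    return out

def separation(u, v, rows=12, cols=12):
    """Breadth-first-search distance between two vertices: the smallest
    number of edges w between them, computed directly rather than with a
    closed-form expression in the coordinates (x_0, x_2, x_4)."""
    seen, frontier = {u}, deque([(u, 0)])
    while frontier:
        w, d = frontier.popleft()
        if w == v:
            return d
        for n in honeycomb_neighbours(w, rows, cols):
            if n not in seen:
                seen.add(n)
                frontier.append((n, d + 1))
    return None
```

A direct search like this is far too slow for a decoder's inner loop, but it is useful for unit-testing a fast coordinate-based distance function on small patches.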

Appendix D: Finding an alternative correction
Here we explain how we use minimum-weight perfect matching to find a second low-weight correction that is logically inequivalent to the first matching. The idea is summarised in Fig. 18, and is developed from an idea proposed in Ref. [77]. The goal of this exercise is to find a second correction whose weight is close to that of the correction obtained by the initial matching, except that the second correction differs from the first by a logical operator. We therefore seek a matching that differs from the first by a homologically non-trivial cycle about the Möbius strip.
We perform matching on an alternative manifold where we introduce a 'tear' to the Möbius strip, see Fig. 18(a). Specifically, the tear represents a barrier that prevents any defects from pairing across it. As we explain in the caption of Fig. 18, we remove all pairs of defects that were paired by an edge crossing the tear in the initial matching subroutine, see Fig. 18(b). To find an alternative correction we introduce a single pair of dummy nodes to the matching graph that begin on either side of the tear, see Fig. 18(c).

FIG. 18. Finding an alternative correction using an additional matching subroutine. The initial matching is shown by blue edges. We modify the input graph to the second subroutine to find an alternative correction. We introduce a 'tear' to the Möbius strip, see the zig-zag line at (a). In the second subroutine, no defects can pair using an edge that crosses the tear. We also remove all pairs of vertices from the initial matching that are paired by an edge that crosses the tear. For example, see the pair of defects connected by the dashed blue edge, (b). To find an inequivalent matching we introduce a single dummy node on either side of the tear, (c). The inclusion of these dummy nodes forces the matching to find an alternative correction, shown by the red edges. The two defects that are matched by the two dummy nodes form a new edge, see the endpoints of the edge at (d) and (e). Finally, we try to reduce the weight of the edge found using the dummy vertices by looking for a shorter route back over the tear using the edges that were removed when we made the tear. The defects found by the dummy nodes are subsequently paired to the green defects, connected by the dashed line, that were initially removed when we made the tear. The last step recovers the matching shown.
The dummy defects may only pair by an edge with a very high weight that wraps around the manifold, as such, they find an alternative route via the other defects on the lattice.
The two defects that are paired with the dummy defects form a single new edge that crosses the tear. We show the end points of the new edge found by the dummy defects at Fig. 18(d) and (e). With this, we see why it is necessary to remove the defects that were previously paired by edges that cross the tear. If these defects remain on the torn manifold, the dummy nodes might be inclined to pair to these defects to find a second matching that gives a correction that is equivalent to the first. The figure clearly shows that the inclusion of the dummy nodes and the tear force the matching subroutine to propose a correction, shown in red, that is inequivalent to the original matching, shown in blue in the figure.
Once we have found an alternative correction, we attempt to reduce the weight of the matching using the edges we removed when we introduced the tear. Specifically, we attempt to find a lower-weight correction by connecting the two vertices that were paired via the two dummy nodes, let us call them u_l and u_r, to the defects that were removed for the second matching. We show this process in Figs. 18(d) and (e). To execute this operation computationally, we take the list of edges e_j that we removed from the torn lattice because they cross the tear. These edges e_j = (l_j, r_j) connect vertices l_j and r_j that, respectively, sit to the left and right of the tear. We evaluate λ = min_j(|u_l − l_j| + |u_r − r_j|), i.e., the sum of the lengths of the shortest two edges that connect u_l and u_r via an edge that was removed. Then, taking j to be the minimising index, we compare λ with the combined weight |u_l − u_r| + |l_j − r_j| to determine whether we replace edges u_l − u_r and l_j − r_j with edges u_l − l_j and u_r − r_j, to find a lower-weight correction.
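The rerouting comparison can be written as a short helper. This is an illustrative sketch of the check just described, with `dist` a hypothetical separation function; the names are ours and this is not drawn from the paper's implementation.

```python
def reroute_over_tear(u_l, u_r, removed_edges, dist):
    """For each removed edge (l_j, r_j), compare keeping the dummy-node
    edge (u_l, u_r) together with (l_j, r_j) against rerouting to the
    edge pair (u_l, l_j) and (u_r, r_j), whose combined weight is
    lambda_j = |u_l - l_j| + |u_r - r_j|. Return the lightest option."""
    if not removed_edges:
        return [(u_l, u_r)], dist(u_l, u_r)
    best_edges, best_weight = None, float("inf")
    for l_j, r_j in removed_edges:
        keep = dist(u_l, u_r) + dist(l_j, r_j)
        lam = dist(u_l, l_j) + dist(u_r, r_j)
        edges, weight = (([(u_l, l_j), (u_r, r_j)], lam) if lam < keep
                         else ([(u_l, u_r), (l_j, r_j)], keep))
        if weight < best_weight:
            best_edges, best_weight = edges, weight
    return best_edges, best_weight

# Manhattan separation as a hypothetical stand-in for the lattice distance.
manhattan = lambda u, v: abs(u[0] - v[0]) + abs(u[1] - v[1])
```

When the removed edge's endpoints sit close to u_l and u_r, the reroute wins; otherwise the dummy-node edge and the removed edge are kept as they are.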
Finally, let us comment on the locations of the dummy defects. There are d locations along the tear where we can place the pair of dummy defects. It is not obvious, a priori, which location will give the least-weight alternative matching. As such, we perform the matching d times, where the pair of adjacent dummy nodes is translated along the lattice sites of the tear with each variation of the subroutine. Once all of the subroutines are complete, we compare their results and choose the alternative matching with the least weight. In practice we can perform all d of these matching subroutines in parallel, provided we have already obtained the results of the initial matching subroutine.

Appendix E: Fitting at low error rates
Here we explain how we obtained the values for our ansatz in Eqn. (14). We take the logarithm of the ansatz and separate out the terms dependent on log p, such that Eqn. (14) takes the following form

log P_fail = G(d) log p + A(d), (E1)

where now

G(d) = α d + γ, (E2)

and

A(d) = log β + (α d + γ) log N. (E3)

We plot the logarithm of the logical failure rate against log p at different system sizes, as in Fig. 19. Subsequently, we make a linear fit for each system size d, such that we obtain data points for G(d) from the gradient at each value of d, and for A(d) from the intercept at each d.
Data points used to obtain these fittings are collected using between 10^6 and 10^7 Monte Carlo samples, and we discard data points where P_fail is lower than 5 × 10^−4 %. We use our data points for G(d) and A(d) to find the fitting parameters for our ansatz. Both G(d) and A(d) have a linear form in the code distance d. We plot G(d) and A(d) as functions of d in the bottom-left and bottom-right graphs of Fig. 19, respectively. To find the fitting parameters we first read off α ≈ 0.422 and γ ≈ 0.488 from the gradient and intercept of the linear fit to G(d) as a function of d. We are then free to obtain the remaining parameters from the linear fit to A(d). We find N ≈ 12.49 and β ≈ 0.148, using the values of α and γ obtained from the G(d) fitting.

FIG. 19. (top) The logarithm of the logical failure rate P_fail is plotted against log(p) for different system sizes. Each individual plot is fitted to a linear equation, discarding points for which P_fail is lower than 5 × 10^−4 %; these data points sit below the horizontal black line. We then extract the gradients G(d) and intercepts A(d) for each linear fit. (bottom) The gradients and intercepts of the linear fits are plotted as functions of the system size d. (bottom left) For the linear plot of G(d) in blue, the slope is the value of α for our decoder and γ is the intercept. The green line has a slope of 1/2, which is the gradient we expect for a decoder that can correct up to its code distance. The red line has a slope of 1/3. We read α and γ from the gradient and intercept of the blue line, respectively. (bottom right) For the function A(d), the intercept is log β + γ log N and the slope is α log N.
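The two-stage fit can be sketched as follows. Here we generate synthetic data directly from the ansatz using the quoted parameter values, purely to illustrate the extraction procedure, so the fit recovers the inputs by construction; real inputs would be Monte Carlo failure rates.

```python
import numpy as np

# Assumed parameter values (the fitted values quoted in the text) used to
# generate synthetic data from log P_fail = (alpha*d + gamma) log(N p) + log beta.
alpha, gamma, N, beta = 0.422, 0.488, 12.49, 0.148
distances = np.array([5.0, 7.0, 9.0, 11.0, 13.0])
ps = np.logspace(-3, -2, 8)

G_pts, A_pts = [], []
for d in distances:
    log_pfail = np.log(beta) + (alpha * d + gamma) * np.log(N * ps)
    # Stage 1: linear fit of log P_fail against log p at fixed d gives the
    # gradient G(d) and intercept A(d), as in Eqn. (E1).
    G, A = np.polyfit(np.log(ps), log_pfail, 1)
    G_pts.append(G)
    A_pts.append(A)

# Stage 2: G(d) = alpha*d + gamma yields alpha and gamma directly.
alpha_fit, gamma_fit = np.polyfit(distances, G_pts, 1)
# A(d) = log(beta) + (alpha*d + gamma) log(N) yields log N from its slope
# (divided by alpha), and then beta from its intercept.
slope_A, intercept_A = np.polyfit(distances, A_pts, 1)
logN_fit = slope_A / alpha_fit
beta_fit = np.exp(intercept_A - gamma_fit * logN_fit)
```

With noisy Monte Carlo data the same pipeline applies unchanged; only the recovered parameters acquire error bars.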
FIG. 20. The color code with two green boundaries and two blue boundaries. All four corners of the lattice are red. Like the color code on the triangular lattice, we can unify the restricted lattices of this code through the boundaries and the corners of the lattice. We show a green defect and a red defect created at the boundary, together with edges that pair them between different restricted lattices via the boundary. Furthermore, we can consider a Majorana surface code where an unpaired Majorana mode λ_v lies on each vertex and we have stabilizers S_f = ∏_{v∈∂f} λ_v. In this picture the red face operators can be regarded as tetrons. Decoding the Majorana fermion surface code is equivalent to correcting bit flips for the color code.

Appendix F: Generalisations
We have proposed a decoder for the color code on the triangular lattice with three distinct and differently colored boundaries. This case is well motivated due to the capability of this code to perform a complete set of transversal Clifford gates. Nevertheless, we can conceive of other modes of quantum computation that will require alternative color code lattices undergoing more general noise models. Moreover, we may also consider generalising this decoder to higher-dimensional variants of the color code; notable among which is the three-dimensional color code that has a universal transversal gate set when supplemented by gauge fixing. Here we propose ways of generalising the methods we have presented to some other representative variations of the color code.

Alternative boundary configurations and Majorana codes
The unified lattice we have proposed for the triangular lattice can be generalised to other color code lattices with boundaries. In the general case, we can pair defects of different restricted lattices via appropriately colored corners. We show one such alternative lattice in Fig. 20 that encodes two logical qubits. As in SubSec. II A, we can define a restricted lattice of face operators Σ_u that includes all faces that are not of color u, with u = r, g, b. Specifically, we can pair defects between the Σ_u and Σ_v restricted lattices via boundaries and corners of color w ≠ u, v. Fig. 20 shows two errors that create defects that are paired between two different restricted lattices via the boundaries. We show the topology of the unified lattice in Fig. 21.
Decoding the lattice shown in Fig. 20 may be particularly interesting, since this lattice has already been demonstrated to have a threshold at ∼ 10% [53] using a restriction decoder. It may be that the threshold will increase further using a unified lattice. Indeed, there may be additional syndrome information that is neglected by the restricted lattice and that may improve the performance of the decoder. Moreover, the structure of this lattice is such that it is relatively straightforward to count the number of least-weight errors that should lead the color code on this particular lattice to logical failure. As such, studying this model may provide a useful example for evaluating the entropic contributions that affect the performance of a color code decoder.
We also remark that decoding bit-flip errors on this color code is equivalent to decoding common types of errors acting on a Majorana surface code [44,[61][62][63][64]. In the example shown we can think of the red face operators as the support of a single tetron, i.e., an island where an unpaired Majorana mode lies at each corner of each square face. We therefore see that our methods are readily adapted to decode Majorana surface codes.
We have argued that it is advantageous to decode the color code using a unified lattice that consists of all of the restricted lattices. We have shown how to combine restricted lattices via the boundaries of a planar color code to form a crease. However, for completeness, we should also consider how we can combine restricted lattices on a continuous lattice with no boundaries. Let us now consider the color code with periodic boundary conditions.
To this end, let us propose a 'seam'. This is a continuous line along which all three restricted lattices can be connected. The face operators of the symmetry over a single seam are collectively shown in Figs. 22(top) and (middle). In Fig. 22(top) we show the restricted lattice corresponding to the symmetry Σ_b to the left of the lattice, and Σ_g to the right. We obtain the stabilizer that is the product of Pauli-Z operators along a vertical line at the middle of the figure where the two lattices meet. To recover a trivial operator from the stabilizer group corresponding to a symmetry, we combine this operator with that obtained by taking the product of the face operators shown in Fig. 22(middle). These are the faces of the Σ_r restricted lattice at the left of the figure. The product of all of these faces gives a symmetry along the seam.
Let us now consider how we correct errors that cross over this seam. We show two errors, Figs. 22(a) and (b) on both the (top) and (middle) image. Let us first consider the error shown at Fig. 22(a). The blue defect shown on the Σ g restricted lattice to the right of Fig. 22(top) can be paired to the blue defect on the Σ r restricted lattice shown in Fig. 22(middle) via the seam. The same blue defect does not appear on the Σ b restricted lattice at the left of Fig. 22(top). As such, naturally, we see that this error respects the defect conservation symmetry of the color code.
We also consider the error Fig. 22(b). Here the green defect appears to the left of Fig. 22(top) and (middle). We can consider pairing this defect to itself via the seam. The green defect does not appear at the right of Fig. 22(top) and Fig. 22(middle). Once again then, we see that this error respects the defect conservation symmetry, as we expect.
We have now proposed a seam where three restricted lattices meet. Let us now show how we can combine these lattices over the surface of a torus to make a unified lattice that respects a global symmetry. In Fig. 22(bottom) we represent each of the restricted lattices by a one-dimensional line, where these one-dimensional lines are shown along the bottom edges of the restricted lattices of Figs. 22(top) and (middle), according to the color of the restricted lattice Σ_u. They denote the one-dimensional intersection of the restricted lattices along a line that runs orthogonal to the seam. We assume that the seams of the lattice are translationally invariant along the orthogonal direction that is not shown by the one-dimensional representation of the symmetry. Fig. 22(bottom) therefore shows how we can consistently combine the restricted lattices about a surface with periodic boundary conditions. Given this unified lattice, which combines restricted lattices via multiple seams, we can adapt the method discussed in SubSec. II A to find the commutator of an error with some choice of logical operators using minimum-weight perfect matching.

Depolarising noise and fermionic symmetries
Let us consider different methods of error correction with the color code by considering the full stabilizer group. As we have already mentioned, the symmetries we have used to find the matching decoder are connected to the conservation laws of the equivalent system of two decoupled surface codes that can be obtained by unfolding [40,54,55]. Here we look at an alternative unfolding of the color code into two copies of the so-called three-fermion model [22,80]. We look at the conservation laws of this model and its corresponding symmetries. In addition to fundamental interest, this observation may be valuable for finding a decoder for the color code undergoing depolarising noise. Indeed, it is known that the color code undergoing depolarising noise can demonstrate high threshold error rates [40,45].
We begin by redefining some of the notation we used above, where we only considered a bit-flip error model. In addition to measurement errors, let us now assume an error model where Pauli-X, Pauli-Y and Pauli-Z errors can occur. For our discussion, in addition to the Pauli-Z stabilizers S^Z_f = ∏_{v∈∂f} Z_v we have already defined, we also define the stabilizers S^X_f = ∏_{v∈∂f} X_v and S^Y_f = ∏_{v∈∂f} Y_v.

FIG. 23. The product of Pauli-X stabilizers, Pauli-Y stabilizers and Pauli-Z stabilizers on, respectively, red, green and blue faces gives rise to a symmetry.

We therefore have symmetries

Σ_{c,P} = { S^P_f : col(f) ≠ c }, (F1)

where c = r, g, b is a color label and P = X, Y, Z is a Pauli label.
We will now redefine defects c_P with both a color label c and a Pauli label P, according to the convention in Ref. [22]. We say that a face f supports a defect with Pauli label P if S^P_f = +1 but the other two stabilizers at f give the −1 outcome. The defect also takes the color label corresponding to col(f). Note that, by definition,

S^X_f S^Y_f S^Z_f ∝ 1; (F2)

therefore there must be an even parity of violated stabilizers at any given face. Roughly speaking, this convention means a defect has an X label if it is created at the endpoint of a string of Pauli-X errors. Likewise, it has a Z (Y) label if the defect is created at the endpoint of a string of Pauli-Z (Pauli-Y) errors.
Having now extended the description of defects of the color code to include both a color and a Pauli label, we can also extend the conservation laws among the defects that respect the symmetries. By definition, the stabilizers of Σ_{c,P} return −1 outcomes at faces that support defects c′_{P′} such that neither c′ = c nor P′ = P holds. For instance, the existence of the symmetry Σ_{r,Z} indicates that the collection of all defects with labels g_X, b_X, g_Y and b_Y over the lattice respects parity conservation, up to the lattice boundaries.
In Ref. [35] it was proposed that we could find alternative symmetries for the color code to obtain variations of the matching decoder; see Appendix D, example 5, therein. It is pointed out, for instance, that the product of S^X_f stabilizers on red faces, S^Y_f stabilizers on green faces, and S^Z_f stabilizers on blue faces gives rise to a symmetry, up to lattice boundaries, see Fig. 23. In general, on a lattice with closed boundary conditions, we have a symmetry

Σ(c_X, c_Y, c_Z) = { S^X_f : col(f) = c_X } ∪ { S^Y_f : col(f) = c_Y } ∪ { S^Z_f : col(f) = c_Z }, (F3)

where c_X, c_Y and c_Z are colors that all take different values. Locally correctable clusters of defects can be obtained by combining groups of defects that are matched over four different variations of this symmetry, where the variations are obtained with permutations over the three elements (c_X, c_Y, c_Z). Specifically, the four variations of this matching must include two even permutations of (c_X, c_Y, c_Z) and two odd permutations. We elaborate on this point shortly.

TABLE II. Color code defects c_P expressed in terms of pairs of fermions α^±, β^± and γ^± of the three-fermion model. The superscripts ± denote different copies of the three-fermion model. Colors for defects c = r, g, b vary with the rows, and Pauli labels P = X, Y, Z vary along the columns of the table.

Once again, by definition, we have that a subset of defects over the lattice respects a parity conservation law that corresponds to a given symmetry. For the symmetry Σ(c_X, c_Y, c_Z), we pair all types of defects on the lattice, neglecting only the defects (c_X)_X, (c_Y)_Y and (c_Z)_Z. Indeed, these three defect types are not detected by any of the stabilizers in Σ(c_X, c_Y, c_Z).
We can understand the symmetries of the color code Σ(c_X, c_Y, c_Z) in terms of decoupled copies of the three-fermion model [22]. The connection between the color code and the three-fermion model is explained in Ref. [22], Appendix B; see also recent work on the three-fermion model [81]. Let us give some brief remarks on the model. We label the individual charges α, β and γ. We can think of the charges of the three-fermion model as different types of defects in the syndrome pattern of some abstract code that satisfies

|#α|_2 = |#β|_2 = |#γ|_2, (F4)

where |#f|_2 = 0, 1 denotes the number of defects of type f = α, β, γ over the global system, modulo 2. While any given value |#f|_2 may take an odd or even parity, the sum of the numbers of any two types of fermion must take an even parity as a consequence of Eqn. (F4), i.e., |#f|_2 + |#f′|_2 = 0 modulo 2, with f ≠ f′ and f, f′ = α, β, γ. We therefore obtain a charge-parity conservation law that allows us to apply matching to find a correction for a syndrome with this structure. As an aside, we note that these conservation laws reflect those of the color code undergoing a bit-flip error model, see Eqn. (9). In Ref. [22] it is explained that we can express the color code defects with labels c_P presented above in terms of pairs of fermionic charges, one from each of two decoupled layers. We append a positive or negative superscript, ±, to the fermion labels to denote which of the two decoupled layers a given fermion belongs to. We forego the details of the mapping between the three-fermion model and the color code [22] that emerges at the level of the anyonic quasiparticle excitations of the underlying topological phase of the color code [13,40,80]. For this discussion it is sufficient to state how the color code defects equate with pairs of fermions, as shown in Table II.
By examination of Table II we find that the defects identified by the stabilizers of the Σ(c_X, c_Y, c_Z) symmetries correspond to the conservation laws of the fermionic theory, Eqn. (F4). Let us take, for example, Σ(r, g, b), with c_X = r, c_Y = g and c_Z = b, such that we pair all defects on the lattice with the exception of r_X, g_Y and b_Z. These three defects are those along the leading diagonal of Table II. We observe that all other defects include either a β^+ fermion or a γ^+ fermion. On the other hand, all the defects along the leading diagonal include an α^+ fermion, rather than a β^+ or a γ^+ fermion, on the positive copy of the three-fermion model. Therefore, by pairing all defects identified by the stabilizers of Σ(r, g, b), we obtain pairs of color code defects that satisfy |#β^+|_2 + |#γ^+|_2 = 0 under the three-fermion mapping. With the arguments given above, we have that any collection of color code defects respects the conservation laws if, say, |#β^±|_2 + |#γ^±|_2 = 0 and |#α^±|_2 + |#β^±|_2 = 0 are all satisfied on both the positive and negative copies of the three-fermion theory. Curiously, we find that we pair for the conservation laws of the positive copy of the three-fermion theory if we take the defects measured by symmetries with an even permutation of the three colors in the argument of Σ(r, g, b), and we pair for the conservation laws of the negative copy of the three-fermion model for odd permutations of Σ(r, g, b). For instance, for Σ(g, b, r), with c_X = g, c_Y = b and c_Z = r, we obtain pairs of defects that respect |#α^+|_2 + |#β^+|_2 = 0. In contrast, pairing the defects measured by the stabilizers of Σ(r, b, g) gives pairs of defects satisfying |#β^−|_2 + |#γ^−|_2 = 0.
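The even/odd permutation rule above can be captured in a few lines. This is an illustrative helper, with names of our own choosing, that classifies which copy of the three-fermion model a symmetry Σ(c_X, c_Y, c_Z) pairs for.

```python
def permutation_parity(colors, reference=("r", "g", "b")):
    """Parity of the permutation taking `reference` to `colors`:
    0 for even, 1 for odd, computed by counting inversions of the
    corresponding index sequence."""
    idx = [reference.index(c) for c in colors]
    inversions = sum(1 for i in range(3) for j in range(i + 1, 3)
                     if idx[i] > idx[j])
    return inversions % 2

def three_fermion_copy(colors):
    """Which copy of the three-fermion model the symmetry
    Sigma(c_X, c_Y, c_Z) pairs for: '+' for even permutations of
    (r, g, b) and '-' for odd permutations, per the rule above."""
    return "+" if permutation_parity(colors) == 0 else "-"
```

For instance, the cyclic permutation (g, b, r) is even and so pairs for the positive copy, while the transposition (r, b, g) is odd and pairs for the negative copy, matching the examples in the text.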

Fault-tolerant error correction
Minimum-weight perfect matching decoders can be generically adapted to the fault-tolerant setting where we assume unreliable measurements [6,35]. To identify measurement errors we repeat stabilizer measurements to collect syndrome data over many rounds. We then define a defect in spacetime wherever the stabilizer measurement performed at time t, denoted S(t), differs from the outcome of the measurement S(t − 1). We therefore recover a defect parity symmetry in spacetime for each subset of stabilizer operators that respects a symmetry. Specifically, up to boundaries, we have a symmetry for the checks S(t)S(t − 1) over all times t, for S ∈ Σ with Σ ⊆ S a symmetry of the stabilizer group. See Appendix D and example 8 in Ref. [35] for a discussion.
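As a minimal illustration of this rule, the following Python sketch (the function name and toy data are our own, not taken from the paper) flags a spacetime defect wherever consecutive outcomes of one repeated stabilizer measurement disagree:

```python
def spacetime_defects(outcomes):
    """Given a list of +1/-1 outcomes for one stabilizer measured over
    successive rounds, flag a defect at time t whenever the outcome S(t)
    differs from S(t - 1)."""
    return [t for t in range(1, len(outcomes))
            if outcomes[t] != outcomes[t - 1]]

# A single measurement error at round 2 flips one outcome and creates a
# pair of time-separated defects, respecting the parity conservation law.
history = [+1, +1, -1, +1, +1]   # erroneous readout at t = 2
print(spacetime_defects(history))  # -> [2, 3]
```

Away from boundaries, defects produced this way always appear in pairs, which is the spacetime parity symmetry that matching exploits.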
With the extended symmetries defined we show an example of a measurement error in Fig. 24(top left). The figure shows a Pauli-Z stabilizer and a Pauli-X stabilizer measured sequentially over time on the same face where time runs upwards. A measurement error occurs on the central Pauli-Z measurement. This creates a pair of r X defects separated over time given that the S X f stabilizer has not been violated. Clearly, the creation of a pair of r X defects respects the defect conservation laws of the color code. Moreover, one can check that a single Pauli-X error on a qubit at some time will simultaneously create three defects on a common time plane; one r X defect, one g X defect and one b X defect. We can therefore adapt our methods of decoding proposed above in the spacetime picture, where we pair defects of two of the three colors for a common Pauli label. In this setting we can take a common simplification where we view Pauli-Y errors as creating two triples of defects; one triple of r X , g X , b X defects and one of r Z , g Z , b Z , both of which can be decoded independently.
We can use the notion that there are symmetries between different Pauli labels, as described in SubSec. F 2, to propose alternative fault-tolerant error-correction protocols. Our strategy of repeatedly measuring stabilizers means we can compare the values of stabilizer measurements taken at different times to test for agreement. This is because, assuming no errors occur, pairs of time-adjacent measurements are constrained to agree. In general, we can propose using the results of different subsets of stabilizer measurements that are constrained to agree to identify both measurement and physical errors. For instance, it follows from Eqn. (F2) that the product of S X f , S Y f and S Z f is constrained to give a constant result. A violation of this constraint indicates that a local error has occurred; either a physical error or a measurement error.
Assuming a phenomenological noise model where we can choose to measure any one face operator per unit time, we propose measuring Pauli-X, Pauli-Y and Pauli-Z stabilizers sequentially. As we will explain, we find that this improves the distance of the code against time-like correlated errors. Now, given that the product of any three sequential stabilizer measurements is constrained to give the same result, we find that more defects are created for a given measurement error. In Fig. 24(top right) we show five measurements in this sequence. The figure shows a measurement error occurring on the middle measurement in the sequence. This measurement error will violate three of these constraints. These constraints consist of triples of stabilizers that are grouped by the braces at the side of the measurement sequence.
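To make the constraint concrete, here is a hedged sketch (toy data of our own; we assume the error-free value of each triple product is +1, as it would be for a fixed eigenstate) that flags violated triples in a cyclic X, Y, Z measurement record:

```python
def violated_triples(outcomes, constant=+1):
    """Every window of three consecutive outcomes in the cyclic X, Y, Z
    readout multiplies to the same constant in the absence of errors;
    return the indices of the windows whose product differs."""
    return [t for t in range(len(outcomes) - 2)
            if outcomes[t] * outcomes[t + 1] * outcomes[t + 2] != constant]

# Seven rounds with a single measurement error on the middle round t = 3:
history = [+1, +1, +1, -1, +1, +1, +1]
print(violated_triples(history))  # -> [1, 2, 3]: three violated constraints
```

The single flipped outcome appears in three overlapping windows, reproducing the three violated constraints grouped by the braces in Fig. 24(top right).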
As with the conventional case already discussed, the defects of the spacetime syndrome produced for the phenomenological noise model respect the same conservation laws as in the idealised case where all measurements are reliable. As always, we color each defect according to the color of the face on which it is detected. Further, we give each defect a Pauli label according to the middle of the three stabilizer measurements for some given constraint. As shown in Fig. 24(top right), a single measurement error produces three defects of the same color, where each defect has a different Pauli label. Indeed, we show b X , b Y and b Z defects in the figure. This configuration is consistent with the defect conservation laws of the color code discussed in Subsec. F 2 for the case of depolarising noise and ideal measurements.
Let us show why this convention is consistent with the one we have proposed above. We will concentrate on a single Pauli-X label, but our discussion is symmetric over any choice of Pauli error. We defined a defect with a Pauli-X label as appearing at the end point of a string of Pauli-X errors. In Fig. 24(middle left) we suppose a Pauli-X error occurs on a single qubit during the interval marked by the red line. As the Pauli-X error is not identified by the Pauli-X stabilizer measured in the middle of the displayed sequence, it does not matter if the error occurs before or after this measurement. We find that only one blue defect is produced, which we label b X . The time-adjacent constraints are not affected by the Pauli-X error, as the error flips the outcomes of an even number of the stabilizer measurements within each of these checks. Defects of other colors, r X and g X , must also be created but are not shown in the figure.
It remains to check the syndrome produced if a Pauli-X error occurs between a Pauli-Y and a Pauli-Z stabilizer measurement, see Fig. 24(middle right). In this case, we produce six defects at two different temporal planes. These are r Y , r Z , g Y , g Z , b Y and b Z . Of the two green checks shown in the figure, only the Pauli-Z stabilizer is violated. Once again, the blue and red checks behave equivalently but are not shown. Given that for each color we have a c Y defect and a c Z defect, and each of these is locally consistent with a c X defect, we can maintain the convention we have proposed for a Pauli-X error occurring at any time on any qubit in this spacetime syndrome.
The spacetime defect configurations for this alternative procedure for fault-tolerant error correction share a pleasing symmetry between single-qubit physical errors and single-measurement errors. Indeed, a single-qubit physical error creates three defects of distinct colors, but with the same Pauli label, see, e.g., Fig. 1(c). In contrast, a single measurement error creates three defects of the same color with three distinct Pauli labels, see Fig. 24(top right). Likewise, in this scheme, we require two measurement errors to produce two defects of the same type. In Fig. 24(bottom) we show an error configuration that produces two b Z defects. This shares a commonality with physical errors, where we require two bit-flip errors to separate two defects of the same type, see Fig. 1(d). It may be valuable to find better decoders that exploit this structure. Further development of the theory of symmetries and decoding might lead to such decoding algorithms [35].
We can obtain better fault-tolerant error-correction procedures by considering alternative stabilizer readout circuits, see for example [84][85][86]. Long sequences of measurement errors can lead to logical errors when we perform code deformations [87]. We obtain an improved code distance against time-like logical errors using our new stabilizer readout procedure. For the following discussion we assume we can make any one stabilizer measurement of our choice, S X f , S Y f , or S Z f , for each face per unit time.
With these assumptions, if we consider the standard protocol where we alternately measure Pauli-X stabilizers and Pauli-Z stabilizers, we find that a time-like logical error occurs if half of the measurements experience errors. For example, we obtain a time-like logical error if all of the Pauli-Z stabilizer measurements experience a measurement error. In contrast, an undetectable string-like logical operator in the cyclic protocol requires that two-thirds of the stabilizer measurements experience measurement errors. For instance, measurement errors that occur on all of the Pauli-X measurements and all of the Pauli-Y measurements will lead to an undetectable error.
Therefore, under this phenomenological noise model, if we aim to reach a distance d t against time-like logical errors, in the standard protocol we must perform 2d t rounds of stabilizer measurement, but only 3d t /2 rounds in the alternative cyclic protocol. We therefore complete code deformations in three-quarters of the time using the cyclic readout protocol as compared with the standard protocol, that is, (3d t /2)/(2d t ) = 3/4. This is achieved with a simple local basis rotation on stabilizer measurements that we would otherwise need to perform in a standard implementation of the color code.
We remark that we can obtain similar resource savings for foliated color-code models [88,89]. It is easy to include Pauli-Y stabilizer measurements in a foliated fault-tolerant scheme using type-II foliated qubits [89]. Interestingly, the resource state we obtain if we produce a foliated system with type-I qubits to measure the standard stabilizer readout pattern [88] is the same as that which we obtain if we use type-II foliated qubits to measure the cyclic stabilizer readout pattern. The only difference between the two models is that in the latter case, some of the Pauli-X measurements used to read out the resource state are replaced with Pauli-Y measurements. Nevertheless, we find a significantly different syndrome pattern, and a non-trivial improvement in resources. Given the development of better decoders, we can therefore obtain a significant reduction in resource overhead with a minor basis rotation on the measurement devices of the hardware that is designed to produce such a system.
To move forward, it will be valuable to evaluate the performance of these error-correction procedures with numerical simulations. This will require finding stabilizer readout circuits to test different fault-tolerant protocols undergoing circuit noise. Stabilizer readout circuits to measure the sequence of stabilizers in Fig. 24(top left) have been considered in Refs. [24,25,52]. It may be interesting to look for better readout circuits that also include Pauli-Y stabilizer measurements. See also Refs. [72][73][74] where remarkably high fault-tolerant thresholds are found using statistical-mechanical methods. It may also be interesting to repeat these analyses for alternative stabilizer readout protocols.

Higher-dimensional color codes
Let us look at how the methods for decoding we have introduced generalise to higher-dimensional color codes. See Refs. [14,15,65,90] for a detailed definition where these codes are introduced, and Refs. [25,42,46] where decoders for the three-dimensional color code are implemented. We will focus on the example of decoding point-like defects for the three-dimensional color code, but we expect that the principles we are developing for decoding point-like defects apply to decoding problems more generally. We leave these generalisations as an exercise for the reader. We write the dimensionality of the system as D when making general statements, although we largely focus on the case D = 3.
We can define a D-dimensional color code with qubits on the vertices of a (D + 1)-valent lattice with (D + 1)-colourable cells. Pauli-X stabilizers are assigned to the D-dimensional cells of the lattice. We focus on error correction with these stabilizers, which identify Pauli-Z errors. Boundaries can also be defined, with one of D + 1 colours assigned such that no cell of the assigned colour is found at the boundary. The D-dimensional color code defined on a D-dimensional tetrahedral lattice with one boundary of each colour encodes a single qubit. Let us motivate the generalisation of the unified-lattice matching decoder by looking at how the challenges of decoding with a restricted lattice generalise with dimension. These considerations will be important when comparing the resource cost of quantum computation using color codes of different dimensionality. We can decode point defects of the D-dimensional color code using a restricted lattice. Up to the lattice boundaries, the product of all of the cells of two different colors, independent of D, gives the identity operator. We therefore obtain a symmetry that can be used to obtain a restricted lattice for decoding [53]. The results of matching the defects on D of these sublattices can be combined to find a correction.
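Schematically, matching on one restricted lattice amounts to a minimum-weight perfect matching over the defects that the lattice retains. The brute-force sketch below (toy coordinates and a Manhattan metric, our own illustration; practical decoders use Blossom-type algorithms rather than enumeration) conveys the idea:

```python
from itertools import permutations

def min_weight_pairing(defects, dist):
    """Pair up an even-sized list of defect coordinates, minimising the
    summed distance. Returns (total_weight, list_of_index_pairs).
    Feasible only for small defect sets; shown for illustration."""
    assert len(defects) % 2 == 0, "restricted-lattice defects come in pairs"
    best = (float("inf"), None)
    for perm in permutations(range(len(defects))):
        # keep each pair in canonical order to skip duplicate pairings
        if any(perm[i] > perm[i + 1] for i in range(0, len(perm), 2)):
            continue
        pairs = [(perm[i], perm[i + 1]) for i in range(0, len(perm), 2)]
        w = sum(dist(defects[a], defects[b]) for a, b in pairs)
        if w < best[0]:
            best = (w, pairs)
    return best

manhattan = lambda p, q: abs(p[0] - q[0]) + abs(p[1] - q[1])
# Four defects retained by one restricted lattice:
defects = [(0, 0), (0, 1), (5, 5), (6, 5)]
print(min_weight_pairing(defects, manhattan))  # -> (2, [(0, 1), (2, 3)])
```

The even defect parity guaranteed by the restricted-lattice symmetry is exactly what makes a perfect matching possible.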
In SubSec. III C we identified an issue with decoding the color code on a two-dimensional tetrahedral lattice (a triangle) using separate restricted lattices, namely, that errors of weight ∼ d/3 may lead to logical failure. Ideally, we should aim to design decoders that can correct up to ∼ d/2 errors. In fact, as dimensionality increases, we expect errors of lower weight to lead to logical failure. Indeed, a string of length O(d/(D + 1)) that stretches from a boundary towards the centre of a D-dimensional tetrahedral lattice may cause a restricted-lattice decoder to fail. In Fig. 25(left) we show one such error of weight ∼ d/4 for the three-dimensional color code. Roughly, we might expect to obtain a quadratic improvement in logical failure rate for the three-dimensional color code if we can produce a decoder that can tolerate errors of weight up to (d − 1)/2.
We also find that the issue of syndrome degeneracy on a restricted lattice increases with dimensionality. In Fig. 2(d) we show an error with red defects that are neglected by the restricted lattice. See also Fig. 3. Nevertheless, one should expect to be able to use this information to obtain a more accurate correction. In SubSec. III D we argued that decoding on the unified lattice gives us access to this information, and we give an example where an error of this type is corrected by a matching decoder on the unified lattice that might not otherwise be corrected by matching on separate restricted lattices.
In two dimensions, a bit flip on either of the two qubits separating two cells gives rise to the same syndrome on the restricted lattice. Therefore, there may be up to 2^w different error configurations that give rise to the same syndrome on a restricted lattice for an error of weight w, since any single-qubit error can be moved onto the opposite qubit of its edge without changing the syndrome. We remark, however, that 2^w is an upper bound, to account for error configurations where both qubits on some given edge separating two cells of the restricted lattice are flipped, in which case no such rearrangement can be made for that edge. At low error rates this is quite uncommon.
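The counting above can be made concrete with a small sketch (our own toy encoding of single-qubit flips as (edge, side) pairs; not the paper's notation):

```python
from collections import Counter

def degeneracy_upper_bound(flips):
    """flips: list of (edge_id, side) single-qubit bit flips, where the two
    qubits separating two cells of the restricted lattice sit on sides 0
    and 1 of their shared edge. A lone flip on an edge can be moved to the
    opposite qubit without changing the restricted syndrome; an edge with
    both of its qubits flipped admits no such rearrangement."""
    per_edge = Counter(edge for edge, _ in flips)
    movable = sum(1 for n in per_edge.values() if n == 1)
    return 2 ** movable

# A weight-4 error: two lone flips (movable) and one doubly-flipped edge.
print(degeneracy_upper_bound([("e1", 0), ("e2", 1), ("e3", 0), ("e3", 1)]))  # -> 4
```

When every flipped edge carries a single flip, the bound saturates at 2^w; doubly-flipped edges reduce the count, which is why 2^w is only an upper bound.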
As dimensionality increases, we find that the degeneracy of the syndrome on a restricted lattice also increases. In Fig. 25(right) we show a single error together with its syndrome on two cells of the restricted lattice. In fact, an error on any of the qubits on the face at the intersection of the two cells will give rise to the same restricted syndrome. We therefore find that syndrome degeneracy on the restricted lattice increases as we progress from decoding in two to three dimensions. In three dimensions we have four-colourable lattices that are composed of faces of weight four or six. In addition to the lattice shown in the figure, see for instance [42].
The observations above motivate generalising the unified lattice to higher-dimensional systems. Let us concentrate on a three-dimensional color code. In three dimensions we use colors C = {r, g, y, b}. We show a construction for a unified lattice in Fig. 26. The figure depicts four boundary operators b uv where u, v ∈ C and u ≠ v. For instance, at the top left of the figure we show b gb where the support of the boundary operator on the tetrahedron is colored. The operator is supported on the green boundary, as qubits on the green boundary support a blue cell but not a green cell. Likewise, the blue boundary supports green cells but not blue cells, and therefore the boundary operator has non-trivial support on these qubits. As such, the green and blue boundaries are colored. The boundary operator has support on all the edges of the tetrahedron with the exception of the edge connecting the blue and green corners of the lattice, and the edge connecting the red and yellow corners of the lattice. Finally, the boundary operator b gb also has support on the blue and green corners of the lattice, where only a single blue or green cell operator lies, respectively. Fig. 26 also shows the boundary operators for the red-blue, yellow-red and green-yellow restricted lattices; the restricted lattices can be combined by their shared boundaries, via the displayed arrows, to produce a unified lattice to decode the point excitations of the three-dimensional color code. We find that we can produce a unified lattice by combining four restricted lattices. For instance, the product of the four boundary operators b gb , b rb , b yr and b gy returns the identity. We can therefore combine their respective restricted lattices to give a unified lattice that we might expect to correct errors of weight greater than O(d/4). We show how these operators are combined in Fig. 26, giving a generalisation of the Möbius strip we have studied for the two-dimensional case.
The unified lattice shown in Fig. 26 is in some sense unsatisfying, since we have six different restricted lattices for the three-dimensional code, and yet we only use four of them to obtain the lattice. We may expect a decoder that incorporates all of the available restricted lattices to perform better, as it captures all of the local correlations between pairs of syndrome defects. However, we do not expect to find any such lattice using each restricted lattice only once. In general, we can check that the product of all boundary operators for a D-dimensional color code with odd D does not give a symmetry. For odd D we have that $\prod_{u \neq v} b_{uv} = \prod_c (S^X_c)^D = \prod_c S^X_c$, where we take the product over all (D + 1)!/[2!(D − 1)!] = D(D + 1)/2 boundary operators on the left-hand side of the equation. In three dimensions this operator has support on the boundaries and the corners of the tetrahedral lattice, where the qubits support an odd number of cell operators. It may therefore be worthwhile considering decoding strategies using alternative unified lattices in odd-dimensional color codes. One can conceive of unified lattices that use all of the restricted lattices twice. We also remark that the defects on all of the cells of all D + 1 colors of an odd-dimensional color code respect a symmetry, up to the lattice boundaries. We may therefore consider using this bulk symmetry together with each of the D(D + 1)/2 restricted lattices to produce a unified lattice for D-dimensional color codes with D odd. In contrast to the odd-dimensional case, for D-dimensional color codes with even D we have that $\prod_{u \neq v} b_{uv} = \prod_c (S^X_c)^D = \mathbb{1}$. We can therefore expect to find a consistent generalisation of the unified lattice we have presented in two dimensions to decode point defects for any even-dimensional color-code lattice that combines each of the restricted lattices of the code once and only once. Let us finally say that the arguments we have given for more general unified lattices only indicate that they can exist.
We have not made any suggestions for how the restricted lattices should be combined to produce a decoder that performs well. We leave this as an open research problem for the reader.

Single-shot error correction
Let us now elaborate on single-shot error correction [66] for the gauge color code [16] in terms of symmetries [35,66,91]. Single-shot error correction is a procedure to accurately correct errors that have occurred on a code, even if error-detection measurements are unreliable. See numerical implementations of single-shot fault-tolerant decoding algorithms for the gauge color code in Refs. [25,42], and related work using the three-dimensional subsystem toric code in Ref. [92]. Our exposition will explain how to implement syndrome estimation using minimum-weight perfect matching.
We give a brief description of the necessary details of the model, but we refer the reader to Refs. [66,91] for a complete account of the models we consider in the following discussion. The gauge color code is a subsystem code where we measure face operators to infer the values of the cell operators S X c of the three-dimensional color code. The gauge color code is self-dual, such that an equivalent discussion holds for Pauli-Z face terms used to infer the values of cell stabilizers that identify bit-flips. Given that we can correct Pauli-X and Pauli-Z errors separately, we concentrate our discussion on just the Pauli-X face operators G f and note that an equivalent discussion holds for error correction in the alternate basis.
Face operators lie at the intersection of pairs of adjacent cell operators. We say that face f has color uv if neither of its adjacent cell operators has color u or v, and recall that in three dimensions we use four colors.
Of course, due to the four-colorability of the color code lattice, the two cells separated by a face have different colors. The ordering of the pair of colors of f is not important, i.e. col(f ) = uv = vu. One additional fact we need is that the faces about one cell of the color code are three-colorable. A cell of color x has face operators on its boundary of colors uv, vw and uw, such that each qubit on the boundary of cell c touches exactly one face of each color. These facts mean that we can express cell operators as $S^X_c = \prod_{f \in \partial c,\, \mathrm{col}(f) = uv} G_f$, where u, v ≠ col(c) and here, the set ∂c denotes the set of faces on the boundary of c. Importantly, the face operators commute with all of the cell operators of the gauge color code, such that we can measure them while remaining in an eigenspace of the cell operators. Moreover, as the cell operators are obtained as the product of face operators, we can infer the values of the cell operators using the outcomes of the face measurements.
We assume an error model where, in addition to the dephasing errors that occur, a small fraction of face measurements return the value that is opposite to their true measurement outcome. Single-shot decoders typically follow a two-step process where we first estimate the error syndrome in the presence of errors in the measurement pattern, before trying to determine the qubits that have experienced errors. Here we elaborate on the syndrome-estimation step. We write down gauge checks, see Fig. 27(top left), and discuss their symmetries to show how we can perform syndrome estimation using minimum-weight perfect matching on a restricted lattice. We pay specific attention to how the symmetries of the gauge checks are changed at the lattice boundary.
We consider gauge checks that are the product of the faces taking two of the three colors at the boundary of each cell. We separate these gauge checks C u c into four colors u ∈ C, whereby one such gauge check is the product of all the faces that contain color u, i.e., faces f ∈ ∂c with col(f ) = uv for any v ∈ C. We write this explicitly as $C^u_c = \prod_{f \in \partial c,\, u \in \mathrm{col}(f)} G_f$. We show a blue gauge check C b c on a red cell in Fig. 27(top left). We note that there are no gauge checks C u c on cells of color u since, by definition, cells of color u have no faces of color col(f ) = uv for any value of v. Up to boundaries, we have four different symmetries; one associated to each color. We also point out that the values of C u c are independent of the values of the stabilizer operators S X c since, assuming no measurement errors, we have that C u c = S X c × S X c , which gives the trivial element of the stabilizer group. As such, these checks strictly identify errors among face measurement outcomes. It will be helpful to introduce some terminology. Let us call a single check that returns a −1 outcome a charge. Charges carry a single color u that corresponds to the color of the gauge check C u c = −1. Up to boundaries, the color charges of a common color respect a global symmetry. We also call a pair of color charges at a common cell a gauge defect. A gauge defect carries a color pair that corresponds to the colors of its two constituent color charges. A cell c supports a uv gauge defect if C u c = C v c = −1. We remark that each cell respects a local symmetry that guarantees that there must be an even number of color charges at cell c, namely C u c C v c C w c = +1, with u, v, w all distinct colors that are also different from the color of c.
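As an illustration of these definitions, the sketch below (a toy red cell with its face outcomes grouped by color pair; the data and function name are our own) computes the color charges at a single cell and verifies the local even-parity symmetry:

```python
def gauge_checks(face_outcomes):
    """face_outcomes: dict mapping a face's color pair to its list of +1/-1
    measurement results. The check C_u multiplies the outcomes of all faces
    whose color pair contains u. A red cell carries faces of colors gy, yb
    and gb, so it supports the checks C_g, C_y and C_b."""
    checks = {}
    for u in "gyb":
        val = 1
        for pair, vals in face_outcomes.items():
            if u in pair:
                for v in vals:
                    val *= v
        checks[u] = val
    return checks

# A single measurement error on one gy face creates a g charge and a y
# charge at the cell: an even number, as each face enters two checks.
faces = {"gy": [+1, -1], "yb": [+1, +1], "gb": [+1, +1]}
checks = gauge_checks(faces)
print(checks)  # -> {'g': -1, 'y': -1, 'b': 1}
assert list(checks.values()).count(-1) % 2 == 0  # local symmetry C_g C_y C_b = +1
```

Because every face outcome enters exactly two of the three checks, the product of the checks is always +1, which is the local symmetry stated above.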
In the bulk of the lattice, a single measurement error on some given face G f with col(f ) = uv creates gauge defects of color uv on its two adjacent cells c and c' with f ∈ ∂c, ∂c' . Specifically, we have a syndrome where C u c = C v c = C u c' = C v c' = −1, and all other gauge checks give the value +1. Color charges of some color can be separated further with additional measurement errors on additional faces of the appropriate color. We can regard these measurement errors on faces as strings on the dual lattice. In Fig. 27(top right) we show two measurement errors on the highlighted faces; one face of color rb and another of color gb. These give rise to two violated blue gauge checks; one on the red cell and another on the green cell. In the bulk we can consistently match the blue charges, as we necessarily have an even number of them.
In general we can find collections of gauge defects that can be locally corrected provided the collection respects all of the symmetries among the color charges. We can find collections that respect the color charge symmetries by matching for each of the four sets of differently colored charges separately. The edges returned from each matching produces networks of gauge defects such as that shown in Fig. 27(bottom right). These networks must respect the color charge symmetries since each edge of the network is incident to a pair of charges of a given color.
Let us now discuss how we can find a correction for these gauge defects. We note that the gauge defects have exactly two incident edges, and as such the defects form a one-dimensional chain. We can therefore assign a number to each gauge defect in order, such that the j-th defect is adjacent to the (j − 1)-th and (j + 1)-th defects, and the final defect in the chain is adjacent to the first defect. The location of the first defect is chosen arbitrarily. We can then combine pairs of defects in order to find a correction. We first combine the first and second defects. We suppose they have colors uv and uw, since they must be connected by an edge of color u. The color v may or may not be different from the color w. We find a correction to combine these two defects. This removes the first defect, and changes the second defect to give vw. Given that we have removed a pair of u charges, the network must still respect all of the global color charge symmetries. We then combine the new vw defect at the second site in the chain with the third defect. In the case that v = w, both gauge defects are corrected and we progress to combine the third defect with the next. This process is repeated sequentially along the chain until all of the gauge defects have been removed.
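The sequential combination along the chain can be sketched as follows (a minimal Python illustration, with our own encoding of gauge defects as sets of color charges; fusing two defects cancels their shared charges pairwise):

```python
def fuse_chain(defects):
    """defects: gauge-defect color pairs, ordered so that consecutive
    defects share a color (the matched edge between them). Each step fuses
    defect j into defect j+1 via the symmetric difference, which removes
    the shared pair of charges; an empty final residue means every gauge
    defect in the chain has been removed."""
    chain = [set(d) for d in defects]
    for j in range(len(chain) - 1):
        chain[j + 1] = chain[j] ^ chain[j + 1]  # cancel the shared charges
        chain[j] = set()
    return chain[-1]

# A closed chain of four gauge defects: rg -- gb -- by -- yr.
print(fuse_chain([{"r", "g"}, {"g", "b"}, {"b", "y"}, {"y", "r"}]))  # -> set()
```

An empty residue at the end of the sweep is guaranteed whenever the chain respects all of the global color-charge symmetries, matching the argument in the text.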
We remark that, in fact, we only need to perform matching on three of the four color charge symmetries. Indeed, a network of defects that respects three color charge symmetries necessarily respects the fourth. Suppose the network is supported on cells c ∈ R. One can check that $\prod_{c \in R} C^r_c C^g_c C^b_c C^y_c = +1$ (F10). To see why, let us check these cells separately. For a cell that is not yellow, for instance red, we have that C g c C b c = C y c . On yellow cells we have that C r c C g c C b c = 1, thus giving the result above. Given that Eqn. (F10) holds for a network of defects, and assuming that $\prod_{c \in R} C^u_c = +1$ for u = r, g, b, it follows from Eqn. (F10) that $\prod_{c \in R} C^y_c = +1$ must also hold. This result may be of practical benefit, as it may allow us to choose a preferred set of three matching results to find a more favourable subset of edges.
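To see this redundancy at work, consider the following toy sketch (the cell data and function name are our own illustration) that accumulates the check parities over a small defect network; once the r, g and b parities are even, the y parity comes out even automatically:

```python
def network_parities(cells):
    """cells: list of dicts mapping check color -> outcome (+1/-1) for each
    cell in the network R. Returns the product of each color's check
    outcomes over R; checks not defined on a cell are absent and count
    as +1."""
    prod = {c: 1 for c in "rgby"}
    for checks in cells:
        for u, v in checks.items():
            prod[u] *= v
    return prod

# Toy network of two non-yellow cells. The local relations hold on each
# cell (e.g. C_g * C_b = C_y on the red cell), so even r, g and b parities
# over the network force an even y parity.
cells = [{"g": -1, "b": +1, "y": -1},   # a red cell
         {"g": -1, "r": +1, "y": -1}]   # a blue cell
print(network_parities(cells))  # -> {'r': 1, 'g': 1, 'b': 1, 'y': 1}
```

This mirrors the argument from Eqn. (F10): matching the red, green and blue charges consistently leaves nothing for the yellow symmetry to detect.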
Let us finally discuss how gauge defects emerge at the boundary of the color code. At the boundary, we can produce single gauge defects that do not respect the global color charge conservation laws. Specifically, we can produce a single uv defect at a boundary of color that is neither u nor v. We therefore need to modify the matching problem to account for this at the boundary. Much like the examples we have already considered, we find that we can match differently colored charges via a boundary. For instance, we might consider pairing two differently colored charges, u and v via a boundary from which a single uv gauge defect can be produced, see Fig. 27(bottom left) where a blue charge and a yellow charge are paired via the green boundary. In general, we should consider matching all of the color charges in unison in a common matching problem where color charges of different colors, say u and v, can be matched via a path that crosses a boundary of color other than u or v. We therefore find a generalisation of the unified lattice to the case of matching gauge defects of the gauge color code.