Error-correction and noise-decoherence thresholds for coherent errors in planar-graph surface codes

We numerically study coherent errors in surface codes on planar graphs, focusing on noise of the form of $Z$- or $X$-rotations of individual qubits. We find that, similarly to the case of incoherent bit- and phase-flips, a trade-off between resilience against coherent $X$- and $Z$-rotations can be made via the connectivity of the graph. However, our results indicate that, unlike in the incoherent case, the error-correction thresholds for the various graphs do not approach a universal bound. We also study the distribution of final states after error correction. We show that graphs fall into three distinct classes, each resulting in qualitatively distinct final-state distributions. In particular, we show that a graph class exists where the logical-level noise exhibits a decoherence threshold slightly above the error-correction threshold. In this class, therefore, the logical-level noise above the error-correction threshold can retain a significant amount of coherence even for large-distance codes. To perform our analysis, we develop a Majorana-fermion representation of planar-graph surface codes and describe the characterization of logical-state storage using fermion-linear-optics-based simulations. We thereby generalize the approach introduced for the square lattice by Bravyi \textit{et al}. [npj Quantum Inf. 4, 55 (2018)] to surface codes on general planar graphs.

One of the major benefits of the surface code is its high tolerance to errors in the physical qubits [11,12]. Error rates at the theoretically estimated fault-tolerance threshold have already been reached in experiments [1]. These thresholds are usually based on the assumption that the noise acts in the form of Pauli noise, an error model in which the action on the physical qubits is given by Pauli operators chosen from a probability distribution. In the uncorrelated case, the action on single qubits can be described by the channel $\rho \mapsto (1-p)\,\rho + p_x X\rho X + p_y Y\rho Y + p_z Z\rho Z$, where $\rho$ is the state of the qubit, $X$, $Y$, $Z$ denote the Pauli operators, and $p_x$, $p_y$, $p_z$ are suitably chosen probabilities ($p = \sum_j p_j$, $j \in \{x,y,z\}$). This channel is also referred to as an incoherent single-qubit error. Based on this error model, analytic results show that an error threshold exists below which, by increasing the number of qubits in the code, the error rate for qubits encoded in the code (so-called logical qubits) can be made arbitrarily small [12]. The appeal of the incoherent error model is that all operations are from the Clifford group. This, together with the stabilizer-code nature of the surface code, implies that the effect of such errors can be efficiently simulated classically according to the Gottesman-Knill theorem [13]. This allowed the numerical establishment of high threshold rates [14,15], which gives reason for optimism that QEC and ultimately general quantum computation are achievable.
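As a minimal numerical illustration of the incoherent channel above (not part of the paper's simulations; the probabilities chosen below are arbitrary), one can verify that it is trace and Hermiticity preserving:

```python
import numpy as np

# Single-qubit Pauli (incoherent) error channel:
# rho -> (1 - px - py - pz) rho + px X rho X + py Y rho Y + pz Z rho Z.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def pauli_channel(rho, px, py, pz):
    p0 = 1 - px - py - pz
    return p0 * rho + px * X @ rho @ X + py * Y @ rho @ Y + pz * Z @ rho @ Z

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # |+><+|
out = pauli_channel(rho, 0.1, 0.05, 0.2)
assert abs(np.trace(out) - 1) < 1e-12   # trace preserving
assert np.allclose(out, out.conj().T)   # output remains Hermitian
```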
One of the limitations of the Pauli error model is that it does not include "coherent noise", e.g., errors where each qubit undergoes a unitary rotation. These kinds of errors inevitably occur (e.g., due to qubit detuning) in quantum devices, and therefore their interplay with QEC procedures needs to be understood. Mathematically, focusing on single-qubit errors, coherent errors correspond to the error channel $\rho \mapsto U\rho U^\dagger$ with $U \in SU(2)$. Theoretical studies of coherent errors suggest that they act substantially differently from incoherent errors [16][17][18][19]. In certain circumstances, they can build up quadratically faster than incoherent errors [20]. It has been shown that they affect average fidelities less than incoherent errors, but introduce higher diamond-norm error rates [21]. On the other hand, the logical-level diamond-norm error rate can scale with code distance as a more favorable power of the physical-qubit diamond-norm error for coherent than for incoherent errors [22]. It was also found that even if physical qubits experience coherent errors, the logical-level noise, especially upon averaging over error syndromes, becomes increasingly incoherent with increasing code distance [23][24][25]; quantifying this, however, involves some subtleties [26].
Simulations of QEC codes under coherent errors can give a useful picture of the resilience against this kind of noise. Direct simulations of the general coherent noise model are limited by the exponential scaling of Hilbert space dimension with the number of qubits. This may be partially sidestepped using tensor network descriptions of the surface code, using which systems up to 153 qubits have been simulated [27]. The size of the system was not sufficient to establish a threshold, but it provided evidence that using the so-called Pauli twirl to approximate coherent errors as incoherent noise on the level of physical qubits underestimates the logical error rate.
A key advance for understanding the effect of coherent errors was the recent development of an algorithm capable of simulating a subset of coherent errors with effort that scales polynomially with the system size [24]. The algorithm exploits a representation [28,29] of the surface code in terms of Majorana fermions, and links this to coherent errors via the classically efficiently simulable [30] fermion linear optics (FLO) framework. By construction, the algorithm is limited to coherent errors acting as unitary rotations about one of the axes defined by the stabilizers, e.g., U = exp(iηZ), and it was developed for surface codes defined on a square lattice.
Here we describe a general approach for representing surface codes with Majorana fermions on arbitrary planar graphs, including planar lattices, and show how the FLO-based algorithm can be adapted to this case. For incoherent errors, it was found [31,32] that by changing the lattice geometry, one can trade off resilience against phase flips for resilience against bit flips. By applying our method to various lattices, and relating Z-rotations in one lattice to X-rotations in its dual lattice, we show that a similar trade-off is present for coherent errors as well, but now for Z- and X-rotations instead of Z- and X- (that is, phase- and bit-) flips.
Furthermore, we study the distribution of states resulting from the application and correction of a coherent error and investigate whether, and if so in what sense, the logical-level noise decoheres, i.e., is approximable by a distribution of Pauli errors. We show that the answer depends on a graph classification that we establish. En route to our analysis of the logical-level coherence, we also describe a coherent decoder that takes advantage of the deterministic nature of coherent errors.

II. SURFACE CODE ON GENERAL LATTICES
Stabilizer codes, and as such surface codes, are constructed by defining a set of independent, mutually commuting products $g_j$ of Pauli operators, in particular with $g_j^2 = I$ and $g_j \neq -I$ [33,34]. The logical subspace is the subspace of the Hilbert space "stabilized" by the $g_j$: $|\psi\rangle$ is in the codespace if $g_j|\psi\rangle = |\psi\rangle$ for all $j$. The condition $g_j \neq -I$ is required for the logical subspace to be nontrivial. In order to perform error correction with such a code, first each of the stabilizers is measured. The tuple $s$ of outcomes that is obtained is referred to as the syndrome. From this syndrome, the decoder of the code computes a Pauli correction operation $C_s$ that brings the code back to a state in which all stabilizers measure +1, i.e., the logical subspace.
The surface code is a particular stabilizer code derived from the toric code [35]. It is usually defined on a patch of a square lattice with a qubit placed on each of the links, as shown in Fig. 1. Each of the vertices is associated with a Z-stabilizer, $\prod_j Z_j$, where the product is taken over the qubits adjacent to the vertex. Conversely, each plaquette, that is, a square surrounded by links, carries an X-stabilizer, $\prod_j X_j$, where the product is over the qubits on the plaquette boundary. In the toric code this pattern is placed on a torus or some other manifold without a boundary [35]. The surface code, in contrast, is a planar construction based on a patch with boundaries [9,11]. In order to obtain a finite-sized surface code, the pattern must be terminated. The choice of boundaries determines the number of encoded logical qubits. The most often used boundaries are so-called rough boundaries and smooth boundaries. The rough boundaries are made of qubits on which only one (instead of two) Z-stabilizer acts. In terms of the lattice, they are stubs pointing out of the boundary of the patch, hence the name rough boundary. While the Z-stabilizers are unchanged compared to their bulk form at such a boundary, the X-stabilizers need to be modified due to the truncation of the plaquettes. Smooth boundaries are boundaries without such stubs. Now the X-stabilizers are unchanged compared to their bulk form, but the Z-stabilizers need to be modified. The boundary stabilizers are shown in Fig. 1. A patch with two rough and two smooth boundaries, in alternation, as shown in Fig. 1, encodes one logical qubit. The logical Z-operator can be formed by a product of Z-operators acting on each of the qubits on one of the rough boundaries. Similarly, the logical X-operator can be constructed by a product of X-operators acting on each qubit on one of the smooth boundaries.

FIG. 1. A small surface code on a square lattice. The white circles mark qubits. At each vertex of the lattice a Z-stabilizer is placed, acting on all adjacent qubits. On each plaquette, that is, an area surrounded (or, at the boundary, partially surrounded) by links, an X-stabilizer is placed, acting on all qubits that are on its boundary. Examples of X- and Z-stabilizers are indicated as grey boxes. The code patch has two rough (dashed bars) and two smooth (solid bars) boundaries. For both types of boundaries an appropriately truncated stabilizer is shown.
To facilitate defining and describing surface codes on arbitrary planar graphs, we first reformulate the above discussion in terms of graphs. We illustrate our considerations in Fig. 2. For this construction, we start with a graph representing the Z-stabilizers, together with external edges that mark the rough boundaries [ Fig. 2(a)].
To distinguish the two rough boundaries from each other and to keep track of them, we add two connected virtual nodes, each connecting to all edges that belong to one of the rough boundaries, as shown in Fig. 2(b). Using the virtual nodes, we can bring the external edges, which form the rough boundaries, into the conventional notion of graphs. This enables us to formulate the next steps in terms of standard graph operations. The virtual nodes themselves will not be translated into stabilizers of the code; instead, they can be used to define the logical operators, as we shall later explain. From that graph we can obtain the graph of X-stabilizers by building the dual. The dual of a graph embedded in a surface is constructed by placing a node inside each face formed by the edges of that graph and connecting two nodes if the faces in which they were placed share a common edge. This is illustrated in Fig. 2(c). The graph that is obtained is the graph of X-stabilizers. It also contains two virtual nodes, defined as the nodes that are connected by the edge crossing the edge that connects the virtual Z-stabilizers. They are also not translated into stabilizers. Finally, the qubits are placed on the intersections of edges from the X- and Z-stabilizer graphs, where the intersection of the edges connecting virtual nodes is left out, resulting in the code patch shown in Fig. 2(d). Each stabilizer acts on all qubits it is directly connected to. The stabilizers thus defined are guaranteed to commute because a face always shares two edges with each of the vertices at its corners; the resulting node in the X-stabilizer graph will therefore share two qubits with the node from the Z-stabilizer graph. Since the overlap is on an even number of qubits, the corresponding operators commute.
Logical operators, i.e., Pauli products that commute with all stabilizers but are independent of them, can be obtained from the virtual nodes. Constructing an operator $X_L$ by building a product of X-operators over all qubits that are connected to one of the virtual nodes in the X-stabilizer graph produces an operator that commutes with all stabilizers for the same reasons the stabilizers commute with one another, i.e., it overlaps on an even number of qubits with any of the Z-stabilizers and trivially commutes with all X-stabilizers. The same holds for the operator $X_L'$ obtained by choosing the other virtual node from the X-stabilizer graph, as well as for $Z_L$ and $Z_L'$, the operators obtained by placing Z-operators on the qubits attached to the virtual nodes of the Z-stabilizer graph.
This gives a total of four operators (one for each virtual node); however, only two of them are independent: $X_L'$ can be obtained from $X_L$ by multiplication with all X-stabilizers and $Z_L'$ from $Z_L$ by multiplication with all Z-stabilizers. Hence, we need to study only $X_L$ and $Z_L$. While $X_L$ and $Z_L$ commute with all stabilizers, they anticommute with each other: we did not include the qubit at the intersection of the edges between the virtual nodes, hence $Z_L$ and $X_L$ overlap only on a single qubit. Therefore, they form a pair of logical X- and Z-operators. We choose $Z_L$ to be the logical Z-operator and $X_L$ the logical X-operator.
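The parity rules used above (two Pauli products of different type commute iff their supports overlap on an even number of qubits) can be checked on a toy patch; the five-qubit layout below is hypothetical and chosen only so that the supports realize the stated conditions:

```python
# Toy check of the commutation rules: stabilizers and logicals are
# recorded as sets of qubit indices (a hypothetical five-qubit layout).
z_stabs = [{0, 1, 2}, {2, 3, 4}]   # Z-stabilizers (vertex operators)
x_stabs = [{0, 2, 3}, {1, 2, 4}]   # X-stabilizers (plaquette operators)
z_logical = {0, 3}                 # Z_L along a rough boundary
x_logical = {0, 1}                 # X_L along a smooth boundary

def commute(support_a, support_b):
    # An X-type and a Z-type Pauli product commute iff the overlap is even.
    return len(support_a & support_b) % 2 == 0

# Stabilizers mutually commute ...
assert all(commute(z, x) for z in z_stabs for x in x_stabs)
# ... the logical operators commute with all stabilizers ...
assert all(commute(z_logical, x) for x in x_stabs)
assert all(commute(x_logical, z) for z in z_stabs)
# ... but anticommute with each other (overlap on a single qubit).
assert not commute(z_logical, x_logical)
```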
Applied to the square lattice, this construction recovers our previous discussion, as can be seen by comparing Figs. 1 and 2. However, it provides a framework for describing arbitrary planar graphs; an example is shown in Fig. 3 with the four panels describing the steps analogous to those in Fig. 2.
A surface code defined on a general planar graph shares many properties with the standard square-lattice surface code. Both are Calderbank-Shor-Steane [5] codes, which means that all stabilizers are formed either only by X- or only by Z-operators, a property we shall exploit to analyse the action of the error. Also, they can both be decoded by a minimum-weight perfect-matching (MWPM) algorithm [12,36,37]. Nevertheless, there are subtle differences, e.g., in the average stabilizer weight or the connectivity of the graph. In particular, the connectivity has been shown to influence whether the code, under incoherent errors, is more resilient against bit or phase flips [31,32].

III. ERROR MODELS
In these planar-graph surface codes, we shall study error channels of the form $\mathcal{E} = \mathcal{E}_1 \otimes \cdots \otimes \mathcal{E}_N$, where $N$ is the number of qubits and $\mathcal{E}_j$ is a single-qubit error acting on qubit $j$. Our primary focus is the study of coherent errors of the form $\mathcal{E}_j[\rho] = E_j \rho E_j^\dagger$ with $E_j = \exp(i\eta_j Z_j)$, where $\eta_j$ is a real parameter. The restriction to Z-rotations is linked to the FLO simulability of the system [24]. We note, however, that due to the duality relation between the X- and Z-stabilizer graphs, we can also study X-rotations by exchanging the graphs for their duals.
To compare the effects of coherent errors to the incoherent case, we shall also study Eq. (3) with Eq. (4) replaced by its Pauli twirl, the incoherent channel $\rho \mapsto \cos^2(\eta_j)\,\rho + \sin^2(\eta_j)\,Z_j\rho Z_j$. We shall be interested in what refinements of the finding of Ref. [27], that the Pauli twirl underestimates the coherent error, may arise in more general lattices, and in assessing how potential trade-offs between resilience against X- and Z-errors compare in the coherent and incoherent [31] cases.
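The twirl relation can be checked numerically; the sketch below (an illustration, not the paper's code) averages the conjugated rotation over the single-qubit Pauli group and compares the result to the incoherent channel:

```python
import numpy as np

# Pauli twirl of a Z-rotation U = exp(i*eta*Z): averaging P U P over the
# single-qubit Pauli group gives rho -> cos(eta)^2 rho + sin(eta)^2 Z rho Z.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

eta = 0.3
U = np.diag([np.exp(1j * eta), np.exp(-1j * eta)])       # exp(i*eta*Z)
rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # |+><+|

twirled = 0.25 * sum((P @ U @ P) @ rho @ (P @ U @ P).conj().T
                     for P in (I2, X, Y, Z))
incoherent = np.cos(eta) ** 2 * rho + np.sin(eta) ** 2 * Z @ rho @ Z
assert np.allclose(twirled, incoherent)
```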

IV. QUANTUM ERROR-CORRECTION AND ITS CHARACTERIZATION
To study the logical errors that arise, we apply $\mathcal{E}$ to the code followed by error correction $R = \sum_s R_s$ based on the MWPM decoder. Here $R_s$ is the quantum operation of measuring syndrome $s$ followed by the application of the corresponding Pauli correction. (In Sec. VII we provide a more detailed description of the procedure.) Inspired by Ref. [24], we shall investigate the logical error rate $p_L$ and the properties of the distribution of states after error correction. A key difference between the square-lattice case [24] and codes on general planar graphs is related to whether the weight of all Z-stabilizers is even and that of $Z_L$ odd. (The weight of a Pauli operator is the number of qubits on which it acts nontrivially.) In Ref. [24], an alignment of the square lattice is chosen where all Z-stabilizers have even weight and $Z_L$ has odd weight. (This does not hold in the conventional orientation shown in Fig. 1 due to the boundary stabilizers.) As explained in Ref. [24], a key consequence of this is that $R_s \circ \mathcal{E}$ acts as a unitary channel on the logical qubit; the state arising after $R_s \circ \mathcal{E}$ is $\rho_s = e^{i\theta_s Z_L}\rho\, e^{-i\theta_s Z_L}$, where neither $\theta_s$ nor the probability $P_s$ of syndrome $s$ depends on the initial logical state $\rho$. Hence, both $p_L$ and the properties of the final state $\rho_s$ can be studied via a statistical analysis of $\theta_s$. For more general graphs, apart from some special cases, this property is absent and $P_s$ depends on the initial logical state $|\psi_L\rangle$ of the code; the error-correction process thus reveals information about $|\psi_L\rangle$. In studying the logical error rate, we eliminate this $|\psi_L\rangle$ dependence by defining $p_L$ via the diamond-norm distance between the actions on the logical subspace of the identity and the average logical channel [24,38]. We are also interested in the properties of the final states $\rho_s$. To mitigate the $|\psi_L\rangle$ dependence of $P_s$ in this case, we adopt a statistical approach based on averaging with respect to a uniform distribution of $|\psi_L\rangle$ across the Bloch sphere.
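The Bloch-sphere average can be illustrated for a single coherent Z-rotation, for which the average infidelity evaluates to $(2/3)\sin^2\theta$; the Monte Carlo sketch below (illustrative only, not the paper's simulation) reproduces this value:

```python
import numpy as np

# Bloch-sphere-averaged infidelity of a coherent Z-rotation U = exp(i*theta*Z),
# estimated by sampling Haar-uniform pure states; the analytic value is
# (2/3) * sin(theta)^2.
rng = np.random.default_rng(0)
theta = 0.5
U = np.diag([np.exp(1j * theta), np.exp(-1j * theta)])

n = 200_000
psi = rng.normal(size=(n, 2)) + 1j * rng.normal(size=(n, 2))
psi /= np.linalg.norm(psi, axis=1, keepdims=True)    # uniform pure states
amps = np.einsum('ni,ij,nj->n', psi.conj(), U, psi)  # <psi|U|psi>
estimate = np.mean(1 - np.abs(amps) ** 2)

exact = (2 / 3) * np.sin(theta) ** 2
assert abs(estimate - exact) < 5e-3
```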
With $|\psi_L\rangle$ thus chosen randomly, the syndrome probability must be viewed as the probability of $s$ conditioned on the initial state being $\rho = |\psi_L\rangle\langle\psi_L|$. We shall be interested in the Bloch-sphere-averaged distance between $\rho_s$ and $\rho$. For a suitable (semi)metric $\delta^2(\rho_s, \rho)$ on the space of logical states, this is $\overline{\delta^2}(s) = \int_\Omega d\rho\, P(\rho|s)\,\delta^2(\rho_s, \rho)$, where $\Omega$ is the Bloch sphere and the conditional probability $P(\rho|s)$ enters because we are after the Bloch-sphere average given that the syndrome outcome is $s$. The combined Bloch-sphere and syndrome average is $\overline{\delta^2} = \sum_s P(s)\,\overline{\delta^2}(s)$, where $P(s) = \int_\Omega d\rho\, P(s, \rho)$ is a marginal of the joint syndrome-Bloch-sphere distribution $P(s, \rho)$. For computational convenience, for $\delta^2$ we shall use the square of the trace-norm distance between $\rho$ and $\rho_s = |\psi_s\rangle\langle\psi_s|$. That is, we consider the average infidelity conditioned on measuring syndrome $s$. Eq. (10) thus gives the average infidelity to the identity of the average logical channel. For a discussion of the relation between the average infidelity and the diamond-norm distance, see Refs. [21,25,26,39,40].

V. MAJORANA GRAPH

To study the model introduced above, we represent the surface code on a planar graph in terms of a corresponding Majorana-fermion graph. Our approach is based on that of Refs. [24,28,29]; it proceeds by representing physical qubits in terms of Majorana fermions and a local constraint. In this way, the eigenstates of the surface code are described in terms of free-fermion eigenstates of a quadratic commuting-dimer Majorana Hamiltonian projected to the physical, qubit, Hilbert space.
To obtain this representation, each qubit $j$ is encoded in four Majorana fermions $c_{j1}, c_{j2}, c_{j3}, c_{j4}$. This is referred to as the C4-encoding. The Majorana fermions satisfy $\{c_{jk}, c_{lm}\} = 2\delta_{jl}\delta_{km}$, where $\{\cdot,\cdot\}$ denotes the anticommutator and $\delta$ is the Kronecker delta. A conventional fermion, satisfying $\{d_m, d_n^\dagger\} = \delta_{mn}$, is built out of a pair of Majorana fermions via $d_m = (c_{m1} + ic_{m2})/2$. The four Majorana fermions for qubit $j$ thus correspond to two conventional fermions, hence a four-dimensional Hilbert space. To arrive at a two-dimensional Hilbert space encoding a qubit, we introduce the stabilizer $S_j = -c_{j1}c_{j2}c_{j3}c_{j4}$ and work in the subspace satisfying $S_j = 1$ for all qubits $j$. We shall refer to the $S_j$ as qubit stabilizers.
In the C4-encoding, the Pauli operators on a qubit are given by Majorana bilinears, $X_j = ic_{j1}c_{j2}$, $Y_j = ic_{j1}c_{j3}$, $Z_j = ic_{j2}c_{j3}$. We shall call these bilinears Pauli dimers. They satisfy the commutation relations of Pauli operators and commute with the qubit stabilizer $S_j$. Since C4-encoded states are stabilized by $S_j$, there is for each Pauli dimer an equivalent Pauli dimer; e.g., multiplying $Z_j = ic_{j2}c_{j3}$ by $S_j$ shows that it is equivalent to $ic_{j1}c_{j4}$. It is beneficial to represent C4-encoded qubits in terms of a Majorana graph, as shown in Fig. 4. In this graph, nodes represent Majorana fermions and the edges between them represent bilinears $ic_{ik}c_{jl}$. The edges have an orientation, indicated by arrows, reflecting the operator order: for a bilinear $ic_{ik}c_{jl}$, the arrow points from fermion $c_{ik}$ to fermion $c_{jl}$. The Pauli dimers we shall use for a single qubit $j$ are those for $X_j$ and $Z_j$. The graph for a single qubit is shown in Fig. 4(a).
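The algebra of the C4-encoding can be verified explicitly in a small Jordan-Wigner representation of four Majorana operators on two fermionic modes (an assumed representation, used only for this check):

```python
import numpy as np

# Explicit 4x4 representation of four Majorana operators (Jordan-Wigner,
# assumed for illustration): c1 = X (x) I, c2 = Y (x) I, c3 = Z (x) X, c4 = Z (x) Y.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
c = [np.kron(X, I), np.kron(Y, I), np.kron(Z, X), np.kron(Z, Y)]

# Majorana anticommutation {c_k, c_l} = 2 delta_kl.
for k in range(4):
    for l in range(4):
        anti = c[k] @ c[l] + c[l] @ c[k]
        assert np.allclose(anti, 2 * np.eye(4) * (k == l))

# Qubit stabilizer S = -c1 c2 c3 c4 squares to the identity.
S = -c[0] @ c[1] @ c[2] @ c[3]
assert np.allclose(S @ S, np.eye(4))

# Pauli dimers X = i c1 c2, Y = i c1 c3, Z = i c2 c3: each squares to I,
# commutes with S, and the X and Z dimers anticommute, as Paulis must.
Xd, Yd, Zd = 1j * c[0] @ c[1], 1j * c[0] @ c[2], 1j * c[1] @ c[2]
for P in (Xd, Yd, Zd):
    assert np.allclose(P @ P, np.eye(4))
    assert np.allclose(P @ S, S @ P)
assert np.allclose(Xd @ Zd, -Zd @ Xd)
```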
The Z- and X-stabilizers of the surface code involve products of Pauli operators from different qubits. Such products translate into products of Majorana operators, which we have the freedom to reorder, provided we keep track of the signs. As illustrated in Fig. 4(b), this freedom can be used to change from Pauli dimers to "link dimers", i.e., dimers on the links between the qubits. In the example of Fig. 4, the rearrangement is performed in such a way that the dimer $ic_{c2}c_{b3}$ is also part of the rearrangement of the neighboring stabilizer. More generally, consider a stabilizer $g$ involving a product of $n$ Pauli operators, for which we can pick a dimer representation such that the Pauli dimers can be connected by additional edges, with the joint set of added edges and Pauli dimers forming the boundary of a face in the Majorana graph. Then we can represent $g$ as a clockwise product of Majorana fermions along the boundary of that face. In this operator order, each of the Pauli dimers in $g$ has the form $is_k c_{\alpha_k} c_{\beta_k}$, where $\alpha_k$ and $\beta_k$ enumerate the double (i.e., qubit and Majorana) indices along the boundary of the face and $s_k = -1$ if this operator order is opposite to that of the original Pauli dimer ($s_k = 1$ otherwise). That is, $g = \prod_{k=1}^{n} is_k c_{\alpha_k} c_{\beta_k}$. We now rearrange the product such that it is over link dimers. A simple rebracketing is sufficient for all but the first and the $n$-th Pauli dimer. To form the link dimer $ic_{\beta_n}c_{\alpha_1}$, however, $c_{\alpha_1}$ has to be commuted through an odd number of Majorana fermions, yielding an additional factor of $-1$. Provided we absorb this additional sign in the orientation of one of the link dimers, we find that the stabilizer is expressed as a product over these. Since all but one of these have a corresponding Pauli dimer with the same orientation, the total number of clockwise-oriented edges (i.e., Pauli and link dimers) around the face of the Majorana graph is odd. By the same logic, the same holds for each of the faces that represent a stabilizer in the graph.
The Majorana graph thus has an orientation in which all faces have an odd number of clockwise oriented edges: a so-called Kasteleyn orientation. The free fermion state underlying the description of the surface code emerges from the observation that the entire set of stabilizer generators can be rearranged in the way described above and thereby be described in terms of mutually commuting link dimers. (These dimers, however, do not commute with the qubit stabilizers S j , highlighting the fact that the surface code eigenstates are not free-fermion states, but projections thereof.) For the square lattice, this is shown in Refs. [24,28,29]. In the following we shall describe an algorithm with which one can construct Majorana graphs for surface codes on arbitrary planar graphs. We shall illustrate our algorithm using the Z-stabilizer graph in Fig. 3(a) and the corresponding qubit graph in Fig. 3(d).
We start the construction of the Majorana graph M with the qubit graph Q, the graph containing both qubits and stabilizers as vertices and connecting each stabilizer to the qubits on which it acts. The construction of Q starting from any initial planar graph of Z-stabilizers is described in Sec. II [an example is shown in Fig. 3(d)]. To construct M, we first construct an intermediate graph G by taking all qubits from Q and connecting them whenever this can be done without crossing any edge in Q. Fig. 5(a) shows the graph G obtained in this way from Q in Fig. 3(d). The graph G has the property that it contains a face for every stabilizer generator, and the operator that face represents acts on the qubits on its boundary. Additionally, it has the property that every qubit, apart from those at the corners, has four edges connected to it. Qubits at the corners are different because they miss the link to the qubit that was not inserted because it would have been at the intersection of edges connecting virtual stabilizers.
To obtain the Majorana graph M, we place two Majorana fermions on each of the edges in G and associate each fermion with one of the qubits at the edge's ends. In this way, we associate with each qubit the same number of Majorana fermions as the qubit has edges connected to it. Since each vertex of G, except the four at the corners of the graph, is connected to four edges, all qubits, except those at the corners, have four Majorana fermions associated with them. For the example system we are considering, this stage of the construction is shown in Fig. 5(b). In order to encode all qubits in the C4-encoding, we add one additional Majorana fermion to each of the qubits at the corners of the code, such that these additional fermions lie in none of the faces of the graph G. This gives the complete set of vertices for the graph M.
To construct the edges of M, we first add an edge between two of its vertices if they were on the same edge in G; these edges will form the link dimers. We also add the edges associated to the C4-encoding of the qubits by adding the edges forming a face around each qubit; these edges will form the Pauli dimers. This completes the construction of the (thus far unoriented) edges of M.
Next, we have to associate some more structure to M: We must assign (i) a Kasteleyn orientation [cf. Fig. 4 and under Eq. (16)] and (ii) a placement of Pauli dimers (i.e., a numbering $c_{j1}, \ldots, c_{j4}$ of Majorana fermions) around each qubit such that the clockwise Majorana product around each face of G encodes the correct stabilizer. That such structure exists can be seen as follows. First, we consider the Pauli dimers. In the bulk of the code, the stabilizers surrounding a qubit alternate between X and Z, since qubits are placed on the intersection of an edge of the X-stabilizer graph and an edge of the Z-stabilizer graph. Similarly, the dimer representation of the Pauli operators in the C4-encoding alternates between X- and Z-dimers [cf. Fig. 4(a)]. Therefore, we can always arrange Pauli dimers such that each of them is adjacent to the stabilizer face to which it contributes. (On the edge of the code, there are fewer than four surrounding stabilizers; however, the existing adjacent stabilizers already specify the placement of X- and Z-dimers.) In a convention where X-dimers are oriented clockwise [Fig. 4(a)], there are two possible orientations for each qubit: we can choose which of the Z Pauli dimers is oriented anticlockwise. We can pick either of the two. This defines the Pauli dimer part of the Majorana graph, including the orientation of the faces around each of the qubits.
To complete the structuring of M, we must orient the link dimers such that globally a Kasteleyn orientation is obtained. To this end, we can use that each of the so far unoriented faces (edges) in M is associated to a face (edge) in the graph G. Then, for any face in G, we count the number n of clockwise-oriented Pauli dimers surrounding the corresponding face in M. If n is odd, we have to orient the edges of this face of G such that an even number of edges are clockwise (and vice versa for n even). In this way, it is sufficient to find an orientation of G such that each face has the parity of clockwise oriented edges as determined by n before. To produce this orientation for G, we can proceed similarly to the first steps of the FKT algorithm [41]. By orienting M's link dimers according to the orientation obtained for G, we have obtained a Kasteleyn orientation of M. The resulting graph and orientations for our example are shown in Fig. 5(c, d).
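The Kasteleyn condition can be checked face by face; the toy sketch below (with a hand-picked two-face graph, not one of the paper's lattices) implements the parity test that each face has an odd number of clockwise-oriented edges:

```python
# Toy check of the Kasteleyn condition: each face is given as a clockwise
# vertex list, and the orientation as a set of directed edges (u, v).
def clockwise_count(face, orientation):
    # Number of boundary edges whose orientation agrees with the
    # clockwise traversal of the face.
    n = len(face)
    return sum((face[i], face[(i + 1) % n]) in orientation for i in range(n))

def is_kasteleyn(faces, orientation):
    return all(clockwise_count(f, orientation) % 2 == 1 for f in faces)

# Two square faces sharing the edge between vertices 1 and 4;
# the orientation below is chosen by hand so that both parities are odd.
faces = [[0, 1, 4, 3], [1, 2, 5, 4]]
orientation = {(0, 1), (4, 1), (4, 3), (3, 0), (1, 2), (2, 5), (4, 5)}
assert is_kasteleyn(faces, orientation)

# Flipping a shared edge breaks the condition for both adjacent faces.
flipped = (orientation - {(4, 1)}) | {(1, 4)}
assert not is_kasteleyn(faces, flipped)
```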
The logical state of the surface code patch is defined by the state of the qubit encoded in the four unpaired corner Majorana fermions [24]. This becomes clear when we consider the Majorana encoding of the logical operators. Following steps analogous to Eq. (16), the Majorana encoding of either of the logical operators requires a new link dimer connecting two of the four corner fermions (as shown in Fig. 6); the new link dimer is oriented such that the resulting new face (which corresponds to a virtual stabilizer for G) has an odd number of edges pointing clockwise. In the initial state, in which all stabilizers measure +1, all equivalent realizations of a logical operator must have the same expectation value. Therefore, we have to add such faces for both realizations (corresponding to both of the virtual stabilizers) of the logical operator. By stabilizing the state encoded in the Majorana graph with a logical operator, we fix the logical state of the code to a +1 eigenstate of that logical operator. We thereby fix the code to be either in the $|0_L\rangle$ state by choosing to stabilize with $Z_L$ or in the $|+_L\rangle$ state by using $X_L$. To initialize the code in the $|Y_L\rangle$ state, we can pair up fermions from diagonally opposite corners of the code patch.

VI. FLO SIMULATION
In the following we describe how to use the methods introduced in Ref. [24] to sample from the distribution of syndromes and how to compute, given a syndrome $s$, the overlaps, where $C_s$ is the Pauli correction for syndrome $s$, $\boldsymbol{\eta} = (\eta_1, \eta_2, \ldots, \eta_N)$, $\boldsymbol{Z} = (Z_1, Z_2, \ldots, Z_N)$, and $\boldsymbol{\eta}\cdot\boldsymbol{Z}$ denotes the scalar product of the two. From these overlaps, the quantities characterizing the error-correction process [cf. Sec. IV] can be extracted using Monte Carlo simulation, as shown in Secs. VII and IX.
To perform these operations, we use the framework of fermion linear optics (FLO). Within this framework we have access to the following operations:
• Initializing a dimer in the +1 eigenstate.
• Applying the unitary operation $R = \exp(\eta c_i c_j)$ with an arbitrary real $\eta$.
• Projectively measuring a dimer operator, with or without post selection.
These operations preserve the fermionic Gaussian nature of a state, which can be exploited to simulate their action efficiently [30]. The limitation of the FLO algorithm is that it cannot treat quartic products of Majorana operators such as those in the qubit stabilizers $S_j$. To bypass this problem, each qubit is projectively measured in the $|\pm\rangle$-basis; we shall see that this allows working with objects involving only Majorana bilinears. Although this makes it impossible to evaluate the Z-stabilizers, such evaluation is not needed: since we apply only Z-rotations, we know that none of the Z-stabilizers can have been flipped.
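The reason rotations of the form $\exp(\eta c_i c_j)$ are FLO operations is that conjugation rotates the Majorana pair by twice the angle, so the covariance matrix of a Gaussian state transforms by a Givens rotation; this can be verified in a small explicit representation (assumed for illustration only):

```python
import numpy as np

# Conjugation by U = exp(theta * c1 c2) rotates the pair (c1, c2) by 2*theta:
# U^dag c1 U = cos(2 theta) c1 + sin(2 theta) c2.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
c = [np.kron(X, I), np.kron(Y, I), np.kron(Z, X), np.kron(Z, Y)]

theta = 0.7
A = c[0] @ c[1]                                     # (c1 c2)^2 = -I
U = np.cos(theta) * np.eye(4) + np.sin(theta) * A   # = exp(theta * c1 c2)
assert np.allclose(U @ U.conj().T, np.eye(4))       # U is unitary

lhs = U.conj().T @ c[0] @ U
rhs = np.cos(2 * theta) * c[0] + np.sin(2 * theta) * c[1]
assert np.allclose(lhs, rhs)
```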
To sample from the syndrome distribution, we sample from the eigenvalues $m_a$ of the single-qubit Pauli operators $X_a$; the eigenvalues of the X-stabilizers can then be computed classically from the $m_a$. Computing the probability of the outcome $m_a$ requires three steps on each qubit $a$: first switch to fermions and project into the C4-encoding using $(1 + S_a)/2$, then apply the coherent error $U_a = \exp(i\eta_a Z_a)$, and finally apply the projector $(1 + m_a X_a)/2$. Since $Z_a$ commutes with the qubit stabilizer $S_a$, we can perform the rotation first; the joint probability of the outcomes is then given by the expectation value of a product $\prod_{a=1}^{N} P_a(m_a)$ of such projectors with respect to a Gaussian state. Note that computing this joint distribution requires only rotations and measurements with post selection, both involving dimers only, allowing a computation using the FLO operations introduced above. However, this by itself does not offer a route to efficiently sample from the exponentially many outcomes $\boldsymbol{m}$. Ref. [24] showed how one may do this qubit by qubit, thus breaking the sampling down to a repeated sampling from just two states. The success of this approach hinges on choosing a correct order in which to measure the qubits: the order must be such that the graph G stays connected when removing, after every measurement, the qubit that was measured. Such an ordering can be obtained for our graphs by performing a breadth-first search through the graph; the obtained order can be used in reverse.
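The reversed breadth-first-search ordering can be sketched as follows; the small qubit graph is hypothetical, and the check confirms that the remaining graph stays connected after each measured qubit is removed:

```python
from collections import deque

# BFS order of a (hypothetical) qubit graph; used in reverse as the
# measurement order, so that removing each measured qubit always leaves
# the remaining graph connected.
def bfs_order(adj, start):
    seen, order, queue = {start}, [], deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return order

def connected(adj, nodes):
    # Connectivity of the subgraph induced by `nodes`.
    if not nodes:
        return True
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(w for w in adj[v] if w in nodes and w not in seen)
    return seen == nodes

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
measure_order = list(reversed(bfs_order(adj, 0)))

remaining = set(adj)
for q in measure_order:
    remaining.remove(q)
    assert connected(adj, remaining)  # graph stays connected throughout
```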
From the sampled $\bm{m}$, we compute the syndrome $s$ classically using suitable products of the $m_a$ for the corresponding $X$-stabilizers. From $s$, the decoder produces the correction $C_s$. Since, by construction, no $Z$-stabilizer is flipped, the correction contains only operators that correct $X$-stabilizers: $C_s$ contains only $Z$-operators. Using this, we define $\bm{\eta}_s$ according to $\exp(i\bm{\eta}_s\bm{Z}) = C_s \exp(i\eta\bm{Z})$, that is, we absorb the correction operations into the parameters of the coherent rotations. The quantity we aim to compute is $\langle +_L|\exp(i\bm{\eta}_s\bm{Z})|+_L\rangle$. We can expand $|+_L\rangle$ in the computational basis; it is given by the sum over the set $L$ of all computational basis states that satisfy all $Z$-stabilizers. In the ensuing manipulations we use that $\exp(i\bm{\eta}_s\bm{Z})$ is diagonal in the computational basis to replace the summation over $y\in L$ by a summation over all computational basis states, and that $2^{N/2}|+\rangle^{\otimes N} = \sum_{y\in\{0,1\}^N}|y\rangle$. For the simulation, the operator $Z_L$ can be absorbed into $\bm{\eta}$ in the same way we absorbed $C_s$. In a similar fashion, we can compute the corresponding ratio, i.e., the same expectation values but starting with the $|Y_L\rangle$ state. This can be done by initializing the simulation in a different state such that the logical state is given by $|Y_L\rangle$.
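The absorption of the Pauli correction into the rotation angles rests on the single-qubit identity $Z e^{i\eta Z} = -i\, e^{i(\eta+\pi/2)Z}$: each $Z_a$ appearing in $C_s$ shifts $\eta_a$ by $\pi/2$, up to a global phase. A quick numerical check of this identity (the variable names are ours):

```python
import numpy as np

Z = np.diag([1.0, -1.0]).astype(complex)

def zrot(eta):
    """exp(i * eta * Z) for a single qubit."""
    return np.cos(eta) * np.eye(2) + 1j * np.sin(eta) * Z

eta = 0.13 * np.pi
# Z * exp(i eta Z) equals exp(i (eta + pi/2) Z) up to the global phase -i,
# so a Z-correction is absorbed by shifting the rotation angle by pi/2.
lhs = Z @ zrot(eta)
rhs = -1j * zrot(eta + np.pi / 2)
```

Since global phases drop out of the expectation values of interest, shifting the affected angles by $\pi/2$ implements the absorption exactly.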

VII. AVERAGE LOGICAL CHANNEL
In the following, we explain how to obtain the full action of the average logical channel from the observables $q_s$, $r_s$ that are accessible via the FLO simulation.
To this end, we first need a description of the recovery procedure $\mathcal{R}$. The recovery scheme consists of two steps. First, all stabilizers are measured; this projects the state into one syndrome $s$. This projection is performed by the projector $\Pi_s$. Next, depending on the syndrome $s$, the decoder chooses a correction operation $C_s$. The combined recovery is given by $\mathcal{R}[\rho] = \sum_s C_s \Pi_s \rho\, \Pi_s C_s^\dagger$. Since $C_s$ maps between the space in which the stabilizers have the syndrome $s$ and the logical subspace, we can represent $\Pi_s = C_s \Pi_0 C_s^\dagger$, where $\Pi_0$ denotes the projection onto the logical subspace. Using this relation and the fact that the corrections $C_s$ are Pauli operators and thus satisfy $C_s = C_s^\dagger$, the operation $\mathcal{R}$ can be expressed as $\mathcal{R}[\rho] = \sum_s \Pi_0 C_s \rho\, C_s \Pi_0$. The correction $\mathcal{R}$ together with the error $\mathcal{E}$ maps any state of the logical subspace back to the logical subspace. Within that subspace, it is the average logical channel $\Lambda_L[\rho] = \sum_s \Pi_0 C_s e^{i\eta\bm{Z}}\, \rho\, e^{-i\eta\bm{Z}} C_s \Pi_0$, where we introduced the projectors $\Pi_0$ to remind us that we view $\Lambda_L[\rho]$ as a quantum channel on the logical subspace.
Since we have an algorithm to sample from the distribution of syndromes, we study the action corresponding to an individual syndrome $s$, $\mathcal{R}_s[\mathcal{E}[\rho]] = D_s \rho D_s^\dagger$, where we introduced $D_s = \Pi_0 C_s \exp(i\eta\bm{Z})\Pi_0$. We consider the action of $D_s$ in the logical subspace. $D_s$ commutes with the logical $Z_L$ operator; therefore, $D_s$ is diagonal in the $Z_L$-basis and hence can be represented as $D_s = a_s \pi_0 + b_s \pi_1$, with $a_s, b_s \in \mathbb{C}$ and $\pi_i = |i_L\rangle\langle i_L|$ the projector on the logical state, $i \in \{0,1\}$. The action of $\Lambda_L$ follows from summing the $\mathcal{R}_s$ over $s$: in the $Z_L$-basis, the diagonal entries of $\rho$ are multiplied by coefficients $\alpha$ and $\beta$ built from the $|a_s|^2$ and $|b_s|^2$, and the off-diagonal entries by a coefficient $\gamma$ built from the $a_s\bar{b}_s$, where the bar indicates complex conjugation. Since $\Lambda_L$ is trace preserving, we have $\alpha = 1$ and $\beta = 1$. Thus, the entire action of $\Lambda_L$ is encoded in the single complex parameter $\gamma$. Furthermore, by the proportionality of Eq. (34) to the action of a unitary channel minus the identity, we can read off [21,24,42] the diamond-norm [35] distance of $\Lambda_L$ from the identity channel in terms of $\gamma$. We wish to estimate $\gamma$ using Monte Carlo simulation. Using the FLO approach, we are able to sample from the distribution of syndromes starting from the initial state $|+_L\rangle$; the syndrome probability $P_s^+$ is given in Eq. (36). To estimate $\gamma$ we seek a quantity $c_s$ accessible from the simulation such that $\sum_s P_s^+ c_s = \gamma$ [Eq. (37)]; in this way, the Monte Carlo average of $c_s$ over sampled syndromes converges to $\gamma$. Using the simulation algorithm introduced in Sec. VI, we have access to the quantities $q_s$ and $r_s$ of Eq. (38). By expanding both expressions in $a_s$ and $b_s$ and reordering, we find relations for $\gamma$ in terms of $q_s$ and $r_s$; conveniently, both expressions match the form of Eq. (37). Therefore, we can approximate $\gamma$ using a Monte Carlo approximation of the corresponding sum.
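The Monte Carlo step itself is generic: since $\gamma = \sum_s P_s^+ c_s$, drawing syndromes from $P_s^+$ and averaging $c_s$ gives an unbiased estimate. A sketch of this estimator (the sampler and the function computing $c_s$ are stand-ins for the FLO routines, not the paper's actual code):

```python
import random

def estimate_gamma(sample_syndrome, c_of, n_samples=10000, seed=1):
    """Monte Carlo estimate of gamma = sum_s P_s^+ c_s as a sample mean.
    `sample_syndrome(rng)` draws a syndrome s ~ P_s^+ and `c_of(s)` returns
    the per-syndrome quantity c_s; both are assumed interfaces."""
    rng = random.Random(seed)
    total = sum(c_of(sample_syndrome(rng)) for _ in range(n_samples))
    return total / n_samples
```

The statistical error of the estimate decreases as $1/\sqrt{n_{\rm samples}}$, which sets the sample counts quoted in Sec. VIII.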

VIII. THRESHOLD
Simulations of surface codes on various lattices have shown that the thresholds of the codes depend significantly on the connectivity of the lattice [31,32]. In the following, we study lattices with different connectivity under coherent and incoherent errors.
In the choice of lattices, we follow Ref. [31] and perform simulations for the square, kagome, hexagonal, and $(3,12^2)$ (triangle-hexagonal, also referred to as tri-hex) lattices, and their duals. (In Fig. 7, the graphs of the $X$-stabilizers are sketched above the threshold plots; they are, from left to right: dual of tri-hex, dual of hexagonal, dual of kagome, square, kagome, hexagonal, and tri-hex.) For the square lattice, we study codes with distances 25 (625 qubits), 37 (1369 qubits), and 49 (2401 qubits); for the other lattices, we study system sizes with a comparable number of qubits.
For each surface code we perform two simulations: one for coherent errors [i.e., with $\mathcal{E}$ using Eqs. (3) and (4)] and one with incoherent errors using the Pauli twirl [i.e., with $\mathcal{E}$ using Eqs. (3) and (5)]. For simplicity, we apply the same error to all of the qubits, $\eta_j = \eta$. For both error models and all lattices, we first simulate an initial overview spanning from $\eta = 0.04\pi$ to $\eta = 0.16\pi$ in $0.01\pi$ steps, with 10000 Monte Carlo samples for the coherent error and 40000 Monte Carlo samples for the incoherent error. The results are shown in Fig. 7. We then estimate the thresholds $\eta_{\rm th}$ by first estimating their position from this overview, then performing a simulation in $0.001\pi$ steps around the estimated position, and fitting a finite-size scaling ansatz [15]. Our threshold estimates are shown in Figs. 7 and 8. We find that for these lattices the coherent thresholds are consistently higher than the incoherent ones (or are at best comparable to them, as for the tri-hex lattice). Our results also indicate that for $\eta < \eta_{\rm th}$ the logical error rate decreases with code distance more slowly for coherent than for incoherent errors.
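The finite-size scaling fit referred to above is conventionally [15] a low-order polynomial in the rescaled variable $x = (\eta - \eta_{\rm th})\, d^{1/\nu}$, with code distance $d$ and critical exponent $\nu$. A sketch of such a fit on synthetic data (the quadratic ansatz and all parameter values here are standard illustrative choices, not the paper's data):

```python
import numpy as np
from scipy.optimize import curve_fit

def scaling_ansatz(X, eta_th, nu, A, B, C):
    """Quadratic finite-size-scaling ansatz: the logical failure rate is a
    smooth function of x = (eta - eta_th) * d**(1/nu) near the threshold."""
    eta, d = X
    x = (eta - eta_th) * d ** (1.0 / nu)
    return A + B * x + C * x**2

# Synthetic data with a known threshold, just to exercise the fit.
true_params = (0.10 * np.pi, 1.5, 0.30, 1.2, 2.0)
etas = np.linspace(0.08 * np.pi, 0.12 * np.pi, 9)
dists = np.array([25, 37, 49])
eta_grid, d_grid = np.meshgrid(etas, dists)
X = (eta_grid.ravel(), d_grid.ravel())
p_fail = scaling_ansatz(X, *true_params)

popt, _ = curve_fit(scaling_ansatz, X, p_fail,
                    p0=(0.09 * np.pi, 1.0, 0.2, 1.0, 1.0))
```

At $\eta = \eta_{\rm th}$, $x = 0$ for every $d$, so the curves for all code distances cross; this crossing is what the fit locates.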
The trade-off between resilience against bit- and phase-flips that is obtainable in the incoherent (twirled) case is reflected by the thresholds approaching [31], in the limit of zero asymptotic encoding rate $R \to 0$, the bound $h(p_x) + h(p_z) = 1$. (For our case of a single encoded qubit, $R = 1/N$.) Here $h$ is the binary Shannon entropy and $p_x$ and $p_z$ are the probabilities of $X$- and $Z$-flips on individual physical qubits [33]. In our case, the thresholds are parametrized by $\eta$, i.e., $p_{z,\rm th} = \sin^2\eta_{\rm th}$, and $p_{x,\rm th}$ is obtained by lattice duality. In Fig. 8 we visualize this bound. The results show that the trade-off between resilience against bit- and phase-flips translates, for coherent errors, to a trade-off between resilience against $X$- and $Z$-rotations. In Ref. [31] it is argued that the trade-off for incoherent errors is present because it is easier for the MWPM decoder to match up syndromes in a sparse graph. It is reasonable to assume that a similar effect also causes the trade-off in the coherent case. However, considering Fig. 8, unlike for the incoherent thresholds, there does not appear to be a universal curve delineating this trade-off for the coherent thresholds. To provide further evidence for the absence of such a universal curve, we construct a lattice that is self-dual and therefore can be directly compared to the square lattice, i.e., its thresholds for $X$- and $Z$-errors are equal by design. We call this lattice the "doubly-odd" lattice; it has faces with 3 and 5 vertices (see Fig. 9 inset). We compare the results for this lattice, both for coherent and incoherent errors, to the square-lattice case in Fig. 9. The results show that the coherent threshold for the doubly-odd lattice is significantly higher than that for the square lattice. For incoherent errors, however, the thresholds for the two lattices are very close, consistent with the observation of Ref. [31] that most surface codes with a MWPM decoder perform very close to the bound of Eq. (42).
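As a sanity check of the bound $h(p_x) + h(p_z) = 1$, one can solve it numerically for the bounding curve; at the self-dual point $p_x = p_z = p$ it reduces to $h(p) = 1/2$, giving the familiar zero-rate value $p \approx 0.110$. A short sketch (the bisection helper and function names are ours):

```python
import math

def binary_entropy(p):
    """Binary Shannon entropy h(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def solve_h(target, lo=1e-12, hi=0.5, iters=100):
    """Bisection for the p in (0, 1/2] with h(p) = target
    (h is monotonically increasing on this interval)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if binary_entropy(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def px_bound(p_z):
    """Largest p_x admitted by h(p_x) + h(p_z) = 1 for a given p_z."""
    return solve_h(1.0 - binary_entropy(p_z))
```

Sweeping `px_bound` over $p_z$ traces the universal curve against which the twirled thresholds of Fig. 8 are compared.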

IX. FINAL-STATE DISTRIBUTION
To gain further insight into the properties of the states after error correction, we study the action of the error and correction process conditioned on the individual syndromes. The final states have certain properties that depend on properties of the stabilizer group. It turns out that the parity of the stabilizers is of central importance. We shall generalize the property [24] that for codes which have only even-weight $Z$-stabilizers together with an odd-weight logical $Z_L$-operator, a coherent $Z$-error followed by a correction acts as a unitary operation. We will assess the different symmetries that are present in the lattices we considered above and determine the consequences for the final-state distributions. Note that we are now investigating properties of the error-correction process based on properties of the $Z$-stabilizer graph, while the argument that it is easier to correct errors in sparse graphs was based on the $X$-stabilizer graph.
We start by considering an individual syndrome $s$ that is corrected with an operator $C_s$ made up only of $Z$-operators. In the following, we denote a string of $Z$-operators by $Z(b)$, where $b$ is a bit-string of length $N$ whose value is one (zero) for qubits on which $Z(b)$ acts nontrivially (trivially). In particular, $Z(0) = I$. Expanding both the error and the correction using this representation yields [24] $C_s \exp(i\eta\bm{Z}) = \sum_{g\in B} c_g\, i^{|g|}\, Z(h_s \oplus g)$, where $h_s$ is the bit-string encoding the correction operation, the $c_g$ are real coefficients, $|g|$ denotes the Hamming weight of the bit-string $g$, $B$ is the set of all length-$N$ bit-strings, and $\oplus$ is addition modulo 2.
As discussed in Sec. VII, $D_s$ is diagonal in the logical space of the code. Therefore, we can represent it as $D_s = (k_s + l_s Z_L)\Pi_0$ with suitable coefficients $k_s$ and $l_s$. When expressing $k_s$ and $l_s$ as a sum over contributions from Eq. (44), $k_s$ is formed by the terms in which $Z(h_s \oplus g)$ acts trivially on the logical space, i.e., those for which $Z(h_s \oplus g)$ is within the $Z$-stabilizer group. Conversely, $l_s$ is formed by those terms in which $Z(h_s \oplus g)$ acts as the logical $Z_L$ operator, i.e., those corresponding to $Z_L$ times a stabilizer. Hence, $k_s$ ($l_s$) collects the terms with $h_s \oplus g \in A$ ($h_s \oplus g \in A\oplus l$), where $A$ is the set of all bit-strings corresponding to operators in the $Z$-stabilizer group, $l$ is the bit-string that encodes $Z_L$, and $A\oplus l$ denotes the set $\{a\oplus l\,|\,a\in A\}$.
Since $g$ runs over all bit-strings, we can convert Eq. (46) into sums over $A$: $k_s = \sum_{a\in A} c_{h_s\oplus a}\, i^{|h_s\oplus a|}$ and $l_s = \sum_{a\in A} c_{h_s\oplus a\oplus l}\, i^{|h_s\oplus a\oplus l|}$. We next study the complex phase of $k_s$ and $l_s$. In general, $A$ contains both even- and odd-weight bit-strings and we cannot make a definite statement. If, however, all bit-strings in $A$ have even weight, the exponent of $i$ has the same parity for all the terms in each sum in Eq. (47). This constrains $k_s$ and $l_s$ to each be either real or imaginary, with their relative phase set by $l$. Therefore, we can distinguish between two families: codes in which all $Z$-stabilizers are of even weight and codes that do not satisfy this condition.
In the case that not all $Z$-stabilizers are of even weight, we can make no further statements about the distribution. In fact, checking the distribution obtained for a code based on a $Z$-stabilizer graph defined on a hexagonal lattice, i.e., a system in which the majority of the $Z$-stabilizers are of weight 3, we find [Fig. 10(a)] that at least with some probability all parts of the Bloch sphere of final states are reached.
For those lattices for which every $Z$-stabilizer has even weight, we can identify the relative complex phase between $l_s$ and $k_s$. For this phase, the parity of $h_s$ is irrelevant since, whether it is odd or even, it affects both $l_s$ and $k_s$ in the same way. However, the weight of $Z_L$ affects only $l_s$. Hence, within the family in which all $Z$-stabilizers are of even weight, we have two subfamilies: those with even-weight $Z_L$ and those with odd-weight $Z_L$.
The case that $Z_L$ is of odd weight was already explored in Ref. [24]: we have $\mathrm{Im}\,k_s = 0$ and $\mathrm{Re}\,l_s = 0$, so that $D_s \propto U_s$, where $U_s$ is a unitary operator. That is, the state $\rho_s$ satisfies Eq. (6). In consequence, the logical state is constrained to a circle on the Bloch sphere that is parallel to the $XY$-plane [Fig. 10(b)]. The distribution of states that can be obtained starting from the $|+_L\rangle$ state is shown in Fig. 10(c).
If $Z_L$ is of even weight, we have $\mathrm{Im}\,k_s = 0$ and $\mathrm{Im}\,l_s = 0$. The operation $D_s$ in that case is given by the (unusual) real combination of the identity and $Z_L$. Equations (49) and (50), for the cases $|k_s| > |l_s|$ and $|l_s| > |k_s|$, respectively, show that syndrome measurements reveal information about the $Z_L$-polarization of the initial logical state $|\psi_L\rangle$. They also imply that the final state after the action of $D_s$ lies on the circle spanning $|0_L\rangle$, $|1_L\rangle$, and $|\psi_L\rangle$. This is also illustrated in Fig. 10(b), and a numerical example is shown in Fig. 10(d). This leaves us with three classes of lattices to build $Z$-stabilizer graphs, and we have examples for each:
• containing odd-weight $Z$-stabilizers: tri-hex, dual of tri-hex, hexagonal, and dual of kagome;
• all even-weight $Z$-stabilizers, odd-weight $Z_L$: square, dual of hexagonal;
• all even-weight $Z$-stabilizers, even-weight $Z_L$: kagome.
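The three-way classification above depends only on weight parities, and since $|a\oplus b| = |a| + |b| - 2|a\wedge b|$, weights add modulo 2 under products of $Z$-strings: even-weight generators guarantee an all-even $Z$-stabilizer group, so checking the generators suffices. A minimal sketch (the function name and the toy supports in the test are ours, not one of the paper's lattices):

```python
def classify_code(z_stabilizer_supports, logical_support):
    """Classify a code by the weight parities of its Z-stabilizer generators
    and of the logical Z_L.  Supports are sets of qubit indices.  Parities of
    the generators fix the parity of the whole Z-stabilizer group, because
    Hamming weights add mod 2 under XOR of supports."""
    if any(len(s) % 2 for s in z_stabilizer_supports):
        return "odd-weight Z-stabilizers"
    if len(logical_support) % 2:
        return "even Z-stabilizers, odd Z_L"
    return "even Z-stabilizers, even Z_L"
```

Per the bullet list above, e.g. the hexagonal lattice falls into the first class, the square lattice into the second, and the kagome lattice into the third.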
To compare the final-state distributions for the different lattices, we proceed as explained in Sec. IV. We have $|\psi_s\rangle = D_s|\psi_L\rangle/\sqrt{P(s|\rho)}$ with $P(s|\rho) = \langle\psi_L|D_s^\dagger D_s|\psi_L\rangle$ (recall, $\rho = |\psi_L\rangle\langle\psi_L|$). To prepare for the Bloch-sphere average, we parameterize $|\psi_L\rangle = |\theta,\phi\rangle = \cos(\theta/2)|0_L\rangle + \sin(\theta/2)e^{i\phi}|1_L\rangle$ (52). After some manipulation, and using Eq. (30), we obtain the infidelity $\delta_s^2$ of $|\psi_s\rangle$ with respect to $|\psi_L\rangle$. Using $P(\rho|s) = P(s|\rho)P(\rho)/P(s)$, we obtain the Bloch-sphere average $\langle\delta_s^2\rangle_\Omega$, where we used that for $P(\rho)$ uniformly distributed over the Bloch sphere, $\int_\Omega d\rho\, P(\rho)\sin^2\theta = 2/3$. Note that $P(s)$ is the probability in Eq. (36). Hence, Eq. (55) is entirely in terms of quantities that can be extracted from the FLO-based simulation. We sample from $\langle\delta_s^2\rangle_\Omega$ for three values of the error parameter $\eta$ for each lattice: approximately the threshold value ($\eta \approx \eta_{\rm th}$) and $\eta \approx \eta_{\rm th} \pm 0.01\pi$. The results are shown in Fig. 11. [Fig. 11 shows these distributions for the lattices of Fig. 7, with a sketch of the corresponding $Z$-stabilizer graph above each distribution (these are the duals of the graphs in Fig. 7); the rows correspond to $\eta \approx \eta_{\rm th} - 0.01\pi$ (top), $\eta \approx \eta_{\rm th}$ (middle), and $\eta \approx \eta_{\rm th} + 0.01\pi$ (bottom), with samples taken for three system sizes: $O(500)$ qubits (red), $O(1500)$ qubits (green), and $O(2500)$ qubits (blue).] For all simulations, we observe that $\langle\delta_s^2\rangle_\Omega$ has sharp peaks around $2/3$ and $0$; these values correspond to $|\psi_s\rangle = Z_L|\psi_L\rangle$ and $|\psi_s\rangle = |\psi_L\rangle$, respectively. This shows that the coherent $Z$-rotations for the codes at the distances we study can be well approximated by a distribution of Pauli errors. However, in contrast to the effect of an incoherent error, each state $\rho_s$ after the operation $\mathcal{R}_s \circ \mathcal{E}$ is still a pure state; we obtain a mixed state only if the information of the syndrome outcome $s$ is deleted (i.e., only for the output of $\Lambda_L$).
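The factor $2/3$ in the Bloch-sphere average is simply the uniform-sphere mean of $\sin^2\theta$: $\frac{1}{4\pi}\int \sin^2\theta\, \sin\theta\, d\theta\, d\phi = \frac{1}{2}\int_0^\pi \sin^3\theta\, d\theta = 2/3$. A quick numerical confirmation of this step:

```python
import math
from scipy.integrate import quad

# Uniform measure on the sphere reduces to (1/2) sin(theta) d(theta) after
# the trivial phi integral; average sin^2(theta) against it.
avg_sin2, _ = quad(lambda th: 0.5 * math.sin(th) ** 3, 0.0, math.pi)
```
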
The distributions show patterns characteristic of the lattice geometry. In particular, while below the threshold all lattices have a distribution that is increasingly concentrated on $0$ and $2/3$ with increasing code distance, slightly above the threshold ($\eta \approx \eta_{\rm th} + 0.01\pi$) this increasing concentration can be observed only for codes containing only even-weight stabilizers.

X. NOISE DECOHERENCE THRESHOLDS AND THE COHERENT DECODER
The code-distance dependence of the final-state distributions in Fig. 11 suggests that, at least for certain lattices, a second threshold $\eta^c_{\rm th}$ might exist such that increasing the code distance makes the logical-level noise increasingly Pauli-like only for $\eta < \eta^c_{\rm th}$. To study the existence of such a decoherence threshold, we first invoke a notion [25] of coherence for the operation $\mathcal{R}_s \circ \mathcal{E}$ on the logical subspace. For a logical state $\rho$, we have (with proportionality factor $|k_s|^2 + |l_s|^2$) $(\mathcal{R}_s \circ \mathcal{E})[\rho] \propto (1 - P_s)\rho + P_s Z_L \rho Z_L + c_s Z_L \rho + \bar{c}_s \rho Z_L$, with $P_s = |l_s|^2/(|k_s|^2+|l_s|^2)$ and $c_s = l_s\bar{k}_s/(|k_s|^2+|l_s|^2)$. The coherent part can be defined [25] as the non-Pauli contribution to Eq. (57), quantified by $c_s$. The coherent part is much smaller than the Pauli part if $|c_s| \ll 1$, i.e., since $|c_s| = \sqrt{P_s(1-P_s)}$, if $P_s$ is either close to zero or one. (A good Pauli approximation thus requires an action that is either nearly pure identity or nearly pure $Z_L$.) This precisely corresponds to $\langle\delta_s^2\rangle_\Omega$ being $0$ or $2/3$, because $\langle\delta_s^2\rangle_\Omega = 2P_s/3$.
We next introduce a linearized proxy for $|c_s|$: $\langle\delta_s^2\rangle^c_\Omega = \min\{\langle\delta_s^2\rangle_\Omega,\; 2/3 - \langle\delta_s^2\rangle_\Omega\}$. The smaller $|c_s|$ is, the smaller is $\langle\delta_s^2\rangle^c_\Omega$, and vice versa. The distribution of $\langle\delta_s^2\rangle^c_\Omega$ is obtained from the $\langle\delta_s^2\rangle_\Omega$ distribution by mirroring, around $1/3$, the part above $1/3$ onto the lower values.
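The mirroring can be sketched in one line (our variable names):

```python
def delta2_coherent(delta2, top=2.0 / 3.0):
    """Fold the Bloch-sphere-averaged infidelity around top/2: both 0
    (pure identity action) and top (pure Z_L flip) map to 0, while the
    maximally coherent midpoint top/2 is a fixed point."""
    return min(delta2, top - delta2)
```

Since $\langle\delta_s^2\rangle_\Omega = 2P_s/3$, this fold is $(2/3)\min\{P_s, 1-P_s\}$, which is monotone in $|c_s| = \sqrt{P_s(1-P_s)}$, as claimed.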
The quantity $\langle\delta_s^2\rangle^c_\Omega$ can also be interpreted as the average infidelity obtained by choosing between the Pauli correction $C_s$ and its alternative, $C'_s = C_s Z_L$, whichever minimizes $\langle\delta_s^2\rangle_\Omega$. With the alternative correction, $P_s$ is replaced by $1 - P_s$, from which we obtain the minimization above. The latter interpretation is reminiscent of the optimal decoder [12,15,43-46]: this calculates, given an error model, whether the syndrome $s$ is more likely to require correction with $C_s$ or $C_s Z_L$. For the optimal decoder for incoherent errors, this calculation holds for any initial logical state $\rho$ and requires no choice of error measure.
Here, we optimize the Bloch-sphere average and target a concrete error measure. [A $\rho$-independent variant optimizing $\theta_s$ is, however, possible for lattices corresponding to Eq. (6).] Our approach is tailored to coherent errors: it takes advantage of their deterministic nature in an essential manner. Hence, we can refer to $\langle\delta_s^2\rangle^c_\Omega$ as the average infidelity for the "coherent" decoder.
To assess whether a decoherence threshold $\eta^c_{\rm th}$ exists, we study the syndrome average $\Delta = \sum_s P(s)\, \langle\delta_s^2\rangle^c_\Omega$, which we simulate using Monte Carlo sampling. The results obtained for the lattices and sizes introduced in Sec. VIII are shown in Fig. 12. In these averages, we can readily observe qualitative effects related to the three $Z$-stabilizer graph classes introduced in Sec. IX: the three classes can be distinguished by the plateau value of $\Delta$ attained when the rotation parameter $\eta$ is sufficiently beyond $\eta_{\rm th}$. (A different behavior sets in upon approaching the $S$-gate limit $\eta \approx \pi/4$; we henceforth focus on $\eta$ sufficiently below this value.) For $\eta$ in this regime, we find that the codes with even-weight $Z$-stabilizers and odd-weight $Z_L$ have the smallest $\Delta$, followed by those with odd-weight $Z$-stabilizers; those with even-weight $Z$-stabilizers and even-weight $Z_L$ have the largest $\Delta$. The plateau value of $\Delta$ for lattices with odd-weight $Z$-stabilizers can be estimated by assuming that the ensemble resulting from the action of the coherent decoder on the initial state $|+_L\rangle$ corresponds to states uniformly distributed over the Bloch hemisphere closest to $|+_L\rangle$; this results in the value given in Eq. (62). We can estimate the plateau of $\Delta$ for lattices with even-weight $Z$-stabilizers and odd-weight $Z_L$ similarly: since in these lattices the accessible states, starting from $|+_L\rangle$, are the part of the equator spanning from $|Y_L\rangle$ over $|+_L\rangle$ to $|-Y_L\rangle$, the average is given by Eq. (63). Both of these values are indicated in Fig. 12 and fit the simulation well. In the case of even-weight $Z$-stabilizers and even-weight $Z_L$, an estimate based on the final states being evenly distributed among the accessible states would give the same value as Eq. (63), but this is not the distribution we numerically observe. Instead, we find that the final states are increasingly concentrated around $|0_L\rangle$ and $|1_L\rangle$ upon increasing $\eta$. This explains the significantly higher plateau value in comparison to the lattices with even-weight $Z$-stabilizers and odd-weight $Z_L$.
We now investigate the existence of a decoherence threshold $\eta^c_{\rm th}$. To this end, we study the code-distance dependence of $\Delta$. As already suggested by the final-state distributions in Fig. 11, we find qualitatively different behavior for graphs that include odd-weight $Z$-stabilizers and for those with only even-weight $Z$-stabilizers. In the former case, using a fitting procedure similar to that of Sec. VIII, we observe a decoherence threshold at $\eta^c_{\rm th} \approx \eta_{\rm th} + 0.01\pi$: the value of $\Delta$ decreases with code distance only for $\eta < \eta^c_{\rm th}$, but it increases for $\eta > \eta^c_{\rm th}$. For graphs with even-weight $Z$-stabilizers, we find that if a threshold exists, it is at a much higher value of $\eta$; however, we could not clearly establish threshold behavior.
These observations highlight that the sense in which increasing the code distance decoheres the logical-level noise for $\eta \gtrsim \eta_{\rm th}$ depends on the graph class. While for graphs with even-weight $Z$-stabilizers our findings are consistent with the final-state distribution being increasingly well approximated by that resulting from a distribution of Pauli errors, for the complementary graph class the fact that $\eta^c_{\rm th} \gtrsim \eta_{\rm th}$ implies that the action of $\mathcal{R}_s \circ \mathcal{E}$ on the logical subspace can retain significant coherence for $\eta \gtrsim \eta_{\rm th}$; the impact of the coherent part of the logical error is suppressed only upon averaging $\rho_s$ over syndromes (i.e., only on the average-logical-channel level).

XI. CONCLUSION
We described how the C4-encoding of qubits can be used to obtain a Majorana-fermion representation of surface codes on arbitrary planar graphs, and we characterized logical-state storage under coherent Z-rotations (or coherent X-rotations) using FLO-based simulations. These methods generalize the approach introduced for the square lattice by Ref. [24].
We studied surface codes on lattices with varying connectivity and estimated the average-logical-channel threshold values $\eta_{\rm th}$ of the rotation parameter. Comparing $\eta_{\rm th}$ to the thresholds $\eta^P_{\rm th}$ for the Pauli twirl of the physical-qubit coherent error, we found that while $\eta_{\rm th}$ and $\eta^P_{\rm th}$ are similar, the inequality $\eta_{\rm th} \geq \eta^P_{\rm th}$ holds for all considered systems. We also found that, analogously to the case of incoherent Pauli noise, there is a trade-off between resilience against coherent $X$- and $Z$-rotations depending on the graph connectivity. However, while for Pauli noise the thresholds against bit- and phase-flips approach a universal bound, Eq. (42), this is not the case for coherent errors. To demonstrate this, we have identified the doubly-odd lattice, which is self-dual (and hence has the same threshold for $Z$- and $X$-rotations), just as the square lattice, but has a higher $\eta_{\rm th}$ than the square lattice.
We also studied the properties of final states corresponding to individual syndrome measurements followed by recovery. These properties were found to follow a categorization of codes into three classes: those whose $Z$-stabilizers include odd-weight operators, those with only even-weight $Z$-stabilizers and even-weight logical $Z$-operator $Z_L$, and those with only even-weight $Z$-stabilizers and odd-weight $Z_L$. The three classes correspond to three distinct patterns of accessible final states, as shown in Fig. 10(b).
The square lattice studied in Ref. [24] corresponds to the third class; it is only in this class that the per-syndrome error and recovery $\mathcal{R}_s \circ \mathcal{E}$ corresponds to a unitary $Z_L$-rotation of logical states, with state-independent syndrome probability and rotation angle. In all other cases, the syndrome probability depends on the initial state. To assess the average case (in the sense of this dependence), we studied the distribution of the average infidelity conditioned on the syndrome $s$ [Fig. 11], and introduced a measure of coherence [Eq. (58) and Fig. 12] and the related coherent decoder. While for $\eta < \eta_{\rm th}$, upon increasing the code distance, the distributions are increasingly well approximated by those resulting from a distribution of Pauli errors, codes that include odd-weight $Z$-stabilizers were found to display a decoherence threshold $\eta^c_{\rm th} \approx \eta_{\rm th} + 0.01\pi$ above which increasing the code distance increases the coherence of the logical-level noise. The sense in which the logical-level noise decoheres for $\eta \gtrsim \eta_{\rm th}$ therefore depends on the graph class. In particular, for graphs with odd-weight $Z$-stabilizers, the action of $\mathcal{R}_s \circ \mathcal{E}$ on the logical subspace can retain significant coherence, so that the decoherence of the logical-level noise holds only upon averaging $\rho_s$ over syndromes, i.e., only on the average-logical-channel level.
That correcting coherent errors is possible in all graph classes is an encouraging result. It shows that a unitary action of $\mathcal{R}_s \circ \mathcal{E}$ in the logical subspace, as in the square-lattice case of Ref. [24], is not a key requirement. However, our simulations are still constrained to uniaxial rotations along one of the directions specified by the stabilizers (i.e., $Z$- or $X$-rotations). It will be interesting to investigate more general situations, including more general forms of coherent rotations, or error models with the probabilistic occurrence of different coherent components such that the overall error is inequivalent to Pauli noise.