Floquet codes without parent subsystem codes

We propose a new class of error-correcting dynamic codes in two and three dimensions that has no explicit connection to any parent subsystem code. The two-dimensional code, which we call the CSS honeycomb code, is geometrically similar to the honeycomb code of Hastings and Haah and likewise dynamically embeds an instantaneous toric code. However, unlike the honeycomb code, it possesses an explicit CSS structure and its gauge checks do not form a subsystem code. Nevertheless, we show that our dynamic protocol conserves logical information and possesses a threshold for error correction. We generalize this construction to three dimensions and obtain a code that fault-tolerantly alternates between realizing two type-I fracton models, the checkerboard and the X-cube model. Finally, we show the compatibility of our CSS honeycomb code protocol and the honeycomb code by demonstrating that one can randomly switch between the two protocols without information loss while still measuring error syndromes. We call this more general aperiodic structure 'dynamic tree codes', which we also generalize to three dimensions. We construct a probabilistic finite automaton prescription that generates dynamic tree codes correcting any single-qubit Pauli error, which can be viewed as a step towards the development of practical fault-tolerant random codes.


I. INTRODUCTION
Any route towards new fault-tolerant schemes for quantum computing involves finding qualitatively different ways of performing quantum error correction. A recent approach called operator quantum error correction [1][2][3] requires one to recover only a part of the original 'logical' state, while errors are allowed to affect the rest of it, which is spanned by 'gauge qubits'. This can be accomplished by constructing a subsystem code, which is specified by a gauge group G that is generically a non-Abelian subgroup of the Pauli group. The stabilizer group S of the subsystem code is given by the centralizer of the gauge group, i.e. the set of elements in the gauge group that commute with all elements of the group, and the logical qubits of the stabilizer code are split into the logical qubits of the subsystem code and gauge qubits, which are no longer used for encoding logical information. Subsystem codes thus provide a generalization of the concept of stabilizer codes [4].
In subsystem codes, syndrome measurement can be performed using generators of the gauge group only, which are usually low-weight (non-commuting) operators. This makes subsystem codes attractive for achieving fault tolerance and gives rise to several new proposals for the realization of universal quantum computing. A central idea in these proposals is a procedure called gauge fixing, which corresponds to measuring a commuting subset of gauge operators ("checks"), thus fixing the states of the gauge qubits. The measured gauge operators are then added to the stabilizer S of the subsystem code defined by the gauge group G. Different ways of performing gauge fixing allow one to switch between different stabilizer codes S1 and S2 starting from the same parent gauge group G. This is aptly called 'code switching', and a universal transversal set of gates can be realized this way from the gauge color code [5,6], the quantum Reed-Muller code [7], and more [8]. Furthermore, other methods that allow one to overcome the Eastin-Knill no-go theorem [9,10], such as lattice surgery and code deformation [11,12], can be unified into the framework of gauge fixing [13].
Recently, a new dynamic error-correcting code, comprised of a time-periodic sequence of two-qubit Pauli measurements, was proposed by Hastings and Haah [14,15] and dubbed the 'honeycomb code'. It is considered the first example of a Floquet code because of the inherent time periodicity in the measurement protocol. The honeycomb code is based on a subsystem code with a gauge group generated by terms in the Hamiltonian of the Kitaev honeycomb model [16]. Notably, this subsystem code stabilizes no logical qubits [17]. However, the honeycomb code remedies this and dynamically generates logical qubits by measuring a commuting subset of the gauge group at each round, which constitutes one-third of the full set of two-qubit Pauli checks. This dynamic protocol generates a different stabilizer group at each instant in time, each of which differs from that of the original subsystem code. In particular, the instantaneous stabilizer group of the dynamic code is equivalent to that of a toric code [18] on a certain superlattice, and the embedded code changes with period 3 while conserving logical information. Remarkably, the honeycomb code was also shown to possess a threshold [19,20]. From the quantum matter perspective, the honeycomb code not only switches between different realizations of Z2 topological order but also exhibits a kind of time-crystalline behavior: while the period of the cycling is 3, the period of the code is 6, because after 3 rounds an e/m automorphism occurs. This idea has been explored more generally in ref. [21].
In this paper, we propose a new class of Floquet codes in two and three dimensions that are not based on parent subsystem codes. Our 2D construction is geometrically similar to that of the honeycomb code, but possesses an explicit CSS structure; therefore we call our code the CSS honeycomb code. We show that this code embeds an instantaneous toric code, conserves logical information, and possesses a threshold for error correction. It also turns out that the CSS honeycomb code performs an automorphism every three rounds. Our 3D construction embeds two distinct type-I fracton models: we show that it cycles between realizing instances of the checkerboard and X-cube models [22] while preserving logical information and being error-correcting as well. This is the first Floquet code we are aware of that prepares and cycles between fracton stabilizer codes.
We argue that our 2D code cannot be reduced to the honeycomb code. However, we show that it is possible to fault-tolerantly switch between our CSS protocol and the honeycomb protocol. Moreover, we consider random disturbances of the protocol in time, thus generalizing Floquet codes to a large class of monitored random circuit codes which we call dynamic tree codes, as the path of a single instance of such a code is a branch of the history tree of a probabilistic process. We show that a special class of these codes, random-flavor Floquet codes, is fault-tolerant. Next, we construct a probabilistic finite automaton (PFA) that allows one to generate instances of dynamic tree codes that detect and correct any single-qubit Pauli error. We conjecture that a large class of PFA-generated dynamic tree codes is fault-tolerant with an efficient decoder. This construction advances us one step closer towards fault-tolerant random codes. Practically, these codes also work well for error models that are dynamical in time.
Thus, the dynamic codes we construct in this paper present a new class of quantum error correcting codes and suggest a new route towards universal fault-tolerant schemes for quantum computation that rely neither on stabilizer codes, nor on subsystem codes, nor on Floquet codes generated from the gauge group of subsystem codes.
The rest of our paper is organized as follows. In section II, we introduce the two-dimensional CSS honeycomb code, discuss it in detail, and explain its error-correction properties. In section III, we elaborate on an example that generalizes CSS honeycomb codes to three dimensions and show that the instantaneous code cycles between different realizations of the checkerboard and X-cube models. In section IV, dynamic tree codes are introduced and argued to be a more general structure (that need not be periodic) bridging the honeycomb code and the CSS honeycomb codes. We propose a PFA construction of error-correcting protocols and also generalize dynamic tree codes to 3D.

II. 2D CSS HONEYCOMB CODE
We propose a dynamic quantum error correcting code built solely out of X and Z-flavored check operators; for this reason, we refer to this code as the CSS honeycomb code. Recall that in the honeycomb code of Hastings and Haah [14], one picks a 3-colorable planar graph and assigns labels of X, Y, and Z to each of the three orientations of the edges. The edges of the graph are also three-colorable, and the dynamic measurement protocol consists of measuring, at each round, the two-body Pauli operators ("checks") of the flavor corresponding to the orientation of the bond at all the edges of a given color. The color of an edge is defined by the colors of the two plaquettes that it connects, see Fig. 1. In the CSS honeycomb code, the protocol is somewhat simpler and is shown in Table I.

TABLE I. Summary of the CSS honeycomb code. The table features the measurement sequence, the instantaneous stabilizer group S(r) at each round, the syndrome plaquettes, logical operators, and the instantaneous codes. The checks and plaquette stabilizers are color-coded for convenience. The 'syndrome' column contains the plaquette stabilizers that have been known since previous rounds but are also contained in the checks of the current round. These measurements are used as syndromes for error detection (see Sec. II C). The logical operators labeled as electric (e) and magnetic (m1,2) strings correspond to string operators that violate the superlattice vertex or plaquette stabilizers of the embedded toric code, respectively. The magnetic m1 and m2 strings are equivalent up to local operators acting at their ends. The connection between the logical operators of the CSS honeycomb code and the topological excitations of the embedded codes is explored in Sec. II B. TC(c) with c ∈ {r, g, b} stands for a toric code realized on a triangular superlattice with vertices of the superlattice located on plaquettes of color c, while TC′ is the same code conjugated by a layer of single-qubit Hadamards, i.e. with the flavors of the stabilizers exchanged, X ↔ Z.

The CSS honeycomb protocol is partially inspired
by the construction of toric code topological order in [23,24]. We similarly consider a honeycomb lattice with periodic boundary conditions and divide the plaquettes and the edges into three colors: red, green, and blue. At each round of measurements, we apply either red, green, or blue checks. However, the flavor of the check operators applied at each round alternates between X and Z (i.e. one measures two-qubit XX or ZZ operators on the edges of the color of the given round). This gives a measurement schedule whereby we measure the sequence {rXX, gZZ, bXX, rZZ, gXX, bZZ} periodically in time.
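As an illustration, the period-6 schedule above can be written down programmatically. The following minimal Python sketch is ours, not the authors'; the function name and the round-indexing convention (r = −3 being the first rXX round, as in the text) are our own choices:

```python
# Period-6 CSS honeycomb measurement schedule: {rXX, gZZ, bXX, rZZ, gXX, bZZ}.
# Round indexing follows the text: the first round r = -3 measures rXX,
# so that r = 0 is the rZZ round.
ROUND_SEQUENCE = [("r", "XX"), ("g", "ZZ"), ("b", "XX"),
                  ("r", "ZZ"), ("g", "XX"), ("b", "ZZ")]

def checks_at_round(r: int) -> tuple:
    """Return (color, flavor) of the two-qubit checks measured at round r."""
    return ROUND_SEQUENCE[(r + 3) % 6]
```

For example, checks_at_round(0) gives ("r", "ZZ"); the flavor alternates between X and Z every round while the color cycles through r, g, b, exactly as in the sequence above.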
Let us start in an arbitrary initial state (alternatively, one can prepare a specific state in order to encode logical information in a code) and start measuring checks according to the proposed protocol. At each round r, the state prepared this way is a stabilizer state under an instantaneous stabilizer group (ISG) S(r). The generators of the instantaneous stabilizer groups at each round are listed in Table I. As a remark, similarly to the honeycomb code, instead of post-selecting or correcting to the +1 values of the measured stabilizers, we record the measured signs and adopt a convention where the ground state is the eigenstate of the plaquette stabilizers with eigenvalues determined by those signs.
At the initial round r = −3, the red checks shown in Fig. 1, which we denote rXX, are measured. At the next round, r = −2, we measure ZZ-checks on green edges, which anticommute with the measurements of the previous round. However, at this step, the ISG contains the stabilizers Pb(X), products of Pauli-X around blue plaquettes, which belong to the center of the group generated by ⟨rXX, gZZ⟩, i.e. commute with the checks of both rounds. Measuring bXX in the subsequent round r = −1 produces the plaquette stabilizers forming the center of the group ⟨bXX, gZZ, Pb(X)⟩, namely Pb(X) and Pr(Z).
After measuring rZZ at round r = 0, the ISG includes Pg(X), as well as Pb(X) and Pr(Z) from the previous rounds, together with the current checks rZZ. The prepared code has a number of stabilizers that matches the number of qubits on a torus minus two, because the plaquette operators are not all independent. This instantaneous code is equivalent to the toric code (TC(r) in the table). To see this, consider the superlattice formed by the dashed black lines in Fig. 1. On this triangular superlattice, there are two qubits per edge. Constraining to the +1 subspace of the checks rZZ fuses the two qubits on each red edge into a single effective qubit, with effective logical operators X̌ = XX and Ž = ZI = IZ. Then, it can be seen that Pg(X) and Pb(X) simplify to products of three X̌ operators around the triangles of the superlattice. Similarly, Pr(Z) corresponds to the product of Ž on the star of edges emanating out of each vertex of the triangular lattice. For simplicity of presentation, assume that all measured signs of the rZZ checks are +1 (otherwise, the signs would appear as prefactors of the corresponding terms in the Hamiltonian without changing the conclusions). Thus, the effective stabilizer code corresponds to the Hamiltonian

H = −Σ_v A_v − Σ_△ B_△,

where A_v and B_△ are the star and plaquette terms on the triangular lattice, respectively. This Hamiltonian simply describes the toric code, exhibiting the paradigmatic Z2 topological order. When we continue implementing the protocol further, the number of logical qubits does not change, and the embedded code in each round is a different realization of the toric code; the period of the embedded code is 6. The logical information is preserved during this cycling, the details of which we address in the next section. To see that the embedded code changes each round, consider the subsequent r = 1 step, when gXX checks are measured. The value of the stabilizer Pb(X) from the previous step is already
contained in the values of the measured green checks, and therefore we do not add it to the list of generators of the ISG (we do add it to the table as a syndrome, however, because the stabilizer value inferred from the green checks at the current round can be compared with the one stored earlier). Additionally, measuring gXX turns the rZZ checks of the previous round into Pb(Z), so the number of logical qubits in the new code does not change. We can see that at round r = 1 we also obtain an effective toric code by drawing a triangular lattice centered on the green plaquettes and viewing the gXX checks as fusing the two qubits on each green edge, with effective logical operators X̌ = XI = IX and Ž = ZZ. The Hamiltonian corresponding to the embedded code is of the same form, H = −Σ_v A_v − Σ_△ B_△, which is again a triangular-lattice toric code. At the next step, bZZ checks are measured, and the plaquette Pr(Z) becomes redundant, so we do not list it in the ISG. A new plaquette Pr(X) is added to the ISG, and the ISG yields an embedded toric code centered on the blue sublattice (TC(b)). The instantaneous stabilizer groups of the next three rounds are identical to the previous three apart from X ↔ Z (and therefore the TC code goes into TC′, see Table I); therefore, the period of the code is 6.
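The algebra behind the effective-qubit construction above can be checked directly in the binary symplectic representation of Pauli operators. The small sketch below is our own illustration; it verifies that the effective X̌ = XX commutes with the fused ZZ check while anticommuting with Ž = ZI:

```python
# Two-qubit Paulis as (x_bits, z_bits); two Paulis commute iff the parity of
# x1·z2 equals the parity of z1·x2 (the binary symplectic form vanishes).
def commutes(p, q):
    (x1, z1), (x2, z2) = p, q
    return (sum(a & b for a, b in zip(x1, z2)) % 2
            == sum(a & b for a, b in zip(z1, x2)) % 2)

ZZ = ((0, 0), (1, 1))   # the rZZ check on a red edge
XX = ((1, 1), (0, 0))   # effective X-type operator on the fused qubit
ZI = ((0, 0), (1, 0))   # effective Z-type operator (equal to IZ on the +1 subspace)
```

Here commutes(XX, ZZ) is True, so X̌ survives the fusion, while commutes(XX, ZI) is False: the effective pair anticommutes, as the logical operators of a single qubit must.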
Thus, starting from round r = 0, our CSS honeycomb code always embeds a toric code in its instantaneous stabilizer group. A striking difference between the honeycomb code and the CSS honeycomb code is that while the honeycomb code features fixed plaquette stabilizers after three rounds of measurements, the plaquette stabilizers in the CSS honeycomb code change from round to round via substitutions in which a P(Z) is replaced by a P(X) or vice versa. This suggests a fundamental difference between our code and the honeycomb code from the perspective of subsystem codes, which we discuss below.

FIG. 2. The rXX (a) and bZZ (b) rounds of the CSS honeycomb code realized on the three-colorable square-octagon lattice (periodic boundary conditions are assumed and only part of the lattice is shown for convenience). Because the algebraic relations between the checks are the same, and the square-octagon lattice is trivalent with even-sided plaquettes, the properties of the square-octagon Floquet code and its error correction are the same as for the honeycomb version. The left half of each lattice shows the original lattice and the ISG, and the right half shows the superlattice. At the rXX step shown in (a), if the two-body checks define local [[2,1,1]] codes, the embedded code on the superlattice is the toric code with qubits on the edges, where Pg,b(Z) become the plaquette terms and Pr(X) becomes the star term. In (b) (at blue, and similarly at green, steps), one can view the blue checks together with the Pr(X) plaquettes as stabilizers of a local [[4,1,2]] code. This results in a Wen plaquette model where the effective qubits are located on vertices of the square superlattice.

In Appendix A, we also show that this code has a regular representation as the same protocol where only ZZ-checks are measured at each round and a layer of single-qubit Hadamard gates is inserted after each round. This immediately turns it into a period-3 protocol. Formulated this way using only ZZ-checks and unitary layers, the honeycomb code requires single-qubit S and H-gates in a period-3 pattern.
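The period-3 reformulation above rests on the fact that conjugation by a layer of Hadamards exchanges X and Z checks. This is a standard identity; the numpy snippet below is only our own sanity check of it:

```python
import numpy as np

# Single-qubit Hadamard and Paulis.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# H exchanges X and Z under conjugation: H Z H = X.
assert np.allclose(H @ Z @ H, X)

# Hence a layer of Hadamards turns a ZZ check into an XX check:
HH = np.kron(H, H)
assert np.allclose(HH @ np.kron(Z, Z) @ HH, np.kron(X, X))
```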
Finally, this protocol does not necessarily require a honeycomb lattice and will work on any three-colorable graph, similarly to the honeycomb code [25]. In particular, if we apply the same protocol to the three-colorable square-octagon lattice, the embedded code will alternate between explicitly realizing the Wen plaquette model [26] and the toric code on a square superlattice, as shown in Fig. 2.

A. Relation to subsystem codes
As previously mentioned, subsystem codes are defined by a gauge group G that is generically a non-Abelian subgroup of the Pauli group. The stabilizer group of a subsystem code is given by S = C(G) ∩ G, where C(G) denotes the centralizer of the gauge group. The subsystem code can be viewed as a generalization of the concept of a stabilizer code; the logical qubits of the stabilizer code defined by S above are now split into logical qubits and gauge qubits of the subsystem code. While the logical qubits of the subsystem code are stabilized by G \ S, the gauge qubits of the subsystem code are not, and G transforms them nontrivially. Logical operators of S are similarly split into 'bare' logical operators, which act trivially on the gauge qubits, and 'dressed' logical operators, which transform the gauge qubits. Not only do subsystem codes require lower-weight measurements in general (given the non-Abelian nature of the gauge group), but they also provide attractive proposals for universal quantum computing. In particular, one can perform a procedure called gauge fixing, whereby an Abelian subset of the gauge group generators is measured, thus fixing the states of some of the gauge qubits and adding additional stabilizers to S, turning it into S′. In this way, one can easily switch between different codes with different stabilizer groups, which is called code switching.
Therefore, one way to construct Floquet codes might come from starting with a subsystem code (a 'parent subsystem code') and measuring subsets of its gauge operators sequentially, arriving at different effective stabilizer codes as a result. The honeycomb code fits in this framework: the XX, YY, and ZZ checks of the honeycomb code correspond to the Hamiltonian terms of the Kitaev honeycomb model and generate a subsystem code. Even though this subsystem code does not contain any logical qubits, the dynamical protocol leads to an ISG that, at every round, is the same as that of the parent subsystem code minus two operators that cannot be obtained by such sequential measurements (the 'inner' logical operators). In contrast, an attempt to find a subsystem code that would play the same role for the CSS honeycomb code fails, as we show below. Note that the concept of the 'parent subsystem code' introduced above is distinct from and unrelated to the 'parent code' for anyon condensation, which has been used to independently derive the CSS honeycomb code in ref. [27].
The construction of the CSS honeycomb code does not involve the checks of a subsystem code, and therefore one might ask whether there exists a relation between this dynamical code and any subsystem code. Let us explore this question in more detail. Consider the group generated by all checks of our protocol, i.e.

G = ⟨rXX, gZZ, bXX, rZZ, gXX, bZZ⟩. (3)

For this subsystem code, S = Z(G) = ⟨X̄, Z̄⟩. These extensive operators generate the Z2 global symmetries of the effective codes realized by the Floquet protocol. At each step of the Floquet code, one of these symmetries is simply the product of all checks, and the other is the product of the plaquettes of one color with the checks of the opposite flavor; thus, both operators are contained in the ISG. The latter operator is a symmetry of the embedded toric code. Let us show that the subsystem code defined in Eq. (3) provides only very limited information about the stabilizer codes realized by Floquet protocols constructed from its checks, and does not play a useful role as a parent subsystem code. Assume that we gauge fix the code G by adding the checks rXX to the stabilizer group. The new code is G′ = ⟨bXX, rZZ, gXX⟩ with S′ = ⟨X̄, Z̄, rXX⟩. This code bears no resemblance to the topological codes realized by the Floquet protocol, and such topological codes cannot be reached by further gauge fixing or by removing some of the gauge checks. Therefore, even though the CSS honeycomb code is generated by sequentially measuring the checks of the subsystem code in (3), this subsystem code does not constitute a useful parent subsystem code, unlike in the case of the honeycomb code.
We may also introduce the concept of a k-sliding subsystem code, defined by a gauge group G_k^r generated by the subset of checks from k consecutive rounds r − k + 1, ..., r. Let us see if this relaxed notion of a subsystem code can serve as a parent subsystem code for the Floquet code at some of the rounds. First, we notice that for k = 1, the k-sliding subsystem code stabilizes too many qubits. Before proceeding to higher k, we note that without loss of generality we can consider round r = 6n:
• k = 2: The generators of the gauge group are simply the checks of the two most recent rounds, G_2^{6n} = ⟨bXX, rZZ⟩. The center of this gauge group is Z(G_2^{6n}) = ⟨Pg(Z), Pg(X), X̄, Z̄⟩, which does not contain three distinct types of plaquette stabilizers.
• k = 3: The gauge group is G_3^{6n} = ⟨gZZ, bXX, rZZ⟩. The center of this gauge group is Z(G_3^{6n}) = ⟨Pg(Z), Pr(Z), s(rZZ − gZZ), X̄, Z̄⟩, where s(rZZ − gZZ) are strings of rZZ and gZZ checks along homologically nontrivial cycles of the torus. Moreover, since these stabilizers are all of the same flavor, the code is classical. As we see, the stabilizer group again does not contain three distinct types of plaquette stabilizers.
• k ≥ 4: A similar exercise shows that the stabilizer group of a 4-sliding subsystem code produces a single flavor of plaquette as well as the global symmetries. There is a more fundamental reason that looking at k ≥ 4 is unproductive. Recall that during each round, a plaquette substitution occurs for some color c = r, g, or b, in which Pc(X) is replaced with Pc(Z). In the next round, the value of Pc(X) is then destroyed (and the same happens with X ↔ Z on the corresponding rounds). Given that after 4 rounds the value of a stabilizer is destroyed by measurements, there is no reason to consider a subsystem code formed by checks from more subsequent rounds.
Therefore, even though the 2-sliding subsystem code contains more stabilizers because it 'inherits' additional stabilizers from the two previous measurement rounds, this discussion nevertheless tells us that there is no useful concept of a parent subsystem code for the CSS honeycomb code, even if we generalize to k-sliding subsystem codes. This also indicates that the CSS honeycomb code might belong to a different class of dynamic codes than the honeycomb code. Another example of a Floquet code that does not seem to have an immediate parent subsystem code is the automorphism code [21], although at the time of writing, these codes have not been shown to be error-correcting or fault-tolerant.
As we show below, our code nevertheless conserves logical information and possesses a threshold. Surprisingly, this shows that subsystem codes are not necessary for the construction of 'good' error-correcting dynamic codes.

B. Conservation of logical information and automorphism
The instantaneous code embedded in our dynamic code is equivalent to the toric code, as indicated in the last column of Table I. As mentioned before, the three plaquette stabilizers of the ISG become the stabilizers of a toric code on a triangular superlattice (see Figs. 1 and 3). String operators, whose endpoints anticommute with the vertex or plaquette stabilizers, excite e and m-anyons at their ends, respectively. Note that on the triangular lattice, there are two types of m-anyons (m1,2), corresponding to the two orientations of triangular plaquettes, though they nevertheless belong to the same superselection sector. The strings for the e and m1,2 particles are formed by the checks indicated in Table I, see Fig. 3. For example, at round 6n, the string operators creating e and m particles are products of checks of the form ∏_{k∈Pi,c} Ck, where Pi,c is a set of checks of color c that forms a string emanating from a hexagon i of the same color. The two X and two Z logical operators of the instantaneous code can be obtained by taking e and m1,2-type strings around homologically nontrivial cycles of the torus. The m1-type string taken along such a cycle is the same as the m2-type string; hence, they both produce the same logical operator. Table I shows that the m1 anyon at each step turns into the e anyon of the subsequent round, and the e anyon turns into the m2 anyon. The m2 anyon 'disappears', which occurs exactly when the syndrome for the plaquettes violated by this flavor of anyon is measured. In fact, the m2 anyon is equivalent to the m1 anyon up to the checks of the current round; therefore, the information carried in the magnetic logical operator is not lost. This will also be useful for us when understanding the error-correcting properties of this code.
Similarly to the transformation of anyons, the X and Z-type logical operators swap at each step but are never measured; thus, this code conserves logical information.
In the honeycomb code, the logical operators can be classified as either 'inner' or 'outer'. The 'inner' operators are products of checks along the homologically nontrivial cycles of the torus and belong to the stabilizer of the honeycomb subsystem code, whereas the 'outer' ones do not. In the CSS honeycomb code, because of the lack of a subsystem code framework, there is no analogous notion of 'inner' and 'outer' logical operators.
Finally, we remark that it might appear as though our code does not possess an e ↔ m automorphism (which the currently existing examples [14,21] of Floquet codes do). However, let us follow a magnetic string of gZZ checks measured in round 0 (see Table I). It is preserved for three steps, and at step 2 the magnetic logical operators produced by gZZ and rZZ strings are the same. Following the rZZ string to round 3, we see that it becomes an electric string. The codes at steps 0 and 3 are the same toric code conjugated by on-site Hadamard transformations, and therefore, up to this transformation, an automorphism has in fact been performed.

C. Decoders and threshold
Despite the absence of the overarching structure of a subsystem code and a stationary ISG, the CSS honeycomb code surprisingly possesses a threshold. For a simplified X, Z error model of single-qubit errors, we can reduce the decoding problem to that of the honeycomb code [14], and thus argue that our code has a threshold. For other error models and more specialized decoders, the thresholds of the two codes are likely quantitatively different.
In the simplified error model, we only consider the occurrence of single-qubit X and Z errors with probability p, corresponding to the quantum channel

E_P(ρ) = (1 − p) ρ + p P ρ P,  P ∈ {X, Z},

applied independently for each flavor on every qubit. Since an X (Z)-type single-qubit error can be commuted past similarly-flavored checks, we only need to consider the occurrence of an X (Z)-type error after even (odd) rounds. We simplify our error model by assuming independent single-qubit errors of X (Z) type. Because the error syndromes for errors occurring after even (odd) rounds are measured at odd (even) timesteps only, error correction can be performed separately on the even and odd temporal sublattices. X and Z errors are treated identically, so for simplicity we will deal with Z errors in what follows.
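A sketch of sampling from this error model, with X and Z errors drawn independently on each qubit; this is our own illustrative code, not part of the paper's decoder:

```python
import random

def sample_errors(n_qubits: int, p: float, rng: random.Random):
    """Independently apply an X error and a Z error to each qubit with probability p."""
    x_err = [rng.random() < p for _ in range(n_qubits)]
    z_err = [rng.random() < p for _ in range(n_qubits)]
    return x_err, z_err
```

Because X (Z) errors are only detected at odd (even) timesteps, a simulation can decode the two error species on separate temporal sublattices, exactly as described above.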
We consider a simple non-optimal decoder for this error model. The value of each type of plaquette stabilizer is measured twice during each period-6 cycle: once at step r, and once at step r + 4 (before being erased at step r + 5), as shown in Fig. 4. Comparing these values allows one to infer the error syndromes that are necessary for decoding. Thus, one needs to record the syndromes inferred from comparing the two values of each plaquette stabilizer obtained at different rounds.
For each type of plaquette stabilizer, there exists a type of check that anticommutes with it (for example, consider the pair Pr(X) and rZZ). Thus, the values of such plaquette stabilizers are necessarily randomized after enough time, and it is impossible to fix the values of the stabilizers in our Floquet code, say, to +1, once and for all. The only constraint on the values of the stabilizers in the system is that the logical information is preserved, which was argued above. Despite this, it is perhaps surprising that the protocol is still error-correcting.

FIG. 4. A Pb(X)-type detector cell is shaded blue; it can be violated by Z-type errors occurring on its spacetime support (not including t = r + 4). It consists of measuring the highlighted Pb(X) plaquette at time r and then measuring it again at time r + 4. Two such neighboring detector cells are required to determine the spacetime location of the edge where the single-qubit error has occurred.
In more detail, each plaquette stabilizer, once inferred from check measurements, survives for 4 rounds before a check that anticommutes with it is applied. This allows us to define a corresponding spacetime detector cell. For example, consider the plaquette Pb(X) that is inferred at step r from rXX checks (r = 6n + 3 in Table I). One such plaquette at round r is highlighted in Fig. 4. Once measured, this plaquette is not randomized for the next three rounds and appears in the ISG. At round r + 4, this plaquette is measured again (the 'syndrome' plaquette in Table I). The respective spacetime detector cell is supported on this plaquette between rounds r and r + 4, i.e. between the two consecutive measurements of Pb(X) occurring at these rounds. The product of these two plaquette measurements determines the error syndrome, and therefore whether the detector cell has been violated. For example, the product {Pb(X)}_r × {Pb(X)}_{r+4} will equal +1 if there is an even number of Z-type errors in its support and −1 if there is an odd number of errors. Finally, at round r + 5, the Pb(X) plaquettes are randomized by bZZ checks.
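The detector logic described above amounts to comparing the two ±1 outcomes of the same plaquette measured four rounds apart. A minimal sketch in our own notation:

```python
def plaquette_after_errors(value_r: int, n_z_errors_in_support: int) -> int:
    """Each anticommuting Z error in the cell's support flips the plaquette sign."""
    return value_r * (-1) ** n_z_errors_in_support

def detector_violated(value_r: int, value_r4: int) -> bool:
    """The detector cell fires when the r and r+4 measurements disagree."""
    return value_r * value_r4 == -1
```

An even number of errors in the support leaves the detector silent and an odd number fires it, matching the {Pb(X)}_r × {Pb(X)}_{r+4} rule above.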
If a Z error has occurred after an odd timestep, it can be commuted past the measurement of the next round, and therefore it is sufficient to consider only Z (X)-type errors after even (odd) rounds. Consider first an isolated single-qubit Z-type error occurring after an even round of the protocol. First of all, knowing only the edge where the error has occurred is sufficient to correct the error. In this situation, correction corresponds to applying a Pauli operator of the same flavor as the error to either of the qubits on this edge; if the qubit was guessed incorrectly, this turns the single-qubit error into a check error of the current round. However, one can show that such check errors do not affect the logical state in the instantaneous toric code, so long as the edge is of the same color as the checks of the last round. Now we discuss how to determine the edge where the error occurred. Let us assume that a Z error occurs after round r = 0. At the next round, r = 1, the measurement of one of the gXX checks involving this qubit will acquire an error. This will violate a Pb(X)-type detector cell (its type is determined by the 'syndrome' column in Table I). Similarly, at round r = 3, the rXX checks are measured, and one of the checks changes its sign due to the error. This violates the respective Pg(X)-type detector cell. These two detector cells share only one ZZ-type edge: the red edge at r = 0. Thus, the spacetime location of the faulty edge caused by a single-qubit error is determined unambiguously. Finally, in the case of multiple errors, a minimum-weight perfect-matching decoder can be used (discussed below), and the location of error chains will be determined up to stabilizers of the code.
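The single-error localization step above, in which the faulty edge is the unique edge shared by the two violated detector cells, can be sketched as a set intersection (toy code; the edge labels are our own):

```python
def locate_faulty_edge(cell_a_edges, cell_b_edges):
    """Intersect the edge supports of two violated detector cells.

    Under the isolated single-qubit-error assumption, the intersection
    holds exactly one ZZ-type edge: the spacetime location of the fault.
    """
    shared = set(cell_a_edges) & set(cell_b_edges)
    if len(shared) != 1:
        raise ValueError("not a single isolated error")
    return shared.pop()
```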
The principle of error correction beyond this point is the same as in the honeycomb code: the violated detectors have to be matched on the same spacetime lattice (the decoding graph). This spacetime lattice consists of stacked layers, each a rotation of the previous one around the time axis (the distance between the centers of hexagons is taken to be √3, see Fig. 1). The links of the lattice are given by {b_1, b_2, b_3}. We have two copies of the syndrome lattice, corresponding to odd and even timesteps, which store syndromes from bit-flip (X) and phase-flip (Z) errors, respectively.
The minimum-weight perfect matching decoder can be applied for error correction [28], and the problem can similarly be mapped to a random-bond Ising model with a phase transition to a confined "non-correctable" phase, thus exhibiting a threshold. The CSS honeycomb code therefore has a threshold, just like the honeycomb code.
Another way of convincing oneself of the existence of a threshold comes from the probability of an error leading to a failure being exponentially suppressed as L → ∞ for a torus of size L × L. Let us sketch a bound on the failure probability in a manner similar to ref. [28]. Each edge of the syndrome lattice is in one-to-one correspondence with a physical qubit of a toric code on the superlattice from one timestep earlier. Therefore, once the minimum-weight perfect matching decoder determines the lowest-weight string of errors E_0 by finding the shortest string on the syndrome lattice, the set of links on the respective superlattices at the times when the errors in E_0 occur is determined unambiguously. As we noted earlier, only the edge of the superlattice on which the error occurred needs to be detected (i.e. errors have to be detected up to a position within the two-qubit check of the round after which the error occurred). Because of this one-to-one correspondence, we can refer to the 'flipped' edges on the syndrome lattice as errors. Let p be the probability of a single-qubit error, and let us assume that there are no measurement outcome errors. We consider one type of error, and thus one copy of the syndrome lattice, and assume that the true error string is E and that the recovery string E_0 has length w_0.
Notice that the set C = E + E_0 consists of disjoint loops, either homologically trivial or not. A failure occurs if C contains at least one homologically nontrivial cycle. Consider an arbitrary connected path S(w) of length w; the probability of failure can then be bounded by the probability of C containing a path S(w) with length greater than the distance of the code. We consider only self-avoiding walks, since every closed loop can be eliminated. Assume that we have an arbitrary path containing w_e errors (i.e. links belonging to S(w) ∩ E). The probability of such a path is p^{w_e}(1 − p)^{w − w_e}; since E_0 is a minimum-weight string, at least half of the links of any path in C must carry errors, w_e ≥ w/2. The probability of failure can therefore be bounded by

P_fail ≤ Σ_{w ≥ d} n_S(w) [4p(1 − p)]^{w/2},

where n_S(w) is the total number of self-avoiding walks on the syndrome lattice. The lattice has 6 directions for the walk from each point, and therefore n_S(w) ≤ 6 × 5^w × (1/9)L²T, where (1/9)L²T counts the number of possible starting points. Therefore,

P_fail ≤ (2/3) L²T Σ_{w ≥ d} [10 √(p(1 − p))]^w.

The failure probability found above is exponentially suppressed in L as long as p < p_c^{(0)}, where 10 √(p_c^{(0)}(1 − p_c^{(0)})) = 1, and the prefactor grows at most polynomially (which is always the case, for example, for T(L) = poly(L)). Thus, the lower bound on the threshold within this model is p_c ≥ p_c^{(0)}. A threshold in the X, Z-error model implies a threshold against measurement outcome errors as well, because a check error corresponds to a correlated-in-time application of a non-commuting single-qubit Pauli error right before and after the check is applied. Additionally, a partial implementation of any logical operator is correctable, in a sense similar to how the 'inner' logical operators are correctable in the honeycomb code. Although there is no notion of an 'inner' logical operator in our code, because the subsystem code structure is absent, any partially implemented logical operator is detectable. Robust error correction during rounds r = −3 to r = 0 is not possible because the instantaneous code is still being prepared during these steps. One can instead start by initializing the effective toric code at r = 0 on the corresponding
superlattice by a different high-fidelity method. Similarly, the logical operators can be measured by two-qubit checks applied to the effective toric code on the superlattice after termination of the protocol.
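Returning to the failure-probability bound sketched above, it can be evaluated numerically. The snippet below is a sketch under stated assumptions: it takes the self-avoiding-walk count n_S(w) ≤ 6 × 5^w × L²T/9 from the text, assumes the standard DKLP-style per-walk probability [4p(1−p)]^{w/2}, and uses d ~ L for the code distance; the function name is ours.

```python
import math

def failure_bound(p, L, T, wmax=400):
    """Peierls-type upper bound on the failure probability:
    P_fail <= sum_{w >= d} n_S(w) * [4 p (1-p)]^{w/2},
    with n_S(w) <= 6 * 5^w * (L^2 T / 9) and code distance d ~ L."""
    x = 2.0 * math.sqrt(p * (1.0 - p))   # [4p(1-p)]^{w/2} = x^w
    prefactor = 6.0 * L * L * T / 9.0
    return prefactor * sum((5.0 * x) ** w for w in range(L, wmax))

# Exponential suppression with system size below the crude threshold:
assert failure_bound(0.001, 20, 20) < 1e-4
assert failure_bound(0.001, 40, 40) < failure_bound(0.001, 20, 20)

# The geometric sum stops converging when 5 * 2*sqrt(p(1-p)) = 1, i.e. at
# p_c^(0) = (1 - sqrt(1 - 4/100)) / 2, roughly one percent:
p_c0 = (1.0 - math.sqrt(1.0 - 0.04)) / 2.0
assert abs(p_c0 - 0.0101) < 1e-3
```

This crude estimate only lower-bounds the threshold; the statistical-mechanics mapping of ref. [28] gives a sharper value.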
The CSS honeycomb code was independently discovered in the parallel work [27], where a threshold of ∼0.3% was found numerically for an MWPM decoder under a circuit-level depolarizing noise model. We refer the reader to that work for a useful discussion of error correction for this code and of the effect of measurement errors.

III. 3D GENERALIZATION: FRACTON FLOQUET CODE
In this section, we present an example of a 3D construction inspired by our 2D CSS honeycomb code, which we find gives rise to Floquet codes for fracton topological orders.
The general protocol and associated geometry are shown in Figure 5 and Table II. In particular, we consider a truncated cubic lattice, which can be thought of either as a cubic lattice where every site is turned into an octahedron, or as a lattice of corner-sharing octahedra where each shared corner is extended into an edge. The physical qubits are located at the vertices of this lattice. The volumes of this lattice are three-colorable: we label the cubic volumes with red and blue (r and b in Figure 5) and the octahedra with green. The protocol is implemented using checks of weight two and three: products of three Paulis rXXX (rZZZ) around red triangles and bXXX (bZZZ) around blue triangles, and two-body checks gXX (gZZ) along green links, see Fig. 5. This coloring was chosen to match the coloring of the 2D CSS honeycomb code: the green edges protrude out of green volumes, and red (blue) plaquettes interface between the volumes of the two other colors (blue/green and red/green, respectively).
We repeat a measurement procedure similar to the 2D case; the protocol is outlined in Table II. After round −2, when the green links are measured, the product of the eight red triangular plaquettes that form the truncated faces of a blue cube is in the instantaneous stabilizer group; we denote this volume stabilizer by □_b(X). In the next round, r = −1, we measure the products of Xs on the blue triangles, indicated by bXXX. These do not commute with the green checks of the previous round, but do commute with the product of green checks around a blue cube. Hence, only the product of gZZ checks forming □_b(Z) stays in the instantaneous stabilizer group. Naively, at round 0 one might wish to return to the beginning of the protocol and measure rZZZ, in accordance with the 2D protocol for the CSS honeycomb code. However, the rZZZ check commutes with bXXX of the previous step, which would result in 6 independent plaquette stabilizers on a single octahedron. Since an octahedron comprises 6 physical qubits, this stabilizes a single state. The fix is to instead measure bZZZ at round 0, thereby cycling between the check colors in the sequence "rgb bgr rgb bgr ..." rather than "rgb rgb ...".
TABLE II. Summary of the three-dimensional CSS fracton Floquet code. The table features the measurement sequence, the instantaneous stabilizer group S(r) at each round, the syndrome plaquettes, the logical string operators, and the instantaneous codes. The 'syndrome' column stores the star/volume stabilizers that can be inferred from the checks of the current round and compared with the known value of this stabilizer from the previous round. Strings of checks are labeled electric (e) and magnetic (m_{1,2}) in correspondence with the instantaneous code on the superlattice. The magnetic m_1 and m_2 strings are equivalent up to local operators at their ends. XC(g) stands for the embedded X-cube model realized on a cubic superlattice with effective qubits on its edges. CB(r/b) stands for the embedded checkerboard model realized on a cubic superlattice with effective qubits on its vertices and the volume stabilizers defined on r/b cubes. X̄C (C̄B) denote the same codes conjugated by a layer of single-qubit Hadamards, i.e. with stabilizers of exchanged flavors X ↔ Z.

Measuring bZZZ preserves both □_b(Z) and □_b(X), but does not commute with bXXX of the previous round. Instead, on each octahedron, the square plaquettes ⋄_g(X) remain in the instantaneous stabilizer group of the next round; we refer to these as 'diamond' stabilizers. There are three such diamonds in total, but only two of them are independent. Using the fact that the product of bZZZ around an octahedron is the identity, there are five independent stabilizers per octahedron, which means that effectively a single qubit per octahedron (or, equivalently, a single qubit per vertex of the cubic lattice) remains. The equivalent Hamiltonian for all of the stabilizers at round 0 is given in Eq. (13). For J ≫ 1, the first term contains five independent stabilizers that fuse the six qubits of the octahedron into one effective qubit, with the local effective Pauli operators acting on it being X̌ = rXXX and Ž = rZZZ. Now, since □_b(X) (□_b(Z)) is comprised of products of rXXX (rZZZ), each of which now acts as an effective Pauli on a vertex, these operators reduce to a product of X̌ (Ž) over the eight vertices of a blue cube. This is nothing but the checkerboard model [29] defined on the cubic superlattice.
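The fusion step above rests only on support parity: two uniform-flavor Pauli strings of different flavors anticommute if and only if their supports overlap on an odd number of qubits. A minimal check of this bookkeeping (the function name and qubit labels are ours):

```python
def paulis_commute(support_a, support_b):
    """Whether two Pauli strings of *different* flavors (e.g. all-X and all-Z)
    commute: they do iff their supports share an even number of qubits."""
    return len(set(support_a) & set(support_b)) % 2 == 0

# Label the six qubits of an octahedron 0..5. Taking the effective operators
# X̌ = rXXX and Ž = rZZZ on the same red triangle, the overlap is 3 (odd),
# so they anticommute exactly like a single qubit's X and Z:
triangle = [0, 1, 2]
assert not paulis_commute(triangle, triangle)

# A two-body check sharing both of its qubits with a stabilizer commutes:
assert paulis_commute([0, 1], [0, 1, 2, 3])

# Five independent stabilizers on the six octahedron qubits leave
# 6 - 5 = 1 effective qubit, as stated above:
assert 6 - 5 == 1
```

The same parity rule underlies every commutation statement in this section, so it is a convenient sanity check when tabulating which stabilizers survive a given round.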
At round 1, we measure gXX on the green links. This set of measurements includes the □_b(X) stabilizer of the previous round (which is therefore updated and stored for determining the syndrome, see Table II), and also adds a new stabilizer □_r(Z) formed from the checks of the previous round. In the Hamiltonian formed from the instantaneous stabilizer group, for J ≫ 1 the gXX checks fuse the two qubits on each green edge of the cubic lattice into a single qubit per edge, with effective Pauli operators X̌ = XI = IX and Ž = ZZ. In the resulting effective model, the ⋄_g(X) stabilizers are the star stabilizers and the □(Z) volume stabilizers are the cube stabilizers of the X-cube model [29]. There might exist a link between the emergence of the X-cube model subsequent to the checkerboard model and the fact that two coupled copies of the X-cube model are connected to the checkerboard model by an adiabatic deformation [30].
At round 2, one measures rZZZ, after which the checks of the previous round form the stabilizer □_r(X), while the stabilizer □_b(Z) is contained in the newly measured checks (and will thus be used for determining the syndrome). The effective Hamiltonian, Eq. (21), is again the checkerboard model, but now on the red cubes.
This concludes a period's worth of measurements, and upon repeating the measurement sequence a similar cycling continues. To summarize, the embedded code alternates between two type-I fracton models: the checkerboard model centered on the b (r) cubic sublattice and the X-cube model. Additionally, an X ↔ Z mapping occurs every round, and the period of the code is 6.
We note in passing that the current protocol can be modified by measuring a periodic sequence that alternates between (rXXX, gXX) and (bZZZ, gZZ).This increases the rank of the ISG and fuses the two qubits on each edge of the cubic lattice into one effective qubit at each round.This protocol is equivalent to repetitive measurements of the three-body X and Z check operators of the subsystem toric code proposed in ref. [31].

A. Relation to subsystem codes
The conclusion about the relation between our 3D construction and subsystem codes is the same as in 2D. First of all, the stabilizer group of the gauge group generated by all checks in the protocol contains only the subsystem symmetries shared by all ISGs of the Floquet code (i.e. the subsystem symmetries shared by the checkerboard and X-cube fracton orders). These are products of X and Z operators on planes formed by green checks. At each round, these operators are either contained in the last measured checks or in the product of one of the types of volume stabilizers. Some of these operators become 'inactive' logicals, which we discuss in the next section.
Similarly to the CSS honeycomb code, gauge fixing the subsystem code comprised of all checks does not provide any useful information for the construction of the 3D Floquet code.
Consider further the following gauge groups for the k-sliding subsystem codes (noting that k = 1 is trivial): • k = 2: The relevant gauge group G_2 is generated by the checks of two consecutive rounds. Its center Z(G_2) contains □_r(Z), □_b(X), and the plane-X and plane-Z operators, along with a possibly sub-extensive number of string-like/plane-like operators.
• k = 3: The relevant gauge group G_3 is generated by the checks of three consecutive rounds. The local stabilizers contained in Z(G_3) are □_r(X), □_b(X), and the plane-X and plane-Z operators, again along with a sub-extensive number of string-like/plane-like operators.
• Upon further adding checks to the gauge group at k ≥ 4, it is clear that the center will be no larger than Z(G_3).
Therefore, we again conclude that there is no single sliding subsystem code whose stabilizer group contains the set of plaquettes of any ISG for the 3D Floquet code.

B. Conservation of logical information
Consider the decorated cubic lattice on a 3-torus of size 2L_x × 2L_y × 2L_z, where the even-sized linear dimensions are required for three-colorability. The effective X-cube model on the corresponding superlattice encodes 4(L_x + L_y + L_z) − 3 logical qubits, while the effective checkerboard model encodes 4(L_x + L_y + L_z) − 6. Thus, there seems to be a discrepancy in the number of logical operators between the corresponding rounds. The resolution of this puzzle is a feature not present in the 2D code: there are three logical operators of the static X-cube model that are read out or scrambled by the measurement schedule, and therefore do not belong to the set of logical qubits of the Floquet code. We will call such logical qubits and the respective operators inactive. In contrast, the remaining 4(L_x + L_y + L_z) − 6 logical qubits of the static code that store information in the Floquet code will be called active. In fact, the inactive logical operators that are read out/measured are among the symmetries in the center of the subsystem code G.
To see what happens explicitly, we first start from round one (r = 1 mod 6), where the ISG corresponds to the X-cube model. Let us recall how to count the logical operators of the corresponding instantaneous effective code. Given a straight line along the effective cubic lattice, the product of Ž_e on all edges along the line commutes with the X-cube stabilizers. This physically corresponds to tunneling a lineon excitation around the torus. Moreover, Ž_e strings applied along different parallel lines are distinct, since they are not related by a product of stabilizers. However, there is a relation between certain products of such logical operators: the product over four adjacent parallel lines forming the edges of a cube equals a product of the enclosed □(Z) stabilizers. For concreteness, let us pick the logical operators formed by products along lines in the z-direction. There are (2L_x)(2L_y) such lines, and (2L_x − 1)(2L_y − 1) relations imposed on them: one for each square in the xy plane, minus the conditions that the product of all the cubes in a plane equals the identity. Altogether, the z-lineons give rise to (2L_x)(2L_y) − (2L_x − 1)(2L_y − 1) = 2L_x + 2L_y − 1 independent Z logicals. Summing over the other two directions, we find 4(L_x + L_y + L_z) − 3 logical Z operators.

FIG. 6. A fragment of the cubic superlattice of rounds r = 1 mod 6 with a realization of the X-cube model XC(g). The qubits are fused by the gXX checks of rounds r = 1 mod 6 into a single effective qubit per edge. On a 3-torus, examples of the inactive logical Z and X operators are shown in panels (a) and (b), respectively. There are in total three such independent operators for each cycle around the torus.

Now, consider the logical operator formed by the product of Ž = gZZ along all edges in a fixed xy plane. Measuring rZZZ in the next round r = 2, we note that such a product of gZZ in the plane is equivalent to the product of all ⋄_g(Z) in the same plane, see the example in Fig. 6. Moreover, each diamond is a product of two rZZZ operators. It therefore follows that this particular logical operator of the X-cube model is measured in the next round r = 2, and is therefore inactive. Similarly, the product of gZZ along all edges in one fixed xz and one fixed yz plane is also measured. This accounts for the three inactive Z logicals.
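The line-minus-relation counting above is easy to verify mechanically. A minimal sketch (the function name is ours) that reproduces the count of independent Z logicals for arbitrary torus dimensions:

```python
def xcube_z_logicals(Lx, Ly, Lz):
    """Count independent Z logicals of the X-cube model on the
    2Lx x 2Ly x 2Lz superlattice torus by the line/relation counting
    used in the text."""
    def lineons(La, Lb):
        lines = (2 * La) * (2 * Lb)               # parallel lines in one direction
        relations = (2 * La - 1) * (2 * Lb - 1)   # squares minus plane constraints
        return lines - relations                  # = 2*La + 2*Lb - 1
    # Sum the independent lineon strings over the three directions:
    return lineons(Lx, Ly) + lineons(Ly, Lz) + lineons(Lz, Lx)

for Lx, Ly, Lz in [(1, 1, 1), (2, 3, 4), (5, 5, 5)]:
    assert xcube_z_logicals(Lx, Ly, Lz) == 4 * (Lx + Ly + Lz) - 3
# Subtracting the three inactive logicals leaves 4(Lx+Ly+Lz) - 6 active ones,
# matching the checkerboard-model rounds.
```

Each `lineons` term simplifies algebraically to 2L_a + 2L_b − 1, so the sum over directions reproduces 4(L_x + L_y + L_z) − 3 for any dimensions.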
Next, let us similarly find the active X-logical operators, i.e., those that commute with the measurements of the next round r = 2. Define a product of rXXX along a straight line; suppose the line points in the z direction. This operator tunnels a lineon, which is a bound state of an xz-planon and a yz-planon. Like the Z-type lineons, the X-type lineons obey similar constraints: the product of the tunneling operators of four adjacent lineons forming the edges of a cube can be decomposed into a product of stabilizers, and the product of the tunneling operators of all lineons in a fixed plane is the identity. The importance of defining the bound states is that the local hopping operators come in pairs, and hence they commute with the rZZZ checks of the next round, which is the necessary condition for them to be active. This gives a total of 4(L_x + L_y + L_z) − 6 active X logicals. Now we ask what the remaining three logical operators are, which anticommute with the checks of round two. Fixing a direction, say y, consider the product of Xs that hops a planon (a bound state of two fractons) across the y direction, as shown in Fig. 6. It is clear that this operator anticommutes with one of the rZZZ operators. Moreover, this operator is unique up to stabilizers and the active X-logical operators. Lastly, it anticommutes with the inactive Z logical operator in the xy plane. By rotational symmetry, we conclude that there are three such inactive X logicals.
Finally, let us confirm that the active logical qubits indeed survive and are transferred to the logical qubits of round 2, which realizes the checkerboard model. The active X logical operators are products of bXXX, which is the effective qubit X̌ of round r = 2. They therefore transfer faithfully to the X logical operators of the checkerboard model. As for the active Z logical operators, defined as products of gZZ along a line, using the rZZZ checks of round two we find that their strings are equivalent to strings of bZZZ operators, i.e. to products of the effective local Ž of round two. They therefore transfer faithfully into the logical operators of the checkerboard model.
Going from round r = 2 to r = 3, we see that the ISG is defined on the same lattice. We find that the Z (X) logical operators of CB(r) in round r = 2 become the Z (X) logical operators of the Hadamard-conjugated code C̄B(r), which equate to the X (Z) logical operators of CB(r) in round three. We hence conclude that the X and Z logical operators are swapped.
Finally, from round r = 3 to r = 4, the logical information is transferred from a product of bXXX to a product of gXX using the rXXX measurements. Thus, the logical operators of the checkerboard model embed into the active logical operators of the X-cube model in the next step. The next three rounds, r = 5, 6, 7, proceed identically but with X and Z swapped.
Finally, we discuss the automorphism occurring in the model. The obvious one, which exchanges the magnetic and electric sectors, is seen by comparing CB(r) and C̄B(r) in rounds 2 and 3 and by comparing CB(b) and C̄B(b) in rounds 5 and 6, where the roles of the X and Z logical operators are swapped. The automorphism between these codes therefore occurs in the same sense as in the 2D Floquet code, up to a layer of Hadamard gates. Comparing XC(g) and X̄C(g) in rounds 1 and 4 is more subtle. Although the active logical operators get swapped, this does not produce a well-defined automorphism of the X-cube model: the inactive logical operators cannot be permuted, since they have different spatial support. Therefore, the transfer of logical information from the electric to the magnetic sector given by the protocol does not preserve the fusion rules of the excitations. It would be interesting to examine more generally the connection between the existence of inactive logical qubits and the absence of an automorphism in Floquet codes during certain rounds.

C. Decoders and threshold
Error correction in the 3D Floquet fracton code is remarkably similar to that in the 2D CSS honeycomb code, mainly because the former is a natural generalization of the latter. The details of the error syndromes look different, as we discuss in more detail below, but a decoder on the (3+1)-dimensional spacetime lattice generalizing the (2+1)-dimensional case considered earlier will perform well, and will have a threshold by an analogous argument. Moreover, if analyzed using statistical mechanics mappings [28, 32], the higher dimensionality is likely to facilitate larger thresholds.
Considering the same error model as before, where X and Z errors occur with probability p, we only need to consider two distinct times at which the errors occur; the behavior of the remaining syndromes can be deduced by symmetry. We also find that the syndromes for errors occurring after even (odd) rounds are measured at odd (even) rounds only. This means that errors after even and odd timesteps can be corrected separately.
Consider first a Z error occurring on a single qubit right after round 0. One of the gXX checks of round 1 will be affected, as well as the two cubes □_b(X) inferred using this check, which can then be compared with the stored values and recorded as a syndrome. This allows one to determine the green link on which the error occurred. Then, at step 3, rXXX is measured, and two triangles on the same octahedron will have their values flipped. The three (redundant) ⋄_g(X) plaquettes belonging to this octahedron can be inferred by combining pairs of triangles belonging to it. These stabilizers can be compared with the values stored earlier, and the comparison uniquely determines the diagonal of the octahedron on which the fault occurred. Together with knowing the green link where the error occurred, this allows one to unambiguously determine the location of the error. However, the same syndrome is found when the error instead occurs on the two complementary qubits belonging to the same blue triangle. In this case one can still assume a (more probable) single-qubit error and correct for it. If the actual error occurred on the two complementary qubits, the total error becomes a bZZZ operator, which is inconsequential because it corresponds to a check of the round r = 0 after which the error occurred. Similarly to the 2D case, such check errors do not affect the logical state.
Errors occurring after rounds 3n + 2 lead to syndromes qualitatively similar to the one discussed above. A qualitatively different type of error syndrome is found for errors that occur after rounds r = 3n + 1. Without loss of generality, consider a single-qubit X error occurring after round r = 1. In round r = 2, two of the rZZZ checks will be flipped, and the two cubes □_b(Z) sharing an edge, whose values are inferred from these checks, will be flipped and stored as a syndrome. Similarly, a pair of cubes □_r(Z) will be flipped at r = 4. The syndromes at rounds r = 2 and r = 4 both allow one to determine the location of the flipped edge ℓ, and the additional redundancy can be used for error correction that is more robust against measurement outcome errors. Applying a correcting single-qubit Pauli X to either of the qubits on this edge will either correct the error or apply an XX to the entire edge; in the latter case, the residual error is removed once the round that re-measures this check occurs. A pair of errors on two neighboring qubits belonging to the same octahedron produces the same syndrome as a pair of errors on the other two qubits belonging to the same ⋄_g plaquette. Nevertheless, this error can still be corrected, up to an inconsequential edge error on a green link, by applying a Pauli operator on any of the qubits not belonging to this diamond.
Thus, the syndromes occur on a spacetime lattice formed by the centers of cubic volumes of the same color at t = r (mod 3) and t = r + 2 (mod 3), and by the vertices of the cubic lattice at t = r + 1 (mod 3). At times t = r + 1, the measured syndrome can take one of eight values, indicating which of the three ⋄_g square plaquettes have (or have not) been violated. Mapping this problem onto a graph matching problem and designing an efficient minimum-weight decoder based on the known syndromes is an involved task that we leave to future work.

IV. DYNAMIC TREE CODES
We have considered CSS versions of Floquet codes in both two and three dimensions. Both of these codes have robust error-correcting properties, but fall outside of the subsystem code formalism. In this section, we further generalize these results by introducing a broader family of dynamic codes where the measurement sequence need not be periodic. Surprisingly, under certain constraints on the correlated randomness of the measurements, this random code can correct arbitrary single-qubit Pauli errors. This construction bears relation to some classes of monitored random circuit codes and random unitary circuits [33-40], in which achieving practical quantum error correction has been a long-standing challenge [36-40]. As of now, it is unclear whether random circuits, including those considered in refs. [41] and [42], consisting of randomly applied checks of the honeycomb code, can possess a finite threshold.

TABLE III. The syndromes obtained in rounds 1 and 3 are listed in the 'Syndrome' column. We only show the syndromes obtained in odd rounds, which are used to detect errors after even rounds, and omit the syndromes at even rounds for clarity. If a single-qubit Pauli error of flavor f_0 occurred after round r = 0, the listed syndromes allow one to unambiguously determine the red edge where the error occurred.

We call the proposed random codes dynamic tree codes, because a given code carves out a path in a configuration space of allowed checks, which forms a tree. Dynamic tree codes can be viewed as the first instance of monitored random circuit codes that are capable of correcting arbitrary single-qubit Pauli errors, though they are restricted to correlated randomness and the absence of spatial randomness. Practically speaking, these codes might be useful if the error model itself is dynamical: for example, one could adapt the error correction procedure to biased error models and to adversarial time-dependent error models.

A. Random-flavor Floquet codes and switching between CSS honeycomb code and honeycomb code
We start with the 2D case. Let us show that if the colors of the checks follow the rgb sequence but the Pauli flavors are randomized such that the flavors of two consecutive rounds differ, the resulting random-flavor Floquet code will be error-correcting and will have a threshold. The condition that the flavors of two consecutive rounds differ ensures that the checks of the two rounds always anticommute and that the rank of the ISG stays the same.
Without loss of generality, we can consider the code shown in Table III (considering only four arbitrary rounds is sufficient for the argument), where f_r ∈ {X, Y, Z} stands for the flavor of round r, f_{r+1} ≠ f_r, and the colors of the checks follow the rgbrgb... sequence (or its mirror, rbgrbg...). We can also assume that the code has been properly initialized far in the past. By inspection, we see that this code realizes a sequence of toric codes, by analogy with the 2D CSS honeycomb and honeycomb codes, on the superlattice corresponding to the color of the current round; the ISG at each round is shown in the table. From the logical strings of the code shown in Table III, we see that the conditions f_{r+1} ≠ f_r and c(r + 1) ≠ c(r) are indeed sufficient for preserving logical information between rounds, because they ensure that one never measures logical operators from round to round.
Let us now show that the random-flavor Floquet code can correct arbitrary single-qubit Pauli errors. The error occurring after round r can be expanded in the basis of the Pauli flavors f_r and f_{r+1} of the current and next rounds, respectively. The f_{r+1} component of the error can be commuted past the checks of round r + 1, so only the component with the flavor f_r of the current round needs to be considered. Therefore, we again need only consider the error model in which single-qubit Pauli errors have the flavor of the last round.
Without loss of generality, consider an f_0-Pauli error that occurred after round r = 0. Let us show that we can detect the red edge where this error occurred in spacetime in the random-flavor Floquet code shown in Table III. This is sufficient for correcting the error: as before, we only need to apply the f_0 Pauli operator to either of the qubits of the edge. If we guess the wrong qubit, the result is a two-qubit Pauli operator equivalent to the check of the last round, which is an inconsequential error.
At round r = 0, prior to the error, the values of the plaquettes P_g(f_1) and P_b(f_{−1}) are known. If one re-measures these plaquettes after the error has occurred, their sign changes will allow one to determine the edge where the error occurred, which in this case is the red edge between the two plaquettes. Referring to Table III, we see that indeed the P_b(f_{−1}) and P_g(f_1) plaquettes are immediately re-measured at rounds r = 1 and r = 3, respectively, yielding the needed syndrome changes. This allows us to detect and correct single-qubit errors. Additionally, note that the locations and timestamps of the errors are the same as in the CSS honeycomb code and in the honeycomb code.
Random-flavor Floquet codes are fault-tolerant by an argument similar to that for the CSS honeycomb and honeycomb codes. If the ISG and syndromes are book-kept as shown in Table III, it is clear that a plaquette occurring for the first time at round r is kept in the code until round r + 3, when it is updated. The new value of the plaquette detects errors that have occurred after rounds r and r + 2. This is the same syndrome-error relation as in the codes studied earlier, and the decoding procedure is analogous to that of the honeycomb code.
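The constraint structure of a random-flavor schedule is easy to state programmatically. The sketch below (function name and layout are ours) samples a valid sequence: colors cycle rgb deterministically, while consecutive flavors must differ so that the checks of consecutive rounds anticommute on their shared qubit. The honeycomb code corresponds to the deterministic flavor cycle XYZXYZ... and the CSS honeycomb code to an alternation of two flavors such as ZXZX..., so both are special paths of this ensemble.

```python
import random

COLORS = "rgb"               # the color sequence is fixed: r, g, b, r, g, b, ...
FLAVORS = ("X", "Y", "Z")

def random_flavor_schedule(n_rounds, seed=0):
    """Sample a check sequence (color, flavor) for the random-flavor Floquet
    code: flavors are uniform over the two choices differing from the
    previous round's flavor."""
    rng = random.Random(seed)
    schedule, prev = [], None
    for r in range(n_rounds):
        flavor = rng.choice([f for f in FLAVORS if f != prev])
        schedule.append((COLORS[r % 3], flavor))
        prev = flavor
    return schedule

sched = random_flavor_schedule(12)
# Consecutive rounds differ in both color and flavor by construction:
assert all(sched[i][1] != sched[i + 1][1] for i in range(len(sched) - 1))
assert [c for c, _ in sched[:6]] == list("rgbrgb")
```

Any schedule drawn this way satisfies the f_{r+1} ≠ f_r and c(r+1) ≠ c(r) conditions used in the argument above, including mid-run switches between the honeycomb and CSS honeycomb flavor patterns.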
One consequence of the existence of error-correcting random-flavor Floquet codes is that one can switch between the protocols of the honeycomb code and the CSS honeycomb code fault-tolerantly, so long as the color sequence of the checks remains unperturbed. This shows that the CSS honeycomb code and the honeycomb code are compatible, which might be useful for the future design of error-correcting codes under time-dependent error models.

B. PFA construction of error-correcting dynamic codes
Next, we address the question of whether it is possible to introduce more randomness into the Floquet code protocol while preserving its ability to detect and correct errors. We propose a construction that uses what we call a T-probabilistic finite automaton (T-PFA). If initialized in a toric code ground state, the automaton chooses the flavor and color of each check at random, realizing a toric code at each round, and by design guarantees that any single-qubit Pauli error will be detected no later than T steps after its occurrence and is thus correctable (we use the same error model as earlier in the text). If the protocol is instead initialized in an arbitrary state, it will prepare the toric code no later than after T steps and will continue to function as described, with a length-T window for error detection.
The construction is outlined below and exemplified in Fig. 7. In the discussion below, we assume that the protocol is initialized in a toric code state corresponding to the first check of the protocol.
The T-PFA has a memory containing T arrays, each consisting of up to 4 plaquettes with status either 'not remeasured' or 'remeasured'. When a new check is measured at round r = n + 1, 4 plaquettes with 'not remeasured' status are added to the corresponding array. Two of the plaquettes are elements of the current ISG with flavors different from the flavor f_{n+1} of the current check. The other two are equivalent to the first two plaquettes up to checks of the current round. The memory is designed this way because, if an error occurs after round r = n + 1, remeasuring any two of these four plaquettes of different colors suffices to determine the edge where the error occurred and its timestamp. We keep these plaquettes in memory in order to ensure that the possible syndrome of a single-qubit Pauli error can be tracked and recorded.
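The memory layout described above can be sketched as a simple data structure. The following is a schematic illustration only: the plaquette labels and all class and method names are our own, hypothetical choices, not part of the protocol's formal specification.

```python
from dataclasses import dataclass, field

@dataclass
class PlaquetteRecord:
    label: str              # e.g. "P_g(X)"; the labeling is hypothetical
    remeasured: bool = False

@dataclass
class PFAMemory:
    window: int                                  # the parameter T
    arrays: dict = field(default_factory=dict)   # round -> list of records

    def add_round(self, r, plaquette_labels):
        # a new check at round r adds up to 4 'not remeasured' plaquettes
        self.arrays[r] = [PlaquetteRecord(p) for p in plaquette_labels]

    def expire(self, n):
        # erase the array at round n - T once its syndrome is concluded
        self.arrays.pop(n - self.window, None)

    def pending(self):
        # all plaquettes still awaiting remeasurement
        return [rec for arr in self.arrays.values()
                for rec in arr if not rec.remeasured]
```

A round thus contributes one array of four records, and the window of T arrays shrinks from the back as syndromes are completed.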
The update rules for the T-PFA after round r = n are:
1. Pick the check of the next round from (rXX, rYY, rZZ, gXX, gYY, gZZ, bXX, bYY, bZZ):
(a) Eliminate the checks that have the same color or flavor as the previous round r = n (this guarantees that the rank of the ISG stays the same).
(b) Eliminate checks that randomize plaquettes that are stored in memory with 'not remeasured' status.
(c) If the memory cell at r = n − T contains a 'not remeasured' plaquette, choose only from the checks that remeasure this plaquette.
(d) Otherwise, pick a random check from the remaining options.
2. Update the memory based on the new check.
(a) Scan the memory for 'not remeasured' plaquettes. For each such plaquette, change the status to 'remeasured' if the value of this plaquette can be inferred based on the current check and the interim checks.
(b) Erase any plaquettes in the memory that are redundant with those already remeasured.
(c) Erase any plaquettes that have been randomized by the new check.
(d) Erase the array at round r = n−T (as the syndrome measurement has been concluded for this round).
(e) Create a new array with timestamp r = n + 1 which holds four 'syndrome' plaquettes with 'not remeasured' status.
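Rule 1 amounts to filtering the nine candidate checks. A minimal sketch of that filter follows, assuming checks are represented as (color, flavor) pairs and that the caller supplies the lattice-dependent sets required by rules 1(b) and 1(c); the function name and interface are our own illustration, not the authors' implementation.

```python
import random

# the nine candidate checks of rule 1
ALL_CHECKS = [(c, f) for c in "rgb" for f in ("XX", "YY", "ZZ")]

def pick_next_check(prev, forbidden=frozenset(), required=None, rng=random):
    """Rule 1: choose the check of the next round.
    prev      -- (color, flavor) of the previous round
    forbidden -- checks that would randomize a 'not remeasured'
                 plaquette (rule 1(b)); lattice-dependent
    required  -- checks that remeasure the oldest pending plaquette
                 (rule 1(c)), or None if no such plaquette exists
    """
    # rule 1(a): exclude same color or same flavor as the previous check
    options = [c for c in ALL_CHECKS
               if c[0] != prev[0] and c[1] != prev[1]]
    # rule 1(b): exclude checks that randomize pending plaquettes
    options = [c for c in options if c not in forbidden]
    # rule 1(c): restrict to checks that close the oldest syndrome
    if required is not None:
        options = [c for c in options if c in required]
    # rule 1(d): pick uniformly at random from what remains
    return rng.choice(options)
```

Note that rule 1(a) alone always leaves 2 × 2 = 4 candidates (two remaining colors times two remaining flavors), consistent with the counting argument given below.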
The rules above implicitly use the fact that whenever a new check is measured, one of the plaquettes of the previous round is updated. This is guaranteed because subsequent checks are non-commuting. For the same reason, one of the two plaquettes of the other color has to be randomized. Thus, one can verify that for rule 1(c) there will indeed be only one plaquette with 'not remeasured' status at r = n − T. Additionally, we find by explicit verification that it is always possible to find a check that satisfies the requirement in 1(c). Similarly, if step 1(d) is reached, one can see that there will be at least two choices of checks. Thus, the algorithm cannot halt due to unsatisfiability of the requirements of the update rules.

FIG. 7. An example of a random sequence of measurements using the T-PFA scheme discussed in Sec. IV B. The PFA stores a running window containing T arrays of cells (apart from during the initialization rounds between 1 and T). We assume that the code has been properly initialized in a toric code state at r = 0. Here, r counts the rounds in the measurement history and n is the current round in the PFA operation. The arrays correspond to the T memory cells of the T-PFA. In each array, a darkened plaquette label indicates a plaquette that needs to be remeasured in order to infer the syndrome corresponding to a possible error on the plaquette. Light gray plaquettes that are not crossed out have already been remeasured. We keep the plaquettes that are redundant up to the checks of the current round for completeness. Light gray plaquettes that are crossed out have been randomized before they could have been remeasured. Each new check is chosen by the PFA based on the memory of checks and plaquettes, corresponding to the set of update rules of the PFA denoted by the operator R described in the main text. In short, the new check is chosen such that it differs in color and flavor from the previous check and such that it does not randomize any of the 'not remeasured' (darkened) plaquettes in the memory. Furthermore, checks must also be chosen such that the measurement of all syndromes from more than T steps in the past is completed; so long as this holds, it suffices for the PFA to store memory from the past T rounds, and any error remains undetected for at most T rounds.
Altogether, the protocol based on the T-PFA guarantees that for a single-qubit Pauli error, the first syndrome is measured immediately after the error occurs, while the second one is measured no later than T rounds afterwards. This allows one to determine the spacetime location of the faulty edge and correct the error. In fact, additional information about errors is contained in the checks, because there are multiple ways to obtain the second syndrome from the measured checks (a plaquette can be formed in multiple ways from checks taken at various pairs of times; this applies, e.g., to the last update of P_b(Z) in Fig. 7). Together with the correctability of single-qubit errors, this argues for the likely fault tolerance of this protocol, or at least of some of its subclasses. It would be interesting to see whether there exists an efficient decoder for dynamic tree codes generated by the T-PFA, and to benchmark its performance.
One might wonder whether there exist nontrivial examples of such dynamic tree codes. In fact, random-flavor Floquet codes are the only solution for T = 3 and, as we argued above, comprise a class of fault-tolerant dynamic codes. Another example is given by codes that follow a color sequence equivalent to (rf_1f_1) [(gf_2f_2)(bf_3f_3)]^s (gf_2f_2)^k with k = 0, 1 and s + k ≤ T, or symmetric versions thereof. An illustration of such a code is shown in Fig. 7.
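The family of sequences above can be written out programmatically. The sketch below assumes that (c f f) denotes a single check of color c and flavor ff (e.g. rXX), matching the notation used elsewhere in the text; the function name is ours.

```python
def tree_code_sequence(f1, f2, f3, s, k, T):
    """Checks following (r f1 f1) [(g f2 f2)(b f3 f3)]^s (g f2 f2)^k,
    with k in {0, 1} and s + k <= T. Checks are (color, flavor) pairs."""
    assert k in (0, 1) and s + k <= T
    return ([("r", f1)]
            + [("g", f2), ("b", f3)] * s
            + [("g", f2)] * k)
```

For instance, with s = 2 and k = 1 the color sequence reads r, g, b, g, b, g, i.e. a single red check followed by alternating green and blue checks.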

C. 3D generalization
It is a straightforward but cumbersome task to confirm that the random-flavor Floquet code in 3D, i.e., one that follows an rgbbgr-like color sequence with consecutive checks anticommuting, corrects all single-qubit Pauli errors. The time signatures of the syndrome measurements depend on the flavor sequence, however, which might affect the fault-tolerance properties of the code. Similarly, one can verify that the T-PFA approach can, in principle, be generalized to 3D. Designing an efficient decoder for the 3D fracton Floquet code and 3D dynamic codes and analyzing their fault-tolerance properties would require a more involved analysis, so we leave this to future work.

V. DISCUSSION
In this paper, we presented several new dynamic codes in two and three dimensions that cannot be described within the subsystem code framework. One immediate direction would be to benchmark these codes and compare their performance to that of the honeycomb code under various error models. It should also be possible to extend our analysis to finite Abelian groups, i.e., the case where the qubits are Z_N variables. Progress on this question has been made for the honeycomb code [43], and it would be useful to see whether there are major qualitative differences for the CSS honeycomb code.
Our protocols in 3D involve 3-body measurements, and it would be beneficial to find alternative constructions whose measurements involve 2-body operators whilst preserving the error-correcting properties of the fracton Floquet codes. We found a preparation protocol for Haah's cubic code using two-body measurements (shown in Appendix B); however, constructing Floquet codes for type-II fractons would be very interesting. Furthermore, fracton codes have recently been shown to have outstanding optimal thresholds for error correction [32], with the possibility of parallel error correction [44]. Therefore, it would be interesting to rigorously benchmark the fracton Floquet code.
Another interesting question is the relation between the CSS honeycomb code and the e-m automorphism code from Ref. [21]. Furthermore, one might ask whether there exists a unifying picture for dynamic tree codes and automorphism codes from the perspective of adiabatic paths of Hamiltonians, perhaps by utilizing the parent color code model.
Finally, the dynamic tree codes proposed in this paper, especially the T-PFA-generated codes, present an interesting way of constructing monitored random circuits using correlated randomness. Understanding the robustness of this error-correcting phase, and generalizing the code to a T-PFA construction that incorporates spatial nonuniformity, would be valuable pursuits. It would also be interesting to prove fault tolerance of these monitored random circuit codes by mapping to models of statistical mechanics.
Note Added: While completing this manuscript, the authors became aware of an upcoming work where two-dimensional CSS honeycomb codes are independently found from anyon condensation [27], which provides a valuable framework for understanding Floquet codes and also finds a numerical threshold for the code.
After completion of this manuscript, the authors learned of another forthcoming work [45] which introduces a fracton Floquet code with a codespace that grows with system size and a non-zero error threshold.

We draw the configuration of links formed in Fig. A1. Since there are two links coming out of each site, each link corresponds to a two-spin interaction. The preparation protocol is therefore shown in the table above. The notation ℓ corresponds to the links labelled in the figure, where we assume that orange and pink links act on qubits of type '1', and blue and green links act on qubits of type '2'. α, β, γ, δ correspond to the labelled edges, while p and q correspond to labelled vertices. The subscripts 1 and 2 correspond to the flavors of the spins at each site.

FIG. 1. Fragment of a honeycomb lattice with three-colored plaquettes (P_{r,g,b}) and edges. The red, blue and green checks correspond to the edges connecting two plaquettes of the same color. The red checks (r), which are measured in rounds 3n, are shown by bold lines, and the triangular superlattice is shown by dashed black lines.

FIG. 3. String operators generating e and m1 anyon excitations, shown on (a) the honeycomb lattice and (b) the superlattice (with qubits on the edges), occurring at rounds r = 3n. At steps corresponding to odd n, the e-string is formed from rXX checks, whereas the m1 (m2) strings are generated by gZZ (bZZ) check strings, respectively, as shown in the figure. On the triangular superlattice, the red plaquettes turn into vertices, whereas the blue and green plaquettes correspond to the two types of triangular plaquettes. At rounds corresponding to even n, the picture is the same upon exchanging X ↔ Z.

$\binom{w}{w_e}\, p^{w_e} (1-p)^{w-w_e}$. Now, if the path $S(w)$ happens to be contained in $C$, the number of errors on it satisfies $w_e > w/2$ because of the assumption of minimum-weight matching. We formulate this as $S \setminus (S \cap E) \subseteq E_0$. This yields $\binom{w}{w_e}\, p^{w_e} (1-p)^{w-w_e} \leq \ldots$, and the timescale $T(L)$ of running the code before the error correction is performed satisfies $\lim_{L\to\infty} L^2\, T\, (10^2 p)^{\ldots} = 0$.

FIG. 5. (a) Decorated cubic lattice with two qubits per edge (located at the vertices of the resulting lattice). The cubes of the two types correspond to r and b, respectively, and the triangular plaquettes between the octahedra located at each vertex and the cube of type b (r) are shaded red (blue), respectively, i.e., the complementary color. The two square plaquettes ⋄_g that produce two independent stabilizers are shown in (b).

TABLE III. The random-flavor Floquet code, where the checks follow a fixed color sequence rgb, but the flavor f_r in each round r is randomized (with the constraint that f_{r+1} ≠ f_r).