Closest lattice point decoding for multimode Gottesman-Kitaev-Preskill codes

Quantum error correction (QEC) plays an essential role in fault-tolerantly realizing quantum algorithms of practical interest. Among different approaches to QEC, encoding logical quantum information in harmonic oscillator modes has been shown to be promising and hardware efficient. In this work, we study multimode Gottesman-Kitaev-Preskill (GKP) codes, encoding a qubit in many oscillators, through a lattice perspective. In particular, we implement a closest point decoding strategy for correcting random Gaussian shift errors. To decode a generic multimode GKP code, we first identify its corresponding lattice, and then find the lattice point in its symplectic dual lattice closest to a candidate shift error compatible with the error syndrome. We use this method to characterize the error correction capabilities of several known multimode GKP codes, including their code distances and fidelities. We also perform numerical optimization of multimode GKP codes up to ten modes and find three instances (with three, seven and nine modes) with better code distances and fidelities compared to the known GKP codes with the same number of modes. While exact closest point decoding incurs a time cost exponential in the number of modes for general unstructured GKP codes, we give several examples of structured GKP codes (i.e., of the repetition-rectangular GKP code types) for which the closest point decoding can be performed exactly in linear time. For the surface-GKP code, we show that the closest point decoding can be performed exactly in polynomial time with the help of a minimum-weight-perfect-matching (MWPM) algorithm. We show that this MWPM closest point decoder improves both the fidelity and the noise threshold of the surface-GKP code, raising the threshold to 0.602 compared to the threshold of 0.599 for the previously studied MWPM decoder assisted by log-likelihood analog information.

Quantum computers hold the promise of solving certain families of problems with significant speedups compared to their classical counterparts [1]. However, due to the ubiquitous noise in the physical systems that are used to build quantum computers [2], quantum error correction (QEC) is essential to protect quantum information from errors due to decoherence and other quantum noise [3]. The idea behind QEC is to encode a logical qubit into several physical qubits that are highly entangled [4][5][6][7][8]. A widely used family of QEC codes is the stabilizer codes, where the logical information is stored in the +1 eigenstates of a set of commuting Pauli operators, known as stabilizers [9,10]. The syndrome measurements of the stabilizers provide information on the location and nature of the possible errors. Before attempting to correct the errors, a classical decoding algorithm is typically used to analyze the results of the syndrome measurements and determine the most likely errors. During the development of fault-tolerant quantum computing, much effort has been devoted to creating better ways of encoding the logical qubits and to reducing the noise of the physical qubits. However, given its essential role in QEC, devising classical decoding algorithms that can reduce the effect of noise on a fast time scale is an equally important problem [11].
Among different platforms for quantum computers, bosonic systems have become increasingly promising because, thanks to the infinite-dimensional Hilbert spaces of the bosonic modes, QEC can be implemented in a hardware-efficient way [12][13][14][15]. For example, two-component cat codes, which can be realized in circuit QED and trapped ions, naturally realize noise-biased qubits where the phase-flip error is more prominent than the bit-flip error [16][17][18][19][20]. With that, a bias-preserving CNOT gate can be realized with two noise-biased cat qubits [21][22][23][24], which is not possible with conventional two-level systems [25]. Such a unique feature can be used to significantly reduce the resource overheads required for implementing fault-tolerant quantum computation [25][26][27][28][29][30][31][32]. The Gottesman-Kitaev-Preskill (GKP) qubit is another example of a bosonic qubit with features unattainable with two-level qubits [33]. The main novelty of the GKP encoding is that it is designed to protect against small errors on all qubits, in contrast to conventional encodings that correct errors of arbitrary amplitude on only a subset of qubits [34]. Hence the GKP encoding is more resilient to errors in the phase space that shift the values of the canonical variables q and p of the quantum system. The GKP qubit with a single mode has been realized in various platforms [35][36][37][38], and shown to suppress errors from photon losses and dissipation processes. Unfortunately, a GKP code with a single mode cannot correct random shift errors larger than a certain critical size, and thus the logical error rate cannot be suppressed to an arbitrarily small value. To improve the QEC properties of the GKP code, or to increase the critical size of correctable shift errors, one approach is to consider multimode GKP codes. For instance, one could concatenate a single-mode GKP code with a conventional multi-qubit code, such as the repetition code or the surface code
[39,40], which is generally referred to as a concatenated GKP code [41]. For this family of codes, the standard decoding techniques of multiqubit stabilizer codes, such as minimum-weight perfect matching (MWPM) [42,43], can be used for error correction. Importantly, the accuracy of the decoder can be significantly enhanced by using the analog information from the homodyne measurements of the GKP qubits [39,40,[44][45][46][47][48][49].
The QEC properties of multimode GKP codes can be understood in terms of lattices in the phase space. In the original proposal [33], it was shown that the stabilizer group elements of an N-mode GKP code are in one-to-one correspondence with the points of a 2N-dimensional lattice in the phase space. It follows from the commutation relations between the canonical variables that the lattice has to be symplectic integral. The logical operators of the GKP code, which commute with the stabilizers but are not in the stabilizer group, correspond to the symplectic dual lattice quotiented by the original lattice. Although the lattices of single-mode GKP codes have often been used for illustration purposes, there are only a few examples in the literature [50][51][52][53][54][55][56] that attempt to use the lattice structure to better understand the properties of such codes, especially in high (i.e., greater than two) dimensions, let alone devise a lattice-based decoder.
In this work, we numerically implement an exact closest point decoder for multimode GKP codes that is based on the lattice structures in the phase space. For a given GKP code, we first identify the lattice Λ that is isomorphic to its stabilizer group, and its symplectic dual Λ⊥ that contains both the stabilizers and the logical operators of the GKP code. In the absence of noise, the outcome of the syndrome measurement corresponds to a lattice point in Λ ⊂ Λ⊥, the identity operator on the code space. A random shift error on the canonical variables of the GKP code will shift the syndrome away from the lattice points in Λ⊥. Since the actual shift error is unknown a priori, the goal of the decoding algorithm is to find a candidate error that has the shortest length and is the most likely shift error compatible with the syndrome measurement. It is known that this is equivalent to finding the lattice point in the symplectic dual lattice Λ⊥ of the GKP code closest to the given syndrome, which is known as the closest point search problem in the mathematical literature [57].
The general closest point problem, however, is known to be NP-hard [58,59]; thus solving the problem, or even finding an approximate solution [60][61][62], requires runtime that is exponential in the dimension of the lattice. But intuitively we expect that the closest point of a lattice can be found much faster if the lattice has certain structure [63][64][65][66]. One trivial example is the integer lattice Z^n: in order to find the closest lattice point to an arbitrary real-valued vector, we simply round each component to its nearest integer. Hence a GKP code based on the Z^n lattice can be decoded with runtime that scales linearly with the number of modes. Building on that, root lattices, such as the checkerboard lattices D_n and their Euclidean dual lattices, also admit decoding algorithms with runtime scaling linearly with the dimensionality of the lattice [63]. More complex lattices can be built by taking direct sums of lattices, or a union of cosets of a given lattice Λ. Instead of decoding the lattice as a whole, in these cases one can decode the different components separately and then assemble the results. Aided with these strategies, we show that certain concatenated GKP codes, which correspond to glued lattices, can be decoded more efficiently by decoding the different cosets separately and then selecting the result with the shortest distance to the input vector. We apply these techniques to generalizations of the tesseract and D_4 codes as well as to the surface-GKP codes, and show that they can be decoded in linear and polynomial time, respectively.
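As a concrete illustration of these linear-time special cases, the following Python sketch (our own illustration; the function names are not from Ref. [63] or from the paper's codebase) implements coordinatewise rounding for Z^n and the standard round-and-fix decoder for D_n: round every coordinate, and if the coordinate sum is odd, re-round the worst-rounded coordinate to its second-nearest integer.

```python
import numpy as np

def closest_point_Zn(x):
    """Closest point in Z^n: round each coordinate independently."""
    return np.rint(x)

def closest_point_Dn(x):
    """Closest point in the checkerboard lattice D_n (integer vectors
    whose coordinates sum to an even number).  Round every coordinate;
    if the resulting sum is odd, re-round the coordinate with the
    largest rounding error to its second-nearest integer."""
    f = np.rint(x)
    if int(f.sum()) % 2 == 0:
        return f
    k = int(np.argmax(np.abs(x - f)))        # worst-rounded coordinate
    f[k] += 1.0 if x[k] > f[k] else -1.0     # flip it the other way
    return f
```

Both routines touch each coordinate a constant number of times, which is the linear scaling referred to above.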
The remainder of the paper is organized as follows. In Sec. III, we provide the necessary background information on GKP codes and show that every GKP code can be viewed as a symplectic integral lattice. In Sec. IV, we formulate the problem of decoding a GKP code as finding the closest lattice point in its symplectic dual lattice. A general algorithm is presented for solving the problem for unstructured lattices. In Sec. V C, we utilize the algorithm to analyze the error correction properties, including code distance and fidelity, of several known GKP codes, as well as generalizations of the tesseract and D_4 codes. We provide a proof-of-concept demonstration that the lattice perspective of the GKP code allows one to numerically search for optimized GKP codes. The numerically found GKP codes, despite not being optimal, exhibit distances and fidelities that are comparable to or better than those of the best known GKP codes. However, solving the closest point problem for unstructured lattices incurs exponential time cost in the number of modes. In Sec. VI, we present several strategies for decoding structured GKP lattices. These strategies play an important role in decoding the D_n lattices, as shown in Sec. VII, which in turn serve as building blocks for more sophisticated GKP codes. In Sec. V B, we show that the generalizations of the tesseract and D_4 codes can be decoded with runtime that scales linearly with the size of the codes, which enables us to benchmark the error correction capabilities of these code families at scale. In Sec. IX, we present an exact and polynomial-time closest point decoder for the surface-GKP codes based on MWPM. We show that this decoder improves both the fidelity and the noise threshold of the surface-GKP code, compared to the previously studied MWPM decoder assisted by log-likelihood analog information.
We conclude and discuss the future directions in Sec. X. We provide more technical details in the appendices.

II. SUMMARY OF MAIN CONTRIBUTIONS
Here we summarize the three key contributions of this work. First, we numerically demonstrate a closest point search algorithm for decoding general multimode GKP codes. Since the initial proposal [33], decoding a GKP code has been known to be related to finding the Voronoi cell of its dual lattice, the cell containing all the points that are closer to the origin than to any other lattice site. In a recent publication [54], it is shown that the optimal maximum-likelihood decoding strategy for multimode GKP codes can be approximated by the closest point decoder. In these prior works, however, there have been no numerical implementations of an exact closest point decoder for general GKP codes. Here we present a self-contained introduction to a closest point search algorithm for general lattices [67]. The source code and data used in this work are available through the package LatticeAlgorithms.jl [68].
Closest point search is a well known NP-hard problem, and decoding a generic unstructured GKP code generally takes time exponential in the number of modes. Hence, there is no a priori evidence that it is practical to decode large instances of multimode GKP codes via the closest point search strategy. The second contribution of this work is to show that certain structured GKP codes can be decoded efficiently with the closest point decoder. We present a set of tools for decoding structured GKP codes, with which linear-time closest point decoders are constructed for two families of repetition-GKP codes. More remarkably, we demonstrate an efficient closest point decoder for the surface-GKP code, which improves both the fidelity and the threshold of the surface-GKP code while having exactly the same time complexity as the commonly used MWPM decoder. Our findings suggest that efficient closest point decoding strategies may exist for other commonly studied structured GKP codes.
The third contribution of our work is that we find three instances of GKP codes that outperform the known structured GKP codes in terms of code distance and fidelity. In particular, with numerical optimization of multimode GKP codes up to ten modes, we find GKP codes with three, seven and nine modes that have better performance than the known GKP codes with the same number of modes, including the repetition-GKP code, the [[7,1,3]]-hexagonal GKP code and the surface-GKP code. For GKP codes with an even number of modes, the distances of the optimized codes are smaller than those of the YY-repetition-GKP codes, the concatenation of two copies of repetition-GKP codes with the YY stabilizer. Despite that, we find that the fidelities of the optimized codes are the same as or better than those of the YY-repetition-GKP codes with the same number of modes. A detailed study of these new GKP codes is an interesting future research topic.

A. Displacement operators and Gaussian unitaries
In this work, we will work with the quantum Hilbert space of N harmonic oscillator modes. Let â_j and â†_j denote the annihilation and creation operators for the j-th mode; we have the commutation relation [â_j, â†_k] = δ_jk, where we have set ℏ = 1. Since we use Gaussian operations and related concepts in many places, it proves convenient to introduce the quadrature operator x̂ ≡ (q̂_1, p̂_1, ..., q̂_N, p̂_N)^T, with q̂_j ≡ (â_j + â†_j)/√2 and p̂_j ≡ (â_j − â†_j)/(i√2). Then, we have [x̂_j, x̂_k] = iΩ_jk, (2) where the symplectic form Ω is a 2N × 2N matrix and is given by Ω = I_N ⊗ ω, with ω ≡ ((0, 1), (−1, 0)). (3) Here I_N is the N × N identity matrix, and we have denoted operators with a hat and (column) vectors in bold fonts. We note that Ω^{−1} = Ω^T = −Ω.
We remark that the choice of the ordering of the position and momentum operators in x̂ is not unique. We refer to the ordering convention chosen above as the qpqp ordering. Occasionally it is convenient to work with a different ordering convention where x̂ ≡ (q̂_1, ..., q̂_N, p̂_1, ..., p̂_N)^T, which is referred to as the qqpp convention. In the latter case, the symplectic form Ω reads Ω = ((0, I_N), (−I_N, 0)). In this paper, we will mostly work with the qpqp ordering as in Eq. 3 unless we explicitly state that the qqpp ordering is used instead. The quadrature operators can be thought of as the generators of translations in the 2N-dimensional phase space of the N oscillator modes. Specifically, let u = (u_{q,1}, u_{p,1}, ..., u_{q,N}, u_{p,N})^T ∈ R^{2N} be a vector in the phase space. Then, the displacement operator D(u) is defined as D(u) ≡ exp(i u^T Ω^{−1} x̂). This is a displacement operator in the sense that D†(u) x̂ D(u) = x̂ + u, (6) which shifts the quadrature operators x̂ by an amount u. With that, we have the following commutation relation for the displacement operators: D(u) D(v) = e^{−i u^T Ω v} D(v) D(u). Hence the two displacements associated with u and v commute if and only if their symplectic product u^T Ω v is an integer multiple of 2π. The displacement operator is an example of a Gaussian unitary operator that preserves the symplectic form Ω: it is clear that the commutation relation in Eq. 2 is invariant under the translation in Eq. 6. More generally, one can consider a Gaussian unitary Û that transforms the quadrature operator x̂ into S x̂ + u [69], i.e., Û† x̂ Û = S x̂ + u. If the symplectic form Ω is invariant under this transformation, i.e., [Û† x̂_j Û, Û† x̂_k Û] = iΩ_jk, then from [(S x̂ + u)_j, (S x̂ + u)_k] = i(SΩS^T)_jk we conclude that S is a 2N × 2N symplectic matrix, i.e., SΩS^T = Ω. Hence a Gaussian unitary operator is fully characterized by a symplectic matrix S and a vector u. In this work, it suffices to consider Gaussian operations with u = 0. Those with u ≠ 0 and S = I_{2N} are referred to as displacement operators, as above.
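These properties of the symplectic form are easy to verify numerically. The snippet below (our own illustration, using the qpqp ordering) builds Ω and checks that a one-mode squeezer, a simple example of a Gaussian symplectic transformation, preserves it.

```python
import numpy as np

def Omega(N):
    """Symplectic form in the qpqp ordering: Omega = I_N (x) omega."""
    omega = np.array([[0.0, 1.0], [-1.0, 0.0]])
    return np.kron(np.eye(N), omega)

# Omega is antisymmetric with Omega^{-1} = Omega^T = -Omega.
O = Omega(2)

# A one-mode squeezer S = diag(eta, 1/eta) satisfies S Omega S^T = Omega,
# i.e., it is a symplectic matrix.
eta = 1.7
S = np.diag([eta, 1.0 / eta])
```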

B. Multimode GKP code
A GKP code with N modes is stabilized by 2N independent stabilizer generators. Each stabilizer generator is given by a displacement in the 2N-dimensional phase space, Ŝ_j ≡ D(√2π v_j), where j ∈ {1, ..., 2N} and v_j is the translation vector corresponding to the j-th stabilizer generator. We also introduce a vector of operators ĝ ≡ (ĝ_1, ..., ĝ_{2N})^T ≡ √2π M Ω^{−1} x̂, where M is a 2N × 2N matrix with the j-th row given by the row vector v_j^T, so that Ŝ_j = e^{iĝ_j}. The full stabilizer group is given by S = {D(√2π M^T a), a ∈ Z^{2N}}, and since the stabilizer generators commute with each other, a generic stabilizer group element reads Ŝ(a) = D(√2π M^T a) = ∏_j Ŝ_j^{a_j}. (14) Eq. 14 establishes an isomorphism between the stabilizer group and a lattice with the generator matrix M, where the stabilizer group element Ŝ(a) is mapped to the lattice point √2π M^T a. Since the stabilizers form an Abelian group, we require that D(√2π M^T a) and D(√2π M^T b) commute for all a, b ∈ Z^{2N}. Equivalently, it is required that the symplectic Gram matrix A ≡ M Ω M^T has only integer entries. Lattices with this property are called symplectic integral lattices [57]. Hereafter we shall use n for the dimension of a general lattice, and N for the number of modes of a GKP code. We refer to the matrix M in ĝ = √2π M Ω^{−1} x̂ as the generator matrix of the GKP code (or sometimes simply the GKP generator matrix). We will also use M_Λ to denote the generator matrix of a lattice Λ.
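The integrality condition on A is straightforward to check in code. Below is a minimal sketch (our own helper names), using the single-mode square-lattice qubit GKP code M = √2 I_2 as an example:

```python
import numpy as np

def Omega(N):
    omega = np.array([[0.0, 1.0], [-1.0, 0.0]])
    return np.kron(np.eye(N), omega)

def symplectic_gram(M):
    """Symplectic Gram matrix A = M Omega M^T of a 2N x 2N generator."""
    return M @ Omega(M.shape[0] // 2) @ M.T

def is_symplectic_integral(M, tol=1e-9):
    """A GKP generator matrix is valid iff A has only integer entries."""
    A = symplectic_gram(M)
    return bool(np.all(np.abs(A - np.rint(A)) < tol))

# Single-mode square-lattice qubit GKP code: A = 2 * omega.
M_sq = np.sqrt(2.0) * np.eye(2)
```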

C. Canonical generator matrix of a GKP code
Much like the set of generators of a stabilizer group is not unique, one can pick a different basis, and hence a different but equivalent generator matrix, for the same lattice. For instance, a lattice Λ(M) can also be generated by M' = RM, where R is a unimodular matrix, i.e., an integer matrix with |det(R)| = 1. The corresponding symplectic Gram matrices are related via A' = M'ΩM'^T = RAR^T. Since A is integer-valued and anti-symmetric, i.e., A^T = −A, it is possible to find a unimodular matrix R such that RAR^T = diag(d) ⊗ ω, where diag(d) is the diagonal matrix with d = (d_1, ..., d_N) on its diagonal. Here, d_k can be interpreted as the number of states encoded in the k-th "mode". We say that a generator matrix M of a GKP code is in the canonical form when its symplectic Gram matrix A = MΩM^T is given by diag(d) ⊗ ω for some d ∈ N^N. In App. B, we provide more details on how to find an appropriate unimodular matrix R that converts a valid GKP generator matrix M into a canonical GKP generator matrix M' = RM.

D. The logical operators of a GKP code
Displacement operators that preserve the GKP code subspace form the normalizer group of the code, which consists of the phase space translations that commute with the stabilizer group. By definition, for an arbitrary element Ŝ⊥ ≡ D(√2π u) ∈ S⊥, we have u^T Ω v ∈ Z for all v ∈ Λ. This is precisely the defining condition of the symplectic dual lattice of Λ, which can be generated by M⊥ ≡ Ω A^{−1} M. (24) Indeed, as one can check, b^T M⊥ Ω M^T a = b^T Ω a is an integer for all a, b ∈ Z^{2N}. In the literature, there is an alternative definition M⊥ = A^{−1} M, which differs from our definition in Eq. 24 only by the unimodular matrix Ω multiplied from the left. Hence the two definitions generate the same lattice. We note that since the stabilizer group is Abelian and all elements commute with each other, it follows that S ⊂ S⊥. Since the logical operators of a QEC code leave the stabilizer group invariant, we can analogously associate the translations in the dual lattice with the logical operators, defined as L̂_j ≡ D(√2π w_j), (25) where j ∈ {1, ..., 2N} and w_j^T is the j-th row of M⊥. The 2N logical operators of the GKP code are, however, not independent of each other, because logical operators that differ by a stabilizer are indistinguishable in the code subspace. This corresponds to the fact that the logical information of the GKP code is encoded in the quotient group Λ⊥/Λ, and the number of distinct logical operators is equal to the number of elements in the quotient group, i.e., [54,55] |Λ⊥/Λ| = |det(M)| / |det(M⊥)| = |det(M)|². (26) We have used Eqs. 16 and 24 to derive Eq. 26. Hence the number of states encoded in the GKP lattice is given by |det(M)|, the determinant of the GKP lattice generator matrix, with |det(M)|² counting the distinct logical operators. The generator of the symplectic dual lattice takes a particularly simple form when M is in the canonical form, i.e., A = diag(d) ⊗ ω, because M⊥ = Ω(diag(d) ⊗ ω)^{−1} M = (diag(1/d) ⊗ I_2) M. (27) Hence the logical operators are simply the stabilizers divided by the corresponding integers d_i in the canonical basis. We note that sometimes it may be more convenient to use the identity (M⊥)^T = Ω M^{−1} Ω^{−1}. (28)
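Under the convention above (M⊥ = ΩA⁻¹M), the dual generator and the quotient-group counting can be checked numerically for the square-lattice qubit code. This is our own sketch, not the paper's implementation:

```python
import numpy as np

def Omega(N):
    omega = np.array([[0.0, 1.0], [-1.0, 0.0]])
    return np.kron(np.eye(N), omega)

def dual_generator(M):
    """Generator of the symplectic dual lattice, M_perp = Omega A^{-1} M
    with A = M Omega M^T (one convention; others differ by a unimodular
    factor and generate the same lattice)."""
    O = Omega(M.shape[0] // 2)
    A = M @ O @ M.T
    return O @ np.linalg.inv(A) @ M

M = np.sqrt(2.0) * np.eye(2)     # square-lattice qubit GKP code
Mp = dual_generator(M)
```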

E. Code distances of a GKP code
In order to quantify the error correction capability of a GKP code, we will need several metrics for evaluating GKP codes. For standard qubit-based stabilizer codes, one such metric is the distance of the code, defined as the weight of the shortest nontrivial logical operator [1]. Motivated by that, we can define the distance of a GKP code as the Euclidean length of the shortest nontrivial logical operator [54], d ≡ √2π min_{u ∈ Λ⊥\Λ} ‖u‖, (29) where the factor of √2π comes from the definition in Eq. 25, and the minimum is taken over the lattice vectors that are in the symplectic dual lattice Λ⊥ = Λ(M⊥) but not in the original lattice Λ = Λ(M).
To be more concrete, let us consider a GKP code that encodes a qubit into N modes. Since the Euclidean distance is independent of the choice of basis vectors for the lattice, we assume that the generator matrix M of the GKP code is canonized with d_1 = 2 and d_2 = ... = d_N = 1. From Eq. 27, we notice that the symplectic dual lattice Λ⊥ = Λ(M⊥) is spanned by the same set of basis vectors as Λ = Λ(M), except that the first two basis vectors of Λ⊥ are half of those for Λ. Hence the quotient group Λ⊥/Λ is generated by w_1^T and w_2^T, where w_{1,2} correspond to the logical operators of the encoded qubit. Since logical operators are indistinguishable if they differ by a stabilizer, we can identify the following sets of vectors, {w_1 + u, ∀ u ∈ Λ}, {w_2 + u, ∀ u ∈ Λ}, {w_1 + w_2 + u, ∀ u ∈ Λ}, (31) with the logical X̄, Z̄ and Ȳ operators, respectively. Note that the logical identity operator Ī simply corresponds to all the lattice vectors in Λ, i.e., {u, ∀ u ∈ Λ}. We can define the distances of the different logical operators as the minimum length of the corresponding set of vectors, d_P ≡ √2π min_{v ∈ P̄} ‖v‖ for P ∈ {X, Y, Z}, and the distance is given by d = min(d_X, d_Y, d_Z). (32) Eq. 32 is a special case of Eq. 29 for GKP codes that are in the canonical basis and encode a single qubit, because the vectors in the three summands of Eq. 31 are guaranteed to lie in Λ⊥ but not in Λ by construction.
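For small codes these coset minimizations can be brute-forced by enumerating integer coefficient vectors. The sketch below is our own (with a bounded search radius that is adequate for this two-dimensional example); it reproduces the distances of the rectangular code discussed later, for which M⊥ = M/2 in the canonical basis.

```python
import numpy as np
from itertools import product

SQRT_2PI = np.sqrt(2.0 * np.pi)

def logical_distances(M, Mp, r=3):
    """Brute-force distances of a canonical one-qubit GKP code:
    d_P = sqrt(2*pi) * min |w_P + M^T a| over integer a in [-r, r]^n,
    with w_X = w1, w_Z = w2, w_Y = w1 + w2 (w1, w2 = first rows of Mp)."""
    n = M.shape[0]
    reps = {"X": Mp[0], "Z": Mp[1], "Y": Mp[0] + Mp[1]}
    dists = {}
    for name, w in reps.items():
        best = min(np.linalg.norm(w + M.T @ np.array(a))
                   for a in product(range(-r, r + 1), repeat=n))
        dists[name] = SQRT_2PI * best
    return dists

eta = 2.0 ** 0.25
M = np.sqrt(2.0) * np.diag([eta, 1.0 / eta])   # rectangular qubit GKP code
Mp = M / 2.0                                   # its symplectic dual
d = logical_distances(M, Mp)
```

The enumeration cost grows exponentially with the lattice dimension, mirroring the hardness of the general closest point problem.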

F. Transformation between GKP codes
Recall that a Gaussian unitary Û transforms the quadrature operator x̂ into S x̂, where S is a symplectic matrix. If we apply the Gaussian unitary to a stabilizer group element Ŝ = D(√2π M^T a), as defined in Eq. 14, the new GKP code is stabilized by the stabilizer Û Ŝ Û† = D(√2π (M S^T)^T a), where we used the fact that S^{−1} = ΩS^TΩ^{−1}, because S(ΩS^TΩ^{−1}) = (SΩS^T)Ω^{−1} = ΩΩ^{−1} = I. Thus the new GKP lattice has the generator matrix M' = M S^T. (34) With Eq. 34, we can realize any GKP code by applying a Gaussian unitary operator to an N-mode square-lattice GKP code, which is generated by M_sq(d) ≡ ⊕_{k=1}^N √(d_k) I_2. (35) To see that, consider a general GKP code in its canonical basis; we can always decompose M into M = M_sq(d) S^T with S^T = M_sq(d)^{−1} M, which is a symplectic matrix because S^T Ω S = M_sq(d)^{−1} (M Ω M^T) M_sq(d)^{−1} = M_sq(d)^{−1} (diag(d) ⊗ ω) M_sq(d)^{−1} = I_N ⊗ ω = Ω. If S^T is symplectic, then S is also symplectic, because inverting S^T Ω S = Ω gives S^{−1} Ω S^{−T} = Ω, and multiplying by S from the left and S^T from the right yields S Ω S^T = Ω. Thus we see that any GKP code can be understood as a code that results from applying a Gaussian unitary operator Û, with a corresponding symplectic matrix S = (M_sq(d)^{−1} M)^T, to a square-lattice GKP code. The corresponding stabilizers are then given by Ŝ_{2k−1} = D(√(2πd_k) s_{2k−1}), Ŝ_{2k} = D(√(2πd_k) s_{2k}), (38) for k ∈ {1, ..., N}, where s_1, ..., s_{2N} are the columns of the symplectic matrix S.
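The decomposition M = M_sq(d) S^T is easy to verify numerically for a rectangular code (a sketch with our own variable names; here M_sq(d = 2) = √2 I_2 and S^T is a one-mode squeezer):

```python
import numpy as np

def Omega(N):
    omega = np.array([[0.0, 1.0], [-1.0, 0.0]])
    return np.kron(np.eye(N), omega)

eta = 1.3
M = np.sqrt(2.0) * np.diag([eta, 1.0 / eta])   # rectangular qubit GKP code
M_sq = np.sqrt(2.0) * np.eye(2)                # square-lattice code, d = 2

# S^T = M_sq^{-1} M is the symplectic matrix (here a squeezer) that maps
# the square-lattice GKP code onto the rectangular one.
St = np.linalg.inv(M_sq) @ M
```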

G. The concatenated GKP code
Here we describe how to concatenate a GKP code with a qubit stabilizer code. We assume that the base GKP code encodes a single qubit in each mode, and we prepare N such GKP qubits with N modes. We then consider a standard qubit-based stabilizer code [[N, k, d_0]], where d_0 is the distance of the qubit stabilizer code (not to be confused with the distance of the resulting GKP code). The resultant concatenated GKP code encodes k qubits in N modes.
For the qubit stabilizer code [[N, k, d_0]], recall that each stabilizer generator corresponds to a binary vector with 2N components, where the odd-numbered and even-numbered entries represent the presence of X and Z operators, respectively.
Hence we shall use g_j, j = 1, ..., N − k, to denote the set of binary vectors for the stabilizer generators. For simplicity, we start with the square lattice as the base GKP code, and form a separable lattice generated by N copies of the square GKP code, M^(sq) = M_sq^⊕N. From Eq. 35, we see that M^(sq) = √2 I_{2N} in an appropriately chosen basis. Here the prefactor √2 is due to the fact that each base GKP code encodes one qubit (i.e., two states) in a mode. In order to obtain the lattice corresponding to the concatenated code, we replace N − k rows in M^(sq) by the set of vectors (1/√2) g_j^T, j = 1, ..., N − k, such that the resultant matrix, denoted by M^(sq)_conc, remains full rank. In App. A, we show in more detail how to arrive at M^(sq)_conc, and that det(M^(sq)_conc) = 2^k, which indicates that the resultant lattice indeed encodes k qubits, as desired. This process of deriving a lattice from a binary code is known as Construction A in Ref. [57].
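A minimal Construction-A example (our own sketch): take two square-lattice GKP qubits and the [[2,1]] repetition code with the single stabilizer XX, whose binary vector in the convention above is g = (1, 0, 1, 0). Replacing one row of √2 I_4 by g/√2 yields a full-rank, symplectic integral generator with determinant 2^k = 2.

```python
import numpy as np

def Omega(N):
    omega = np.array([[0.0, 1.0], [-1.0, 0.0]])
    return np.kron(np.eye(N), omega)

def construction_A(stabilizers, N):
    """Construction-A generator for a stabilizer code on N square-lattice
    GKP qubits: start from sqrt(2) I_{2N} and overwrite one row per
    stabilizer with g / sqrt(2).  (This simple row choice suffices here;
    in general the rows must be chosen to keep the matrix full rank.)"""
    M = np.sqrt(2.0) * np.eye(2 * N)
    for j, g in enumerate(stabilizers):
        M[j] = np.asarray(g, dtype=float) / np.sqrt(2.0)
    return M

# [[2, 1]] repetition code with stabilizer XX: g = (1, 0, 1, 0) in qpqp.
M_conc = construction_A([(1, 0, 1, 0)], N=2)
```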
The Construction A procedure allows us to concatenate a generic base GKP code with a stabilizer code. Let M_base be the generator matrix of a generic qubit-into-an-oscillator GKP code and assume that M_base is in the canonical form. From the discussion in Sec. III F, we can always find a symplectic matrix S_base such that M_base = M_sq S_base^T. Hence, the generator matrix of the concatenated GKP code is given by M_conc = M^(sq)_conc (S_base^T)^⊕N.

H. Examples of symplectic lattices and GKP codes

The error-correcting capabilities of a GKP code are strongly tied to the properties of the underlying symplectic lattice of the GKP code. In this section, we review several relevant symplectic lattices.
Z-type lattice - The simplest way to encode a state into a multimode GKP code is to use the hypercubic lattice Z^{2N}, generated by M_{Z^{2N}} ≡ I_{2N}, the 2N × 2N identity matrix. The resulting stabilized state is given by a tensor product of N GKP qunaught states [39,70,71].
One can scale the lattice spacings along different axes to encode multiple states and qubits in the lattice, as shown in Eq. 35. For instance, one way to encode a qubit into a single-mode GKP code is to use the two-dimensional rectangular lattice given by M_rec = √2 ((η, 0), (0, 1/η)), (42) where η > 0 is the square root of the aspect ratio between the two axes. The rectangular lattice can be obtained from the square lattice M_sq(2) via the transformation M_rec = M_sq(2) S_rec^T, with S_rec = ((η, 0), (0, 1/η)). Here the symplectic matrix S_rec corresponds to a one-mode squeezing operation. Similarly, an N-mode hyperrectangular GKP code can be obtained by applying the tensor product of N one-mode squeezing operations to an N-mode hypercubic GKP code generated by a scaled Z^{2N} lattice (by a factor of √2). The logical operators of the code can be deduced from Eq. 25. From Eq. 24, we have M_rec⊥ = (1/√2) ((η, 0), (0, 1/η)), whose two rows correspond to the logical X̄ and Z̄ operators. Upon solving Eq. 31, we have the code distances for the rectangular code, d_rec,X = √π η, d_rec,Z = √π/η, d_rec,Y = √(π(η² + η^{−2})). (45) D-type lattice - The D-type lattice, denoted D_n, is an n-dimensional sublattice of the Z^n lattice such that the sum of the components of each lattice point is even [57]. Formally, D_n = {x ∈ Z^n : Σ_{i=1}^n x_i ∈ 2Z}. In other words, the D_n lattice can be obtained by coloring the Z^n lattice in a checkerboard pattern; hence the D_n lattice is also called the checkerboard lattice. The simplest example is the D_2 lattice, given by M_{D_2} = ((1, 1), (1, −1)), (47) which is nothing but a square lattice rotated by 45 degrees, also known as the diamond code. An important feature of the D_n lattice is that the volume of its fundamental parallelotope is always 2, i.e., |det(M_{D_n})| = 2.
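The checkerboard condition and the determinant of D_2 can be sanity-checked directly (our own snippet, covering exactly the facts used in the next paragraph):

```python
import numpy as np
from itertools import product

omega = np.array([[0.0, 1.0], [-1.0, 0.0]])

# Diamond (D_2) code: a square lattice rotated by 45 degrees, |det| = 2.
M_D2 = np.array([[1.0, 1.0],
                 [1.0, -1.0]])

def in_Dn(v):
    """Membership in D_n: integer coordinates with an even sum."""
    v = np.asarray(v)
    return bool(np.allclose(v, np.rint(v)) and int(np.rint(v).sum()) % 2 == 0)

# Every integer combination a^T M_D2 of the rows lands on the checkerboard.
points = [np.array(a) @ M_D2 for a in product(range(-2, 3), repeat=2)]
```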
Combining this with the fact that the symplectic Gram matrix of M_{D_n} is an integer matrix, we find that the D_n lattice can be used to define a GKP code that encodes one qubit in N modes for even n = 2N. In fact, it was shown in Ref. [55] that the GKP code defined with the lattice D_{2N} can be viewed as an N-qubit repetition code (with Y-type stabilizers) concatenated with the diamond GKP code defined in Eq. 47.
The fact that the D_{2N} lattice can be viewed as a repetition code allows us to infer its code distances straightforwardly. Because the diamond code in Eq. 47 is simply a rotated square lattice, its code distances are the same as those given in Eq. 45 with η = 1. Since the YY stabilizers in the repetition code detect the X and Z errors but not the Y errors, the Euclidean distance for the logical Ȳ operator remains the same as that for the diamond code. Further, because the X̄ operator of the YY repetition code is a tensor product of Pauli X operators, and the X̄ operator of the diamond code corresponds to w_1 = ½(1, 1)^T, one can show that the X̄ operator of the D_{2N} code corresponds to a 2N-component vector with all entries equal to 1/2 [55]. This vector has the minimum length √(N/2) among {w_1 + u, ∀ u ∈ Λ} because Λ is an integral lattice. As a result, we have d_{D_{2N}},X = √2π √(N/2) = √(πN). Because of the symmetry between the logical Z̄ and X̄ operators, we conclude that d_{D_{2N}},X = d_{D_{2N}},Z = √(πN) and d_{D_{2N}},Y = √(2π). We see that d_X = d_Y = d_Z if and only if N = 2, i.e., for the D_4 lattice. We will call a code with equal X̄, Ȳ, Z̄ logical distances a balanced code.
Tesseract lattice - The tesseract lattice is a four-dimensional analogue of the cube, with the generator matrix (in the qpqp ordering) M_tess = 2^{−1/4} ((1, 0, 1, 0), (2, 0, 0, 0), (0, √2, 0, 0), (0, 0, 0, √2)). (49) We can notice that this is a direct sum of two sublattices, where the q_1 and q_2 quadratures form a (scaled) D_2 lattice, and p_1 and p_2 form a (scaled) Z_2 lattice. More importantly, the tesseract code can be viewed as the rectangular GKP code concatenated with the 2-qubit repetition code. To see that, we consider two copies of the rectangular GKP code in Eq. 42 with η = 2^{1/4} and the 2-qubit repetition code with an XX stabilizer. Following the approach above, we arrive at a generator matrix M'_tess = U M_tess, where M_tess is given in Eq. 49 and U is a unimodular matrix. Since M'_tess differs from M_tess only by a unimodular matrix, they are different bases for the same lattice, namely the tesseract lattice.
For the distance of the tesseract code, since the XX stabilizer cannot detect logical X̄ errors, it follows that d_tess,X is the same as that of the underlying rectangular code, √π η = 2^{1/4}√π. On the other hand, we expect both distances for the logical Ȳ and Z̄ to improve thanks to the XX stabilizer. Upon explicitly solving Eq. 31 for M⊥_tess, we have d_tess,X = d_tess,Z = 2^{1/4}√π and d_tess,Y = 2^{1/4}√(2π), so that d_tess = 2^{1/4}√π. We see that for the case of N = 2, the D_4 lattice has distance d_{D_4} = √(2π), which is greater than d_tess = 2^{1/4}√π for the tesseract code. However, one problem with the D_{2N} lattice is that its distance remains the same for all values of N, because the distance of the logical Ȳ operator does not scale with the number of modes. In Sec. V B, we will introduce two generalizations of the tesseract and D_4 codes that have larger distances than the D_{2N} lattice for N > 2.

IV. CLOSEST POINT DECODER FOR THE GKP CODES
In this section, we introduce the closest point decoder for the GKP code, which is based on the lattice structure of the code. We begin by following Ref. [54] to formulate the decoding problem as a lattice problem.

A. Error syndrome for GKP code
Suppose that we have a GKP code that encodes one qubit in N oscillators. The GKP code can be used to correct small shift errors, and we assume that the oscillator quadratures undergo independent and identically distributed (iid) additive errors, x̂ → x̂ + ξ, where ξ ≡ (ξ_{q,1}, ξ_{p,1}, ..., ξ_{q,N}, ξ_{p,N})^T with components ξ_{q,j}, ξ_{p,j} ∼ iid N(0, σ²) are random shifts that follow the Gaussian distribution N(0, σ²). The errors can be modeled by applying the displacement operator D(ξ) to the GKP code. Our goal is to apply another displacement operator D(−ξ*) to the errant GKP code to minimize the chance of getting a logical error.
The shift error ξ is not known a priori, and we have only the information from the stabilizer measurements. Recall that the stabilizers of a GKP code are given by Ŝ_j, where v_j^T is the j-th row of M (c.f. Eq. 14). Because the stabilizers commute with each other, they can be measured simultaneously. This is equivalent to measuring the exponents i√(2π) v_j^T Ω^{−1} x̂ modulo 2πi. Let s_j denote the error syndrome from the homodyne measurements; it differs from √(2π) v_j^T Ω^{−1} ξ by an integer multiple of 2π, i.e., s = √(2π) M Ω^{−1} ξ mod 2π, where the modulo operation is applied element-wise. In other words, the shift error is related to the syndrome via ξ = (1/√(2π)) Ω M^{−1}(s + 2πa) for a certain integer-valued vector a. For the purpose of decoding, we can write ξ = η(s) + √(2π)(M⊥)^T b, where we have used the identity in Eq. 28 and introduced the integer-valued vector b ≡ −Ωa and η(s) ≡ (1/√(2π)) Ω M^{−1} s. Thus, we learn about the shift ξ only modulo the lattice generated by √(2π) M⊥, i.e., a lattice of logical operators. Conditioned on the error syndrome s obtained from the homodyne measurement, we are looking for a shift ξ* that has the shortest length and is therefore the most likely shift error compatible with the measured stabilizer values. Thus we need to solve the following problem: ξ* = arg min_{ξ ∈ η(s) + √(2π)Λ(M⊥)} ‖ξ‖. With that, we will apply the counter displacement D(−ξ*). After the correction, the initial state is translated by D(−ξ*) D(ξ) = e^{iα} D(e), where α is an irrelevant phase and e ≡ √(2π)(M⊥)^T c for some c ∈ Z^{2N}. To see the net result of the shift error and the counter displacement, we can again assume M⊥ is in the canonical basis, where the first two rows of M⊥ generate the logical operators. Hence, after the attempted correction, we will have X̄^{c_1} Z̄^{c_2} acting on the encoded information, and there will be a logical error if and only if either of the first two components of c is an odd integer.
For more general cases with a non-canonical M⊥, let R denote the unimodular matrix that canonizes the generator matrix M, i.e., M′ = RM is in the canonical basis, and M′⊥ = −Ω(R^T)^{−1}ΩM⊥ gives the canonized logical operators, according to Eq. 24. With that, Eq. 58 can be applied to −ΩRΩc in the non-canonical basis.
Before proceeding, we remark that Eq. 57 is referred to as minimum energy decoding (MED) in Ref. [54], which is an approximation of the optimal maximum likelihood decoding (MLD) for general GKP codes. In particular, the MED is only optimal in the limit σ → 0, as it searches for the most probable error instead of the most probable coset of errors. Nevertheless, for certain quantum error correction codes, MED has been shown to have performance comparable to that of MLD, which generally has greater time complexity [72]. We further note that in Ref. [54], MED is only discussed as a subroutine for MLD to decode concatenated codes. Here, on the other hand, we present a general algorithm for decoding generic GKP codes from the lattice perspective.

B. Closest point search problem
The problem in Eq. 57 is known as the closest point search problem, or the closest vector problem, in the mathematical community, and can be stated formally as follows. For a given lattice Λ ⊂ R^n, find a lattice point u ∈ Λ that is closest to an input vector t ∈ R^n: χ_t(Λ) ≡ arg min_{u∈Λ} ‖t − u‖. In the case of a tie, χ_t(Λ) is chosen arbitrarily from the closest points. Equivalently, if the lattice is generated by the matrix M, then χ_t(Λ(M)) = M^T b*, with b* = arg min_{b∈Z^n} ‖t − M^T b‖, where b* gives the coordinates of the closest point in the basis of M. We emphasize that although the closest point problem is described with respect to M in Eq. 59-60, for a GKP lattice M_gkp the closest point problem is to be solved for its symplectic dual M⊥_gkp, instead of the lattice itself, per Eq. 57. Although the closest point problem has been known to be NP-hard for decades [58,59], due to its many applications in diverse areas [73-79], there have been many attempts to reduce the search time for exact solutions [67,79-84] or approximate solutions [60-62]. Here we discuss an exact algorithm based on Ref. [67]; more details of the algorithm are presented in App. C. Note that we are interested in an exact algorithm (despite its exponential time cost) since we would like to use it to understand generic, unstructured, and small-sized GKP codes, as well as to benchmark efficient decoders for structured GKP codes.
The algorithm starts by preprocessing the generator matrix M into a lower-triangular form L via the transformation in Eq. 61, where R is a unimodular matrix and Q is an orthogonal matrix. As described in Sec. III C, matrices that differ by a unimodular matrix multiplied from the left generate the same lattice, and because the orthogonal transformation can be regarded as rotating the basis vectors, M and L generate identical lattices up to a rotation. The transformation in Eq. 61 is also known as lattice reduction, which is a process of selecting a good basis for speeding up the closest point search. The Lenstra-Lenstra-Lovász (LLL) algorithm and the Korkine-Zolotareff (KZ) algorithm are two widely used techniques for lattice reduction. They have advantages in different scenarios, which will be discussed further in Sec. X and App. C. The next step is to find the closest point in the new basis L. For that, we first note that the closest point in Λ(M) can be recovered from that in Λ(L) by undoing the rotation; hence, the problem reduces to finding the closest point χ_{t′}(Λ(L)) for the rotated target t′ ≡ Qt. The basic idea of the search algorithm is to view the n-dimensional lattice as a stack of (n − 1)-dimensional sublattices, and to search these sublattices recursively. For instance, a 2D lattice can be viewed as a collection of 1D lattices, as shown in Fig. 1. We can similarly decompose the input vector into components parallel and perpendicular to the sublattices. The search proceeds with the Schnorr-Euchner strategy [85], which sorts the sublattices in ascending order of their vertical distances to t′. Denoting the nearest sublattice by Λ′, we first identify the closest point within Λ′ and then proceed recursively.
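As a concrete, if naive, illustration of the closest point problem, the sketch below replaces the Schnorr-Euchner recursion with a brute-force enumeration over a small box of integer coefficients around Babai's rounding estimate. The function and variable names are ours, not those of App. C, and the fixed search radius is an assumption that suffices only for well-conditioned bases.

```python
import itertools

import numpy as np

def closest_point_bruteforce(M, t, radius=2):
    """Exact closest-point search by naive enumeration.

    Rows of M are the lattice basis vectors, matching the row
    convention used for generator matrices in this paper.  We
    enumerate integer coefficient vectors in a box of half-width
    `radius` around Babai's rounding estimate.
    """
    t = np.asarray(t, dtype=float)
    # Babai rounding: solve t = M^T c approximately and round c.
    c0 = np.rint(np.linalg.solve(M.T, t)).astype(int)
    best, best_dist = None, np.inf
    for offs in itertools.product(range(-radius, radius + 1), repeat=len(c0)):
        u = (c0 + np.asarray(offs)) @ M  # candidate lattice point u = c^T M
        dist = np.linalg.norm(u - t)
        if dist < best_dist:
            best, best_dist = u, dist
    return best

# Hexagonal lattice in 2D: basis vectors (1, 0) and (1/2, sqrt(3)/2).
M = np.array([[1.0, 0.0], [0.5, np.sqrt(3) / 2]])
u = closest_point_bruteforce(M, np.array([0.4, 0.4]))  # -> [0.5, 0.866...]
```

The Schnorr-Euchner search of App. C reaches the same answer while visiting far fewer candidates, which is what makes it usable up to about ten modes.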

V. SEARCHING FOR OPTIMIZED GKP CODES
In the previous sections, we illustrated that a GKP code can be viewed as a symplectic integral lattice, and that decoding the GKP code is equivalent to finding the closest point in the lattice. In this section, we will use this machinery to analyze several known concatenated GKP codes and to numerically search for optimized GKP codes.

A. Analysis of known concatenated GKP codes
Here, we analyze known concatenated GKP codes. As a warm-up, we start with the [[5,1,3]] and [[7,1,3]] qubit codes, whose stabilizers are shown in Tab. I. We form concatenated GKP codes by concatenating them with the hexagonal GKP code. The resulting codes have five and seven modes respectively, and upon solving Eq. 31, we find that they are balanced GKP codes with distances equal to d = 3^{1/4}√(2π) ≈ 3.2989. These concatenated GKP codes are balanced because their stabilizer groups are invariant under the cyclic transformation of the Pauli operators, X̄ → Ȳ → Z̄ → X̄, which is evident from Tab. I, and the hexagonal GKP code is itself balanced with distance 3^{−1/4}√(2π). In fact, for a concatenated GKP code with a balanced base GKP code, its distance is given by [55] d = √(d_0) d_base, where d_0 and d_base are the distances of the qubit stabilizer code and the base GKP code respectively. In a similar spirit, we can form another balanced GKP code by concatenating the [[5,1,3]] code with the D_4 code; the resulting code has ten modes with distance 3^{1/2}√(2π) ≈ 4.3416. In contrast, the concatenation of the d_0 = 3 surface code with the hexagonal GKP code is not a balanced code because its stabilizer group is not invariant under the cyclic transformation in Eq. 64. Nevertheless, Eq. 65 still holds, and the surface-hexagonal GKP code has distance 3^{1/4}√(2π). As a comparison, we plot the distances of these four concatenated GKP codes in Fig. 2(a).
In addition to using the code distance, we can also quantify the error correction capability of a GKP code by calculating its fidelity, subject to the independent and identically distributed Gaussian shift errors assumed in Eq. 53. More specifically, we use the Monte Carlo method: we sample 10^6 random shifts from the Gaussian distribution N(0, σ²), and then use the closest point decoder and Eq. 58 to determine the probability that the logical information is preserved. The number of samples is chosen such that statistical fluctuations are negligible. In Fig. 2(b), we show the fidelities of the four concatenated GKP codes discussed above with noise strength σ ≈ 0.5143. We notice that the fidelity of the d_0 = 3 surface-hexagonal code is similar to that of the [[5,1,3]]-hexagonal and [[7,1,3]]-hexagonal codes, all of which are worse than the [[5,1,3]]-D_4 code. It is possible to improve the fidelity of the surface-GKP code by considering larger distances such as d_0 = 5, but the runtime of finding the closest point increases exponentially with d_0² (i.e., the number of modes in the surface-GKP code) since we are using a general-purpose closest point decoder here. This poses a serious challenge for decoding even a single syndrome of a large-distance surface-GKP code, not to mention that one has to repeat the procedure for 10^6 samples in order to estimate its fidelity. In Sec. IX, we will devise an exact and polynomial-time closest point decoder that is tailored to decode surface-GKP codes much more efficiently. With that, we will benchmark the fidelity of surface-GKP codes with larger distances and different noise strengths.

B. Generalizations of the tesseract and D4 codes
In Sec. III H, we demonstrated two lattices, the tesseract lattice and the D_{2N} lattice, and showed that both of them can be used to encode a logical qubit. For the case with two modes (N = 2), the D_4 lattice outperforms the tesseract lattice because d_{D_4} = 2^{1/4} d_tess. Despite that, the distance of the D_{2N} code with N > 2 is the same as that for N = 2, since the distance of its logical Ȳ operator is fixed to be √(2π) independent of N. Interestingly, one could generalize the tesseract lattice to higher dimensions with larger code distances than the D_{2N} code. In particular, since the tesseract lattice corresponds to a 2-qubit repetition code, we consider the concatenation of the N-qubit XX repetition code with the rectangular GKP code in Eq. 42 with η = N^{1/4}. We shall denote the resulting code as rep-rec_N, which corresponds to a 2N-dimensional lattice. The distance of the rep-rec_N code can be understood in the following way. Since the XX stabilizers cannot detect the logical X errors, the distance d_X^{rep-rec_N} = N^{1/4}√π, which is the X distance of the rectangular code when η = N^{1/4}. For the Z̄ operator, it corresponds to a concatenation of N copies of (1/√2)(0, N^{−1/4})^T, the shift vector corresponding to the Z̄ operator of the rectangular code. Because the distance is the length of the Z̄ operator multiplied by √(2π), we arrive at the distances for the rep-rec_N code: d_X^{rep-rec_N} = d_Z^{rep-rec_N} = N^{1/4}√π and d_Y^{rep-rec_N} = N^{1/4}√(2π). Here we used the fact that (d_Y)² = (d_X)² + (d_Z)², because the vectors for the logical X̄ and Z̄ operators are orthogonal to each other. From Eq. 66, we have d_{rep-rec_N} = N^{1/4}√π. In order to balance the distances of the different logical operators, and increase the code distance further, we consider concatenating two copies of the rep-rec_N codes with the YY stabilizer. We shall denote the resulting code as YY-rep-rec_N, where N is an even number. Because the YY stabilizer detects both the logical X̄ and Z̄ errors, but not the logical Ȳ error, the distances for the logical X̄ and Z̄ are enhanced by a factor of √2, so that all three logical distances become equal. The distances for the rep-rec_N and YY-rep-rec_N codes are confirmed via explicitly solving Eq. 31, and the results are shown in Fig. 2(a). The green and red solid lines indicate their respective scalings with respect to the number of modes. We note that the YY-rep-rec_N code always has an even number of modes, and its distance is always larger than that of the rep-rec_N code with the same number of modes, as expected.
We calculate the fidelities of the two codes with the same Monte Carlo method described in Sec. V A, and the results are shown in Fig. 2(b). For σ ≈ 0.5143, the fidelity of the six-mode YY-rep-rec_N code is comparable to that of the [[7,1,3]]-hexagonal code, and could improve beyond that of the d_0 = 3 surface-hexagonal code with an increasing number of modes; on the other hand, the fidelity of the rep-rec_N code saturates at around 0.82. We emphasize that the rep-rec_N and YY-rep-rec_N codes are different from the biased GKP repetition code introduced in Ref. [86], which exhibits a threshold of σ* ≈ 0.599. An N-mode biased GKP repetition code is constructed by concatenating N one-mode rectangular GKP codes with an N-qubit repetition code, with the aspect ratio of the inner GKP code optimized for a given set of N and σ. Although this is similar to our construction, the aspect ratio for the inner GKP codes of both the rep-rec_N and YY-rep-rec_N codes is fixed to be η² = N^{1/2}. We choose to fix the aspect ratio for the rep-rec_N code because it is a natural generalization of the two-mode tesseract code, which has aspect ratio √2. Hence, from the lattice perspective, the lattices for these two GKP codes can be viewed as higher-dimensional generalizations of the tesseract lattice.
In Sec. VIII, we will demonstrate that the rep-rec_N and YY-rep-rec_N codes can be decoded in runtime that is linear in the number of modes. We also perform a more detailed fidelity analysis for larger instances of these two families of codes. Our results show that these two families of codes do not exhibit thresholds, and that increasing the number of modes does not necessarily improve the fidelity for the range of noise studied, in contrast to the surface-GKP code or the biased GKP repetition code in Ref. [86]. This is evident for the rep-rec_N code, as shown in Fig. 2(b), and indicates that the threshold of the rep-rec_N code, if any, is tied to the optimization of the biasing of the inner GKP code.

C. Numerical search for optimized GKP codes
We have studied several families of GKP codes, which can all be understood as a concatenation of a certain qubit stabilizer code with a base GKP code.The list of concatenated GKP codes can grow further by including, for example, Shor's nine qubit code [4] or Bacon-Shor code [87], which can be similarly analyzed from a lattice perspective.Besides the concatenated GKP codes, however, viewing GKP codes as lattices allows us to numerically search for optimized GKP codes with good metrics, such as code distance.
Recall from Sec. III F that an arbitrary GKP code can be understood as a code that results from applying a Gaussian unitary operator to the square lattice GKP code. The resultant generator matrix reads M = M_sq(d) S^T, where M_sq(d) is defined in Eq. 35, and S is the (2N) × (2N) symplectic matrix for the Gaussian unitary. Here we focus on GKP codes encoding a single qubit in N modes (i.e., d_1 = 2 and d_2 = ... = d_N = 1), and aim to optimize the symplectic matrix such that the resultant code has as large a code distance as possible. For this purpose, we consider the Bloch-Messiah decomposition for a general symplectic matrix, S = O_1 Z O_2, where O_1 and O_2 are orthogonal symplectic matrices, and Z = diag(e^{−r_1}, e^{r_1}, ..., e^{−r_N}, e^{r_N}) with real parameters (r_1, ..., r_N). Here the diagonal matrix Z represents a set of one-mode squeezing operations, and O_1 and O_2 correspond to the unitaries that preserve the total excitations in all the modes, such as beam splitters. This can be seen by noticing that such transformations preserve the total excitation number, and the orthogonal symplectic matrices can be parameterized as in Eq. 69, where Y^T = Y is an N × N real symmetric matrix and X = −X^T is an N × N real anti-symmetric matrix. Upon combining Eq. 68 and 69, we see that a generic GKP code has a generator matrix of the form M = M_sq O_2^T Z O_1^T. But since O_1^T is an orthogonal matrix which only rotates the basis vectors, the GKP code is equivalent to the lattice generated by M = M_sq O_2^T Z, assuming the underlying noise model is isotropic (i.e., invariant under rotation). Hence a general N-mode GKP code can be parameterized by N² + N real parameters: N for the squeezing parameters and the rest for the orthogonal symplectic matrix. We will optimize over this set of parameters, for a given number of modes, to find good GKP codes. In Fig. 2(a), we show the distances of the numerically optimized GKP codes (purple squares) as a function of the number of modes N. Each data point is obtained by initializing 10^4 random symplectic matrices and performing gradient descent with respect to the negative distance, followed by selecting the code with the largest distance. We further apply the same Monte Carlo method as described in Sec. V A to calculate the fidelity of the numerically optimized codes, as shown in Fig. 2(b). We remark that one could instead use the fidelity as the cost function when searching for optimized codes. However, this requires one to perform Monte Carlo sampling at each iteration step of the optimization. Since finding the closest point for a general lattice incurs significant time overhead, particularly for large lattices, this approach is inefficient in practice. We believe the distance is a reasonable indicator of the error correction capability of a GKP code, particularly in the low noise regime.
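The parameterization above can be sketched numerically. The snippet below samples a random symplectic matrix in Bloch-Messiah form; it works in the qqpp ordering (whereas the Z written in the text interleaves the quadratures mode by mode), embeds a Haar-random unitary U = A + iB as the orthogonal symplectic block matrix [[A, B], [−B, A]], and checks the symplectic condition. The helper names and the squeezing range are our own illustrative choices.

```python
import numpy as np

def random_orthogonal_symplectic(N, rng):
    """Orthogonal symplectic matrix in the qqpp ordering, built by
    embedding a Haar-random N x N unitary U = A + iB as [[A, B], [-B, A]]."""
    G = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    U, _ = np.linalg.qr(G)  # QR of a complex Gaussian yields a random unitary
    A, B = U.real, U.imag
    return np.block([[A, B], [-B, A]])

def random_symplectic(N, rng, rmax=0.5):
    """Bloch-Messiah form S = O1 Z O2; in the qqpp ordering the one-mode
    squeezers become Z = diag(e^{-r_1},...,e^{-r_N}, e^{r_1},...,e^{r_N})."""
    r = rng.uniform(-rmax, rmax, size=N)
    Z = np.diag(np.concatenate([np.exp(-r), np.exp(r)]))
    return random_orthogonal_symplectic(N, rng) @ Z @ random_orthogonal_symplectic(N, rng)

N = 3
# Symplectic form in the qqpp ordering.
Omega = np.block([[np.zeros((N, N)), np.eye(N)],
                  [-np.eye(N), np.zeros((N, N))]])
S = random_symplectic(N, np.random.default_rng(7))
# S preserves the symplectic form: S^T Omega S = Omega.
```

Feeding such samples through M = M_sq S^T (and dropping the final rotation, as argued above) gives the random initial points used in the optimization.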
For the case of two modes, the optimizer finds the rep-rec_N code with N = 2, which is equivalent to the D_4 code. The D_4 lattice has been shown to be the best known quantizer in the context of classical error correction, and it supports the densest lattice packing in four dimensions [57]. Hence we believe that the optimizer has found the optimal GKP code for the case of two modes. However, we emphasize that the optimized codes found need not be optimal for N > 2. In particular, certain optimized codes with N ≥ 3 have distances shorter than that of the YY-rep-rec_N code. This may be attributed to the fact that only 10^4 random ansatze have been used in the search, and we expect that the distance of the optimized code will approach, or even exceed, that of the YY-rep-rec_N code if more random initial points are used. Surprisingly, the optimized codes found in fact have comparable or better fidelities compared to the YY-rep-rec_N code. More interestingly, for the case with nine modes, the optimizer finds a GKP code that has better distance and fidelity than the d_0 = 3 surface-hexagonal GKP code. Similarly, the optimized three-mode and seven-mode codes outperform the rep-rec_N code and the [[7,1,3]]-hexagonal code respectively, in both the distance and fidelity metrics. We show the generator matrices for the optimized codes with N = 3, N = 7 and N = 9 in App. F. Unfortunately, we have not been able to understand the structure of these numerically optimized codes, which we leave for future work.
For optimized codes with an even number of modes, we notice that their distances are generally smaller than those of the YY-rep-rec_N codes. On the other hand, we remark that the fidelities of the optimized codes are generally better than those of the YY-rep-rec_N codes for larger numbers of modes. This is evident in Fig. 2(b), where we compare their fidelities at σ ≈ 0.5143. In particular, we notice that although the optimized code with N = 4 has a smaller distance than the YY-rep-rec_N code with the same number of modes, the two have almost the same fidelity. We leave the detailed study of this family of optimized codes to future work.
In summary, we have shown that general GKP codes can be viewed as parameterized lattices, and that in principle one can find good GKP codes by numerically optimizing their distances. In fact, we found three code instances, with N = 3, N = 7, and N = 9, which outperform the known concatenated GKP codes with the same number of modes in both the code distance and fidelity metrics. We have also illustrated two generalizations of the tesseract code, namely the rep-rec_N and the YY-rep-rec_N codes, which exhibit good code distances and fidelities. For the rest of the paper, we will switch gears and focus on efficient closest point decoders for these two codes as well as the surface-GKP code.

VI. EFFICIENT CLOSEST POINT DECODER FOR STRUCTURED GKP CODES
In this section we describe several techniques for decoding lattices with well-defined structures [57,66]. For convenience, we first rephrase the closest point problem in Eq. 59 for a generic set of points: for any discrete set of points Σ ⊂ R^n and a given target t ∈ R^n, find the closest point χ_t(Σ) ≡ arg min_{u∈Σ} ‖t − u‖. In the case of a tie, χ_t(Σ) is chosen arbitrarily from the closest points.

A. Decoding a discrete set of points
We note that Σ in Eq. 72 need not be a lattice, and it need not have regular patterns. Nevertheless, we can still decompose the set Σ into several subsets, or apply a shift to all the points in the set. In particular, if a discrete set Σ ⊂ R^n can be decomposed into the union of k discrete sets as Σ = ∪_{i=1}^k Σ_i, then we have χ_t(Σ) = χ_t({χ_t(Σ_1), ..., χ_t(Σ_k)}), which suggests that we can find the closest point in each subset Σ_i, compare their distances to t, and select the one with the shortest distance. This works because the closest point in Σ must lie in some subset Σ_i, and it is by definition as close or closer than the closest points from all the other subsets. Further, for any discrete set of points Σ ⊂ R^n, we can obtain a new set of points by shifting all the points by a vector r, denoted as r + Σ. The closest point in the set of shifted points can be obtained as χ_t(r + Σ) = χ_{t−r}(Σ) + r. To prove this, let g = χ_t(Σ); if all the points are shifted by r, then the closest point to the shifted target t + r is shifted by the same amount, i.e., χ_{t+r}(r + Σ) = g + r = χ_t(Σ) + r. Redefining t′ = t + r, we have χ_{t′}(r + Σ) = χ_{t′−r}(Σ) + r, as desired.
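The two identities above can be checked directly on a toy point set. The helper `closest_in_set` and the particular set below are our own illustration, not part of the decoders used later in the paper.

```python
import numpy as np

def closest_in_set(points, t):
    """Closest point to t in a finite discrete set (ties broken by order)."""
    points = np.asarray(points, dtype=float)
    return points[np.argmin(np.linalg.norm(points - t, axis=1))]

Sigma = np.array([(0, 0), (1, 0), (0, 1), (2, 2)], dtype=float)
r = np.array([0.5, -0.25])
t = np.array([0.9, 0.2])

# Decoding the shifted set r + Sigma directly ...
direct = closest_in_set(Sigma + r, t)
# ... agrees with decoding Sigma at the shifted target t - r and then
# shifting the answer back, which is the content of Eq. 74.
via_shift = closest_in_set(Sigma, t - r) + r
```

The decomposition identity in Eq. 73 corresponds to splitting `Sigma` into subsets, decoding each, and keeping the nearest of the per-subset answers.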

B. Decoding direct sums of lattices
Suppose we have a generator matrix M which is a direct sum of several square matrices, M = ⊕_{i=1}^k M_i; then the corresponding lattice is also a direct sum of several sublattices, Λ(M) = Λ(M_1) ⊕ ... ⊕ Λ(M_k). Such a direct sum of lattices can be decoded by simply decoding each orthogonal projection of t onto the space spanned by each component lattice, followed by combining the results. Formally, we have χ_t(Λ(M)) = χ_{π_1(t)}(Λ_1) ⊕ ... ⊕ χ_{π_k(t)}(Λ_k), where π_i denotes the orthogonal projection onto the space spanned by Λ_i. In practice, we select the corresponding components t_i of t and decode them with M_i, and then assemble the results together to arrive at χ_t(Λ).
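The blockwise procedure of Eq. 75 can be sketched as below; the per-block decoders and the toy direct sum Z² ⊕ 2Z are our own illustrative choices.

```python
import numpy as np

def decode_direct_sum(decoders, dims, t):
    """Closest point in a direct-sum lattice, block by block (Eq. 75).

    decoders[i] returns the closest point in the i-th component lattice
    and dims[i] is that component's dimension; the per-block results are
    concatenated to form the overall closest point.
    """
    t = np.asarray(t, dtype=float)
    out, start = [], 0
    for decode, dim in zip(decoders, dims):
        out.append(decode(t[start:start + dim]))
        start += dim
    return np.concatenate(out)

# Toy example: Z^2 (+) 2Z, i.e. M = I_2 (+) (2).
decode_Z = lambda v: np.rint(v)           # componentwise rounding for Z^n
decode_2Z = lambda v: 2 * np.rint(v / 2)  # rescale, round, rescale back
u = decode_direct_sum([decode_Z, decode_2Z], [2, 1], np.array([0.4, 1.6, 2.9]))
```

This is exactly the pattern used later to decode the rep-rec code, whose dual lattice splits into scaled Z_N and D*_N blocks.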

C. Decoding union of cosets
Consider a lattice Λ and a set of vectors r_i (i = 0, ..., l − 1); we can construct a union of cosets of Λ as Σ = ∪_{i=0}^{l−1} (r_i + Λ). Here we fix r_0 = 0 such that Λ ⊂ Σ, and the other coset vectors r_i are real-valued vectors in R^n. For a lattice point u ∈ Λ, Σ contains u together with its translations by all the coset vectors. Upon combining Eq. 73-74, the union Σ can be decoded as χ_t(Σ) = χ_t({r_i + χ_{t−r_i}(Λ)}_{i=0}^{l−1}). We note that although Σ need not be a lattice, the Euclidean dual of a lattice can be treated as a union of cosets with respect to the original lattice, exactly as shown in Eq. 76. Hence, if a lattice can be decoded efficiently, its Euclidean dual can also be decoded efficiently with Eq. 77, provided only a handful of cosets need to be decoded.
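Eq. 77 translates into a short routine: decode the base lattice once per coset vector at a shifted target, shift back, and keep the best candidate. The helper names and the toy example (Z² together with its half-integer coset, which is the D_2* lattice discussed in Sec. VII) are ours.

```python
import numpy as np

def decode_union_of_cosets(decode_lattice, coset_vectors, t):
    """Closest point in a union of cosets of a lattice (Eq. 77).

    For each coset vector r_i, decode the base lattice at the shifted
    target t - r_i, shift the result back by r_i, and keep the candidate
    nearest to t.
    """
    t = np.asarray(t, dtype=float)
    best, best_dist = None, np.inf
    for r in coset_vectors:
        u = decode_lattice(t - r) + r
        dist = np.linalg.norm(u - t)
        if dist < best_dist:
            best, best_dist = u, dist
    return best

# Toy example: Z^2 together with its half-integer coset, i.e. the set of
# points whose entries are all integers or all half-integers (this is D_2*).
decode_Z = lambda v: np.rint(v)
cosets = [np.zeros(2), np.full(2, 0.5)]
u = decode_union_of_cosets(decode_Z, cosets, np.array([0.4, 0.6]))
```

The runtime is the base-lattice decoding cost multiplied by the number of cosets, which is why the construction is only efficient when that number stays small.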
As will be shown in App. E, concatenated GKP codes, such as the surface-GKP code, can be viewed as unions of cosets, where the group elements of the stabilizer group play the role of the coset vectors r_i in Eq. 76. However, because the size of the stabilizer group generally grows exponentially with the number of modes, a naive application of Eq. 77 would require exponentially many cosets to be decoded. In Sec. IX, we overcome this difficulty by combining Eq. 77 with an MWPM algorithm, which yields a polynomial-time decoder for the surface-GKP code.

D. Decoding glue lattices
A glue lattice can be regarded as a union of cosets of a direct sum of lattices, Σ = ∪_{i=0}^{l−1} (r_i + Λ_1 ⊕ ... ⊕ Λ_m). Since r_0 = 0, the glue lattice contains the direct sum of the m lattices as a sublattice. In this context, the vectors r_i are called gluing vectors. Combining Eq. 75 and 77, the closest point in the glue lattice is χ_t(Σ) = χ_t({r_i + χ_{t−r_i}(Λ_1 ⊕ ... ⊕ Λ_m)}_{i=0}^{l−1}), where each direct sum is decoded blockwise. Glue lattices encompass concatenated GKP codes, including those obtained through Construction A [54]. In Sec. VIII B, we will show that the YY-rep-rec_N code can be viewed as a glue lattice, and Eq. 79 plays a key role in decoding the code in linear time.
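Combining the two previous techniques gives a decoder for glue lattices in the sense of Eq. 79. Below, D_2 is written as the glue lattice (2Z ⊕ 2Z) ∪ ((1,1) + 2Z ⊕ 2Z); the helper names are our own sketch, not the paper's algorithms.

```python
import numpy as np

def decode_glue_lattice(block_decoders, dims, glue_vectors, t):
    """Closest point in a glue lattice (Eq. 79): for every gluing vector
    r_i, decode the direct-sum sublattice blockwise at t - r_i, shift
    back by r_i, and keep the candidate nearest to t."""
    t = np.asarray(t, dtype=float)

    def decode_sum(v):
        out, start = [], 0
        for decode, dim in zip(block_decoders, dims):
            out.append(decode(v[start:start + dim]))
            start += dim
        return np.concatenate(out)

    best, best_dist = None, np.inf
    for r in glue_vectors:
        u = decode_sum(t - r) + r
        dist = np.linalg.norm(u - t)
        if dist < best_dist:
            best, best_dist = u, dist
    return best

# Toy example: D_2 = (2Z (+) 2Z) U ((1, 1) + 2Z (+) 2Z).
decode_2Z = lambda v: 2 * np.rint(v / 2)
glue = [np.zeros(2), np.ones(2)]
u = decode_glue_lattice([decode_2Z, decode_2Z], [1, 1], glue,
                        np.array([0.9, 0.2]))
```

The cost is (number of gluing vectors) × (sum of the per-block decoding costs), which stays linear for the YY-rep-rec code since it has only two cosets.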

VII. LINEAR TIME DECODER FOR Dn LATTICES AND THEIR EUCLIDEAN DUALS
In this section, we will present the linear-time decoders for the D_n lattices and their Euclidean duals, denoted D*_n. These two lattices will serve as building blocks for the more complex GKP lattices in Sec. VIII. As illustrated by Conway and Sloane in Ref. [63], the decoding of D_n and D*_n turns out to be very straightforward once we understand the simplest case, namely decoding the Z^n lattice, a bit more deeply.

A. Linear time decoder for Zn lattices
The n-dimensional Z^n lattice is the integer lattice with generator M_{Z^n} = I_n. The closest point in the Z^n lattice to an arbitrary point t ∈ R^n is given by χ_t(Z^n) = (⌊t_1⌉, ..., ⌊t_n⌉)^T. Here ⌊t⌉ denotes the closest integer to t ∈ R, and in case of a tie, the integer with the smallest absolute value is chosen. This algorithm is presented as ClosestPointZn in Alg. 1.

Algorithm 1: ClosestPointZn(t)
Input: The error syndrome t ∈ R^n;
Output: The optimal integer vector b ∈ Z^n;
b ← (⌊t_1⌉, ..., ⌊t_n⌉)

For decoding the D_n lattice, finding the nearest point in Z^n is not sufficient: we also have to find the second nearest point to t. For that, we introduce the function w(t), the second nearest integer to a real number t, defined by w(t) = ⌊t⌉ + 1 if t ≥ ⌊t⌉ and w(t) = ⌊t⌉ − 1 if t < ⌊t⌉. In Ref. [63], w(t) is called rounding t the wrong way, and it is the key to finding the second closest point in the Z^n lattice for a given point t. The idea is to find the component of t, say t_k, that is furthest from its closest integer, and to round it the wrong way. Mathematically, let χ′_t(Z^n) denote the second nearest point for the given t; then χ′_t(Z^n) agrees with χ_t(Z^n) in all components except the k-th, which is replaced by w(t_k), where k = arg max_i |t_i − ⌊t_i⌉|.
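A minimal sketch of ClosestPointZn together with the "wrong way" rounding used below. Note that numpy's `rint` breaks half-integer ties toward the even integer, so the ties are fixed by hand to match the smallest-absolute-value convention stated above; the function names are ours.

```python
import numpy as np

def closest_point_Zn(t):
    """ClosestPointZn: round each component to the nearest integer; on a
    half-integer tie, pick the integer with the smallest absolute value."""
    t = np.asarray(t, dtype=float)
    b = np.rint(t)  # NB: np.rint breaks .5 ties toward even, so fix them
    ties = np.abs(t - np.trunc(t)) == 0.5
    b[ties] = np.trunc(t[ties])  # round half-integers toward zero
    return b

def second_closest_point_Zn(t):
    """Second-closest point in Z^n: find the component furthest from its
    nearest integer and round it 'the wrong way' (the function w above)."""
    t = np.asarray(t, dtype=float)
    b = closest_point_Zn(t)
    k = int(np.argmax(np.abs(t - b)))  # worst-decoded component
    step = np.sign(t[k] - b[k])
    b2 = b.copy()
    # If t_k is exactly an integer, either neighbor is second nearest.
    b2[k] = b[k] + (step if step != 0 else 1.0)
    return b2
```

For example, `closest_point_Zn([0.4, -1.6])` gives (0, −2), and the second-closest point flips the first component to 1 since both components are equally far from their nearest integers.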

B. Linear time decoder for Dn lattices
In order to find the closest point in the D_n lattice to a given t, recall that D_n is the sublattice of Z^n in which the sum of the components of any lattice point is even. To identify χ_t(D_n), we first find the closest and second closest points, χ_t(Z^n) and χ′_t(Z^n), in the Z^n lattice. Since the two differ in exactly one component, their component sums differ by one, and hence one and only one of them lies in the D_n lattice. The closest point in the D_n lattice is therefore whichever of χ_t(Z^n) and χ′_t(Z^n) has an even component sum. Since both χ_t(Z^n) and χ′_t(Z^n) can be found in linear runtime, D_n can be decoded in runtime linear in n. The decoder is presented as ClosestPointDn in Alg. 2. In App. D, we generalize this decoder to D_n lattices with different lattice spacings in different directions.
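The parity argument above can be sketched in a few lines; this is our rendering of the Conway-Sloane procedure (cf. Alg. 2), assuming the tie-breaking conventions stated earlier.

```python
import numpy as np

def closest_point_Zn(t):
    """Componentwise rounding; half-integer ties go toward zero."""
    t = np.asarray(t, dtype=float)
    b = np.rint(t)
    ties = np.abs(t - np.trunc(t)) == 0.5
    b[ties] = np.trunc(t[ties])
    return b

def closest_point_Dn(t):
    """Closest point in D_n = {u in Z^n : sum(u) even}.

    Of the closest and second-closest Z^n points, exactly one has an
    even coordinate sum, and that one is the closest point in D_n.
    """
    t = np.asarray(t, dtype=float)
    b = closest_point_Zn(t)
    # Round the worst-decoded component the wrong way to get the
    # second-closest Z^n point.
    k = int(np.argmax(np.abs(t - b)))
    step = np.sign(t[k] - b[k])
    b2 = b.copy()
    b2[k] = b[k] + (step if step != 0 else 1.0)
    return b if int(round(b.sum())) % 2 == 0 else b2
```

Both candidates are computed in a single pass over the components, so the total cost is linear in n.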

C. Linear time decoder for D * n lattices
The Euclidean dual lattice Λ* is defined to be the set of all points that have integer inner products with all points in the original lattice Λ. In other words, Λ* = {x ∈ R^n : x^T u ∈ Z for all u ∈ Λ}. The Euclidean dual lattice can be generated by M* = (M^T)^{−1} [55], which can be seen by noticing that M*(M)^T = I_n. In particular, the Euclidean dual of the D_n lattice is the union of two cosets of the Z^n lattice, D*_n = Z^n ∪ (r_1 + Z^n), where r_0 = 0 and the n-component vector r_1 is defined as r_1 = (1/2, ..., 1/2)^T. This can be seen by noticing that r_1^T u ∈ Z for all u ∈ D_n, and that det(M_{D*_n}) = 1/det(M_{D_n}) = 1/2 matches the volume of the fundamental cell of the union of the two cosets. Hence, with Eq. 77, we have χ_t(D*_n) = χ_t({χ_t(Z^n), r_1 + χ_{t−r_1}(Z^n)}), which can be found in runtime proportional to n. In particular, this suggests decoding Z^n twice, with t and t − r_1 respectively, and then picking the result that is closest to t. The decoder for D*_n is presented in Alg. 3.

VIII. LINEAR TIME DECODERS FOR THE rep-rec AND YY-rep-rec CODES

In this section, we present the linear-time decoders for the rep-rec_N and YY-rep-rec_N codes introduced in Sec. V B, which are based on the strategies in Sec. VI-VII. We will use these decoders to benchmark the fidelities of these two codes for different numbers of modes and noise strengths.

A. Linear time decoder for the rep-rec_N code
Taking N = 3 as an example, we can notice that for the p_i subspace, the generator matrix is the identity matrix multiplied by a factor of √2/3^{1/4}; hence it generates a scaled Z_N lattice. Similarly, for the q_i subspace, the sums of the components of the basis vectors are all even numbers multiplied by a factor of 3^{1/4}/√2, which generates a scaled D_N lattice. More generally, with the qqpp ordering, the generator matrix for the rep-rec_N code reads M^{(qqpp)}_{rep-rec_N} = (N^{1/4}/√2) M_{D_N} ⊕ (√2/N^{1/4}) I_N, which is a direct sum of scaled D_N and Z_N lattices.
To decode the rep-rec_N code, we need to consider its logical operators, which turn out to generate the direct sum of the Euclidean duals of the Z_N and D_N lattices. To see that, recall that in the qqpp convention the symplectic form is defined in Eq. 4, such that the symplectic dual generator is given in Eq. 91. Here Z*_N = Z_N and D*_N denote the Euclidean dual lattices of Z_N and D_N respectively, and both lattices can be decoded in linear time as demonstrated in Sec. VII. With Eq. 75, the closest point in Λ(M^{(qqpp)⊥}_{rep-rec_N}) is simply the assembly of those from the Z_N and D*_N lattices, and hence the rep-rec_N code can be decoded in runtime proportional to 2N. We present the closest point decoder for the rep-rec_N code in Alg. 4.

In order to characterize the error correction capability of the code, we calculate the fidelity of the rep-rec_N code with the same Monte Carlo method and Gaussian noise distribution N(0, σ²) as described in Sec. V A. In Fig. 3(a), we show the fidelity of the rep-rec_N code as a function of the noise strength σ and the number of modes, up to N = 30. One immediately notices a band-like feature, which indicates that increasing the number of modes need not improve the fidelity of the rep-rec_N code. In particular, in the low noise regime with σ = 0.4, we see that upon increasing the number of modes, the fidelity reaches its maximum at N = 11, beyond which the fidelity starts to decrease. This is confirmed in the top-right inset, where we show the infidelity for σ between 0.4 and 0.4898. Upon increasing the noise strength, we see that the code with N = 6 has the highest fidelity at σ = 0.5714, and the code with N = 30, the largest number of modes studied here, has the lowest fidelity. This is similar to what we found in Fig. 2, where the fidelity of the rep-rec_N code attains its maximum value of 0.82 for N = 7 at σ ≈ 0.5143. In Fig. 3(a), in the high noise regime with σ = 0.8, we find that the one-mode rep-rec code outperforms the other codes, as expected: the noise rate is high enough that increasing the number of modes only degrades the fidelity. Our results indicate that the rep-rec_N code does not exhibit a noise threshold below which increasing the number of modes consistently improves its error correction capability. This is in sharp contrast to the biased GKP repetition code introduced in Ref. [86]; as discussed in Sec. V B, the difference can be attributed to the fact that we have fixed the aspect ratio of the inner rectangular GKP code to be η² = N^{1/2}. For a biased GKP repetition code with a generic aspect ratio, the generator matrix in the qqpp ordering can be obtained by substituting η for N^{1/4} in Eq. 90, hence it can also be decoded with our closest point decoder.
In Fig. 3(b), we compare the runtime of the linear-time decoder to that of the exponential-time closest point decoder presented in Sec. IV B, where each data point is averaged over all the samples for all the values of σ considered. The time overhead of the exponential-time decoder increases rather rapidly compared to the linear-time decoder, as expected, and the difference is around five orders of magnitude for the case of ten modes. In the inset, we confirm that the runtime of the linear-time decoder increases linearly with the number of modes.
There is an important remark before we proceed. Recall from Eq. 57 that, for both the exponential-time and linear-time decoders, we are finding the closest lattice point in the symplectic dual lattice to a given η(s), where s is the syndrome measurement result. From its definition in Eq. 56, η(s) can be obtained from the syndrome s in linear time if M⊥ is a sparse matrix with a small number of nonzero entries in each column. This is indeed the case for the rep-rec_N code, as shown in Eq. 91, hence the calculation of η(s) does not incur additional overhead for decoding the rep-rec_N code. A similar conclusion holds for the multimode GKP codes discussed below, including the YY-rep-rec_N code and the surface-GKP codes.
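Putting the pieces together, the linear-time rep-rec decoding described above can be sketched as follows. The assignment of the scaled Z_N block to the q quadratures and the scaled D*_N block to the p quadratures, as well as the scale factors η/√2 and √2/η, are our reading of Eq. 90-91 and should be checked against the paper's Alg. 4; the helper names are ours.

```python
import numpy as np

def closest_point_Zn(t):
    """Componentwise rounding; half-integer ties go toward zero."""
    t = np.asarray(t, dtype=float)
    b = np.rint(t)
    ties = np.abs(t - np.trunc(t)) == 0.5
    b[ties] = np.trunc(t[ties])
    return b

def closest_point_Dn_dual(t):
    """D_n^* = Z^n U ((1/2,...,1/2) + Z^n): decode both cosets, keep the
    candidate nearer to the target."""
    t = np.asarray(t, dtype=float)
    r1 = np.full(t.shape, 0.5)
    candidates = [closest_point_Zn(t), closest_point_Zn(t - r1) + r1]
    return min(candidates, key=lambda u: np.linalg.norm(u - t))

def decode_rep_rec(eta, N, t):
    """Sketch of the linear-time rep-rec decoder in the qqpp ordering.

    The target splits into a q half (decoded against a scaled Z_N
    lattice) and a p half (decoded against a scaled D_N^* lattice);
    the scalings alpha = eta/sqrt(2) and beta = sqrt(2)/eta are our
    assumed reading of Eq. 90-91.
    """
    t = np.asarray(t, dtype=float)
    alpha, beta = eta / np.sqrt(2), np.sqrt(2) / eta
    tq, tp = t[:N], t[N:]
    uq = alpha * closest_point_Zn(tq / alpha)       # scaled Z_N block
    up = beta * closest_point_Dn_dual(tp / beta)    # scaled D_N^* block
    return np.concatenate([uq, up])
```

Each block is decoded in a single pass over its components, so the overall cost is proportional to 2N, matching the runtime scaling reported in Fig. 3(b).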

B. Linear time decoder for the YY-rep-rec N code
Recall that the rep-rec_N code is not a balanced code, as it has d_Y > d_X = d_Z. In Sec. V B, we introduced the YY-rep-rec_N code, which balances the code distances by concatenating two copies of the rep-rec_N codes with the YY stabilizer. To see how to decode the YY-rep-rec_N code efficiently, we start with its generator matrix, where S_rec is given in Eq. 88 and M^{(sq)}_conc denotes the generator of the stabilizer part of the concatenated qubit code.

(Table: The stabilizers of the XX repetition code concatenated with the YY stabilizer for N = 2, 3. In both cases, the last stabilizer is the YY stabilizer, which is the tensor product of the logical Ȳ operators of the two blocks. The logical operators X̄ and Z̄ are also shown.)
We now show that the stabilizer part of the YY-rep-rec N code is a glue lattice, i.e., Λ(M^(sq)_conc) takes the glued form shown in Eq. 93, where g_YY is the binary vector corresponding to the YY stabilizer. That Λ(M^(sq)_conc) is a sublattice of this glue lattice is evident from the construction of M^(sq)_conc: a given x ∈ Λ(M^(sq)_conc) can be expanded in the rows of M^(sq)_conc with integer coefficients a_i and b, and one can show that x is then an element of the glue lattice in Eq. 93, hence Λ(M^(sq)_conc) is a sublattice of the latter. Similarly, we can show that the glue lattice in Eq. 93 is a sublattice of Λ(M^(sq)_conc), and hence the two represent the identical lattice.
The fact that Λ(M^(sq)_conc) is a glue lattice is important for decoding the YY-rep-rec N code, whose symplectic dual lattice is generated accordingly, where we have used the fact that the D*_2N lattice is a union of two cosets, as shown in Eq. 84-85. A similar strategy can be applied to simplify the lattice Λ_4. We first notice that g_YY = g′_YY + g_Z, where g′_YY has nonzero entries only at the first and the (N+1)-th positions. For instance, we have g′_YY = (1, 0, 1, 0, 0, 0, 0, 0)^T for N = 2. Since (1/√2) g_Z ∈ Λ_3 by definition, the glue vector in Eq. 102 can be replaced by (1/√2) g′_YY, which has no support in the p subspace. Hence Λ_4 reduces to a direct sum of two lattices for the q and p subspaces respectively. In particular, the lattice for the p subspace is Λ_3, whereas that for the q subspace, denoted Λ^(q)′_4, is glued from g′_YY. Since the sum of the components of g′_YY is equal to 2, an even number, and similarly for the vectors in the underlying lattice, Λ^(q)′_4 is a union of two cosets of Λ(M^⊕2_{D_N}); the volume of its fundamental parallelotope is equal to 2, the same as that for the D_2N lattice. We conclude that Λ^(q)′_4 = Λ(D_2N). Upon combining Eq. 101, 102 and 105, we find that the symplectic dual is a glue lattice with only one glue vector, which is proportional to the binary vector for the logical X̄ operator.
The simple structure of Λ((M^(sq)_conc)^⊥) enables a linear time decoder for the code. Recall the generator M^⊥_YY-rep-rec N defined in Eq. 95; in the qqpp ordering (we omit the superscript qqpp again), the symplectic dual lattice for the YY-rep-rec N code reads as a glue lattice whose glue vector is S′_rec g_X, with Λ_5 a direct sum of two lattices for the q and p subspaces. Because Λ(M^⊥_YY-rep-rec N) is a glue lattice, we can use Eq. 79 to find its closest point efficiently. In particular, since both the D_2N and D*_2N lattices can be decoded with runtime proportional to 2N, and Λ(M^⊥_YY-rep-rec N) consists of only two cosets, we conclude that the YY-rep-rec N code can be decoded with runtime proportional to 4N. We present its closest point decoder in Alg. 5.
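Algorithm 5 below relies on closest point searches in the D_n and D*_n lattices. A minimal sketch of these standard subroutines (following the well-known Conway-Sloane constructions; the function names are ours) might look like:

```python
def closest_point_Zn(t):
    """Closest point in Z^n: round each coordinate to the nearest integer."""
    return [float(round(x)) for x in t]

def closest_point_Dn(t):
    """Closest point in D_n (integer vectors with even coordinate sum):
    round everything, and if the sum comes out odd, re-round the
    coordinate with the largest rounding error in the other direction."""
    f = closest_point_Zn(t)
    if int(sum(f)) % 2 != 0:
        i = max(range(len(t)), key=lambda j: abs(t[j] - f[j]))
        f[i] += 1.0 if t[i] > f[i] else -1.0
    return f

def closest_point_Dn_dual(t):
    """Closest point in D_n^* = Z^n ∪ (Z^n + (1/2, ..., 1/2)):
    decode in both cosets and keep the nearer candidate."""
    c0 = closest_point_Zn(t)
    c1 = [x + 0.5 for x in closest_point_Zn([x - 0.5 for x in t])]
    d0 = sum((a - b) ** 2 for a, b in zip(t, c0))
    d1 = sum((a - b) ** 2 for a, b in zip(t, c1))
    return c0 if d0 <= d1 else c1
```

Decoding each coset of a glue lattice then amounts to shifting the target by the glue vector, calling the appropriate subroutine, and keeping the overall nearest candidate, which is exactly the structure of Alg. 5.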
We calculate the fidelity of the YY-rep-rec N code, up to N = 40, with the same Gaussian noise model as in Fig. 2(b) and Fig. 3(a), and the results are shown in Fig. 4(a). The fidelity curve exhibits a band-like feature similar to that of the rep-rec N code. We have indicated the numbers of modes that attain the minimum and maximum fidelities from the low to high noise regimes, which show that increasing the number of modes need not consistently improve the fidelity of the YY-rep-rec N code. For instance, as shown in the lower-left inset, the infidelity of the code at σ = 0.4 and 0.4734 reaches its minimum for N = 28 and 20 respectively, which outperform the code with N = 40. As a result, we conclude that the YY-rep-rec N code, similar to the rep-rec N code, does not exhibit a noise threshold.
We compare the runtime of the exponential-time, general-purpose closest point decoder and the linear-time decoder tailored to the YY-rep-rec N code, as shown in Fig. 4(b). Each data point corresponds to the runtime averaged over all the samples for all the values of σ considered. As expected, the time overhead of the former increases rapidly, and the runtime difference is around four orders of magnitude for the case of twenty modes. In the inset, we confirm that the runtime of the linear-time decoder increases linearly with the number of modes.

Algorithm 5: DecodeYYRepRecN(t)
Input: The error syndrome t ∈ R^2N;
Output: The optimal integer vector b ∈ Z^2N;
t_q ← t[1 : 2 : end]; t_p ← t[2 : 2 : end];
b_1,q ← ClosestPointDn(t_q); b_1,p ← ClosestPointDnDual(t_p);
b_1[1 : 2 : end] ← b_1,q; b_1[2 : 2 : end] ← b_1,p;
g_X,q ← g_X[1 : 2 : end]; g_X,p ← g_X[2 : 2 : end];
b_2,q ← ClosestPointDn(t_q − g_X,q); b_2,p ← ClosestPointDnDual(t_p − g_X,p);
b_2[1 : 2 : end] ← b_2,q; b_2[2 : 2 : end] ← b_2,p;
return whichever of b_1 and g_X + b_2 is closer to t;

In Sec. VIII B, we show that decoding the YY-rep-rec N code is equivalent to finding the closest point for a glue lattice, as presented in Eq. 96. This is certainly not unique to the YY-rep-rec N code, and in App. E, we generalize the argument to show that a general concatenated GKP code can also be viewed as a glue lattice. Here we focus on the surface-GKP codes, which encode a single logical qubit by concatenating a [[N, 1, d_0]] surface code with N square GKP codes; note that N = d_0^2 for surface-GKP codes. As shown in App. E, the symplectic dual lattice of the surface-GKP code can be written as a union of cosets labeled by vectors g_j that correspond to the elements of the normalizer group. For notational simplicity, we assume g_0 = 0, that g_1, ..., g_{N−1} generate the full stabilizer group, and that g_N, g_{N+1} are the logical X̄ and Z̄ operators. The problem of decoding the surface-GKP code is equivalent to finding the closest point in Λ(M^⊥_surf) for a given syndrome. However, because the lattice is a union of 2^{N+1} cosets, direct application of either Eq. 77 or 79 incurs an exponential runtime. We now show that the search over the 2^{N+1} cosets can be performed efficiently with an MWPM algorithm, so that the surface-GKP code can be decoded in polynomial time.
As discussed in Sec. IV B, we aim to find the closest point χ for a given t. For simplicity, we will consider the scaled lattice Λ′, which is an integral lattice, and find the closest point χ′ for the scaled vector t′. Since χ′ is an integer valued vector, we first observe that each component χ′_i has to be either the closest or the second closest integer to t′_i. In other words, χ′ is contained in a set S_χ′ of 2^{2N} vectors, where w(x) denotes the second closest integer to x ∈ R, as defined in Eq. 81. To see that, suppose χ′_i is neither the closest nor the second closest integer to t′_i; because Λ(2I_2N) is a sublattice of Λ′, we can use the i-th basis vector of Λ(2I_2N) to translate χ′_i to either the closest or the second closest integer to t′_i. The resultant vector is guaranteed to be closer to t′ than the vector before the translation, hence χ′ must be in the set S_χ′. However, directly searching through S_χ′ is not only impractical for large N, but also unnecessary, because not all the vectors in S_χ′ are in the lattice Λ′. For instance, although χ′′ ≡ (⌊t′_1⌉, ..., ⌊t′_2N⌉) is the closest possible integer valued vector to t′, it is not necessarily in Λ′. Instead, we have to round certain components of χ′′ in the wrong way, similar to how we find the closest point in the D_n lattice, as shown in Sec. VII B. For this purpose, we further observe that Λ′ can be written in the form of Eq. 115. To see that Eq. 115 holds, we note that any vector v ∈ Λ′ has an integer valued symplectic product with all the vectors in Λ(M_surf), from which it follows that v is in the set on the right hand side of Eq. 115. With these observations, finding the closest point χ′ reduces to finding a vector in S_χ′ that is closest to t′ while satisfying mod(g_i^T Ω χ′, 2) = 0 for 1 ≤ i ≤ N − 1. Although χ′′ need not satisfy the latter condition, it can be used as an ansatz, and we enumerate the stabilizers with mod(g_i^T Ω χ′′, 2) ≠ 0.

[Fig. 5(b)-(c) caption: The fidelity of the surface-GKP codes near the threshold. The noise strength σ is scanned from 0.596 to 0.607 with resolution 0.001, and the distances are odd integers from d_0 = 3 to d_0 = 29. Each data point is obtained from 10^7 Monte-Carlo samples, 10 times as large as that for the data points in the other subplots. The standard errors of the data points are of the order 10^−4, and the error bars of the fidelity are all smaller than the markers of the data points. The black solid and dashed vertical lines correspond to the thresholds for the MWPM closest point and log-likelihood decoders, which read 0.602 and 0.599 respectively. The red solid line corresponds to 1/√e = 0.6065..., an important quantity from the quantum information theory point of view because it is the value of σ at which the known lower bound to the quantum capacity of a Gaussian random displacement channel vanishes; see the main text for more discussion. The inset shows the crossings as a function of d_0, where the crossing for the MWPM closest point decoder is consistently higher than that for the log-likelihood decoder by 0.002 for large d_0. (c) The logarithm of the infidelity as a function of d_0 for σ = 0.5959. The blue circles and red squares correspond to the MWPM closest point and log-likelihood decoders respectively. The data points are fitted with linear functions.]
Our goal is to round certain entries of χ′′ in the wrong way such that the resultant vector χ′ is guaranteed to be in Λ′. Since χ′′ is the closest integer valued vector, changing certain entries from ⌊t′_i⌉ to w(t′_i) will increase the distance to t′; we would therefore like to minimize the increased distance while ensuring χ′ ∈ Λ′.
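The two rounding choices and the associated distance penalty can be made concrete with a short sketch (the helper names are ours; w(x) follows the definition of the second closest integer in Eq. 81):

```python
def nearest_int(x):
    """Closest integer to x."""
    return round(x)

def second_nearest_int(x):
    """Second closest integer w(x): one step away from the nearest
    integer, on the side where x lies."""
    n = nearest_int(x)
    return n + 1 if x >= n else n - 1

def wrong_rounding_penalty(x):
    """Increase in squared distance when x is rounded to w(x) instead of
    its nearest integer; this is the quantity to be minimized when some
    entries of the ansatz must be re-rounded to land on a lattice point."""
    n, w = nearest_int(x), second_nearest_int(x)
    return (x - w) ** 2 - (x - n) ** 2
```

The penalty lies between 0 (a tie at a half-integer) and 1 (an exact integer), so re-rounding the entries closest to the decision boundary is always cheapest.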
In the above description, the stabilizers of the surface-GKP code are not explicitly invoked, hence the conclusion applies to the surface-GKP code with different choices of stabilizer generators, as well as to other concatenated GKP codes. However, we emphasize that for the decoding strategy presented below to work, we do need to assume that a given shift error can induce at most two syndrome errors, a property shared by the surface-GKP code and some other concatenated GKP codes. Certain stabilizer codes, such as the color code, do not have this property, hence the closest point decoder presented here cannot be applied to such cases. We have also restricted our attention to the case where the base GKP code is a square GKP code.
With these constraints stated, finding the closest point χ′ can be further reduced to an MWPM problem for a weighted graph G = (V, E, W), defined in Eq. 116. Here V = {v_i} contains a set of N − 1 vertices, each corresponding to a stabilizer generator g_i. Any pair of distinct vertices v_i and v_j shares an edge e ∈ E if g_i and g_j share one or more nonzero entries. In particular, let g_jk denote the k-th entry of g_j, and consider the set of shared entries of g_i and g_j, defined in Eq. 117. Since we would like to minimize the total weight of a matching, the weight of the edge between vertices v_i and v_j is assigned according to Eq. 118, which is the increased distance if we round the k-th element of χ′′ in the wrong way. After the weighted graph is set up, we define the set of highlighted vertices H, namely those v_i with mod(g_i^T Ω χ′′, 2) ≠ 0. If the number of highlighted vertices is odd, we add a highlighted boundary vertex into H such that the number of highlighted vertices is always even. We then apply the MWPM algorithm to the weighted graph G to identify a set of edges which match the highlighted vertices pairwise. The selected edges correspond to the entries in χ′′ that need to be rounded in the wrong way in order to obtain a lattice point χ′ ∈ Λ′. By construction, χ′ is the closest point in S_χ′ that has even symplectic product with all the stabilizers. Hence, from Eq. 112, we arrive at the desired closest point. The above algorithm finds the closest point efficiently, because the MWPM of a graph can be found in runtime that is polynomial in the number of vertices and edges [42, 43]. It can be further sped up for the surface-GKP code if we use the qqpp convention, where the generator matrix splits into M^(q)_surf and M^(p)_surf, the generators for the lattices in the q and p subspaces respectively. In this convention, the vector g_i has support only in the q or p subspace if it corresponds to an X or Z stabilizer respectively. As a result, the weighted graph defined in Eq. 116 splits into two disjoint subgraphs G^(q) and G^(p). The subgraph G^(q) consists of (N − 1)/2 vertices, each corresponding to an X-stabilizer, whereas the (N − 1)/2 vertices in G^(p) correspond to the Z-stabilizers. The sets of highlighted vertices for the subgraphs are defined in Eq. 121, where g^(q)_i and g^(p)_i denote the vectors for the X and Z stabilizers respectively, and χ′′(q) and χ′′(p) are the projections of χ′′ onto the q and p subspaces. If either H^(q) or H^(p) has an odd number of highlighted vertices, we add a boundary vertex into the set such that both have an even number of highlighted vertices. We then apply the MWPM algorithm separately to the two disjoint subgraphs, which yields two sets of selected edges. Upon combining the two sets, they correspond to the entries in χ′′ that need to be rounded in the wrong way. We present the algorithm for decoding the surface-GKP code in Alg. 6.
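To make the reduction concrete, here is a deliberately naive sketch: a brute-force minimum-weight perfect matching over the highlighted vertices. A real implementation would use a polynomial-time blossom-based MWPM routine; the brute-force version below (our own illustration, not the paper's implementation) only serves to show how the matched pairs select the entries of χ′′ to re-round.

```python
def min_weight_perfect_matching(vertices, weight):
    """Brute-force minimum-weight perfect matching on a complete graph.
    `weight(u, v)` returns the edge weight (e.g. the wrong-rounding
    penalty of the shared entry of two stabilizers). Exponential time;
    only for illustrating the reduction on tiny examples."""
    vs = list(vertices)
    if not vs:
        return 0.0, []
    if len(vs) % 2 != 0:
        raise ValueError("add a boundary vertex first: need even parity")
    u, best = vs[0], (float("inf"), None)
    for v in vs[1:]:
        # match u with v, then recursively match the remaining vertices
        rest = [x for x in vs[1:] if x != v]
        w_rest, m_rest = min_weight_perfect_matching(rest, weight)
        total = weight(u, v) + w_rest
        if total < best[0]:
            best = (total, [(u, v)] + m_rest)
    return best

# four highlighted stabilizers, weight = |u - v| purely for illustration
total, pairs = min_weight_perfect_matching([0, 1, 2, 3], lambda u, v: abs(u - v))
```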
Let us compare the MWPM closest point decoder to the MWPM log-likelihood decoder studied in Ref. [40]. For the latter, the surface-GKP code is decoded by solving exactly the same MWPM problem, but with the crucial difference that the edge weights are not given by Eq. 118. Instead, in the MWPM log-likelihood decoder, the weights are likelihood functions that estimate the probability of logical errors due to the given syndromes, which depend on the chosen noise model. The decoder performs well when the shift error is small; however, such log-likelihood estimation is typically unreliable if the shift error is large and close to the decision boundaries, where the crossover between correctable and uncorrectable shifts occurs. The closest point decoder, on the other hand, is designed to identify the closest point exactly regardless of the size of the shift error. Further, because the closest point decoder does not assume a distribution of the error shifts (other than the fact that shorter shifts are more likely), it is not only more reliable but can also be applied to many other noise models.
In Fig. 5(a), we provide numerical evidence that the MWPM closest point decoder outperforms the MWPM log-likelihood decoder. We plot the fidelity of the surface-GKP codes as a function of the noise strength σ and distance d_0 for the two decoders. For a given distance d_0, the solid and dashed lines represent the MWPM closest point and log-likelihood decoders respectively, which shows that the fidelity of the former is always higher than that of the latter. We note from the bottom right of the plot that the difference in fidelity is larger for larger noise strength; hence the noisier the hardware, the more benefit there is in using the closest point decoder. In the lower-left inset, we show the infidelities in the low noise regime, which are almost indistinguishable for the two decoders. More interestingly, we notice that the threshold for the closest point decoder is slightly larger than that for the log-likelihood decoder, as indicated by the solid and dashed vertical lines respectively. To quantify the difference more precisely, we perform a more careful calculation, as shown in Fig. 5(b). Here we scan from σ = 0.596 to σ = 0.607 with a resolution of 0.001, and the distances of the surface code are between d_0 = 3 and d_0 = 29. In order to better suppress statistical fluctuations, each data point in Fig. 5(b) is obtained via the Monte-Carlo method with 10^7 samples, 10 times as large as that for the other plots in Fig. 5. The fidelities for the MWPM log-likelihood and closest point decoders are indicated by the circular and square markers respectively, and the dashed and solid lines are guides to the eye. To obtain the threshold for either decoder, we determine the crossing σ* beyond which the fidelity of the distance d_0 + 2 surface-GKP code becomes larger than that of the distance d_0 surface-GKP code, for a given d_0. We then investigate the value to which the crossing converges as we increase d_0, as shown in the inset. We calculate the mean and standard deviation of the crossings for d_0 > 13, and obtain σ* = 0.6025 ± 0.0004 for the closest point decoder and σ* = 0.5996 ± 0.0004 for the log-likelihood decoder. Since the resolution of σ in our simulation is 0.001, we report only three digits for the thresholds.
In the above threshold estimate, we made sure that our analysis reliably yields the threshold σ* up to three significant digits with 10^7 samples. Previous works have calculated the threshold mostly up to two significant digits and obtained σ* = 0.60 for the surface-GKP code using the MWPM log-likelihood decoder [45, 47], σ* = 0.58 for the surface-GKP code with designed noise bias [52], and σ* = 0.59 for the color-GKP code [90]. One obvious reason why we care about three significant digits is that the difference between the MWPM closest point decoder and the MWPM log-likelihood decoder can only be resolved in the third significant digit. Another reason is that the best known lower bound to the quantum capacity of a Gaussian random displacement channel (with noise standard deviation σ) is given by [91] max[log_2(1/(eσ^2)), 0], which vanishes when σ ≥ 1/√e = 0.6065.... Hence, showing that a code has a threshold higher than 1/√e would have significant implications for the Gaussian quantum capacity, as the code would then establish a better lower bound to the quantum capacity of a Gaussian random displacement channel than what has been known for the past two decades. Although this is not the case for the surface-GKP code decoded by the MWPM closest point decoder, since its threshold is only σ* = 0.602, we have made important progress towards this goal. As shown in Fig. 5, the gap between the threshold of the surface-GKP code and 1/√e (the red vertical line) has been decreased by almost a half with the closest point decoder.
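For reference, the lower bound quoted above and the point where it vanishes can be evaluated directly (the function name is our own):

```python
import math

def capacity_lower_bound(sigma):
    """Best known lower bound [91] to the quantum capacity of a Gaussian
    random displacement channel with standard deviation sigma:
    max(log2(1 / (e * sigma^2)), 0), in qubits per mode."""
    return max(math.log2(1.0 / (math.e * sigma ** 2)), 0.0)

# the bound vanishes exactly at sigma = 1/sqrt(e) ~ 0.6065
sigma_vanish = 1.0 / math.sqrt(math.e)
```

A threshold of 0.602 thus sits in the narrow window where this bound is still (barely) positive, which is why the third significant digit matters.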
In Fig. 5(c), we show the logarithm of the infidelity as a function of distance d_0 for σ = 0.5959, slightly below the thresholds of both decoders. The red squares and blue circles correspond to the MWPM log-likelihood and closest point decoders respectively. We notice that not only is the infidelity for the closest point decoder smaller than that of the log-likelihood decoder, it also decreases much faster as we increase the code distance. We fit the data points with linear functions, as shown by the solid lines, and find that the slope for the closest point decoder is almost twice as large as that for the log-likelihood decoder. This suggests that, for σ = 0.5959, as the code distance of the surface-GKP code is increased, the closest point decoder suppresses the logical error rate twice as fast as the log-likelihood decoder does. In the inset, we show the slopes as a function of σ up to σ = 0.60; their relative difference becomes smaller for smaller σ, another piece of evidence that the two decoders perform similarly in the small noise regime.
In Fig. 5(d), we further compare the runtime of the exponential-time closest point decoder (red squares) with that of the MWPM closest point decoder (blue circles) in a log-log plot. For the former, the required runtime increases so significantly from d_0 = 3 to d_0 = 5 that we only show two data points. For the MWPM closest point decoder, the runtime is reduced by three orders of magnitude for d_0 = 5, and scales like d_0^3.02 for large d_0.

X. DISCUSSION AND CONCLUSION
Algorithm 6: DecodeSurfaceGKP(t)
Input: The error syndrome t ∈ R^2N and {g_i};
Output: The closest point χ ∈ R^2N;
χ′′(q) ← (⌊t_1⌉, ⌊t_3⌉, ..., ⌊t_{2N−1}⌉);
χ′′(p) ← (⌊t_2⌉, ⌊t_4⌉, ..., ⌊t_{2N}⌉);
G^(q) ← (V^(q), E^(q), W^(q)); *defined similarly as in Eq. 116-118*
G^(p) ← (V^(p), E^(p), W^(p)); *defined similarly as in Eq. 116-118*
H^(q) ← {v_i | mod((g^(q)_i)^T χ′′(q), 2) ≠ 0}; *defined in Eq. 121*
H^(p) ← {v_i | mod((g^(p)_i)^T χ′′(p), 2) ≠ 0}; *defined in Eq. 121*
e^(q) ← MWPM(G^(q), H^(q));
e^(p) ← MWPM(G^(p), H^(p));
e[1 : 2 : end] ← e^(q); e[2 : 2 : end] ← e^(p);

In this work, we have investigated quantum error correction with GKP codes from a lattice perspective, with three main results. First, we reviewed that a general N-mode GKP code can be viewed as a 2N-dimensional symplectic integral lattice, and showed that decoding the GKP code is equivalent to finding the closest point in the lattice for a given error syndrome. Because the closest point search problem has been studied extensively in the classical error correction literature, we formulated a closest point decoder for general GKP codes. Second, we provided a proof-of-concept demonstration that it is possible to numerically search for optimized GKP codes from a lattice perspective. The numerically found codes, though not optimal, exhibit better error correction properties at low error rates compared to known GKP codes, such as the [[7,1,3]]-hexagonal code or the d_0 = 3 surface-hexagonal GKP code. Third, we showed that although the closest point decoder incurs an exponential time cost in the number of modes for general GKP codes, it is possible to devise efficient closest point decoders for structured GKP codes. In particular, we proposed two generalizations of the tesseract code, namely the rep-rec N and YY-rep-rec N codes, which exhibit good quantum error correction properties, and showed that they can be decoded in runtime that is linear in the number of modes. For the surface-GKP code, with the help of an MWPM algorithm, a polynomial-time closest
point decoder is introduced, which outperforms the previous MWPM log-likelihood decoder and yields a noise threshold of σ* = 0.602.
A few remarks are in order. Recall that in our numerical search for optimized GKP codes, we started with 10^4 initial points and performed the optimization with respect to the distance. Because of the relatively small size of the trial ansatz, the optimized codes do not necessarily have the optimal distances. In fact, for certain numbers of modes, we have examples of analytically constructed GKP codes that outperform the numerically optimized codes in terms of either distance or fidelity. It is possible to find better GKP codes by optimizing on top of these known GKP codes, but for certain dimensions, we do not have known GKP codes with good error correction capabilities. Generally, one would need to scale to a much larger set of initial points to find the optimal GKP code with a large number of modes, which will incur significant time overhead. There would be similar overhead costs if one chose to optimize with respect to fidelity instead of distance, because Monte-Carlo sampling is required at each iteration step of the optimization. The bottleneck can be partially mitigated if a more efficient algorithm is used to find the closest point. In this work, we chose to adopt the algorithm in Ref. [67] as our closest point decoder because of its simplicity, but it is certainly not the most efficient algorithm. To the best of our knowledge, the best deterministic closest point search algorithm was proposed by D. Micciancio and P. Voulgaris (MV) in Ref.
[80]. The core of the MV algorithm is a more efficient method for determining the Voronoi cell of the lattice, such that the complexity of the algorithm is 2^O(n). Although this is still exponential in the dimension of the lattice (because the closest point search problem is NP-hard), it improves on the n^O(n) runtime of previously known algorithms [82]. Subsequently, Daniel Dadush and Nicolas Bonifas gave a randomized algorithm that provides a quadratic speedup over the MV algorithm [81]. The algorithm is Las Vegas in the sense that it always gives the correct result but its runtime depends on the inputs. It would be interesting to implement these algorithms and use them to search for optimized GKP codes.
Note also that the efficiency of decoding a GKP code relies heavily on the chosen basis. In the main text and App. C, we discussed the KZ and LLL algorithms for finding a good basis for different applications. Further, for a given GKP code, a set of good basis vectors will aid its experimental realization. In various proposals for implementing GKP codes [37, 38, 92], the time it takes to stabilize a GKP code is proportional to the Euclidean length of the GKP stabilizer generators being measured. Thus, especially for numerically optimized codes, it is important to look for good lattice generators that both speed up the closest point search algorithms and are practical for experimental implementations. As a related note, it could also be interesting to numerically optimize GKP codes using a well structured ansatz (e.g., geometric locality or bounded length of all stabilizer generators) such that the numerically found GKP code is guaranteed to be readily implementable experimentally.
Another interesting future direction would be benchmarking closest point decoders for other families of concatenated GKP codes. For instance, an MWPM decoder was recently proposed for decoding the color code with a Möbius geometry, which demonstrates a logical failure rate competitive with the optimal performance of the surface code [93]. Given that the closest point decoder can increase the fidelity and noise threshold of the surface code, it would be interesting to see if it can help in a similar manner for the color code. However, the MWPM closest point decoder cannot be directly applied to the color code, because the decoder assumes that a given shift error can induce at most two syndrome errors. Hence, finding efficient closest point decoders for the color code and other families of concatenated GKP codes is a challenging but urgent topic in its own right.
Lastly, we remark that we have only focused on minimum energy decoding via the closest point problem, which is optimal only in the σ → 0 limit [54]. The truly optimal decoding strategy is maximum likelihood decoding, which is more involved than closest point decoding. An interesting future work would be to investigate maximum-likelihood decoders and see if the error correction performance of multimode GKP codes can be significantly improved in the large σ regime (e.g., close to where the quantum capacity nearly vanishes).

XI. ACKNOWLEDGEMENTS
It is a pleasure to thank Arne Grimsmo, John Preskill and Mackenzie Shaw for useful discussions. ML would like to thank Péter Kómár and Eric Kessler for their support of the project. We also would like to thank Francesco Arzani and Timo Hillmann for the very insightful discussions on constructing lattices for concatenated GKP codes. We would like to acknowledge the AWS EC2 resources which were used for part of the simulations performed in this work.

Appendix: Constructing lattices for concatenated GKP codes

In this appendix, we provide more details for constructing lattices from concatenated GKP codes. Specifically, we focus on concatenating an [[N, k]] stabilizer code with N single mode square GKP codes to encode k qubits.
As explained in Sec. III G, we start by constructing a separable lattice M^(sq) generated by N copies of the square GKP code. We then replace N − k rows of M^(sq) by the set of vectors {(1/√2) g_j^T, j = 1, ..., N − k}, where the g_j are the binary vectors for the generators of the stabilizer group. To make sure the resultant matrix, denoted M^(sq)_conc, is full rank, for each g_j we let the l-th element be the first nonzero element of g_j and replace the l-th row of M^(sq) by g_j^T/√2 if it has not been replaced before; otherwise we look for the next nonzero element of g_j until an appropriate replacement is made. We repeat this process for all the stabilizer generators g_j until the desired M^(sq)_conc is obtained. This algorithm is shown in Alg. 7.
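A compact sketch of this row-replacement procedure (our own rendering of Alg. 7, with hypothetical names and a plain list-of-lists matrix) is:

```python
import math

def concatenate_square_gkp(m_sq, stabilizers):
    """Row-replacement construction: for each binary stabilizer vector
    g_j, find its first nonzero entry whose row has not been replaced
    yet, and overwrite that row of the square-GKP generator with
    g_j / sqrt(2). Raises if no free row exists, in which case the
    standard-form construction described below is needed instead."""
    m = [row[:] for row in m_sq]          # copy the 2N x 2N generator
    replaced = set()
    for g in stabilizers:
        for l, bit in enumerate(g):
            if bit != 0 and l not in replaced:
                m[l] = [x / math.sqrt(2) for x in g]
                replaced.add(l)
                break
        else:
            raise ValueError("no free row for this stabilizer generator")
    return m
```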
We used Alg. 7 for the multimode GKP codes discussed in the main text, but it turns out that for certain GKP codes the algorithm does not give a full rank matrix. A more general approach works with the standard form of the stabilizer code. For that, we view the set of binary vectors g_j^T as an (N − k) × 2N matrix G with components g_jl. As shown in Section 10.5.7 of Ref. [1], via Gaussian elimination and relabelling of the qubits if needed, one can bring the matrix G into the standard form

G = ( I A_1 A_2 | B 0 C
      0  0  0  | D I E ).

Here r is the rank of the left (N − k) × N submatrix of G, I is the identity matrix, and A_1, A_2, B, C, D, E are all integer valued matrices. With that, the generator matrix for the concatenated GKP code, in the qqpp ordering, can be constructed accordingly. To see that M^(sq)_conc is a valid GKP lattice, we first note that since the g_j correspond to stabilizers that commute with each other, we have mod(g_j^T Ω_qqpp g_l, 2) = 0 for all j, l = 1, ..., N − k. Hence the Gram matrix M^(sq)_conc Ω_qqpp (M^(sq)_conc)^T is indeed integer valued. Further, the matrix has determinant 2^k, which can be seen by swapping the two columns labeled by N − k − r. Thus, we conclude that M^(sq)_conc is a GKP code that encodes k qubits.
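The Gaussian elimination step over GF(2) used to reach the standard form can be sketched as follows (the column swaps and qubit relabelling that the full standard-form procedure also requires are omitted here):

```python
def gf2_row_reduce(G):
    """Row-reduce a binary matrix over GF(2) using XOR row operations.
    Returns a new matrix in reduced row echelon form; row operations do
    not change the stabilizer group generated by the rows."""
    G = [row[:] for row in G]
    n_rows, n_cols = len(G), len(G[0])
    r = 0
    for c in range(n_cols):
        # find a pivot row with a 1 in column c
        pivot = next((i for i in range(r, n_rows) if G[i][c]), None)
        if pivot is None:
            continue
        G[r], G[pivot] = G[pivot], G[r]
        # clear column c in all other rows via XOR
        for i in range(n_rows):
            if i != r and G[i][c]:
                G[i] = [a ^ b for a, b in zip(G[i], G[r])]
        r += 1
        if r == n_rows:
            break
    return G
```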
It is important to note that swapping the columns of M^(sq)_conc is equivalent to multiplying M^(sq)_conc from the right by a non-symplectic orthogonal matrix, which in general leads to a non-symplectic integral matrix M′. Although M′ has the same determinant as M^(sq)_conc, it cannot be regarded as a GKP code. This can also be seen from the fact that swapping the columns of G generally spoils the commutation relations of the stabilizers.
We note that since A_12 divides all the matrix elements in A^(3), the matrix R′_4 is unimodular. Since A^(4)′ is an (n − 2) × (n − 2) antisymmetric tridiagonal matrix, recursive application of the subroutines CanonizeTridiagonal and BlockTridiagonalize arrives at the canonical form of A, our initial given matrix. We collect all the unimodular matrices involved in this process as R_4. An optional subroutine could be devised to perform additional row and column swaps such that |d_1| ≥ |d_2| ≥ ... ≥ |d_n| in the canonical form shown in Eq. B1.
Here we provide more details for the subroutine PutFirstRowToZero. Consider an antisymmetric matrix A. For any integer pair (A_12, A_13), by Bezout's identity, we can use the extended Euclidean algorithm to find another integer pair (x_3, x_13) such that A_12 x_3 + A_13 x_13 = GCD(A_12, A_13) ≡ g_3. We can then construct a unimodular matrix from these Bezout coefficients that combines the entries A_12 and A_13 into their greatest common divisor g_3. The procedure then continues with the pair (g_3, A_14), for which we denote the corresponding Bezout coefficients as (x_4, x_14), such that g_3 x_4 + A_14 x_14 = GCD(g_3, A_14) ≡ g_4. One can confirm that, with the resulting product of unimodular matrices, the first row and column of the transformed matrix have only one nonzero entry A′_12 (and A′_21 = −A′_12), which is the greatest common divisor of the original first row.
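The Bezout coefficients and the corresponding 2×2 unimodular block can be computed with the textbook extended Euclidean algorithm; a sketch (with our own helper names) is:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b), for a, b >= 0."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def bezout_block(a, b):
    """2x2 integer matrix with determinant 1 mapping (a, b) to (gcd, 0):
    its determinant is (a*x + b*y) / g = 1 by Bezout's identity."""
    g, x, y = extended_gcd(a, b)
    return [[x, y], [-b // g, a // g]]
```

For instance, `bezout_block(12, 8)` yields a determinant-1 matrix sending (12, 8) to (4, 0); embedding such blocks into identity matrices produces exactly the kind of unimodular transformations used by PutFirstRowToZero.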
Below we present the algorithm CanonizeGKPLattice in Alg. 13, which canonizes a given GKP code with the help of the above subroutines.

In this section, we provide more details for the closest point decoder. Given an arbitrary point t ∈ R^n and the generator matrix M of an n-dimensional lattice Λ, we describe an algorithm to compute the point χ_t(Λ(M)) ∈ Λ that is closest to t [67]. In Sec. IV B, the algorithm is described in two parts, the preprocessing part and the decoding part. We first describe the LLL and KZ reductions for preprocessing the generator matrix.
For a matrix M, both the LLL and KZ reductions produce a lower triangular matrix L, as given in Eq. 61, where v_k^T is the k-th row of M and v_kj denotes its j-th entry. For later purposes, we set the diagonal components v_kk to be all positive by multiplying the k-th row by −1 if needed. The matrix L is defined recursively to be LLL-reduced if n = 1 or if the conditions in Eq. C2 hold and the submatrix in Eq. C3 is LLL-reduced; it is KZ-reduced if n = 1 or if the conditions in Eq. C4 hold and the submatrix in Eq. C3 is KZ-reduced. Clearly the two definitions only differ in the first conditions of Eq. C2 and C4. If L is KZ-reduced, then it is also LLL-reduced, but the reverse is not necessarily true. However, KZ reduction typically requires runtime that is exponential in the matrix size, whereas LLL reduction operates in polynomial time. Depending on the problem at hand, it is sometimes advantageous to use one reduction over the other, as discussed at the end of this section. As described in Sec. IV B, an n-dimensional lattice can always be decomposed into layers of (n − 1)-dimensional sublattices. Mathematically, this corresponds to decomposing the generator matrix into an (n − 1) × n submatrix L′, which generates the sublattice, and the last row v_n. The vector v_n can be further decomposed as v_n = v_∥ + v_⊥, where v_∥ = (v_n1, ..., v_n,n−1, 0)^T and v_⊥ = (0, ..., 0, v_nn)^T are parallel and perpendicular to the sublattices respectively. In this setup, the sublattices can be labeled by u_n ∈ Z, and the distance between two adjacent layers is simply v_nn. Since the decoding algorithm is described as a recursive procedure, the subscript in u_n helps keep track of the dimension of the lattice. For a given t ∈ R^n, we can similarly decompose it as t = t_∥ + t_⊥, from which we can identify the ordered set of layer indices in Eq. C8, which includes the nearest sublattice, the next nearest sublattice, and so on. In Eq. C8, the nearest sublattices are ordered according to their vertical distances to t, following the Schnorr-Euchner strategy [85]. We note that the number of sublattices in Eq. C8 can be
bounded if an upper bound on ρ is known, because χ_t(Λ) cannot lie in a sublattice whose vertical distance y_n is larger than the bound. Hence we can start the search from the nearest sublattice, and once a candidate lattice point for χ_t(Λ) is identified, its distance to t serves as the bound ρ until the next candidate point with a shorter distance is found. We have thus reduced the problem of finding the closest lattice point in an n-dimensional lattice to finding it in a set of (n − 1)-dimensional lattices. This dimensional reduction can proceed further, and since the generator matrix is lower triangular, the procedure described above applies to all k-dimensional sublattices with 1 ≤ k ≤ n − 1. Suppose we are searching a k-dimensional sublattice with the set of nearest sublattices labeled by {u*_k, u*_k − 1, u*_k + 1, ...}. There are three possibilities.

1. k = 1. Since a sublattice of a 1D lattice is a point, we have arrived at a candidate closest point, namely u*_1. If the distance between this lattice point and t is smaller than the bound ρ, we update the bound and the candidate closest point; otherwise we discard the point found. After that, we set k = 2.
2. n − 1 ≥ k > 1. We search for the closest point in each subspace via dimensional reduction, updating the best candidate closest point and the upper bound ρ as we go. This continues until none of the subspaces in the set {u*_k, u*_k − 1, u*_k + 1, ...} has a vertical distance to t less than ρ. Then we set k to k + 1.
3. k = n. This means that we have searched all the subspaces in Eq. C8 that could possibly contain the closest point. Hence we output the best candidate lattice point found.
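The three-case recursion above can be sketched in a few lines, assuming a lower-triangular generator matrix L with positive diagonal (rows are basis vectors). This is a plain-Python sketch of the Schnorr-Euchner layer search, with our own function and variable names; it is not the optimized implementation of Ref. [67]:

```python
import numpy as np

def closest_point(L, t):
    """Exact closest lattice point to t for a lower-triangular generator
    matrix L (rows are basis vectors, positive diagonal), via the recursive
    layer-by-layer search described in the text."""
    L = np.asarray(L, float)
    t = np.asarray(t, float)
    n = len(L)
    best = [np.inf, None]                 # squared bound rho^2 and coordinates
    u = [0] * n

    def search(k, tk, dist2):
        if k == 0:                        # all layers fixed: a candidate point
            if dist2 < best[0]:
                best[0], best[1] = dist2, u.copy()
            return
        center = tk[k - 1] / L[k - 1, k - 1]
        u0 = round(center)                # index of the nearest sublattice
        s = 1 if center >= u0 else -1
        j = 0
        while True:                       # zig-zag: u0, u0+s, u0-s, u0+2s, ...
            uk = u0 + (s * ((j + 1) // 2) if j % 2 else -s * (j // 2))
            y = (center - uk) * L[k - 1, k - 1]   # vertical distance to layer
            if dist2 + y * y >= best[0]:  # layers only get further: prune
                break
            u[k - 1] = uk
            search(k - 1, tk[: k - 1] - uk * L[k - 1, : k - 1], dist2 + y * y)
            j += 1

    search(n, t, 0.0)
    return np.asarray(best[1]) @ L
```

The zig-zag enumeration visits sublattices in order of non-decreasing vertical distance, so the first violation of the bound ρ terminates the loop at that level.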
The above closest point algorithm can be significantly sped up if the (n − 1)-dimensional subspaces in Eq. C8 are as well separated as possible, which minimizes the number of subspaces to be searched within the bound ρ. In the extreme case, if the spacing between the (n − 1)-dimensional subspaces is much larger than that of all the lower dimensional sublattices, then the closest point is very likely contained in the nearest plane. In this case, the dimensionality of the problem is effectively reduced by one. Similarly, the spacing between the points in the 1D sublattice should be as small as possible. If the 1D sublattice is so dense that all the higher dimensional lattices have much larger spacings, then we only need to search the closest sublattices, which again reduces the dimensionality of the problem by one. The KZ reduction yields an optimal basis that conforms to the above two observations [67]. From Eq. C4, we see that the KZ reduction produces the smallest possible value for v_11 in Eq. C1, hence the 1D sublattice is densely packed. Since the reduction is applied recursively, v_22 and the other diagonal elements are minimized subsequently. Because changing the basis does not change the volume of the Voronoi cell, det(L) = ∏_{i=1}^n v_ii, this order of minimization naturally produces the largest possible v_nn, and we conclude that the spacing between the (n − 1)-dimensional subspaces is maximized [67]. Unfortunately, the runtime of the KZ reduction scales exponentially with the dimensionality of the lattice. On the other hand, the LLL reduction, which runs in time polynomial in n, only produces an approximately optimal basis, because the first condition in Eq. C2 is not optimal. Because of this trade-off between runtime and basis quality, one should choose different reduction methods for different problems. For the purpose of characterizing a GKP code, if we would like to compute its distance, we use the LLL algorithm, because the decoding algorithm will only be run a handful of times; if the fidelity of the GKP code is the desired quantity, which typically involves decoding a few million error syndromes or more, we use the KZ reduction to preprocess the lattice once for all subsequent decodings. For further comparison of the two reductions, readers are referred to the detailed review in [94] and the benchmarking results in [67]. A detailed implementation of the algorithm presented here can also be found in [67].
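The polynomial-time LLL reduction just discussed can itself be sketched compactly. Below is a textbook version (δ = 3/4) operating on row bases; the function name is ours, and unlike the paper's preprocessing it does not additionally bring the basis to the lower-triangular form of Eq. 61:

```python
import numpy as np

def lll_reduce(basis, delta=0.75):
    """Textbook LLL reduction of a full-rank lattice basis (rows of `basis`).
    Returns a reduced basis generating the same lattice."""
    B = np.array(basis, dtype=float)
    n = len(B)

    def gram_schmidt():
        Bstar = np.zeros_like(B)
        mu = np.zeros((n, n))
        for i in range(n):
            Bstar[i] = B[i]
            for j in range(i):
                mu[i, j] = B[i] @ Bstar[j] / (Bstar[j] @ Bstar[j])
                Bstar[i] -= mu[i, j] * Bstar[j]
        return Bstar, mu

    Bstar, mu = gram_schmidt()
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):          # size reduction: |mu_kj| <= 1/2
            q = round(mu[k, j])
            if q != 0:
                B[k] -= q * B[j]
                Bstar, mu = gram_schmidt()
        # Lovasz condition between layers k-1 and k
        if Bstar[k] @ Bstar[k] >= (delta - mu[k, k - 1] ** 2) * (Bstar[k - 1] @ Bstar[k - 1]):
            k += 1
        else:
            B[[k - 1, k]] = B[[k, k - 1]]       # swap the two rows and backtrack
            Bstar, mu = gram_schmidt()
            k = max(k - 1, 1)
    return B
```

All operations are unimodular row operations, so |det| (the Voronoi cell volume) is preserved while the basis vectors are shortened.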
If we color the scaled Z^n lattice in a checkerboard fashion, we obtain the scaled D_n lattice. For example, the scaled D_4 lattice has the generator matrix

λ ( −1 −1  0  0
     1 −1  0  0
     0  1 −1  0
     0  0  1 −1 ).

The closest and the second closest points to t in the scaled Z^n lattice are given in Eqs. D5 and D6, respectively. Again, since the norm of their difference is 1 (in units of λ), we simply need to determine which one of the two vectors has an even sum of components; the corresponding lattice point is then the closest point in the scaled D_n lattice.
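A minimal sketch of this parity trick for the unscaled D_n lattice (λ = 1), following the construction described in the text; the function name is ours:

```python
import numpy as np

def closest_point_dn(t):
    """Closest point of D_n = {x in Z^n : sum(x) even} to t, in linear time:
    round every coordinate, and if the coordinate sum is odd, re-round the
    worst coordinate to its second-nearest integer."""
    t = np.asarray(t, float)
    b = np.round(t)                           # chi_t(Z^n): nearest integer vector
    if int(b.sum()) % 2 != 0:
        k = int(np.argmax(np.abs(t - b)))     # coordinate with largest rounding error
        b[k] += 1.0 if t[k] > b[k] else -1.0  # second-nearest integer w(t_k)
        # b is now chi'_t(Z^n); its coordinate sum has the opposite parity
    return b
```

Flipping the coordinate with the largest rounding error incurs the smallest possible increase in distance while toggling the parity of the coordinate sum.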
For N = 9, one can check that the corresponding GKP code has distance 3.556, which is better than √3 · √π ≈ 3.070, the distance of the nine-mode surface-GKP code.
generator matrix of a GKP code
D. The logical operators of a GKP code
E. Code distances of a GKP code
F. Transformation between GKP codes
G. The concatenated GKP code
H. Examples of symplectic lattices and GKP codes
IV. Closest point decoder for the GKP codes
A. Error syndrome for GKP code
B. Closest point search problem
V. Searching for optimized GKP codes
A. Analysis of known concatenated GKP codes
B. Generalizations of the tesseract and D_4 codes
C. Numerical search for optimized GKP codes
VI. Efficient closest point decoder for structured GKP codes
A. Decoding a discrete set of points
B. Decoding direct sums of lattices
C. Decoding union of cosets
D. Decoding glue lattices
VII. Linear time decoder for D_n lattices and their Euclidean duals
A. Linear time decoder for Z^n lattices
B. Linear time decoder for D_n lattices
C. Linear time decoder for D*_n lattices
VIII. Linear time decoders for the rep-rec_N and YY-rep-rec_N codes
A. Linear time decoder for the rep-rec_N code
B. Linear time decoder for the YY-rep-rec_N code
IX. Polynomial time closest point decoder for surface-GKP code
X. Discussion and conclusion
XI. Acknowledgements
A. Details of constructing lattices for concatenated GKP codes
B. Algorithm for canonizing a GKP lattice
C. More details on the closest point decoder

arXiv:2303.04702v3 [quant-ph] 20 Dec 2023

I. INTRODUCTION

[[5,1,3]]   [[7,1,3]]   d_0 = 3 surface code
IXZZX       IIIXXXX     XXIXXIIII
XIXZZ       IXXIIXX     IXXIIIIII
ZXIXZ       XIXIXIX     IIIIIIXXI
ZZXIX       IIIZZZZ     IIIIXXIXX
            IZZIIZZ     ZIIZIIIII
            ZIZIZIZ     IZZIZZIII
                        IIIZZIZZI
                        IIIIIZIIZ

TABLE I: The stabilizers for the [[5,1,3]], [[7,1,3]] and d_0 = 3 surface codes.

FIG. 2: (a) The distances of the GKP codes discussed in Sec. V, as a function of the number of modes. The circles indicate the concatenated GKP codes introduced in Sec. V A, the green diamonds and red stars are for the rep-rec_N and YY-rep-rec_N codes respectively, and the purple squares are the numerically optimized codes. The dashed lines are guides to the eye for the corresponding families of GKP codes. (b) The fidelities for the same set of GKP codes (indicated with the same legends). Each data point is obtained by sampling 10^6 random shift errors from the Gaussian distribution N(0, σ^2) with σ ≈ 0.5143. We emphasize that the numerically optimized codes are not optimal; see the discussion in the main text.

Algorithm 2: ClosestPointDn(t)
Input: The error syndrome t ∈ R^n;
Output: The optimal integer vector b ∈ Z^n;
b1 ← χ_t(Z^n);   /* Defined in Eq. 80 */
b2 ← χ'_t(Z^n);  /* Defined in Eq. 82 */
if sum(b1) is even then
    b ← b1
else
    b ← b2
end

(M^T b)^T ((M*)^T a) = b^T (M (M*)^T) a ∈ Z for arbitrary integer vectors a and b. Here we present the linear time decoder for the D*_n lattices, the Euclidean duals of the D_n lattices. It turns out that the D*_n lattice is the union of two cosets of Z^n [57, 63],
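The union-of-cosets structure translates directly into a decoder: decode t against each coset of Z^n and keep the closer candidate. A minimal sketch, assuming the standard representative (1/2, ..., 1/2) for the shifted coset (function name ours):

```python
import numpy as np

def closest_point_dn_star(t):
    """Closest point of D*_n = Z^n  union  (Z^n + (1/2, ..., 1/2)) to t.
    Decode t against each coset of Z^n separately and keep the closer
    of the two candidates."""
    t = np.asarray(t, float)
    half = np.full_like(t, 0.5)
    c0 = np.round(t)                  # best point of the integer coset
    c1 = np.round(t - half) + half    # best point of the shifted coset
    return c0 if np.sum((t - c0) ** 2) <= np.sum((t - c1) ** 2) else c1
```

Since decoding each coset is a coordinate-wise rounding, the total cost remains linear in n.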

FIG. 3: The numerical results for the rep-rec_N code. (a) The fidelity of the rep-rec_N code as a function of the number of modes and the noise strength σ. Each line corresponds to a given number of modes, which varies from N = 1 to N = 30. For σ = 0.4, 0.5714, 0.8, we have indicated the numbers of modes that yield the minimum and maximum fidelities. The top-right inset shows the infidelities between σ = 0.4 and 0.4898 for different numbers of modes on a log scale. The numbers of modes that yield the minimum and maximum infidelities are indicated for σ = 0.4 and 0.4898 respectively. (b) The comparison of runtimes for the exponential time closest point decoder (squares) and the linear time closest point decoder (circles) for increasing number of modes. For each data point, we average over all the samples for all the values of σ considered. The inset shows the runtimes of the linear time decoder for different numbers of modes, up to N = 30.
the stabilizer part of the rep-rec_N code, and M^(sq)_rep is defined in Eq. 87. As discussed in Sec. III G, M^(sq)_conc is constructed by replacing (2N − 1) rows, where each replaced row g_j corresponds to a stabilizer generator. Because the (2N − 2) stabilizers from the XX repetition codes can be separated into two disjoint blocks, as evident from Tab. II, the replacement with these vectors yields a direct sum of two Λ(M^(sq)_rep) lattices. To see that Eq. 93 holds, suppose Λ_1 is generated by a set of basis vectors {r_j, j = 1, ..., 4N}; then Λ(M^(sq)_conc) is spanned by the same set of basis vectors except the last one,
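Because the lattice decomposes as a direct sum, its closest point problem also decomposes blockwise: each summand can be decoded independently and the results concatenated. A sketch of this observation (the names `decoders`, `blocks` and the function name are illustrative, not notation from the paper):

```python
import numpy as np

def closest_point_direct_sum(decoders, blocks, t):
    """Closest point in a direct sum Lambda_1 + Lambda_2 + ... (orthogonal
    coordinate blocks): decode each block with the decoder of its summand
    and concatenate the results."""
    t = np.asarray(t, float)
    return np.concatenate([dec(t[blk]) for dec, blk in zip(decoders, blocks)])
```

For example, with two Z^2 summands each decoded by rounding, `closest_point_direct_sum([np.round, np.round], [slice(0, 2), slice(2, 4)], t)` rounds the two halves of t independently.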

FIG. 4: (a) The fidelity of the YY-rep-rec_N code as a function of the number of modes and the noise strength σ. Each line corresponds to a given number of modes, which takes even integers from N = 2 to N = 40. For σ = 0.4, 0.5714, 0.8, we have indicated the numbers of modes that yield the minimum and maximum fidelities. The lower-left inset shows the infidelities between σ = 0.4 and 0.4734 for different numbers of modes on a log scale. The numbers of modes that yield the minimum and maximum infidelities are indicated for σ = 0.4 and 0.4734 respectively. (b) The comparison of runtimes for the exponential time decoder (squares) and the linear time decoder (circles) for increasing number of modes. For each data point, we average over all the samples for all the values of σ considered. The inset shows the runtimes of the linear time decoder for different numbers of modes, up to N = 40.

FIG. 5: Numerical results for surface-GKP codes. (a) The fidelity of the surface-GKP codes as a function of the noise strength σ and distance d_0, which takes odd integers from d_0 = 3 to d_0 = 11, as indicated. The solid and dashed lines correspond to the MWPM closest point decoder and the MWPM log-likelihood decoder respectively, and the vertical lines indicate their thresholds. The bottom-left inset shows the infidelities near σ = 0.4 for different d_0. (b) The fidelity of the surface-GKP codes near the threshold. The noise strength σ is scanned from 0.596 to 0.607 with resolution 0.001, and the distances are odd integers from d_0 = 3 to d_0 = 29. Each data point is obtained from 10^7 Monte Carlo samples, 10 times as many as for the data points in the other subplots. The standard errors of the data points are of order 10^-4, and the error bars of the fidelity are all smaller than the markers of the data points. The black solid and dashed vertical lines correspond to the thresholds of the MWPM closest point and log-likelihood decoders, which read 0.602 and 0.599 respectively. The red solid line corresponds to 1/√e = 0.6065..., an important quantity from the quantum information theory point of view because it is the value of σ at which the known lower bound to the quantum capacity of a Gaussian random displacement channel vanishes. See the main text for more discussion. The inset shows the crossings as a function of d_0, which shows that the crossing for the MWPM closest point decoder is higher than that for the log-likelihood decoder by 0.002 for large d_0. (c) The logarithm of the infidelity as a function of d_0 for σ = 0.5959. The blue circles and red squares correspond to the MWPM closest point and log-likelihood decoders respectively. The data points are fitted with linear functions, and the slopes for the two decoders read −2.65 × 10^-3 and −1.40 × 10^-3 respectively. The inset shows the slopes as a function of σ up to σ = 0.60. See the main text for more discussion. (d) Comparison of the runtimes of the MWPM closest point decoder and the exponential time closest point decoder for the surface-GKP codes. Both horizontal and vertical axes are in log scale. The red squares correspond to the exponential-time decoder, whose runtime scales exponentially with d_0^2 (hence only two data points are calculated); the blue circles correspond to the runtime of the MWPM closest point decoder, which scales as d_0^3.02 for large d_0.
Appendix A: Details of constructing lattices for concatenated GKP codes

Algorithm 13: CanonizeGKPLattice(M)
Input: A GKP lattice generator M
Output: The canonical basis M' for the GKP code
A ← M Ω M^T
R2 ← Tridiagonalize(A)
R4 ← BlockTridiagonalize(R2 A R2^T)
R ← R4 R2
M' ← R M

Similarly, the KZ-reduced basis is defined recursively: L is KZ-reduced if n = 1, or if the following conditions hold,

v_1 is the shortest nonzero vector in Λ,
|v_k1| ≤ |v_11|/2 for k = 2, ..., n,    (C4)

and the submatrix in Eq. C3 is KZ-reduced.

Following the idea in Sec. VII B, in order to find the closest point in the scaled D_n lattice, we first find the closest and the second closest points in the Z^n lattice. In units of the lattice constant λ, they are given by the integer-valued vectors

(⌊y_1⌉, ..., ⌊y_n⌉)    (D5)

and

(⌊y_1⌉, ..., ⌊y_{k−1}⌉, w(y_k), ⌊y_{k+1}⌉, ..., ⌊y_n⌉),    (D6)

diag(d_1, ..., d_N) (or diag(d) in short) is a diagonal matrix whose elements are natural numbers, i.e., d ∈ N^N. Eq. 20 means that for any valid generator matrix M of a GKP code, it is possible to find a unimodular matrix R such that M' = RM takes the canonical form.

FIG. 1: A 2D lattice as a stack of 1D sublattices. The black dots represent the lattice points of a triangular lattice, and the dashed lines represent the parallel 1D sublattices. The decomposition into sublattices is not unique. The red dot is the input vector t, which can be decomposed as t = t_∥ + t_⊥, parallel and perpendicular to the sublattices respectively. For illustration purposes, the point is chosen to lie slightly above the center of the equilateral triangle formed by the points o, a and b. Hence the closest point is a; on the other hand, the line closest to t is the one through o and b. The closest point algorithm will first search the ob line and select either o or b as the candidate closest point, which sets the upper bound for the distance between the closest point and t. Hence we can ignore the rest of the 1D sublattices, and only need to search the ac line, which leads to the true closest point, namely a.
because the nearest lattice point need not lie within the nearest sublattice, as shown in Fig. 1. Nevertheless, the distance between χ_{t′}(Λ′) and t′ provides an upper bound ρ ≡ ||χ_{t′}(Λ′) − t′||, and χ_{t′}(Λ) cannot lie in a sublattice with vertical distance larger than ρ. We only need to search a finite set of sublattices, updating the upper bound ρ whenever a new candidate point has a smaller distance. The search is complete after all the sublattices with vertical distance smaller than ρ have been visited. Once the closest point χ_{t′}(Λ(L)) is identified in the basis L, we can transform it back to the original basis M via Eq. 62. This concludes the closest point search algorithm.
[89]; orthogonal matrices preserve the Euclidean length in R^2N. Further, in the qqpp ordering, a 2N × 2N orthogonal symplectic matrix can be written as a matrix exponential [89].

Alternatively, since t_k is furthest from its nearest integer ⌊t_k⌉, among all the components of t, t_k must be the closest to its second nearest integer w(t_k). Hence we can also write k ≡ arg min_{1≤k≤n} |t_k − w(t_k)|. χ'_t(Z^n) is indeed the second closest point to t: its distance to t is larger than that of χ_t(Z^n), and if we were to round another component t_{i≠k} the wrong way (while rounding t_k the correct way), the resulting point χ''_t(Z^n) would be further from t than χ'_t(Z^n), by the definition of k above. We will now illustrate why the function χ'_t(Z^n) can help us find the closest point in the D_n lattice.
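The claim that χ'_t(Z^n) is the second closest integer point can also be checked numerically by brute force; the helper functions below (names ours) enumerate all integer points in a box around t:

```python
import itertools
import numpy as np

def chi_prime(t):
    """chi'_t(Z^n): take chi_t(Z^n) = round(t) and re-round the coordinate
    furthest from its nearest integer to the second-nearest integer."""
    t = np.asarray(t, float)
    b = np.round(t)
    k = int(np.argmax(np.abs(t - b)))
    b[k] += 1.0 if t[k] > b[k] else -1.0
    return b

def is_second_closest(t):
    """Brute-force check that chi_prime(t) is the second-closest point of
    Z^n to t, enumerating all integer points in a box around t."""
    t = np.asarray(t, float)
    ranges = [range(int(np.floor(x)) - 1, int(np.floor(x)) + 3) for x in t]
    pts = np.array(list(itertools.product(*ranges)))
    d2 = np.sum((pts - t) ** 2, axis=1)
    return np.array_equal(pts[np.argsort(d2)[1]], chi_prime(t))
```

For generic t (no exact ties), `is_second_closest` returns True, consistent with the argument above: changing a single coordinate by one costs 1 − 2δ_k in squared distance, which is minimized by the coordinate with the largest rounding error δ_k.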
Input: The error syndrome t ∈ R^{2N};
Output: The optimal integer vector b ∈ Z^{2N};