The vacuum provides quantum advantage to otherwise simulatable architectures

We consider a computational model composed of ideal Gottesman-Kitaev-Preskill stabilizer states, Gaussian operations (including all rational symplectic operations and all real displacements), and homodyne measurement. We prove that such an architecture is classically efficiently simulatable, by explicitly providing an algorithm to calculate the probability density function of the measurement outcomes of the computation. We also provide a method to sample when the circuits contain conditional operations. This result is based on an extension of the celebrated Gottesman-Knill theorem, obtained by introducing suitable stabilizer operators for the code at hand. We conclude that the resource enabling quantum advantage in the universal computational model considered by B.Q. Baragiola et al. [Phys. Rev. Lett. 123, 200502 (2019)], composed of a subset of the elements given above augmented with a supply of vacuum states, is indeed the vacuum state.


I. INTRODUCTION
Identifying the physical resources underlying quantum advantage, i.e., the ability of quantum computers to solve computational problems faster than classical computers, is of crucial importance for the design of meaningful architectures for quantum computation (QC) [1]. Often, the resource depends on the model. For example, for architectures over finite-dimensional systems, Clifford circuits are resourceless from a computational standpoint, since they are efficiently simulatable [2][3][4], until a so-called magic resource is provided, such as the T-state, which allows universal quantum computation to be performed [5,6]. Similarly, for infinite-dimensional continuous-variable (CV) systems, Gaussian circuits are efficiently simulatable [7][8][9], and to promote them to universal QC specific non-Gaussian resources [10,11] have to be provided, such as the cubic-phase state [12,13] or Gottesman-Kitaev-Preskill (GKP) states [14,15]. Producing these enabling resources with sufficient quality generally requires a significant overhead, and their distinctive features typically stand in stark contrast to the elements of the corresponding simulatable architectures: T-states are non-stabilizer and cubic-phase states are non-Gaussian, for example. It is natural to ask: are resources always complex and costly to produce?
In this work, we provide a specific example of a CV quantum computing architecture that is classically efficiently simulatable, and that becomes universal upon adding the vacuum state. The latter is widely regarded as the simplest quantum state of a bosonic field, and it is in particular a Gaussian state. The architecture considered is based on stabilizer GKP states, Gaussian operations (including conditional displacements), and homodyne detection. Taking inspiration from stabilizer methods developed for discrete-variable (DV) systems [2-4, 16, 17], we prove that this class of circuits is classically efficiently simulatable for rational symplectic operations and arbitrary continuous displacements, thereby significantly extending [18] the class of Gaussian operations previously known to be simulatable in combination with GKP states [19,20]. This result is obtained despite the fact that GKP states are highly non-Gaussian and their Wigner function is highly negative [12,15,21]; hence the standard theorems based on Gaussianity [7] or on the positivity of quasi-probability distributions [8,9,22] cannot be applied. We then leverage the results of Ref. [14], where the same architecture combined with the vacuum (or a thermal state) was shown to be universal for quantum computation, to conclude that the vacuum provides quantum advantage.
The paper is structured as follows. In Sec. II we introduce the circuit class that we demonstrate to be efficiently simulatable. In Sec. III we provide an analytic method to evaluate the probability density function (PDF) of the measurement outcomes of this circuit class. Then, in Sec. IV we provide an algorithm to evaluate the PDF of the circuit and show that it is classically efficient. We also extend our result to include adaptive circuits and show that GKP-encoded Clifford circuits are included in the simulatable class. We then demonstrate, in Sec. V, that these results are sufficient to conclude that the vacuum is a resource for quantum advantage in the context of the simulatable model we consider. In Sec. VI we extend this result to show that realistic GKP states can also be considered resourceful in the context of this model. Finally, we provide conclusions and open questions in Sec. VII.

II. GAUSSIAN CIRCUITS WITH STABILIZER GKP STATES
In this section we introduce the circuit class considered in this work, which we later show to be efficiently simulatable. We consider the circuits shown in Fig. 1, where the input states are n ideal GKP states encoding pure stabilizer states. Without loss of generality, we can consider each mode to be in the 0-logical encoded GKP state, whose wave function in position representation is a uniform superposition of position eigenstates at even multiples of √π, ψ_0(x) ∝ Σ_{s∈Z} δ(x − 2√π s) [12]; the total multimode input state can then be compactly written as the n-fold tensor product of this state. The input state is stabilized by the operators e^(2i√π p_j) and e^(i√π q_j), raised to any integer power, and by any combination thereof. This means that the action of these operators, or of any combination of them, on the state is equivalent to that of the identity.
The operations we consider in this work are those belonging to the group HW(n)[Sp(2n, Q)], the semidirect product [23] of the Heisenberg-Weyl group HW(n) and the rational symplectic group Sp(2n, Q). The Heisenberg-Weyl group HW(n) consists of all real phase-space displacements of the form e^(ic_j q_j) and e^(−id_j p_j), for c_j, d_j ∈ R and j ∈ {1, . . . , n}.
The rational symplectic group Sp(2n, Q) is the subgroup of the real symplectic group Sp(2n, R) consisting of all symplectic operations parameterized by a 2n × 2n symplectic matrix whose elements are all rational numbers. The set of rational symplectic operations is dense in the set of real symplectic operations; we provide a proof of this fact in Appendix A. Note, however, that the density of the rational symplectic matrices should be regarded as a mathematical property characterizing the extent of the class of simulatable operations. It does not imply that the probability distributions obtained with operations parameterized by matrices outside the set (e.g., in its closure) are necessarily simulatable. For later convenience, we will denote a symplectic matrix M by square sub-blocks A, B, C, D of equal dimension, as defined in Eq. (5). Gaussian operations can always be expressed as a unitary operator Û in terms of symplectic operations and phase-space displacements [24,25]. The following operations form a generating set of all Gaussian operations: {e^(ic_j q_j), e^(iθ_j (q_j² + p_j²)/2), e^(−i ln s_j (q_j p_j + p_j q_j)/2), e^(−i q_j p_k)}, (6) where c_j ∈ R, θ_j ∈ [0, 2π), s_j ∈ R and j, k ∈ {1, . . . , n}. These generators, and any combination of them, will be shown to be simulatable so long as θ_j and s_j are chosen such that cos θ_j, sin θ_j, s_j ∈ Q for all j. We will also show that adaptivity can be included as a feature of the class of circuits that can be efficiently simulated.
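The rationality condition on the rotation generators can be illustrated concretely. The sketch below (illustrative, not taken from the paper) builds a single-mode rotation with a Pythagorean choice cos θ = 3/5, sin θ = 4/5 and verifies exactly, with rational arithmetic, that it is a rational symplectic matrix; the helper `is_symplectic` is a hypothetical name, and the quadrature ordering (q_1 … q_n, p_1 … p_n) is assumed.

```python
from fractions import Fraction

def is_symplectic(M, n):
    """Exact check of M^T Omega M == Omega, with Omega = [[0, I], [-I, 0]]
    in the (q_1..q_n, p_1..p_n) ordering, using rational arithmetic."""
    size = 2 * n
    Omega = [[Fraction(0)] * size for _ in range(size)]
    for i in range(n):
        Omega[i][n + i] = Fraction(1)
        Omega[n + i][i] = Fraction(-1)
    # compute M^T Omega M exactly
    MT_Omega = [[sum(M[k][i] * Omega[k][j] for k in range(size))
                 for j in range(size)] for i in range(size)]
    result = [[sum(MT_Omega[i][k] * M[k][j] for k in range(size))
               for j in range(size)] for i in range(size)]
    return result == Omega

# single-mode rotation with cos = 3/5, sin = 4/5: all entries rational
c, s = Fraction(3, 5), Fraction(4, 5)
R = [[c, s], [-s, c]]
assert is_symplectic(R, 1)
```

Because (3/5)² + (4/5)² = 1, this rotation lies exactly in Sp(2, Q) rather than merely approximating a real rotation.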
The circuits we consider are measured using homodyne detection, which, without loss of generality, we can restrict to position measurements. The measurement outcomes of the circuit in Fig. 1 therefore follow a probability density function (PDF), given in Eq. (7). When measuring the output modes, a quantum computer will provide outputs ⃗x selected with probabilities specified by the PDF in Eq. (7). As we will clarify in Sec. V, the circuit elements (including adaptive operations) composing the universal model stemming from Ref. [14] all belong to our class of circuits, except for the vacuum.

III. SIMULATION METHOD FOR GKP CIRCUITS
In order to assess the simulatability of the circuits outlined in the previous section, we introduce a novel method to evaluate the PDF of the circuit presented in Fig. 1. This method involves tracking the Heisenberg evolution of the measurement operators and then using the stabilizers of the input states to evaluate the PDF. We first provide an overview of the problem statement and a summary of the contents of the following subsections, which contain details of the proof.
A general Gaussian operation Û belonging to HW(n)[Sp(2n, Q)] transforms, in the Heisenberg picture, the measurement operators q̂_j according to Eq. (8) [7,26], where the coefficients a_i^(j) = A_{i,j} and b_i^(j) = B_{i,j} are elements of the blocks of the symplectic matrix M as defined in Eq. (5). The vector ⃗c ∈ R^n, with elements c_j, describes the displacement in position. As we now prove, these circuits can be simulated in the strong sense by calculating the PDF. The PDF given in Eq. (7) can be written in the Heisenberg picture using Eq. (8), yielding Eq. (9). Our method is based on two main observations. First, by inserting the GKP stabilizers e^(2i√π p_j) and e^(i√π q_j) into the expression of Eq. (9), we can identify a periodicity relation of the PDF. Second, we can manufacture bespoke additional stabilizers, in terms of the Heisenberg measurement operators, of the form of Eq. (10), where ⃗l is an n-vector of real coefficients l_j and ϕ(⃗l) is a phase factor chosen such that g(⃗l) is a stabilizer. By inserting this bespoke stabilizer into the PDF, it is possible to identify a second constraint that provides the non-zero values of the PDF. Together, these two constraints uniquely identify the PDF. In Sec. III A we demonstrate how to derive the periodicity condition on the PDF from the symplectic matrix M. In Sec. III B we demonstrate how to identify the non-zero points of the PDF. Finally, in Sec. III C we demonstrate that these two conditions are sufficient to construct the PDF of the circuit. A reader uninterested in the technical derivations may proceed directly to Eq. (52), where we provide the explicit PDF of the circuit shown in Fig. 1. This PDF provides sufficient information to understand Sec. IV, in which we present the algorithm to simulate these circuits.
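Although the displayed equation is not reproduced in this extract, the identifications a_i^(j) = A_{i,j} and b_i^(j) = B_{i,j} stated above suggest that the Heisenberg-evolved measurement operator of Eq. (8) takes the following standard form (a reconstruction, not the paper's verbatim equation):

```latex
\hat{Q}_j \equiv \hat{U}^\dagger \hat{q}_j \hat{U}
  = \sum_{i=1}^{n} \left( a^{(j)}_i \hat{q}_i + b^{(j)}_i \hat{p}_i \right) + c_j ,
\qquad a^{(j)}_i = A_{i,j}, \quad b^{(j)}_i = B_{i,j} .
```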

A. Periodicity of the PDF
In this subsection, we evaluate a periodicity condition that restricts the PDF of the circuits considered. This periodicity condition tells us at which points the PDF takes equal values. The PDF, as given in Eq. (9), can equivalently be written as in Eq. (11), where we have rewritten the measurement projector as a delta function, as in Eq. (12).
Similarly to the original Gottesman-Knill theorem for qubits [2,4], by inserting stabilizers into this PDF on the right-hand side of the delta function, and then using commutation relations to move the stabilizers to the left-hand side, we find two expressions for the PDF which are equivalent. These two expressions correspond to two separate points of the PDF, implying that the PDF is equal at these points. We start by considering the commutation of a general stabilizer with the measurement projection operators. We would like to calculate how the stabilizers e^(2i√π p_k) and e^(i√π q_k) commute with the general measurement projector given in Eq. (12). This can be calculated by using the Baker-Campbell-Hausdorff (BCH) formula [27,28] for linear combinations of quadrature operators, Eq. (13), valid for the case in which the operators X̂ and Ŷ commute with their commutator. The commutation between the measurement projector in Eq. (12) and each stabilizer can be evaluated using Eq. (14), by first evaluating how the terms commute, without integration. For the stabilizer containing p̂_k we find Eq. (15), whereas for the stabilizer containing q̂_k we find Eq. (16). The first of these relations, Eq. (15), allows us to calculate the commutation between the measurement projection operator and any integer power m_k ∈ Z of the momentum stabilizer, e^(2i m_k √π p_k). The second relation, Eq. (16), provides a similar result for any integer power m′_k ∈ Z of the position stabilizer. Now, inserting all the stabilizers on the right-hand side of the full PDF given by Eq. (11), and using the commutation relations to move the stabilizers to the left-hand side, we find that the PDF at ⃗x is equal to the PDF at a displaced point ⃗x′. We also note that this periodicity condition can equivalently be written in terms of Eq. (21), where ⃗m and ⃗m′ are each n-dimensional vectors of integers. This form of the periodicity relation will be useful when combining the two conditions in Sec. III C.
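For reference, the BCH special case invoked here as Eq. (13) is, in its standard form (a reconstruction under the stated validity condition, not the paper's verbatim equation):

```latex
e^{\hat{X}} e^{\hat{Y}} = e^{\hat{X}+\hat{Y}}\, e^{\frac{1}{2}[\hat{X},\hat{Y}]},
\qquad \text{valid when } [\hat{X},[\hat{X},\hat{Y}]] = [\hat{Y},[\hat{X},\hat{Y}]] = 0 .
```

For linear combinations of quadratures, e.g. X̂ = ia q̂ and Ŷ = ib p̂ with [q̂, p̂] = i, this reduces to

```latex
e^{ia\hat{q}}\, e^{ib\hat{p}} = e^{i(a\hat{q}+b\hat{p})}\, e^{-iab/2} ,
```

which is the type of identity used to commute the GKP stabilizers through the measurement projector.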
This provides us with the first condition for the form of the PDF. In the following subsection, we derive the second condition, which informs us of the set of points at which the PDF is non-zero.

B. Set of non-zero points
To evaluate the non-zero points of the PDF, we construct bespoke stabilizers from the Heisenberg measurement operators. Although this step differs notably from the conventional Gottesman-Knill theorem for DV systems, once such stabilizers have been identified we can adopt an approach comparable to the DV case. Specifically, we insert the stabilizers into the PDF and derive a set of equalities with respect to a set of points ⃗x. These equalities lead to a contradiction unless the PDF is non-zero only at this set of points.
We begin by identifying stabilizers of the set of input 0-logical states, |0_GKP⟩, expressed in terms of the Heisenberg measurement operators Q̂_j. To do so, we first define an operator g(⃗l), which becomes a stabilizer for the input 0-logical states under certain conditions. Considering a generic vector ⃗l ∈ Q^n and a generic real function ϕ(⃗l) : Q^n → R, the operator is defined in Eq. (22). Using the BCH formula given in Eq. (13), we find that each term in the product can be expressed as in Eq. (23), and we can therefore express the operator as in Eq. (24). Choosing ϕ(⃗l) as in Eq. (25), this operator takes the form of Eq. (26). Hence, g(⃗l) will be a stabilizer of |0_GKP⟩ whenever the condition in Eq. (27) holds. Inserting the stabilizer g(⃗l) into the expression for the PDF, given in Eq. (11), yields an equality between the PDF in its original form and the PDF with the inserted stabilizer. Specifically, by inserting the stabilizer between the Heisenberg-evolved position quadrature basis states and the 0-logical GKP states, we find that the stabilizer acts on the basis states as in Eq. (28), where the choice of ⃗l is constrained by Eq. (27) and ϕ(⃗l) is of the form given in Eq. (25). Furthermore, given that the PDF must be equal with or without the inserted stabilizer, we obtain Eq. (29). This equality can only be true if the term involving the phase equals 1, or if the PDF itself is zero. Hence, the non-zero points ⃗x of the PDF satisfy Eq. (30) for all possible choices of ⃗l satisfying Eq. (27). If, on the other hand, we choose a different point ⃗x that does not satisfy this constrained equation, the equality results in a contradiction unless the PDF is zero at these values of ⃗x. We can therefore reduce the problem of identifying the non-zero points to finding solutions ⃗x of Eq. (30) constrained by Eq. (27). We now provide a short summary of the steps required to solve Eq. (30) given Eq. (27), provided that A and B both contain only rational elements.
The full details are provided in Appendix B.
To solve this constrained equation we first find the allowed vectors ⃗l. This can be achieved by introducing the matrix S, defined in Eq. (31). The constraint on the allowed values of ⃗l is then given by S⃗l = ⃗b, where ⃗b is a vector of 2n integers. The Moore-Penrose pseudoinverse S⁺ provides a method to find solutions of the form ⃗l = S⁺⃗b [29][30][31]. The solutions ⃗l can be found by first computing the Smith decomposition [31][32][33] of the matrix σS, where σ is the smallest integer for which the elements of the matrix σS are all integers. Note that this step assumes that the symplectic matrix, and therefore also S, is rational. We provide broader requirements for the symplectic matrix in Appendix C and discuss the relationships between these classes of simulatable operations in Appendix D. Next, using the Smith decomposition σS = V DU, we identify which integer choices of ⃗b provide valid solutions for ⃗l. We find that the vectors ⃗l can be expressed as [34] ⃗l = R⃗m, (32) where ⃗m is any choice of an n-vector of integers and R is an n × n invertible rational matrix, defined in Eq. (33). We can then rewrite Eq. (30) as a system of linear equations, Eq. (34), where ⃗t is the main diagonal of the matrix T = ½ R^T A B^T R. This form, i.e., Eq. (34), allows us to evaluate the solutions of the constrained equation, given in Eq. (35). Therefore, provided that the symplectic matrix is rational, we have identified that the PDF is non-zero exclusively at these points.
Combined with the insight from the previous subsection, we have now identified both a periodicity relation and the set of non-zero points of the PDF. In the following subsection, we use both these results to demonstrate that the PDF assumes the same value at all the non-zero points, i.e., those identified in Eq. (35).

C. Constructing the PDF of the circuit
We now show that the PDF is specified completely by the periodicity relation and the points at which the PDF is non-zero. For this to hold, two conditions are required. First, we need to ensure that any non-zero point displaced by the periodicity relations always results in another non-zero point. Second, we need to ensure that any non-zero point can be reached from another non-zero point using the periodicity relations.
We begin with the first condition and show that for any valid solution ⃗ x we also get a valid solution if it is displaced according to the periodicity constraint. Namely, we can check that any point specified by the periodicity constraint is included in the allowed points.
If we take a point specified by Eq. (36) and displace it according to the periodicity relation of Eq. (21), relabeling the integer vectors there denoted ⃗m and ⃗m′ as ⃗k and ⃗k′, respectively, we find a new point ⃗x^(2), given in Eq. (38). For the first condition to hold, this new point must also be a non-zero point of the PDF, and should satisfy the system of linear equations defining the non-zero points, given in Eq. (34). This can be checked by inserting ⃗x^(2) into the left-hand side of that equation, which we expect to evaluate to ⃗t + 2⃗m′, where ⃗m′ is a different n-vector of integers. This can be shown by inspecting each element of the vector given as the third term of Eq. (38), which we label ⃗w, as in Eq. (39). The elements of this vector can be found by multiplying its transpose with a unit vector. We know from Eq. (32) that for any n-dimensional vector of integers ⃗k there exists an allowed value of ⃗l. Choosing ⃗k to be the basis vector ⃗e^(i), which is zero in all entries except at i, we can identify one choice of ⃗l, parameterized by i, corresponding to the non-zero element of ⃗k, as in Eq. (42). We can then write the i-th element of the vector ⃗w in Eq. (39) as in Eq. (43). Furthermore, for any allowed ⃗l, including the choice ⃗l^(i), the term in brackets in Eq. (43) must be an integer, and so w_i must be an even integer. This means that ⃗w equals twice some n-dimensional vector of integers, and hence the displaced point satisfies an equation of the same form as Eq. (34). This implies that any non-zero point displaced using the periodicity condition also satisfies the constrained equation specifying the non-zero points. We have therefore demonstrated that the first condition introduced in this subsection does indeed hold.
For the second condition, we need to demonstrate that any non-zero point can be reached using the periodicity relations.
This can be proven by specifying a center point ⃗x^(0), given in Eq. (47), and demonstrating that it can be displaced to any other non-zero point ⃗x^(1), of the form given in Eq. (48), using only displacements of the form given by Eq. (21). This is equivalent to saying that for any choice of ⃗m, there exist some ⃗k, ⃗k′ satisfying Eq. (49). We can solve this equation using the pseudoinverse to find potential solutions of the form given in Eq. (50). As with any pseudoinverse, we can check whether this is a valid solution by evaluating whether the original linear equation holds under it; i.e., we check Eq. (51), which confirms that this is one possible valid solution.
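The pseudoinverse consistency check described above can be illustrated numerically. The snippet below (an illustrative sketch with a hypothetical helper name, not the paper's system of equations) accepts a candidate solution x = S⁺b only if substituting it back reproduces b:

```python
import numpy as np

def pinv_solution_if_valid(S, b, tol=1e-9):
    """Candidate solution x = S^+ b; it is valid iff S x reproduces b."""
    x = np.linalg.pinv(S) @ b
    return x if np.allclose(S @ x, b, atol=tol) else None

# consistent system: b lies in the column space of S
S = np.array([[2.0, 0.0], [0.0, 0.0]])
assert pinv_solution_if_valid(S, np.array([4.0, 0.0])) is not None
# inconsistent system: b has a component outside the column space
assert pinv_solution_if_valid(S, np.array([4.0, 1.0])) is None
```

This mirrors the argument in the text: the pseudoinverse always produces a candidate, and the back-substitution check is what certifies it as a genuine solution.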
Note that there exist infinitely many more solutions, but we do not need an expression for all of them. We have shown that no matter which non-zero point we are interested in, i.e., ⃗x^(1), there is at least one way (in fact, infinitely many) to reach that point from the center point ⃗x^(0). This completes the proof of the second condition introduced in this subsection.
We have therefore shown that both conditions hold, meaning that any non-zero point displaced by the periodicity relations results in another non-zero point and that any non-zero point can be reached from any other non-zero point. This implies that the value of the PDF is equal for all the non-zero points specified in Eq. (35).
This allows us to write the full and exact PDF of the multimode measurement, in terms of these allowed points, as Eq. (52). As we will show, this method to evaluate the PDF can be implemented with an efficient algorithm, namely one whose complexity increases at most polynomially with the number of modes. The algorithm for computing this PDF, along with its complexity analysis, is provided in Sec. IV.

IV. EFFICIENT ALGORITHM FOR THE SIMULATION OF GKP CIRCUITS
In this section we provide an explicit algorithm to evaluate the PDF of the circuit shown in Fig. 1 and derive some notable consequences of this result.
Efficient classical computation of the PDF of a quantum circuit is referred to as strong simulation. We begin in Sec. IV A with a presentation of the algorithm to efficiently simulate the circuits shown in Fig. 1 in the strong sense. We thereby extend the simulatable class to all real displacements and all rational symplectic operations, as opposed to a restricted set [35]. Furthermore, the size of the set of simulatable operations does not depend on the number of modes measured, as was the case in Ref. [20].
The complementary notion of weak simulatability means instead that a classical computer can efficiently sample the outcomes of the circuit [36]. Weak simulation is sufficient to conclude that a quantum circuit will not provide quantum advantage, as a quantum computer will, in any case, produce outcomes selected from the PDF. Following the argument of Ref. [36], and assuming the capability of sampling from the set of integers, we will demonstrate in Sec. IV B that by restricting to weak simulation, we can further extend the class of simulatable circuits shown in Fig. 1. This extended class includes adaptive circuits, whereby intermediate measurement outcomes can affect future operations.
We will later use these results to demonstrate that the routine introduced in Ref. [14], whereby the vacuum and GKP states are used to perform universal quantum computation, is efficiently simulatable when the vacuum is removed. This circuit is adaptive and contains GKP-encoded Clifford operations.
With this motivation, we demonstrate, in Sec. IV C, that GKP-encoded Clifford operations are included in the set of simulatable operations that we present in this work. As a consequence, we can also now simulate all encoded qubit stabilizer GKP states as input states, in the same sense as the Gottesman-Knill theorem [2][3][4]. This was not possible using our previous method [20]. Together, these results provide us with all the tools required to demonstrate, in the later Sec. V, that the vacuum is indeed the resource for quantum advantage in circuits composed of input GKP stabilizer states followed by Gaussian operations and homodyne measurement.
Finally, to demonstrate the practical implementation of the algorithm, we provide an example of evaluating the PDF of a simple circuit in Sec. IV D.
A. Algorithm to evaluate the PDF

We now provide the algorithm to calculate the PDF of a general circuit shown in Fig. 1 by using the result of the previous section. We will also provide an analysis of the computational time required to evaluate the PDF.
To express the PDF in Eq. (52), given the symplectic matrix M (in the block form defined in Eq. (5)) and the displacement vector ⃗c, we need to evaluate R^(-T) and ⃗t. The matrix R^(-T) is given in terms of S^T and V, where V is the unimodular matrix arising from the Smith decomposition of σS. The vector ⃗t can be evaluated from V.
First, we identify the matrix S by writing it in terms of the block components A, B, as given in Eq. (31). To find the matrix V we first need to calculate the lowest common multiple of all the denominators of the elements of S. Formally, we can write σ = lcm(den(S)), (53) where den(·) returns the denominators of all matrix elements and lcm(·) evaluates their lowest common multiple.
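The computation of σ in Eq. (53) can be sketched directly with exact rational arithmetic; the function name below is illustrative, and Python 3.9+ is assumed for the variadic `math.lcm`:

```python
from fractions import Fraction
from math import lcm

def sigma_of(S):
    """Smallest positive integer sigma such that sigma * S is an integer matrix:
    the lcm of the denominators of all (reduced) rational entries of S."""
    return lcm(*(entry.denominator for row in S for entry in row))

S = [[Fraction(1, 2), Fraction(3, 4)],
     [Fraction(5, 6), Fraction(1)]]
print(sigma_of(S))  # lcm(2, 4, 6, 1) = 12
```

Note that `Fraction` keeps entries in lowest terms automatically, so the denominators read off here are exactly the ones whose lcm defines σ.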
Then we multiply the matrix S by σ to produce an integer matrix σS. We perform a Smith normal form decomposition on this matrix, Eq. (54), to identify the 2n × 2n unimodular matrix V, the 2n × n diagonal matrix D, and the n × n unimodular matrix U. We can discard the matrices D and U. The transpose-inverse of R can then be directly evaluated as in Eq. (55). Furthermore, the matrix T can be calculated from V as in Eq. (56), and the vector ⃗t is simply given by the diagonal entries of T. The PDF is then given by Eq. (52).
To summarize, this algorithm consists of the following steps: (1) construct the matrix S from the blocks A and B of M; (2) compute σ, the lowest common multiple of the denominators of the elements of S; (3) form the integer matrix σS; (4) compute the Smith decomposition σS = V DU; and, in the remaining steps, evaluate R^(-T), T, and ⃗t by linear algebra and assemble the PDF via Eq. (52). We can assume that the 2n × 2n symplectic matrix M is stored as a matrix of numerators M_num and a matrix of denominators M_den, such that M = M_num ⊘ M_den, where ⊘ denotes element-wise division.
Step 1 consists of a truncation of the 2n × 2n matrix M followed by matrix multiplication involving the denominator matrix M_den, which in the worst case requires O(n³) operations [37].
Step 2 consists of finding the lowest common multiple of every element of M_den, which has (2n)² integer entries. We can find the lowest common multiple of two integers α, β using the greatest common divisor, via lcm(α, β) = αβ / gcd(α, β), and then calculate the lowest common multiple of more than two integers iteratively. If we limit the number of digits of precision of each element of M_den to k, the calculation of the lowest common multiple of two integers of bit length k requires at most O(k²) operations [37,38]. The bit string representing the resulting lowest common multiple has length at most 2k. Since the complexity of computing the lowest common multiple of two numbers of bit lengths k and 2k is governed by the bit length of the smaller of the two, the next iteration also has complexity O(k²), and the lowest common multiple of the three numbers has at most 3k bits. We need to repeat this iterative process O(n²) times, so the total time complexity is in the worst case O(n²k²), and the integer σ has at most O(n²k) bits.
Step 3 consists of multiplying every element of S by σ, which requires O(n²) operations; the matrix σS contains 2n² elements, each of bit length at most n²k + k. Therefore, the bit length of each element of σS is polynomial in the number of modes n.
Step 4 consists of finding a Smith normal form decomposition which is polynomial in the size of the matrix S and the number of bits of each element [39], which we know from Step 3 is also polynomial in the number of modes n. Therefore Step 4 can be computed in polynomial time.
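Step 4 can be sketched in pure Python. The implementation below is illustrative (not the paper's): it maintains the invariant A = V D U under elementary row and column operations, returning unimodular V, U and a diagonal D, while omitting for brevity the divisibility normalization d_i | d_{i+1} that a full Smith normal form would additionally enforce.

```python
def smith_decomposition(A):
    """Diagonalize an integer matrix: A = V @ D @ U with V, U unimodular,
    D diagonal (divisibility chain of the full Smith form omitted)."""
    D = [row[:] for row in A]
    m, n = len(D), len(D[0])
    V = [[int(i == j) for j in range(m)] for i in range(m)]
    U = [[int(i == j) for j in range(n)] for i in range(n)]

    def row_add(i, j, k):      # D <- E D and V <- V E^{-1}
        for c in range(n):
            D[i][c] += k * D[j][c]
        for r in range(m):
            V[r][j] -= k * V[r][i]

    def row_swap(i, j):
        D[i], D[j] = D[j], D[i]
        for r in range(m):
            V[r][i], V[r][j] = V[r][j], V[r][i]

    def col_add(i, j, k):      # D <- D F and U <- F^{-1} U
        for r in range(m):
            D[r][i] += k * D[r][j]
        for c in range(n):
            U[j][c] -= k * U[i][c]

    def col_swap(i, j):
        for r in range(m):
            D[r][i], D[r][j] = D[r][j], D[r][i]
        U[i], U[j] = U[j], U[i]

    for t in range(min(m, n)):
        pivot = next(((i, j) for i in range(t, m) for j in range(t, n)
                      if D[i][j] != 0), None)
        if pivot is None:
            break
        row_swap(t, pivot[0]); col_swap(t, pivot[1])
        # Euclidean reduction: clear column t and row t beyond the pivot
        while (any(D[i][t] for i in range(t + 1, m))
               or any(D[t][j] for j in range(t + 1, n))):
            for i in range(t + 1, m):
                if D[i][t]:
                    row_add(i, t, -(D[i][t] // D[t][t]))
                    if D[i][t]:          # non-zero remainder: shrink pivot
                        row_swap(t, i)
            for j in range(t + 1, n):
                if D[t][j]:
                    col_add(j, t, -(D[t][j] // D[t][t]))
                    if D[t][j]:
                        col_swap(t, j)
    return V, D, U
```

Each swap strictly shrinks the pivot's magnitude, so the reduction terminates; the polynomial bound cited in the text additionally requires controlling intermediate entry sizes, which production SNF algorithms [39] handle with more care.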
The remaining steps consist of linear algebra operations (i.e. matrix inversion, matrix multiplication and matrix transposition) which are all known to be polynomial in the size of the matrices considered and the bit length of each element [37].
We can therefore conclude that the entire algorithm for evaluating the exact PDF of the circuit is polynomial in the number of modes n. This means that all rational symplectic operations and all continuous displacements in the circuits of the form in Fig. 1 are strongly simulatable.
In the following subsection, we will demonstrate that our result can be extended to include adaptive circuits, when restricting to weak simulation.

B. Adaptive circuits are weakly simulatable
While in the previous subsection we demonstrated that the class of circuits shown in Fig. 1 is strongly simulatable, we now demonstrate that this class can be extended to adaptive circuits when restricting to weak simulation. Adaptive quantum circuits contain intermediate measurements whose outcomes can either be used as parameters in future operations or be fed into a classical subroutine that decides if, or where, Gaussian operations are applied.
Formally, we can express adaptive circuits as beginning with a unitary operation Û_0 acting on the input state, followed by a series of K rounds, each consisting of an operation Û_j, parameterized by the earlier outcomes, followed by a measurement of a mode i_j, for j ∈ {1, . . . , K}. After applying the initial unitary operation Û_0, we measure the mode i_1, which gives the result x_1. Next, we act with the operator Û_1(x_1), which is parameterized by the previous measurement result x_1. Following this, we measure mode i_2(x_1); the mode which is measured, i.e., i_2, may also depend on the previous measurement result x_1. This continues up to an arbitrary number K of rounds of operations and measurements. We now demonstrate that it is possible to sample from the circuits we have shown to be simulatable, even when incorporating adaptivity, in polynomial time. By the same logic as Theorem 5 of Ref. [36], we can consider each measurement as a single run of a reduced circuit. That is, starting with the first measurement, we simulate the Gaussian circuit Û_0 acting on the input states, followed by a measurement of the i_1-th mode. We know from the previous subsection that we can calculate the PDF of this reduced circuit; hence, we can also sample a random measurement outcome from it.
Next, we simulate a new circuit consisting of the operation Û_1(x_1) applied after Û_0, using the measurement outcome of the previous simulation to fix the Gaussian operation Û_1(x_1). Measurement of modes i_1 and i_2(x_1) gives a PDF into which we can insert the simulated measurement outcome x_1 of the previous simulation, in order to obtain a PDF in terms of x_2 only. Again, simulating a single measurement outcome of x_2 allows us to continue this procedure for the rest of the measurements of the circuit. Therefore, the outcome of any adaptive Gaussian circuit, for which the non-adaptive reduced circuits are strongly simulatable, is weakly simulatable. As a complementary result, albeit one not necessary to reach the conclusions of this paper, we also show in Appendix E that it is possible to efficiently simulate the outcomes of adaptive circuits with modulo homodyne measurement.
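The chaining argument above can be sketched generically. In the snippet below, `conditional_sampler` is a hypothetical stand-in for the strong simulator of Sec. IV A (it must return one outcome drawn from the PDF of the reduced circuit fixed by the earlier outcomes); the toy sampler only illustrates how adaptivity threads outcomes through successive rounds, and is not a GKP simulator.

```python
import random

def sample_adaptive_chain(conditional_sampler, num_measurements, seed=0):
    """Weak simulation by chaining: each measurement outcome is drawn from
    the PDF of a reduced, non-adaptive circuit fixed by earlier outcomes."""
    rng = random.Random(seed)
    outcomes = []
    for j in range(num_measurements):
        # The reduced circuit (and the mode measured) may depend on all
        # previously sampled outcomes, realizing the adaptivity.
        outcomes.append(conditional_sampler(j, tuple(outcomes), rng))
    return outcomes

# Toy stand-in: outcomes live on a grid whose offset depends adaptively
# on the previous outcome (purely illustrative).
def toy_sampler(j, history, rng):
    offset = 1 if history and history[-1] >= 4 else 0
    return 2 * rng.randrange(5) + offset

print(sample_adaptive_chain(toy_sampler, 3))
```

The point of the construction is that only single-outcome sampling from each reduced circuit is ever needed, which is exactly what the strong simulation of the previous subsection provides.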
In order to prove the result in Sec. V, i.e., that the vacuum is a resource for quantum advantage, we must also show that adaptive circuits containing GKP-encoded Clifford operations are efficiently simulatable. In the following subsection, we demonstrate that this is indeed the case.

C. Clifford circuits are contained in the rational symplectic operations
We now demonstrate that GKP-encoded Clifford circuits are contained within the set of operations that we have shown to be efficiently simulatable. Qubit Clifford circuits consist of stabilizer qubit states, acted on by Clifford operations, followed by measurement in a stabilizer basis. Without loss of generality, we can consider these circuits to be initialized in the +1 eigenstate |0⟩ of the Pauli Ẑ operator, followed by Clifford operations, and measured in the Ẑ basis. Encoding these circuits into the GKP formalism gives circuits consisting of states initialized as 0-logical GKP states, acted on by encoded Clifford operations, and followed by homodyne measurement in the position basis.
The Clifford operations acting over n modes can be described in terms of a standard set of generators: the Fourier transform F̂, the phase gate P̂, and the SUM gate. Note that in the case of the qubit encoding, it is not necessary to introduce phase-space displacements, as the required displacements can be produced by combinations of the symplectic operations.
Inspecting the symplectic form of each of these operators provides a description of the symplectic matrices of all Clifford group operations. Analyzing the generators of single-mode Clifford group operations, we find that their symplectic matrices contain only integer entries. By considering any combination of these operations, we will clearly obtain only integer matrices. The set of qubit Clifford operations can therefore be described as a subset of the integer symplectic operations. Integer symplectic operations are contained within the class of rational symplectic operations. Therefore, all encoded qubit Clifford circuits are simulatable by our method.
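The integrality claim can be checked directly. Below is a sketch with one standard sign convention for the generators F, P, and SUM in (q, p) ordering; the paper's conventions may differ in signs, but the integer entries, and hence the closure of products inside the integer symplectic matrices, are unaffected.

```python
# Integer symplectic matrices for (one convention of) the Clifford generators:
# Fourier F, phase gate P (a shear), and the two-mode SUM gate, in
# (q1, p1, q2, p2) ordering.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def is_symplectic(S, J):
    # symplectic condition: S^T J S = J
    return matmul(matmul(transpose(S), J), S) == J

J2 = [[0, 1], [-1, 0]]
F = [[0, 1], [-1, 0]]          # Fourier: (q, p) -> (p, -q)
P = [[1, 0], [1, 1]]           # phase gate: shear p -> p + q
J4 = [[0, 1, 0, 0], [-1, 0, 0, 0], [0, 0, 0, 1], [0, 0, -1, 0]]
SUM = [[1, 0, 0, 0],           # q1 -> q1
       [0, 1, 0, -1],          # p1 -> p1 - p2
       [1, 0, 1, 0],           # q2 -> q2 + q1
       [0, 0, 0, 1]]           # p2 -> p2

assert is_symplectic(F, J2) and is_symplectic(P, J2) and is_symplectic(SUM, J4)
# every entry is an integer, so any product of generators is an integer matrix
assert all(isinstance(x, int) for M in (F, P, SUM) for row in M for x in row)
```

Since integer matrices are closed under multiplication, any word in these generators stays inside the integer (hence rational) symplectic matrices.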
This concludes our analysis of the types of circuits which are simulatable with our method: adaptive circuits consisting of input stabilizer GKP states, rational symplectic operations - including GKP-encoded Clifford operations - real displacements, and homodyne measurement.
In the case of non-adaptive circuits, efficient strong simulation can be performed, whereby the PDF is evaluated efficiently. In the following subsection, we will apply this result to demonstrate the strong simulation of a simple circuit.

D. Simple example
We present an example of calculating the PDF of a simple circuit. We have specifically chosen a circuit that contains a vector ⃗t not equal to zero. We consider the circuit F_1 P_1^2 F_1, where P is the phase gate. Note that F_1 P_1^2 F_1 = X_1, which means we would expect this operator to act on two encoded qubit states as a logical bit flip. We can calculate its effect on the position measurement modes q̂_1 and q̂_2, from which we can inspect, and explicitly write, the matrix S. We then find the lowest common denominator of all the fractions of S; in this case, σ = 1, since all the entries are already integers. We can then calculate the Smith decomposition of σS = S, as well as the pseudoinverse of S, from which we can calculate R and R^{-T}. Furthermore, we can find T, which gives the vector ⃗t of the diagonal elements of T. This allows us to express the PDF, in terms of each vector element of ⃗x. The result is equivalent to measuring |1_GKP⟩ in both modes, as we would expect, given the encoded circuit. The results from this section provide us with the tools required to conclude the main result given in the following section, i.e., that the vacuum is the resource for quantum advantage in the context of this otherwise simulatable model.
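The first steps of this recipe (lowest common denominator σ, pseudoinverse, validity check) can be sketched in exact arithmetic. Since the explicit S of the example is not reproduced here, the matrix below is a hypothetical rational 4×2 stand-in with independent columns; unlike the worked example, it has σ > 1 so that the rescaling step is exercised.

```python
from fractions import Fraction as Fr
from math import lcm

# Hypothetical rational 4x2 matrix standing in for S (illustration only)
S = [[Fr(1), Fr(0)],
     [Fr(0), Fr(1)],
     [Fr(1, 2), Fr(0)],
     [Fr(0), Fr(1, 3)]]

def transpose(A): return [list(c) for c in zip(*A)]
def matmul(A, B):
    return [[sum(a * b for a, b in zip(r, c)) for c in zip(*B)] for r in A]

# sigma: lowest common multiple of all denominators, so sigma*S is integer
sigma = lcm(*[x.denominator for row in S for x in row])
S_int = [[sigma * x for x in row] for row in S]
assert all(x.denominator == 1 for row in S_int for x in row)

# Moore-Penrose pseudoinverse for linearly independent columns:
# S+ = (S^T S)^{-1} S^T  (here S^T S is 2x2, inverted by hand)
G = matmul(transpose(S), S)
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
G_inv = [[G[1][1] / det, -G[0][1] / det],
         [-G[1][0] / det, G[0][0] / det]]
S_plus = matmul(G_inv, transpose(S))
assert matmul(S_plus, S) == [[1, 0], [0, 1]]  # S+ S = identity
```

With σS integer, its Smith decomposition (not computed here) then provides the unimodular matrices from which R, R^{-T} and T are built, as in Appendix B.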

V. VACUUM YIELDS QUANTUM ADVANTAGE
We now derive a notable consequence of the findings in the previous section, when combined with the results reported in Ref. [14]. There, the circuit depicted in Fig. 2 is used as the central resource to achieve magic-state distillation, and in turn fault-tolerant universality, of an otherwise simulatable (GKP-encoded) stabilizer computation. This circuit is composed of input GKP states |0_GKP⟩, an additional CV input state (possibly the vacuum), GKP-encoded Clifford operations, homodyne measurements, displacements, and classical feed-forward of measurement results (Fig. 2). Such a circuit gadget has the effect of implementing the Kraus operator K̂_EC(t) = Π̂_GKP V̂(−t), where V̂(−t) = e^{i t_q p̂} e^{−i t_p q̂} and Π̂_GKP is the projection operator onto the GKP subspace. This has the effect of "error correcting" the additional input state by projecting it onto the computational subspace of the GKP code. When the additional input state is the vacuum, this results in GKP-encoded magic states, except for a zero-measure set of the measurement outcomes t_q, t_p.
FIG. 2. Circuit gadget implementing K̂_EC(t). Mode 1 is the top mode, which takes an input state and outputs a modified state. Modes 2 and 3 below are auxiliary modes which have a fixed input and, once measured, can be discarded. We use the notation of Ref. [40], whereby the controlled gate with the symbol ⊖ denotes the inverse of the SUM gate, namely e^{i q̂_3 p̂_1}. The measurement outcomes are denoted as t_p and t_q.
Performing this gadget across multiple modes, with multiple additional vacuum states, will provide a number of different states, each of which has a high fidelity to a magic H-type state.
This gadget is adaptive and, once the auxiliary modes are measured, they can be discarded. Within the gadget, the measurement values are used to shift the input state in position and momentum. Furthermore, the measurement outcomes give an indication of which H-type state the output state is closest to. We can use these measurement results to decide a Gaussian operation which shifts the state close to the target |H GKP ⟩ state.
If we have k copies of this gadget, we will have produced k different states, each of which has a high fidelity to the |H_GKP⟩ state. We can then apply the twirling operation to each state, which for qubits is a probabilistic Clifford operation (hence implementable by a probabilistic Gaussian operation), and which projects each state onto the H-axis of the Bloch sphere. These k states are non-identical, so adaptive depolarizing operations - again probabilistic Clifford operations - are required to make all k states identical [41]. These adaptive probabilistic Clifford operations adjust each state to match the state with the lowest fidelity to the target H-state. They are adaptive because they require knowledge of each state, which can be reconstructed from the values of t_q, t_p measured for each gadget. Now we have k identical copies of states whose fidelity is above the threshold for magic state distillation. The magic state distillation algorithm [5,6] involves Clifford operations, adaptive Clifford operations and probabilistic Clifford operations. Therefore, the total algorithm to produce an H-type state from the vacuum and GKP states requires the following resources: input GKP states, input vacuum states, adaptive Clifford operations, probabilistic Clifford operations and homodyne measurements. Now, consider the same procedure where, instead of the vacuum state as the additional input, we only have 0-logical (or other stabilizer) GKP states.
We know from this work that circuits involving GKP states, adaptive Clifford operations, probabilistic Clifford operations and homodyne measurements are weakly simulatable. Hence, the concatenation of all these operations, including the gadget in Fig. 2, belongs to the class of circuits that we have shown to be classically efficiently simulatable, if the supply of initial vacua is not included at the input of the circuit. Therefore, in the context of the model of Ref. [14], ideal stabilizer GKP states, homodyne measurement, displacements and classical feed-forward of measurement outcomes are to be regarded as free operations, in the sense that they provide a simulatable model.
However, if we add the vacuum to this otherwise simulatable model, we find that it is promoted to universal quantum computation. We can thus conclude that the vacuum can be considered a resource for quantum advantage in this model. Note that this conclusion was not possible to draw from Ref. [14] solely, because the model considered, even excluding the additional vacuum state, was not proven to be classically efficiently simulatable therein.
The intuition behind this result is that, as already noticed in Ref. [14], the interaction with the vacuum through an entangling operation takes the GKP states outside of the computational subspace spanned by the GKP logical codewords. Measurements followed by feed-forward and displacement project the unmeasured system back onto the GKP-encoded computational subspace, now in a magic state (apart from measurement outcomes which represent a zero-measure set in the set of all possible real measurement outcomes).
In the following section, we will extend this argument to demonstrate that realistic GKP states can also be considered a resource for quantum advantage in the context of this model.

VI. REALISTIC GKP STATES ARE A RESOURCE FOR QUANTUM ADVANTAGE
Following the result of the previous section, we now demonstrate that realistic GKP states can also be considered a resource for quantum advantage. Using a realistic (i.e., finitely squeezed) GKP state as the additional input state of the gadget in Fig. 2, instead of the vacuum, also produces a magic state with a probability that depends on the squeezing of the realistic GKP state.
We now explicitly compute the outcome of the circuit in Fig. 3. Here, the additional input state, to be combined with ideal GKP states, is not the vacuum state, but instead a GKP state with variable squeezing. Note that the case of the vacuum is recovered to a very good approximation by taking the limit of no squeezing in the GKP state. We will then compute the fidelity of the output state with the closest magic |H⟩-type state.
FIG. 3. The error correcting circuit from Fig. 2 implementing K̂_EC(t), acting on an additional input non-ideal GKP state parameterized by ∆, κ. Note that the controlled gate with the symbol ⊖ at the target is the inverse of the SUM gate, namely e^{i q̂_3 p̂_1} [40].
The output of the circuit of Fig. 3 will be a superposition of the GKP logical codewords, with coefficients c_0 and c_1 that can be normalized. The fidelity of this state with each of the |H⟩-type states can be calculated in terms of these normalized coefficients, i.e., for each |H⟩-type state |H⟩ = a_0 |0⟩ + a_1 |1⟩ we calculate the fidelity F(|H⟩, |ψ⟩) = |⟨H|ψ⟩|^2. The probability density function of the measurement outcomes can be calculated, as in [14], as PDF(t) ∝ c_0^2 + c_1^2, which is then normalized over a region periodic in 2√π in both t_q and t_p. The probability of obtaining a state with fidelity higher than a certain threshold F* can then be calculated numerically, by computing the fidelity for each value of t_q, t_p and integrating PDF(t) over those values at which F > F*.
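The numerical step just described (evaluate F and the PDF on a grid over one period, then integrate the PDF where F exceeds the threshold) can be sketched as below. The `fidelity` and `pdf_unnorm` functions here are hypothetical placeholders, not the paper's c_0, c_1 expressions; only the integration recipe is the point.

```python
import math

L = 2 * math.sqrt(math.pi)               # one period in t_q and t_p
F_star = 0.5 * (1 + 1 / math.sqrt(2))    # distillation threshold, ~0.8536

def fidelity(tq, tp):
    """Hypothetical placeholder for F(|H>, |psi(t)>); not the paper's formula."""
    return 0.5 * (1 + abs(math.cos(tq) * math.cos(tp)))

def pdf_unnorm(tq, tp):
    """Hypothetical placeholder for c0(t)^2 + c1(t)^2."""
    return 1 + 0.5 * math.sin(tq + tp) ** 2

# Riemann-sum estimate of P(F > F*) = (integral of PDF over {F > F*}) / (total)
N = 200
total = above = 0.0
for i in range(N):
    for j in range(N):
        tq, tp = (i + 0.5) * L / N, (j + 0.5) * L / N
        w = pdf_unnorm(tq, tp)
        total += w
        if fidelity(tq, tp) > F_star:
            above += w
prob = above / total
assert 0.0 <= prob <= 1.0
```

The normalization by `total` implements the normalization of PDF(t) over one 2√π-periodic cell, so `prob` is the success probability above threshold.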
In Fig. 5 we plot the probability of obtaining a magic |H⟩-type state at the output of the circuit in Fig. 3 above a given threshold fidelity, for different values of the squeezing parameter ∆.
We see that the squeezing parameter in the auxiliary state of the circuit in Fig. 3 inversely quantifies the resourcefulness of the auxiliary state.
This result can be understood by interpreting the vacuum as the zero-squeezing limit of a GKP state. The zero-squeezing limit corresponds to setting ∆ = κ = 1. In this case, the expression of the finitely-squeezed GKP state in the position representation reduces, to very good approximation, to a Gaussian. Calculating the fidelity of this state with the vacuum state, |∅⟩ = π^{−1/4} ∫ dq e^{−q²/2} |q⟩, gives a value very close to unity. This high fidelity value explains in particular why, using the zero-squeezing limit of a GKP state, we obtain the plot in Fig. 4, third panel, that is indistinguishable to the naked eye from that of Ref. [14] obtained using input vacuum. This allows us to interpret the value of ∆ as interpolating from a free state - the ideal 0-logical GKP state, corresponding to infinite squeezing, with which no magic state can be generated - to a maximally resourceful state, namely the vacuum, corresponding to zero squeezing.
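The near-unit fidelity can be checked numerically. The parameterization of the finitely squeezed logical-0 GKP wavefunction below is an assumption (one common form: a κ-weighted Gaussian envelope over width-∆ peaks at q = 2s√π); with ∆ = κ = 1 the s = 0 peak dominates and the state is almost the vacuum.

```python
import math

# Assumed parameterization of the finitely squeezed logical-0 GKP state in
# position; zero squeezing corresponds to DELTA = KAPPA = 1.
DELTA = KAPPA = 1.0
SQPI = math.sqrt(math.pi)

def psi_gkp(q, s_max=5):
    return sum(
        math.exp(-0.5 * (KAPPA * 2 * s * SQPI) ** 2)      # envelope weight
        * math.exp(-0.5 * ((q - 2 * s * SQPI) / DELTA) ** 2)  # peak at 2s*sqrt(pi)
        for s in range(-s_max, s_max + 1)
    )

def psi_vac(q):
    return math.pi ** -0.25 * math.exp(-0.5 * q ** 2)     # normalized vacuum

# overlap integrals by Riemann sum on [-12, 12]
qs = [-12 + 24 * k / 4000 for k in range(4001)]
dq = 24 / 4000
norm_gkp = sum(psi_gkp(q) ** 2 for q in qs) * dq
overlap = sum(psi_gkp(q) * psi_vac(q) for q in qs) * dq
fidelity = overlap ** 2 / norm_gkp   # |<vac|psi>|^2 after normalizing psi

assert fidelity > 0.99  # the zero-squeezing GKP state is almost the vacuum
```

The s = ±1 peaks enter with weight e^{−2π} ≈ 2·10⁻³, which is why the correction to unit fidelity is at the per-mille level.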
We can therefore conclude that, in the context of this model, realistic GKP states are also resourceful for quantum advantage. The realistic GKP states required for magic state distillation can have any non-infinite squeezing; in other words, there is no squeezing threshold required to distill a magic state.

VII. CONCLUSIONS
First, we have demonstrated that circuits with input GKP states acted on with arbitrary displacements and rational [42] symplectic operations, and measured with homodyne detection are classically efficiently simulatable. This result extends the classes of circuits previously known to be simulatable and can be understood as a CV analogue to the Gottesman-Knill theorem [2][3][4]. The Gottesman-Knill theorem provides a method to simulate circuits involving qubits initialized in ideal input qubit stabilizer states acted on by Clifford operations and measured in the computational basis. Meanwhile, our result provides a method to simulate ideal GKP states acted on by Gaussian operations and measured with homodyne detection.
Second, this result, in combination with those of Ref. [14], leads to the counter-intuitive interpretation of the vacuum, or realistic GKP states with any finite squeezing, as a resource for universality. Here we can draw an analogy to DV magic state distillation [5], where it is known that "noisy" pure states that are close to (but not exactly) the points corresponding to the stabilizer states on the Bloch sphere act as a resource for universal QC [6]. Similarly, in the circuits we have considered, introducing noise in the form of vacuum or realistic GKP states promotes the circuit class we consider to universality, by allowing one to produce and distill magic states.
The question of whether realistic GKP states in all modes (possibly with different squeezing levels), combined with Gaussian operations, yield a simulatable or universal model is still open. Our analysis in Sec. VI, showing that a combination of ideal and realistic GKP states yields a universal model, can be seen as a first attempt to provide an answer to this question. Therein, squeezing inversely quantifies the resourcefulness of finitely-squeezed GKP states, when these are combined with infinitely-squeezed GKP states, in terms of their ability to produce output GKP magic states. This leads us to speculate that, even in a more realistic model with input highly-squeezed GKP states, the vacuum will retain its character as a resource, boosting the magic content at the output of the circuit.

FIG. 4. Fidelity of the output state of the error correcting circuit in Fig. 3 with the closest H-type magic state, for the various possible measurement outcomes t_q and t_p. Note that in the limit ∆ → 1 the finitely-squeezed GKP state at the input in Fig. 3 is approximately equivalent to the vacuum state, yielding agreement of the third panel with Ref. [14].

FIG. 5. Probability of producing, as output of the circuit in Fig. 3, a magic state |ψ⟩ with a given fidelity F = |⟨H|ψ⟩|^2 to the nearest target magic state |H⟩, when the additional input state is a GKP state with squeezing level ∆ = κ. Note that in the other modes we still assume ideal GKP states in logical state 0. Also note that ∆ → 1 is approximately equivalent to the vacuum state. F* = (1/2)(1 + 1/√2) ≈ 0.8536 is the threshold for magic state distillation [6,14].
Our work also opens the question of whether the methodology introduced to compute the PDF, based on imposing stabilizer conditions, can also be used for other types of circuits whose input states admit a stabilizer representation.

VIII. ACKNOWLEDGMENTS
We acknowledge useful discussions with Laura García-Álvarez and Ben Q. Baragiola. G. F. and C. C. acknowledge support from the VR (Swedish Research Council) Grant QuACVA and the Wallenberg Center for Quantum Technology (WACQT).

Appendix A: The set of rational matrices is dense in the reals
In this appendix, we will prove that Sp(2n, Q) is dense in Sp(2n, R). This is equivalent [43] to showing that the closure of the rational symplectic group, cl(Sp(2n, Q)), is the real symplectic group Sp(2n, R), i.e. cl(Sp(2n, Q)) = Sp(2n, R).
We provide an overview of the steps of this proof, taken from Ref. [44]. First note that the symplectic group defined over any field, Sp(2n, F), is generated by the set of symplectic transvections H_F [45]. This set consists of the maps f_{α,⃗u}, with α ∈ F and ⃗u ∈ F^{2n}, which transform an arbitrary vector ⃗v as f_{α,⃗u}(⃗v) = ⃗v + αB(⃗v, ⃗u)⃗u, where B is the alternating bilinear form [44,46]. The generators of the rational and real symplectic groups are therefore the sets H_Q and H_R, respectively. For any chosen generator of the real symplectic group, f_{α,⃗u} ∈ H_R, it is possible to find an arbitrarily close generator from the set of generators of the rational symplectic group, f_{α′,⃗u′} ∈ H_Q. We can demonstrate this by evaluating the norm of the difference of these generators [44] and showing that, for any f_{α,⃗u}, it is possible to find an f_{α′,⃗u′} making this norm arbitrarily small. Choosing ⃗u′ = ⃗u + ⃗u_ϵ and α′ = α + α_ϵ, we can make use of the triangle inequality [47] to bound the norm. Note that we can write ⃗u_ϵ = ϵ_u ⃗u_ϵ^1, where ⃗u_ϵ^1 is the unit vector containing the direction of ⃗u_ϵ, i.e. |⃗u_ϵ^1| = 1, and ϵ_u is its magnitude. Therefore, for any chosen ϵ > 0, we can ensure that the distance is less than 7ϵ by ensuring each term in the bound is smaller than ϵ, i.e.
which is equivalent to a set of conditions that can be rearranged, and further simplified, into conditions on α_ϵ and ϵ_u. For any α, ⃗u, ⃗v it is always possible to find α′, ⃗u′ for which α_ϵ, ϵ_u ∈ R are arbitrarily close to 0, such that all these inequalities hold. This follows from the fact that the rational numbers are dense in the reals [48].
We can therefore say that H_Q is dense in H_R, which can be expressed in terms of the closure of the set of rational generators as cl(H_Q) = H_R.
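The transvection map above can be sketched numerically: with the explicit convention B(⃗v, ⃗w) = ⃗vᵀJ⃗w (an assumption; the appendix does not fix B's matrix form), the following checks that f_{α,⃗u} preserves the symplectic form, and that a nearby rational pair (α′, ⃗u′) acts almost identically, illustrating the density argument.

```python
import math
from fractions import Fraction as Fr

# Symplectic transvection on R^2 (n = 1): f_{a,u}(v) = v + a*B(v,u)*u,
# with the alternating form B(v,w) = v^T J w for J = [[0,1],[-1,0]].
def B(v, w):
    return v[0] * w[1] - v[1] * w[0]

def transvection(a, u):
    return lambda v: [v[0] + a * B(v, u) * u[0], v[1] + a * B(v, u) * u[1]]

# 1) transvections are symplectic: B(f(v), f(w)) = B(v, w),
#    since the cross terms cancel by antisymmetry of B
a, u = math.sqrt(2), [1.0, math.pi / 4]      # irrational parameters
f = transvection(a, u)
v, w = [0.3, -1.2], [2.0, 0.7]
assert math.isclose(B(f(v), f(w)), B(v, w))

# 2) density: a nearby rational transvection acts almost identically
a_r, u_r = Fr(141421, 100000), [Fr(1), Fr(785398, 1000000)]
f_r = transvection(float(a_r), [float(x) for x in u_r])
assert all(abs(x - y) < 1e-4 for x, y in zip(f(v), f_r(v)))
```

The cancellation in (1) uses only B(⃗u, ⃗u) = 0 and B(⃗v, ⃗u) = −B(⃗u, ⃗v), mirroring the role of the alternating form in the proof.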

Appendix B: Solution to the constrained linear equation
In this appendix, we solve the constrained equation introduced in Sec. IV, which provides the solution for the set of allowed points of the PDF. Specifically, we find the solution of Eq. (30) given the constraint of Eq. (27). To solve this equation, we begin by identifying a method to evaluate the possible values of the vector ⃗l.
First, in Sec. B 1, we express the constraint as an overdetermined system of linear equations, whose solutions depend on a projection matrix 1 − SS+. In Sec. B 2, we demonstrate that this projection matrix is rational, given that the symplectic matrix is rational. Together, these results allow us to provide the solutions for ⃗l in Sec. B 3. Then, in Sec. B 4, we demonstrate that, given the solutions for ⃗l, we can express the constrained equation in Eq. (B1) as a set of unconstrained linear equations. Finally, in Sec. B 5, we provide the solution of these unconstrained equations, which are also the solutions to the constrained equation given in Eq. (B1).

Expressing the constraint as a linear system of equations
We first identify a method to express the constraining terms defined in Eq. (B1) as a system of overdetermined linear equations. We demonstrate that the solutions for ⃗l can be expressed in terms of the pseudoinverse of a matrix S, which depends on A and B, and of a new vector ⃗b, which will be solved for in the following subsections. To begin, we combine the constraining terms into one equation. We then introduce a matrix S, defined in terms of the two matrices A and B, where we also introduce the matrix S̃, which is the transpose of the first n rows of the symplectic matrix M. We introduce the vector ⃗b, which is a 2n-vector of integers. This allows us to express the constraint on ⃗l as an overdetermined system of linear equations, which does not necessarily always have a solution. Whether the system has solutions or not depends on which integers are chosen in ⃗b.
The columns of S are linearly independent. This can be seen by considering the fact that the determinant of the symplectic matrix is det M = 1, which means that it has linearly independent rows [49]. Hence, the matrix S̃ will have linearly independent columns. Furthermore, the matrix which converts S̃ to S is a full-rank 2n × 2n matrix. Hence, the rank of S will be the same as the rank of S̃, i.e. it will have rank n, which means the columns must be linearly independent [34,50].
We can express the solutions for ⃗l in terms of the Moore-Penrose pseudoinverse [29][30][31], which is a generalization of the matrix inverse. For any matrix S there exists a pseudoinverse S+, even if the matrix does not have a true inverse. A 2n × n rectangular matrix S with linearly independent columns has rank n [50]. The pseudoinverse S+ is defined such that S+S = 1. The pseudoinverse of S can be found in terms of the pseudoinverse of the rank-n matrix S̃ [51], where we have used the fact that the pseudoinverse of a nonsingular matrix is equal to its inverse [31]. This gives potential solutions for ⃗l, but if the system is unsolvable for a given ⃗b, then the pseudoinverse will not provide a valid solution. For the solution to be valid, it must satisfy the original equation [31], which gives the constraint on the integers ⃗b as SS+⃗b = ⃗b, i.e., (1 − SS+)⃗b = 0. Finding the values of ⃗b which satisfy this equation also informs us of all the possible choices of ⃗l which satisfy the constraining equation. This is equivalent to finding the eigenvectors of the projection matrix 1 − SS+ which have eigenvalue zero. To solve this equation, we will demonstrate that the projection matrix 1 − SS+ has a convenient eigenvalue decomposition, provided that it is a rational matrix. We first prove that this matrix is rational in the following section, given that the symplectic matrix is rational, and then we proceed to find its eigenvectors.
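The constraint (1 − SS+)⃗b = 0 can be sketched in exact arithmetic. The rational 4×2 matrix below is hypothetical (illustration only): we build the projector, confirm it is rational and idempotent, and check that integer vectors in the column space of σS are annihilated by it, exactly as required of valid choices of ⃗b.

```python
from fractions import Fraction as Fr
from math import lcm

# Hypothetical rational 4x2 matrix S with independent columns (illustration only)
S = [[Fr(1), Fr(0)], [Fr(0), Fr(1)], [Fr(1, 2), Fr(0)], [Fr(0), Fr(1, 3)]]

def transpose(A): return [list(c) for c in zip(*A)]
def matmul(A, B):
    return [[sum(a * b for a, b in zip(r, c)) for c in zip(*B)] for r in A]

G = matmul(transpose(S), S)                       # S^T S (2x2)
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
G_inv = [[G[1][1] / det, -G[0][1] / det], [-G[1][0] / det, G[0][0] / det]]
S_plus = matmul(G_inv, transpose(S))              # S+ = (S^T S)^{-1} S^T
SSp = matmul(S, S_plus)
P = [[(1 if i == j else 0) - SSp[i][j] for j in range(4)] for i in range(4)]

# P = 1 - SS+ is rational by construction (Fraction entries) and idempotent
assert matmul(P, P) == P
# integer vectors b in the column space of sigma*S satisfy (1 - SS+) b = 0
sigma = lcm(*[x.denominator for row in S for x in row])
for col in transpose([[sigma * x for x in row] for row in S]):
    b = [[x] for x in col]                        # integer column vector
    assert matmul(P, b) == [[0], [0], [0], [0]]
```

The second check is just S+S = 1 in disguise: P(σS) = σ(S − SS+S) = 0, so the integer columns of σS are always valid ⃗b.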

The projector is rational
In this subsection, we will analyze the 2n × 2n projection matrix 1 − SS+ and demonstrate that all of its elements are rational. We use the expression for the pseudoinverse of a matrix with linearly independent columns to identify the pseudoinverse of S̃ as [31] S̃+ = (S̃ᵀS̃)⁻¹S̃ᵀ. (B11) Note that 1 − SS+ will be rational if SS+ is rational. Inspecting S̃, we can see that, as long as A, B are rational, the matrix (AAᵀ + BBᵀ) will be rational. The inverse of a rational matrix is also rational, and so S̃+ will also be rational. Therefore we know S+ is rational. This also implies that 1 − SS+ is a matrix of rational elements.

Evaluation of the allowed parameters provided by the constraint
As shown in the previous subsection, the matrix 1 − SS+ consists of all rational elements. In this subsection, we will demonstrate that it has an eigenvector decomposition in which the eigenvector matrix V is unimodular, also known as a unit matrix [31]. A unimodular matrix is one that contains all integers and has determinant 1 [32]. This decomposition can then be used to find the solutions for ⃗b in Eq. (B10), and therefore also for ⃗l in Eq. (B7). To find such a V for a given matrix 1 − SS+, we first find the Smith decomposition of the matrix σS. Here, the integer σ multiplies every element of the matrix S to give an integer; it can be found as the lowest common multiple of all of the denominators of S. The Smith decomposition is then given in terms of a 2n × 2n unimodular matrix V and an n × n unimodular matrix U. D is a diagonal 2n × n matrix which has the same rank as S, i.e. rank n. The Smith decomposition algorithm orders the diagonal elements of D in descending order; we can therefore assume that D has n non-zero entries along the diagonal, with the remaining entries of D equal to 0. Furthermore, we can identify the pseudoinverse of S from this decomposition, where we have used that the pseudoinverse of a product of two matrices satisfies (AB)+ = B+A+ [52]. This gives a convenient expression for SS+ and the projector. As D is a matrix with integer entries along the diagonal, its pseudoinverse, D+, can be found by taking the inverse of each non-zero element along the diagonal and then transposing [31]. We can therefore find DD+ by inspecting its form: it is diagonal, with the first n entries equal to 1 and the rest equal to 0. From this, we can express the projector as 1 − SS+ = V(1 − DD+)V⁻¹, as we had anticipated. This form is an eigenvalue decomposition of the matrix 1 − SS+. The integer eigenvectors of 1 − SS+ are given as the columns of V: the first n columns correspond to eigenvectors with eigenvalue 0, and the remaining n columns correspond to eigenvectors with eigenvalue 1.
We can therefore construct any integer eigenvector with eigenvalue 0 as an integer combination of the first n columns of V [34]. Note that, in general, the decomposition matrices U, V of the Smith decomposition are not necessarily unique. However, the complete set of eigenvectors ⃗b will be the same regardless of which decomposition matrix V is found [34].
The allowed values of ⃗l can therefore be calculated in terms of the values of ⃗b using Eq. (B7), and can be expressed in terms of an integer n-vector ⃗m, where we have introduced the n × n matrix R. Given these solutions for the vector ⃗l, we can solve the constrained equation, given in Eq. (B1), to find the allowed values of ⃗x.

Expressing the constrained equation as a set of linear equations
We now solve the constrained equation, for which we can, without loss of generality, set ⃗c = 0 (because a different ⃗c amounts to a change of variables in ⃗x). We consider the term in which we have defined the n × n matrix T. The matrix T always has integer entries, which can be seen as follows. We know from Eq. (B9) and Eq. (B22) that the corresponding relation must hold for all integer vectors ⃗m. As a consequence, and noting that the relevant product is an n × n matrix, we find from Eq. (B28) that the matrix T can be succinctly written as T = (V^{(11)})ᵀ V^{(21)}.
The matrix V is unimodular, meaning that it consists of all integer elements. The block matrices V^{(21)}, V^{(11)} must also be integer, and the product of two integer matrices is integer. Hence, T is an integer matrix. We can now solve Eq. (B26), which constrains the values of ⃗x and which can be written in terms of Eq. (B23) and Eq. (B27). This must be true for any chosen ⃗m. We introduce the length-n basis vector ⃗e^{(j)}, which is zero in all elements except element j, for which it is 1, ⃗e^{(j)} = (0_1, . . . , 0_{j−1}, 1_j, 0_{j+1}, . . . , 0_n)ᵀ. (B35) Choosing ⃗m = m_j ⃗e^{(j)}, for any integer m_j ∈ Z, produces one constraint for each choice of m_j ∈ {1, 2, 3, . . . }, and so on for all integers m_j. We know that T_jj is an integer, and so we inspect two cases. In the first case, T_jj is an even integer. The constraints can then always be simplified, and, using the fact that they must hold for any choice of m_j, we identify that any integer m_j multiplied by (1/√π)(Rᵀ⃗x)_j must be an even integer. In the second case, T_jj is odd, i.e. T_jj mod 2 = 1, and the constraints are satisfied for all choices of m_j if and only if (1/√π)(Rᵀ⃗x)_j is an odd integer. Combining these two cases, we can express the two relations as a single condition that depends on whether T_jj is odd (T_jj mod 2 = 1) or even (T_jj mod 2 = 0). We can also select combinations of these basis vectors; for example, we can choose ⃗m = m_i ⃗e^{(i)} + m_j ⃗e^{(j)} for different integers m_i, m_j ∈ Z. These give constraints which, because every element T_{ij} is an integer, are equivalent to linear combinations of the constraints with a single m_j ≠ 0. We already know that the constraints with a single m_j ≠ 0 are satisfied, and so adding combinations of such constraints does not constrain the allowed values of ⃗x any further.
A valid solution can be found by solving a linear system in which ⃗t is the integer vector of the diagonal elements of T. Note that Rᵀ is an n × n matrix. This system of equations has infinitely many solutions. However, if Rᵀ is invertible, then the system can be solved by applying the inverse of Rᵀ to the left of both sides of the equation.

Solutions of the constrained equation
To solve the set of linear equations, we need to find the inverse of R. We first claim that the pseudoinverse R+ is the inverse of R. R is given in Eq. (B24), from which its pseudoinverse follows. We can then check that R+R = 1 and RR+ = 1; if this is true, then R is invertible and R+ = R⁻¹. First, using Eq. (B16) and Eq. (B20), we find that R+R is the identity. Furthermore, we can check that RR+ is equal to the identity: this time, we replace the relevant matrix with DD+, using Eq. (B16), together with the fact that S+S = 1. This means that RR+ = R+R = 1, which implies that R is invertible, with R⁻¹ = R+. Finally, we can invert Eq. (B48) to identify the solutions to the constrained linear equation.

Appendix C: Further extending the class of simulatable operations

In this appendix, we will demonstrate that the class of symplectic operations simulatable using our method can be extended beyond the rational symplectic matrices. Specifically, there are certain instances in which the projector 1 − SS+, given in Eq. (B10), is rational even when the symplectic matrix is irrational.
To understand why, consider that any symplectic matrix can be decomposed as a product in which the second factor is an orthogonal symplectic matrix [53]. Using A = A_0 X and B = A_0 Y, we can rewrite Eq. (B4) accordingly, and we can likewise express its pseudoinverse, given in Eq. (B11). The rationality of the projection matrix 1 − SS+ then depends on the rationality of a matrix whose blocks involve XᵀX and YᵀX. Therefore, 1 − SS+ will be rational as long as each of the blocks of this matrix is rational, i.e., the projector will be rational if all the elements of the matrices XᵀX, YᵀX are rational.
The projector can therefore in certain cases still be rational when the symplectic matrix is irrational. Namely, the matrix A 0 can be irrational while the projection matrix remains rational.
There are also certain cases where the individual matrices X, Y can be irrational while the projection matrix is rational. For example, consider the case of diagonal blocks parameterized by rotation angles θ_j. We can rewrite these diagonal block matrices in terms of the tangent of the angles, (XᵀX)_jj = cos²(θ_j) = 1/(tan²(θ_j) + 1), (C7) and (XᵀY)_jj = cos(θ_j) sin(θ_j) = tan(θ_j)/(1 + tan²(θ_j)), from which we see that the projection matrix will be rational whenever tan(θ_j) ∈ Q for all j.
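The rationality condition can be verified exactly with Fractions: for rational tan θ, both block entries above are rational, even when cos θ itself is irrational (e.g. θ = π/4).

```python
from fractions import Fraction as Fr
import math

def projector_blocks(t):
    """Given t = tan(theta) as a Fraction, return the (rational) diagonal
    entries cos^2(theta) and cos(theta)*sin(theta) via the relations above."""
    c2 = 1 / (t * t + 1)        # cos^2 = 1/(tan^2 + 1)
    cs = t / (t * t + 1)        # cos*sin = tan/(1 + tan^2)
    return c2, cs

# theta = pi/4: cos(theta) = 1/sqrt(2) is irrational, yet both blocks are rational
c2, cs = projector_blocks(Fr(1))
assert c2 == Fr(1, 2) and cs == Fr(1, 2)
assert math.isclose(float(c2), math.cos(math.pi / 4) ** 2)

# a 3-4-5 example: tan = 3/4 gives cos^2 = 16/25, cos*sin = 12/25
c2, cs = projector_blocks(Fr(3, 4))
assert c2 == Fr(16, 25) and cs == Fr(12, 25)
```

The θ = π/4 case is exactly the example used in Appendix D to show that B is not contained in HW(n) × Sp(2n, Q): the matrix entries are irrational, but the projector is not.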
Provided that the projection matrix in Eq. (B10) is rational, it is possible to identify non-zero points of the PDF, by virtue of Appendix B 3. The constraint of rational symplectic matrices can therefore be relaxed. However, for simplicity, we choose to restrict to rational symplectic matrices in this work.

Appendix D: Relationships between the classes of simulatable operations
The class of operations which are shown to be efficiently simulatable in our work can be denoted by D, which contains all operations deemed simulatable in Appendix C. For simplicity, throughout this work, we chose to denote the class of simulatable operations as those which belong to the class HW(n)[Sp(2n, Q)], i.e., those for which the symplectic matrix is rational.
This class of operations HW(n)[Sp(2n, Q)] contains, in particular, all GKP Clifford operations for encoded qudits of any dimension, as was proven in Sec. IV C.
We now recall the classes of operations that we demonstrated to be simulatable, using different techniques, in our previous work, Ref. [20], and compare them with those considered here. We previously demonstrated that circuits with input GKP states, acted on by operations selected from a class B and measured in all modes with homodyne measurement, are simulatable. This class was defined by the condition cot θ_j ∈ Q, Eq. (2). The class B contains operations whose symplectic matrix can contain irrational elements, e.g. when θ_j = π/4, despite satisfying this condition. This implies that B ⊄ HW(n) × Sp(2n, Q).
In Appendix C we demonstrated that it is possible to extend the class of simulatable operations beyond the group HW(n)×Sp(2n, Q), to a larger set which we denote D and we show that B ⊂ D. This set contains all displacements HW(n) and all symplectic matrices such that X T X and X T Y are rational.
In our previous work [20], we demonstrated that it is possible to simulate another class A, which consists of symplectic operations whereby the top row of the symplectic matrix has a specific structure. Such a matrix does not necessarily satisfy any constraints on its other elements, and so we cannot conclude that either D or HW(n) × Sp(2n, Q) contains A.
We have included a figure, Fig. 6, showing the containment relations between these classes of operations and the classes previously identified in Ref. [20].

Appendix E: Adaptive circuits with modular homodyne measurements
Although not required for the results of this paper, in this appendix we provide an additional observation: we demonstrate how to efficiently sample from a circuit that makes use of modular measurements. The method amounts to drawing random integers from a finite set.

Quantum circuits involving GKP states often make use of modular homodyne measurements, i.e., measurements of position or momentum modulo some period. Formally, we fix a period T/√π ∈ Q such that the measurement result in position or momentum, x_1, is recorded as x_1 mod T. For example, Pauli Ẑ measurements in the GKP framework [12] are measurements of position modulo T = 2√π. If the measurement result x_1 is closest to x_1 mod 2√π = 0, the measurement corresponds to a measurement of the logical qubit state |0⟩; if x_1 is closest to x_1 mod 2√π = √π, it corresponds to the logical qubit state |1⟩. An adaptive circuit with feed-forward operations that makes use of modular measurements will use the value of x_1 mod 2√π to determine future operations. In the case of Pauli Ẑ measurements, we can define two possible operations to be performed on the remaining modes, depending on which outcome is measured.

The PDF of a unitary non-adaptive operation U_0 followed by a measurement of mode 1 identifies the possible measurement values of x_1. To identify the possible outcomes of x_1 mod √π k, we note that the matrix R is rational, and so we can write [20]

((R^{-T}) 2 m⃗)_1 = (u/v) m* ,   (E4)

which reduces the random vector of integers m⃗ to a single integer m* ∈ Z and a period u/v ∈ Q that depends on the first row of the matrix R^{-T}.
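The reduction from a vector of integers to a single integer and period, as in Eq. (E4), follows from Bézout's identity: the set of values of a rational linear form over integer vectors is a one-dimensional lattice whose spacing is the gcd of the (reduced) coefficients, i.e., the gcd of their numerators over the lcm of their denominators. A minimal sketch of this arithmetic (the example row is hypothetical):

```python
from fractions import Fraction
from math import gcd, lcm

def lattice_period(row):
    """Given the (rational) first row of R^{-T}, the achievable values
    {((R^{-T}) 2 m)_1 : m integer vector} form a lattice (u/v)*Z.
    Return u/v: gcd of numerators over lcm of denominators of the
    entries 2*row[j] (in lowest terms, as Fraction guarantees)."""
    scaled = [2 * Fraction(x) for x in row]
    num, den = 0, 1
    for f in scaled:
        num = gcd(num, f.numerator)
        den = lcm(den, f.denominator)
    return Fraction(num, den)

# Hypothetical row (3/4, 5/6): scaled entries 3/2 and 5/3, so the
# achievable values are (9*m1 + 10*m2)/6, i.e. all multiples of 1/6.
period = lattice_period([Fraction(3, 4), Fraction(5, 6)])
assert period == Fraction(1, 6)
```

Every outcome of the linear form is then (u/v)·m* for a single integer m*, matching the reduction quoted from Ref. [20].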
This allows us to simplify the set of possible measurement outcomes: we can restrict to at most vk possible outcomes, parameterized by m̄ ∈ {0, 1, 2, …, vk−1}, each of which occurs with equal probability. Simulating the measurement then consists of choosing a random value of m̄ from this finite set of integers.
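Sampling the modular outcome is then a single uniform draw over a finite label set. The sketch below assumes only what is stated above: v is the denominator of the period u/v, and there are at most vk equally likely outcome labels.

```python
import random
from fractions import Fraction

def sample_modular_outcome(u_over_v, k, rng=random):
    """Sample the outcome label m_bar of a modular homodyne measurement:
    with v the denominator of the lattice period u/v and the modular
    period set by k, there are at most v*k distinct outcomes, each
    occurring with equal probability."""
    v = Fraction(u_over_v).denominator
    return rng.randrange(v * k)

m_bar = sample_modular_outcome(Fraction(1, 6), k=2)
assert 0 <= m_bar < 12
```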
Following the adaptive routine, we then choose a new operator U_1(x_1), dependent on the measured value of x_1, and simulate the circuit U_1(x_1)U_0, which provides the corresponding points of the PDF. Choosing x_1 = x̄_1 and assuming no operations have been applied to the measured mode, the resulting expression can be simplified to a summation over n − 1 integers.
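The feed-forward branching described above can be sketched as follows. This is a toy illustration, not the paper's algorithm: the decoding rule (bin the position outcome to the nearest multiple of √π; even multiples give logical 0, odd give logical 1) follows the Pauli Ẑ convention stated earlier, and the table of follow-up operations is hypothetical.

```python
import math

SQRT_PI = math.sqrt(math.pi)

def logical_z_outcome(x1):
    """Decode a GKP Pauli-Z measurement: bin the position outcome to the
    nearest multiple of sqrt(pi); even multiples -> 0, odd -> 1."""
    return round(x1 / SQRT_PI) % 2

def adaptive_step(x1, ops_for_outcome):
    """Toy feed-forward: select the operation U_1(x1) for the remaining
    modes from the decoded logical bit (ops_for_outcome is a hypothetical
    table mapping outcome -> operation label)."""
    return ops_for_outcome[logical_z_outcome(x1)]

ops = {0: "identity", 1: "corrective displacement"}  # hypothetical labels
assert logical_z_outcome(0.05) == 0
assert logical_z_outcome(SQRT_PI + 0.05) == 1
```

In a full simulation, the sampled x̄_1 from the previous step would be fed into `adaptive_step`, and the chosen operation would determine the circuit U_1(x̄_1)U_0 to simulate next.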