Matrix product decomposition for two- and three-flavor Wilson fermions: Benchmark results in the lattice Gross-Neveu model at finite density

We formulate the path integral of two- and three-flavor Wilson fermions in two dimensions as a multilayer Grassmann tensor network by the matrix product decomposition. Thanks to this new description, the memory cost scaling is reduced from $\mathrm{O}(\mathrm{e}^{N_{f}})$ for the conventional construction to $\mathrm{O}(N_{f})$. Based on this representation, we develop a coarse-graining algorithm in which spatially or temporally adjacent Grassmann tensors are converted into a canonical form along a virtual direction before the spacetime coarse-graining is carried out. Benchmarking with the lattice Gross-Neveu model at finite density, we find that the Silver Blaze phenomenon in the pressure and number density is captured with relatively small bond dimensions.


I. INTRODUCTION
The tensor network method has been widely applied in many fields, including condensed matter physics, information science, and particle physics [1][2][3][4][5][6]. In the context of the quantum many-body problem, tensor network methods can be broadly classified into two streams. One is based on the Hamiltonian formalism, where we express a quantum state as a certain form of tensor contraction and optimize each tensor by a variational method, such as the density matrix renormalization group (DMRG) [7,8]. The other is based on the Lagrangian formalism, where we represent the path integral as a tensor contraction and approximately perform the contraction employing a variant of the real-space renormalization group method, such as the Levin-Nave tensor renormalization group (TRG) [9]. One of the most remarkable aspects of the latter approach is that several TRG algorithms enable us to deal with an infinitely large lattice, even in higher dimensions, as long as the system is translationally invariant on the lattice [10][11][12]. Since the usual bottleneck of higher-dimensional TRG calculations is the tensor contraction, parallel computation techniques play a significant role in practical applications [13,14]. So far, the TRG approach has been applied to several four-dimensional lattice theories [15][16][17][18].
When we try to apply the TRG approach to systems with larger internal degrees of freedom, however, we encounter several difficulties. The size of each tensor in the tensor network representation increases, and a larger memory space is required in the numerical calculation. The bond dimension, which characterizes the maximal size of the tensors in the tensor network method, may have to be enlarged to keep the approximation to the original representation as accurate as possible. To resolve these issues, one promising approach is to find an efficient expression for such systems in the language of tensor networks. Typical examples are non-Abelian lattice gauge theories, where naive derivations of their tensor network representation, particularly in four dimensions, require extremely large memory. Recently, several attempts have been made for two-dimensional theories [19][20][21][22] and the three-dimensional SU(2) pure gauge theory [23]. These works have successfully demonstrated the applicability of the TRG approach to lattice theories with non-Abelian degrees of freedom. One also encounters a difficulty originating from larger internal spaces when flavor degrees of freedom are introduced to lattice fermions. This is another challenging issue because we have to deal with a local tensor whose size scales exponentially with respect to N_f, the total number of flavors. So far, single-flavor lattice fermions have usually been employed in previous TRG studies [16,24-29]. 1 Therefore, how to deal with finite-N_f systems within the TRG approach needs to be addressed, particularly in view of future applications toward (2 + 1)-flavor quantum chromodynamics (QCD) on a lattice, for instance. To this end, we apply a tensor network decomposition to the Grassmann tensor network representation [32] from the beginning and formulate its coarse-graining procedure.
The main idea is based on the so-called matrix product decomposition, familiar as the matrix product state (MPS) [33,34] within the Hamiltonian formalism. 2 We numerically test our algorithm, employing the Gross-Neveu model [40] at finite density with two- and three-flavor Wilson fermions. 3 Using the matrix product decomposition, the models with N_f = 2 and N_f = 3 can be described as two- and three-layer Grassmann tensor networks whose bond dimension is equal to four. Thanks to this new description, the memory cost scaling is reduced from O(e^{N_f}) for the conventional construction to O(N_f). Based on this description, we develop a coarse-graining procedure, with which we calculate the pressure and number density to verify whether the Silver Blaze phenomenon [43] can be captured. This paper is organized as follows. In Sec. II, we introduce the matrix product decomposition of the Grassmann tensor for the lattice Gross-Neveu model with Wilson fermions. Based on this representation, we propose a coarse-graining procedure that takes advantage of the matrix product decomposition, particularly of its canonical form. In Sec. III, we first check the efficiency of the matrix product decomposition for the initial Grassmann tensor network. Secondly, we perform a free field computation, in which the free energy density as a function of volume is calculated by our method and by a naive TRG application, and show that our method successfully reproduces the exact solution. Thirdly, we present numerical results for the pressure and number density as functions of the chemical potential at finite coupling constant. As validation, we also employ the bond-weighted TRG (BTRG) [44] to evaluate the original tensor network representation. Section IV is devoted to a summary and outlook.
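The O(e^{N_f}) versus O(N_f) scaling can be made concrete with a little arithmetic. The sketch below compares the element count of the single conventional coefficient tensor, 4^{4N_f}, with that of an N_f-layer matrix product form; the per-layer shape (four spacetime indices of dimension 4 plus two flavor bonds of dimension χ) is an illustrative assumption based on the χ = 4 layers reported later, not a formula taken from the paper.

```python
def conventional_elements(nf: int) -> int:
    # One coefficient tensor with four indices per flavor, each of dimension 4:
    # 4^(4 * nf) elements, exponential in the number of flavors.
    return 4 ** (4 * nf)


def mps_elements(nf: int, chi: int = 4) -> int:
    # nf layers; each layer carries its own four spacetime indices of
    # dimension 4 and at most two flavor bonds of dimension chi.
    return nf * (4 ** 4) * chi ** 2


if __name__ == "__main__":
    for nf in (1, 2, 3):
        print(nf, conventional_elements(nf), mps_elements(nf))
```

For N_f = 3 the conventional tensor already holds 4^12 ≈ 1.7 × 10^7 elements, while the three-layer form holds only 3 × 256 × 16 = 12288 under this assumption.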

II. ALGORITHM
To explain our algorithm, we consider the Gross-Neveu-Wilson (GNW) model with N_f = 3 at finite density on the square lattice Λ^2 = {n = (n_1, n_2) | n_ν ∈ Z, ν = 1, 2}. The lattice action is given in Eq. (1), where ψ_n^{(f)} and ψ̄_n^{(f)} are the two-component Grassmann-valued Wilson fermion fields labeled by the flavor index f, g_σ^2 and g_π^2 are coupling constants, and M, µ, and r represent the mass, chemical potential, and Wilson parameter, respectively. In this study, we always assume that these parameters do not depend on f. In addition, we always set r = 1 and employ the Pauli matrices to represent the two-dimensional γ-matrices via γ_1 = σ_x, γ_2 = σ_y, and γ_5 = −iγ_1γ_2 = σ_z. We regard ν = 1 (2) as the spatial (temporal) direction, assuming the (anti-)periodic boundary condition.
A. Matrix product decomposition at initial stage

According to Ref. [32], the tensor network representation of the path integral Z generated by the lattice action (1) is expressed in the form of Eq. (2), where T is a Grassmann tensor, a multilinear combination of Grassmann numbers. 4 The size of the Grassmann tensor is determined by the lattice geometry and the hopping structure in Eq. (1). Each site on the square lattice Λ^2 has four adjacent sites, with forward and backward hopping terms in each direction for each flavor. Therefore, T is written as in Eq. (3), in terms of single-component auxiliary Grassmann numbers. Note that the ordering of Grassmann numbers in Eq. (6) (Eq. (7)) differs from that in Eq. (4) (Eq. (5)). These orderings help us to integrate out the auxiliary Grassmann numbers in Eq. (2) under the coarse-graining procedure [32]. The explicit form of the coefficient tensor in Eq. (3) can be straightforwardly obtained by the procedure explained in Ref. [32].
Since the coefficient tensor T consists of 4^{4N_f} elements, the size of the Grassmann tensor in Eq. (3) increases exponentially with respect to N_f. We now decompose the coefficient tensor into lower-rank tensors, each of which is related to a single flavor's degrees of freedom. Firstly, we rearrange the indices of T and the auxiliary Grassmann numbers in Eq. (3) as in Eq. (8), where we have assumed that the extra sign factor coming from the rearrangement of Grassmann numbers is included in T; Φ is defined in Eq. (9). Defining an eight-bit string I_f as in Eq. (10), we consider the matrix product decomposition for T in Eq. (8), which is easily obtained by repeating the singular value decomposition (SVD) of T twice, as in Eq. (11). Note that the U's and V's are isometries and σ denotes the singular values. Throughout this paper, we distinguish tensors by their subscripts. χ is referred to as the flavor bond dimension. The right-hand side of Eq. (11) is known as the canonical form, in which all tensors in the matrix product decomposition satisfy certain orthogonality conditions. In the tensor network calculation based on the Hamiltonian formalism, one uses the MPS in its canonical form to represent a state vector. One of the advantages of the canonical form is that one can calculate physical quantities on a larger system just via some local manipulations. For a detailed explanation of the canonical form, see Ref. [47], for instance. In the following, we will see that the canonical form enables us to calculate some reduced density matrices easily and to develop a coarse-graining procedure of the Grassmann tensor network via local operations with respect to each flavor. The matrix product decomposition of Eq. (11) can be rewritten as Eq. (12). The right-hand side of Eq. (12) is a different type of canonical form, first proposed in Ref. [48] in the context of the classical simulation of quantum computations based on the Hamiltonian formalism.
In this canonical form, the singular values appear explicitly between all Γ tensors. For Eqs. (11) and (12) to hold exactly, a naive choice of χ is χ = 2^8. If some singular values vanish, however, we can compress the original coefficient tensor by truncating the right-hand sides of Eqs. (11) and (12). We will see later that χ = 4 is sufficiently large for these equations to hold.
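The two-step SVD that produces such a canonical form can be illustrated on a generic three-index tensor. The numpy sketch below is illustrative only: the tensor sizes and names are placeholders, whereas the paper's actual indices are the eight-bit strings I_f.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
T = rng.normal(size=(d, d, d))  # stand-in for the coefficient tensor T_{I1 I2 I3}

# First SVD: split off the first index.
u1, s1, v1 = np.linalg.svd(T.reshape(d, d * d), full_matrices=False)
gamma1 = u1  # Gamma^{I1}_{alpha1}

# Second SVD: split the remainder between the second and third indices.
m = (s1[:, None] * v1).reshape(len(s1) * d, d)
u2, s2, v2 = np.linalg.svd(m, full_matrices=False)
# Divide out s1 so the singular values sit explicitly between the Gammas
# (assumes full rank, which holds for a generic random tensor).
gamma2 = u2.reshape(len(s1), d, len(s2)) / s1[:, None, None]  # Gamma^{I2}
gamma3 = v2  # Gamma^{I3}_{alpha2}

# Reconstruct: Gamma1 . diag(s1) . Gamma2 . diag(s2) . Gamma3 recovers T.
T_rec = np.einsum('ia,a,ajb,b,bk->ijk', gamma1, s1, gamma2, s2, gamma3)
assert np.allclose(T_rec, T)
```

Truncating s1 and s2 to their largest χ entries at this point yields the compressed form discussed in the text.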
Obviously, the matrix product decomposition of the coefficient tensor can be interpreted as that of the corresponding Grassmann tensor: Eq. (12) provides us with Eq. (13), where A_{α_1}, B_{α_1 α_2}, and C_{α_2} are Grassmann tensors, defined by contracting the I_f (f = 1, 2, 3) with the corresponding Grassmann numbers in Eq. (8) and labeled by the extra indices α_1, α_2. Figure 1 schematically shows Eq. (2) with N_f = 3 and the matrix product decomposition of Eq. (13). We can understand that Eq. (13) introduces a virtual direction for the original two-dimensional Grassmann tensor network. Notice that the contractions parallel to the original two-dimensional plane are Grassmann contractions, while those along the virtual direction are normal tensor contractions.

B. Coarse-graining procedure
Our method is basically similar to the higher-order TRG (HOTRG) algorithm [10]. At each coarse-graining step, however, we convert two adjacent coefficient tensors, along the spatial or temporal direction, into a canonical form along the virtual direction before carrying out the spacetime coarse-graining in the same manner as the HOTRG. A schematic picture is shown in Fig. 2. In addition to the flavor bond dimension χ, we also introduce the spacetime bond dimension D to decimate the degrees of freedom under the coarse-graining. Let q be an integer specifying the iteration number of the coarse-graining. The algorithm then sequentially defines a coarse-graining transformation from two adjacent T^{(q)}'s to T^{(q+1)}, as in Eq. (14). The coarse-grained tensor T^{(q)} is defined on a lattice with 2^q sites, and T^{(0)} is given by Eq. (13). In other words, repeating this coarse-graining procedure q times, 2^q copies of T^{(0)} are approximately contracted. We start with the coarse-graining along the temporal direction, so we have 2^k sites in the spatial direction and 2^{k+1} sites in the temporal direction when q = 2k + 1 (k = 0, 1, ...). To obtain the canonical form with the fixed flavor bond dimension χ, we derive several squeezers (flavor squeezers) as explained in Appendix A 1. The SVD carried out with these flavor squeezers updates σ_{α_2}. The spacetime coarse-graining, on the other hand, is given by different squeezers (spacetime squeezers) with the spacetime bond dimension D, as explained in Appendix A 2. These spacetime and flavor squeezers update A^{(q)}, B^{(q)}, and C^{(q)}. We emphasize that the canonical form allows us to perform the spacetime coarse-graining for each flavor segment A^{(q)}, B^{(q)}, C^{(q)} separately. Furthermore, this coarse-graining scheme can be straightforwardly extended to arbitrary N_f, keeping the number of local tensors needed to describe the Grassmann tensor network constant (equal to N_f) at all coarse-graining steps.
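The bookkeeping of the alternating schedule can be sketched in a few lines (a hypothetical helper, not code from the paper): starting along the temporal direction, after q steps the 2^q contracted sites split between the two directions as follows.

```python
def lattice_shape(q: int) -> tuple:
    """(spatial, temporal) lattice extent after q coarse-graining steps.

    Coarse-graining starts along the temporal direction and alternates,
    so temporal steps = ceil(q/2) and spatial steps = floor(q/2).
    """
    return 2 ** (q // 2), 2 ** (q - q // 2)


# q = 2k + 1 gives 2^k spatial times 2^(k+1) temporal sites, as in the text.
```

For example, q = 3 (k = 1) gives a 2 × 4 lattice, and the product of the two extents is always the total site count 2^q.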

III. NUMERICAL RESULTS
We provide several benchmark results in the lattice GNW model at finite density. In the following, we set g^2 = g_σ^2 = g_π^2 for simplicity. The lattice volume is denoted by V. Firstly, we check the efficiency of the matrix product decomposition at the initial stage: how large must χ in Eq. (12) be to restore the original coefficient tensor? Secondly, we compute the path integral with N_f = 3 and g^2 = 0. This computation allows us to verify our coarse-graining procedure via a comparison between the numerical result and the exact solution. Then, we employ the current algorithm to calculate the GNW model with g^2 ≠ 0 in the finite-chemical-potential regime.

A. Matrix product decomposition at initial stage
To check the efficiency of the matrix product decomposition at the initial stage, we define R_p(χ) as in Eq. (15), with p = 1, 2, where ||T^{(0)}|| denotes the Frobenius norm of the initial coefficient tensor and the σ_{α_p}'s are the singular values in Eq. (12). By definition, R_p(χ) is always less than or equal to 1, and R_p(2^8) = 1 holds. Figure 3 shows R_p(χ) as a function of g^2, varying χ. We set M = 0 and µ = 0. We can see that χ = 4 < 2^8 is sufficiently large to make Eq. (12) hold. This means that the two-dimensional GNW model with N_f = 3 can be exactly expressed as a three-layer Grassmann tensor network whose bond dimension is four, as shown in Fig. 1 (B). This can be attributed to the fact that although each I_f (f = 1, 2, 3) in Eq. (12) describes 2^8 patterns of combinations of auxiliary Grassmann numbers, most of them give a vanishing contribution due to the Grassmann nature. Therefore, we expect that this kind of matrix product decomposition is efficient not only for the GNW model but also for other purely fermionic systems. We can also predict that the effect of the truncation by flavor squeezers in the coarse-graining procedure should depend on g^2, because Fig. 3 tells us that the singular value spectrum depends on the coupling constant. Note that g^2 = 0 is a special point: when g^2 = 0, Eq. (12) holds with just χ = 1, because the coefficient tensor for N_f-flavor free fermions is given by the direct product of N_f copies of the coefficient tensor for the single-flavor free fermion.
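Since Eq. (15) is not reproduced above, the sketch below assumes a natural form consistent with the stated properties R_p(χ) ≤ 1 and R_p(2^8) = 1: the fraction of Frobenius weight retained by the largest χ singular values. The function name and this exact normalization are assumptions, not the paper's definition.

```python
import numpy as np


def truncation_ratio(sigma, chi):
    """Assumed form of R(chi): sqrt(sum of the chi largest sigma^2) / ||T||_F.

    Keeping every singular value gives exactly 1; vanishing singular
    values cost nothing to discard.
    """
    sigma = np.sort(np.asarray(sigma, dtype=float))[::-1]
    return float(np.sqrt(np.sum(sigma[:chi] ** 2)) / np.linalg.norm(sigma))
```

Under this form, a spectrum with only four nonvanishing singular values gives R(4) = 1 exactly, which is the situation reported in Fig. 3.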

B. Free field computation
Next, we calculate three-flavor Wilson fermions with vanishing interaction. As mentioned previously, there is no truncation error coming from the finite flavor bond dimension, because we just need χ = 1, so we can focus on the finite-D effect originating from the spacetime squeezers. Figure 4 shows the free energy density with M = 1 and µ = 0 as a function of q, the iteration number of the coarse-graining introduced in Eq. (14). The numerical result obtained by our algorithm with D = 16 successfully reproduces the exact solution; at V = 2^{20}, the relative error of the free energy is O(10^{-7}). We have also naively employed the HOTRG algorithm with D = 64 to evaluate the original Grassmann tensor network generated by Eq. (3). 5 Although the relative error becomes O(10^{-5}) at V = 2^{20}, the small-volume results are inconsistent with the exact ones. Note that we have to take D ≥ 64 to avoid an extra approximation of the initial tensor itself in the naive computation. Since the path integral of the free field theory with multiple flavors is given by the product of single-flavor free field theories, the HOTRG can achieve much higher accuracy if one applies it to the single-flavor theory; however, such a treatment is available only for free field theories.
Similarly, Fig. 5 shows the free energy density with M = 0 and µ = 0. The relative error at V = 2^{20} is O(10^{-4}) by our method. Again, we have naively employed the HOTRG algorithm with D = 64 and D = 70 to evaluate the original Grassmann tensor network. Their coarse-graining histories suggest that there is little hope of restoring the exact solution by the naive HOTRG computations for massless Wilson fermions. On the other hand, our method enables us to carry out the tensor contractions exactly up to V = 2^2 regardless of M, and to evaluate the path integral in the thermodynamic limit. Therefore, these numerical results imply that our algorithm can achieve better performance in multiflavor fermion models with relatively small bond dimensions compared with conventional TRG algorithms.

C. Interacting fermions at finite density
We now move on to the benchmark calculation with g^2 ≠ 0 at finite density. We evaluate the pressure P and the number density n on a lattice of size V = 2^{20} as functions of the chemical potential. In this work, the latter is calculated by the forward differentiation of the pressure, n(µ) = [P(µ + ∆µ) − P(µ)]/∆µ, with ∆µ = 0.1. We consider massive fermions with M = 1 in order to observe a clear signal of the Silver Blaze phenomenon, a characteristic feature at finite density: bulk observables in the thermodynamic and zero-temperature limits do not depend on the chemical potential up to a certain value µ_c [43]. We choose g^2 = 10 as a representative coupling and always set D = 20. Since the accuracy of the BTRG is higher than that of the HOTRG at the same bond dimension [44], it provides us with a good reference for massive fermion calculations. Figures 6 and 7 show the pressure and number density with N_f = 2 as functions of the chemical potential µ. To see the χ-dependence, we have varied it as χ = 4, 6, 8. Note that the GNW model with N_f = 2 can be described as a two-layer Grassmann tensor network whose bond dimension is equal to four. The results obtained by our algorithm are consistent with those of the BTRG, and the Silver Blaze phenomenon is clearly observed. Since we are considering the lattice Wilson fermion with N_f = 2, the behavior of the number density is also reasonable (see also Ref. [52] for the case of lattice fermions). The endpoint of the Silver Blaze phenomenon seems to be around µ_c ∼ 1.7. The calculations with χ = 4, 6 in Fig. 7 suggest that the finite-χ effect is more pronounced in the Silver Blaze regime.
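The forward difference used for the number density is simple enough to state as code; the quadratic pressure profile below is a made-up stand-in for the measured P(µ), included only to mimic a Silver-Blaze-like plateau.

```python
def number_density(pressure, mu, dmu=0.1):
    # n(mu) ~= [P(mu + dmu) - P(mu)] / dmu, the forward difference in the text.
    return (pressure(mu + dmu) - pressure(mu)) / dmu


# Toy pressure curve: flat below mu_c = 1.7, quadratic growth above
# (illustrative only, not the measured data).
def toy_pressure(mu):
    return 0.0 if mu < 1.7 else (mu - 1.7) ** 2
```

With this toy profile the number density vanishes identically below µ_c and turns on smoothly above it, which is the qualitative signal read off from Figs. 6 and 7.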
We make a similar comparison between our algorithm and the BTRG in the case of N_f = 3. In Fig. 8, we see that the results obtained by both methods are consistent and capture the signal of the Silver Blaze phenomenon. As shown in Fig. 9, the maximal number of fermions per site is 3, reflecting N_f = 3. Again, we find that the finite-χ effect is more pronounced in the Silver Blaze regime; however, the computation is stabilized with χ = 8 and the plateau of n = 0 is restored. The endpoint of the Silver Blaze phenomenon seems to be around µ_c ∼ 1.6 with N_f = 3.

IV. SUMMARY AND OUTLOOK
Using the matrix product decomposition, we have shown that the lattice Gross-Neveu model with N_f = 2, 3 can be described as a two- and three-layer Grassmann tensor network whose bond dimension is four. Based on this multilayer representation, we have developed a coarse-graining procedure in which two adjacent coefficient tensors are converted into a canonical form along the virtual direction before the spacetime renormalization is carried out. Thanks to the canonical form, we can obtain reduced density matrices easily and develop the coarse-graining procedure of the Grassmann tensor network via local operations with respect to each flavor. With vanishing interaction, our algorithm automatically reduces the three-flavor computation to the single-flavor one. This feature suggests that our algorithm shows better performance, particularly in the weak-coupling regime with lighter masses, compared with conventional TRG algorithms. As a benchmark, we have calculated the pressure and number density as functions of the chemical potential on the lattice with V = 2^{20}. Although the finite-χ effect tends to be more pronounced in the Silver Blaze regime, the phenomenon has been clearly observed with relatively small χ and D.
There are several future research directions. If we could obtain the right-hand side of Eq. (12) analytically in advance, rather than via the numerical SVD of its left-hand side, the memory cost would be reduced from O(e^{N_f}) to O(N_f). Since local Grassmann tensors are usually sparse, it can be expected that the geometry of the network can be rigorously deformed in other fermionic systems as well, not only in the GNW model. It seems very interesting to extend this study to four-dimensional lattice fermions with two and three flavors, because they are necessary ingredients of lattice quantum chromodynamics. In addition, since our coarse-graining algorithm for d-dimensional lattice fermions is similar to the (d + 1)-dimensional HOTRG, we can reduce the computational cost by considering the (d + 1)-dimensional anisotropic TRG (ATRG) [11]. Moreover, the coarse-graining of the Γ tensor (as shown in Eqs. (A43), (A44), and (A45)) can be done separately for each flavor, which is also compatible with parallel computation. As another future work, we are also planning to investigate the Grassmann tensor network formulation for the spin-S system with S = 1/2 or S = 1. We expect that the current coarse-graining procedure is promising for studying such systems with the TRG approach.

ACKNOWLEDGMENTS
We thank Tsuyoshi Okubo and Synge Todo for valuable discussions. The computation in this work has been done using the facilities of the Supercomputer Center, the Institute for Solid State Physics, the University of Tokyo. We acknowledge the support from the Endowed Project for Quantum Software Research and Education, the University of Tokyo ([53]).
Appendix A: Algorithmic details of the coarse-graining procedure

We demonstrate how to perform a single coarse-graining transformation. We omit the iteration number q of Eq. (14) from all tensors. Let us rewrite Eq. (12) as in Eq. (A1).

1. Conversion to the canonical form using flavor squeezers
We now introduce binary functions on these subscripts, as defined in Eq. (A2). 7 (When q = 0, all the upper subscripts in Eq. (A1) are the two-bit strings defined at the initial stage, with I = X_f, T_f, X'_f, T'_f for f = 1, 2, 3.) These binary functions are helpful to restore the Grassmann calculus just within the coefficient tensors [54]. Let us focus on adjacent coefficient tensors along the temporal direction, taking one coefficient tensor on a lattice site n + 2̂ and the other on a site n (Fig. 10). Integrating out the auxiliary Grassmann variables on the link (n, n + 2̂), which are shared by these two tensors, we obtain a new Grassmann tensor whose coefficient tensor is given by Eq. (A5). Note that the sign factor (−1)^{f_{T_1}(T_1) + f_{T_2}(T_2) + f_{T_3}(T_3)} results from the integration over the auxiliary Grassmann variables on the link (n, n + 2̂). Let us convert Eq. (A5) into a canonical form along the virtual direction. A naive way is to directly carry out the SVD of Eq. (A5), but here we consider an indirect way, which requires less memory space. We introduce an invertible matrix R acting on the index α_1, such that the factor Q in the resulting decomposition satisfies an orthogonality condition. To derive R, it is useful to construct a reduced density matrix from A, whose eigenvalue decomposition (EVD) provides us with R via Eq. (A11), where U diagonalizes the reduced density matrix in Eq. (A10) and λ_β denotes its eigenvalues. Similarly, we can find an invertible matrix L with the corresponding orthogonality condition; again, the reduced density matrix constructed from B provides a possible choice of L. Defining the product of R, the singular values σ_{α_1}, and L as in Eq. (A14), whose low-rank approximation is given in Eq. (A15), we can introduce two three-leg tensors, W and X. Using W and X, we can indirectly obtain Eq. (A18), which is the SVD of Eq. (A5). We refer to these three-leg tensors as flavor squeezers.
The above procedure to derive W and X is diagrammatically summarized in Fig. 11. Next, we consider invertible matrices R' and L' acting on the index α_2, with the orthogonality conditions of Eqs. (A21)-(A24). Constructing a reduced density matrix from C (D), one can find R' (L') in the same way as in Eq. (A11), satisfying Eqs. (A21) and (A23) (Eqs. (A22) and (A24)). Introducing the analogous product of R', the singular values, and L', whose low-rank approximation defines σ_{α_2} in Eq. (A26), we can derive the flavor squeezers Y and Z. These flavor squeezers enable us to indirectly carry out the SVD of Eq. (A29), whose singular value is σ_{α_2} in Eq. (A26). Figure 12 summarizes how to derive Y and Z.
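The indirect derivation of an invertible gauge matrix from a reduced density matrix, as used for R, R', L, and L' above, can be sketched in a few lines of numpy. The helper name and matrix layout are illustrative: the bond index (α_1 or α_2) corresponds to the columns of `a`, and the bond is assumed full rank so that R is invertible.

```python
import numpy as np


def gauge_from_density_matrix(a):
    """Return (Q, R) with A = Q R, Q having orthonormal columns.

    rho = A^dag A is the reduced density matrix on the bond index; its
    EVD rho = U diag(lam) U^dag yields R = sqrt(lam) U^dag, which is
    invertible when the bond is full rank (assumed here).
    """
    rho = a.conj().T @ a                  # reduced density matrix on the bond
    lam, u = np.linalg.eigh(rho)          # EVD; eigenvalues in ascending order
    sqrt_lam = np.sqrt(lam)
    r = sqrt_lam[:, None] * u.conj().T    # invertible gauge matrix R
    q = (a @ u) / sqrt_lam[None, :]       # Q = A R^{-1}, orthonormal columns
    return q, r
```

One can check that Q carries the orthogonality condition while R absorbs all the non-isometric content of A, which is exactly what the flavor squeezer construction needs.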

Spacetime coarse-graining
Using the flavor squeezers W, X, Y, and Z derived above, we can express Eq. (A5) in the canonical form of Eq. (A30), where σ_{α_1} and σ_{α_2} are given in Eqs. (A15) and (A26), respectively. Note that the flavor squeezer X has been employed to define C as in Eq. (A19). Thanks to the orthogonality conditions in Eqs. (A9), (A13), (A23), and (A24), we are allowed to carry out the HOTRG-like spacetime coarse-graining "locally" with respect to each flavor segment. For example, consider the reduced density matrix for the f = 1 segment: using the orthogonality condition of Eq. (A13) and the approximation of (T T) in Eq. (A18), one can calculate it as in Eq. (A34). The corresponding reduced density matrices for f = 2 and f = 3 can be defined in the same way as Eq. (A34); using the orthogonality conditions of Eqs. (A23) and (A24) and the approximation of (T T) in Eq. (A30), one obtains them as well. A similar discussion provides the reduced density matrices for the backward subscripts. Since the current Grassmann tensor network is translationally invariant on the lattice, these reduced density matrices and their EVDs, followed by an SVD, give us squeezers for the spatial direction. Denoting the spacetime squeezers as E for the forward subscript and F for the backward subscript, we have

Eqs. (A41) and (A42), with f = 1, 2, 3. Note that R and L here can be obtained from the EVDs of the reduced density matrices and the isometries U, V*. The σ's in Eqs. (A41) and (A42) are given by the SVD of RL. Unlike the flavor squeezers, we always take the square root of the singular values, assigning them equally to each squeezer of a pair. We refer to these tensors as spacetime squeezers; their derivation is similar to that of the flavor squeezers explained previously. Note that when we derive the spacetime squeezers, we need to carry out the EVDs of the reduced density matrices in their block-diagonalized forms, which is helpful for updating the binary functions f_X in Eq. (A2). See Ref. [54] for details. The spacetime coarse-graining of the Γ's is accomplished by Eqs. (A43), (A44), and (A45), where the size of the index X_f with f = 1, 2, 3 is fixed up to D, referred to as the spacetime bond dimension. Although the spacetime coarse-graining is apparently carried out for each Γ, that is, for each flavor separately, it provides a valid coarse-graining of (T T) in Eq. (A5). This is a direct benefit of constructing the canonical form of (T T) with the flavor squeezers. Relabeling the coarse-grained tensors as T_f and T'_f for f = 1, 2, 3, we obtain the coarse-grained coefficient tensor, whose form is exactly the same as the right-hand side of Eq. (A1) (see Fig. 13). Thus, the procedure explained above can be easily repeated under the fixed flavor bond dimension χ and spacetime bond dimension D. At each coarse-graining step, the number of Grassmann tensors in Eq. (2) is reduced by half. Repeating the coarse-graining procedure alternately along the temporal and spatial directions, as the HOTRG does, one can evaluate the path integral of Eq. (2) on the thermodynamic lattice. Although our explanation has assumed N_f = 3, the procedure is easily applicable to the case with N_f = 2.
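The squeezer pair built from the SVD of RL follows the standard projective-truncation construction; a minimal numpy sketch is given below. The names are illustrative, and in this convention the inverse square root of each kept singular value is absorbed equally into the two squeezers, which is one concrete reading of the equal-assignment rule stated in the text.

```python
import numpy as np


def spacetime_squeezers(r, l, d_max):
    """Build a squeezer pair (E, F) from the SVD of R L, truncated to d_max.

    With RL = U S V^dag, setting E = L V S^{-1/2} and F = S^{-1/2} U^dag R
    gives F E = identity on the kept subspace, and inserting E F between
    R and L reproduces the best rank-d_max approximation of R L.
    """
    u, s, vh = np.linalg.svd(r @ l)
    u, s, vh = u[:, :d_max], s[:d_max], vh[:d_max]
    inv_sqrt = 1.0 / np.sqrt(s)
    e = (l @ vh.conj().T) * inv_sqrt[None, :]  # squeezer attached to the L side
    f = (inv_sqrt[:, None] * u.conj().T) @ r   # squeezer attached to the R side
    return e, f
```

Without truncation (d_max equal to the full rank) the insertion is exact, so the only approximation introduced by the spacetime coarse-graining is the cut to the D largest singular values.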
Finally, we summarize the computational cost of the current coarse-graining procedure, which consists of three tasks: (i) deriving the flavor squeezers, (ii) deriving the spacetime squeezers, and (iii) the tensor contraction with these squeezers. Suppose we apply the method to a d-dimensional N_f-flavor lattice theory. The computational cost of (i), dominated by the tensor contractions needed to obtain the reduced density matrices, scales as O(χ^4 D^{max(2d+1, 2N_f−2)}), with an O(χ^2 D^{2(N_f−1)}) memory cost. The computational cost of (ii), also dominated by the tensor contractions needed to obtain the reduced density matrices, scales as O(χ^4 D^{2d+2}), with an O(χ^4 D^4) memory cost. The computational cost of (iii) scales as O(χ^4 D^{4d−1}), with an O(χ^2 D^{2d}) memory cost; this task is similar to the coarse-graining in the (d + 1)-dimensional HOTRG, which explains the O(χ^4 D^{4d−1}) scaling.