Scaling dimensions from linearized tensor renormalization group transformations

We show a way to analyze a renormalization group (RG) fixed point in tensor space: write down the tensor RG equation, linearize it around a fixed-point tensor, and diagonalize the resulting linearized RG equation to obtain scaling dimensions. The tensor RG methods have had great success in producing accurate free energies compared with the conventional real-space RG schemes. However, the above-mentioned canonical procedure for the fixed-point analysis has not been implemented for general tensor-network-based RG schemes. We extend the success of the tensor methods further to the extraction of scaling dimensions through the analysis of a fixed-point tensor. This approach is benchmarked in the context of the Ising models in 1D and 2D. The proposed method accomplishes the canonical RG prescription for the tensor RG methods.


I. INTRODUCTION
The renormalization group (RG) is a powerful technique for studying physical systems where fluctuations on all length scales are important [1]; the most famous example in statistical mechanics is critical phenomena. The main idea behind the RG is to study how a physical system changes as we go from one length scale to another. Conventional RG schemes, such as the ε-expansion [2] and block-spin methods [3][4][5][6][7], aim at a map from the Hamiltonian at the short length scale to that at the longer one, such that the partition function is unchanged [8]. The map is known as an RG equation. A well-behaved RG equation exhibits fixed points, each corresponding to a conformal field theory (CFT) [9,10]. A critical system is described by a fixed point. By linearizing the RG equation around the critical fixed point, universal properties like scaling dimensions of the critical system can be extracted. This canonical RG prescription of analyzing a fixed point also provides a theoretical framework to understand universality in critical phenomena. However, for a systematic study with high precision, the Hamiltonian may not be the most efficient representation of the system.
Recently, ideas from quantum information have stimulated a novel type of RG methods in tensor space. They are versatile numerical RG schemes whose approximations are controlled by an integer, χ, called the bond dimension. The RG equation is a map from a tensor encapsulating the Boltzmann weights of local configurations at a short length scale to a new tensor at a longer one. The first realization of this new paradigm is the tensor renormalization group (TRG) [11], followed by many variations [12][13][14][15][16][17]. These TRG-type techniques have excellent performance in calculations of the free energy. For example, the higher-order tensor renormalization group (HOTRG) [14] estimates the free energy of the 2D Ising model with an error of order 10^{-7} within a few minutes on a desktop computer. The estimation error decreases exponentially as χ increases, while the computational cost only grows polynomially. For all their success in calculations of the free energy, however, the TRG-type techniques encounter obstacles in the fixed-point analysis. Early attempts [18][19][20][21] show that if the bond dimension χ of the TRG is larger than 8, the tensor will never flow to the critical fixed point of the 2D Ising model; this imposes a very strong restriction on the bond dimension in the fixed-point analysis. For χ = 2, 3, 4, either using the TRG or the HOTRG, the estimated scaling dimension of the energy density operator is comparable to that from the old potential-moving tricks [18][19][20], and that of the spin operator is more than a factor of 2 larger than the exact value [21].
Fortunately, in the past ten years, people have developed many tricks to solve the problem of the unsatisfactory tensor RG flows. In 2009, Gu and Wen [22] were the first to deal with this problem. They followed Levin's suggestion [11,23] and focused on a toy model called corner double-line (CDL) tensors, which represent systems with only local correlations. They showed that the CDL tensors are fixed points of the TRG, indicating that the local correlations at the smaller length scales will be carried to the larger ones. A crude algorithm was proposed to filter out the CDL tensors and the problem of the tensor RG flows was partially solved, followed up by an improved algorithm in 2017 [24]. From 2015 to 2017, several similar methods were proposed [25][26][27]. All of these advanced TRG-type techniques successfully produced critical fixed-point tensors.
With a critical fixed-point tensor in hand, Gu and Wen [22] pointed out that the scaling dimensions can be extracted by diagonalizing a transfer matrix constructed from the fixed-point tensor, according to a well-known 2D CFT theorem [28]. Later, Evenbly and Vidal used the tensor network renormalization (TNR) [25,26] to implement a local scale transformation that maps a plane to a cylinder [29]; the spectrum of eigenvalues of a transfer matrix on the cylinder gives the scaling dimensions. These methods have been applied to calculate scaling dimensions ever since, while the fixed-point analysis in tensor space has never been followed up.

Figure 1. Different ways to extract scaling dimensions using tensor RG methods: diagonalize a transfer matrix built from the fixed-point tensor [22,28] or construct the local scale transformation [29]. The proposed method in this paper corresponds to the path indicated by the thick arrows.
In this paper, we provide the missing piece of analyzing a fixed point in tensor space at a general bond dimension (see Fig. 1). After laying down the general framework for the canonical RG prescription for the tensor RG methods in Sec. II A, we point out two technical obstacles, local correlations and gauge redundancy, in Sec. II B. The HOTRG is combined with a recently developed technique, graph-independent local truncation (GILT) [30], in Sec. II C, to generate correct tensor RG flows that go to a critical fixed point at a general bond dimension. We call this method GILT-HOTRG. In Sec. II D, we show that most gauge redundancy in the tensor description is automatically fixed during GILT-HOTRG, leaving only tractable sign ambiguities. The linearized RG equation for the GILT-HOTRG is easy to implement and has a simple pictorial representation; in practice, it can be generated by automatic differentiation once the GILT-HOTRG is implemented. The scaling dimensions can be extracted from this linearized RG equation. In Sec. III, the canonical RG prescription in tensor space is benchmarked with 1D and 2D classical Ising models. We conclude in Sec. IV.

II. RENORMALIZATION GROUP IN TENSOR NETWORK LANGUAGE
TRG-type methods start from the fact that the partition functions of all classical statistical models can be rewritten as tensor network models [11]. Take the square-lattice 2D Ising model as a concrete example. The partition function is

Z = \sum_{\{\sigma\}} \exp\Big( K \sum_{\langle i,j \rangle} \sigma_i \sigma_j \Big),   (1)

where \sigma_i is the shorthand for the spin variable \sigma(r_i) located at lattice point r_i and can take values ±1, \langle i,j \rangle runs over nearest-neighbor pairs, and K = J/k_B T. In this paper, we measure temperature in units of J/k_B so it becomes a dimensionless number. The partition function in Eq. (1) can be rewritten as a tensor network by defining a tensor whose four legs carry the four spins sitting on the sides of a plaquette,

A_{ijkl} = e^{K(ij + jk + kl + li)}.   (2)

Each index of this tensor can take two values ±1, and we say the bond dimension of a leg of this tensor is χ = 2. It is now possible to rewrite the partition function of the 2D Ising model in Eq. (1) as the tensor product of N copies of A, with all their indices summed over (Fig. 2),

Z = \mathrm{tTr} \bigotimes_{n=1}^{N} A,   (3)

where tTr denotes the tensor trace, i.e., the summation over all contracted indices. The coarse graining of the tensor network resembles the conventional block-spin methods. We replace a patch of, say, four copies of the original tensor A with one coarse-grained tensor A_c, such that the partition function is approximately described by a coarser tensor network made of N/4 copies of A_c,

Z ≈ \mathrm{tTr} \bigotimes_{n=1}^{N/4} A_c.   (4)

The specific procedure for obtaining A_c from A will be discussed later. The map

A_c = T(A)   (5)

is the tensor RG equation.
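As a concrete numerical check, the construction above can be verified on a small torus. The sketch below (a minimal NumPy example; it uses an equivalent vertex construction of the initial tensor, splitting each bond Boltzmann matrix by a symmetric square root, rather than the plaquette form of Eq. (2)) builds a χ = 2 tensor and confirms that contracting a 2 × 2 periodic network reproduces the brute-force partition sum.

```python
import numpy as np

K = 0.4  # dimensionless coupling J / (k_B T)

# Bond Boltzmann matrix B[s, s'] = exp(K s s') for s, s' = +1, -1.
B = np.array([[np.exp(K), np.exp(-K)],
              [np.exp(-K), np.exp(K)]])

# Symmetric square root M with B = M @ M.T, used to split each bond.
w_eig, U = np.linalg.eigh(B)
M = U @ np.diag(np.sqrt(w_eig)) @ U.T

# Vertex tensor A[l, u, r, d]: one leg per lattice direction, chi = 2.
A = np.einsum('sl,su,sr,sd->lurd', M, M, M, M)

# Contract a 2x2 torus of A tensors (8 bonds, including wrap-arounds).
Z_tn = np.einsum('bfae,ahbg,decf,cgdh->', A, A, A, A)

# Brute force over the 2^4 spin configurations; on a 2x2 torus every
# nearest-neighbor pair is connected by two bonds.
spins = [1, -1]
Z_exact = 0.0
for s00 in spins:
    for s01 in spins:
        for s10 in spins:
            for s11 in spins:
                E = 2 * (s00 * s01 + s10 * s11 + s00 * s10 + s01 * s11)
                Z_exact += np.exp(K * E)

print(Z_tn, Z_exact)
```

The agreement holds bond by bond, since each contracted leg pair resums exactly one bond Boltzmann factor.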

A. General framework
We first define the canonical RG prescription in tensor space. To this end, it is helpful to start with a review of the old approach in Hamiltonian space (we follow the detailed review [21] and textbook [31] closely).
It will be convenient to explain in terms of a specific physical system: a classical system of spin variables σ ∈ {+1, −1} on a lattice, with general short-ranged interactions. The Hamiltonian (or energy) of the system can be parameterized by a set of coupling constants K = {K_j}, each of which couples to a possible short-ranged interaction term s_j(r),

H(K) = \sum_j K_j \sum_r s_j(r).   (6)

For example, if K_1 is the magnetic field, s_1(r) = σ(r) is the spin variable at lattice point r; if K_2 is the nearest-neighbor interaction along the x direction, s_2(r) = σ(r)σ(r + a ê_x), where ê_x is the unit vector along the x direction and a is the lattice constant. A conventional RG transformation maps the old Hamiltonian H to a new one H′ with the same form as Eq. (6); in terms of the coupling constants,

K′ = T_old(K).   (7)

We require that the RG transformation should preserve the partition function of the system and should exhibit a fixed-point Hamiltonian H* parameterized by coupling constants K*, such that K* remains unchanged under the RG transformation,

K* = T_old(K*).   (8)

The linearized RG equation around K* is defined in the following way. We perturb the coupling constants around the fixed point, K_p = K* + δK, and perform the RG transformation defined in Eq. (7), K′_p = T_old(K_p). The new coupling constants K′_p after the RG transformation should be close to K* by continuity, so K′_p = K* + δK′. The linearized RG equation around K* is a matrix R_old telling us how δK′ is related to δK,

δK′_i = \sum_j (R_old)_{ij} δK_j.   (9)

The matrix R_old has right and left eigenvectors {ψ_α}, {φ_α} with the same set of eigenvalues {λ_α},

R_old ψ_α = λ_α ψ_α,  φ_α^T R_old = λ_α φ_α^T.   (10)

The linear combinations of δK_j according to the components of the left eigenvector φ_α are known as scaling fields,

u_α = \sum_j (φ_α)_j δK_j,   (11)

while the linear combinations of interaction terms s_j(r) according to the components of the right eigenvector ψ_α are known as scaling operators,

o_α(r) = \sum_j (ψ_α)_j s_j(r).   (12)

Under the RG transformation with rescaling factor b for a system in dimension d, the scaling fields and the scaling operators transform in a simpler way,

u′_α = λ_α u_α,  o′_α(r′) = b^{x_α} o_α(r),  with λ_α = b^{d − x_α},   (13)

where x_α are the scaling dimensions of the scaling operators o_α(r).
Equations (9) to (11) and (13) give the relation between the scaling dimensions {x_α} and the eigenvalues {λ_α} of the linearized RG equation,

x_α = d − \frac{\ln λ_α}{\ln b}.   (14)

Next, we move on to the tensor approach of the canonical RG prescription. In the tensor RG approach, we skip the Hamiltonian description of the system. Instead, we use a tensor network made of copies of a tensor A to represent the partition function Z of the system. The tensor RG equation is a map from the tensor A to the coarser tensor A_c, as shown in Eq. (5). We claim that the components of the tensor A can be thought of as proxies of the coupling constants K (this claim was hinted at in Ref. [22]).
To see why this claim is reasonable, note that we can map the partition function of the system with the Hamiltonian in Eq. (6) to a tensor network using the method introduced in Ref. [11]. Each component of the initial tensor A is the Boltzmann weight of a given local configuration and depends on the coupling constants,

A_{(i)} = f_{(i)}(K),   (15)

where we group all legs of A to form a single index, A_{(i)} ≡ A_{i_1 i_2 i_3 i_4}. After coarse graining, the components of A_c are still functions of K but with different functional forms,

(A_c)_{(i)} = g_{(i)}(K).   (16)

Now, we require that each component of the coarser tensor A_c should have the same functional form as that of A, but with different coupling constants K′,

(A_c)_{(i)} = f_{(i)}(K′).   (17)

In the old Hamiltonian approach, we need to solve Eq. (17) for K′ in terms of K, which defines the RG equation from the old K to the new K′. However, in the tensor approach, it is enough to know the existence of such K′. Combining Eq. (16) and Eq. (17), we have

f_{(i)}(K′) = g_{(i)}(K).   (18)

At the fixed point, K′ = K = K*, equations (15) and (18) give

A*_{(i)} = f_{(i)}(K*) = g_{(i)}(K*),   (19)

that is, the fixed-point tensor is reproduced by the coarse graining. Taking the total derivatives of the tensors A and A_c in Eqs. (15) and (18) and setting K′ = K = K*,

δA_{(i)} = \sum_n ∂_{(n)} f_{(i)} δK_n,   (20)

δ(A_c)_{(i)} = \sum_n ∂_{(n)} f_{(i)} δK′_n.   (21)

Equations (20) and (21) give the transformation law between the coupling-constant description and the tensor description of the canonical RG prescription, with ∂_{(n)} f_{(i)} evaluated at K* being the change-of-basis matrix. Under this transformation, the linearized RG equation in Eq. (9) becomes

δ(A_c)_{(i)} = \sum_j R_{(i)(j)} δA_{(j)},   (22)

which defines the linearized RG equation in tensor space. Since Eqs. (9) and (22) are the same linear transformation in two different representations, we can equally well diagonalize the matrix R_{(i)(j)} and find the scaling dimensions according to Eq. (14). In Sec. III A, we will use the 1D Ising model as a concrete example to demonstrate the general argument above.
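The conversion in Eq. (14) is a one-line computation once the eigenvalues of the linearized RG equation are known. A minimal sketch (the function name and default arguments are ours, not from the paper):

```python
import numpy as np

def scaling_dimensions(eigenvalues, b=2, d=2):
    """Convert eigenvalues of a linearized RG map with rescaling
    factor b in dimension d into scaling dimensions, Eq. (14)."""
    lams = np.asarray(eigenvalues, dtype=float)
    return d - np.log(lams) / np.log(b)

# Sanity check: an eigenvalue lambda = b**(d - x) must map back to x.
lams = 2.0 ** (2 - np.array([0.0, 0.125, 1.0]))  # b = 2, d = 2
x = scaling_dimensions(lams, b=2, d=2)
print(x)  # -> [0.    0.125 1.   ]
```

For the 2D Ising model the target values would include x = 0.125 for the spin operator and x = 1 for the energy density.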

B. Technical obstacles
There are two major obstacles to the fixed-point analysis in tensor space. They prevent us from obtaining a fixed-point tensor satisfying Eq. (19). The two obstacles are the problem of local correlations and the gauge redundancy in the tensor network language.
Levin and Nave anticipated the first obstacle when looking for fixed points of the RG equation of the TRG [23]. One of the earliest pieces of numerical evidence for the peculiar tensor RG flows of the 2D Ising model was provided by Hinczewski and Berker [18]. Their results indicate that the TRG-type techniques have difficulty integrating out all the local correlations at short distances, so that physics at the lattice scale is carried all the way up to the larger scales. This shortcoming of the TRG-type techniques makes the identification of both non-critical and critical fixed-point tensors very difficult.
To understand how the problem of local correlations at the lattice scale arises in the TRG-type techniques, let us examine the physical picture of the tensor RG transformation. We focus on a concrete example of a tensor network made of 4 × 4 = 16 copies of the tensor A shown in Fig. 2 with periodic boundary conditions. The general picture of a tensor RG transformation is similar to the conventional block-spin methods. For example, we block a square of four tensors by contracting the legs between them and grouping every two legs on the same side; call the new tensor A_c [Eq. (23)], so that the partition function is exactly rewritten in terms of copies of A_c [Eq. (24)]. It is enlightening to put the original spin variables back into the tensor network to get a more physical picture of what is happening under such a block-tensor RG transformation. We refrain from drawing the legs of A and the dashed lines of the spin lattice in Fig. 2, and surround copies of A with squares on whose sides the spin variables sit. The big picture for the block-tensor transformation in Eq. (23) is shown schematically in Fig. 3(a). The process is similar to the decimation in the conventional approaches. After the spin variables shared by every two A tensors forming the same A_c are summed over, we are left with four bigger squares, with two spin variables sitting on each side of each square. When the squares become large enough as the block-tensor transformation goes on, we expect that, roughly speaking, the spin variables on different edges are far away from each other and thus uncorrelated. The only exception is for the spin variables around the four corners. We can use a matrix C in Fig. 3(b) to capture the correlations around the corners; the matrix C must contain physics at the scale of the original lattice constant. Since the spin variables around different corners are far away from each other, the tensor A_CDL corresponding to this black square should factorize into the tensor product of four corner matrices C. A tensor with the structure of A_CDL is called a CDL tensor.
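The factorized structure of a CDL tensor can be made concrete in a few lines. The sketch below uses our own strand convention (each thick leg is a pair of corner-matrix strands; the paper's Fig. 3(b) fixes its own convention): it builds A_CDL from a random corner matrix C and checks that a 2 × 2 torus of CDL tensors contracts to a product of four independent plaquette loops, each equal to tr(C⁴) — purely local correlations and nothing else.

```python
import numpy as np

rng = np.random.default_rng(0)
chi_c = 2                      # dimension of the corner matrix C
C = rng.normal(size=(chi_c, chi_c))

# CDL tensor A[l, u, r, d]; each leg is a pair of strands, e.g.
# l = (l1, l2).  Corners: C[l1,u1], C[u2,r1], C[r2,d2], C[d1,l2].
A8 = np.einsum('ab,cd,ef,gh->ahbcdegf', C, C, C, C)
A = A8.reshape(chi_c**2, chi_c**2, chi_c**2, chi_c**2)

# Contract a 2x2 torus of CDL tensors.
Z = np.einsum('bfae,ahbg,decf,cgdh->', A, A, A, A)

# Each of the 4 plaquettes (including the wrap-around ones) contributes
# an independent loop of four corner matrices, i.e. a factor tr(C^4).
loop = np.trace(np.linalg.matrix_power(C, 4))
print(Z, loop**4)
```

Nothing in the contracted value couples different plaquettes, which is exactly why naive blocking can never wash this lattice-scale structure out.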
The CDL tensors are fixed points of the RG equations of the TRG [22,23,25,30] and the HOTRG [32]. This shows that the TRG and the HOTRG have difficulty in integrating out the local interactions among the spin variables around the corners. If we start with two temperatures T_1 ≠ T_2, both larger than the critical temperature T_c of the 2D Ising model, either of these two methods will generate tensors flowing to two different CDL tensors, as a natural consequence of the fact that these CDL tensors depend, directly, on the bare interaction constants. At criticality, the previous numerical calculations indicate that we will never reach a critical fixed-point tensor [18,25]. Their calculations suggest tensor RG flows shown in Fig. 4(b), where the low- and high-temperature fixed points turn into two fixed lines and the critical fixed point disappears. By comparison, the correct RG flow is shown in Fig. 4(a). We will introduce a way to solve the problem of the CDL tensors for the HOTRG in Sec. II C.
The second obstacle that prevents us from achieving Eq. (19) is that the tensor network representation of the partition function in Eq. (3) has gauge redundancy. If two tensors Ã and A are related through the invertible matrices S_x, S_y by the gauge transformation

Ã_{ijkl} = \sum_{i′j′k′l′} (S_x)_{ii′} (S_y)_{jj′} A_{i′j′k′l′} (S_x^{-1})_{k′k} (S_y^{-1})_{l′l},   (25)

the two tensor networks formed by A and Ã represent the same partition function Z, since the S matrices cancel pairwise on every contracted bond. Equation (25) defines an equivalence relation, where all the elements in an equivalence class represent the same partition function Z. The gauge redundancy makes the correspondence between a tensor A and the partition function Z no longer one-to-one. In the following, we refer to the equivalence class defined by the gauge transformation in Eq. (25) as [A]. In general, we must fix the gauge of the tensor during a tensor RG transformation by choosing a preferred set of bases, so that the fixed-point tensor is manifestly fixed, as is shown in Eq. (19). We will show how the gauge is fixed for a HOTRG-like scheme in Sec. II D.

C. GILT-HOTRG
In this subsection, we present a HOTRG-like scheme that solves the first technical obstacle. Compared with the state-of-the-art TRG-type methods [22, 24-27, 30, 33-36] that can generate correct tensor RG flows, our scheme is the most easily generalized to dimensions higher than 2 and is convenient for the subsequent gauge-fixing and linearization procedures. The graph-independent local truncation (GILT) [30] is performed to filter out the problematic local correlations before the coarse graining of the HOTRG. We call this scheme GILT-HOTRG.
The key feature of the GILT is that it is a stand-alone procedure to filter out the local correlations and does not change the geometry of a given tensor network, so it is very flexible. It has been shown that the TRG combined with the GILT is able to generate correct tensor RG flows for the 2D Ising model [30] and the 2D φ⁴ theory [37].

Figure 5. The basic process of the GILT. The loop of four corner matrices C inside the plaquette is drawn explicitly to make the demonstration clearer. In the first step, a low-rank matrix Q is inserted into a bond. Then Q is split into two pieces using a singular value decomposition; Q is constructed so that it cuts the legs of the corner matrices C during the splitting. Finally, the pieces of the matrix Q are absorbed into the two neighboring tensors. The original GILT paper [30] presents a nice way to determine the low-rank matrix Q.

Figure 5 summarizes the basic process of the GILT. The loop containing four matrices C inside the plaquette represents the local correlations (see Fig. 3(b) and imagine putting four CDL tensors together to form a plaquette). The first step, which is the most crucial one, is to insert a low-rank matrix Q into the leg we wish to truncate. The remaining two steps are exact: we split Q into two pieces using a singular value decomposition and absorb the two pieces into the two adjacent A tensors. The bond dimension of the leg becomes smaller and the local correlations on this leg are filtered out. The low-rank matrix Q is determined by examining the environment E of the bond and performing the singular value decomposition,

E = U s V^†,   (27)

where we refrain from drawing the unknown C matrices in the plaquette. The environment E of the bond should be thought of as a linear map from the vector space of all the legs with ingoing arrows to that of all the legs with outgoing arrows. We can use the tensor U and the diagonal matrix s in Eq. (27) to construct the low-rank matrix Q. To this end, we first define a vector t by contracting the two ingoing legs of the tensor U,

t_i = \sum_a U_{(aa)i}.   (28)

Then, we perform a soft truncation of the vector t according to

t′_i = t_i \frac{s_i^2}{s_i^2 + ε_gilt^2},   (29)

where s_i are the singular values and ε_gilt is the hyperparameter of the GILT. Equation (29) says that the components of the vector t will be set to very small values if the corresponding singular values s_i are much smaller than ε_gilt. The justification for the truncation in Eq. (29) can be found in Ref. [30]. The low-rank matrix Q is constructed from the tensor U† and the truncated vector t′ as

Q_{ab} = \sum_i t′_i (U^†)_{i(ab)}.   (30)

It is proved in Ref. [30] that the matrix Q determined in this way is able to filter out the loop of four C matrices shown in Fig. 5.
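The soft truncation is the only approximate step of the GILT and is easy to state in code. A minimal sketch (assuming the rational soft-threshold form t′_i = t_i s_i²/(s_i² + ε²) of Ref. [30]; variable names are ours):

```python
import numpy as np

def gilt_soft_truncate(t, s, eps_gilt):
    """Suppress components of t whose singular values s are much
    smaller than eps_gilt; leave the rest almost untouched."""
    s = np.asarray(s, dtype=float)
    return t * s**2 / (s**2 + eps_gilt**2)

t = np.array([1.0, 1.0, 1.0])
s = np.array([1.0, 1e-3, 1e-8])     # decaying singular-value spectrum
tp = gilt_soft_truncate(t, s, eps_gilt=1e-5)
print(tp)  # components with s >> eps survive, s << eps are killed
```

The smooth crossover, rather than a hard cutoff, is what keeps the truncation differentiable, which matters later when the whole RG step is fed to automatic differentiation.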
Next, we move on to explain the HOTRG. The block-tensor transformation in Eqs. (23) and (24) is exact but not practical, since the bond dimension grows exponentially with the original lattice size. The HOTRG is an approximate tensor RG transformation, which can keep the bond dimension from growing. For the HOTRG in the vertical direction, we aim at an approximation of a local patch of two copies of A put together vertically, with the grouped horizontal legs compressed by an isometry [Eq. (31)], where w is an isometric tensor to be determined and w† is its Hermitian conjugate. The isometry w is a linear map

w: V_χ ⊗ V_χ → V_χ̃,   (32)

where V_χ denotes a χ-dimensional vector space, and the isometry satisfies w†w = 1. We will later see that the isometric condition on the tensor w makes the gauge fixing in the HOTRG easier. It is shown in Refs. [14,26] that a good approximation can be achieved if the isometry w is the collection of the χ̃ eigenvectors corresponding to the χ̃ largest eigenvalues of the χ²-by-χ² positive semi-definite matrix MM†, where the matrix M is defined by stacking two copies of A vertically and reshaping, with the two grouped left legs as the row index and all the remaining legs as the column index. We use the approximation to replace all pairs of A tensors in the tensor network representation of the partition function [Eq. (33)], where in the second step, we contract the two A tensors and w, w† in the dashed circle to get a coarser tensor A′ [Eq. (34)]. Notice that in the approximation step in Eq. (33), we move the two leftmost w tensors to the right because we have a periodic boundary condition. Equation (34) defines the HOTRG coarse graining in the vertical direction. We usually choose χ̃ ≤ χ_max in Eq. (31) to prevent the bond dimension from growing. It is shown in Ref. [32] that the HOTRG in the vertical direction transforms the A_CDL in Fig. 3(b) in the following way [Eq. (35)]: although the HOTRG can detect and project out the four inner C matrices, it can do nothing about the four outer C matrices. Therefore, the GILT should be applied to filter out these four outer C matrices before the HOTRG coarse graining.
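The determination of the isometry w can be sketched directly in NumPy. The example below uses our own leg ordering A[l, u, r, d] and a random toy tensor in place of the Ising tensor: it stacks two copies of A vertically, groups the two left legs into the row index of M, takes the leading eigenvectors of MM† as w, and checks the isometric condition w†w = 1.

```python
import numpy as np

rng = np.random.default_rng(1)
chi, chi_new = 3, 4            # truncated dimension chi_new <= chi**2

# Toy tensor A[l, u, r, d] with bond dimension chi.
A = rng.normal(size=(chi,) * 4)

# Stack two A tensors vertically, contracting the shared vertical leg:
# B[l1, l2, u, r1, r2, d] = sum_m A[l1, u, r1, m] A[l2, m, r2, d].
B = np.einsum('lurm,LmRd->lLurRd', A, A)

# Group the two left legs into the row index of M.
M = B.reshape(chi**2, -1)

# w: chi_new leading eigenvectors of the positive semi-definite M M^T.
evals, evecs = np.linalg.eigh(M @ M.T)
w = evecs[:, ::-1][:, :chi_new]   # columns sorted largest-eigenvalue first

print(np.allclose(w.T @ w, np.eye(chi_new)))  # isometric: w^T w = 1
```

Applying w and w† to the grouped horizontal legs of the stacked pair then yields the coarser tensor A′ with its bond dimension capped at chi_new.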
To this end, we apply the GILT to the plaquettes where the loops of local correlations are drawn explicitly in Fig. 6 and insert two low-rank matrices Q_A, Q_B into the upper and lower bonds of each plaquette. The legs of the unwanted C matrices will be truncated after the splitting of Q_A, Q_B. Finally, we apply the ordinary HOTRG in the vertical direction to the local patch of tensors in the dashed circle in Fig. 6 to get the coarser tensor A′ [Eq. (36)]. In this way, we can remove all horizontal legs of the C matrices: half of them by the GILT and the other half by the contraction in the HOTRG. We repeat the similar GILT and HOTRG steps on A′ in the horizontal direction. The coarse-graining steps in the two directions together define the RG equation of the GILT-HOTRG [Eq. (37)]. The computational cost of the GILT-HOTRG is O(χ⁷), the same as the HOTRG. The coarse graining defined in Eq. (37) is able to simplify the A_CDL tensor in Fig. 3(b) to a single number [Eq. (38)]. Equation (38) shows that the GILT-HOTRG can successfully filter out the local correlations among the spin variables around the corners at the lattice scale (see Fig. 3(b)).
Since the CDL tensors are no longer fixed points of the RG equation of the GILT-HOTRG, the peculiar fixed lines in Fig. 4(b) generated by the HOTRG will collapse to fixed points; we expect that the RG equation of the GILT-HOTRG exhibits the critical fixed-point tensor shown schematically in Fig. 4(a).

D. Gauge fixing and linearization for the GILT-HOTRG
We show how the gauge is fixed and give the explicit expression of the linearized RG equation for the GILT-HOTRG in this subsection.
Part of the gauge can be fixed if the physical model possesses a global internal symmetry. The global symmetry can be incorporated into the tensor network representation of the model [38][39][40]; it is a generalization of Schur's lemma from matrices to general tensors. For the 2D Ising model, Z_2 symmetry can be imposed. Each index of the tensor A breaks into an even and an odd sector. Half of the gauge is fixed since A is in the bases where the states in the even sector transform trivially and the states in the odd sector are multiplied by −1 under the spin-flip operation.
Most of the remaining gauge in the degenerate sectors of A can be fixed by going to the diagonal bases of the tensor. We show how the S_x gauge redundancy in Eq. (25) is fixed; the S_y one can be dealt with in the same way. Given a tensor A, we first contract its two vertical legs to produce a transfer matrix N_x,

(N_x)_{ik} = \sum_j A_{ijkj}.   (39)

We then find the eigenvalue decomposition of this matrix,

N_x = W_x λ W_x^{-1},   (40)

where λ is the diagonal matrix encoding the eigenvalues. The gauge-fixing transformation in the horizontal direction is defined by acting with the invertible matrix W_x and its inverse on the horizontal legs of the tensor A,

(A^{fixed})_{ijkl} = \sum_{i′k′} (W_x^{-1})_{ii′} A_{i′jk′l} (W_x)_{k′k}.   (41)

To see why the gauge-fixing procedure in Eqs. (39) to (41) defines a preferred set of bases, let us examine how the tensor Ã in Eq. (25) transforms under this procedure. The contraction of the two vertical legs of Ã annihilates S_y and S_y^{-1} on the right-hand side of Eq. (25); the resultant Ñ_x is related to N_x through

Ñ_x = S_x N_x S_x^{-1}.   (42)

It is straightforward to see that the matrix W̃_x coming from the eigenvalue decomposition of the matrix Ñ_x is related to W_x through

W̃_x = S_x W_x d_x,   (43)

where d_x is a diagonal matrix coming from the phase ambiguities of the eigenvectors, with its diagonal entries being phases for general complex matrices. For a real symmetric N_x, the diagonal entries of d_x are ±1. After the horizontal gauge fixing, the tensor Ã becomes

(Ã^{fixed})_{ijkl} = (d_x^{-1})_i (A^{fixed})_{ijkl} (d_x)_k.   (44)

Comparing Eq. (41) with Eq. (44), we see that the gauge redundancies in the two horizontal legs are fixed up to the phase ambiguities. For 2D classical statistical models with spatial reflection symmetries, for example the 2D Ising model, the real matrix N_x can be made symmetric, so the phase ambiguities become sign ambiguities. The gauge-fixing procedure described in Eqs. (39) to (41) is general for all TRG-type techniques. However, this procedure is not necessary for the GILT-HOTRG applied to systems with spatial reflection symmetries like the 2D Ising model, since the RG equation of the GILT-HOTRG has a preferred set of bases. As a result, the gauge redundancy in Eq. (25) collapses into phase ambiguities (or sign ambiguities for real tensors) in the GILT-HOTRG. To make things as simple as possible, we focus on real tensors in the following discussions. The generalization to complex tensors is straightforward.
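The gauge-fixing procedure of Eqs. (39)–(41) can be tested on a toy tensor. In the sketch below (our own conventions: leg order A[l, u, r, d], a left–right symmetrized random tensor so that N_x is real symmetric, and a gauge acting only on the horizontal legs, i.e. S_y = 1), gauge fixing brings A and a gauge-transformed copy Ã to the same tensor up to the expected ±1 sign ambiguities.

```python
import numpy as np

rng = np.random.default_rng(2)
chi = 3

# Random tensor A[l, u, r, d], symmetrized under left-right reflection
# so that the transfer matrix N_x below is symmetric.
A = rng.normal(size=(chi,) * 4)
A = A + A.transpose(2, 1, 0, 3)

def fix_horizontal_gauge(T):
    # Contract the two vertical legs: (N_x)_{ik} = sum_j T[i,j,k,j].
    Nx = np.einsum('ijkj->ik', T)
    # Eigenvalue decomposition of the symmetric N_x.
    _, Wx = np.linalg.eigh(Nx)
    # Rotate the two horizontal legs into the eigenbasis of N_x.
    return np.einsum('ai,ajbl,bk->ijkl', Wx, T, Wx)

# A gauge-transformed copy: an orthogonal S_x on the horizontal legs.
Sx, _ = np.linalg.qr(rng.normal(size=(chi, chi)))
At = np.einsum('ia,ajbl,kb->ijkl', Sx, A, Sx)

Af, Atf = fix_horizontal_gauge(A), fix_horizontal_gauge(At)

# Equal up to sign ambiguities on the horizontal legs.
print(np.allclose(np.abs(Af), np.abs(Atf)))
```

The residual sign freedom is exactly the diagonal d_x of Eq. (43); it survives because each eigenvector of N_x is defined only up to an overall sign.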
Write the tensor RG equation in Eq. (37) schematically as A_c = T(A). For two real tensors A, Ã that are related by the gauge transformation defined in Eq. (25), where we further restrict S_x, S_y to be orthogonal matrices, the new tensors produced by the GILT-HOTRG according to Eq. (37), A_c = T(A), Ã_c = T(Ã), are equal up to sign ambiguities,

(Ã_c)_{ijkl} = (d_x)_i (d_y)_j (d_x)_k (d_y)_l (A_c)_{ijkl},   (45)

where d_x, d_y are vectors with components ±1. Imagine that we manage to fix the sign ambiguities; then the GILT-HOTRG ensures

T(Ã) = T(A).   (46)

Since the orthogonal matrices S_x, S_y are arbitrary, equation (46) says that the whole equivalence class [A] will be mapped into the same tensor A_c. This means that the GILT-HOTRG, after incorporating the sign-fixing step, will choose a preferred set of bases. It is worth mentioning that the TRG has a similar property [21]. For a fixed-point tensor, equation (46) indicates that we can start with any representation Ã* of the equivalence class [A*], and the GILT-HOTRG will bring Ã* to the proper bases; further GILT-HOTRG coarse graining will satisfy Eq. (19),

T(Ã*) = A* = T(A*).   (47)

Let us prove the property of the tensor RG equation of the GILT-HOTRG in Eq. (45) and explain how the sign is fixed to obtain Eq. (46). We focus on the equivalence relation defined as

Ã_{ijkl} = \sum_{i′j′k′l′} (S_x)_{ii′} (S_y)_{jj′} (S_x)_{kk′} (S_y)_{ll′} A_{i′j′k′l′},   (48)

where S_x, S_y are orthogonal matrices. It is sufficient to consider such orthogonal changes of gauge if we restrict to the representations of the equivalence class [A] with spatial reflection symmetries [26],

A_{ijkl} = \sum_{i′k′} (O_x)_{ii′} (O_x)_{kk′} A_{k′ji′l}  and  A_{ijkl} = \sum_{j′l′} (O_y)_{jj′} (O_y)_{ll′} A_{il′kj′},   (49)

where O_x, O_y are orthogonal matrices, also with O_x² = O_y² = 1, and the legs' order convention is as per Eq. (2). It can be shown that, if we start with a tensor with reflection symmetry, the GILT-HOTRG will preserve the reflection symmetry and will rotate the tensor into the set of bases where O_x, O_y become diagonal, with their diagonal entries ±1 [41].
It suffices to discuss the first half of the GILT-HOTRG coarse graining defined in Eq. (36). We want to show that if Ã is fed into the right-hand side of Eq. (36), the Ã′ we obtain on the left-hand side is related to the original A′ by

(Ã′)_{ijkl} = (d_x)_i (d_x)_k \sum_{j′l′} (S_y)_{jj′} (S_y)_{ll′} (A′)_{ij′kl′},   (50)

which means that the gauge redundancy in the horizontal legs is fixed, with only sign ambiguities left, during the first half of the GILT-HOTRG in the vertical direction. It follows immediately that the full GILT-HOTRG will give Eq. (45).
Let us first figure out the correct Q̃_A, Q̃_B matrices in Fig. 6. The environment in Eq. (27) is multiplied by several orthogonal matrices, which do not change the singular values, so s̃_i = s_i. It is easy to check that the tensor Ũ in the singular value decomposition simply picks up the same orthogonal matrices on its ingoing legs (the sign ambiguities coming from the singular value decomposition do not matter here) [Eq. (51)]. The vector t̃ is thus the same as the original t by its definition in Eq. (28), which further gives t̃′_i = t′_i, since the tilde version of the right-hand side of Eq. (29) is the same as the original version. Finally, equation (30) gives

Q̃_A = S_x Q_A S_x^T,   (52)

so that the two pieces obtained from splitting Q̃_A are

Q̃_{Al} = S_x Q_{Al},  Q̃_{Ar} = Q_{Ar} S_x^T.   (53)

The S_x, S_x^T matrices that Q̃_{Ar}, Q̃_{Al} pick up will cancel those acting on the Ã tensor when Q̃_{Ar}, Q̃_{Al} are contracted with the Ã tensor in Eq. (48). The same argument works for Q_B. Equation (53) indicates that all the S_x, S_x^T matrices acting on the four horizontal legs of the local patch in Eq. (36) will be canceled by the low-rank matrices used in the GILT process. The above analysis shows that during the first half of the GILT-HOTRG, the gauge in the horizontal legs will be fixed with only sign ambiguities left, since the GILT favors the bases chosen by the singular value decompositions of Q_A, Q_B.
However, there is one more twist. In practice, we observe that the low-rank matrices are projection operators, which are highly degenerate. As a result, the gauge redundancy in the degenerate subspace will leak out and will be seen by the subsequent HOTRG process. Luckily, the HOTRG has a feature similar to that of the GILT process (cf. Eq. (50)): it is straightforward to see that the isometry w will pick up the suitable S_x, S_x^T matrices to cancel out the gauge transformation leaking out from the GILT process. There remains the concern of whether degeneracy occurs in the eigenvalues of MM†. Our result in Fig. 9(b) shows, a posteriori, that the potential degeneracy does not cause any problem for the 2D Ising model at criticality.
The sign ambiguities d_x, d_y in Eq. (45) can be determined by comparing the signs of the components of Ã_c and A_c. For example, upon making sure that (Ã_c)_{1111} and (A_c)_{1111} are both positive, set j = k = l = 1 in Eq. (45) to obtain

(Ã_c)_{i111} = (d_x)_i (d_x)_1 (A_c)_{i111}.   (54)

The relative sign of (Ã_c)_{i111} and (A_c)_{i111} determines (d_x)_i. However, this sign-fixing method breaks down if both (Ã_c)_{i111} and (A_c)_{i111} vanish, which occurs as long as there is a symmetry. This is the reason why we first fix part of the gauge by exploiting the global internal symmetry of the physical model. Then, we can apply Eq. (54) in each degenerate sector of the tensor. The detailed implementation of the sign-fixing procedure for Z_2 symmetric tensors can be found in the source code of this paper (see Appendix A). This completes the arguments for Eqs. (45) and (46). After reaching the fixed-point tensor A* in Eq. (47), the next step is to linearize the RG equation of the GILT-HOTRG in Eq. (37). We substitute A = A* + δA into the right-hand side of Eq. (37) and collect the terms that are first order in δA to get δA_c [Eq. (55)]. The result resembles the product rule for differentials in calculus: each term varies one tensor in the contraction while the others are held at their fixed-point values. Equation (55) provides a simple pictorial representation of the linearized tensor RG equation R in Eq. (22) for the GILT-HOTRG. In practice, once the fixed-point tensor A*, the pieces of the low-rank matrices Q_A, Q_B, and the isometric tensors w, v in Eq. (37) are determined, automatic differentiation can linearize Eq. (37) around A* and generate Eq. (55) for us. There are many libraries that support automatic differentiation, including PyTorch [42] and JAX [43].
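The sign-fixing rule of Eq. (54) is simple to implement when no component of the reference column vanishes. A minimal sketch (a non-symmetric random toy tensor, so every reference component is generically nonzero; index 0 plays the role of the index 1 above, and the first entries of d_x, d_y are fixed to +1 by convention):

```python
import numpy as np

rng = np.random.default_rng(3)
chi = 3
Ac = rng.normal(size=(chi,) * 4)

# Fake the sign ambiguity of Eq. (45): random d_x, d_y, first entry +1.
dx = np.array([1, -1, 1])
dy = np.array([1, 1, -1])
Act = np.einsum('i,j,k,l,ijkl->ijkl', dx, dy, dx, dy, Ac)

# Recover d_x as in Eq. (54): compare (Act)_{i000} with (Ac)_{i000};
# d_y follows from the analogous column (Act)_{0j00}.
dx_rec = np.sign(Act[:, 0, 0, 0] / Ac[:, 0, 0, 0])
dy_rec = np.sign(Act[0, :, 0, 0] / Ac[0, :, 0, 0])

# Undo the signs and compare with the original tensor.
fixed = np.einsum('i,j,k,l,ijkl->ijkl', dx_rec, dy_rec, dx_rec, dy_rec, Act)
print(np.allclose(fixed, Ac))
```

As noted above, for a symmetric tensor whole columns of reference components can vanish, which is why the procedure is applied sector by sector after the Z_2 symmetry has been imposed.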

III. BENCHMARKS
We use the classical Ising model in 1D and 2D to demonstrate how to carry out the canonical RG prescription in tensor space. The Ising model in 1D serves as a concrete example to elucidate the general argument in Sec. II A. The Ising model in 2D provides more nontrivial benchmark results for our method.

A. The Ising Model in 1D
The Ising model in 1D has an exact real-space RG transformation realized via decimation. Even better, the decimation has a natural tensor network representation. This makes the Ising model in 1D a nice example to see the relation between the old and the new approaches of the canonical RG prescription.
The partition function is Z = Σ_{σ} exp[Σ_i H(σ_i, σ_{i+1})] (Eq. (56)), where the local interaction H(σ, σ') = g + (h/2)(σ + σ') + Kσσ' involves at most the nearest-neighbor term (Eq. (57)). The decimation process is shown in Fig. 7: it is realized by summing over all the even-numbered spins and then renumbering the remaining odd-numbered spins. In the tensor network language, this decimation is nothing but a matrix multiplication of two transfer matrices to form a coarse-grained matrix A_c = AA.
We denote σ'_i = σ_{2i−1}, s_i = σ_{2i} and sum over all s-spins in the partition function in Eq. (56), from which we can define the effective local interaction H' through Eq. (59). The effective local interaction has the same form as the old one in Eq. (57) but with new coupling constants g', h', K' (Eq. (60)). The partition function can then be fully described by the new σ'-spins. Equations (57), (59) and (60) together define the RG equation that maps the old coupling constants (g, h, K) to the new coupling constants (g', h', K'). The explicit expression of the RG equation can be found in Kardar's textbook [44]. The RG equation has two fixed points, one for the high-temperature phase and the other for the low-temperature phase. Let us focus on the high-temperature fixed point here, where the coupling constants are g* = log(1/2), h* = 0, K* = 0. The linearized RG equation around this fixed point gives δg' = 2δg, δh' = δh, δK' = 0 × δK. The matrix R is in its diagonal form with eigenvalues 2, 1, 0 for δg, δh, δK respectively.
Next, we translate the above decimation process into the tensor network language. We first define the tensor A sitting on the bond connecting two spins shown in Fig. 7 (Eq. (62a)). After using the expression for H in Eq. (57), we have A_{σσ'} = exp[g + (h/2)(σ + σ') + Kσσ'] (Eq. (62b)), which is the familiar transfer matrix. Each component of the tensor A is a function of the coupling constants g, h, K, as is claimed in Eq. (15). The partition function in Eq. (56) can be rewritten as a trace of a product of A matrices (Eq. (63)). The decimation in the tensor network language is a multiplication of two old A matrices to form a new A_c matrix, A_c = AA (Eq. (64)). In terms of the new A_c matrix, the partition function keeps the same form. Equation (64) is the tensor RG equation. We set the coupling constants in Eq. (62b) to the high-temperature fixed point g* = log(1/2), h* = 0, K* = 0 to get the fixed-point tensor A*, whose four components all equal 1/2 (Eq. (66)). It can be checked that A*A* = A*. The linearized version of Eq. (64) around this fixed-point tensor is δA_c = δA A* + A* δA = I δA A* + A* δA I (Eq. (67)), where in the last equality we insert two identity matrices. Writing Eq. (67) in its component form, we have (δA_c)_ab = Σ_{α,β} [I_aα (δA)_αβ (A*)_βb + (A*)_aα (δA)_αβ I_βb]. We can read off the matrix of the linearized RG equation as R_{(ab),(αβ)} = I_aα (A*)_βb + (A*)_aα I_βb (Eq. (68)), where we group the two indices a, b into a single index (ab), and α, β into (αβ). If we put the grouped indices in the order given in Eq. (69), the matrix takes the value shown in Eq. (70). This matrix R is symmetric, and we can find its eigenvalues and eigenvectors: the eigenvalues are 2, 1, 1, 0, the same as what we get in the conventional method. The relation between the canonical RG prescription in tensor space and in the Hamiltonian space can be clarified by noticing that the relation between the coupling constants and the tensor A is given in Eq. (62b). We perturb the coupling constants around the fixed point, g_p = log(1/2) + δg, h_p = δh, K_p = δK, substitute them into the right-hand side of Eq. (62b), and Taylor expand to get the perturbed tensor A_p (Eq. (72)). We can read off δA = A_p − A* (Eq. (73)), which is Eq. (20) in practice. Recalling the order convention in Eq. (69), we see the correspondence v_1 ↔ δg, v_2 ↔ δh and v_4 ↔ δK.
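The 4 × 4 matrix R of Eq. (70) and its spectrum can be reproduced numerically in a few lines. The sketch below builds R from Kronecker products, assuming row-major grouping of the paired indices (ab) and (αβ):

```python
import numpy as np

# Fixed-point transfer matrix of the 1D Ising decimation at the
# high-temperature fixed point g* = log(1/2), h* = 0, K* = 0:
# every component of A* equals 1/2, and A* @ A* = A*.
A_star = 0.5 * np.ones((2, 2))
I = np.eye(2)

# (dA_c)_ab = sum_{αβ} [I_aα dA_αβ A*_βb + A*_aα dA_αβ I_βb] becomes,
# after row-major grouping of (ab) and (αβ),
#   R = I ⊗ A*^T + A* ⊗ I.
R = np.kron(I, A_star.T) + np.kron(A_star, I)

# R is symmetric; its eigenvalues match the conventional RG result.
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
print(eigvals)  # eigenvalues 2, 1, 1, 0
```

Since A* is a rank-one projector with eigenvalues 1 and 0, the eigenvalues of R are the pairwise sums {1+1, 1+0, 0+1, 0+0} = {2, 1, 1, 0}.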

B. The Ising Model in 2D
There is no exact RG transformation for the Ising model in 2D, so we use the GILT-HOTRG developed in Sec. II C to generate an RG flow in tensor space. The partition function is given in Eq. (1), and we translate it into a tensor network in Fig. 2. Let us denote the initial tensor in Eq. (2) as A^(0). To prevent a rapid growth of the magnitude of the tensor during the RG transformation, we pull out the Frobenius norm of the tensor, writing A^(0) = ‖A^(0)‖ Ā^(0), where Ā^(0) is the normalized tensor. The normalized tensor Ā^(0) is fed into the RG equation of the GILT-HOTRG in Eq. (37), and we denote the output coarse-grained tensor as A^(1), from which the norm ‖A^(1)‖ is pulled out and the normalized tensor Ā^(1) is defined in the same way as in the previous step. Repeating this process, we have A^(n) = ‖A^(n)‖ Ā^(n) at the n-th step. The RG flow in tensor space can be conveniently visualized by examining the evolution of the norms ‖A^(n)‖ as the RG step n increases.
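This normalize-then-coarse-grain loop can be sketched in a few lines. Here `rg_step` is a placeholder for the GILT-HOTRG map of Eq. (37); for a runnable toy example we reuse the 1D decimation map A → AA.

```python
import numpy as np

def normalized_rg_flow(A0, rg_step, n_steps):
    """Iterate a tensor RG map, pulling out the Frobenius norm at every
    step, and return the list of norms ||A^(n)||.

    `rg_step` stands in for the coarse-graining map of Eq. (37); here it
    can be any function mapping a tensor to a tensor of the same kind.
    """
    norms = []
    A = A0
    for _ in range(n_steps):
        norm = np.linalg.norm(A)   # Frobenius norm ||A^(n)||
        norms.append(norm)
        A = rg_step(A / norm)      # feed the normalized tensor back in
    return norms

# Toy usage with the 1D decimation map as a stand-in; the initial tensor
# is already the fixed point, so the norms stay constant at 1.
norms = normalized_rg_flow(0.5 * np.ones((2, 2)), lambda A: A @ A, 5)
```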
The RG flows of the norms ‖A^(n)‖ indicate that the GILT-HOTRG is capable of generating a correct RG flow for the 2D Ising model in tensor space, shown schematically in Fig. 4(a). For example, for bond dimension χ = 30 and GILT hyper-parameter ε_gilt = 6 × 10^−6, Fig. 8(a) shows several RG flows of the tensor norms ‖A^(n)‖ at different temperatures. For a given bond dimension χ, an estimated critical temperature T_c^[χ] can be determined using the bisection method; for χ = 30, the relative deviation of the estimated value from the exact T_c, |T_c^[30] − T_c|/T_c, is of order 10^−6. At temperatures off by ∆T = ±10^−3 from T_c^[30], the tensor flows to the high- and low-temperature trivial fixed-point tensors, respectively, before it comes near to (A^[30])*_cr. As |∆T| becomes as small as order 10^−6, the tensor stays in the vicinity of the critical fixed-point tensor (A^[30])*_cr for a while and then flows away to one of the two trivial fixed-point tensors. If |∆T| becomes smaller still, of order 10^−10, the tensor stays longer near (A^[30])*_cr. By comparison, the RG flow of ‖A^(n)‖ generated by the HOTRG with bond dimension χ = 12 [45] is displayed in Fig. 8(b). The flow shows that the HOTRG has difficulty exhibiting a critical fixed-point tensor or producing isolated trivial fixed-point tensors. It is interesting to mention that the RG flow generated by the TRG behaves similarly [18] for bond dimensions χ > 8.
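The bisection search for T_c^[χ] can be sketched as follows. The function `flows_to_high_T(T)` is a hypothetical oracle that runs the RG flow at temperature T and reports which trivial fixed point the tensor reaches; in the toy usage below it is replaced by a comparison against the exact 2D Ising T_c.

```python
import math

def bisect_Tc(flows_to_high_T, T_low, T_high, tol=1e-10):
    """Bisection search for the estimated critical temperature T_c^[chi].

    `flows_to_high_T(T)` (hypothetical) runs the RG flow at temperature T
    and returns True if the tensor ends up at the high-temperature
    trivial fixed point.
    """
    while T_high - T_low > tol:
        T_mid = 0.5 * (T_low + T_high)
        if flows_to_high_T(T_mid):
            T_high = T_mid   # still in the high-T phase: lower the bracket
        else:
            T_low = T_mid    # low-T phase: raise the bracket
    return 0.5 * (T_low + T_high)

# Toy usage with the exact Onsager T_c = 2 / log(1 + sqrt(2)) as the boundary:
T_c_exact = 2.0 / math.log(1.0 + math.sqrt(2.0))
T_est = bisect_Tc(lambda T: T > T_c_exact, 1.0, 4.0)
```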
To make sure that the plateau in the RG flow of ‖A^(n)‖ gives a critical fixed-point tensor (A^[30])*_cr at the estimated critical temperature T_c^[30], we plot the singular values s^(n) of the tensors A^(n), obtained from a singular value decomposition of the matricized tensor. The RG flow of the singular values in Fig. 9(a) indicates that we indeed reach a non-trivial fixed-point tensor. The fixed-point tensor is manifestly fixed numerically after adding the sign-fixing step to the GILT-HOTRG, which can be confirmed by plotting the Frobenius norm of the difference between the normalized tensors at successive RG steps, ‖Ā^(n+1) − Ā^(n)‖; see Fig. 9(b). The norm of the difference starts to decay systematically at RG step n = 14, goes all the way down to order ∼10^−2 at n = 23, and then increases when the tensor begins to flow away from the critical fixed point. By comparison, we show the RG flow of ‖Ā^(n+1) − Ā^(n)‖ without sign fixing in Fig. 9(c); the sign ambiguities in Eq. (45) prevent us from achieving a manifestly-fixed-point tensor, except at RG step n = 22, where the tensor happens to have all signs correct by accident. We use the automatic differentiation implemented in JAX [43] to generate the linearized tensor RG equation. The RG prescription in tensor space gives correct scaling dimensions up to 2.125. The results at RG steps n = 14 and 28 are unreliable, since ‖Ā^(n+1) − Ā^(n)‖ is there of order 1 (see Fig. 9(b)). The results for n = 15, 16, …, 27 indicate that the scaling dimensions from the RG prescription in tensor space are reliable as long as ‖Ā^(n+1) − Ā^(n)‖ is of order 10^−1 or smaller.
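Once the linearized map R is available, the scaling dimensions follow from its eigenvalues. The sketch below assumes the common convention x_i = −log_b(λ_i/λ_0) with rescaling factor b = 2, normalizing by the largest eigenvalue λ_0 so that the identity operator gets x = 0; the precise relation used in the text is Eq. (22), so check against it before relying on this form. The 1D example of Sec. III A serves as the test case.

```python
import numpy as np

def scaling_dims(R, b=2.0):
    """Scaling dimensions from the eigenvalues of a linearized RG map.

    Assumes x_i = -log_b(lambda_i / lambda_0), normalized by the largest
    eigenvalue lambda_0 (a common convention; compare with Eq. (22)).
    """
    lams = np.sort(np.abs(np.linalg.eigvals(R)))[::-1]
    lams = lams[lams > 1e-12]            # drop exactly vanishing eigenvalues
    return np.log(lams[0] / lams) / np.log(b)

# 1D Ising high-temperature fixed point: R = I ⊗ A* + A* ⊗ I, A* = ones/2.
A_star = 0.5 * np.ones((2, 2))
R = np.kron(np.eye(2), A_star) + np.kron(A_star, np.eye(2))
x = scaling_dims(R)   # eigenvalues 2, 1, 1 give x = [0, 1, 1]
```

In the 2D calculation R comes from automatic differentiation instead of `np.kron`, and the low-lying x_i are compared with the exact Ising CFT values 0, 0.125, 1, 1.125, 2, ….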
In Table I, we show the scaling dimensions for all relevant and marginal operators at RG step n = 22, compared with the results obtained by Gu and Wen's method [22]. Both methods have similar accuracy for scaling dimensions less than or equal to 1.125, but Gu and Wen's method cannot produce higher scaling dimensions. Our method gives two out of the four scaling dimensions equal to 2 accurately, but the remaining two are overestimated, coming out closer to 2.125. It is worth mentioning that we only use a single fixed-point tensor to construct the transfer matrix in Gu and Wen's method. If we used two or more copies of the fixed-point tensor to construct a larger transfer matrix, Gu and Wen's method would be expected to produce the higher scaling dimensions correctly [24].
We end this section with a few remarks on the above calculations. Firstly, we impose the Z_2 symmetry of the tensors [38,39] when generating the RG flow in tensor space, for three reasons. First, only if the Z_2 symmetry of the tensor is imposed is the low-temperature fixed-point tensor stable under the RG; otherwise, it eventually flows to the high-temperature fixed point due to numerical errors, which makes the bisection search for the estimated critical temperature T_c^[χ] less convenient. The second merit of symmetric tensors is that half of the gauge redundancy is automatically fixed (see Sec. II D), making the sign-fixing procedure in the GILT-HOTRG easier. The third reason is to speed up the computations. However, we roll back to ordinary tensors when performing the RG prescription in tensor space, since the perturbations around the fixed-point tensor need not preserve the Z_2 symmetry (for example, the spin operator).
The second remark is about the improvement of the accuracy as the bond dimension χ increases. There are two sources of approximation error in the above computations. One comes from the truncation of the CDL tensors during the GILT, which is necessary for producing the critical fixed point; this error is controlled by the hyper-parameter ε_gilt. The other source is the leg-squeezing step during the HOTRG that prevents the growth of the bond dimension; this error can be reduced by increasing the bond dimension χ. In general, for a given χ, ε_gilt should be as small as possible provided that the GILT-HOTRG can still exhibit a critical fixed-point tensor. In practice, we tried χ = 10, 20, 30, with ε_gilt going down from 6 × 10^−4 to 6 × 10^−5 and further to 6 × 10^−6. The estimated scaling dimensions converge to the exact results in this process.
The third remark is about the overall multiplicative constant in front of the fixed-point tensor. After reaching the critical fixed point, the RG from the n-th step to the (n+1)-th step maps the fixed-point tensor to itself only up to an overall constant: the shape of A* is fixed, but its magnitude still changes under the RG transformation. It has been shown in Ref. [22] that the fixed-point tensor with the correct magnitude is simply given by A*_inv = ‖A*‖^{−1/3} A*, for which A*_inv → A*_inv under the RG transformation. Our numerical results have confirmed this statement. The final remark is that the problem of local correlations could be removed by methods [22,24-27,33-36] other than the GILT. For example, the TNR [25,26] is known to be capable of exhibiting critical fixed-point tensors, its RG equation is similar to that of the GILT-HOTRG, and there is a method to fix its gauge [26]. Considering the unprecedented accuracy of the TNR, the estimation of the scaling dimensions might be much better. We develop the canonical RG prescription in tensor space using the GILT-HOTRG in this paper in order to prepare for further applications to 3D systems.
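The exponent −1/3 can be understood from a short homogeneity argument (a sketch, assuming each full 2D RG step contracts four copies of the input tensor, so the map f is homogeneous of degree four, and treating the isometries and GILT matrices as insensitive to the overall scale):

```latex
\begin{aligned}
  f(cA) &= c^{4} f(A) && \text{(four copies of $A$ per full RG step)} \\
  f(A^{*}) &= \lambda A^{*}, \quad \lambda = \lVert A^{*}\rVert
            && \text{(shape fixed, magnitude not)} \\
  f(sA^{*}) &= s^{4}\lambda A^{*} \overset{!}{=} sA^{*}
  \;\Longrightarrow\; s^{3} = \lambda^{-1}
  \;\Longrightarrow\; s = \lVert A^{*}\rVert^{-1/3},
\end{aligned}
```

which reproduces A*_inv = ‖A*‖^{−1/3} A* of Ref. [22].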

IV. SUMMARY AND DISCUSSIONS
In this paper, we show how to analyze an RG fixed point in tensor space. The general procedure is summarized as follows: reach a fixed-point tensor using a TRG-type RG equation, fix the gauge redundancy to make the fixed-point tensor manifestly fixed, linearize the RG equation around this fixed-point tensor, and finally calculate the scaling dimensions from the eigenvalues of the linearized tensor RG equation. In practice, we propose the GILT-HOTRG to carry out this canonical RG prescription in tensor space. The benchmark results for the 2D classical Ising model are comparable with those of the existing method. In future work, we will generalize the GILT-HOTRG and apply the canonical tensor RG prescription to 3D systems, where there are few practical tensor network methods to extract scaling dimensions efficiently.