Circuit complexity across a topological phase transition

Fangli Liu,1,2 Seth Whitsitt,1,2 Jonathan B. Curtis,1 Rex Lundgren,1,2 Paraj Titum,1,2,3 Zhi-Cheng Yang,1,2 James R. Garrison,1,2 and Alexey V. Gorshkov1,2
1Joint Quantum Institute, NIST/University of Maryland, College Park, Maryland 20742, USA
2Joint Center for Quantum Information and Computer Science, NIST/University of Maryland, College Park, Maryland 20742, USA
3Johns Hopkins University Applied Physics Laboratory, Laurel, Maryland 20723, USA


I. INTRODUCTION
In computer science, the notion of computational complexity refers to the minimum number of elementary operations required to implement a given task. This concept readily extends to quantum information science, where quantum circuit complexity denotes the minimum number of gates required to implement a desired unitary transformation. The corresponding circuit complexity of a quantum state characterizes how difficult it is to construct a unitary transformation U that evolves a reference state to the desired target state [1,2]. Nielsen and collaborators used a geometric approach to tackle the problem of quantum complexity [3-5]. Suppose that the unitary transformation U(t) is generated by some time-dependent Hamiltonian H(t), with the requirement that U(t_f) = U (where t_f denotes the final time). Then the quantum state complexity is quantified by imposing a cost functional F[H(t)] on the control Hamiltonian H(t). By choosing a cost functional that defines a Riemannian geometry in the space of circuits, the problem of finding the optimal control Hamiltonian synthesizing U corresponds to finding minimal geodesic paths in that Riemannian geometry [3-5].
Recently, Nielsen's approach has been adopted in high-energy physics to quantify the complexity of quantum-field-theory states [6-18]. This is motivated, in part, by conjectures relating the complexity of a boundary field theory to the bulk spacetime geometry, i.e., the so-called "complexity equals volume" [19,20] and "complexity equals action" [21,22] proposals. Jefferson et al. used Nielsen's approach to calculate the complexity of a free scalar field [6] and found surprising similarities to the results of holographic complexity. A complementary study by Chapman et al., using the Fubini-Study metric to quantify complexity [23], gave similar results. Several recent works have generalized these studies to other states, including coherent states [8,24], thermofield double states [7,11], and free fermion fields [12-14]. However, the connection between the geometric definition of circuit complexity and quantum phase transitions has so far remained unexplored. This connection is fundamentally important, and it is also intimately related to the long-standing problem of quantum state preparation across critical points [25-27].
In this work, we consider the circuit complexity of a topological quantum system. In particular, we use Nielsen's approach to study the circuit complexity of the Kitaev chain, a prototypical model exhibiting topological phase transitions and hosting Majorana zero modes [28][29][30][31][32][33]. Strikingly, we find that the circuit complexity derived using this approach exhibits nonanalytical behaviors at the critical points for both equilibrium and dynamical topological phase transitions. Moreover, the optimal Hamiltonian connecting the initial and final states must be nonlocal in real space when evolving across a critical point. We further generalize our results to a Kitaev chain with long-range pairing, and we discuss universal features of nonanalyticities at the critical points in higher dimensions. Our work establishes a connection between geometrical circuit complexity and quantum phase transitions, and it paves the way toward using complexity as a tool to study quantum many-body systems.

II. THE MODEL
The one-dimensional (1D) Kitaev model is described by the following Hamiltonian [28,29]:

Ĥ = Σ_{j=1}^{L} [ (J/2)(â†_j â_{j+1} + H.c.) + (Δ/2)(â_j â_{j+1} + H.c.) + μ â†_j â_j ],  (1)

where J is the hopping amplitude, Δ is the superconducting pairing strength, μ is the chemical potential, L is the total number of sites (assumed to be even), and â†_j (â_j) creates (annihilates) a fermion at site j. We set J = 1 and assume antiperiodic boundary conditions (â_{L+1} = −â_1). Upon Fourier transforming, Eq. (1) can be written in the momentum basis (up to an additive constant)

Ĥ = Σ_{k_n > 0} ψ̂†_{k_n} [ (μ + cos k_n) τ³ + Δ sin k_n τ² ] ψ̂_{k_n},  (2)

where ψ̂_{k_n} = (â_{k_n}, â†_{−k_n})^T is the Nambu spinor and k_n = (2π/L)(n + 1/2) with n = 0, 1, . . . , L/2 − 1. The above Hamiltonian can be diagonalized via a Bogoliubov transformation, which yields the excitation spectrum ε_{k_n} = √[(μ + cos k_n)² + Δ² sin² k_n]. The ground state of Eq. (1) can be written as

|Ω⟩ = ∏_{k_n > 0} [ cos θ_{k_n} + i sin θ_{k_n} â†_{k_n} â†_{−k_n} ] |0⟩,  (3)

where tan(2θ_{k_n}) = Δ sin k_n/(μ + cos k_n). A topological phase transition occurs when the quasiparticle spectrum is gapless [28], as illustrated in Fig. 1(a). The nontrivial topological phase is characterized by a nonzero winding number and the presence of Majorana edge modes [28-33].
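As a quick numerical check (our addition, not part of the original text), the spectrum ε_{k_n} can be evaluated on the antiperiodic momentum grid; its minimum (the gap) closes only at the critical points μ = ±1. A minimal NumPy sketch, assuming the conventions above (J = 1):

```python
import numpy as np

def kitaev_spectrum(mu, delta, L):
    """Quasiparticle spectrum eps_{k_n} = sqrt((mu + cos k_n)^2 + delta^2 sin^2 k_n)
    (J = 1) on the antiperiodic grid k_n = (2*pi/L)*(n + 1/2), n = 0..L/2-1."""
    k = 2 * np.pi / L * (np.arange(L // 2) + 0.5)
    return k, np.sqrt((mu + np.cos(k)) ** 2 + (delta * np.sin(k)) ** 2)

# The minimum of eps_k (the gap) closes only at the critical points mu = +/-1.
for mu in (0.5, 1.0, 1.5):
    _, eps = kitaev_spectrum(mu, delta=1.0, L=4000)
    print(mu, eps.min())
```

For μ = 0.5 and μ = 1.5 the gap is finite (here 0.5, attained near k = π), while at μ = 1 it vanishes as ~π/L with system size.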

III. COMPLEXITY FOR A PAIR OF FERMIONS
Since Hamiltonian (1) is noninteracting, the ground-state wave function (3) couples only pairs of fermionic modes with momenta ±k n , and different momentum pairs are decoupled. Hence, we first compute the circuit complexity of one such fermionic pair [12][13][14], and then we obtain the complexity of the full system by summing over all momentum contributions [6,23].
Let us consider the reference ("R") and target ("T") states with the same momentum but different Bogoliubov angles:

|ψ_R⟩ = [ cos θ^R_k + i sin θ^R_k â†_k â†_{−k} ] |0⟩,  |ψ_T⟩ = [ cos θ^T_k + i sin θ^T_k â†_k â†_{−k} ] |0⟩.

Expanding the target state in the basis of |ψ_R⟩ and |ψ_R⊥⟩ (i.e., the state orthogonal to |ψ_R⟩), we have |ψ_T⟩ = cos(Δθ_k)|ψ_R⟩ + sin(Δθ_k)|ψ_R⊥⟩, with Δθ_k ≡ θ^T_k − θ^R_k. The goal is now to find the optimal circuit achieving the unitary transformation connecting |ψ_R⟩ and |ψ_T⟩,

|ψ_T⟩ = e^{iφ} U_k |ψ_R⟩,  (4)

where φ is an arbitrary phase. Nielsen approached this as a Hamiltonian control problem, i.e., finding a time-dependent Hamiltonian H_k(s) = Σ_I Y^I_k(s) O_I that synthesizes a trajectory in the space of unitaries [3,4]:

U_k(s) = ←P exp[ ∫_0^s ds′ H_k(s′) ],  (5)

with boundary conditions U_k(s = 0) = 1 and U_k(s = 1) = U_k. Here, ←P is the path-ordering operator, and the O_I are the generators of U(2). The idea is then to define a cost (i.e., "length") functional for the various possible paths achieving U_k [3,4,6,12],

D[U_k] = ∫_0^1 ds √( Σ_I |Y^I_k(s)|² ),  (6)

and to identify the optimal circuit, or path, by minimizing this functional. The cost of the optimal path is called the circuit complexity C of the target state, i.e., C(|ψ_R⟩ → |ψ_T⟩) = min D[U_k]. Following the procedures in Refs. [12-14], one can explicitly calculate the circuit complexity for synthesizing the unitary transformation (4). For quadratic Hamiltonians, it is a simple expression that depends only on the difference between the Bogoliubov angles (see Appendix A):

C(|ψ_R⟩ → |ψ_T⟩) = |Δθ_k|².  (7)

Note that the complexity C for two fermions is at most π²/4, since |Δθ_k| ∈ [0, π/2]. The maximum value is achieved when the target state has vanishing overlap with the reference state.
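The pair complexity (7) is straightforward to evaluate. The sketch below (our own illustration; the helper names are not from the original) uses arctan2 to pick the Bogoliubov-angle branch that is continuous in k, and checks that the bound π²/4 is saturated at k = π when reference and target sit on opposite sides of the transition:

```python
import numpy as np

def bogoliubov_angle(k, mu, delta=1.0):
    # theta_k with tan(2 theta_k) = delta sin k / (mu + cos k); arctan2 picks
    # the branch continuous in k on (0, pi] for mu > -1.
    return 0.5 * np.arctan2(delta * np.sin(k), mu + np.cos(k))

def pair_complexity(k, mu_R, mu_T, delta=1.0):
    """C = |dtheta_k|^2 for one +/-k pair of modes, Eq. (7)."""
    dtheta = bogoliubov_angle(k, mu_T, delta) - bogoliubov_angle(k, mu_R, delta)
    return dtheta ** 2

# The bound C <= pi^2/4 is saturated at k = pi when the reference and target
# lie on opposite sides of the transition (the pair states become orthogonal).
print(pair_complexity(np.pi, mu_R=2.0, mu_T=0.0))
```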

IV. COMPLEXITY FOR THE FULL WAVE FUNCTION
Given the circuit complexity for a pair of fermionic modes, one can readily obtain the complexity of the full many-body wave function. The total unitary transformation that connects two different ground states [Eq. (3)] is

U = ∏_{k_n > 0} U_{k_n},  (8)

where U_{k_n}, given by Eq. (4), connects two fermionic states with momenta ±k_n. By choosing the cost function to be a summation of all momentum contributions [6,12-14], it is straightforward to obtain the total circuit complexity,

C = Σ_{k_n} |Δθ_{k_n}|²,  (9)

where Δθ_{k_n} is the difference of the Bogoliubov angles at momentum k_n. In the infinite-system-size limit, the summation can be replaced by an integral, and one can derive that C ∝ L. This "volume law" dependence is reminiscent of the "complexity equals volume" conjecture in holography [19,20], albeit in a different setting. The circuit complexity given by Eq. (9) has a geometric interpretation, as it is the squared Euclidean distance in a high-dimensional space [34]. The geodesic path (or optimal circuit) in unitary space turns out to be a straight line connecting the two points [i.e., H_k(s) independent of s] (Appendix A). In the remainder of this paper, we demonstrate that the circuit complexity between two states is able to reveal both equilibrium and dynamical topological phase transitions.
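The volume law C ∝ L can be checked numerically (a sketch of ours, reusing the conventions of the model section): doubling L doubles C, since the complexity density converges in the thermodynamic limit.

```python
import numpy as np

def total_complexity(mu_R, mu_T, delta=1.0, L=1000):
    """C = sum_n |dtheta_{k_n}|^2, Eq. (9), over k_n = (2 pi/L)(n + 1/2)."""
    k = 2 * np.pi / L * (np.arange(L // 2) + 0.5)
    th = lambda mu: 0.5 * np.arctan2(delta * np.sin(k), mu + np.cos(k))
    return np.sum((th(mu_T) - th(mu_R)) ** 2)

# Volume law: doubling L doubles C (the complexity density converges).
c1 = total_complexity(2.0, 0.0, L=1000)
c2 = total_complexity(2.0, 0.0, L=2000)
print(c1, c2, c2 / c1)
```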
We first choose a fixed ground state as the reference state and calculate the circuit complexities for target ground states with various chemical potentials μ T , crossing the phase transition point. The circuit complexity increases as the difference between the parameters of reference and target states is increased [ Fig. 1(b)]. More importantly, the complexity grows rapidly around the critical points (μ T = ±1), changing from a convex function to a concave function at the critical points. This is further illustrated in Fig. 1(c), where we plot the derivative (susceptibility) of circuit complexity with respect to μ T . The clear divergence at the critical points indicates that circuit complexity is nonanalytical at the critical points (see Appendix B for the derivation), and thus can signal the presence of a quantum phase transition. We emphasize that these features are robust signatures of phase transitions, which do not change if one chooses a different reference state in the same phase [see Figs. 1(b) and 1(c)].
We further plot Δθ_{k_n} versus the momentum k_n for various target states (with a fixed reference state) in Fig. 1(d). When both states are in the same phase, Δθ_{k_n} first increases with momentum and finally decreases to 0 as k_n approaches π. In contrast, when μ_T is beyond its critical value, Δθ_{k_n} increases monotonically with momentum and takes the maximal value of π/2 at k_n = π. This is closely related to the topological phase transition characterized by winding numbers: upon traversing half of the Brillouin zone, the Bogoliubov angles of two states end up at the same pole (on the Bloch sphere) if and only if the states belong to the same phase [29]. Hence, the nonanalytical nature of the circuit complexity is closely related to the change of the topological number (and thus to the topological phase transition).
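The zone-edge behavior described above can be made concrete numerically (our sketch): for a reference at μ_R = 2, a same-phase target gives Δθ(π) ≈ 0, while a target across the transition gives Δθ(π) ≈ π/2.

```python
import numpy as np

def theta_near_pi(mu, delta=1.0, eps=1e-9):
    # Bogoliubov angle just below k = pi: -> pi/2 in the topological phase
    # (-1 < mu < 1), -> 0 in the trivial phase (mu > 1).
    k = np.pi - eps
    return 0.5 * np.arctan2(delta * np.sin(k), mu + np.cos(k))

# Same phase as a mu = 2 reference: dtheta(pi) ~ 0. Across the transition:
# dtheta(pi) ~ pi/2, so |dtheta| reaches its maximum at the zone edge.
print(theta_near_pi(1.5) - theta_near_pi(2.0))   # ~ 0
print(theta_near_pi(0.5) - theta_near_pi(2.0))   # ~ pi/2
```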
Analytically, the derivatives of the circuit complexity (7) can be explicitly recast into a closed contour integral over the complex variable z = e ik (see Appendix B for detailed derivations). Depending on the parameters of the target states, the poles associated with the integrand are located inside or outside the contour. When the target state goes across a phase transition, the poles sit exactly on the contour, resulting in the divergence of the derivatives of the circuit complexity at critical points (Appendix B). Interestingly, the whole parameter space can be classified into four different phase regimes depending on which poles lie inside the contour (see Fig. 5 in Appendix B), which agrees exactly with the phase diagram shown in Fig. 1(a).

V. REAL-SPACE LOCALITY OF THE OPTIMAL HAMILTONIAN
Since the ground state [Eq. (3)] is a product over all momentum pairs, the optimal circuit connecting two different ground states corresponds to the following Hamiltonian:

H = i Σ_{k>0} Δθ_k ψ̂†_k τ¹ ψ̂_k,  (10)

where the τ^i are Pauli matrices and ψ̂_k = (â_k, â†_{−k})^T denotes the Nambu spinor. By taking a Fourier series of the above optimal Hamiltonian, one can show that it can be written in real space as a sum of pairing terms coupling sites a distance n apart (see Appendix C for details):

H = Σ_{n=1}^{∞} ω_n H_n,  with Δθ(k) = 2 Σ_{n=1}^{∞} ω_n sin(nk),  (11)

where H_n contains the real-space pairing terms â†_j â†_{j+n} + H.c. One crucial observation is that when the two ground states are in the same phase, Δθ(0) = Δθ(π) = 0 [see Fig. 1(d)]; hence the Fourier series of Δθ(k) converges uniformly. Therefore, the full series can be approximated by a finite order N* with arbitrarily small error. This immediately implies that the real-space optimal Hamiltonian (11) is local, with a finite range N*. In sharp contrast, if the two states belong to different phases, Δθ(π) = π/2 while Δθ(0) = 0, and the Fourier series of Δθ(k) converges at most pointwise. Thus the optimal Hamiltonian must be truly long-range (nonlocal) in real space (Appendix C), given that the total evolution time is chosen to be a constant [Eq. (5)]. Compared to previous works on classifying gapped phases of matter using local unitary circuits [35-37], our results provide an alternative approach with a natural geometric interpretation.
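The locality argument can be illustrated numerically (our sketch, with the coefficients ω_n computed by a simple trapezoid rule): within one phase the sine-series coefficients of Δθ(k) decay rapidly, while across the transition a slow ~1/n tail survives.

```python
import numpy as np

def omega_coeffs(mu_R, mu_T, delta=1.0, nmax=50, nk=20001):
    """Sine-series coefficients with dtheta(k) = 2 sum_n omega_n sin(n k),
    i.e. omega_n = (1/pi) int_0^pi dtheta(k) sin(n k) dk (trapezoid rule)."""
    k = np.linspace(0.0, np.pi, nk)
    dk = k[1] - k[0]
    th = lambda mu: 0.5 * np.arctan2(delta * np.sin(k), mu + np.cos(k))
    f = th(mu_T) - th(mu_R)
    coeffs = []
    for n in range(1, nmax + 1):
        g = f * np.sin(n * k)
        coeffs.append(dk * (g.sum() - 0.5 * (g[0] + g[-1])) / np.pi)
    return np.abs(np.array(coeffs))

same = omega_coeffs(2.0, 3.0)   # reference and target both trivial
cross = omega_coeffs(2.0, 0.0)  # target across the transition
# Same phase: omega_n decays rapidly (local Hamiltonian). Across the
# transition: a slow ~1/n tail survives (long-range Hamiltonian).
print(same[49] / same[4], cross[49] / cross[4])
```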
VI. COMPLEXITY GROWTH AFTER A SUDDEN QUENCH
We take the initial state to be the ground state of a Hamiltonian Ĥ_i, and we consider the growth of the circuit complexity under a sudden quench to a different Hamiltonian Ĥ_f. The reference and target states are chosen as the initial state |Ψ_i⟩ and the time-evolved state |Ψ(t)⟩, respectively. The time-evolved state |Ψ(t)⟩ can be written in terms of the Bogoliubov angle differences Δθ_{k_n} between the eigenstates of Ĥ_i and Ĥ_f, and of the energy levels ε_{k_n} and normal-mode operators Â_{k_n} of the postquench Hamiltonian [51,52]. Similar to the previous derivations, one obtains the time-dependent circuit complexity

C(t) = Σ_{k_n} |φ_{k_n}(t)|²,

where φ_{k_n}(t) = arccos √(1 − sin²(2Δθ_{k_n}) sin²(ε_{k_n}t)). As shown in Fig. 3(a), the circuit complexity first increases linearly and then oscillates [9,10,15] before quickly approaching a time-independent value. The steady-state value of the circuit complexity increases with μ_f of the postquench Hamiltonian until the phase transition occurs [Fig. 3(a)].
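A minimal numerical sketch of the quench protocol (ours, using the same conventions as above): the complexity vanishes at t = 0 and then grows and oscillates about a steady-state value set by the postquench parameters.

```python
import numpy as np

def quench_complexity(t, mu_i, mu_f, delta=1.0, L=2000):
    """C(t) = sum_n phi_{k_n}(t)^2 with
    phi = arccos(sqrt(1 - sin^2(2 dtheta_{k_n}) sin^2(eps_{k_n} t)))."""
    k = 2 * np.pi / L * (np.arange(L // 2) + 0.5)
    th = lambda mu: 0.5 * np.arctan2(delta * np.sin(k), mu + np.cos(k))
    dtheta = th(mu_f) - th(mu_i)
    eps = np.sqrt((mu_f + np.cos(k)) ** 2 + (delta * np.sin(k)) ** 2)
    phi = np.arccos(np.sqrt(1.0 - np.sin(2 * dtheta) ** 2 * np.sin(eps * t) ** 2))
    return np.sum(phi ** 2)

# C(0) = 0 (target = reference); C(t) then grows toward a steady-state value.
print(quench_complexity(0.0, 0.0, 2.0), quench_complexity(10.0, 0.0, 2.0))
```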

VII. GENERALIZATION TO A LONG-RANGE KITAEV CHAIN AND HIGHER DIMENSIONS
We further give an example of a Kitaev chain with long-range pairing [53-56], in which the nearest-neighbor pairing term of Eq. (1) is replaced by a power-law-decaying one, (Δ/2) Σ_j Σ_{ℓ=1}^{L−1} d_ℓ^{−α} (â_j â_{j+ℓ} + H.c.), where d_ℓ = min(ℓ, L − ℓ). In contrast to the short-range model, the long-range model with α < 1 hosts topological phases with semi-integer winding numbers [53,56]. As one can see, the derivative of the ground-state circuit complexity diverges only at μ_T = 1 [Fig. 4(a)], in contrast with Fig. 1(c). This agrees perfectly with the phase diagram of the long-range model, where a topological phase transition occurs only at μ = 1 for α = 0 [56]. Figure 4(b) shows the long-time steady-state values of the circuit complexity after a sudden quench. Again, one observes nonanalytical behavior only at μ_T = 1. While we have so far restricted ourselves to 1D, our results readily generalize to higher dimensions [57], for example to p + ip topological superconductors in 2D. The ground-state wave function of a p + ip superconductor takes essentially the same form as Eq. (3), with the momenta now restricted to the 2D Brillouin zone and tan(2θ_k) = |Δ_k|/ε_k, where Δ_k and ε_k denote the pairing and kinetic terms in 2D. The circuit complexity can still be written as C = Σ_k |Δθ_k|² = [L²/(2π)²] ∫ d²k |Δθ(k)|². One can again show that the derivative of the circuit complexity is controlled by the quasiparticle energy E(k), with E(k)² = ε(k)² + |Δ(k)|², where θ_T(k) denotes the Bogoliubov angle of the target state (see Appendix E). It is thus evident that the nonanalyticity occurs at the critical point, where E(k) = 0 (Appendix E).
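As an illustration of the α = 0 case (our sketch), one can use the L → ∞ resummation of the pairing Fourier sum, f₀(k) = (1/2) cot(k/2) — an assumption of this sketch rather than an expression quoted from the text — and locate the susceptibility peak, which appears only near μ_T = +1:

```python
import numpy as np

def lr_complexity_density(mu_R, mu_T, delta=1.0, L=2000):
    """C/L for the alpha = 0 long-range chain, using the assumed L -> infinity
    pairing function f_0(k) = (1/2) cot(k/2)."""
    k = 2 * np.pi / L * (np.arange(L // 2) + 0.5)
    pairing = delta * 0.5 / np.tan(k / 2)
    th = lambda mu: 0.5 * np.arctan2(pairing, mu + np.cos(k))
    return np.sum((th(mu_T) - th(mu_R)) ** 2) / L

# The susceptibility d(C/L)/d mu_T peaks only near mu_T = +1: for alpha < 1
# the transition at mu = -1 is absent (semi-integer winding numbers).
mus = np.linspace(-2.0, 2.0, 81)
c = np.array([lr_complexity_density(3.0, m) for m in mus])
dc = np.gradient(c, mus)
print(mus[np.argmax(np.abs(dc))])
```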

VIII. CONCLUSIONS AND OUTLOOK
We use Nielsen's approach to quantify the circuit complexity of ground states and nonequilibrium steady states of the Kitaev chain with short- and long-range pairing. We find that, in both situations, circuit complexity can be used to detect topological phase transitions. The nonanalytic behaviors can be generalized to higher-dimensional systems, such as p + ip topological superconductors [58,59].
One interesting future direction is to use the geometric approach to quantify circuit complexity when the control Hamiltonians are constrained to be local in real space [37,60,61], and study its connection to quantum phase transitions [25,[62][63][64]. It would also be of interest to investigate the circuit complexity of interacting many-body systems. One particular example is the XXZ spin-half chain, whose low-energy physics can be modeled by the Luttinger liquid [65][66][67]. By restricting to certain classes of gates (i.e., by imposing penalties on the cost function) [3,6], it might be possible to find improved methods to efficiently prepare the ground state of the XXZ model by calculating the geodesic path in gate space.
Note added: Recently, we became aware of Ref. [68], which used revivals in the circuit complexity as a qualitative probe of phase transitions in the Su-Schrieffer-Heeger model. In contrast to that work, we have shown that the circuit complexity explicitly exhibits nonanalyticities precisely at the critical points for the Kitaev chain. We also became aware of Ref. [57], which numerically studied the complexity of a two-dimensional "d · τ " model. By contrast, here we analytically study the "p + ip" model, and we illustrate that the closing of the gap is essential for the nonanalyticity of circuit complexity.

APPENDIX A: DERIVATION OF CIRCUIT COMPLEXITY FOR A PAIR OF FERMIONS
In this Appendix, we present a detailed derivation of the circuit complexity for a pair of fermions, i.e., Eq. (7). This expression has previously been obtained using different approaches in Refs. [12-14]. For completeness, we provide a detailed derivation following Ref. [13]; we note that Ref. [12] provides an alternative derivation using a group-theoretic approach.
By taking the derivative with respect to s in Eq. (5), we get the following expression:

dU(s)/ds = H(s) U(s) = [ Σ_I Y^I(s) O_I ] U(s),  (A1)

where U(s) is the unitary transformation at parameter s, and we have omitted the label k for notational clarity. The unitary U(s) can be parametrized in matrix form by four real functions β, φ₁, φ₂, and ω, which depend explicitly on the parameter s. The matrix can be expressed in terms of the generators of U(2), which we choose as

O₀ = i1,  O₁ = iτ¹,  O₂ = iτ²,  O₃ = iτ³.  (A3)

Solving Eq. (A1) for the coefficients Y^I(s) in terms of these parameters, the boundary condition at s = 0 fixes the parametrization up to an arbitrary phase φ₂(0); the boundary condition at s = 1, corresponding to the target unitary (4), requires ω(1) = Δθ. Written in these parameters, the integrand in Eq. (A6) is a sum of four non-negative terms. Setting β(s) = φ₁(s) = 0 and φ₂(s) = π/2 minimizes (i.e., sets to zero) three of the four terms without imposing any additional constraints on the minimization of the remaining (dω/ds)² term. One can then easily check that the linear function ω(s) = s Δθ minimizes the remaining term and yields

C = |Δθ|².  (A10)
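The final minimization step can be illustrated numerically (our sketch): restricted to the single remaining generator, the cost reduces to ∫₀¹ |dω/ds| ds, which is minimized by the straight-line path ω(s) = sΔθ; any detour with the same endpoints costs more.

```python
import numpy as np

def path_cost(omega):
    """Discretized cost D = int_0^1 |d omega / ds| ds for a path omega(s)
    along the single remaining generator (beta = phi_1 = 0 held fixed)."""
    return np.sum(np.abs(np.diff(omega)))

dtheta = 0.7
s = np.linspace(0.0, 1.0, 1001)
straight = s * dtheta                               # the geodesic omega(s) = s*dtheta
wiggly = s * dtheta + 0.1 * np.sin(4 * np.pi * s)   # same endpoints, longer path
print(path_cost(straight), path_cost(wiggly))       # 0.7 vs > 0.7
```

Squaring the optimal cost reproduces C = |Δθ|².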

APPENDIX B: ANALYTICAL DERIVATION OF DIVERGENT DERIVATIVES IN GROUND STATES
In this Appendix, we provide a detailed analytical derivation showing that the first-order derivatives of the complexity indeed diverge at the critical points in the thermodynamic limit. We first derive how the derivative diverges when the reference state is in the trivial phase (|μ_R| > 1), and we then generalize our results to show how this divergent behavior depends on the particular choice of the reference state. Throughout this Appendix, we assume that the reference state lies within a given phase, and we allow the target state to approach an arbitrary point in the phase diagram. Our analytical derivations show that these divergences necessarily map out the phase boundary, as illustrated in Figs. 2(a) and 2(b) and Fig. 5 below.
We begin with our general expression for the complexity as a function of the reference and target states. The Bogoliubov angle difference Δθ_k for each momentum sector k can be expressed as

Δθ_k = θ^T_k − θ^R_k,  θ^a_k = (1/2) arctan[ Δ_a sin k / (μ_a + cos k) ],  a = R, T,  (B1)

and the circuit complexity is written in terms of Δθ_k as

C/L = (1/2π) ∫_0^π dk |Δθ_k|².  (B2)

Note that we have replaced the discrete sum of the main text with an integral appropriate to the thermodynamic limit, and we write C(|Ω_R⟩ → |Ω_T⟩) simply as C for brevity. Substituting Eq. (B1) into Eq. (B2) and taking derivatives with respect to μ_T and Δ_T yields integrals over the full Brillouin zone, −π < k ≤ π; here we have used the fact that the integrands are even in k to extend the integration range to −π. In spite of the complicated nature of these integrals, much can be learned about their analytic properties by recasting them as closed contour integrals in the complex plane. Defining the variable z = e^{ik}, the integrals become counterclockwise contour integrals over the unit circle |z| = 1 [Eq. (B4)]. In this form, the value of each integral is entirely determined by the nonanalyticities of the integrand located inside the contour, and the integral can diverge only if a nonanalyticity sits exactly on the contour. We proceed by defining the variables z_{i,a}, with i = 1, . . . , 4 and a = R, T, which are the zeros and infinities of the argument of the logarithm below. From Eq. (B4), both derivatives contain simple poles at the z_{i,T}, while ∂_{Δ_T}C additionally has a simple pole at z = 0. Also, using the formula arctan(z) = (i/2) ln[(1 − iz)/(1 + iz)], we can write the Bogoliubov angle in terms of complex logarithms. The important fact we will need is that the complex logarithm contains branch cuts running from the zeros to the infinities of its argument; the z_{i,a} are therefore branch points of the integrand. We now note that the derivatives of the complexity can diverge only if the couplings are tuned to a phase transition. This is because the z_{i,a} can have unit modulus only at one of the phase transitions, and at the phase transitions the branch points cross the contour, resulting in a divergent integral; see Fig. 5. In particular, we may characterize the phase diagram in terms of which branch points lie inside the contour.

FIG. 6. The deformation of the integration contour used to compute the gradients of the circuit complexity in the case μ_T > 1. A branch cut runs between the branch points z₁ and z₃; the imaginary part of the integrand is discontinuous across the cut, and the integrand diverges near the branch points.
In addition, in certain cases and limits we may compute the integrals exactly, which allows us to obtain the exact analytic dependence of the divergence on the couplings. As a definite example, we consider the case |μ_T| > 1. In this case, a branch cut of the logarithm runs from z_{1,T} to z_{3,T} inside the contour and another from z_{2,T} to z_{4,T} outside it, and the divergences seen as μ_T → 1 are due to these branch cuts approaching the contour. We may then entirely factor out the dependence on the reference state from the logarithm and focus on the terms that depend on the target state. We deform the contour so that it skirts the branch cut (see the decomposition into four contours C₁, . . . , C₄ in Fig. 6). A key point is that the argument of the logarithm is −π upon approaching the branch cut from the lower half-plane, while it is +π upon approaching it from the upper half-plane. Therefore, in the sum of the two contours running along the branch cut, the logarithm contributes only a phase factor, and the resulting simplified integrand can be evaluated by elementary methods. We perform the integral around the small circle C₂ by writing z = z₁ + εe^{iθ} and integrating over −π < θ < π; at small ε, this contributes a ln ε divergence.
The computation for contour C₄ is similar, although the phase winds around the other way. Finally, taking the sum of all four contours, the ln ε divergences cancel, and we obtain the desired result:

∂_{μ_T}C/L = ln(μ_T − 1)/(8Δ_T) + I₂(μ_R, Δ_R),

where the function I₂ depends on μ_R and Δ_R but remains analytic as the phase transition is approached. Therefore, when approaching from μ_T > 1, the quantity ∂_{μ_T}C/L diverges as ln(μ_T − 1)/(8Δ_T) if Δ_T ≠ 0, but it is analytic if one approaches the multicritical point at Δ_T = 0. Similar manipulations may be made for ∂_{Δ_T}C/L and in the other phases. Sometimes the branch cuts take a complicated form in the complex plane, so that we cannot reduce the expression to elementary integrals, but we can still deduce the form of the divergence by considering how the contour integrals behave as the branch points cross the contour.
Our final results are summarized as follows. The expression ∂_{μ_T}C/L is always analytic unless μ_T → ±1. Near these phase transitions the divergence is sgn(μ_T) ln||μ_T| − 1|/(8Δ_T) if Δ_T ≠ 0, while there is no divergence at Δ_T = 0. In contrast, the expression ∂_{Δ_T}C/L is analytic whenever Δ_T ≠ 0. Its divergence as Δ_T → 0 depends on whether the couplings (μ_T, Δ_T) approach the phase transition from the topological phase or the trivial phase. If we approach the multicritical points from the trivial phases, ∂_{Δ_T}C/L remains analytic. In contrast, if we approach Δ_T = 0 from the topological phases, we find a ln|Δ_T|/4 divergence for |μ_T| < 1, which crosses over to ln|Δ_T|/2 as the multicritical points are approached.
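The coefficient of the logarithm can be checked numerically (our sketch; the derivative is computed directly from the integral representation rather than from the contour deformation, and the analytic part cancels in differences):

```python
import numpy as np

def dCL_dmuT(mu_T, mu_R=2.0, delta=1.0, nk=1_000_001):
    """d(C/L)/d mu_T = (1/pi) int_0^pi dk dtheta(k) * dtheta_T/dmu_T, with
    dtheta_T/dmu_T = -delta sin k / (2 eps_T(k)^2)  (trapezoid rule)."""
    k = np.linspace(0.0, np.pi, nk)
    dk = k[1] - k[0]
    th = lambda mu: 0.5 * np.arctan2(delta * np.sin(k), mu + np.cos(k))
    dtheta = th(mu_T) - th(mu_R)
    eps2 = (mu_T + np.cos(k)) ** 2 + (delta * np.sin(k)) ** 2
    g = dtheta * (-delta * np.sin(k)) / (2.0 * eps2)
    return dk * (g.sum() - 0.5 * (g[0] + g[-1])) / np.pi

# Differences of the derivative as mu_T -> 1+ isolate the ln(mu_T - 1)/(8 delta_T)
# coefficient: g(1 + 1e-3) - g(1 + 1e-4) ~ ln(10)/8.
diff = dCL_dmuT(1.0 + 1e-3) - dCL_dmuT(1.0 + 1e-4)
print(diff, np.log(10.0) / 8.0)
```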

APPENDIX C: REAL-SPACE BEHAVIOR OF THE OPTIMAL CIRCUITS
In this Appendix, we show that the real-space optimal circuit behaves differently depending on whether or not the initial and target states are in the same topological phase.
As we have derived in Appendix A, for a single momentum sector k the circuit complexity is the squared difference between the Bogoliubov angles [Eq. (7)], and the optimal circuit is generated by the following time-independent Hamiltonian:

H_k = Δθ_k O_{1,k},  (C1)

where O_{1,k} is the same generator given by Eq. (A3) for momentum sector k. Here, we have omitted the time label s for simplicity, as the circuit is time-independent (and the total evolution time is fixed to the constant 1). As in the main text, and following the circuit-complexity literature, we have defined H_k to be anti-Hermitian [Eq. (5)].
Since the ground state of the Hamiltonian is a product over all momentum sectors with k > 0, the optimal circuit that generates the evolution between two ground states can be written as

U = exp( Σ_{k>0} H_k ) = exp( Σ_{k>0} Δθ_k O_{1,k} ).  (C2)

We are interested in the real-space behavior of the above Hamiltonian. To discern this, we first write the Hamiltonian in operator form,

H = i Σ_{k>0} Δθ_k ψ̂†_k τ¹ ψ̂_k,  (C3)

where the τ^i are Pauli matrices and ψ̂_k denotes the Nambu spinor,

ψ̂_k = (â_k, â†_{−k})^T.  (C4)

Utilizing the particle-hole symmetry of the Nambu spinor, we can extend the sum in the evolution Hamiltonian to the entire Brillouin zone,

H = i Σ_k ω(k) ψ̂†_k τ¹ ψ̂_k,  (C5)

where ω(k) satisfies ω(k) = Δθ_k for k > 0. In particular, only the odd part of ω(k) contributes, since the even part cancels in the τ¹ pairing channel. We now proceed by performing a Fourier series expansion of the function ω(k) over the Brillouin zone. Without loss of generality, we may consider only the odd Fourier series, since the even terms cancel. Thus, we write

Δθ(k) = 2 Σ_{n=1}^{∞} ω_n sin(nk),  ω_n = (1/π) ∫_0^π dk Δθ(k) sin(nk),  (C6)

where the last equality is used to determine the Fourier coefficients.
Our crucial observation is that when the two states are within the same phase, the Fourier sine series for Δθ(k) is uniformly convergent. This can be seen from the boundary conditions, which in this case read Δθ(0) = Δθ(π) = 0, as shown in Fig. 1(d) of the main text. Thus, if we allow the evolved state |Ψ̃_T⟩ to lie within an arbitrarily small error of the true target state |Ψ_T⟩, the Fourier series can be accurately truncated at a finite order N* over the entire Brillouin zone. This is relevant because, in real space, the Fourier harmonic sin(lk) ψ̂†_k τ¹ ψ̂_k is generated by a term involving two fermionic operators separated by l sites; since this occurs in the τ¹ channel, H must involve real-space pairing terms coupling sites a distance l apart, such as â†_j â†_{j+l} + H.c. The above argument holds when the system size L is taken to infinity. In that case, the finite-range evolution Hamiltonian can be regarded as a truly short-range Hamiltonian, and our results imply that the optimal circuit (with constant time, or depth) evolving states within the same phase is short-range.
On the other hand, when the two states are in different phases, the boundary values Δθ(0) = 0 and Δθ(π) = π/2 obstruct uniform convergence, in analogy with the Gibbs phenomenon. In this case, the Fourier sine series may still converge pointwise, but for a fixed error tolerance the series cannot be truncated at any finite order N* over the entire Brillouin zone. In such cases, the optimal evolution Hamiltonian H that transforms states between different topological phases must be long-range when the evolution time is fixed to a constant. Again, this is because the longest real-space distance required to generate the evolution Hamiltonian is given by the highest-order Fourier mode appearing in the momentum-space series, which now cannot be accurately truncated.
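The obstruction can be seen directly in the sup-norm error of the truncated series (our numerical sketch): every partial sum vanishes at k = π, while Δθ(π) = π/2 across the transition, so the error never drops below roughly π/2, no matter the truncation order.

```python
import numpy as np

def truncation_error(mu_R, mu_T, N, delta=1.0, nk=8001):
    """Sup-norm error of the order-N truncated sine series of dtheta(k):
    max_k |dtheta(k) - 2 sum_{n=1}^N omega_n sin(n k)| (trapezoid-rule omega_n)."""
    k = np.linspace(0.0, np.pi, nk)
    dk = k[1] - k[0]
    th = lambda mu: 0.5 * np.arctan2(delta * np.sin(k), mu + np.cos(k))
    f = th(mu_T) - th(mu_R)
    approx = np.zeros_like(f)
    for n in range(1, N + 1):
        g = f * np.sin(n * k)
        omega_n = dk * (g.sum() - 0.5 * (g[0] + g[-1])) / np.pi
        approx += 2.0 * omega_n * np.sin(n * k)
    return np.max(np.abs(f - approx))

# Same phase: the error vanishes rapidly with N (a finite range N* suffices).
# Across the transition: dtheta(pi) = pi/2 while every sine vanishes at pi,
# so the sup-norm error stays ~ pi/2 for any N.
print(truncation_error(2.0, 3.0, 40), truncation_error(2.0, 0.5, 40))
```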

APPENDIX D: NUMERICAL EVIDENCE FOR NONANALYTICITY OF QUENCH DYNAMICS
In this Appendix, we provide numerical evidence for the nonanalyticity of the long-time steady-state value of the circuit complexity at the critical points, as observed in Fig. 3(b). As derived in the main text, the time-dependent circuit complexity is given by

C(t) = Σ_{k_n} |φ_{k_n}(t)|²,  (D1)

where φ_{k_n}(t) = arccos √(1 − sin²(2Δθ_{k_n}) sin²(ε_{k_n}t)).
The long-time steady-state complexity is then given by the time average of the above expression,

C̄ = Σ_{k_n} φ̄²_{k_n}(t),  (D2)

where the overline denotes time averaging. Because φ²_{k_n}(t) is such a complicated expression, we do not know how to derive a closed-form expression for the time-averaged circuit complexity. Instead, we evaluate φ_{k_n}(t) numerically, and we show that the nonanalyticity indeed occurs at the phase transition.
From the expression for φ_{k_n}(t), it is clear that its value oscillates in time and reaches its maximal value (envelope) in each momentum sector k_n when sin²(ε_{k_n}t) = 1. In Fig. 7(a), we plot the maximum value of φ_{k_n}(t) for different postquench Hamiltonian parameters. As the figure clearly shows, when the chemical potential μ_f of the postquench Hamiltonian is below the critical value (μ_f = 1), max[φ_{k_n}(t)] is a smooth function of k_n. However, when μ_f is above the critical value, max[φ_{k_n}(t)] exhibits a kink at a certain momentum k_n, where it reaches the maximal value π/2. To understand this behavior, we write down the expression for max[φ_{k_n}(t)] for the parameter choice μ_i = 0, Δ_i = Δ_f = 1:

max[φ_{k_n}(t)] = arccos( |1 + μ_f cos k_n| / √(μ_f² + 2μ_f cos k_n + 1) ).  (D4)

From the above expression, it is clear that when μ_f < 1, max[φ_{k_n}(t)] is always smaller than π/2; when μ_f > 1, max[φ_{k_n}(t)] attains the maximal value π/2 when 1 + μ_f cos k_n = 0. Because one needs to take the absolute value of the argument of the arccosine, the quantity max[φ_{k_n}(t)] exhibits a kink upon reaching π/2, in agreement with Fig. 7(a).
We plot the time-averaged value φ̄_{k_n}(t) in Fig. 7(b). Again, we see an upper bound on φ̄_{k_n}(t) when quenching across the critical point. As in Fig. 7(a), φ̄_{k_n}(t) reaches its maximal value when 1 + μ_f cos k_n = 0, i.e., when sin(2Δθ_{k_n}) = 1. For this special momentum sector, the expression for φ_{k_n}(t) reduces to

φ_{k_n}(t) = arcsin|sin(ε_{k_n}t)|.  (D5)

Clearly, the time average of the above expression is just π/4, in agreement with the numerical results shown in Fig. 7(b). Therefore, after the phase transition takes place, the maximal time-averaged value of φ_{k_n}(t) is bounded by π/4. (This feature is independent of the parameters of the prequench Hamiltonian.) Having revealed this feature of φ_{k_n}(t), the nonanalyticity can be understood as follows: as μ_f increases but remains below the phase-transition point, the integral of φ̄²_{k_n}(t) increases smoothly with μ_f. After reaching the phase transition, φ̄²_{k_n}(t) saturates the bound, and thus the dependence of the integral (i.e., of the circuit complexity) on μ_f takes a different form. In particular, for the parameters shown in Fig. 7 [blue line in Fig. 3(b)], the integral (i.e., the circuit complexity) becomes a constant after the phase transition. This leads to a clear nonanalytical (kink) point at μ_f = 1.
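Both (D4) and the π/4 time average can be verified numerically (our sketch, for the stated parameters μ_i = 0, Δ_i = Δ_f = 1):

```python
import numpy as np

# Check Eq. (D4): the envelope of phi_k(t), reached when sin^2(eps t) = 1,
# equals arccos(|1 + mu_f cos k| / sqrt(mu_f^2 + 2 mu_f cos k + 1)).
def envelope(k, mu_f):
    th_i = 0.5 * np.arctan2(np.sin(k), np.cos(k))            # theta_i = k/2 for mu_i = 0
    th_f = 0.5 * np.arctan2(np.sin(k), mu_f + np.cos(k))
    return np.arccos(np.sqrt(1.0 - np.sin(2 * (th_f - th_i)) ** 2))

k, mu_f = 2.0, 1.5
closed_form = np.arccos(abs(1 + mu_f * np.cos(k))
                        / np.sqrt(mu_f ** 2 + 2 * mu_f * np.cos(k) + 1))
print(envelope(k, mu_f), closed_form)

# At the momentum where 1 + mu_f cos k = 0, phi(t) = arcsin|sin(eps t)|,
# whose time average is pi/4 (the steady-state bound).
t = np.linspace(0.0, 100 * np.pi, 1_000_001)
print(np.mean(np.arcsin(np.abs(np.sin(t)))))
```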