Efficient Large-Scale Many-Body Quantum Dynamics via Local-Information Time Evolution

I. INTRODUCTION
Simulating the time evolution of many-body quantum systems presents a significant challenge, often limiting the analysis to small-scale systems. The obstacle lies in the rapid spreading of entanglement through the system during time evolution [1][2][3]. As entanglement can induce quantum correlations between arbitrarily distant-in-space degrees of freedom that cannot be decomposed into local parts, the exact representation of generic quantum states demands an exponentially large number of parameters. This is related to the exponential growth of the Hilbert space with the system size. However, for states exhibiting solely local correlations, such as local product states or Gibbs states, efficient representation becomes feasible. This efficiency stems from the possibility of parametrizing them exclusively through local observables [4][5][6][7]. Notably, the parametrization of the local observables entails a linear growth of the number of parameters with the system size, ensuring scalability.
In the paradigmatic case of thermalizing dynamics, local states are obtained at short evolution times when starting from an initially low-entangled (often product) state. During this time, exact time evolution with matrix product states is feasible [8][9][10][11][12][13]. While at long times the full pure state has volume-law entanglement, the reduced density matrices of small subsystems are thermal, consistent with the eigenstate thermalization hypothesis [14], and are well described by local Gibbs states with maximum entropy subject to constraints such as the energy density in the initial state [15,16]. Such high-temperature thermal states can be efficiently described by pure states via purification that introduces ancillary degrees of freedom [17][18][19][20][21][22]. At intermediate times, this representation is however inefficient, again because of large entanglement in the purified state. The hope of interpolating between these two efficient descriptions is thus complicated by the presence of an entanglement barrier at intermediate times separating the two limits, excluding efficient and exact large-scale simulations on current classical computers.

* These alphabetically ordered authors contributed equally.
Seeking to bypass the entanglement barrier, various approximate algorithms have been proposed, providing access to quantum dynamics of interacting systems beyond the system sizes accessible by exact dynamics. The essential idea connecting different approximate time-evolution approaches is to only keep track of the most relevant features in order to represent quantum states with less than exponential (in system size) degrees of freedom. In the time-dependent variational principle approach, an approximate time evolution is obtained by projecting the time-evolved state into a given fixed subspace of the Hilbert space. A common example is that spanned by matrix product states of a fixed bond dimension [23][24][25][26], or related approaches that use a neural network ansatz for the wave function [27][28][29]. Other approaches adopt dynamical quantum typicality [30,31]. Most related to our work are approaches based on density matrix product operators [32,33], approaches that trade entanglement for mixture [34,35], cluster truncated Wigner approaches [36], and other discussions of the entanglement barrier [37,38].
Here, we further develop the local-information approach for time evolution introduced in Ref. 7. Central to this approach is the fundamental question: how does the emergence of long-range entanglement during the entanglement-barrier regime impact local density matrices and, consequently, physical observables? We know that at late times local density matrices closely resemble thermal density matrices. Since there is a limited amount of information in thermal states, most of the correlations in the steady state are found on large scales of the order of the system size. As a result, during time evolution an inherent statistical drift of quantum correlations occurs, which is consistent with the principles of the second law of thermodynamics for entanglement entropy [39]. This implies that information flows from smaller to larger scales, bounded solely by the system size, and generally does not return to influence the local density matrices. Essentially, the primary role of the large-scale entanglement is to make the local density matrices mixed and thermal. Since there are many different long-range entanglement structures that give rise to the same local reduced density matrices, the pivotal idea of the local-information approach is to systematically discard long-range entanglement once information has reached a sufficiently large scale. To make this more concrete, a proper definition of information is essential, allowing us to identify its location and scale. This crucial step has been pursued in Ref. 7 by introducing the concepts of local information and the information lattice (see Secs. II A and II B for a recap).
Our approach, which we refer to as local-information time evolution (LITE), combines two essential ideas. First, we divide the system into smaller subsystems, each characterized by a scale ℓ and a center n, which denote its extent (that is, the number of neighboring physical sites it encompasses) and position on the lattice. Two subsystems, say subsystems A and B, can be independently evolved in time as long as the quantum state of the combined subsystem AB is not entangled. In this scenario, the equations of motion for the subsystems are self-contained, allowing for exact time evolution. This self-containment is achieved by reconstructing the state of AB from the individual states of A and B using recovery maps (see Sec. II C). During time evolution, the scale of those combined subsystems AB that lack entanglement continues to grow as more distant-in-space physical degrees of freedom become entangled (see Sec. II D). This growth permits the application of this reconstruction scheme only up to a time τ ∼ ℓmax/vLR, where vLR is the Lieb-Robinson speed [40] and ℓmax is the maximum manageable subsystem scale achievable through exact numerical techniques. Second, we extend time evolution beyond times of about τ by implementing a truncation scheme that maintains entanglement spread without further increasing the subsystem scale. To achieve this, one needs to time evolve the subsystem density matrices on scale ℓmax such that the subsystems' states at lower scales and the flow of information are not altered. The truncation scheme proposed in Ref. 7 accomplishes this by introducing a boundary condition for the information flow at scale ℓmax suitable for systems characterized by significant chaos and approximate translation invariance. To transcend these assumptions, it becomes necessary to construct boundary conditions on a case-by-case basis, thereby limiting the general applicability of that algorithm.
In this work, we devise a modified time-evolution algorithm that eliminates the necessity for assumptions about the information flow while maintaining it unaltered (see Sec. III A). We introduce an additional length scale ℓmin < ℓmax, and we deliberately remove information on large scales ℓ ≥ ℓmin (see Secs. III B and IV C). To correctly capture the information flow, the removal of information is done by minimizing information on scale ℓmin under certain constraints determined by the state of the subsystems at all scales ℓ ≤ ℓmax. The resulting LITE algorithm preserves all local constants of motion with a range of ℓ ≤ ℓmin physical lattice sites. By circumventing the need for assumptions on the flow of information, the algorithm can be applied across a wide variety of models, potentially including those with unknown hydrodynamic behavior.
As a benchmark example, we apply the algorithm to translation-invariant spin chains, specifically the mixed-field Ising model. By injecting a finite amount of energy in a small spatial region of the system, we investigate the energy transport in an infinitely extended system up to timescales much longer than previously obtained (see Sec. IV). Our results are compatible with those obtained in other works for the same model [33,41,42]. We carefully examine the convergence of our algorithm for relevant numerical parameters such as ℓmin and ℓmax. Importantly, LITE is also well suited to simulate the dynamics of dissipative systems governed by the Lindblad master equation in the presence of local dissipators (see Sec. V). The reason is that local dissipators selectively remove information, enhancing the convergence properties of the algorithm. We demonstrate this by simulating the magnetization transport in the XX spin chain subject to local dephasing. Perfect convergence of the diffusion constant to the analytic result of Refs. [43][44][45] is achieved already with truncation scales (ℓmin, ℓmax) for which the algorithm's computational complexity is considerably smaller than the capacities of modern computers.

II. EXACT TIME EVOLUTION ON THE SUBSYSTEM LATTICE
A. Subsystems, the subsystem lattice, and the information lattice

To time evolve density matrices for large systems and long timescales, we decompose the entire system into smaller subsystems and solve the corresponding time evolution on each subsystem in parallel. To achieve this task, we introduce the subsystem lattice.
Let us consider a system composed of a chain of sites, each representing some physical degree of freedom. We define the subsystem C^ℓ_n as the set of ℓ + 1 contiguous physical sites centered around n. In this convention, subsystems with ℓ = 0 describe single physical sites. From the pictorial representation in Fig. 1(a), we see that if ℓ is odd, n is a half-integer; e.g., a subsystem constituting two sites m and m + 1 (where m ∈ Z is the physical site index) is denoted as C^1_{m+1/2}. If ℓ is even, n is an integer; e.g., C^2_m indicates the subsystem composed of the three sites m − 1, m, and m + 1.
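This indexing convention can be made concrete in a few lines. The helper below is an illustrative sketch (not part of any reference implementation): it lists the physical sites of C^ℓ_n and checks the integer/half-integer parity convention.

```python
from fractions import Fraction

def subsystem_sites(n, ell):
    """Physical site indices of C^ell_n: the ell+1 contiguous sites centered at n.
    n is an integer for even ell and a half-integer for odd ell, so the left
    endpoint n - ell/2 is always an integer."""
    left = Fraction(n) - Fraction(ell, 2)
    assert left.denominator == 1, "n must match the parity convention for ell"
    return [int(left) + k for k in range(ell + 1)]

print(subsystem_sites(Fraction(1, 2), 1))  # C^1_{1/2} -> [0, 1]
print(subsystem_sites(3, 2))               # C^2_3 -> [2, 3, 4]
```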
Each subsystem C^ℓ_n is uniquely determined by the labels (n, ℓ). We order (n, ℓ) in a two-dimensional triangular structure, shown in Fig. 1(b), that we call the subsystem lattice. Black circles represent the subsystem-lattice points (n, ℓ). The horizontal axis of the subsystem lattice corresponds to the subsystem center n, while the vertical axis increases with the number of physical sites within subsystems. In the following, we refer to ℓ as "level" or "scale". For a finite system of size L, the subsystem lattice is a triangle of base and height of length L, as there are fewer and fewer possible subsystems for increasing values of ℓ. By increasing ℓ → ℓ + ℓ′, there are ℓ′ fewer subsystems on level ℓ + ℓ′ compared with level ℓ. The base of the subsystem lattice corresponds to ℓ = 0 and the topmost label to ℓ = L − 1.
The subsystem lattice is a hierarchical structure: higher-level subsystems contain a triangle of lower-level subsystems that extends all the way down to level zero. Fig. 1(b) illustrates two neighboring subsystems and the corresponding hierarchy by means of the red and blue triangles. The subsystem with label (n, ℓ) contains at one lower level the two subsystems with labels (n − 1/2, ℓ − 1) and (n + 1/2, ℓ − 1). The subsystem (n − 1/2, ℓ − 1), for example, in turn contains at the next lower level the two subsystems (n − 1, ℓ − 2) and (n, ℓ − 2), which are of course also subsystems of the top level (n, ℓ). This hierarchy continues all the way down to level zero. Moreover, two neighboring subsystems at level ℓ, say (n, ℓ) and (n + 1, ℓ), share some lower-level subsystems, starting with (n + 1/2, ℓ − 1) [see the red, blue, and purple triangles in Fig. 1(b)].
So far, the subsystem lattice is just a collection of labels of subsystems. We wish to endow this lattice with a further structure by associating these labels with quantum states and quantum information. To that end, we define C̄^ℓ_n as the complement of C^ℓ_n, i.e., the set of all the physical sites that do not belong to C^ℓ_n. We then define the subsystem density matrix as

ρ^ℓ_n := Tr_{C̄^ℓ_n}(ρ), (1)

and the subsystem Hamiltonian as [46]

H^ℓ_n := Tr_{C̄^ℓ_n}(H) / D(C̄^ℓ_n), (2)

where Tr_{C̄^ℓ_n} is the trace operator over the complement C̄^ℓ_n, D(C̄^ℓ_n) is the Hilbert space dimension of C̄^ℓ_n, and := denotes "defined to be equal to". To ease the notation, let us assume that the Hilbert space dimension is the same for all the physical sites: D(C^0_n) = d for any n. Furthermore, we define the von Neumann information of the subsystem density matrix ρ^ℓ_n as

I(ρ^ℓ_n) := ln(d^{ℓ+1}) + Tr[ρ^ℓ_n ln(ρ^ℓ_n)], (3)

that is, the maximal entropy ln(d^{ℓ+1}) minus the von Neumann entropy S(ρ^ℓ_n) = −Tr[ρ^ℓ_n ln(ρ^ℓ_n)]. I(ρ^ℓ_n) quantifies the amount of information in the state ρ^ℓ_n of the subsystem C^ℓ_n [47]. Thus, if (ρ^ℓ_n)^2 = ρ^ℓ_n, then in principle we have access to ln(d^{ℓ+1}) bits of information. Instead, if ρ^ℓ_n ∝ 1_{d^{ℓ+1}} (where 1_{d^{ℓ+1}} is the identity matrix of dimension d^{ℓ+1} × d^{ℓ+1}), we have access to 0 bits of information.
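As a quick numerical illustration of this definition (a minimal sketch assuming qubits, d = 2), the function below computes I(ρ) = ln(dim) − S(ρ) and reproduces the two limiting cases just mentioned:

```python
import numpy as np

def von_neumann_information(rho):
    """I(rho) = ln(dim) - S(rho), with S the von Neumann entropy (in nats)."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]                      # discard numerical zeros: 0*ln 0 = 0
    return np.log(rho.shape[0]) + np.sum(ev * np.log(ev))

d = 2
pure = np.diag([1.0, 0.0])                   # a pure qubit state: I = ln d
mixed = np.eye(d) / d                        # maximally mixed:    I = 0
print(von_neumann_information(pure))         # 0.693... = ln 2
print(von_neumann_information(mixed))        # ~0.0
```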
We can now associate ρ^ℓ_n, H^ℓ_n, and I^ℓ_n with the subsystem-lattice label (n, ℓ). These quantities are all global for the subsystem C^ℓ_n, as they comprise (via partial trace) the same quantities on the lower-level subsystems contained in the triangle having as topmost site (n, ℓ) and as base the ℓ + 1 contiguous physical sites constituting the subsystem [see Fig. 1(b)].
One can also have local quantities that are instead exclusively assigned to a single label (n, ℓ). By knowing the value of a local quantity on (n, ℓ), one cannot infer any information about its value on any other point in the lattice. For our purposes, a central example of a local quantity is mutual information. This is defined as the information in the state of the union system AB := A ∪ B that is neither in the state of A nor of B:

i(A; B) := I(ρ_{A∪B}) − I(ρ_A) − I(ρ_B) + I(ρ_{A∩B}), (4)

with A ∩ B the overlap region of A and B. As in Ref. 7, we are interested in the local information of the state of the subsystem C^ℓ_n that is not present in any of the lower-level states of C^{ℓ−1}_{n−1/2} or C^{ℓ−1}_{n+1/2}:

i^ℓ_n := I^ℓ_n − I^{ℓ−1}_{n−1/2} − I^{ℓ−1}_{n+1/2} + I^{ℓ−2}_n. (5)

Here, it is implicit that the von Neumann information of empty subsystems is zero. When we endow the subsystem lattice with the local information, we refer to it as the information lattice.
In Fig. 2, we plot the information lattice for two simple cases. Fig. 2(a) depicts the local information in the product state of singlets on consecutive pairs of sites (more intense colors correspond to larger amounts of local information). All the information is located on level ℓ = 1, where only every other lattice site n carries nonzero local information, reflecting the singlet coupling. In the case of random singlets, where pairs are not necessarily between consecutive sites, as in Fig. 2(b), the configuration of green points containing information is rearranged. In both cases, i^ℓ_n = ln(4) for n = (m_0 + m_1)/2 and ℓ = m_1 − m_0, where m_0 (m_1) is the physical site index of the first (second) spin of the singlet; otherwise i^ℓ_n = 0. Importantly, local information is additive, and one can show [7] that the von Neumann information in the state of C^ℓ_n is given by the sum of the local information on all the labels in the triangle with topmost site (n, ℓ); in equation form,

I^ℓ_n = Σ_{(n′, ℓ′) : C^{ℓ′}_{n′} ⊆ C^ℓ_n} i^{ℓ′}_{n′}. (6)
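These statements can be checked directly for the singlet example. The script below is an illustrative sketch: it builds two singlets on four qubits, implements the information-lattice bookkeeping following the definitions above, and confirms that the only nonzero local information sits on level ℓ = 1 between the singlet partners, with value ln(4).

```python
import numpy as np

def vn_information(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return np.log(rho.shape[0]) + np.sum(ev * np.log(ev))

def I_sub(rho_full, L, n, ell):
    """Von Neumann information of C^ell_n; empty subsystems carry zero."""
    if ell < 0:
        return 0.0
    first = int(round(n - ell / 2))
    r = rho_full.reshape([2] * (2 * L))
    for s in reversed([s for s in range(L) if not first <= s <= first + ell]):
        r = np.trace(r, axis1=s, axis2=s + r.ndim // 2)
    dim = 2 ** (ell + 1)
    return vn_information(r.reshape(dim, dim))

def local_info(rho_full, L, n, ell):
    """i^ell_n = I^ell_n - I^{ell-1}_{n-1/2} - I^{ell-1}_{n+1/2} + I^{ell-2}_n."""
    return (I_sub(rho_full, L, n, ell) - I_sub(rho_full, L, n - 0.5, ell - 1)
            - I_sub(rho_full, L, n + 0.5, ell - 1) + I_sub(rho_full, L, n, ell - 2))

singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
psi = np.kron(singlet, singlet)              # singlets on sites (0,1) and (2,3)
rho = np.outer(psi, psi)

print(abs(local_info(rho, 4, 0.5, 1) - np.log(4)) < 1e-9)  # True: singlet pair
print(abs(local_info(rho, 4, 1.5, 1)) < 1e-9)              # True: no info here
```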

B. Time evolution on the subsystem lattice
To study time evolution on the subsystem lattice we need the equation of motion for the reduced density matrix ρ^ℓ_n defined on the subsystem C^ℓ_n. This is contained in the unitary time evolution of the full system density matrix ρ given by the von Neumann equation (ℏ = 1)

i ∂_t ρ = [H, ρ], (7)

where [·, ·] denotes the commutator. By tracing out the complement subsystem C̄^ℓ_n from Eq. (7), we obtain the equation of motion for the density matrix of the subsystem C^ℓ_n:

i ∂_t ρ^ℓ_n = [H^ℓ_n, ρ^ℓ_n] + Σ_{r≥1} { Tr_{rL}([H^{ℓ+r}_{n−r/2} − H^ℓ_n, ρ^{ℓ+r}_{n−r/2}]) + Tr_{rR}([H^{ℓ+r}_{n+r/2} − H^ℓ_n, ρ^{ℓ+r}_{n+r/2}]) }, (8)

where Tr_{rL} (Tr_{rR}) is the partial trace operator tracing over the r leftmost (rightmost) sites, leaving an operator on the subsystem C^ℓ_n. Here, it is implicit that, to subtract from H^{ℓ+r}_{n−r/2} (H^{ℓ+r}_{n+r/2}) the lower-level Hamiltonian H^ℓ_n, one has to take the tensor product of the latter with a proper identity matrix; in this case, it reads 1_{d^r} ⊗ H^ℓ_n (H^ℓ_n ⊗ 1_{d^r}). Proper identity matrices to be used are evident from the specific equations. Thus, we use this convention throughout this work, unless identity matrices are explicitly stated. The first term on the right-hand side of Eq. (8) generates the dynamics due to the Hamiltonian acting within C^ℓ_n itself, while the remaining terms account for the coupling to neighboring sites. Consider, for example, the subsystem at the topmost site of the purple triangle in Fig. 1(b): its dynamics is determined by the Hamiltonian acting on its physical sites (at the bottom of the purple triangle), and by the Hamiltonian terms of range r = 1 that couple those physical sites with the two (left and right) nearest neighbors. Such coupling terms, in the framework of the subsystem lattice, are included in the subsystem Hamiltonians of C^4_m (topmost site of the red triangle) and C^4_{m+1} (topmost site of the blue triangle).
Eq. (8) implies that, for solving the time evolution of the subsystem density matrix ρ^ℓ_n, one also needs to solve the time evolution for the higher-level subsystem density matrices ρ^{ℓ+r}_{n−r/2} and ρ^{ℓ+r}_{n+r/2}. This gives rise to a hierarchy of equations of motion that, in principle, only closes when one solves the time evolution of all the subsystems or, equivalently, the full-system equation (7). To make this point clearer, let us consider the situation in which we know the subsystem density matrices for any ℓ and n at the initial time t = 0, and we want to solve the time evolution of the subsystems at level ℓ*, denoted as C^{ℓ*}. Then, for the infinitesimal time increment δt, we can use Eq. (8) to compute the exact, time-evolved density matrices ρ^{ℓ*}_n(δt). However, to further increase time by δt and obtain the exact ρ^{ℓ*}_n(2δt), we need to know the higher-level density matrices ρ^{ℓ*+r}_{n−r/2}(δt) and ρ^{ℓ*+r}_{n+r/2}(δt) at time δt. In summary, in general, the exact time evolution of the subsystem density matrices ρ^{ℓ*}_n can only be obtained by knowing all the density matrices at each level at any time.
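The structure of the subsystem equation of motion can be checked numerically. The sketch below (an illustration under stated assumptions: qubits, a random nearest-neighbour Hamiltonian so that only r = 1 contributes, and a four-site chain) compares the exact derivative Tr_{C̄}(−i[H, ρ]) of a two-site subsystem with the right-hand side built from subsystem Hamiltonians and the two overlapping three-site density matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
L, d = 4, 2

def embed(op, first, nsites):
    """Embed an operator acting on `nsites` sites starting at site `first`."""
    return np.kron(np.kron(np.eye(d ** first), op),
                   np.eye(d ** (L - first - nsites)))

def ptrace_keep(M, first, ell):
    """Partial trace of a full-chain operator onto the ell+1 sites from `first`."""
    T = M.reshape([d] * (2 * L))
    for s in reversed([s for s in range(L) if not first <= s <= first + ell]):
        T = np.trace(T, axis1=s, axis2=s + T.ndim // 2)
    return T.reshape(d ** (ell + 1), d ** (ell + 1))

def ptrace_edge(M, nsites, side):
    """Tr_{1L} / Tr_{1R}: trace out the left- or rightmost site of an operator."""
    T = M.reshape([d] * (2 * nsites))
    s = 0 if side == "L" else nsites - 1
    return np.trace(T, axis1=s, axis2=s + nsites).reshape(d ** (nsites - 1), -1)

def rand_herm(m):
    A = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
    return A + A.conj().T

# random nearest-neighbour Hamiltonian and a random full-rank state
H = sum(embed(rand_herm(d * d), m, 2) for m in range(L - 1))
X = rng.normal(size=(d ** L, d ** L)) + 1j * rng.normal(size=(d ** L, d ** L))
rho = X @ X.conj().T
rho /= np.trace(rho)

def H_sub(first, ell):
    """Subsystem Hamiltonian: trace over the complement / its dimension."""
    return ptrace_keep(H, first, ell) / d ** (L - ell - 1)

comm = lambda A, B: A @ B - B @ A

# exact derivative of the two-site subsystem (n, ell) = (3/2, 1), sites (1, 2)
lhs = ptrace_keep(-1j * comm(H, rho), 1, 1)

# right-hand side of the subsystem equation of motion with r = 1 only
H1, HL, HR = H_sub(1, 1), H_sub(0, 2), H_sub(1, 2)
rS, rL, rR = ptrace_keep(rho, 1, 1), ptrace_keep(rho, 0, 2), ptrace_keep(rho, 1, 2)
rhs = -1j * (comm(H1, rS)
             + ptrace_edge(comm(HL - np.kron(np.eye(d), H1), rL), 3, "L")
             + ptrace_edge(comm(HR - np.kron(H1, np.eye(d)), rR), 3, "R"))

print(np.max(np.abs(lhs - rhs)) < 1e-10)   # True
```

Note how the subtraction of the embedded lower-level Hamiltonian removes all terms already accounted for by the commutator with H^ℓ_n, leaving only the genuine boundary couplings.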
C. Recovery of higher-level subsystem density matrices from lower-level ones

Knowledge of the density matrices for all subsystems C^{ℓ*} on level ℓ* allows us to construct the subsystem density matrices at all lower levels ℓ < ℓ* by suitable partial trace operations. The inverse operation is generally not possible, that is, recovering the density matrices on higher levels ℓ > ℓ* only from density matrices of level ℓ*. As an illustrating example, consider a two-spin density matrix ρ_{AB} = |ψ⟩⟨ψ| with |ψ⟩ a Bell state, e.g.,

|ψ⟩ = (|↑↑⟩ + |↓↓⟩)/√2. (9)

In this case, the subsystem density matrices ρ_A and ρ_B are maximally mixed and take the form ρ_A = 1_A / D(A), where 1_A denotes the identity matrix on the Hilbert space of spin A and D(A) = 2 denotes its dimension. Clearly, ρ_{AB} = |ψ⟩⟨ψ| is not the only two-spin density matrix for which the subsystem density matrices ρ_A and ρ_B are maximally mixed. The same result is, for instance, obtained for the maximally mixed two-spin density matrix, ρ_{AB} = 1_{AB} / D(AB). Therefore, in general, given only the lower-level density matrices ρ_A and ρ_B, the correct density matrix of the enlarged system ρ_{AB} cannot be determined.
An important exception is the case when there is no mutual information between A and B, i(A; B) = 0. In this case, it is possible to reconstruct the state ρ_{A∪B} from the reduced states ρ_A and ρ_B via the so-called Petz recovery maps (see App. B). An example is the twisted Petz recovery map [48],

ρ̃_{A∪B} = ρ_B^{1/2} ρ_{A∩B}^{−1/2} ρ_A ρ_{A∩B}^{−1/2} ρ_B^{1/2}, (10)

where each factor is implicitly embedded in A ∪ B with identity matrices on the remaining sites. For nonzero i(A; B), the twisted Petz recovery map has an error bound [49] on the trace distance between ρ_{A∪B} and ρ̃_{A∪B} that vanishes as i(A; B) → 0. Recovery maps can be used to compute the state of higher-level subsystems on the subsystem lattice, as sketched in Fig. 3(a). Let us assume that we know the subsystem density matrices ρ^{ℓ*}_n at level ℓ* for any n, and that there is no local information on level ℓ* + 1: i^{ℓ*+1}_n = 0 for any n. Then, all the density matrices at level ℓ* + 1 can be computed by using the twisted Petz recovery map as

ρ^{ℓ*+1}_{n+1/2} = (ρ^{ℓ*}_{n+1})^{1/2} (ρ^{ℓ*−1}_{n+1/2})^{−1/2} ρ^{ℓ*}_n (ρ^{ℓ*−1}_{n+1/2})^{−1/2} (ρ^{ℓ*}_{n+1})^{1/2}. (11)

Eq. (11) involves three subsystem density matrices: ρ^{ℓ*}_n and ρ^{ℓ*}_{n+1}, which are known by assumption, and ρ^{ℓ*−1}_{n+1/2}, which is easily obtained by tracing out either the leftmost physical site from ρ^{ℓ*}_{n+1} or the rightmost physical site from ρ^{ℓ*}_n. Provided that there is no local information on higher levels as well, that is, i^{ℓ*+1} = i^{ℓ*+2} = ⋯ = i^{ℓ*+r} = 0, one can iterate recovery maps reconstructing all the density matrices at level ℓ* + r.
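As an illustration, the sketch below builds a small quantum Markov chain (the middle qubit B classically correlates A and C, so the relevant mutual information vanishes) and checks that a Petz-type recovery map of this sandwiched form reconstructs the global state exactly from the two overlapping marginals. The operator ordering used here is one common convention and is an assumption for illustration, not a transcription of the map of Ref. [48].

```python
import numpy as np
from scipy.linalg import sqrtm, fractional_matrix_power

rng = np.random.default_rng(1)

def rand_state(m):
    X = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
    r = X @ X.conj().T
    return r / np.trace(r)

# Quantum Markov chain A-B-C: the qubit B classically correlates A and C.
p = [0.3, 0.7]
sigA = [rand_state(2) for _ in range(2)]
tauC = [rand_state(2) for _ in range(2)]
kB = [np.diag([1.0 + 0j, 0.0]), np.diag([0.0 + 0j, 1.0])]
rho_ABC = sum(p[k] * np.kron(np.kron(sigA[k], kB[k]), tauC[k]) for k in range(2))

# overlapping marginals that the recovery map is allowed to use
rho_AB = sum(p[k] * np.kron(sigA[k], kB[k]) for k in range(2))
rho_BC = sum(p[k] * np.kron(kB[k], tauC[k]) for k in range(2))
rho_B = np.diag(p).astype(complex)

I2 = np.eye(2)
S = np.kron(I2, sqrtm(rho_BC))                              # (1_A x rho_BC)^{1/2}
M = np.kron(np.kron(I2, fractional_matrix_power(rho_B, -0.5)), I2)
recovered = S @ M @ np.kron(rho_AB, I2) @ M @ S

print(np.max(np.abs(recovered - rho_ABC)) < 1e-8)           # True
```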

D. Exact time evolution on the subsystem lattice via Petz recovery maps
Recovering higher-level density matrices from lower-level ones allows us to close the subsystem equation of motion (8) at level ℓ* < L − 1. Through the Petz recovery maps, in fact, we can perform the exact time evolution of the subsystem density matrices. Let us consider again the situation described at the end of Sec. II B in which, by knowing the state of the full system at the initial time t = 0, we can compute ρ^{ℓ*}_n(δt) exactly. As we have discussed, to time evolve further, we need knowledge of ρ^{ℓ*+r}_{n−r/2}(δt) and ρ^{ℓ*+r}_{n+r/2}(δt). Now, provided that at time δt there is no information on levels ℓ* + 1, …, ℓ* + r, i.e., i^{ℓ*+1} = ⋯ = i^{ℓ*+r} = 0, thanks to the recovery maps we can obtain the higher-level density matrices at time δt [see Fig. 3(a)], and use them to compute ρ^{ℓ*}_n(2δt), as schematically shown in Fig. 3(b).
After a few time steps, the assumption of zero information on levels ℓ* + 1, ..., ℓ* + r will generically no longer hold. Typically, in ergodic quantum dynamics, correlations build up and spread throughout the system, adhering to the principles outlined by the Lieb-Robinson bounds [40]. On the information lattice, this is visualized by the fact that increasing levels acquire nonzero local information as time progresses. In Fig. 4, we illustrate a prototypical example of this generically expected behavior. A system of 14 spins is initialized in the product state ⊗_n |↑⟩⟨↑|. Consequently, all local information at t = 0 is located on the physical sites, i.e., at the information-lattice sites with ℓ = 0 [see Fig. 4(a)]. Subsequently, we evolve this system in time by applying random unitary matrices from the circular unitary ensemble that pairwise couple neighboring sites, U = U_odd U_even, where U_odd (U_even) acts on all pairs of spins with indices m and m + 1 where m is odd (even) [50,51]. A single application of U on the state of the system defines one random-unitary evolution cycle (for each cycle we apply a different random unitary U). In each cycle, the largest level with nonzero local information can increase by a maximum of 4. This induces a quick buildup of local information at increasing scales [compare Figs. 4(b)-(c)]. Note that here, the boundaries only experience U_odd, which is why the spread of local information at the boundary happens more slowly than in the bulk of the system.
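The buildup of local information under such a brickwork circuit can be reproduced in a small sketch (six qubits here for speed; the even/odd layer convention and the CUE sampling via QR of a Ginibre matrix are implementation choices, not taken from the text). Initially all information sits on level ℓ = 0; after one cycle, nonzero local information appears on higher levels.

```python
import numpy as np

rng = np.random.default_rng(2)
L, d = 6, 2

def haar_unitary(m):
    """CUE sample via QR decomposition of a Ginibre matrix."""
    Z = (rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

def embed(op, first):
    return np.kron(np.kron(np.eye(d ** first), op), np.eye(d ** (L - first - 2)))

def brickwork_cycle(psi):
    for m in range(0, L - 1, 2):            # U_even: pairs (0,1), (2,3), ...
        psi = embed(haar_unitary(4), m) @ psi
    for m in range(1, L - 1, 2):            # U_odd:  pairs (1,2), (3,4), ...
        psi = embed(haar_unitary(4), m) @ psi
    return psi

def I_sub(psi, first, ell):
    """Von Neumann information of the ell+1 sites starting at `first`."""
    if ell < 0 or first < 0 or first + ell > L - 1:
        return 0.0
    r = np.outer(psi, psi.conj()).reshape([d] * (2 * L))
    for s in reversed([s for s in range(L) if not first <= s <= first + ell]):
        r = np.trace(r, axis1=s, axis2=s + r.ndim // 2)
    ev = np.linalg.eigvalsh(r.reshape(d ** (ell + 1), d ** (ell + 1)))
    ev = ev[ev > 1e-12]
    return (ell + 1) * np.log(d) + np.sum(ev * np.log(ev))

def level_info(psi, ell):
    """Total local information on level ell of the information lattice."""
    return sum(I_sub(psi, f, ell) - I_sub(psi, f, ell - 1)
               - I_sub(psi, f + 1, ell - 1) + I_sub(psi, f + 1, ell - 2)
               for f in range(L - ell))

psi = np.zeros(d ** L, dtype=complex); psi[0] = 1.0   # product state |up...up>
print(abs(level_info(psi, 1)) < 1e-9)                 # True: all info on level 0
psi = brickwork_cycle(psi)
print(level_info(psi, 1) > 1e-6)                      # True: info has spread
```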
In this example the system is relatively small, and the dynamics can therefore be solved exactly. Nonetheless, the same exact dynamics could be described by using the subsystem time evolution based on recovery maps. At each time t, we fix a level ℓ* such that i^ℓ_n = 0 for any ℓ ≥ ℓ*. Then, the scheme starts by performing the exact time evolution of the states of the subsystems C^{ℓ*} having zero local information [see Fig. 5(a)]. Their equations of motion are closed via the recovery maps, as described at the beginning of this section. Whenever an infinitesimal amount of local information reaches level ℓ* due to spreading correlations, we update ℓ* to the next smallest level with zero local information [see Fig. 5(b)]. As an example, in the random-unitary evolution described above, ℓ* can increase by a maximum of 4 in each cycle. Therefore, the exact subsystem time-evolution scheme requires a substantial updating of ℓ* towards increasing levels and, eventually, ℓ* = L − 1, which corresponds to the topmost site of the subsystem lattice, that is, the entire system. As the level ℓ* increases, the computational resources needed to perform the time evolution grow exponentially. This is how the challenge of capturing entanglement growth appears on the information lattice.
Obviously, given a limited amount of computational time and resources, employing the exact subsystem time-evolution algorithm for large systems at long timescales becomes infeasible. To circumvent this problem, once ℓ* has reached a maximum value ℓ* = ℓmax set by the resources at our disposal, we require a suitable truncation method to continue the time evolution at ℓmax without further increasing ℓ*.

III. APPROXIMATE TIME EVOLUTION ON THE SUBSYSTEM LATTICE: THE LITE ALGORITHM

A. Introduction to LITE
Given a maximum level ℓmax on which the numerical resources allow us to perform the time evolution, one might be tempted to adopt the naive approach of closing the equation of motion (8) by simply ignoring the error term in the Petz recovery map (10). One would then continue to apply Petz recovery maps to compute the density matrices up to level ℓmax + r while keeping ℓ* = ℓmax, even though there is nonvanishing local information on ℓmax. This approach, however, is problematic. The first problem is that the density matrices at level ℓmax + r obtained by recovery maps, which we denote as ρ̃^{ℓmax+r}_n, may not preserve the lower-level density matrices [for instance, Tr_{rL}(ρ̃^{ℓmax+r}_n) may differ from ρ^{ℓmax}_{n+r/2}]. As a result, errors are introduced on all length scales. In consequence, the algorithm fails to preserve the local constants of the motion, which leads to unphysical results.
To remedy this problem, we can use the projected Petz recovery map (see Ref. 7 and App. B 2), which projects the recovered density matrix to have the correct reduced density matrices, but this gives rise to two equally severe issues. First, the projection step may violate the positivity of the density matrices, leading to an immediate breakdown of the time evolution. Second, the density matrices produced by recovery maps generically have almost minimal local information. This leads to an underestimation of local information at levels larger than ℓmax and, in turn, to an underestimation of the local information current from level ℓmax to higher levels [7]. Then, the spuriously small information current leads to a significant buildup of local information at scale ℓmax, which eventually distorts the dynamics.
In Ref. 7, the issue of the buildup of local information at level ℓmax was addressed by fixing the local information current from level ℓmax to higher levels through an information-current boundary condition that was empirically motivated by translational invariance and ergodic diffusive dynamics. Here, we aim to develop a time-evolution algorithm that does not depend on any specific assumptions about the information current and is suitable for diverse hydrodynamic behaviors. Therefore, instead of attempting to generalize the boundary condition ansatz of Ref. 7, we adopt a different strategy that involves removing local information directly. The removal of information is justified by statistical reasons: we expect that during the out-of-equilibrium dynamics information flows from smaller to larger scales and does not return to influence the lower-level density matrices. This guiding principle accounts for the second law of thermodynamics for entanglement entropy [39]. The removal of information must be approached with caution, as it has the potential to alter the true dynamics of the system. Any modification to the local information distribution will necessarily change the local information currents and thus impact the dynamics. Our goal is to remove local information in a way that artificial effects on the overall dynamics are minimal.

FIG. 6. Schematic of the local information distribution when the minimization on ℓmin is performed. (i) Minimization under the only constraint that lower-level density matrices are unaltered [see Eq. (13)]. When the minimization is performed, the azure distribution is modified into the yellow one. After a typical time τ, local information accumulates on levels ℓ < ℓmin. (ii) Minimization under the additional constraint of leaving unchanged local-information currents from ℓmin − 1 to ℓmin. When the minimization is performed, one obtains the yellow distribution. Notice that in this case local information on ℓmin is, in general, larger than in panel (i). After time τ, local information tends to accumulate on ℓmin.
FIG. 7. Schematic of the LITE algorithm. The color intensity encodes the local information on the information-lattice sites. The green connections between the dots illustrate the information currents. We use projected Petz recovery maps (PPRM) (see App. B 2) to evolve subsystems until ℓ* reaches ℓmax. Once a small amount of information has accumulated at level ℓmax, we jump back to level ℓmin, where we minimize the information by preserving all density matrices on ℓ < ℓmin, as well as the currents flowing between levels ℓmin − 1 and ℓmin. The minimization constraints are highlighted in the picture by the red contours.

The key to overcoming this issue is the introduction of a new length scale ℓmin < ℓmax on which we remove local information. In order to keep the distribution of local information as stable as possible while removing parts of it, we minimize local information on level ℓmin with the constraint of keeping the density matrices at lower levels ℓ < ℓmin and the information currents on ℓ ≤ ℓmin fixed. In practice, this is achieved by defining the new subsystem density matrix [52]

ρ̃^{ℓmin}_n := ρ^{ℓmin}_n + χ^{ℓmin}_n,

where χ^{ℓmin}_n is a Hermitian matrix acting on the Hilbert space associated with C^{ℓmin}_n. First, we impose that [53]

Tr_{1L}(χ^{ℓmin}_n) = 0, Tr_{1R}(χ^{ℓmin}_n) = 0. (13)

These two conditions ensure that shifting the density matrix by χ^{ℓmin}_n does not change the density matrices on lower levels. One can in principle try to minimize information solely under the conditions (13). The minimization would consist in finding the optimal χ^{ℓmin}_n satisfying conditions (13) such that the corresponding local information i^{ℓmin}_n is minimized. However, while this naive minimization scheme does not distort the distribution of local information on levels ℓ < ℓmin at the time it is applied, this is not guaranteed at later times, since such minimization modifies the local information currents. Specifically, just after the minimization (up to the characteristic timescale τ of the system), the local information currents flowing into level ℓmin are
suppressed, which causes an erroneous buildup of information on lower levels. This is similar to the erroneous buildup of information caused by the Petz recovery maps discussed above. Fig. 6(i) schematically depicts the distribution of local information before (azure) and after (yellow) the minimization under only the constraints (13), as well as the subsequent accumulation of information on levels ℓ < ℓmin after evolving for an additional time τ.
This problem is circumvented by imposing a second constraint on χ^{ℓmin}_n: we enforce that the local information currents between ℓmin − 1 and ℓmin remain unchanged under the minimization (see Sec. III B below for details). In this way, after the current-constrained minimization, there is no buildup of information at levels ℓ < ℓmin [see Fig. 6(ii)]. Instead, local information tends to accumulate at level ℓmin. However, since the local information has just been minimized at level ℓmin, the accumulation does not induce strong artificial local-information gradients between different levels; therefore, the dynamics on lower levels is much less affected.
The current-constrained minimization drastically suppresses local information on levels ℓ > ℓmin. In turn, the projected Petz recovery maps performed to compute the density matrices on levels ℓ > ℓmin become more accurate. This allows us to continue the subsystem time evolution on level ℓ* = ℓmin by applying recovery maps up to ℓmin + r. As the dynamics proceeds and entanglement spreads, local information reaches higher and higher levels. As described before, we increase ℓ* as soon as a non-negligible amount of local information has reached it (this amount should be set as small as possible; see Sec. IV C below for more details), up to ℓ* = ℓmax. When local information has again substantially spread up to level ℓmax, we repeat the minimization of local information at level ℓmin. This two-level scheme is used throughout the entire time evolution. Importantly, it can be shown that this algorithm conserves local constants of motion at ℓ < ℓmin by construction (as discussed in App. A). A schematic of the algorithm is depicted in Fig. 7.
Given the pivotal role of the local-information time evolution in the design of the algorithm, we denote it LITE. The remaining part of this section is devoted to providing the mathematical details of the LITE algorithm. Readers not interested in these details can proceed directly to Sec. IV A.

Time evolution of information and information currents
To formalize the above discussion, we require a notion of information currents. The dynamics generated by time-evolving subsystem density matrices according to Eq. (8) leads to a corresponding dynamics of the information (5). The time derivative of the von Neumann information of the state of the subsystem C^ℓ_n [see Eq. (3)] defines the current. By expanding the right-hand side, we decompose this current into two parts, one flowing left and one flowing right. From Eq. (8) and the cyclic property of the trace, we obtain three contributions. The first term on the right-hand side originates from terms in the Hamiltonian within C^ℓ_n only; therefore it cannot contribute to the information flow into or out of (n, ℓ). On a technical level, it vanishes because [∇_ρS^ℓ_n, ρ^ℓ_n] = 0. The second term originates only from terms in the Hamiltonian within the subsystem C^{ℓ+r}_{n−r/2}; thus it cannot contribute to the change of the von Neumann information I^{ℓ+r}_{n−r/2} of the state of the subsystem C^{ℓ+r}_{n−r/2}. Pictorially, it does not change the von Neumann information in the union of the red and blue regions of Fig. 8(a). We conclude that it must be the left current from the red to the blue region. By a similar argument for the third term, we obtain the right current, Eqs. (16) and (17). We schematically show J^r_L and J^r_R in Fig. 8(b). The currents are linear functions of the higher-level subsystem density matrices ρ^{ℓ+r}_{n−r/2} and ρ^{ℓ+r}_{n+r/2}, respectively (which motivates the notation). Similar to the subsystem von Neumann information I^ℓ_n, the von Neumann currents J^r_L(ρ^{ℓ+r}_{n−r/2}) and J^r_R(ρ^{ℓ+r}_{n+r/2}) take values on the subsystem lattice. They are global properties of the subsystem: they are not assigned to individual lines connecting lattice points but to the full subsystem triangle on the subsystem lattice, and they flow between subsystems.
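The vanishing of the internal Hamiltonian contribution can be checked numerically: since [∇_ρS, ρ] = 0, a term −i[H, ρ] generated by a Hamiltonian acting entirely within the subsystem cannot change the subsystem's von Neumann entropy. A minimal sketch with generic matrices (not the paper's specific subsystems):

```python
import numpy as np

def random_density_matrix(d, seed=1):
    """Generic full-rank density matrix: positive definite, unit trace."""
    rng = np.random.default_rng(seed)
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T + 0.1 * np.eye(d)   # shift keeps eigenvalues > 0
    return rho / np.trace(rho).real

def entropy_gradient(rho):
    """∇_ρ S = −ln(ρ) − 1, evaluated via the eigendecomposition of ρ."""
    kappa, u = np.linalg.eigh(rho)
    return u @ np.diag(-np.log(kappa) - 1.0) @ u.conj().T

d = 4
rho = random_density_matrix(d)
rng = np.random.default_rng(2)
h = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
h = h + h.conj().T                 # Hermitian "internal" Hamiltonian

drho = -1j * (h @ rho - rho @ h)   # unitary part of the equation of motion
ds_dt = np.trace(entropy_gradient(rho) @ drho).real
assert abs(ds_dt) < 1e-10          # no entropy change from internal terms
```

The assertion holds by the cyclic property of the trace, exactly as argued in the text for the first term of the current.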

Projection onto a constrained subspace
The constraints on χ^ℓ_n discussed in Sec. III A can now be condensed into four equations. If Eqs. (19) and (20) are fulfilled, the density matrices ρ̃^ℓ_n := ρ^ℓ_n + χ^ℓ_n and ρ^ℓ_n have the same lower-level subsystem density matrices and identical currents on all levels below ℓ. Conditions (19) and (20) are schematically depicted in Fig. 9 when applied on level ℓ_min with r = 2. If constraints (19) are satisfied, all density matrices for ℓ < ℓ_min are kept fixed (indicated by the gray background). Furthermore, constraints (20) guarantee that the currents that flow out of the red region (black arrows) into the blue [see (a)] and green [see (b)] regions remain unchanged. Contributions to the currents at ℓ < ℓ_min are already preserved due to the imposed trace condition (specifically, the currents within the gray background). Thus, effectively, the current constraints (20) add only an additional condition on the currents that flow directly into level ℓ_min (ensuring that no extra currents are generated by the shift matrix χ^ℓ_n). Importantly, Eqs. (19) and (20) represent linear operations on χ^ℓ_n. Thus, we can impose Eqs. (19) and (20) by projecting χ^ℓ_n onto the respective kernels of the linear operators Tr^1_L, Tr^1_R, J^r_L and J^r_R. Given an arbitrary matrix χ^ℓ_n, the projectors onto the (partial) trace-free spaces are defined in Eqs. (21) and (22). To merge the projections of Eqs. (22) and (21) with the desired conditions on the currents (20) in a combined projector, we use the definitions (16) and (17). The current J^r_L of an arbitrary (partial) trace-free matrix P_{Tr_R} P_{Tr_L} χ^ℓ_n with subsystem coordinates (n, ℓ) follows directly; the Hermiticity of the operators (including P_{Tr_R} P_{Tr_L}) and the cyclic property of the trace allow us to rewrite its right-hand side accordingly. Combining the projectors onto the kernels of Tr^1_L, Tr^1_R and J^r_L, denoted P_{J_L}, we then obtain the current J^r_R of P_{J_L} P_{Tr_R} P_{Tr_L} χ^ℓ_n. Thus, the concatenated total projector P onto the kernel of the system of equations defined by Eqs. (19) and (20) is constructed. Given an arbitrary χ^ℓ_n, P projects it onto the subspace of matrices that preserve all subsystem density matrices on ℓ < ℓ_min and all information currents on ℓ ≤ ℓ_min.
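For a simple bipartition with dimensions d_L × d_R, the projectors onto the (partial) trace-free subspaces have a closed form: subtract the partially traced component tensored with a maximally mixed factor. A sketch under the assumption of a two-factor subsystem (the paper's C^ℓ_n are more general):

```python
import numpy as np

def partial_trace(chi, d_left, d_right, side):
    """Trace out the left or right factor of a (d_left*d_right)-dim operator."""
    x = chi.reshape(d_left, d_right, d_left, d_right)
    if side == "L":
        return np.trace(x, axis1=0, axis2=2)   # d_right x d_right
    return np.trace(x, axis1=1, axis2=3)       # d_left x d_left

def project_trace_free(chi, d_left, d_right, side):
    """Projector onto the kernel of the corresponding partial trace."""
    if side == "L":
        t = partial_trace(chi, d_left, d_right, "L")
        return chi - np.kron(np.eye(d_left) / d_left, t)
    t = partial_trace(chi, d_left, d_right, "R")
    return chi - np.kron(t, np.eye(d_right) / d_right)

dl, dr = 2, 3
rng = np.random.default_rng(0)
chi = rng.normal(size=(dl * dr, dl * dr))
chi = chi + chi.T                              # Hermitian shift matrix χ

p = project_trace_free(project_trace_free(chi, dl, dr, "L"), dl, dr, "R")
assert np.allclose(partial_trace(p, dl, dr, "L"), 0)   # Tr_L Pχ = 0
assert np.allclose(partial_trace(p, dl, dr, "R"), 0)   # Tr_R Pχ = 0
assert np.allclose(project_trace_free(p, dl, dr, "L"), p)  # idempotent
```

Adding such a shifted matrix to ρ^ℓ_n therefore leaves all lower-level density matrices untouched, which is exactly constraint (19).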

Minimization of local information under constraints
Given P that projects onto the subspace of interest, the next step is to find the optimal χ^ℓ_n that minimizes the information or, equivalently, maximizes the von Neumann entropy at level ℓ. That is, we want to find χ* = argmax_χ S(ρ + Pχ). For ease of notation, we drop the indices n and ℓ in this section. Expanding the von Neumann entropy up to second order in ξ (see App. C for a detailed derivation) involves the entropy gradient ∇_ρS = −ln(ρ) − 1 and the entropy Hessian at ρ. Here, U_ρ diagonalizes ρ, that is, its columns are given by the eigenvectors of ρ, and h is a matrix whose elements [Eq. (32)] are functions of the eigenvalues κ_i of ρ. Finally, A ⋆ B denotes the elementwise multiplication (or Hadamard product) of two matrices A and B. Importantly, since the eigenvalues of the density matrix ρ are strictly positive, κ_i > 0, from Eq. (32) we see that all the elements of h are strictly negative. Thus, we can apply Newton's method for optimization to find χ* such that the von Neumann entropy S(ρ + Pχ*) is maximal (or, equivalently, such that the von Neumann information is minimal) given the imposed constraints. The optimal shift χ* that maximizes the entropy is given by Eq. (33), where (•)^+ denotes the pseudoinverse [54]. To avoid the nontrivial computation of the pseudoinverse, instead of using Eq. (33) we numerically solve the related linear equation via the preconditioned conjugate gradient method [55] (see App. C 2 for details). We therefore iteratively update χ until convergence is reached and Eq. (34) is satisfied. To minimize the information at level ℓ_min on the subsystem lattice, we perform the above scheme for each lattice coordinate (n, ℓ_min) individually. Since the lower-level density matrices on ℓ < ℓ_min and the information currents on ℓ ≤ ℓ_min are kept fixed, the order in which the subsystem density matrices ρ^{ℓ_min}_n are minimized does not change the result. Finally, to lighten the notation, we henceforth denote the minimized density matrices ρ̃^{ℓ_min}_n simply by ρ^{ℓ_min}_n. This ensures that the symbol ρ consistently represents the density matrices used in the approximate time-evolution scheme.
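The entropy expansion underlying the Newton step can be verified numerically. Writing ξ̄ = U_ρ† ξ U_ρ in the eigenbasis of ρ, the Hessian acts elementwise, and our reading of the elided Eq. (32) (an assumption, consistent with App. C) is h_ii = −1/κ_i and h_ij = −(ln κ_i − ln κ_j)/(κ_i − κ_j) = −2 arctanh(δ_ij/∆_ij)/δ_ij for i ≠ j, all strictly negative. A sketch checking the second-order Taylor expansion:

```python
import numpy as np

def vn_entropy(x):
    k = np.linalg.eigvalsh(x)
    return float(-(k * np.log(k)).sum())

def entropy_grad_hess(rho):
    kappa, u = np.linalg.eigh(rho)
    grad = u @ np.diag(-np.log(kappa) - 1.0) @ u.conj().T
    # h_ij = −(ln κ_i − ln κ_j)/(κ_i − κ_j), h_ii = −1/κ_i: all negative.
    # (Near-degenerate pairs would need the limiting value −1/κ.)
    num = np.subtract.outer(np.log(kappa), np.log(kappa))
    den = np.subtract.outer(kappa, kappa)
    h = -np.divide(num, den, out=np.zeros_like(den), where=np.abs(den) > 1e-14)
    h[np.diag_indices_from(h)] = -1.0 / kappa
    return grad, h, u

rng = np.random.default_rng(3)
d = 4
a = rng.normal(size=(d, d))
rho = a @ a.T + np.eye(d)          # well-conditioned positive matrix
rho /= np.trace(rho)

xi = rng.normal(size=(d, d))
xi = xi + xi.T
xi -= np.trace(xi) / d * np.eye(d)  # traceless direction, as in the constraints

grad, h, u = entropy_grad_hess(rho)
xib = u.conj().T @ xi @ u
lam = 1e-4
first = float(np.trace(grad @ xi))
second = float((h * np.abs(xib) ** 2).sum())   # quadratic form of the Hessian
exact = vn_entropy(rho + lam * xi)
approx = vn_entropy(rho) + lam * first + 0.5 * lam**2 * second
assert abs(exact - approx) < 1e-7              # residual is O(λ³)
```

Since all h_ij < 0, the quadratic form is negative definite on traceless directions, so the Newton step of Eq. (33) indeed targets a maximum of the entropy.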

IV. NUMERICAL SIMULATIONS A. Local observables: energy distribution and diffusion coefficient
Time evolving with a time-invariant Hamiltonian ensures energy conservation. However, if the initial state's energy distribution lacks translational invariance, energy is redistributed as time progresses. How this redistribution proceeds hinges on the model's hydrodynamics, primarily characterized by the Hamiltonian H.
Energy transport can range from localized to subdiffusive, diffusive, ballistic, or even superdiffusive. By initializing the dynamics from a state featuring a localized energy accumulation, one can distinguish the diverse transport regimes from the time dependence of the energy distribution's variance. For instance, in a diffusive system, the energy distribution's variance grows as ∝ t, an outcome deduced directly from the diffusion equation. Ballistic transport, on the other hand, is expected to yield a variance proportional to t². Assuming that the system Hamiltonian H is local with a maximum interaction range r, we can write it as a sum of local terms h^r_m, with m a physical site index. Note that this partition is not unique. Note furthermore that H ≠ Σ_n H^r_n, with H^r_n defined in Eq. (2). Indeed, the H^r_n are global quantities on the subsystem lattice covering a range of r + 1 physical sites; hence, different subsystem Hamiltonians overlap. A local partition of the Hamiltonian has the advantage that the expectation value ⟨H⟩ becomes a sum of local expectation values, and E^r_m := ⟨h^r_m⟩ quantifies the local energy.
Given the distribution of the local energy, the corresponding variance is defined in Eq. (36), where m̄ = Σ_m m E^r_m/⟨H⟩ is the first moment of the distribution. Furthermore, the diffusion coefficient is defined in Eq. (39). Even though ∂_t⟨H⟩ = 0 for time-invariant Hamiltonians, the local energies are in general not constant. In diffusive systems, D is expected to be constant, as σ²_E ∝ Dt, and is typically referred to as the diffusion constant. Generically, however, the scaling of σ²_E is not linear in time and the diffusion coefficient D depends on time.
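These diagnostics are straightforward to compute from an energy profile. A sketch, assuming the convention σ²_E = Dt so that D(t) = ∂_t σ²_E (the paper's exact Eqs. (36) and (39) are not reproduced here):

```python
import numpy as np

def energy_variance(E):
    """Variance of the normalized local-energy distribution E_m."""
    m = np.arange(len(E), dtype=float)
    w = E / E.sum()                       # E.sum() plays the role of ⟨H⟩
    mbar = (m * w).sum()                  # first moment m̄
    return (((m - mbar) ** 2) * w).sum()

def diffusion_coefficient(variances, dt):
    """D(t) ≈ ∂_t σ²_E via finite differences (assumed convention)."""
    return np.gradient(variances, dt)

# sanity check: a discrete Gaussian profile of width s has variance ≈ s²
s, m0 = 3.0, 40.0
m = np.arange(81)
E = np.exp(-((m - m0) ** 2) / (2 * s**2))
assert abs(energy_variance(E) - s**2) < 1e-6

# linearly growing variance gives back a constant diffusion coefficient
t = np.linspace(0.1, 10, 100)
D0 = 1.55
assert np.allclose(diffusion_coefficient(D0 * t, t[1] - t[0]), D0)
```

For ballistic spreading, σ² ∝ t², the same estimator would instead return a linearly growing D(t), which is how the regimes are told apart in practice.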
Note that quantities similar to those defined in this section can be introduced in the presence of other conserved charges, for instance magnetization.

B. Initial states
For a thorough investigation of the transport properties through the temporal scaling of the diffusion coefficient, the energy diffusion should occur unimpeded throughout the system, in particular without boundary reflections. Consequently, the analysis necessitates working with either very large systems or, ideally, infinitely extended ones. In the general case where the Hamiltonian is not translation invariant, managing infinitely extended systems is only feasible when the initial state approaches thermal equilibrium asymptotically in space, thus remaining static there [56]. A simple state in this family is given by Eq. (40), where ρ_{m,∞} = 1_d/d is the infinite-temperature single-site density matrix located at site m, and d is the Hilbert-space dimension of the physical sites. If the Hamiltonian is traceless, the product state formed solely from ρ_{m,∞} possesses zero total energy; if it is not traceless, one can always redefine the energy by subtracting the trace such that this condition holds. The initial state (40) thus contains a finite amount of energy located within a range of ℓ + 1 physical sites centered at n (determined only by ρ^ℓ_{n,init}). In practice, Eq. (40) allows us to effectively simulate a finite system at all times while performing time evolution of the infinitely extended one. Indeed, the initial state (40) is asymptotically time-invariant, as infinite-temperature density matrices do not evolve in time. We stress that this important property holds for any Hamiltonian, including non-translation-invariant or disordered ones. Consequently, the algorithm exclusively performs time evolution within the central region of the system, encompassing a finite number of sites around n. This effective region includes both the physical sites whose state deviates from the infinite-temperature background at time t and a finite number of physical sites at the boundaries that are in the infinite-temperature state. After each time-evolution step, we check the state of the boundary sites. If the difference (in norm) between their single-site density matrix and the infinite-temperature single-site density matrix exceeds a chosen tolerance, we enlarge the effective system by adding further physical sites (at infinite temperature) at each end (see App. D 1 for more details). By increasing the size of the effective region, we ensure that energy, while spreading over time, can propagate freely throughout the system. Given ℓ_max, utilizing Eq. (40) makes the required computational resources scale linearly with the effective system size.
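The boundary check and enlargement step can be sketched as follows; the helper name, tolerance, and padding size are illustrative choices, not the paper's (the actual criterion is detailed in App. D 1):

```python
import numpy as np

def enlarge_if_needed(sites, d=2, tol=1e-8, pad=4):
    """Append infinite-temperature sites whenever either boundary single-site
    density matrix deviates from 1/d by more than `tol` (hypothetical helper;
    tol and pad are free parameters of the scheme)."""
    inf_t = np.eye(d) / d
    deviates = (np.linalg.norm(sites[0] - inf_t) > tol
                or np.linalg.norm(sites[-1] - inf_t) > tol)
    if deviates:
        padding_l = [inf_t.copy() for _ in range(pad)]
        padding_r = [inf_t.copy() for _ in range(pad)]
        sites = padding_l + sites + padding_r
    return sites

# a purely infinite-temperature region is left untouched
background = [np.eye(2) / 2 for _ in range(3)]
assert len(enlarge_if_needed(background)) == 3

# once energy reaches a boundary site, the effective region grows
excited = [m.copy() for m in background]
excited[0][0, 0] += 1e-3
excited[0][1, 1] -= 1e-3
assert len(enlarge_if_needed(excited)) == 3 + 2 * 4
```

Because only the deviating central region is evolved, the cost grows linearly with the effective system size, as stated above.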
One convenient choice for ρ^ℓ_{n,init} (used in Sec. IV D) is a thermal density matrix with respect to the subsystem Hamiltonian [Eq. (41)].

C. Numerical parameters of LITE
To numerically implement the two-level scheme of LITE depicted in Fig. 7, one needs to introduce some threshold parameters. Let us imagine we initialize the system in a state with local information only on small levels ℓ, where ℓ < ℓ* < ℓ_min < ℓ_max. We then start the time evolution from density matrices on level ℓ* by closing the equation of motion (8) via the projected Petz recovery map (see App. B 2). As soon as the local information on ℓ* reaches a small critical threshold value, q*, we update ℓ* to ℓ* + r by computing the higher-level density matrices via the recovery map. When ℓ* reaches ℓ_max and a critical amount of information, q_max, has accumulated at ℓ_max, we perform the current-constraint minimization at level ℓ_min while keeping all lower-level density matrices and information currents unaltered. Subsequently, we continue the time evolution at level ℓ* = ℓ_min as described above. We refer to the application of one minimization as one evolution cycle.
While ℓ_min and ℓ_max should be chosen as large as possible (with ℓ_min < ℓ_max), and q* should be set as small as possible, a similar extremal condition should not be applied to q_max. Since the minimization removes information over time, the total information in the system (specifically, the sum of local information over all information-lattice sites) shrinks with the number of evolution cycles. Thus, we define q_max as the percentage of local information that we allow on level ℓ_max, measured with respect to the total information currently in the system. This implies the need to recompute the total information in the system at the beginning of each cycle. Choosing too large a value of q_max is problematic, as local information starts to accumulate at ℓ_max, making the Petz recovery maps less accurate and possibly distorting the dynamics on the lower levels. On the other hand, vanishingly small values of q_max trigger overly frequent minimizations of local information at ℓ_min. In this case, local information has no time to travel beyond ℓ_min, and the intrinsic dynamics of the system is altered. Thus, there is an optimal range for q_max that has to be determined empirically for the system under consideration.
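The relative threshold q_max amounts to simple bookkeeping on the information lattice. A minimal sketch of the trigger logic, assuming local-information values keyed by lattice sites (n, ℓ) (the data layout is a hypothetical choice):

```python
def should_minimize(info_lattice, l_max, q_max):
    """Trigger the ℓ_min minimization once the information on level ℓ_max
    exceeds the fraction q_max of the total information currently in the
    system. Keys of info_lattice are (n, ℓ) information-lattice sites."""
    total = sum(info_lattice.values())              # recomputed each cycle
    top = sum(v for (n, l), v in info_lattice.items() if l == l_max)
    return top > q_max * total

info = {(0, 0): 0.6, (1, 0): 0.3, (0.5, 1): 0.05, (0.5, 2): 0.004}
# level-2 fraction is 0.004/0.954 ≈ 0.42%
assert not should_minimize(info, l_max=2, q_max=0.005)
assert should_minimize(info, l_max=2, q_max=0.003)
```

Because `total` shrinks after every minimization cycle, an absolute threshold would eventually never fire; normalizing by the current total keeps the trigger meaningful throughout the evolution.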

D. Diffusive dynamics in the mixed-field Ising model
To demonstrate the efficiency of the LITE algorithm, we apply it to an established model for which the expected dynamics is known. We show that LITE is able to access previously unreachable long times with excellent convergence properties. Specifically, we consider the one-dimensional mixed-field Ising spin chain with Hamiltonian (42), where σ^η_m (with η ∈ {x, y, z}) are Pauli matrices acting on the physical site m. Hamiltonian (42) represents a nearest-neighbor, translation-invariant model. A number of recent works have discussed the same Hamiltonian in the context of diffusive dynamics [7,25,33]. For Hamiltonian (42), we define the local energy as in Eq. (43). Given that energy is a globally conserved quantity, and considering an initial state of the form (40), at late times one expects the variance of the energy distribution to scale as σ²_E = Dt, with D a constant. The exact value of D can only be determined in large-scale systems and at very long times.
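Because the local partition is not unique, one possible choice (not necessarily the paper's Eq. (43)) assigns each bond term together with the onsite fields of its left site, folding the last site's field into the final bond on an open chain. A sketch verifying that such a partition resums to the full mixed-field Ising Hamiltonian:

```python
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)
J, hT, hL = 1.0, 1.4, 0.9045          # parameters used in Sec. IV D

def embed(ops, N):
    """Tensor a dict {site: operator} into the N-site Hilbert space."""
    out = np.array([[1.0]])
    for m in range(N):
        out = np.kron(out, ops.get(m, I2))
    return out

def local_energy_terms(N):
    """One possible partition: h_m = J σ^z_m σ^z_{m+1} + h_T σ^x_m + h_L σ^z_m,
    with the last site's field folded into the last term (open chain)."""
    field = lambda m: embed({m: hT * sx + hL * sz}, N)
    terms = [J * embed({m: sz, m + 1: sz}, N) + field(m) for m in range(N - 1)]
    terms[-1] = terms[-1] + field(N - 1)
    return terms

N = 4
H = sum(J * embed({m: sz, m + 1: sz}, N) for m in range(N - 1))
H = H + sum(embed({m: hT * sx + hL * sz}, N) for m in range(N))
assert np.allclose(sum(local_energy_terms(N)), H)   # the partition resums to H
```

Any such partition yields the same total ⟨H⟩, but the local energies E^r_m = ⟨h^r_m⟩, and hence the fine structure of the energy profile, depend on the choice.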
We set J = 1, h_T = 1.4, and h_L = 0.9045, ensuring fast entanglement growth and chaotic dynamics [57]. We initialize the system in the state (40), where the part that deviates from the infinite-temperature density matrices ρ_{m,∞} spans three sites and is given by Eq. (41) with β = 0.05. Using the LITE approach, we time evolve this infinitely extended initial state for different configurations of the algorithm parameters. At each time-evolution step, we compute the corresponding variance according to Eq. (36) and the diffusion coefficient according to Eq. (39). In Fig. 10, we depict the diffusion coefficient as a function of time for different values of ℓ_min with fixed ℓ_max = 9 and q_max = 0.5%. At short times, up to the first occurrence of the minimization, all the curves are identical. In this first regime of the dynamics, the system behaves ballistically and D(t) increases with time, approaching its final plateau value. The saturation process is, however, longer than the timescale at which the minimization sets in; thus, we find a dependence of the saturated value of D(t) on ℓ_min: increasing values of ℓ_min are associated with an increasing (average) plateau value of D reached at times Jt ≳ 20. Another noticeable feature of Fig. 10 is the oscillations of D forming at increasing times. We attribute these oscillations to algorithmic artifacts emerging from the minimization and the removal of information at ℓ_min, which can affect the information flow. As ℓ_min is increased, the magnitude of the oscillations consistently shrinks. They can therefore be interpreted as finite-scale effects associated with ℓ_min. Intuitively, expectation values of operators with support on a few sites should not be influenced by the dynamics of local information happening at much larger scales.
Importantly, we find a clear convergence of the average plateau values of D as a function of 1/ℓ_min. This convergence is better analyzed in Fig. 11, which depicts the diffusion coefficient D averaged over the interval t ∈ [20, 100] as a function of 1/ℓ_min for different ℓ_max and q_max. Using a linear extrapolation of the asymptotic value of D for ℓ_min → ∞, we find lim_{ℓ_min→∞} D ≈ 1.55. This value is within a 5−10% margin of the values 1.4−1.46 found for the same model in recent works [33,41,42]. These discrepancies can be attributed to different reasons. For instance, we consider longer timescales than Ref. 33, which predicts the value 1.4; while we time evolve up to Jt ∼ 100 and average over t ∈ [20, 100], in Ref. 33 the maximum evolution time is Jt ∼ 20, at which the diffusion coefficient has not yet converged to its long-time plateau. In addition, other works [41] do not provide an asymptotic extrapolation such as that in Fig. 11. As the exact value of the diffusion constant for the mixed-field Ising model is not known, in Sec. V B below we further benchmark our approach by applying it to a model for which the exact value of the diffusion constant has been derived.

[FIG. 13 caption: power-law exponent γ of σ²_E(t) as a function of the inverse window time 1/t_w for several ℓ_min, at ℓ_max = 9 and q_max = 0.5%. By linear fits, we extrapolate γ_∞ = lim_{t_w→∞} γ. (Inset) γ_∞ as a function of 1/ℓ_min. By a linear fit, we extrapolate γ_∞ in the limit ℓ_min → ∞, obtaining γ*_∞ = 0.999 ± 0.004, compatible with 1, the expected power-law exponent for a diffusive system.]
Clearly, ℓ_min is the most decisive parameter of our algorithm, as it determines the scale above which the conservation of local constants of motion is no longer guaranteed (see App. A). Given a fixed value of ℓ_min, the remaining parameters ℓ_max and q_max have only a (similarly) weak influence on the resulting value of D, as exemplified in Figs. 11-12. There is, however, a caveat: while ℓ_max sets the maximum value of ℓ_min and should be chosen as large as possible, no such line of reasoning exists for q_max. In fact, lowering q_max too much makes the removal of information via the minimization scheme ineffective, while too large values of q_max lead to an accumulation of information at level ℓ_max before the minimization at level ℓ_min is triggered, which might distort the information flow and the system dynamics. Thus, we expect the presence of an optimal range of values for q_max. Empirically, we do not find significant differences in the range q_max ∼ 0.5−2% (see Fig. 11).
Fig. 10 contains a barely visible yet important subtlety: even in the saturated regime (Jt ≳ 20), the diffusion coefficient D is not entirely constant over the evolution time. Instead, we observe a slight time-dependent increase in D. To quantify the drift, we assume the functional form σ²_E(t) ∝ Dt^γ and apply a power-law fit to different time windows in the saturated regime of the time evolution. We fix the window length to ∆t = 10 and let t_w denote the initial time of the window. The power-law exponent γ as a function of the time window is shown in Fig. 13 for different values of ℓ_min. We find that the flow of the exponent γ supports diffusive dynamics at late times and large ℓ_min: lim_{ℓ_min→∞} lim_{t_w→∞} γ = 0.999 ± 0.004. Importantly, this shows that the algorithm is able to capture the correct long-time behavior of local observables. Indeed, for the current model, any deviation from purely diffusive behavior would hint at a systematic error in the algorithm that distorts the dynamics.

V. LINDBLAD DISSIPATIVE DYNAMICS VIA LOCAL-INFORMATION TIME EVOLUTION
A. The Lindblad master equation within LITE

The von Neumann equation (7) and the subsystem equation of motion (8) describe the dynamics of closed quantum systems. However, in many practical cases, achieving a reliable description of a quantum system's dynamics requires considering its interaction with the external environment [58]. This interaction introduces quantum dissipation. Unlike for closed systems, the dynamics of open quantum systems cannot be represented by a unitary time evolution. Nevertheless, one can often formulate it in terms of a quantum master equation. Among those, the Lindblad master equation [59,60] (or Markovian master equation) holds a significant role. It characterizes the evolution of a system coupled to a memoryless thermal bath, implying that the bath's timescale is considerably shorter than any other timescale in the problem. The Lindblad master equation is a first-order linear differential equation for the system's density matrix; it reads

∂_t ρ = −i[H, ρ] + Σ_j γ_j ( L_j ρ L_j^† − ½ {L_j^† L_j, ρ} ).

Here, {•, •} is the anticommutator, and L_j, L_j^† are the Lindblad jump operators modeling the dissipative dynamics. The jump operators describe how the environment influences the system and must in principle be derived from the full microscopic Hamiltonian that accounts for both the system and the environment. The coupling constants γ_j ≥ 0 quantify the strength of the dissipation.
The LITE approach can be readily extended to open quantum systems governed by the Lindblad master equation. This extension is particularly straightforward when dealing with Lindblad jump operators acting on individual physical sites. In this scenario, the subsystem equation of motion acquires an additional dissipative term, in which m runs over the physical sites within the subsystem C^ℓ_n. Remarkably, this second term on the right-hand side exclusively involves ρ^ℓ_n, allowing for the seamless inclusion of onsite dissipators. In fact, this addition does not necessitate any modifications to the LITE approximate time-evolution scheme.

[FIG. 14 caption: for each γ, we plot the diffusion coefficient of magnetization transport in Eq. (49) at (ℓ_min, ℓ_max) = (3, 5) (solid lines) and (4, 6) (dashed lines), for fixed J = 1. The curves at different (ℓ_min, ℓ_max) differ at most by ∼O(10⁻⁸), indicating the fast convergence of LITE in the presence of dissipation. After a short-time ballistic behavior, D(t) saturates to a plateau dependent on γ. The long-time diffusion coefficient perfectly agrees with the exact value in Eq. (48) (dotted lines) (see also Fig. 15).]
As previously discussed, the fundamental idea of the LITE approach is to selectively remove local information while preserving the local dynamics. This strategy enhances the accuracy of the recovery maps, facilitating the closure of the subsystem equation of motion (8). From this perspective, the introduction of Lindblad dissipators is expected to improve the convergence properties of the algorithm. Consequently, LITE emerges as a particularly well-suited approach for addressing systems with dissipative dynamics. In a concurrent (mainly experimental) work [61], some of the present authors applied this approach to reproduce experimental results for driven nitrogen-vacancy centers in diamond. Specifically, in Ref. 61 thermalization is used to engineer and stabilize mesoscopic shell-like spin textures characterized by spins exhibiting opposite polarization on either side of a critical radius. These textures encompass about O(100) spins and are stable for several minutes. In the following section, we discuss in detail another application of LITE to open quantum systems.

[FIG. 15 caption: diffusion coefficient of Eq. (49) averaged over the time window t ∈ [30, 220] for (ℓ_min, ℓ_max) = (4, 6) and q_max = 1%, as a function of 2J²/γ (purple circles). Data points correspond to J = 0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9 at γ = 0.1, and γ = 0.1, 0.2, 0.3, 0.4, 0.5 at J = 1. The long-time diffusion constant D agrees with the exact result in Eq. (48) (orange circles); they differ at most by ∼O(10⁻⁶).]
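The structure of the dissipative term is easy to verify on a full density matrix: the Lindblad right-hand side preserves both the trace and Hermiticity, which is what allows it to be added to the subsystem equations without further modifications. A minimal sketch with an onsite dephasing jump operator on a two-site system:

```python
import numpy as np

def lindblad_rhs(rho, H, jumps, gammas):
    """Right-hand side of the Lindblad master equation:
    −i[H, ρ] + Σ_j γ_j (L_j ρ L_j† − ½{L_j†L_j, ρ})."""
    out = -1j * (H @ rho - rho @ H)
    for g, L in zip(gammas, jumps):
        LdL = L.conj().T @ L
        out = out + g * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL))
    return out

# two-site example with an onsite dephasing jump operator σ^z ⊗ 1
sz = np.diag([1.0, -1.0])
L1 = np.kron(sz, np.eye(2))
rng = np.random.default_rng(5)
a = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = a @ a.conj().T
rho /= np.trace(rho).real
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = H + H.conj().T

drho = lindblad_rhs(rho, H, [L1], [0.3])
assert abs(np.trace(drho)) < 1e-12        # trace preserving
assert np.allclose(drho, drho.conj().T)   # Hermiticity preserving
```

Both properties follow term by term from the cyclic property of the trace, mirroring the argument used for the closed-system currents.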

B. Numerical results for diffusive dynamics in the presence of onsite dephasing
To showcase the efficiency of LITE for open systems within the Lindblad framework, we apply it to a model for which the exact diffusion coefficient has been analytically derived [43-45]. Our analysis shows that in this case LITE is able to obtain the diffusion coefficient with high precision, even at small scales ℓ_min and ℓ_max. Specifically, we simulate an XX spin chain subject to onsite dephasing Lindblad operators. The dynamics of the system is determined by the Lindblad master equation (45), with the Hamiltonian of Eq. (46) and the dephasing Lindblad operators (47) acting on each physical site with coupling strength γ. The presence of local dephasing effectively introduces interactions in the system that change the transport characteristics from ballistic to diffusive, as previously observed via matrix-product-operator techniques [43,62,63]. The exact value of the diffusion coefficient for the open chain in Eqs. (46)-(47) was derived in Refs. [43-45] [Eq. (48)]. For the system described by Eqs. (46)-(47), we conduct a series of simulations using LITE to study magnetization transport. We initialize the system in the state (40), where ρ^ℓ_{n,init} is a product state spanning 11 sites with a finite negative magnetization, ρ^ℓ_{n,init} = ⊗_m ρ^0_m with p_m := ⟨σ^z_m⟩ = −0.2. As in Sec. IV D, we time evolve this effectively infinitely extended system for different parameters of the model. At each time-evolution step, we compute the diffusion coefficient according to Eq. (49), where P = Σ_m p_m. The results are reported in Figs. 14 and 15. In Fig. 14, we show the diffusion coefficient as a function of time, D(t), for different coupling strengths γ, at (ℓ_min, ℓ_max) = (3, 5) (solid lines) and (ℓ_min, ℓ_max) = (4, 6) (dashed lines). The diffusion coefficient D(t) shows remarkable features. First, the results are almost independent of the scales (ℓ_min, ℓ_max), even at these modest values. Moreover, the artificial oscillations present in the closed case (see Fig. 10) are now completely absent, and the diffusion coefficient is highly stable and constant in the long-time limit. Finally, we recover the analytical result in Eq. (48), shown as dotted lines. In Fig. 15, we further demonstrate the excellent agreement of the LITE results (purple circles) with the analytical ones (orange circles) by computing the diffusion constant averaged over the time window t ∈ [30, 220] as a function of 2J²/γ. The points are aligned on a straight line with slope 1, as implied by Eq. (48). The difference between the LITE results and the exact ones is at most about O(10⁻⁶).
Our findings demonstrate that onsite Lindblad operators enhance the convergence properties of LITE. We attribute this enhancement to the removal of information by dephasing, which in turn improves the accuracy of the subsystem time evolution via recovery maps. In the simulations, we have indeed observed that the minimization protocol is activated only at short timescales, due to the presence of information at level ℓ_max. At intermediate and long timescales, the total amount of information in the system is very small. Consequently, efficient time evolution at a reduced scale ℓ_max = 3 is feasible without the need for additional minimization steps or increased time-evolution scales ℓ_max. Thanks to this, Figs. 14 and 15 have been obtained using very modest computational resources.

VI. CONCLUSION AND OUTLOOK
We have proposed a novel algorithm (LITE) for the approximate time evolution of generic local many-body quantum Hamiltonians. Our approach is based on statistical arguments concerning the unidirectional flow of quantum information, which primarily progresses to larger scales without returning to smaller scales to influence local observables. By leveraging the concepts of local information and information currents, we systematically discard long-range correlations in a controlled manner. This allows us to obtain an accurate description of local states at any given time.
LITE operates by decomposing the system into subsystems and solving their von Neumann equations in parallel. To close the (in principle infinite) hierarchy of subsystem equations of motion, we have introduced two scales, ℓ_min and ℓ_max. The parameter ℓ_min defines the scale at which we systematically remove local information while preserving lower-level density matrices (for ℓ < ℓ_min) and information currents (for ℓ ≤ ℓ_min). The parameter ℓ_max is the maximum scale on which time evolution is performed and is constrained by the available computational resources. The knowledge of the subsystem states at the maximum scale ℓ_max is used to accurately determine the information flow on smaller scales. By construction, the LITE approach conserves all local constants of motion at scales ℓ < ℓ_min. The scale ℓ_min also controls the accuracy of the results. While the computational complexity of LITE scales exponentially with the subsystem level ℓ_max, it increases only linearly with the total system size, which allows the investigation of large-scale systems and long timescales. Crucially, LITE requires neither external assumptions on the information currents nor the presence of symmetries, such as translation invariance. This approach is therefore highly versatile and suitable for exploring thermalization dynamics (or its absence) across various physical contexts, encompassing systems with diverse hydrodynamic behaviors.
Within the LITE framework, we can initialize the time evolution from diverse initial states, including domain walls in finite-size systems or infinitely extended translation-invariant states [7]. Here, we have demonstrated its excellent convergence properties when starting from asymptotically time-invariant states (that is, states in which the state of the asymptotic region commutes with the Hamiltonian), particularly those near infinite temperature. This enables effective simulations of infinitely extended systems. Starting from such states, we have investigated the dynamics of the mixed-field Ising model for a set of parameters at which the system is highly chaotic; thus far, other time-evolution methods, such as matrix product states with finite bond dimensions, have not obtained conclusive results. Remarkably, we have been able to perform time evolution up to very long times and obtain an accurate estimate of the power-law exponent for energy diffusion and of the energy diffusion constant.
The LITE approach is especially well suited for investigating Lindblad dissipative dynamics with local dissipators. In that case, the algorithm converges even faster than for closed systems, owing to the additional removal of information by the dissipators. Harkins et al. [61] demonstrated this by reproducing experimental results for the magnetization transport driven by nitrogen-vacancy centers in diamond. In addition, here we have shown results for an open XX spin chain subject to onsite dephasing, which exhibits diffusive transport in the long-time limit and for which the exact value of the diffusion constant has been analytically derived [43-45]. The diffusion constant calculated with LITE perfectly agrees with the exact value, even when computed at small scales ℓ_min and ℓ_max. This shows that, in the case of Lindblad dissipative dynamics, LITE provides accurate results at extremely modest computational costs. This opens the door to unprecedentedly accurate investigations of long-time and large-scale open many-body quantum systems.
Since LITE relies solely on quantum thermalization and the consequent entanglement growth, we expect it to also be particularly appropriate for simulating subdiffusive and superdiffusive transport in quantum systems. The investigation of these transport regimes is an increasingly active research direction at the forefront of theoretical and experimental physics [64,65]. Moreover, while our focus here has primarily been on one-dimensional nearest-neighbor Hamiltonians, the LITE approach can be applied to generic finite-range Hamiltonians and potentially extended to higher-dimensional systems. In the future, we anticipate exploring connections between the LITE approach and tensor-network algorithms for time evolution, with the potential for mutual insights and efficiency improvements. Additionally, combining a tensor-network ansatz with the LITE approach for compressing high-level density matrices could yield further algorithmic enhancements. Furthermore, we anticipate the possibility of gaining valuable insights into the spatial and temporal behavior of entanglement in many-body systems by employing the framework of the information lattice. The information lattice could indeed provide additional information on the dynamical heterogeneity observed in localized systems [66] and, more generally, on the complex structure of bipartite quantum entanglement in both ergodic and localized systems. Finally, we expect that the LITE approach will make valuable contributions to the field of quantum computation, for instance by facilitating classically optimized Hamiltonian simulations on quantum hardware [67,68].
[52] Notice that the projected Petz recovery maps discussed above can also be interpreted as a redefinition of the density matrices, similar to Eq. (12).
[53] Notice that the condition Tr

To remedy this problem, we add a projection step, exemplified here for r = 1. We compute ρ^{ℓ*+1}_{n+1/2} via the projected Petz recovery map. Importantly, ϱ^{ℓ*+1}_{n+1/2} → 0 as i^{ℓ*}_{n+1/2} → 0, in which case the Petz recovery map becomes exact. The recovered density matrix ρ^{ℓ*+1}_{n+1/2} serves as an approximation to the exact density matrix of the subsystem, and it is used to perform the subsystem time evolution on level ℓ*. Note that throughout this work the symbol ρ denotes the density matrices employed in the time-evolution scheme of LITE.
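To illustrate the recovery step discussed above, the following self-contained sketch implements the textbook (unprojected) Petz recovery map, which reconstructs a tripartite state ρ_ABC from the overlapping marginals ρ_AB and ρ_BC and is exact when the state is a quantum Markov chain, i.e., when the conditional mutual information vanishes (the analogue of i^{ℓ*}_{n+1/2} → 0 above). The projection step and the LITE level indexing are not reproduced here; the three-qubit state is a hypothetical example chosen so that the recovery is exact.

```python
import numpy as np

def psd_sqrt(m):
    """Square root of a positive semidefinite matrix via eigendecomposition."""
    w, v = np.linalg.eigh(m)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

def psd_inv_sqrt(m, tol=1e-12):
    """Pseudo-inverse square root, ignoring (near-)zero eigenvalues."""
    w, v = np.linalg.eigh(m)
    inv = np.where(w > tol, 1.0 / np.sqrt(np.clip(w, tol, None)), 0.0)
    return (v * inv) @ v.conj().T

def ptrace_first(rho, dA, dBC):
    """Trace out the first tensor factor of dimension dA."""
    return np.trace(rho.reshape(dA, dBC, dA, dBC), axis1=0, axis2=2)

def ptrace_last(rho, dAB, dC):
    """Trace out the last tensor factor of dimension dC."""
    return rho.reshape(dAB, dC, dAB, dC).trace(axis1=1, axis2=3)

# Classically correlated A-B-C Markov-chain state on three qubits:
# rho_ABC = sum_b p_b rho_A^b (x) |b><b| (x) rho_C^b.
up = np.diag([1.0, 0.0])          # |0><0|
plus = np.full((2, 2), 0.5)       # |+><+|
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
rho_ABC = 0.5 * np.kron(up, np.kron(P0, plus)) + 0.5 * np.kron(plus, np.kron(P1, up))

# Marginals entering the recovery map.
rho_AB = ptrace_last(rho_ABC, 4, 2)
rho_BC = ptrace_first(rho_ABC, 2, 4)
rho_B = ptrace_first(rho_AB, 2, 2)

# Petz recovery of C from B:
# R(rho_AB) = (1_A x rho_BC^{1/2}) [(1_A x rho_B^{-1/2}) rho_AB (1_A x rho_B^{-1/2}) x 1_C]
#             (1_A x rho_BC^{1/2})
inner = np.kron(np.eye(2), psd_inv_sqrt(rho_B))
X = np.kron(inner @ rho_AB @ inner, np.eye(2))
M = np.kron(np.eye(2), psd_sqrt(rho_BC))
rho_rec = M @ X @ M

print(np.linalg.norm(rho_rec - rho_ABC))  # ~0: exact for a quantum Markov chain
```

For a state with nonvanishing conditional mutual information the reconstruction is only approximate, which is precisely why the exactness of the recovery is controlled by the local information i^{ℓ*}_{n+1/2}.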
Appendix C: Details on the minimization of local information under constraints

Gradient and Hessian of the von Neumann entropy
We wish to compute the gradient ∇_ρS and the Hessian H_ρ of the von Neumann entropy at ρ, which are defined through the Taylor expansion

S(ρ + λξ) = S(ρ) + λ Tr[ξ ∇_ρS] + (λ²/2) Tr[ξ H_ρ(ξ)] + O(λ³),  (C1)

where λ ≪ 1 is a small perturbation parameter and ξ is a Hermitian matrix. For ease of notation, we drop the indices n and ℓ in this and the following section. Let us write

ρ = Σ_i κ_i Π_i,  (C2)

where κ_i are the density-matrix eigenvalues and Π_i = |ψ_i⟩⟨ψ_i| are the projectors onto the density-matrix eigenstates |ψ_i⟩. Given Eq. (C2), we write the von Neumann entropy as

S(ρ + λξ) = −Σ_i k_i ln k_i,  (C3)

where k_i are the eigenvalues of the shifted density matrix ρ + λξ, obtained from standard non-degenerate perturbation theory:

k_i = κ_i + λ ξ̃_ii + λ² Σ_{j≠i} ξ̃_ij ξ̃_ji / (κ_i − κ_j) + O(λ³),  (C4)

where we defined ξ̃_ij := ⟨ψ_i|ξ|ψ_j⟩. The Taylor expansion of the von Neumann entropy around λ = 0 reads

S(ρ + λξ) = S(ρ) + λ ∂_λS|_{λ=0} + (λ²/2) ∂²_λS|_{λ=0} + O(λ³).  (C5)

By using Eqs. (C1) and (C4) in Eq. (C5), we find

Tr[ξ ∇_ρS] = −Σ_i ξ̃_ii (ln κ_i + 1),
Tr[ξ H_ρ(ξ)] = −Σ_i ξ̃_ii²/κ_i − Σ_{i≠j} ξ̃_ij ξ̃_ji (ln κ_i − ln κ_j)/(κ_i − κ_j).  (C6)

If ξ is traceless, as imposed in the minimization scheme of LITE by the constraints (19) [Tr(ξ) = Tr(Pχ) = 0], the identity term, when inserted in Eq. (C6), vanishes. Moreover, we used that

Σ_{i≠j} ξ̃_ij ξ̃_ji / (κ_i − κ_j) = 0,

which follows from the antisymmetry of the summand under the exchange i ↔ j. We can further simplify Eq. (C6) by writing

Σ_{i≠j} ξ̃_ij ξ̃_ji (ln κ_i − ln κ_j)/(κ_i − κ_j) = Σ_{i≠j} ξ̃_ij ξ̃_ji (2/δ_ij) arctanh(δ_ij/Δ_ij),  (C10)

where δ_ij = κ_i − κ_j, Δ_ij = κ_i + κ_j, and in the second sum on the right-hand side we have exchanged i ↔ j, using ln κ_i − ln κ_j = 2 arctanh(δ_ij/Δ_ij). We now want to use Eq. (C10) in Eq. (C6). Notably, lim_{x→0} arctanh(x/y)/x = 1/y allows further simplifications: the diagonal term −Σ_i ξ̃_ii²/κ_i is recovered as the δ_ij → 0 limit of the summand, with 2/Δ_ii = 1/κ_i. By defining

A_ij := (2/δ_ij) arctanh(δ_ij/Δ_ij) for i ≠ j,  A_ii := 1/κ_i,

we arrive at

Tr[ξ H_ρ(ξ)] = −Σ_{i,j} ξ̃_ij ξ̃_ji A_ij.

The second-order term of the expansion can be rewritten as

Tr[ξ H_ρ(ξ)] = −Tr[ξ̃ (A ⋆ ξ̃)],

where ⋆ represents elementwise multiplication. Given that ξ̃ = Σ_{i,j} Π_i ξ Π_j := U_ρ† ξ U_ρ, where U_ρ is the unitary matrix having as columns the eigenvectors of ρ (i.e., |ψ_i⟩), and by using the cyclic property of the trace, we obtain

Tr[ξ H_ρ(ξ)] = −Tr{ξ U_ρ [A ⋆ (U_ρ† ξ U_ρ)] U_ρ†},

where

H_ρ(ξ) = −U_ρ [A ⋆ (U_ρ† ξ U_ρ)] U_ρ†

is the Hessian of the von Neumann entropy at ρ, applied on ξ.
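The final expressions can be checked numerically. The sketch below, assuming a generic well-conditioned ρ and a traceless Hermitian ξ (both random stand-ins, not taken from the paper), compares the gradient −(ln ρ + 1) and the elementwise Hessian form against finite differences of the exact entropy.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

# Random well-conditioned density matrix rho (positive, unit trace).
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T + np.eye(d)
rho /= np.trace(rho).real

# Random traceless Hermitian perturbation xi, as imposed by the constraints (19).
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
xi = (B + B.conj().T) / 2
xi -= np.trace(xi).real / d * np.eye(d)
xi /= np.linalg.norm(xi)

def entropy(m):
    """Exact von Neumann entropy from the eigenvalues of m."""
    k = np.linalg.eigvalsh(m)
    return float(-np.sum(k * np.log(k)))

# Gradient grad S = -(log rho + 1); the identity part drops for traceless xi.
kappa, U = np.linalg.eigh(rho)
grad = -(U @ np.diag(np.log(kappa) + 1.0) @ U.conj().T)
first = np.trace(grad @ xi).real

# Hessian quadratic form in the eigenbasis of rho:
# Tr[xi H(xi)] = -sum_{ij} |xi_ij|^2 A_ij, with the arctanh off-diagonal weights
# and the diagonal limit A_ii = 1/kappa_i.
xit = U.conj().T @ xi @ U
ki, kj = np.meshgrid(kappa, kappa, indexing="ij")
with np.errstate(divide="ignore", invalid="ignore"):
    Amat = (2.0 / (ki - kj)) * np.arctanh((ki - kj) / (ki + kj))
np.fill_diagonal(Amat, 1.0 / kappa)
quad = float(-np.sum(np.abs(xit) ** 2 * Amat))

# Finite-difference check of the Taylor expansion S(rho + l*xi).
l = 1e-4
Sp, Sm, S0 = entropy(rho + l * xi), entropy(rho - l * xi), entropy(rho)
print(abs((Sp - Sm) / (2 * l) - first))       # first-order term, close to zero
print(abs((Sp + Sm - 2 * S0) / l**2 - quad))  # second-order term, close to zero
```

The elementwise (Hadamard) form is convenient in practice because the Hessian never has to be stored as a d²×d² superoperator.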
FIG. 16. Schematic of the effective system used in the numerical simulations with asymptotically time-invariant initial states and r = 1. The color intensity of the information-lattice points quantifies the local information located on a given site. (a) The effective system at t = 0 comprises a central region (blue area) deviating from the infinite-temperature state and containing local information (as indicated by the green dots). On each boundary, there are ℓ_init + 1 infinite-temperature physical sites (red areas). If i^{ℓ_init}_n < q_{ℓ*}, we set ℓ* = ℓ_init (otherwise, we need to extend the effective system to higher levels, see Fig. 17). (b) Before performing a time-evolution step, we enlarge the effective system by adding r infinite-temperature physical sites at each boundary. (c) We perform a time-evolution step of length δt by numerically integrating the subsystem von Neumann equations [Eq. (8) in the main text]. We then check whether after the time step some physical sites that were in the red areas deviate from the infinite-temperature state. We remove unnecessary physical sites at the boundaries of the effective system in order to always keep ℓ* + 1 infinite-temperature physical sites at the boundaries.
as domain-wall states. Furthermore, in scenarios involving translation-invariant Hamiltonians, the method can be efficiently used to perform time evolution starting from translation-invariant initial states, as explored in Ref. 7 for diffusive systems. Nonetheless, the focus of this work lies in developing a comprehensive framework for capturing the time evolution of generic Hamiltonians, including non-translation-invariant and disordered ones. Consequently, it is convenient to initialize the dynamics in states that commute with the Hamiltonian asymptotically in space. As, by construction, the state of the asymptotic region does not change in time, we refer to these states as asymptotically time invariant. This feature allows us to simulate infinitely extended systems, as detailed in the following. Thanks to the absence of boundaries, one can obtain a robust extrapolation of the time-dependent behavior of diffusion coefficients, since the propagation of conserved quantities over time and space remains unaffected by boundary reflections.
Out of the family of asymptotically time-invariant states, we consider those given in Eq. (40) in the main text, reproduced here for completeness as Eq. (D1). Since infinite-temperature density matrices are invariant under time evolution [as can easily be seen from Eq. (8) in the main text], by initializing the system in the state (D1) we only need to solve the subsystem time evolution for those subsystems in the central region whose state deviates from the infinite-temperature density matrix. This defines an effective system on which time evolution is performed. Since information spreads over time, the effective region needs to be enlarged during the time evolution.
To avoid any boundary-induced distortions in the dynamics, we perform the following procedure. We initialize the effective system at t = 0 as composed of 3(ℓ_init + 1) physical sites, where ℓ_init is the correlation scale of the initial state ρ^{ℓ_init}_{n,init} deviating from the infinite-temperature state. As depicted in Fig. 16(a), while the central region (blue area) is in the state ρ^{ℓ_init}_{n,init} ≠ 1_{d^{ℓ_init+1}}/d^{ℓ_init+1} and contains local information (green filled circles), the boundary states (red areas) are at infinite temperature and do not contain local information (open circles). The states in the gray areas are obtained by recovery maps of lower-level density matrices. Given the presence of the central region deviating from infinite temperature, they are, in general, not thermal. Let us assume that the local information on sites (n, ℓ_init) is smaller than the threshold value q_{ℓ*} (see App. D 2), such that we can perform time evolution on level ℓ* = ℓ_init. This assumption makes the following discussion valid for any level ℓ* on which time evolution is performed and for any extent of the effective system. Figure 16(b) shows that, before performing the time-evolution step, we enlarge the effective system by r infinite-temperature physical sites at each boundary. This ensures that we can recover all the necessary density matrices on level ℓ* + r and perform time evolution on all subsystems on level ℓ* that, after the time-evolution step, can deviate from the infinite-temperature state. Notice that the additional higher-level density matrices up to level ℓ* are infinite-temperature density matrices by construction. We perform the time-evolution step at ℓ* as prescribed by Eq. (8) in the main text. We then check whether at the later time δt some physical sites that were in the red areas at t = 0 are in a state that deviates from the infinite-temperature one, as sketched in Fig. 16(c) by the broadening of the blue area. If so, we need to enlarge the effective system. Note that at most r physical sites may deviate from the infinite-temperature state after a time-evolution step. To prevent any boundary-induced distortion, we keep ℓ* + 1 physical sites at infinite temperature at each end of the effective system. Therefore, if needed, we remove excess infinite-temperature physical sites at the boundaries. Numerically, we determine whether a physical-site state is at infinite temperature by computing the norm of the difference between the actual state and 1_d/d and checking whether it is smaller than p. We set p = 10⁻¹².
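The numerical infinite-temperature test described above can be sketched as follows. The Frobenius norm is an assumption, since the text does not specify which norm is used; the threshold p = 10⁻¹² is taken from the text.

```python
import numpy as np

def is_infinite_temperature(rho_site, p=1e-12):
    """Return True if a single-site density matrix is (numerically) 1/d.

    Compares ||rho - 1/d|| against the threshold p = 1e-12 from the text;
    the choice of the Frobenius norm here is an assumption.
    """
    d = rho_site.shape[0]
    return bool(np.linalg.norm(rho_site - np.eye(d) / d) < p)

# A maximally mixed qubit passes the check; a fully polarized one does not.
print(is_infinite_temperature(np.eye(2) / 2))        # True
print(is_infinite_temperature(np.diag([1.0, 0.0])))  # False
```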

Runge-Kutta integration scheme
We time evolve each subsystem density matrix ρ^{ℓ*}_n by integrating the subsystem's equation of motion [see Eq. (8) in the main text] by means of Runge-Kutta integration methods. Here the b_i are the Runge-Kutta parameters, the κ^{ℓ*,i}_n are the derivative functions given by Eq. (A4) evaluated for different density matrices [69], and K is the order of the truncation error. We use an adaptive fifth-order Runge-Kutta method with an embedded fourth-order method used to estimate the time-step error [70]. Such adaptive methods are characterized by a dynamic time step set by the time-step error ϵ; we require the time-step error to satisfy ϵ < 10⁻⁸.
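Since the subsystem equation of motion [Eq. (8)] is not reproduced in this excerpt, the following sketch illustrates the integration scheme on the plain von Neumann equation dρ/dt = −i[H, ρ] for a single subsystem, with a random Hamiltonian and initial state as hypothetical stand-ins. It uses SciPy's adaptive RK45 solver, a fifth-order method with an embedded fourth-order error estimate controlling the dynamic time step, as described above.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
d = 4

# Random Hermitian Hamiltonian and a pure initial density matrix (stand-ins).
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (A + A.conj().T) / 2
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
rho0 = np.outer(psi, psi.conj())

def rhs(t, y):
    """Von Neumann equation d(rho)/dt = -i [H, rho], flattened for the solver."""
    rho = y.reshape(d, d)
    return (-1j * (H @ rho - rho @ H)).ravel()

# Adaptive fifth-order RK with embedded fourth-order error estimate.
sol = solve_ivp(rhs, (0.0, 5.0), rho0.ravel(), method="RK45",
                rtol=1e-8, atol=1e-10)
rho_t = sol.y[:, -1].reshape(d, d)

# Runge-Kutta steps preserve linear constants of motion such as the trace
# and the energy (cf. App. A), up to round-off:
print(abs(np.trace(rho_t).real - 1.0))
print(abs(np.trace(H @ rho_t).real - np.trace(H @ rho0).real))
```

Since every Runge-Kutta stage is a linear combination of evaluations of the right-hand side, linear first integrals such as Tr ρ and Tr(Hρ) are conserved exactly by the scheme, which is the property that makes this family of integrators attractive here.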
Runge-Kutta methods, although particularly well suited for the LITE approach since they preserve the local constants of motion (see App. A), can fail in the presence of small eigenvalues of ρ^{ℓ*}_n(t). In this case, the matrices needed to compute the derivative functions κ^{ℓ*,i}_n can have negative eigenvalues. Since the subsystem time evolution (8) is only defined for positive semidefinite matrices, this can cause the Runge-Kutta integration step to fail. The smallest eigenvalues typically increase when there is a flow of information from small to large scales. Therefore, this failure is more likely to occur early in the time evolution or when the flow of information from small to large scales is suppressed (as might happen in disordered systems).
In the simulations presented in this article for the asymptotically time-invariant initial states in Eqs. (D1) and (D2), small eigenvalues can be remedied by shifting the density matrices to be time evolved, ρ^{ℓ*}_n, by the infinite-temperature state multiplied by a factor α [Eq. (D4)].
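Equation (D4) itself is not reproduced in this excerpt. One plausible normalized form of such a shift, stated here purely as an assumption, is the convex mixture of ρ with the infinite-temperature state, which preserves the trace and lifts the smallest eigenvalue above α/[d(1 + α)]:

```python
import numpy as np

def shift_toward_infinite_temperature(rho, alpha):
    """Hypothetical form of the shift in Eq. (D4): mix rho with 1/d.

    This normalized convex mixture is an assumption; the paper's exact
    expression is not reproduced in this excerpt.
    """
    d = rho.shape[0]
    return (rho + alpha * np.eye(d) / d) / (1.0 + alpha)

# A nearly singular density matrix gains a safe spectral floor.
rho = np.diag([1.0 - 1e-15, 1e-15])
shifted = shift_toward_infinite_temperature(rho, alpha=1e-6)
print(abs(np.trace(shifted) - 1.0) < 1e-12)      # trace preserved: True
print(np.linalg.eigvalsh(shifted).min() > 1e-8)  # smallest eigenvalue lifted: True
```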

FIG. 1. (a) Schematic of the subsystem C^ℓ_n extending over ℓ + 1 physical sites and centered around n. (b) Schematic of the subsystem lattice. The vertical axis labels the different levels ℓ, while the horizontal axis labels the subsystem centers n: n takes integer values on even levels ℓ and half-integer values on odd levels. The red and blue colored regions depict the triangles associated with the subsystems C^4_m and C^4_{m+1}, respectively, where m ∈ Z labels the physical-site index. Such triangles highlight that a subsystem C^ℓ_n contains the lower-level subsystems defined on subsets of the contiguous physical sites that C^ℓ_n includes. The two neighboring subsystems C^4_m and C^4_{m+1}

FIG. 4. Four different snapshots in time of the information lattice for a system of 14 spins evolved with random unitary matrices U = U_odd U_even, where U_odd (U_even) acts on all pairs of spins with indices m and m + 1 with m odd (even). A single application of U defines a cycle. The amount of local information on a site is indicated by the color intensity. The system is initialized in the product state ⊗_n |↑⟩⟨↑|, for which local information is located only on the physical sites at ℓ = 0 [see panel (a)]. As U is applied, local information builds up on increasing scales [see panels (b), (c), (d)]. Notice the presence of finite-size effects at the boundaries, which only experience U_odd.

FIG. 8. Schematic of the von Neumann information currents of the subsystem C^4_m for r = 2. (a) Schematic of the subsystems involved in the von Neumann information currents J^r_L(ρ^6_{m−1}) and J^r_R(ρ^6_{m+1}). The regions of interest are marked by colors. (b) The currents out of the subsystem C^4_m flow into the blue and green regions, which belong to the subsystems C^6_{m−1} and C^6_{m+1}.

FIG. 10. Diffusion coefficient as a function of time, D(t), for different ℓ_min, at fixed ℓ_max = 9 and q_max = 0.5%. At short times, the system exhibits ballistic behavior, and D(t) increases with time. D(t) then saturates to a plateau for t ≳ 20. The oscillations of D(t) along the plateau are associated with algorithmic artifacts and shrink to zero as ℓ_min increases. The (average) plateau value of D(t) increases with ℓ_min and tends to an asymptotic value for ℓ_min → ∞ (see Fig. 11).

FIG. 11. Diffusion coefficient D averaged over the time window t ∈ [20, 100] as a function of 1/ℓ_min, for different ℓ_max [panel (a)] and q_max [panel (b)]. The dashed lines show a linear extrapolation of the asymptotic value of D for ℓ_min → ∞, obtained by considering the three data points at largest ℓ_min at fixed ℓ_max [panel (a)] and all the data points at fixed q_max [panel (b)]. This gives the rough estimate lim_{ℓ_min→∞} D ≈ 1.55.

FIG. 12. Colormap of the diffusion coefficient D averaged over the time window t ∈ [20, 100] for different ℓ_min and ℓ_max, at q_max = 0.5%. The colors remain almost constant along the horizontal axis, while they vary along the vertical axis. This implies that D depends only weakly on ℓ_max, while it depends strongly on ℓ_min. Moving both from left to right and from top to bottom, D increases and approaches its asymptotic value.
where ρ_{m,∞} = 1_d/d is the infinite-temperature single-site density matrix located on the physical site m, with Hilbert-space dimension d. In particular, we can choose [as introduced in Eq. (41) of the main text]