Quantum Kolmogorov-Sinai entropy and Pesin relation

We discuss a quantum Kolmogorov-Sinai entropy defined as the entropy production per unit time resulting from coupling the system to a weak, auxiliary bath. The expressions we obtain are fully quantum, but require that the system is such that there is a separation between the Ehrenfest and the correlation timescales. We show that they reduce to the classical definition in the semiclassical limit, one instance where this separation holds. We show a quantum (Pesin) relation between this entropy and the sum of positive eigenvalues of a matrix describing phase-space expansion. Generalizations to the case where entropy grows sublinearly with time are possible.


II. CLASSICAL PESIN RELATION FOR THE SIMPLE-MINDED
One way to introduce the Kolmogorov-Sinai entropy is as the entropy production around a trajectory, x ≡ (q, p), of a system coupled to a weak auxiliary noise h(t):

ẋ_α = {x_α, H} + h_α(t),   (1)

with H the system's Hamiltonian, {·, ·} the usual Poisson brackets, and the index α running over the components of a 2N-vector in phase-space.
Consider a single classical trajectory. The Kolmogorov-Sinai entropies h_KS^{(q)} (one for each q) for the perturbed trajectory are defined as [15]

h_KS^{(q)} ≡ lim_{t→∞} 1/(t(1−q)) ln ∫ D[h] P^q[h],   (2)

where the histories are all the trajectories starting from the same point x(t_0) and ending in a point x_f[h] that depends on the noise realization. This definition may eventually be supplemented with an average over initial conditions. We shall be interested in the limit in which the random perturbation is very weak.
Equation (2) is not the most easily generalizable to the quantum case, so we shall rewrite it as follows. First, we replicate the trajectories a = 1, ..., q, with delta functions imposing the trajectory equations. Then, because these trajectories are causal, we may integrate away all the intermediate coordinates x_a except the last, to obtain the entropy in terms of the distribution of endpoints. What we have shown is that, because of causality, integrals over powers of probabilities of trajectories may be traded for integrals of powers of probabilities over the (noise-dependent) endpoints of the trajectories. In other words, we have gone from a description of 'entropy of trajectories' to one of 'production of the usual entropy at the end of the trajectory'.
Starting from initial configurations within a small ball of radius a, and considering different noise realizations, we obtain at some final time t a 'probability cloud'. Its distribution is a function of the size of the initial set, the noise amplitude, and the stability properties of the trajectory. Denoting the probability distribution at time t as P(x, t), we have the Rényi entropy of order q

S_q ≡ 1/(1−q) ln ∫ dy P^q(y, t),   (5)

and the corresponding Kolmogorov-Sinai entropy in terms of its exponential growth rate,

h_KS^{(q)} ≡ lim [S_q(t) − S_q(t_o)]/(t − t_o),   (6)

where t_o is some short reference time. The advantage of this change of point of view is that it makes it easy to generalize the entropy production caused by weak noise to the quantum case.
Let us now proceed with the classical example. Consider the situation of Figure (1): in the absence of noise, in the directions with exponential expansion between nearby trajectories, the displacement grows as ∆_+ ∼ a e^{λ_α t}, with λ_α the positive Lyapunov exponents of the unperturbed system. In a Hamiltonian system, there are exactly as many contracting modes, ∆_− ∼ a e^{−λ_α t}, which compensate the effect of the expanding ones, so that the volume of the set is unchanged. This is just Liouville's theorem, and is a manifestation of the fact that without some form of coarse-graining there is no entropy production in Hamiltonian dynamics. Let us now consider the effect of a noise of amplitude ε. Roughly, expanding and contracting directions are described by ∆̇_± ∼ ±λ_α ∆_± + h(t), and the time evolution then gives ∆_±(t) = ∫_0^t e^{±λ_α(t−t')} h(t') dt'. One can see that the integral which determines ∆_+ is dominated by initial times, t' ∼ 0, and the displacement induced by the noise is amplified after a time t as ∆_+ ∼ ε e^{λ_α t}. The contracting modes, on the other hand, are compensated by the action of the noise at late times t' ∼ t, which gives a contribution ∆_− ∼ ε/λ_α. Thus, if there are modes expanding exponentially in time, and we are interested in the phase-space volume V = ∆_+ ∆_− of the 'probability cloud' at the end of the process, we have, up to logarithmic accuracy in time, ln V ∼ t Σ_{λ_i>0} λ_i + O(ln ε). This is Pesin's relation. Let us see this in more detail.
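The heuristic above can be illustrated numerically. The sketch below (not part of the derivation; the exponent λ, the noise amplitude ε, and the simplification of a constant kick h = ε are illustrative choices) evaluates the response integrals ∆_±(t) = ∫_0^t e^{±λ(t−t')} h dt' in closed form, and checks that the expanding mode grows as e^{λt} while the contracting mode saturates at ε/λ, so that ln V grows with slope λ:

```python
import numpy as np

lam, eps = 1.5, 1e-3          # illustrative Lyapunov exponent and noise amplitude
t = np.linspace(0.01, 10, 1000)

# Response of expanding / contracting modes to a constant kick of size eps:
# Delta_pm(t) = int_0^t exp(+-lam (t - t')) * eps dt'
delta_plus  = eps * (np.exp(lam * t) - 1.0) / lam   # dominated by early times t' ~ 0
delta_minus = eps * (1.0 - np.exp(-lam * t)) / lam  # saturates at eps / lam (late times)

# ln V = ln(Delta_+ Delta_-) grows as lam * t, up to O(ln eps) corrections
lnV = np.log(delta_plus * delta_minus)
slope = (lnV[-1] - lnV[-100]) / (t[-1] - t[-100])
print(slope)
```

The printed slope approaches λ, and delta_minus[-1] approaches ε/λ, as claimed in the text.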

i) 'Democratic' dimensions
We consider a 2N-dimensional phase-space (N coordinates and N momenta). We normalize it with the canonical transformation x̃ = (bq, p/b), with b a parameter with the appropriate dimensions (for simplicity, from now on we drop the tilde). Hereafter we take b = √(mω), with m and ω the mass and frequency of an auxiliary oscillator Hamiltonian that will determine the initial state (see the semiclassical derivation in Sec. IV below for more details). Thus, both position and momentum have units of (action)^{1/2}.

ii) White Noise limit
The distribution of the noise fields h(t) is taken as a Gaussian of amplitude ε and correlation time τ, where τ is a typical time-scale of the noise and ε has units of (energy/time)^{1/2}. That is, we can think of the random field as drawn from a Normal distribution N(0, ε²) independently at each time step τ, or as a Gaussian noise of correlation time τ. Assuming that this timescale is shorter than any other relevant one in the system, we have the 'white noise' limit, which is enough to yield the classical Pesin relation, even if the timescale τ does not disappear as a parameter of the problem. We shall assume the shortness of τ throughout the paper, also for the quantum problem. Note, however, that when doubts arise one has to restore the noise correlations. In Sec. VII we discuss how the derivation can be generalized to account for a correlated noise.
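The white-noise limit described here can be checked with a short numerical sketch (an illustration under the stated discretization, not a statement about any particular system): drawing an independent value from N(0, ε²) on each interval of length τ gives a time-integrated noise whose variance matches the delta-correlated form ⟨h(t)h(t')⟩ ∼ ε²τ δ(t − t'):

```python
import numpy as np

rng = np.random.default_rng(3)
eps, tau, T = 0.5, 0.01, 10.0
n = int(T / tau)                          # number of independent noise intervals in [0, T]

# An independent draw from N(0, eps^2) held constant over each interval tau.
# Then int_0^T h(t) dt has variance n * (eps * tau)^2 = eps^2 * tau * T,
# consistent with <h(t) h(t')> ~ eps^2 * tau * delta(t - t') in this normalization.
h = rng.normal(0.0, eps, size=(5000, n))  # 5000 noise realizations
integrals = h.sum(axis=1) * tau           # int_0^T h dt for each realization
var = integrals.var()
print(var, eps**2 * tau * T)
```

The empirical variance of the integral agrees with ε²τT to within sampling error, so the piecewise-constant noise behaves as white noise on timescales longer than τ.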

B. Entropy of trajectories' end points
Deviations from a given trajectory live in tangent space, and are governed by the second derivative matrix {x_δ, {x_α, H}}. The effect of the noise may be computed by considering the deviation y_α(t) it produces from a trajectory. The tangent space dynamics, to first order in the noise amplitude, is linear, with solution

y_α(t) = ∫_0^t B_{αβ}(t, t') h_β(t') dt',

where B(t, t') is the tangent-space propagator. The y_α(t) are Gaussian variables with zero average, since they are linear in the h_α. Let us now compute the distribution P(y, t) of the last point, at the end of the process. We start by writing the distribution as a path integral over realizations of the noise, P(y, t) = ∫ D[h] P_ext[h] Π_α δ(y_α − ∫_0^t B_{αβ}(t, t') h_β(t') dt'). Then we Fourier transform the delta functions and solve the Gaussian integral (after completing the square) over the fields h. The integral over the Fourier variable ŷ then gives

P(y, t) ∝ exp[−(1/2ε²τ) y^T A^{−1}(t) y],   A(t) ≡ ∫_0^t dt' B(t, t') B^T(t, t').   (11)

Eq. (11) thus describes how the weak noise spreads a single trajectory into a probability cloud whose distribution at time t is Gaussian, with a variance determined by the 2N × 2N matrix A, the main object here. We note that the matrix B(t, t') is symplectic; it is the instantaneous contribution from time t' to the expansion at time t, while A(t) (which is generally not symplectic) is the total expansion. Taking the final expression for P(y, t), we can calculate the Rényi entropy and its exponential growth rate, according to Eqs. (5) and (6) respectively:

S_q(t) = (1/2) ln det[ε²τ A(t)] + M,   h_KS^{(q)} = lim (1/2(t − t_o)) ln [det A(t)/det A(t_o)],   (12)

where M is an irrelevant constant, and t_o is a short reference time. For the moment we have not averaged over initial conditions.
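The role of the matrix A(t) can be made concrete in the simplest case of a constant, diagonal stability matrix with one expanding and one contracting mode (a minimal sketch; the values of λ, t, and the diagonal form of the dynamics are illustrative assumptions). The entropy growth rate (1/2t) ln det A(t) then converges to the sum of positive Lyapunov exponents, here simply λ:

```python
import numpy as np

lam = 1.2
M = np.diag([lam, -lam])     # constant stability matrix: one expanding, one contracting mode
t, n = 20.0, 20000
ts = np.linspace(0.0, t, n)
dt = ts[1] - ts[0]

# B(t,t') = exp(M (t - t')); for a diagonal M the matrix exponential is elementwise
def B(t, tp):
    return np.diag(np.exp(np.diag(M) * (t - tp)))

# Covariance of the endpoint cloud: A(t) = int_0^t B(t,t') B(t,t')^T dt'
A = np.zeros((2, 2))
for tp in ts:
    Btp = B(t, tp)
    A += Btp @ Btp.T * dt

# Renyi-entropy growth rate (1/2t) ln det A; converges (logarithmically slowly) to lam
rate = 0.5 * np.log(np.linalg.det(A)) / t
print(rate)
```

The expanding mode contributes e^{2λt}/(2λ) to det A (early times t' ∼ 0 dominate), while the contracting mode saturates at 1/(2λ) (late times t' ∼ t), reproducing the mechanism described above.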

C. Pesin relation
In the case in which the second derivative matrix {x_δ, {x_α, H}} is constant in time, one can easily confirm the claim above: if there is an exponential expansion in a mode of B, then the initial times t' ∼ 0 dominate the time integral and the expanding directions are of order e^{λ_α t}. Contracting modes of B are actually stopped from contracting completely by the action of the noise at late times t' ∼ t, which then gives a contribution ε/λ_β. The general case is similar but slightly less trivial, as the tangent space vectors rotate in time. A rigorous approach is obtained by introducing covariant Lyapunov vectors, as we explain now (a different derivation is also given in Appendix A 1).
The entropy production in Eq. (12) depends on the determinant of the matrix A. First, we use the fact that the propagator factorizes, B(t, t') = R(t, 0)R^{−1}(t', 0), where R(t, 0) is the unperturbed tangent-space evolution. Then, symplecticity of R, namely R^{−1} = Ω^{−1} R^T Ω, allows us to pull the overall factors R(t, 0) out of the time integral in A(t). As a final step, we use the notion of covariant Lyapunov vectors [17-19]: for any time t_1 we have 2N vectors [20] γ_k(t_1) such that ||R(t' + t_1, t_1)γ_k(t_1)|| → exp(λ_k t') as t' → ∞, with λ_k the Lyapunov exponents. Let us define the 2N × 2N matrix Γ(0) whose columns are the covariant Lyapunov vectors at time t = 0, and expand A(t) in this basis. At times t' large enough within the integral, the integrand is dominated by the expanding directions, leading to (1/2t) ln |A(t)| → Σ_{λ_α>0} λ_α as t → ∞.
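The symplecticity property R^{−1} = Ω^{−1} R^T Ω used in this step can be verified numerically for the flow generated by any quadratic Hamiltonian (a sketch with an arbitrary symmetric matrix S chosen for illustration; the crude product formula for the matrix exponential is only for self-containedness):

```python
import numpy as np

# Symplectic form Omega, and a quadratic Hamiltonian H = x^T S x / 2 with S symmetric
Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])
S = np.array([[2.0, 0.3], [0.3, 0.5]])
M = Omega @ S                       # linearized Hamiltonian flow: xdot = M x

# e^{M t} approximated by (I + M t/n)^n with large n
def expm(M, t, n=1 << 26):
    return np.linalg.matrix_power(np.eye(2) + M * (t / n), n)

R = expm(M, 2.0)

# Symplecticity: R^T Omega R = Omega, equivalently R^{-1} = Omega^{-1} R^T Omega
print(np.allclose(R.T @ Omega @ R, Omega, atol=1e-4))
Rinv = np.linalg.inv(Omega) @ R.T @ Omega
print(np.allclose(Rinv, np.linalg.inv(R), atol=1e-4))
```

In particular, det R = 1 follows, which is why the factors R(t, 0) pulled out of the integral do not change the determinant of A.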

D. Anticipating the quantum case
We now go back to the classical result in Eq. (12). The left-hand side of this equation is valid even in the absence of exponential divergences, although in that case the total entropy production does not scale linearly with time, and the KS entropy, defined as a logarithmic entropy production per unit time, goes to zero. The same is true for the quantum formula derived below. Two additional comments will be important for the quantum case: a) Coupling to the noise: Is it necessary to couple noise to all variables? The question is relevant classically because we usually couple noise only to the coordinates, so it appears in only half of Hamilton's equations. It will be much more serious quantum mechanically, where there is considerably more freedom. Consider the case when the noise is coupled to a single variable, say x_1. We may redo the calculation above to obtain a new matrix Â, built from the single column of B(t, t') associated with x_1. If we consider only the initial time t' = 0, the corresponding matrix has rank one, and we obtain only one expansion direction. However, as we integrate over many t', provided the system is such that the eigenbasis rotates with the dynamics, the rank of the matrix increases as time passes. This applies to the 'recent' times as well, which also have to be integrated over so that they contribute fully to the contracting directions. We hence conclude that if, at the times t for which the KS entropy is computed, the tangent space has rotated ergodically, then it is not important to how many variables we have coupled noise. We shall see below that quantum mechanically the situation is more subtle, and also more unavoidable. Let us note also that applying noise to a local (in space) operator, and seeing how entropy builds up, is an interesting question in itself.
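The rank-growth mechanism can be seen in the simplest rotating example (a sketch; the pure-rotation propagator is an illustrative stand-in for 'an eigenbasis that rotates with the dynamics'). Noise coupled only to the first coordinate produces a matrix Â that is nearly rank one at short times, but becomes full rank once the rotation has spread the noise over all directions:

```python
import numpy as np

# Pure rotation flow: B(t,t') rotates by angle (t - t')
def B(t, tp):
    a = t - tp
    return np.array([[np.cos(a), np.sin(a)], [-np.sin(a), np.cos(a)]])

e1 = np.array([[1.0], [0.0]])      # noise coupled only to the first coordinate

def A_hat(t, n=2000):
    ts = np.linspace(0.0, t, n)
    dt = ts[1] - ts[0]
    A = np.zeros((2, 2))
    for tp in ts:
        v = B(t, tp) @ e1          # direction into which the single noise is pushed
        A += v @ v.T * dt
    return A

short = np.linalg.eigvalsh(A_hat(0.01))   # nearly rank one: one eigenvalue ~ 0
long_ = np.linalg.eigvalsh(A_hat(np.pi))  # rotation has spread the noise: full rank
print(short, long_)
```

At t = 0.01 the smallest eigenvalue is essentially zero, while at t = π both eigenvalues are of order one (here π/2 each), illustrating that after the tangent space has rotated, a single noise suffices.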

b) Initial distribution width.
Suppose the initial condition y_0, rather than being a single configuration, is spread over an isotropic Gaussian of width a in phase-space, as in Figure 1. A simple modification of the steps leading to Eq. (11), namely replacing ε²τ A(t) → a²·1 + ε²τ A(t), accounts for the initial width. In the absence of noise, ε²τ A(t) = 0 and the right-hand side is a constant, because B is symplectic and has determinant one. Note that a², being the volume of a (p, q) cell, has dimensions of action. We obtain the correct time dependence only if the expansion due to noise is large compared with the initial volume: (ε²τ/a²) e^{λt} ≫ 1. This gives us a warning of trouble ahead for the quantum case, where an initial packet width is unavoidable, and we are not allowed to take times to infinity.

III. QUANTUM KOLMOGOROV-SINAI ENTROPY: A DEFINITION
Motivated by the classical case, we consider a quantum evolution coupling the system to a weak, auxiliary bath, which can be introduced from first principles as a coupling to an infinite set of oscillators, exactly the way one derives the classical Langevin equation [21]. We note that this line of reasoning was applied for understanding entropy production and decoherence in open quantum systems [2, 4,6].
If the temperature of the bath is infinite, friction may be neglected; otherwise, forward and backward propagations involve different fluctuating fields, as may be seen from the Schwinger-Keldysh formalism [22, 23]. In the infinite-temperature limit the bath becomes effectively classical. We can thus consider, for infinite auxiliary bath temperature, the Hamiltonian

H(x, t) = H_0(x) + Σ_α h_α(t) x_α,

with the time-dependent random field h(t), whose probability distribution P_ext[h(t)] is given in Eq. (7). For simplicity, as in the classical case, we shall consider that the correlation time τ of this bath is much smaller than the relevant timescales of the problem at hand (see the discussion above concerning the white noise limit).
We are interested first in the Rényi entropy generated in the process, starting from an initial density matrix ρ 0 .
For the moment we restrict ourselves to q = 2; in Appendix B 2 we generalize to all integer q. The expression for S_2(t) involves a path integral over two different realizations of the noise, h_1 and h_2:

Tr ρ²(t) = ∫ D[h_1] D[h_2] P_ext[h_1] P_ext[h_2] Tr [U_{h_1} ρ_0 U†_{h_1} U_{h_2} ρ_0 U†_{h_2}],   (18)

where we work in the interaction picture, with U_h the evolution operator in the presence of the field h. Our approach to evaluating Eq. (18) in the weak noise limit is as follows: we first approximate ln Tr ρ²(t) for a given random field, up to second order in h_{1,2}, and then average over the noise. The same procedure can be carried out for the analogous classical problem, which then reproduces the classical result in Eq. (17); this is shown explicitly in Appendix A 2. Let us then define x_α^c(t) as the phase-space operators evolved with the unperturbed Hamiltonian, and perform time-dependent perturbation theory around h_1, h_2 = 0, up to second order. The first order terms vanish, whereas the second order terms depend only on the difference η = h_1 − h_2 (see the derivation in Appendix B 1). We are left with solving Eq. (20) as a Gaussian integral over the fields η_α(t), recalling that, in the limit of a small bath timescale τ, ⟨η_α(t) η_β(t')⟩ = 2ε²τ δ_{αβ} δ(t − t'). Next, we obtain this solution by taking a simplified initial state ρ_0 and employing the Matrix Determinant Lemma.
For simplicity we work hereafter with a Hilbert space represented by a real, orthogonal basis {|μ⟩}. Let us consider the case where the initial density matrix is given by a pure state, ρ_0 = |φ_0⟩⟨φ_0|. We shall generalize in Sec. VI to any initial condition. Developing the trace in the exponent of Eq. (20) in this basis, the quadratic form in η involves the matrix elements ⟨φ_0|x_α^c(t')|μ⟩, where we recall that the wavefunctions |μ⟩ (and the noise) are real, and the subscripts Re and Im refer, respectively, to the real and imaginary parts of a complex number, e.g., ⟨φ_0|x_α^c(t')|μ⟩_Re ≡ Re(⟨φ_0|x_α^c(t')|μ⟩). We now simplify the solution of the Gaussian integral in Eq. (23) by using the Matrix Determinant Lemma: for an invertible matrix K of order k × k, and matrices U, V of order k × m, we have

det(K + U V^T) = det(K) det(I_m + V^T K^{−1} U),   (24)

where I_m is the m × m identity matrix. If we now discretize time, 0 = t_1 < t_2 < · · · < t_T = t, we can rewrite the path integral as an integral over a vector η_α(t_n), where we define

K^{αβ}_{t_n t_m} ≡ δ^{αβ}_{t_n t_m} dt,   U^{α;μ}_{t_n} ≡ 2⟨φ_0|x_α^c(t_n)|μ⟩_Re dt,   V^{α;μ}_{t_n} ≡ 2⟨φ_0|x_α^c(t_n)|μ⟩_Im dt,   (25)

with K of order 2NT × 2NT, U and V of order 2NT × L, L the size of the Hilbert space, and N the number of degrees of freedom. The solution of the Gaussian integral involves the determinant of a 2NT × 2NT matrix which, by employing the Matrix Determinant Lemma, can be reduced to the determinant of a 2L × 2L matrix. Inserting back the definitions of K, U, and V, and taking the continuous-time limit, we arrive at the result quoted in Eq. (28) below. Note that the determinant is of a matrix that: i) has, at this stage, twice the dimension of the Hilbert space, 2L × 2L; ii) has positive eigenvalues, because it is a sum over tensor products. As a final step we may multiply the matrix inside the determinant from the left and right by a suitable Z and Z^{−1}, respectively, to obtain the symmetric form of Eq. (29), where we still assume that |φ_0⟩ and the Hilbert space basis are real.
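The Matrix Determinant Lemma used in this reduction is easily verified numerically (a self-contained sketch with arbitrary random matrices; the dimensions k and m are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
k, m = 6, 2
K = np.eye(k) + 0.1 * rng.standard_normal((k, k))   # invertible k x k matrix
U = rng.standard_normal((k, m))
V = rng.standard_normal((k, m))

# Matrix Determinant Lemma: det(K + U V^T) = det(K) det(I_m + V^T K^{-1} U)
lhs = np.linalg.det(K + U @ V.T)
rhs = np.linalg.det(K) * np.linalg.det(np.eye(m) + V.T @ np.linalg.solve(K, U))
print(lhs, rhs)
```

The point of the lemma here is exactly this trade of dimensions: a determinant of size k = 2NT (time discretization) is exchanged for one of size m (Hilbert-space blocks), which is what takes the expression from 2NT × 2NT down to 2L × 2L.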
Equation (28) with definition (29) is our first main result: a formula for the entropy production starting from |φ 0 φ 0 | due to a weak coupling to a bath. It is valid even when there is no exponential expansion, but if and when there is, we deduce a generalization to the classical Kolmogorov-Sinai entropy. In Appendix B 2 we repeat the calculation for a general Rényi entropy: the result is that these entropies differ when averaged over different starting situations ('multifractality', just as in the classical case), but the effect of quantum fluctuations alone is subdominant.
There is no multifractality of quantum origin for a single trajectory.
A general comment is in order here: the fluctuating field is coupled only to the phase-space operators x_α. However, as discussed at the end of Sec. II, this can easily be generalized to any coupling: if we consider a set of observables {O_α}, and the Hamiltonian H(x) = H_0(x) + Σ_α h_α(t) O_α, then the expression (29) remains the same, with the {x_α^c} replaced by {O_α^c}. So far this situation is very much like the classical case, but there is a catch: quantum mechanically we may in principle couple noises to operators that are not simple functions of the physical operators x_i, for example the projector onto any specific wavefunction, so one could have an exponential number of independent noises applied to the system. We choose not to do this here: the noise terms act on physical observables, and their number is of the same order as the number of degrees of freedom. This has the important consequence that some final expressions may be written in lower-dimensional spaces.
For the rest of the paper we further discuss our quantum entropy production Eq. (28), and the KS entropy that it defines in the presence of an exponential growth.

A. Additivity
The KS entropy is a rate of entropy production. As such, our definition in Eq. (28) must satisfy additivity. To verify this, let us consider two uncoupled systems in the Hilbert space H^(1) ⊗ H^(2) of size L². The matrix A in Eq. (29) is now built from matrix elements of operators of the form x_α^{(1)c} ⊗ 1 + 1 ⊗ x_α^{(2)c}, where the superindices denote the space to which each operator belongs. Consider now a basis for each space {|φ_0⟩, |μ⟩}, where the |μ⟩ are orthogonal to |φ_0⟩. In the tensor product of the two bases we have vectors of the form i) |φ_0⟩ ⊗ |φ_0⟩, ii) |φ_0⟩ ⊗ |μ⟩, iii) |μ⟩ ⊗ |φ_0⟩, iv) |μ⟩ ⊗ |μ'⟩. Within subspaces i and iv all terms vanish. The terms coming from the first system are non-zero only in subspace iii, and those from the second system only in subspace ii. The matrix inside the determinant of Eq. (29) therefore consists of two blocks, each of size (2L − 1) × (2L − 1), one for each system, with the identity elsewhere. The determinant is then the product of the determinants associated with each space, so that its logarithm is additive.
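The block structure behind additivity can be checked directly (a sketch with arbitrary positive blocks standing in for the contributions of the two uncoupled systems; the prefactor c stands for the combination ε²τ/ℏ and all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
G1 = rng.standard_normal((3, 3)); A1 = G1 @ G1.T   # positive block for system 1
G2 = rng.standard_normal((4, 4)); A2 = G2 @ G2.T   # positive block for system 2
c = 0.05                                           # stands in for eps^2 tau / hbar

# Uncoupled systems: the matrix inside the determinant is block diagonal
Mfull = np.block([[np.eye(3) + c * A1, np.zeros((3, 4))],
                  [np.zeros((4, 3)), np.eye(4) + c * A2]])

s_total = 0.5 * np.log(np.linalg.det(Mfull))
s1 = 0.5 * np.log(np.linalg.det(np.eye(3) + c * A1))
s2 = 0.5 * np.log(np.linalg.det(np.eye(4) + c * A2))
print(s_total, s1 + s2)
```

The log-determinant of the block-diagonal matrix is exactly the sum of the two block contributions, which is the additivity property required of an entropy production rate.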

B. Timescales
We will check below, in Sec. IV, that our definition of the KS entropy gives the correct result in the semiclassical limit. However, for a truly quantum system, Eq. (29) seems problematic: if we let ε → 0 at finite ℏ, we will not get a finite limit for h_KS plus logarithmic corrections in ε, but rather, to leading order, Tr ρ²(t) ∼ 1 − ε²τ Tr A. And yet, as we have seen when discussing additivity, the identity term within the determinant is essential to obtain a meaningful result, so we ought to understand its meaning. We have a dilemma: we need ε to be very small because we have assumed this in all our expansions, but we now see that in the limit ε → 0 the expression becomes meaningless. There is an apparently obvious way out: choose times that are long enough, so that the exponential expansion of A compensates for the smallness of ε. However, we expect there to be a limit to this, the Ehrenfest time, at which a minimal quantum packet will have expanded throughout phase-space. We argue that this is an inherent problem, shared also by the very definition of quantum Lyapunov exponents themselves.
Let us take the dimensional factors out of the phase-space coordinates (which we had normalized to carry dimensions of (action)^{1/2}), writing x = ℏ^{1/2} x̃, and define A_o as in Eq. (29) but with x̃, to get

ln Tr ρ²(t) = −(1/2) ln det[1 + (ε²τ/ℏ) A_o(t)].   (32)

The identity inside the determinant now compares the expansion with 'quantum cell' sizes: the comparison with the first term simply expresses the fact that expansion of a direction over a fraction of one quantum cell does not contribute to the determinant.
Let us assume that there is, at least during some time, a Lyapunov expansion with a largest exponent λ_1, meaning that A_o will have some elements growing as (1/λ_1) e^{λ_1 t}. This expansion may last up to the Ehrenfest/scrambling time t_E, which we may estimate as

t_E ∼ (1/λ_1) ln(k/ℏ),   (33)

where k depends on the system, and in particular on its number of degrees of freedom. (We note that its value is rarely specified in the literature; we shall not break with this tradition, but just specify that it has to be related to the unperturbed system, and does not have to involve ε²τ.) Looking at (32), we conclude that we may hope to have a meaningful expression, valid at times just below the Ehrenfest time, if (ε²τ/ℏ) e^{λ_1 t_E} ≫ 1. It is now clear that the semiclassical limit, taken before ε → 0, is one possibility to have a well-defined KS entropy. But there is more: there are cases where the constant k is expected to scale with the number of degrees of freedom, k ∼ N, while the ordinary two-point correlation time does not. For these kinds of systems, where there is a hierarchy between the Ehrenfest/scrambling and correlation times (the reader will find a discussion of this in Ref. [24]), we may take ℏ → 0 with ε²τ k large, and the Kolmogorov-Sinai entropy is well defined, and a non-zero value for it is possible.

IV. SEMICLASSICAL
Let us now evaluate our quantum definition of the KS entropy in the semiclassical limit. This can be done starting with the earlier expression Eq. (20), as we show in Appendix A 3; or working with Eq. (29), as we do now. We need to evaluate the expectation value ⟨φ_0|x_α^c(t')|μ⟩ in the semiclassical limit where, for simplicity of presentation, we assume that the system evolves with an unperturbed Hamiltonian of the form H_0(x) = p²/2m + V(q). The unitary evolution in the semiclassical limit can be obtained in the standard way, developing the path integral around the classical trajectory: we write x = x_cl(t) + y(t), with x_cl(t) the classical trajectory, and the quantum fluctuation around it described by the operator y = (δp, δq) through a time-dependent quadratic Hamiltonian. This is a harmonic oscillator with time-dependent frequencies: the evolution of y is linear and coincides with its classical counterpart. Therefore y_α(t) = R_{αβ}(t, 0) y_β(0), with R the classical tangent-space propagator. Since, by construction, x_cl is a c-number and ⟨φ_0|y|φ_0⟩ = 0, we have ⟨φ_0|x_α^c(t')|μ⟩ = ⟨φ_0|y_α(t')|μ⟩. The vectors which build the operator in Eq. (29) then read ⟨φ_0|x_α^c(t') = R_{αβ}(t', 0)⟨φ_0|y_β(0).
As described in the classical case, we normalize the operators so that they all have dimensions of (action)^{1/2}, taking x = (bq, p/b) with b = √(mω), which we shall assume in what follows. In the normalized variables the auxiliary oscillator Hamiltonian reads H_ref = (ω/2) Σ_a (p_a² + q_a²). We take the ground state of this Hamiltonian to define the initial state, |φ_0⟩ = |0⟩. This choice of basis and initial condition is appropriate for the semiclassical limit, since the initial distribution is localized in phase-space.
Let us now calculate the elements ⟨φ_0|y_α(0)|μ⟩ which give the matrix A in the semiclassical limit. The Hilbert space breaks into subspaces with different total numbers of boson excitations of the oscillator Hamiltonian H_ref. The y_α(0), being linear, create one-boson states when acting on |φ_0⟩. Hence, in the space having two or more bosons the matrix A vanishes, and the determinant in Eq. (28) is determined by a 2N-dimensional matrix within the doubled one-boson subspace (recall that the dimension of A is twice the size of the Hilbert space). This matrix is constructed from the elements ⟨φ_0|y_α(0)|1_n⟩, where |1_n⟩ is the state with one boson in the n-th degree of freedom and zero in all the others. Note the crucial role played by the identity in Eq. (28) outside this subspace. Since we are interested in the determinant, we can multiply the above matrix from the left and right by Z̃^{−1} and Z̃, respectively. We thus show that in the semiclassical limit the second Rényi entropy is given by

ln Tr ρ²(t) = −(1/2) ln det[1 + (ε²τ/ℏ) ∫_0^t dt' R^T(t', 0) R(t', 0)].   (40)
As a final step, we can retrace, in reverse, the steps which brought us from Eq. (13) to (14), employing the symplecticity of R. We multiply the matrix in Eq. (40) from the left and right by R(t, 0)Ω^{−1} and ΩR^T(t, 0), respectively, to obtain

ln Tr ρ²(t) = −(1/2) ln det[B(t, 0)B^T(t, 0) + (ε²τ/ℏ) ∫_0^t dt' B(t, t')B^T(t, t')].   (41)

This is to be compared with the classical expression (17), with the quantum initial state playing the role of the initial packet, a² ↔ ℏ.
If we now let ℏ → 0 before ε → 0, we are in the same situation as the limit of small size a of the initial packet in the classical case. If, on the contrary, we take ℏ → 0 after ε → 0, the result is not the Lyapunov expansion.

V. QUANTUM PESIN RELATIONS
Several quantum generalizations of the KS entropy have been suggested previously [9-12]. In particular, Ref. [12] defined an entropy via quantum Lyapunov exponents, which are in turn defined by the Out-of-Time-Order Commutator (OTOC) [25]: the square of a commutator of two operators at different times, ⟨[O_α(t), O_β(0)]²⟩. We now make a connection between this notion and our formula (29).
Let us split A into early- and late-time contributions, A(t) = A_−(t̄) + A_+(t̄, t), where A_− collects the time integral up to an intermediate time t̄ and A_+ the remainder. Now, because both A_+ + 1 and A_− are positive semi-definite, we may apply Minkowski's determinant inequality,

det(X + Y)^{1/2L} ≥ det(X)^{1/2L} + det(Y)^{1/2L},   X, Y positive semi-definite,

which gives a lower bound on the entropy production. Let us use this inequality to define a lower bound h̄_KS, which we shall later argue may be exact in some cases. We proceed as follows: we choose the time t̄ such that for t̄ < t' < t we may consider that A_+(t') ∼ A_+(t). The time τ̄ ≡ t − t̄ is of the order of the autocorrelation time of the x_α. Then we define h̄_KS from the late-time block alone. We can then use the Matrix Determinant Lemma (Eq. (24)) to obtain, under these assumptions, an expression in terms of an OTOC-like matrix built from the x_α.
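Minkowski's determinant inequality invoked here can be checked numerically (a sketch with arbitrary positive (semi-)definite matrices; the size n = 4 is an illustrative stand-in for 2L):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
GX = rng.standard_normal((n, n)); X = GX @ GX.T + np.eye(n)  # positive definite (like A_+ + 1)
GY = rng.standard_normal((n, n)); Y = GY @ GY.T              # positive semi-definite (like A_-)

# Minkowski: det(X + Y)^(1/n) >= det(X)^(1/n) + det(Y)^(1/n)
lhs = np.linalg.det(X + Y) ** (1.0 / n)
rhs = np.linalg.det(X) ** (1.0 / n) + np.linalg.det(Y) ** (1.0 / n)
print(lhs >= rhs)
```

Dropping one of the two terms on the right-hand side is what turns the inequality into a lower bound on the entropy production, as used in the text.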
If this OTOC-like matrix has eigenvalues e^{λ_i t}, then, thanks to the δ_{αβ} term, we have h̄_KS^{(2)} = Σ_{λ_i>0} λ_i. We stress that the λ_i obtained this way need not be the classical 'Lyapunov exponents', although it is very tempting to interpret them this way. Indeed, if there are exponential expansions, we may expect the inequality to be saturated, and h̄_KS^{(2)} = h_KS^{(2)}: this is because the smallness of the time interval τ̄ is compensated by the exponential expansion. We shall check this in the semiclassical limit.

A similar relation of the form h_KS^{(2)} = Σ_{λ_i>0} λ_i was already proposed in Ref. [12] as a definition of the quantum KS entropy, with the λ_i defined instead through the eigenvalues of an OTOC matrix evaluated in an eigenstate |ψ⟩ of the Hamiltonian. (Another definition of a Lyapunov spectrum was given in Ref. [29].) Let us insist that the relation we derive here is not a definition, but rather a derivation in terms of the entropy production.

Check of the semiclassical limit
One can check the semiclassical limit on the second equality of Eq. (47). As shown in Sec. IV, we have ⟨φ_0|x_α^c(t')|μ⟩ = R_{αγ}(t', 0)⟨φ_0|y_γ(0)|μ⟩. The matrix ⟨φ_0|y_γ(0)y_δ(0) + y_δ(0)y_γ(0)|φ_0⟩ is simply the identity matrix times ℏ: recall that for the semiclassical limit we choose |φ_0⟩ as the oscillator vacuum; therefore, if the indices correspond to different degrees of freedom the expectation breaks into a product of two expectations, both vanishing; if they correspond to a p_a and its partner q_a they also vanish; and ⟨φ_0|[y_γ(0)]²|φ_0⟩ = ℏ/2. The OTOC-like matrix then reduces to (R^T R)_{αβ}, which is the correct classical result, and thus h̄_KS^{(2)} = Σ_{λ_i>0} λ_i in this case.

VI. AVERAGING OVER INITIAL STATES
In the analysis above we have assumed (i) that the Hilbert space is represented by real and orthogonal eigenfunctions |μ⟩, and (ii) that the initial state is given by one of these eigenfunctions, ρ_0 = |φ_0⟩⟨φ_0|. These assumptions facilitate the derivation of the KS entropy in Eq. (28), and were suitable for taking its semiclassical limit. We now discuss how to generalize the result beyond these assumptions.
Concerning point (i), it is important to extend the result to a non-real eigenbasis, since the derivation, as well as the proof of additivity, assumes a basis orthogonal to |φ_0⟩. We can always construct such a basis; however, depending on |φ_0⟩, it might not be real. The derivation can be readily extended to any choice of eigenbasis {|μ⟩}.
The assumption of a real eigenbasis was only to simplify the notation, using (⟨φ_0|x_α(t')|μ⟩)* = ⟨φ_0|x_α(t')|μ⟩. For the general case, the matrix A is constructed from four blocks, built from the real and imaginary parts of ⟨φ_0|x_α^c(t')|μ⟩. We are left with relaxing assumption (ii): let us now show how to average over initial conditions, generalizing the result beyond the case ρ_0 = |φ_0⟩⟨φ_0|. The state |φ_0⟩ may be chosen as the ground state of an oscillator, translated by (q_o, p_o). In other words, |φ_0⟩ is the coherent state |z⟩, with z_a = q_o^a + i p_o^a and creation operators a†_a ∝ q_a − i p_a. This immediately suggests how to average over initial conditions. Consider for example the Boltzmann-Gibbs measure ρ_GB. Like any operator, it may be expressed in its P-symbol representation [30], ρ_GB = ∫ dz dz̄ ρ_P^GB(z, z̄)|z⟩⟨z|, where ρ_P^GB(z, z̄) is a phase-space function representing ρ_GB. Then a Rényi entropy averaged over the Boltzmann-Gibbs measure involves

Tr_z ρ²(t) = det[1 + (ε²τ/ℏ) A_z]^{−1/2},

with A_z defined as A in Eq. (29), replacing |φ_0⟩ with |z⟩. The same can obviously be done for all Rényi entropies S_q.
Note that it is in this averaging over z that the nontrivial dependence on q ('multifractality') enters.

A. Other groups, localized initial state
Writing the Rényi entropy with averaging over coherent states, the generalization to spin systems seems natural.
Suppose we have a spin-j chain. We construct the coherent states for each site in this representation in the standard way. Next, we consider two independent operators per site, for example J_i^x and J_i^y, and construct in this way the corresponding operator A = ∫ dt' Σ_{α,i} J_i^α(t')|z⟩⟨z|J_i^α(t'), with α = x, y. All the previous steps follow, including the semiclassical limit j → ∞.
For fermion systems, a similar strategy may be envisaged, replacing the spin operators by bi-fermionic operators.

VII. SUMMARY AND OUTLOOK
We have discussed a Kolmogorov-Sinai entropy of a closed quantum system, obtained by coupling it weakly to an auxiliary bath. The construction here allows one to derive (rather than assume) a quantum version of the Pesin relation. The external bath is classical in the sense that it is Markovian. However, the same steps of the analysis may be generalized to a truly quantum bath with finite temperature and friction through the Schwinger-Keldysh formalism. We note that adding noise correlations can be done simply through the matrix K in Eq. (25). The case of friction is less trivial, however, since within the Schwinger-Keldysh formalism it means that forward and backward trajectories are not identical, and the second Rényi entropy in Eq. (18) will thus involve four realizations of the noise.
In this paper we considered the instability generated in the system by the bath when coupled to a set of operators that is of the order of the number of degrees of freedom, or a subset of them. This has the important consequence that some final expressions may be written in lower-dimensional spaces. One could of course have used a much larger set of operators, including projectors onto pure states, and the result would have probably been different. Our procedure is thus an arbitrary, if physically motivated, choice.
A possibility that is physically interesting is to couple noise to a set of operators that are local in real space. As noted above, definition (28) is not restricted to systems where there is an exponential growth of OTOC-like quantities, and some, though not all, of the results in this paper carry through when there are no exponential growths in time, as is believed to be the case in local spin systems.
It would also be interesting to establish contact with the mathematical literature on the subject, in particular with the Connes-Narnhofer-Thirring entropy [10]. Note that this entropy is defined in the thermodynamic limit; a connection between the two definitions should thus be made in view of the macroscopic limit.
The recent bound on the quantum Lyapunov exponent states that the exponential growth rate of the OTOC cannot be larger than 2πk_B T/ℏ, with T the temperature [24]. This result has stimulated an enormous body of theoretical and experimental studies in various fields. An interesting future direction would be to derive a quantum bound on the KS entropy. Indeed, it is natural to expect a temperature-dependent bound on the growth rate of an entropy, which, unlike the Lyapunov exponent, is an extensive thermodynamic quantity. Finally, it would be interesting to understand whether the production rate of entanglement entropy in a closed quantum system, which might be driving thermalization [1], follows the KS entropy. Several works have already established such a relation in semiclassical setups [5-8] or by projecting the quantum unitary evolution onto an effective semiclassical one [9].

ACKNOWLEDGMENTS
TG and JK are supported by the Simons Foundation Grant No. 454943.

Appendix A: Classical derivation
In this appendix we provide (i) rigorous grounds for the classical Pesin relation given in Sec. II, (ii) a classical derivation involving Poisson brackets, which is completely analogous to the quantum one, and (iii) an alternative semiclassical derivation.

1. Classical Pesin relation: more details
In order to see how contributions from the noise at different times add, the natural objects to consider are the $p$-forms $\Lambda_{i_1,\dots,i_p}(t)$, which represent the volume elements of dimension $p$. These expand according to [31]
$$\sum_{i_1,\dots,i_p} \Lambda^2_{i_1,\dots,i_p}(t) \sim e^{t\Lambda^{(p)}} \sum_{i_1,\dots,i_p} \Lambda^2_{i_1,\dots,i_p}(0) \sim e^{(\lambda_1+\dots+\lambda_p)t} \sum_{i_1,\dots,i_p} \Lambda^2_{i_1,\dots,i_p}(0).$$
For a physicist, the easiest way to represent this is to introduce fermions $a_i$, $a_i^\dagger$ and their vacuum state $|0\rangle$, and to write the contribution from all the times $t$ as in [32,33]. In fact, introducing another family of fermions $b_i$, $b_i^\dagger$, and the vacuum of both species, a similar representation holds. Now, the existence of Lyapunov exponents is the statement that the time-ordered exponential $T e^{-\int_{t'}^{t} \hat{H}(t'')\,dt''} \sim e^{(t-t')\hat{H}_o}$ is extensive to exponential accuracy in time, so that $T e^{-\int_{t'}^{t} \hat{H}(t'')\,dt''} \sim e^{(t-t')\Lambda^{(p)}}\, |p\rangle\langle p|$, where $\Lambda^{(p)}$ and $|p\rangle$ are the lowest eigenvalue and corresponding eigenvector of the subspace with $p$ fermions of each kind.
We may thus evaluate the integrals to within exponential accuracy. The identity added in the central bracket comes precisely from the times $t'' \sim t$ and encompasses the effect of the late noise on the contracting modes. We may now expand the expression for the determinant as a development in minors, where the $c_p$ are time-independent overlaps. The sum is dominated by the exponential with the largest of all the exponents, say $\Lambda^{(p_+)}$, which we hence identify as the KS entropy. Similarly, the expansion of $p$-forms with $p \le p_+$ is dominated by $\Lambda^{(p)}$, so we may identify $\Lambda^{(p)}$ as the sum of the first $p$ Lyapunov exponents. Then $h_{KS} = \Lambda^{(p_+)}$, which is precisely the sum of the positive Lyapunov exponents. For a dynamics averaged over a strong noise, it is easy to make this into a rigorous proof.
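Numerically, the statement that the squared $p$-volumes grow as $e^{(\lambda_1+\dots+\lambda_p)t}$ can be checked with the standard QR re-orthonormalization scheme for Lyapunov spectra. A minimal sketch, where the uniformly hyperbolic map and all names are our illustrative choices, not taken from the paper:

```python
import numpy as np

# Tangent map of the Arnold cat map: a uniformly hyperbolic, area-preserving
# toy system with exponents +/- log((3+sqrt(5))/2).
M = np.array([[2.0, 1.0], [1.0, 1.0]])

def lyapunov_spectrum(M, n_steps=2000):
    """Lyapunov exponents via repeated QR re-orthonormalization.
    The running averages of log|R_ii| give the growth rates of the nested
    1-, 2-, ... volumes, i.e. the partial sums lambda_1 + ... + lambda_p."""
    Q = np.eye(M.shape[0])
    sums = np.zeros(M.shape[0])
    for _ in range(n_steps):
        Q, R = np.linalg.qr(M @ Q)
        sums += np.log(np.abs(np.diag(R)))
    return sums / n_steps

lam = lyapunov_spectrum(M)
print(lam)  # lam[0] ~ log((3+sqrt(5))/2); lam.sum() ~ 0 (area preservation)
```

Here the cumulative sums $\lambda_1 + \dots + \lambda_p$ play the role of $\Lambda^{(p)}$, and the largest partial sum (over positive exponents only) is the quantity identified above with $h_{KS}$.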
2. Classical formula with Poisson brackets: direct analogy with the quantum case.
We now derive a classical formula which is an analog of Eq. (20), and show how it yields the classical result given in Eq. (17). The way to go from classical to quantum mechanics is the replacement $\{\cdot,\cdot\} \to \frac{1}{i\hbar}[\cdot,\cdot]$. Thus, we expect that for a classical Hamiltonian system Eq. (20) transforms accordingly. Let us now see this in detail.
Classically, we have $\frac{dx}{dt} = -i\mathcal{L}x$, and for the density $\frac{d\rho}{dt} = i\mathcal{L}\rho$, where $-i\mathcal{L} \equiv \{\cdot, H\}$. The superoperator $\mathcal{L}$ is Hermitian. We can define the corresponding evolution superoperator. Now, let us turn to our problem: given a time-dependent Hamiltonian, we wish to calculate the Rényi entropy at time $t$, starting with some initial distribution of points in phase space, $\rho_0$.
Thus we look at the perturbed evolution, expanding to second order around $h_1, h_2 = 0$. Then, for example, the term proportional to $h_1(t')h_1(t'')$ can be evaluated as follows. For the first equality we use the identity $\int dy\, f(y)\, e^{i\mathcal{L}t} g(y) = \int d\tilde{y}\, \left[e^{-i\mathcal{L}t} f(\tilde{y})\right] g(\tilde{y})$, where we unitarily change basis $y \to e^{i\mathcal{L}t} y$; for the second equality we use the identity $U\{f,g\} = \{Uf, Ug\}$; and for the third equality the identity $\int \{f,g\}\, k = -\int f\,\{k,g\}$. Therefore, in analogy with the quantum case, one can conclude that the classical KS entropy follows Eq. (A7). Next, let us show how this alternative classical formula relates to the one derived in the text, Eq. (20).
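The first identity above is just Liouville's theorem plus a change of variables: $\int dy\, f(y)\, g(\Phi_t(y)) = \int dy\, f(\Phi_{-t}(y))\, g(y)$ for a Hamiltonian flow $\Phi_t$. A quick numerical sanity check for the harmonic-oscillator flow, where the test functions and grid are our illustrative choices:

```python
import numpy as np

# Harmonic-oscillator flow Phi_t for H = (q^2 + p^2)/2: a rotation of phase space.
def flow(q, p, t):
    return q * np.cos(t) + p * np.sin(t), p * np.cos(t) - q * np.sin(t)

# Illustrative observables, decaying fast enough for the grid quadrature.
f = lambda q, p: q * np.exp(-(q**2 + p**2))
g = lambda q, p: p * np.exp(-(q**2 + p**2))

x = np.linspace(-6, 6, 241)
dq = x[1] - x[0]
Q, P = np.meshgrid(x, x)
t = 0.7

# LHS: int dy f(y) g(Phi_t(y));  RHS: int dy f(Phi_{-t}(y)) g(y)
lhs = np.sum(f(Q, P) * g(*flow(Q, P, t))) * dq**2
rhs = np.sum(f(*flow(Q, P, -t)) * g(Q, P)) * dq**2
print(lhs, rhs)  # the two quadratures agree
```

Since the flow is volume-preserving, the substitution $y \to \Phi_{-t}(y)$ carries no Jacobian, which is exactly why the two integrals coincide.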
Let us assume a Gaussian initial condition. Plugging this into the argument of the exponent in Eq. (A7), one finds the result above, where it is assumed that $\rho_0(x_0)$ and $x_\alpha(t)$ are simple functions of $x_0$, such that in the semiclassical limit they are given by their Weyl transforms [34]. Eq. (A19) tells us that the semiclassical limit of Eq. (20) is the classical expression, Eq. (A7).
In summary, we find an expression which gives Eq. (20).

3. All the Rényi entropies
We can generalize the derivation to the Rényi entropy of any order. Taking $\rho_0 = |\phi_0\rangle\langle\phi_0|$, we can write the replicated expression. The second derivatives do not depend on $r$ (replica symmetry), so that we obtain a tight-binding problem with $r$ sites. This may be treated as the case of $q = 2$ discussed in the main text, and one easily gets the result, where the factors $\sin^2(\pi r/q)$ are the eigenvalues of the matrix with ones on the diagonal, minus one-half between nearest neighbors, and periodic boundary conditions. Note that we have not yet averaged over different initial conditions $\rho_0$.
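The spectrum of that replica-coupling matrix is easy to verify numerically. A minimal sketch, where the value of $q$ is an illustrative choice; with this normalization of the couplings the circulant eigenvalues come out as $1-\cos(2\pi r/q) = 2\sin^2(\pi r/q)$, the overall prefactor in front of $\sin^2(\pi r/q)$ depending on how the coupling is normalized:

```python
import numpy as np

q = 8  # number of replicas (illustrative choice)
# Circulant matrix: ones on the diagonal, -1/2 on nearest neighbors, periodic BC.
S = np.roll(np.eye(q), 1, axis=0)              # cyclic shift permutation
A = np.eye(q) - 0.5 * (S + S.T)
evals = np.sort(np.linalg.eigvalsh(A))

# Circulant prediction: 1 - cos(2*pi*r/q) = 2 sin^2(pi*r/q), r = 0..q-1
pred = np.sort(1 - np.cos(2 * np.pi * np.arange(q) / q))
print(np.allclose(evals, pred))
```

The zero mode (the uniform vector over replicas) reflects the fact that a global shift of all replicas costs nothing, consistent with replica symmetry.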
All the determinants differ only in a prefactor multiplying $A_{\mu\nu}$, so we conclude that, at least for $q$ of order one, there is no multifractality of quantum origin, i.e., for a given initial quantum condition. But of course there will be multifractality once we average over different initial conditions, just as there is in the classical case.