Quantum Information Dimension and Geometric Entropy

Geometric quantum mechanics, through its differential-geometric underpinning, provides additional tools of analysis and interpretation that bring quantum mechanics closer to classical mechanics: state spaces in both are equipped with a symplectic geometry. This opens the door to revisiting foundational questions, such as the nature of quantum entropy, from a geometric perspective. Central to this is the concept of geometric quantum state -- the probability measure on a system's space of pure states. This space's continuity leads us to introduce two analysis tools, inspired by Renyi's information theory, to characterize and quantify fundamental properties of geometric quantum states: the quantum information dimension, which is the rate of geometric quantum state compression, and the dimensional geometric entropy, which monitors the information stored in quantum states. We recount their classical definitions, information-theoretic meanings, and physical interpretations, and adapt them to quantum systems via the geometric approach. We then explicitly compute them in various examples and classes of quantum systems. We conclude by commenting on future directions for information in geometric quantum mechanics.


I. INTRODUCTION
When connecting theory to experiment, both classical and quantum mechanics (CM and QM) must cope with the emergence of randomness and uncertainty. However, the nature of randomness, and its dynamical emergence, can differ. Building on previous results [1,2] that exploit geometric parallels between classical and quantum state spaces (both are symplectic manifolds), we extend several tools for analyzing out-of-equilibrium classical systems to the quantum domain. This strengthens the parallels and provides a novel paradigm for investigating far-from-equilibrium open quantum systems. Specifically, following Kolmogorov and Sinai's use of Shannon's information theory [3] to quantify degrees of deterministic chaos [4][5][6][7][8][9], we show that the parallels go even deeper and lead to new descriptive and quantitative tools. This is done using Geometric Quantum Mechanics (GQM), an approach to quantum mechanics based on differential geometry that removes physical redundancies in quantum states intrinsic to the standard, linear-algebra approach. QM, in point of fact, is grounded in a formalism in which the states of a discrete system are vectors in a complex Hilbert space H of generic finite dimension D.
However, it is well known that such a formulation is redundant: vectors differing only in normalization and global phase are physically equivalent. Implementing this equivalence relation leads to the space where quantum states live: the complex projective Hilbert space P(H) ∼ CP^{D−1}. This is GQM's starting point [10][11][12][13][14][15][16][17][18][19][20][21][22][23][24][25][26][27]. It is important to stress that, while the mathematical formulation differs, the phenomena addressed are precisely the same as in standard quantum mechanics. Nonetheless, the geometric approach has proven (i) to be a rich source of fundamental insights into the nature of quantum phenomena and (ii) to lead to powerful analysis tools. Our goal is to advance this perspective to investigate the out-of-equilibrium phenomenology of open quantum systems. GQM works with probability measures on P(H). These are interpreted using ensemble theory, as noninteracting copies of pure states for the same quantum system, distributed according to some measure. This leads to the concept of a geometric quantum state (GQS) as an ensemble of pure states. (For an extensive analysis we recommend Ref.
[19].) This is a more fundamental notion of quantum state than the density matrix, as the latter can be computed from the former, but not vice versa. Recent work provided a constructive procedure to compute the GQS of an open, finite-dimensional quantum system interacting with another one of arbitrary (finite or infinite) dimension [1]. This revealed why the GQS provides a more accurate description than available with a density matrix: the GQS retains the details about how a specific ensemble of pure states emerges from the structure of correlations between the system and its surroundings. Starting from this foundation, the following introduces two information-theoretic concepts to characterize geometric quantum states. The first is the quantum information dimension. This borrows from Renyi's notion of the effective dimension of a continuous probability distribution, developed in the setting of efficiently transmitting continuous variables over noisy communication channels. Interestingly, for classical variables there are distributions for which the dimension is not an integer -- these are the well-known fractals [28]. The quantum information dimension also has an operational interpretation within communication theory, as the upper bound on the lossless compression rate for transmitting GQSs. The second GQS characterization uses the (related) concept of dimensional geometric entropy. Accounting for GQS dimension, this entropy quantifies the information a GQS stores about a quantum system. The development unfolds as follows. Section II gives a brief summary of geometric quantum mechanics and the notion of geometric quantum state. Section III defines the quantum information dimension, Section IV discusses its information-theoretic underpinning, and Section V introduces the dimensional geometric entropy. Sections VI A to VI D then analyze several examples, evaluating these quantities exactly. The first is an open quantum system interacting with a finite-dimensional environment. The second is an open quantum system interacting with another with an infinite-dimensional Hilbert space (continuous degrees of freedom). The third shows how to evaluate these quantities for discrete-time chaotic quantum dynamics. The fourth shows how to evaluate these quantities in the thermodynamic limit for a condensed-matter system in its ground state. Finally, Section VII discusses the results and Section VIII draws forward-looking conclusions.

II. GEOMETRIC QUANTUM MECHANICS
References [10][11][12][13][14][15][16][17][18][19][20][21][22][23][24][25][26][27] lay out the mathematical physics of GQM. Here, we simply recall the aspects most relevant for our purposes. Throughout, we only address quantum systems with a Hilbert space H of finite dimension D. In GQM, the pure states of such systems are points Z in the complex projective space P(H) ∼ CP^{D−1}. Given an arbitrary basis {|b_α⟩}_α of H, the pure state Z has the vector representation |Z⟩ = Σ_α Z^α |b_α⟩, where Z^α ∈ C and Z ∼ λZ, with λ ∈ C, giving Z ∈ CP^{D−1}. This space has a rich geometric structure [29]. In particular, there is a well-defined metric -- the Fubini-Study metric g_FS -- and a related notion of volume -- the Fubini-Study volume element dV_FS. These are directly connected, up to an overall positive multiplicative scalar, by dV_FS = √(det g_FS) dZ dZ̄, where the overbar denotes complex conjugation and dZ is the Lebesgue measure. While a full explication is beyond our current scope, we simply give its form in a particular coordinate system, specified by Z^α = √(p_α) e^{iϕ_α}: dV_FS = Π_{α=1}^{D−1} (dp_α dϕ_α / 2). On P(H), one considers ensembles distributed according to a probability density function or, more generally, a probability measure µ. The simplest example is the uniform measure dν_FS := dV_FS / V_{D−1}, where V_{D−1} = π^{D−1}/(D − 1)! is the total Fubini-Study volume of P(H). Calling A an element of P(H)'s Borel σ-algebra and adopting the De Finetti notation, we have ν_FS(A) = ∫_A dν_FS. In general, the measure µ is not uniform and one has µ(A) = ∫_A µ(dν_z). Looking at the measure-theoretic definition, if µ is absolutely continuous with respect to ν_FS, then there is a probability density function q(Z = z) such that µ_q(A) = ∫_A q(z) dν_z^FS. This is interpreted by saying that µ_q(dν_z) = q(z) dν_z^FS is the infinitesimal probability of a realization z -- i.e., a system pure state -- belonging to an infinitesimal volume dν_z^FS centered at z ∈ P(H). Thus, we can think about the pure state of a quantum system as a realization z of a random variable Z with sample space P(H). Here, z is distributed on P(H) according to its geometric quantum state q(z) or µ. Through an abuse of language, we often refer to both the measure µ and the density q (when it exists) as geometric quantum states. This is acceptable as they convey the same kind of information. Together, the triple (P(H), Z, µ) defines a random variable, in the classical sense, whose sample space is continuous and encodes the underlying quantumness of the physical system we aim to describe. We call this a random quantum variable (RQV). Following standard notation, if Z is an RQV, we denote a realization with the corresponding lowercase letter z. (This is not to be confused with the notation Z^α, with a Greek label, that refers to the α-th component of the vector |Z⟩ ∈ H.) To aid intuition, Fig. 1 displays an example of a geometric quantum state of a qubit, using the (θ, ϕ) coordinates to represent P(H) as the familiar Bloch sphere.

III. QUANTUM INFORMATION DIMENSION
One GQM advantage we exploit is that it allows the use of classical measure theory. The price paid is working with an underlying manifold of nontrivial geometry, as just noted. Intuitively, GQM directly encodes "quantumness" in the underlying geometry of the sampling space. Building on this, we now formalize the dimension of a geometric quantum state, extending Renyi's information theory [30] to the quantum domain.
Given P(H) and a measure µ on it, we uniformly discretize the manifold and coarse-grain µ to obtain a discrete distribution. This is accomplished as follows, using "probabilities + phases" coordinates: Z^α = √(p_α) e^{iϕ_α}. In this, {p_α}_{α=0}^{D−1} lives in a probability simplex ∆_D, while the phase coordinates {ϕ_α}_{α=1}^{D−1} live on a (D − 1)-dimensional torus T^{D−1}. Given this coordinate set, we partition both ∆_D and T^{D−1} separately, in a uniform fashion. More accurately, this is a partitioning of the σ-algebra on P(H) into a finite and discrete collection of sets Q(⃗j, ⃗k), since both ∆_D and T^{D−1} are manifolds of real dimension D − 1. Here, ⃗j = (j_1, . . ., j_{D−1}) is the multi-index label that runs over the partition cells of the simplex, and ⃗k is the analogous multi-index for the torus. The reason to partition P(H) using this coordinate system is the resulting factorization of dν_FS in "dp × dϕ": dν_FS = dP(∆_D) × dΦ(T^{D−1}), where dP(∆_D) = (D − 1)! δ(1 − Σ_α p_α) Π_α dp_α and dΦ(T^{D−1}) = Π_α dϕ_α/(2π). This allows us to separately discretize the probability simplex and the high-dimensional torus, while also resulting in cells Q(⃗j, ⃗k) with uniform Fubini-Study volume V_{D−1}(ϵ) ∝ ϵ^{2(D−1)}, where ϵ sets the scale of the partition elements. Note that, despite the discretization's specific coordinate system, the value of V_{D−1}(ϵ) does not depend on the coordinate system, thanks to the invariance of dν_FS. Therefore, {Q(⃗j, ⃗k)} defines a good uniform partition of P(H).
Calling z_{⃗j,⃗k} a generic point in Q(⃗j, ⃗k), we can now perform the coarse-graining procedure to get a discrete random variable Z_ϵ, for Z ∈ CP^{D−1}: Z_ϵ = z_{⃗j,⃗k} whenever Z ∈ Q(⃗j, ⃗k), with probability P(⃗j, ⃗k) = µ(Q(⃗j, ⃗k)) = q(z_{⃗j,⃗k}) V_{D−1}(ϵ) (1), where (only) in the last equality we assumed the existence of a density q(Z) and, in that case, z_{⃗j,⃗k} is a point inside Q(⃗j, ⃗k) defined implicitly by q(z_{⃗j,⃗k}) = µ(Q(⃗j, ⃗k))/V_{D−1}(ϵ). For explorations in discretizing continuous random variables see the lectures in Ref. [31] and relevant developments in Refs. [32,33].
Given that Z_ϵ is a discrete random variable, we can now use Shannon's functional to evaluate its entropy: H(Z_ϵ) = −Σ_{⃗j,⃗k} P(⃗j, ⃗k) log P(⃗j, ⃗k) (2). Substituting Eq. (1) into Eq. (2) and calling D the real dimension of the submanifold of P(H) on which µ has nonzero support, then as ϵ → 0 one has that H(Z_ϵ) is linear in −log ϵ [30]: H(Z_ϵ) ≈ D log(1/ϵ) + H (3). Note that µ can also have support on the whole manifold, in which case D = 2(D − 1) -- the real dimension of P(H). Thus, D's value in this scaling is the information dimension while, as we show shortly, the offset H provides a workable definition of differential entropy that accounts for the dimension of µ's support. With this in mind, we are now ready to define a geometric quantum state's quantum information dimension.
Definition 1 (Quantum Information Dimension). Given a finite-dimensional quantum system with state space P(H) ∼ CP^{D−1} and geometric quantum state µ, the latter's quantum information dimension D is: D := lim_{ϵ→0} H(Z_ϵ) / log(1/ϵ) (4). While alternative definitions are possible, see Ref. [34] and references therein, they all aim to rigorously ground the same idea. Moreover, the alternatives often provide identical values, assuming the validity of some regularity conditions.
The key point is that the essence and the result of the development do not change under these alternatives. For a detailed explication of information dimension see Refs. [30,34,35].
Note that there are useful theorems for explicitly calculating the information dimension. We use these, and develop others, in the upcoming sections. Before proceeding to the dimensional quantum entropy, though, we briefly discuss a connection between D and analog information theory, where D's classical counterpart has a direct interpretation.

IV. D'S INFORMATION-THEORETIC UNDERPINNING
Quantum information theory takes inspiration from the information theory of classical discrete sources. However, it is well known that quantum states need continuous real parameters to be faithfully represented. In fact, they require several complex numbers or, equivalently, elements of R^{2n}. So, an approach inspired by information theory is appropriate [36] if we can identify a natural extension to situations where the random variables at hand have a continuous sample space. As such, one can also appeal to analog, or continuous, information theory. An example of a result from analog information theory, relevant for our purposes, is the quasi-lossless compression theorem. Loosely speaking, this answers the question "How much can we compress the information emitted by a continuous source, using continuous variables?" Rather than giving the full result -- for which see Ref.
[34] -- we simply discuss the essential point. Consider a continuous source emitting realizations ⃗x ∈ X of a random variable X. We desire to compress its information. The dimension of X is arbitrary, but we assume X ⊆ R^n, for some n.
Compression can be achieved using (N, K)-codes-a pair of encoder-decoder functions.The encoder function f : X N → Y K converts the continuous message into appropriate discrete symbols, belonging to the space Y.
The decoder function g : Y^K → X^N performs the inverse. Take δ as the probability of making an error, and say that a rate R is ϵ-achievable when the (N, ⌊RN⌋) code has error δ ≤ ϵ. Assuming a linear form for the encoder and decoder, one establishes that there is a fundamental limit to the amount of quasi-lossless (up to ϵ) compression one can reach. This limit is achievable and it is given by the source's classical information dimension: R(ϵ) ≤ D(X).
Here, with a slight abuse of notation, we use the same symbol D to also identify the classical information dimension. We also stress that this is only a brief and simplified summary of the comprehensive analysis performed in Ref. [34]. What is relevant for our purposes is that, despite its simplicity, it is directly applicable to quantum systems. In particular, it addresses encoding a quantum source emitting pure states Z ∈ P(H) with a classical continuous distribution given by the geometric quantum state µ. Moreover, as quantum states themselves are points on a manifold described by continuous variables, it can also be applied to the inverse problem of representing a continuous classical source with quantum states. While this calls for further exploration before making rigorous statements, we believe it hints at an alternative way, inspired by analog information theory, of conceptualizing quantum computing and information theory. Before finally moving to dimensional quantum entropy, we highlight a point about D. While the understanding based on encoding and communication theory strengthens the argument for its relevance, D's general role in investigating properties of geometric quantum states stands on its own, as it is independently and rigorously defined.
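The intuition behind the bound R(ϵ) ≤ D(X) can be illustrated with a toy sketch (ours, not the coding scheme of Ref. [34]): a source supported on a one-dimensional curve embedded in R² has information dimension 1, so one quantized symbol per sample, rather than two ambient coordinates, suffices for quasi-lossless reconstruction.

```python
import numpy as np

rng = np.random.default_rng(1)
N, eps = 10_000, 1e-3

# Source supported on a 1-D curve in R^2 (the unit circle): dimension 1, not 2.
theta = rng.uniform(0.0, 2 * np.pi, N)
x = np.stack([np.cos(theta), np.sin(theta)], axis=1)

# Encoder: one quantized symbol per sample (rate ~ 1 * log(1/eps)),
# instead of two symbols for the two ambient coordinates.
symbols = np.round(theta / eps).astype(np.int64)

# Decoder: reconstruct the ambient point from the single symbol.
theta_hat = symbols * eps
x_hat = np.stack([np.cos(theta_hat), np.sin(theta_hat)], axis=1)

max_err = np.max(np.linalg.norm(x - x_hat, axis=1))
print(max_err < eps)  # distortion O(eps) at half the naive ambient rate
```

The circle and the uniform angle distribution are illustrative assumptions; any measure supported on a smooth curve behaves the same way.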

V. DIMENSIONAL QUANTUM ENTROPY
For a given geometric quantum state, D gives a notion of effective dimension. It is therefore natural that its value affects the definition of entropy one assigns to a geometric quantum state.
The standard example comes from comparing discrete and continuous probability distributions. In the discrete setting there is a unique entropy definition, given by Shannon's functional: H[p] = −Σ_x p(x) log p(x) (5). Its extension to the continuous domain, however, is not unique and its construction, use, and interpretation require care. On the one hand, Shannon's original definition of differential entropy for a continuous variable X with probability distribution p(x) provides a meaningful and interpretable quantity [3]: H_continuous[p] = −∫ p(x) log p(x) dx. On the other, it is well known that it presents its own challenges and that alternatives are possible. For example, it is well known to be sensitive to rescaling of the measure.
When dx → k dx we have that H_continuous → H_continuous + log k. Thus, when a physical measure is defined up to an overall scale factor, this quantity is defined up to an overall additive constant. This issue can often be disregarded as it does not carry physical consequences, in analogy with the classical notion of energy, defined up to a constant. Practically, it can be bypassed by fixing the zero point of the entropy to be given by the uniform distribution. This is realized by taking the reference measure to be the normalized volume of the space. In this way, a uniform density simply has constant value equal to 1, giving a differential entropy H_unif = log 1 = 0.
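The additive shift under rescaling can be checked directly. The sketch below (ours, for illustration) rescales a Gaussian sample by k and recovers the log k offset from histogram estimates of the differential entropy.

```python
import numpy as np

def diff_entropy(samples, bins=400):
    """Histogram (plug-in) estimate of the differential entropy, in nats."""
    counts, edges = np.histogram(samples, bins=bins)
    w = np.diff(edges)                 # bin widths
    p = counts / counts.sum()          # bin probabilities
    nz = p > 0
    # density in each bin is p/w; H = -sum p log(p/w)
    return -np.sum(p[nz] * np.log(p[nz] / w[nz]))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 1_000_000)
k = 3.0
h1 = diff_entropy(x)
h2 = diff_entropy(k * x)               # rescaled variable kX
print(h2 - h1, np.log(k))              # the difference recovers log k
```

For the standard Gaussian, h1 should also be close to the analytic value (1/2) log(2πe) ≈ 1.42 nats.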
Note, too, that H_continuous can be negative, as −log p(x) can be negative when p(x) is a density. This is not a concern, since correctly interpreting this quantity relies on the asymptotic equipartition property, which holds for both discrete and continuous random variables, irrespective of H_continuous's sign; see Ch. 8 of Ref. [36].
Moreover and finally, the differential entropy is appropriate only when the distribution has integer topological dimension. This is not the case, for example, in nonlinear dynamics, in which time-asymptotic statistical states often live on fractals due, for example, to chaotic behavior. These objects do not have integer dimension. However, it is possible to define an entropy that takes this rich phenomenology into account. Again, for the classical result we point to Refs. [30,34]. Here, we extend it into the quantum domain as follows.
Definition 2 (Dimensional quantum entropy). Given a finite-dimensional quantum system with state space CP^{D−1} and geometric quantum state µ with quantum information dimension D, we define µ's dimensional quantum entropy H_D[µ] as: H_D[µ] := lim_{ϵ→0} ( H(Z_ϵ) − D log(1/ϵ) ) (7). Note that this entropy is parametrized by the quantum information dimension. To provide intuition, consider two simple cases. Shortly after, Secs. VI A to VI D present a series of examples, with detailed calculations.
First, if D = 0, we see that H_0[µ] is simply the continuum limit of Z_ϵ's entropy H(Z_ϵ). Second, imagine we are looking at the uniform distribution over the Bloch sphere CP^1. As this is an absolutely continuous distribution, it has quantum information dimension D = 2 and, therefore, the appropriate notion of entropy should take that into account.
We also find that, when D = 2(D − 1), H_D is equal to the notion of geometric quantum entropy introduced, as far as we know, by Ref. [37]. See also Refs. [1,19,[37][38][39][40][41][42]. In the simple case of a qubit with continuous geometric quantum state q(Z) this is: H_2[q] = −∫_{CP^1} q(Z) log q(Z) dν_FS. We now discuss two different but related interpretations of H_D: the first of purely information-theoretic nature; the second of physical nature.

Information-theoretic interpretation of H_D
Even in the classical setting, there is no unique definition of entropy for continuous variables [43,44]. From a resource-theoretic perspective, one can argue that the various definitions address slightly different resources. Thus, indirectly, their interpretation can be given by identifying appropriate operational meanings.
In our quantum setting, if q is absolutely continuous, then D = 2(D − 1) and H_D[Z] provides the most straightforward definition: the differential entropy functional, see Ref. [36]. This is essentially Shannon's functional in Eq. (5) adapted to apply to a probability density, in which the sum changes into an integral. This can be proven directly from the definition in Eq. (7), using the assumption that q is absolutely continuous. In this case we have P(⃗j, ⃗k) = q(z_{⃗j,⃗k}) V_{D−1}(ϵ). Therefore: H_{2(D−1)}[Z] = −∫_{CP^{D−1}} q(Z) log q(Z) dν_FS. While the integral extends to the whole of CP^{D−1}, since lim_{x→0} x log x = 0 only q(Z)'s actual support contributes in a nontrivial way. As with classical continuous variables, the information-theoretic interpretation of H_D[Z] hinges on the asymptotic equipartition property (AEP) and on the fact that it characterizes the "size" -- probability decay rate -- of the stochastic process's typical set.
In short, the geometric formalism facilitates importing, mutatis mutandis, the tools of analog information theory (continuous variables) into the quantum domain. This holds since we can use classical measure theory to discuss the information-theoretic aspects of quantum states.
The price paid is that the arena where this occurs, usually an arbitrary sample space, is a manifold with geometric rules dictated by quantum physics. However, from the geometric standpoint, there is nothing special or uniquely challenging about complex projective spaces. Thus, one can appeal to standard results, simply by providing the correct setup. We now argue in more detail that this holds for the independent and identically distributed (i.i.d.) random variables we consider. While somewhat restrictive, the AEP for i.i.d. random variables is a fundamental result, one that lays strong and rigorous foundations for more advanced investigations. For present purposes, a geometric version of the quantum AEP gives the information-theoretic interpretation of H_D. Results on the classical differential entropy are found in Ref. [36]. Here, we provide the proper setup and discuss the results for geometric quantum states. First, we examine more closely the i.i.d. assumption. The projective space of quantum states of N identical systems is not the tensor product of the projective spaces: P(H_D^{⊗N}) ≠ [P(H_D)]^{⊗N}, where H_D^{⊗N} is the Hilbert space of N qudits, D = dim H_D, and [P(H_D)]^{⊗N} is the manifold of tensor-product states of N qudits. This is directly seen since dim_C P(H_D^{⊗N}) = D^N − 1 > N(D − 1) = dim_C [P(H_D)]^{⊗N} for N ≥ 2. Second, and the key point, the i.i.d. assumption guarantees that a geometric quantum state on P(H_D^{⊗N}) is the product of N identical geometric quantum states on P(H_D). More precisely, given homogeneous coordinates Z_{α_1,...,α_N} on P(H_D^{⊗N}), the submanifold of N i.i.d. quantum states is described by N sets of homogeneous coordinates {X_{α_i}}_{i=1}^N, with X_{α_i} on the i-th factor P(H_D), such that Z_{α_1,...,α_N} = Π_{i=1}^N X_{α_i}. Together with the i.i.d. assumption, this implies that q(Z_{α_1,...,α_N}) = Π_{i=1}^N q(X_{α_i}). Geometrically, then, i.i.d. processes live on tensor products of the Segre variety embedded in P(H_D^{⊗N}). In this way, using the tools of classical continuous-variable information theory, one can easily prove the weak law of large numbers. The details are not particularly insightful, in that they simply reproduce a particular proof of the weak law of large numbers, and so are given in App. A. In turn, this guarantees that the following geometric asymptotic equipartition property holds for random quantum variables.
Theorem 1 (G-AEP for i.i.d. quantum processes). Let Z_1, . . ., Z_N be a sequence of i.i.d. random quantum variables drawn from CP^{D−1} according to q(Z). Then: −(1/N) log q(Z_1, . . ., Z_N) → H_D[q] as N → ∞. The limit converges weakly in probability; see the proof in App. A. The net result establishes that H_D is a well-defined quantum information-theoretic entropy, with clear operational meaning, directly imported from continuous information theory.
Moreover, it is a tool of practical use, as an i.i.d. sampling of the quantum state space produces an ergodic process. Hence, state-space averages can be evaluated using sequential time averages and vice versa. Here, we do not dwell further on this matter. However, we mention that a deeper and more comprehensive analysis of the use of geometric quantum mechanics to describe quantum stochastic processes is possible and will be reported elsewhere.
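Theorem 1 can be checked numerically for a simple absolutely continuous qubit state. In the sketch below (ours; the density q(p, ϕ) = 2p/(2π) on the Bloch square is an illustrative choice, not one from the text), the empirical −(1/N) Σ_i log q(Z_i) converges to −∫ q log q, the differential entropy taken here with respect to the Lebesgue measure dp dϕ; relative to dν_FS it would shift by log 2π, the rescaling discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

# Illustrative AC geometric quantum state on the Bloch square:
# q(p, phi) = (2p) * (1/(2*pi)) with respect to Lebesgue measure dp dphi.
p = np.sqrt(rng.random(N))             # inverse-CDF sampling of f(p) = 2p
phi = rng.uniform(0.0, 2 * np.pi, N)
q = (2 * p) / (2 * np.pi)

# G-AEP: -(1/N) log q(Z_1, ..., Z_N) -> H in probability as N grows.
empirical = -np.mean(np.log(q))
analytic = (0.5 - np.log(2.0)) + np.log(2 * np.pi)   # -int q log q, exact
print(abs(empirical - analytic) < 0.01)
```

Because the Z_i are i.i.d., the variance of the empirical average shrinks as 1/N, which is exactly the weak convergence the theorem asserts.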

VI. EXAMPLES
This concludes our technical development of the quantum information dimension and geometric entropy. The next four subsections show how to compute them in several concrete physical cases, using a combination of analytical and numerical techniques:
• A quantum system in contact with a finite environment;
• An electron in a two-dimensional box;
• Chaotic quantum dynamics and quantum fractals: Baker's and Standard maps; and
• The thermodynamic limit.
Before moving to the actual analysis, to appropriately picture the quantum state space Fig. 2 gives a visual aid -- the representation of the full quantum state space of a qutrit, i.e., D = 3.

A. Case 1: Finite Environment

Consider a system S, with Hilbert space H_S of dimension d_S, interacting with a finite environment E of dimension d_E, the joint system being in a global pure state |ψ⟩ ∈ H_S ⊗ H_E. Calling {|e_α⟩}_{α=1}^{d_E} a basis within H_E, we can always write [1]: |ψ⟩ = Σ_{α=1}^{d_E} √(p_α^E) |χ_α⟩ ⊗ |e_α⟩ (9). Let {Π_α^E = |e_α⟩⟨e_α|}_{α=1}^{d_E} be an arbitrary set of projective measurements on E. Then p_α^E = ⟨ψ| Π_α^E |ψ⟩ is the probability of finding the environment in |e_α⟩. And, |χ_α⟩ are the system's post-measurement states, upon finding the environment in state |e_α⟩. This implies that we can always write the system's reduced density matrix ρ_S := Tr_E |ψ⟩⟨ψ| as: ρ_S = Σ_{α=1}^{d_E} p_α^E |χ_α⟩⟨χ_α| (10). One can interpret Eq. (9) as a Schmidt-like decomposition in which the sum runs from 1 to d_E -- the dimension of the larger of the two systems. Note that the states |χ_α⟩ do not generally form an orthogonal set. This environment-induced decomposition of the globally pure state |ψ⟩ provides a geometric quantum state: q_S = Σ_{α=1}^{d_E} p_α^E δ_{χ_α} (11), where δ_{χ_α} is the Dirac measure with support on χ_α -- the element of P(H_S) corresponding to |χ_α⟩.
From this we can extract two general results for when a system interacts with a finite-dimensional, albeit arbitrarily large, environment.
Theorem 2. Given a finite-dimensional quantum system S interacting with a finite-dimensional quantum environment E, S's quantum information dimension is D = 0. This is easily seen from Eq. (11), which is a finite sum of Dirac measures, thus having support on a finite number of points -- a set of dimension zero. This is always true for a system interacting with a finite environment. We can then draw a general result about the dimensional quantum entropy.
Theorem 3. Given a finite-dimensional quantum system S interacting with a finite-dimensional quantum environment E, S's dimensional quantum entropy is: H_0[q_S] = −Σ_{α=1}^{d_E} p_α^E log p_α^E, where p_α^E = ⟨ψ| Π_α^E |ψ⟩ is the probability of finding the environment in state |e_α⟩.
Two comments are in order. First, the dimensional quantum entropy is invariant under unitary transformations operating on the system. This is easily seen as it depends only on p_α^E, which has the required behavior. Second, H_0[q_S] in general can (but need not) scale with the size of the environment: H_0[q_S] ≤ log d_E = N_E. While counterintuitive, this dependence is physically consistent. Indeed, here we are addressing how the state of a quantum system of size N_S = log d_S results from its correlations with the state of an environment of size N_E = log d_E. Since (i) there are d_E = 2^{N_E} distinct environmental states (say, |e_α⟩) and (ii) via Eqs. (9), (10), and (11) each specifies a pure state of S, the geometric entropy of S scales, at most, with the environment's size. Moreover, we can also extract a lower bound, provided by ρ_S's von Neumann entropy. Indeed, among all the geometric quantum states with a given ρ_S there is one corresponding to its spectral decomposition ρ_S = Σ_j λ_j |λ_j⟩⟨λ_j|. Therefore: S_vN(ρ_S(q)) ≤ H_0[q], where we emphasize the dependence of ρ_S on q, given by [ρ_S(q)]_{ij} = E_q[Z_i Z̄_j]. The choice of |e_α⟩ reflects physical information about the specific problem being analyzed. For example, in a thermodynamic setting with Hamiltonian H = H_0 + H_int, with H_0 = H_S + H_E, we can choose |e_α⟩ to be the eigenstates of H_E while |a_i⟩ are the eigenstates of H_S. In this case, if the interaction is weak, the environment acts as a thermal bath. It settles on a distribution p_α^E quite close to a thermal equilibrium distribution p_α^E ∝ e^{−βe_α}, where e_α is the eigenvalue of H_E corresponding to the eigenvector |e_α⟩: H_E |e_α⟩ = e_α |e_α⟩. In a quantum computation, in which the environment performs nontrivial operations on the system of interest, |e_α⟩ can be chosen to be the computational basis. This first example of calculating the quantum information dimension and dimensional quantum entropy provides basic intuition about what these quantities convey about a system's overall behavior resulting from its correlations with an environment.
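Theorems 2 and 3, together with the bounds S_vN(ρ_S) ≤ H_0[q_S] ≤ log d_E, are easy to check numerically. A minimal sketch (ours; the state and dimensions are illustrative choices) draws a random global pure state of a qubit coupled to an eight-dimensional environment measured in its computational basis:

```python
import numpy as np

rng = np.random.default_rng(0)
dS, dE = 2, 8

# Random global pure state |psi> on H_S (x) H_E, stored as a dS x dE matrix.
psi = rng.normal(size=(dS, dE)) + 1j * rng.normal(size=(dS, dE))
psi /= np.linalg.norm(psi)

# Environment measured in its computational basis {|e_alpha>}:
# p_alpha^E = <psi| (1 (x) |e_a><e_a|) |psi> is the alpha-th column's weight.
pE = np.sum(np.abs(psi) ** 2, axis=0)
H0 = -np.sum(pE * np.log(pE))          # dimensional entropy H_0[q_S]

# Reduced density matrix and its von Neumann entropy (the lower bound).
rhoS = psi @ psi.conj().T
lam = np.linalg.eigvalsh(rhoS)
lam = lam[lam > 1e-12]
SvN = -np.sum(lam * np.log(lam))

print(SvN <= H0 <= np.log(dE))         # S_vN <= H_0 <= log d_E
```

The geometric quantum state itself is the atomic measure of Eq. (11), with atoms given by the normalized columns of psi; since it has finitely many atoms, D = 0, as Theorem 2 states.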

B. Case 2: Electron in a 2D Box
Let us now consider a second case, in which a finite quantum system interacts with a quantum system with continuous variables. A concrete example is an electron confined to move in a 2D rectangular box, where the position and spin degrees of freedom are assumed to be entangled. The scenario we have in mind is that of an electron confined to a region in which there is a nonhomogeneous magnetic field ⃗B(x, y), generating an interaction potential V(x, y) = (µ_B g_s/ℏ) ⃗S · ⃗B(x, y).
We follow Ref. [1]'s treatment. Let {|x, y⟩}_{x,y} be the eigenbasis of the position degrees of freedom and {|0⟩, |1⟩} a basis for the spin degree of freedom. Ref. [1] showed that a generic state can be written as: |ψ⟩ = ∫ dx dy f(x, y) |x, y⟩ ⊗ [ √(p_0(x, y)) e^{iϕ_0(x,y)} |0⟩ + √(p_1(x, y)) e^{iϕ_1(x,y)} |1⟩ ]. Thus, the spin degree of freedom is described by f(x, y) and {p_s(x, y), ϕ_s(x, y)}_{s=0,1}.
The partial trace over the position degrees of freedom, for a generic |ψ⟩, gives rise to a continuous geometric quantum state, parametrized by the coordinates x and y: ρ_S = ∫ dx dy |f(x, y)|² |p(x, y), ϕ(x, y)⟩⟨p(x, y), ϕ(x, y)| = ∫ q(p, ϕ) |p, ϕ⟩⟨p, ϕ| dν, where |p, ϕ⟩ = √(1 − p) |0⟩ + √p e^{iϕ} |1⟩. The second equality above implicitly defines a distribution on the qubit's projective Hilbert space. Reference [1] details the procedure.
The following simply summarizes the final result. Given an operator O acting only on the Hilbert space of the spin, we have: ⟨O⟩ = ∫ q(p, ϕ) ⟨p, ϕ| O |p, ϕ⟩ dν, where q is a geometric quantum state that depends on f and dν = dp dϕ/2π indicates the uniform Fubini-Study measure with coordinates (p, ϕ). The details of how q depends on f and on the Fubini-Study metric are not immediately relevant, but can be found in Ref. [1]. Here, though, we provide an explicit example to illustrate computing D[q] and H_D[q]. To be concrete, let f(x, y) be a product of Gaussian wave packets along x and y.

[Figure caption: The estimation incrementally decreases the coarse-graining scale ϵ and, at each step, calculates H(Z_ϵ). Then, excluding initial points to avoid saturation, it performs a least-squares fit to extract H(Z_ϵ)'s growth rate as a function of log 1/ϵ. We estimate d_I = 1.31 ± 0.01, fully consistent with the analytical prediction of d_I = 1.31, plotted in red. See Eq. (13) and Ref. [35] for the analytical estimate.]
Here, (µ_x, σ_x) and (µ_y, σ_y) are the average and variance along the x and y axes, respectively, and N_x and N_y are normalization factors. This constructs a geometric quantum state that is absolutely continuous with respect to ν_FS and therefore expressible via a probability density q(p, ϕ), which with these choices takes a Gaussian form in the (p, ϕ) coordinates. What are q(p, ϕ)'s quantum information dimension and dimensional geometric entropy? Since this is an absolutely continuous density function, with support on the whole of P(H), one can directly compute the limit in Eq. (4), obtaining D = 2. Moreover, H_2[Z] assumes a particularly simple closed form due to the Gaussian character of q(p, ϕ). Thus, again, D = 2 correctly captures the dimensionality of the underlying geometric quantum state and H_2[Z] appropriately quantifies its entropy.
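The limit defining H_2 can also be evaluated numerically for a state of this kind. The sketch below (ours) uses an assumed Gaussian-in-p, uniform-in-ϕ density as a stand-in for the q(p, ϕ) above -- not the exact density derived in Ref. [1] -- and checks that H(Z_ϵ) − 2 log(1/ϵ) approaches the analytic differential entropy (taken here with respect to the Lebesgue measure dp dϕ).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500_000
sigma, mu = 0.1, 0.5

# Assumed density: Gaussian in p (truncation to [0, 1] is negligible at
# 5 sigma from the borders), uniform in phi. Illustrative stand-in only.
p = rng.normal(mu, sigma, 2 * N)
p = p[(p > 0) & (p < 1)][:N]
phi = rng.uniform(0.0, 2 * np.pi, N)

eps = 0.02
counts, _, _ = np.histogram2d(p, phi,
                              bins=[int(1 / eps), int(2 * np.pi / eps)],
                              range=[[0, 1], [0, 2 * np.pi]])
prob = counts[counts > 0].ravel()
prob = prob / prob.sum()
H_eps = -np.sum(prob * np.log(prob))   # coarse-grained entropy H(Z_eps)

# Definition 2 with D = 2: H_2 ~ H(Z_eps) - 2 log(1/eps).
H2_est = H_eps - 2 * np.log(1.0 / eps)
H2_analytic = 0.5 * np.log(2 * np.pi * np.e * sigma**2) + np.log(2 * np.pi)
print(round(H2_est, 2), round(H2_analytic, 2))
```

Shrinking eps further (with correspondingly more samples) tightens the agreement, as the limit in Eq. (7) requires.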

C. Case 3: Chaotic Dynamics and Quantum Fractals
While the examples above clarify the meaning of, and the technology behind, the information dimension, its strength resides in estimating the dimension of complex probability distributions, especially those whose support is fractal [28,45,46]. These objects have interesting features, such as structural self-similarity and spontaneous statistical fluctuations, and they often arise as asymptotic invariant distributions of the dynamics of complex systems.
The geometric formalism allows us to show how examples imported from the classical theory of dynamical systems, leading up to fractal invariant sets, are part and parcel of the phenomenology of quantum systems. In particular, by exploiting the fact that the Fubini-Study uniform measure on CP^1 in (p, ϕ) coordinates is proportional to the Lebesgue measure on the square [0, 1] × [0, 2π], we look at two well-known chaotic dynamical systems whose attractors have fractal support: the Extended Baker's Map [35] and the Chirikov Standard Map [47]. We show how to implement them directly in quantum systems by leveraging geometric quantum mechanics.

Baker's Map
First, we look at the Extended Baker's Map (EBM) which, despite the chaotic behavior it generates, can be solved analytically. For a detailed discussion of its properties, especially those related to the information dimension, we refer to Ref. [35].
This map is directly implemented on CP^1 via the following unitary transformations. Let B denote the Extended Baker's Map; each iteration of B maps a quantum state (p, ϕ) ∈ CP^1 to one and only one quantum state (p′, ϕ′).
Here, we use λ_a ≤ λ_b ≤ 1/2 and β ≤ π. Note that the original Extended Baker's Map, as in Ref. [35], is defined on the unit square (x, y) ∈ [0, 1] × [0, 1]. The above adapts it to the Bloch square via (p → x, ϕ/2π → y). As a result, β is renormalized by a factor of 2π with respect to the parameter α found in Ref. [35]: Since there is a one-to-one correspondence between points of the Bloch square and points in CP^1, the action B[(p, ϕ)] = (p′, ϕ′) can be implemented on H as a unitary transformation between an arbitrary pair of input (p, ϕ) and output (p′, ϕ′) states, as follows. First, on the qubit Hilbert space, given any |ψ⟩ there is one and only one orthogonal state |ψ^⊥⟩, up to normalization and phase. Thus, a unitary transformation that maps a generic |ψ⟩ onto |ϕ⟩ can be written directly as Second, the embedding of CP^1 with (p, ϕ) coordinates into the qubit Hilbert space is given by: With this, given a point (p, ϕ), the state orthogonal to |p, ϕ⟩ is simply |1 − p, ϕ + π⟩. That is, ⟨p, ϕ|1 − p, ϕ + π⟩ = 0 for all (p, ϕ). Hence, this yields the unitary U := U(B) that implements B on the Hilbert space: As a result, iterates of U implement the EBM on CP^1. Calling (p_n, ϕ_n) the state after n iterations and U_{n+1} the unitary implementing the n-th iteration, we have Thus, while the definition of the map is the same at each iteration, each iteration is represented by a different unitary operator U_n = U(p_{n+1}, ϕ_{n+1}; p_n, ϕ_n). Assuming each iteration of the map takes a finite amount of time, the appropriate interpretation is that of an inhomogeneous (in state space) vector field on CP^1 or, analogously, on H.
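The construction above can be sketched in code. The paper's displayed equations are not reproduced here, so the two-branch map below is the standard form of the extended baker's map from the dynamical-systems literature (cf. Ref. [35]) transplanted to the Bloch square as described, the embedding |p, ϕ⟩ = √(1−p)|0⟩ + √p e^{iϕ}|1⟩ is an assumption consistent with the orthogonality relation quoted in the text, and the analytic dimension formula is the known EBM result, assumed to match Eq. (13):

```python
import numpy as np

LAM_A, LAM_B, BETA = 0.2, 0.2, 4 * np.pi / 10   # parameters used in Fig. 3

def ebm(p, phi):
    """One iteration of the Extended Baker's Map on the Bloch square [0,1] x [0,2pi].

    Standard two-branch form (assumed, following Ref. [35]): the branch is selected
    by phi, which is expanded to fill [0, 2pi); p is contracted by lambda_a or lambda_b.
    """
    if phi < BETA:
        return LAM_A * p, 2 * np.pi * phi / BETA
    return 0.5 + LAM_B * p, 2 * np.pi * (phi - BETA) / (2 * np.pi - BETA)

def ket(p, phi):
    """Assumed embedding of (p, phi) into the qubit Hilbert space."""
    return np.array([np.sqrt(1 - p), np.sqrt(p) * np.exp(1j * phi)])

def unitary_step(p, phi):
    """Unitary U_n mapping |p_n, phi_n> -> |p_{n+1}, phi_{n+1}>, built from the
    outer-product construction in the text: U = |out><in| + |out_perp><in_perp|,
    using that the state orthogonal to |p, phi> is |1-p, phi+pi>."""
    p2, phi2 = ebm(p, phi)
    u = np.outer(ket(p2, phi2), ket(p, phi).conj()) \
      + np.outer(ket(1 - p2, phi2 + np.pi), ket(1 - p, phi + np.pi).conj())
    return u, (p2, phi2)

# Analytic information dimension of the attractor, with alpha = beta / 2pi.
alpha = BETA / (2 * np.pi)
d_I = 1 + (alpha * np.log(1 / alpha) + (1 - alpha) * np.log(1 / (1 - alpha))) / \
          (alpha * np.log(1 / LAM_A) + (1 - alpha) * np.log(1 / LAM_B))
print(f"analytic d_I = {d_I:.2f}")
```

With the Fig. 3 parameters this evaluates to d_I ≈ 1.31, the value quoted below; the unitary returned by `unitary_step` is state-dependent, making concrete the inhomogeneous vector field interpretation.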
Reference [35] gives a detailed discussion of the map's dynamical properties. Here, we simply recall that, given an arbitrary initial point (p_0, ϕ_0), the dynamics moves the point over a subset of the entire state space. The natural measure, resulting from the dynamics over infinite time, is a fractal object. More accurately, the attractor has a uniform distribution over ϕ while it has the structure of a generalized Cantor set with respect to p. See Fig. 3 for a plot of 10^7 map iterates, illustrating the attractor's self-similar (fractal) structure. Moreover, its information dimension d_I is known analytically: where α = β/2π. This gives a quantum information dimension D ≈ 1.31 and allows us to benchmark the algorithmic procedure we use to numerically compute the information dimension, a necessary reference for cases in which D is not known. To extract the dimensional entropy we look at the estimated zero-point of the curve H(Z_ϵ) as a function of − log ϵ. The linear fit gives H_D ≈ 0.25 ± 0.15. See Fig. 4.

Standard Map
Let's shift attention to the dynamically richer Standard Map (SM). While its original definition is given on the square of side 2π, it is easily modified to operate on the Bloch square [0, 1] × [0, 2π]; i.e., CP^1 in (p, ϕ) coordinates.
where p is taken modulo 1, ϕ modulo 2π, and (p_0, ϕ_0) ∈ [0, 1] × [0, 2π]. K is a nonnegative parameter that determines the map's degree of nonlinearity. Its value is renormalized by 2π due to the fact that in its original definition the standard map operates on the unit square [0, 1] × [0, 1], while here we work in (p, ϕ) coordinates. The transformation can be implemented with a set of unitary transformations {S_n}, using the same construction described in Sec. VI C for the EBM.
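A minimal sketch of the Bloch-square Standard Map. The explicit rescaled form below is an assumption consistent with the verbal description (p modulo 1, ϕ modulo 2π, K renormalized by 2π), not a transcription of the paper's displayed equation:

```python
import numpy as np

def standard_map(p, phi, K):
    """One iteration of the Chirikov Standard Map rescaled to the Bloch square.

    Assumed form: p' = p + (K / 2pi) sin(phi)  (mod 1),
                  phi' = phi + 2pi p'          (mod 2pi).
    """
    p_next = (p + (K / (2 * np.pi)) * np.sin(phi)) % 1.0
    phi_next = (phi + 2 * np.pi * p_next) % (2 * np.pi)
    return p_next, phi_next

def orbit(p0, phi0, K, n_steps):
    """Iterate the map, returning the visited (p, phi) points."""
    pts = np.empty((n_steps, 2))
    p, phi = p0, phi0
    for i in range(n_steps):
        p, phi = standard_map(p, phi, K)
        pts[i] = (p, phi)
    return pts

# K = 0: integrable regime; p is conserved, so each orbit is a horizontal line (D = 1).
pts = orbit(0.2, np.pi, K=0.0, n_steps=1000)
```

Feeding these orbits to the box-counting estimator reproduces the dichotomy discussed next: D = 2 for chaotic initial conditions, D = 1 for (quasi)periodic ones.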
For K = 0 only periodic and quasi-periodic orbits are possible. For K > 0 the map generates both regions of chaotic behavior and periodic orbits. Increasing K, the extent of the periodic orbits decreases, yielding larger regions of chaotic behavior. Figure 5 shows this behavior. As a consequence of the mixed behavior across the state space, the information dimension of the natural measure, computed over a single trajectory, depends on the initial condition. If the initial condition leads to a chaotic orbit, we expect D = 2; for periodic orbits, D = 1. We verify this numerically using the same algorithm exploited in the previous section to estimate the EBM's information dimension. Figures 6 and 7 plot the results, consistent with the expected values.
Analogously, for the dimensional entropy there are two different situations, depending on whether the initial condition leads to periodic or chaotic behavior. Since in the chaotic case we simply have a 2D integral, here we look more closely at the second case, in which D = 1 and the following treatment applies.
Referring to Fig. 8, a generic quasiperiodic orbit covers a 1-dimensional line, identified by a generic equation f(ϕ, p) = 0, whose solutions are parametrized by a curve γ : [0, 1] → CP^1, or by a set of them {γ_i}, as in the case of Fig. 6. In the following, assume the curves γ_i are non-overlapping, so that given a point Z on any curve there is one and only one curve the point belongs to; thus the functions γ_i(s) admit inverses γ_i^{−1} : CP^1 → [0, 1]. While at first this appears to be a restrictive assumption, one can always use this construction in cases in which there are overlapping curves, simply by decomposing them into nonoverlapping subparts and analyzing each separately. Since the treatment is formally the same for each, we examine one of them and drop the index i. The curve γ is the nonvanishing support of the distribution whose entropy we are evaluating. On γ the proper notion of invariant measure is provided by the Fubini-Study infinitesimal length element dl^γ_FS := ||γ̇||_FS ds. Here, γ̇ = (dp/ds, dϕ/ds), ||v||_FS = √(g^FS_{ab} v^a v^b) is the Fubini-Study norm of a vector v in the tangent space, and g^FS is the Fubini-Study metric. Thus, γ's Fubini-Study length provides a notion of measure on [0, 1] that is invariant under changes of coordinates and under unitary transformations of CP^1, via µ^γ_s(ds) := dl^γ_FS = ||γ̇(s)||_FS ds. This provides the proper notion of integration on [0, 1], respecting all the invariance properties inherited from the fact that the points on γ belong to CP^1.
In this way, given a measure ν_γ on CP^1 with support on γ and density dν_γ = ν(dl^γ_FS) = f(s) dl^γ_FS, the limit in Eq. (7) can be carried out to give: where f(s) is the density or, more appropriately, the Radon-Nikodym derivative of ν_γ with respect to µ_γ.
For example, one can verify that this procedure gives the expected results in the case of a uniform distribution.
Calling L[γ] the Fubini-Study length of the curve γ, we have that It is worth noting that a most important property of this procedure is that it facilitates computing the entropy of a 1D distribution on γ ⊂ CP^1 by mapping it to the entropy of a continuous density on [0, 1]. This amounts to defining f as the continuous density that satisfies the following consistency constraint: for any arbitrary finite partition [0, 1] = ∪_i I_i, which generates a partition of γ into a set γ = ∪_i γ_i of N adjacent curves γ_i = γ(I_i), the density f is defined via the following chain of equalities: for any i, where n_N(γ_i) is the number of points belonging to γ_i in a finite (size-N) sample of the density on γ. This provides a constructive method to analytically compute H_1, provided one has the form of γ and f(s). It also gives a direct way to numerically estimate H_1 via the sampling provided by the dynamics:
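To make the construction concrete, here is a sketch of the Fubini-Study length element and curve entropy on CP^1 in (p, ϕ) coordinates. The metric form used, ds² = dp²/(4p(1−p)) + p(1−p) dϕ², is an assumption consistent with the volume element dV_FS = dp dϕ/2 quoted elsewhere in the text; the curve and density are illustrative:

```python
import numpy as np

def fs_speed(p, dp_ds, dphi_ds):
    """Fubini-Study norm ||gamma_dot||_FS of a tangent vector on CP^1 in (p, phi)
    coordinates, assuming ds^2 = dp^2 / (4 p (1-p)) + p (1-p) dphi^2."""
    return np.sqrt(dp_ds**2 / (4 * p * (1 - p)) + p * (1 - p) * dphi_ds**2)

def _integrate(y, ds):
    """Trapezoid rule on a uniform grid."""
    return ds * (y[1:-1].sum() + 0.5 * (y[0] + y[-1]))

def fs_length_and_entropy(p_of_s, phi_of_s, f_of_s, n=20_000):
    """Fubini-Study length L[gamma] of a parametrized curve and the entropy
    H_1 = -int f log f dl_FS of a density f(s) normalized against dl_FS."""
    s = np.linspace(0.0, 1.0, n)
    ds = s[1] - s[0]
    dp = np.gradient(p_of_s(s), ds)       # numerical derivatives of the parametrization
    dphi = np.gradient(phi_of_s(s), ds)
    speed = fs_speed(p_of_s(s), dp, dphi)
    length = _integrate(speed, ds)
    f = f_of_s(s)
    entropy = -_integrate(f * np.log(f) * speed, ds)
    return length, entropy

# Example: the "equator" p = 1/2, phi = 2*pi*s, carrying the uniform density f = 1/L.
L, H1 = fs_length_and_entropy(lambda s: np.full_like(s, 0.5),
                              lambda s: 2 * np.pi * s,
                              lambda s: np.full_like(s, 1 / np.pi))
```

For this uniform case the length is π (a great circle of radius 1/2) and H_1 = log L[γ] = log π, matching the uniform-distribution check mentioned above.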

D. Case 4: Thermodynamic Limit
Finally, let's shift to explore dimensions in an overtly physical setting: a finite quantum system without symmetry that interacts with a finite, but arbitrarily large, environment. The goal is to infer properties in the thermodynamic limit. The generic procedure for investigating the thermodynamic limit in geometric quantum mechanics was established and made explicit in Ref. [1].
The following adopts that procedure and investigates the geometric quantum state of the ground state of an open-boundary 1D spin-1/2 Heisenberg chain with a broken translational symmetry: a defect, realized by removing the local magnetic field on the last qubit. Let ⃗τ be the system's spin operator and ⃗σ_j the environment's spin operators. The total Hamiltonian is: where N_E is the size of the environment, H_S = ⃗B · ⃗τ, and: The defect-bearing Hamiltonian breaks translational symmetry, creating a rich geometric quantum state, one that exhibits self-similarity and fractal structure. To illustrate the latter we used B_z = 0.5 and N ∈ [10, 22]. As we will see, this choice is supported by the numerical analysis.
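The ground-state extraction and the geometric-quantum-state construction can be sketched as follows. The Hamiltonian below is a generic nearest-neighbor Heisenberg chain with the field omitted on the last site, written from the verbal description above; the paper's exact couplings (Eq. (14)) are not reproduced, so J = 1 and B_z = 0.5 are illustrative. The decomposition of |GS⟩ into environment-conditioned qubit states follows the standard GQS construction of Ref. [1]:

```python
import numpy as np
from scipy.sparse import identity, kron, csr_matrix
from scipy.sparse.linalg import eigsh

# Pauli matrices.
SX = csr_matrix([[0, 1], [1, 0]], dtype=complex)
SY = csr_matrix([[0, -1j], [1j, 0]], dtype=complex)
SZ = csr_matrix([[1, 0], [0, -1]], dtype=complex)

def site_op(op, j, n):
    """Embed a single-qubit operator at site j of an n-qubit chain."""
    out = identity(1, format="csr", dtype=complex)
    for k in range(n):
        out = kron(out, op if k == j else identity(2, format="csr"), format="csr")
    return out

def heisenberg_defect(n, bz=0.5):
    """Open-boundary Heisenberg chain; the local field is removed on the last qubit.
    Illustrative couplings (J = 1); the paper's Eq. (14) fixes the actual ones."""
    H = csr_matrix((2**n, 2**n), dtype=complex)
    for j in range(n - 1):
        for P in (SX, SY, SZ):
            H = H + site_op(P, j, n) @ site_op(P, j + 1, n)
    for j in range(n - 1):                  # field on every site except the defect
        H = H + bz * site_op(SZ, j, n)
    return H

def geometric_quantum_state(psi, n):
    """Weights and conditional qubit states: |psi> = sum_a sqrt(p_a)|chi_a>|e_a>,
    with the system taken as the first qubit and {|e_a>} the environment basis."""
    A = psi.reshape(2, 2**(n - 1))          # A[s, a] = <s, e_a | psi>
    p = np.sum(np.abs(A)**2, axis=0)
    keep = p > 1e-12
    chi = A[:, keep] / np.sqrt(p[keep])     # normalized conditional pure states
    return p[keep], chi

n = 8                                       # small chain; the paper uses N in [10, 22]
E0, V = eigsh(heisenberg_defect(n), k=1, which="SA")   # Lanczos, lowest eigenpair
p, chi = geometric_quantum_state(V[:, 0], n)
```

Each column of `chi` is a point on CP^1 carrying weight `p`, i.e., a Dirac contribution to q^GS_N(Z); these are the points whose support Fig. 9 displays.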
At each size N we used the Lanczos algorithm, available in Python via SciPy [48], to extract the ground state |GS(N)⟩ and obtain the associated geometric quantum state q^GS_N(Z). Figure 9 plots the support of q^GS_N for N = 22. Direct inspection of q^GS_N(Z) suggests that the support of q^GS_∞ has a fractal structure with D_∞ ∈ (0, 1). Thus, we are interested in q^GS_∞(Z) = lim_{N→∞} q^GS_N(Z). And so, for each N we estimate the information dimension via the numerical procedure used and benchmarked in the previous sections. This provides 13 different datasets from which to estimate the value of the information dimension in the thermodynamic limit. Accurately estimating D_∞, the QID of q^GS_∞(Z), is nontrivial since, in principle, it involves evaluating two limits: ϵ → 0 and N → ∞. The limits can be singular, meaning that the result might depend on the order in which they are performed. This is indeed what happens when trying to estimate D_∞ naively. Namely, by Theorem 2, at each finite N we have D_N = 0, which would lead one to conclude that D_∞ = 0. This is not correct. The reason a vanishing dimension appears when first computing D_N is that the environment is finite and, since H_N[Z_ϵ] ≤ N log 2, the curve H_N(− log ϵ) levels off after the expected linear increase in − log ϵ; see Fig. 10. That is, vanishing dimension arises from evaluating the limits in the wrong order: ϵ → 0 at fixed N first and then N → ∞. Instead, we are interested in the converse: the thermodynamic limit first, to obtain q^GS_∞, and then ϵ → 0 to extract D_∞. Vanishing dimension does not occur when performing the thermodynamic limit first, as this effectively removes the upper bound. If the analytical form of the ground state in the thermodynamic limit is known, one can proceed without further ado. This, however, is a rare case and, numerically, finite-size effects are expected to be present. To cope with them one must correctly identify, for each N, a region of consistent linear growth where H_N[Z_ϵ] ≈ −D_∞ log ϵ for all ϵ ∈ [ϵ_0(N), ϵ_1(N)]. Verifying that the estimate is robust against increasing the size of the environment yields a reliable estimate of D_∞. A cartoon of this idealized situation, providing visual support for the abstract intuition, is given in Fig. 10. The estimation was performed by numerically extracting the curves H_N[Z_ϵ] via a direct box-counting algorithm: fix the value of ϵ and build a grid; recall Section III. Then, using the numerical representation of q^GS_N, we evaluated the probability mass in each cell and computed this distribution's Shannon entropy. This gives a progressively finer coarse-graining of the state space. The scaling curves were then analyzed in two separate ways, yielding compatible results.
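The order-of-limits pitfall can be mimicked with synthetic scaling curves: each finite-N curve grows linearly in − log ϵ and then saturates near the N log 2 bound, so the slope must be fit only on points safely below every curve's saturation cap. A minimal sketch, with synthetic data and an illustrative saturation model:

```python
import numpy as np

def synthetic_curve(x, D, offset, N):
    """Idealized H_N(-log eps): linear growth with slope D, capped at N log 2."""
    return np.minimum(D * x + offset, N * np.log(2))

def fit_common_window(xs, curves, caps, margin=1.0):
    """Fit a single slope using only points safely below each curve's saturation cap."""
    pooled_x, pooled_y = [], []
    for x, y, cap in zip(xs, curves, caps):
        mask = y < cap - margin             # drop points near or past saturation
        pooled_x.append(x[mask])
        pooled_y.append(y[mask])
    slope, _ = np.polyfit(np.concatenate(pooled_x), np.concatenate(pooled_y), 1)
    return slope

D_true = 0.83
x = np.linspace(1, 12, 50)                  # values of -log(eps)
sizes = range(10, 23)                       # environment sizes, as in the text
curves = [synthetic_curve(x, D_true, 0.3, N) for N in sizes]
caps = [N * np.log(2) for N in sizes]
D_est = fit_common_window([x] * len(curves), curves, caps)
```

On real data the curves carry noise and N-dependent offsets, which is why the paper both fits each curve separately and collapses the offset-corrected data before regressing.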
First, a linear fit was performed by identifying a common region of linearity across all 13 curves H_N(− log ϵ) analyzed. Then, from the 13 resulting slopes we estimated the information dimension (and its error) via the average and standard deviation. The results yield D^(1)_∞ = 0.83 ± 0.02 and are summarized in Fig. 11. Second, we collapsed all the data onto a unique straight line by removing their estimated vertical offsets, setting the intercepts equal to 0. We removed a single outlier, to reduce the error, and checked that this did not appreciably change the estimate. We then performed linear regression on the aggregated data points. The result, summarized in Fig. 12, yields D^(2)_∞ = 0.84 ± 0.01.

Figure 12. Information dimension of the same geometric quantum state, estimated using aggregated data. To extract the information dimension in the thermodynamic limit, we do not distinguish between points belonging to different environment sizes. Aggregating them, we performed linear regression to extract a prediction, with associated error, of the curve's slope. The result yields D^(2)_∞ = 0.84 ± 0.01.

Altogether, the results support the intuition that the thermodynamic limit witnesses highly nontrivial geometric quantum states with fractal support. Increasing the environment's size, the system converges to a self-similar distribution with a noninteger information dimension D_∞ ≈ 0.83 ± 0.02. The state's support, shown in Fig. 9, is reminiscent of the Cantor set or, more appropriately, one of its generalizations, e.g., the EBM's invariant distribution in the x direction. The estimation of the dimensional geometric entropy is somewhat easier. Indeed, while its value diverges, it does so in a controlled fashion, at most linearly in the environment size N_E. We thus extract the entropy rate by direct inspection of its definition, given explicitly in Thm. 3, and estimate its linear asymptote, giving the entropy rate h_∞ in the thermodynamic limit: Since convergence to linear scaling occurred rather rapidly, an accurate estimate of h_∞ is obtained directly from the data, up to two significant digits: h_∞ ≈ h_est = 0.66 ≈ 95% log 2. Figure 13 gives both the data and the results of the linear fit. This concludes our survey of the informational properties of geometric quantum states. Table I summarizes the results. The results leave several questions and points of discussion, to which we now turn. After that we draw several conclusions.

VII. DISCUSSION
It has been over half a century since Kolmogorov and followers showed that Shannon's information theory [3] provides essential dynamical invariants for chaotic physical systems [4][5][6][7][8][9]. Today, practically, we know that information theory readily applies to physical systems that evolve in discrete time with either a discrete state space or tractable symbolic dynamics [49]. Applying the Shannon entropy functional, this involves quantities that capture informational features with physical relevance, such as a system's randomness and structure. This approach has successfully described the behavior of both Hamiltonian and dissipative classical systems. That said, the situation is decidedly less straightforward for physical systems with an inherently continuous sample space that lack a straightforward symbolic dynamics. Relying on analog information theory, informational descriptions are markedly more challenging to define and calculate.
Quantum systems belong to this category, as the space P(H) of pure states is continuous in nature. Geometric quantum mechanics brings this particular aspect of quantum systems to the fore, describing their states as probability measures on P(H); that is, in terms of geometric quantum states. Thus, the geometric approach directly leads one to adapt the tools of analog information theory to the quantum domain.
In this spirit, our development focused on information dimension and differential entropy, initially proposed by Renyi within the context of analog information theory. We showed that these tools provide a synthetic view of a system's geometric quantum state: the information dimension D determines the dimensionality of the state's support, while the dimensional geometric entropy H_D gives an appropriate differential entropy for a geometric quantum state with information dimension D.
Once these quantities were defined and properly interpreted, we explicitly computed their values in several examples: a finite-dimensional quantum system interacting with finite and infinite environments; a qubit evolving under quantum implementations of nonlinear maps, the Extended Baker's Map and the Standard Map; and, finally, a qubit in a progressively larger environment, for which we extracted properties in the thermodynamic limit.
The interest in these investigations is twofold. On the one hand, extending the tools of dynamical systems theory to the quantum domain is a topic of broad and long-lived interest. In point of fact, dynamical systems theory has led to successful modeling and quantitative understanding of the structures and behaviors generated by large classes of synthetic and natural systems, from nonlinear dynamics to the modeling of population dynamics to tackling the dynamics of information in a computer running classical algorithms. On the other hand, the phenomenology of open quantum systems, in and far from equilibrium, is a topic of both fundamental and applied relevance. Indeed, in the past half decade, the rise of the quantum computing paradigm has made concrete several theoretical investigations focused on the information-theoretic properties embedded in the dynamics of open quantum systems and the thermodynamic resources necessary for quantum information processors to run smoothly and efficiently.
We believe the geometric approach is well suited for these goals, for the following reasons. The notion of geometric quantum state of a system [1] encodes not only the statistics of all measurement outcomes one can perform on the system, as with the density matrix, but also the detailed structure of the system-environment quantum correlations that determine said measurement statistics. Hence, determining the information-theoretic properties of geometric quantum states gives a novel way to understand the phenomenology of open quantum systems, whose behavior and structure result from exchanging information-theoretic and energetic resources with an environment. We believe this will eventually lead to new analytical tools of sufficient power to deepen our understanding of the phenomenology of open quantum systems, both in and out of equilibrium.

VIII. CONCLUSIONS
The development's overtly mathematical nature suggests concluding with three forward-looking comments.
First, simple examples of geometric quantum states yield an integer value for D. At least in the measure theory of classical processes, though, it is well known that this is not typical. There are very interesting objects that exhibit noninteger information dimension: the self-similar or Cantor sets, now shorthanded as fractals. Indeed, these structures are critical to the operation of Maxwellian demons [50] and their modern realizations, information engines [51]. Comparing classical and quantum domains, it stands to reason that the geometric quantum formalism provides an interesting arena in which to develop a theory of quantum fractals. Efforts in this direction are currently ongoing and will be reported elsewhere. The informational quantities introduced here play a central role in this endeavor.
Second, while here we focused exclusively on D and H_D, it is straightforward to appreciate that the geometric approach allows for a richer cross-pollination between analog information theory and quantum information theory. For example, alternative definitions for core quantities of quantum information theory, based on the geometric approach and inspired by analog information theory, suggest themselves as parallels of entropy, relative entropy, mutual information, Kolmogorov-Sinai entropy rate, excess entropy, bound information, statistical complexity, and many others. Investigating their relations with the standard quantum counterparts, von Neumann entropy, quantum mutual information, and the like, presents interesting challenges. The solutions, we believe, are destined to enrich both quantum information science and analog information theory.
Third, the geometric approach provides a powerful way to study ensembles of pure states, with a rich phenomenology to uncover [52][53][54][55][56]. This is particularly so given the recent emergence of quantum information theory and the advances in quantum computing, which reinforce the need for more advanced tools to study ensembles of pure states [57][58][59]. Indeed, modern quantum simulators allow extracting ensembles of pure states, like the geometric quantum state, in systems with a controlled environment. This strengthens the case for the tools developed here and for the geometric approach more generally.

This can be written as: q(p, ϕ) is positive and one straightforwardly verifies that it is normalized. Recall that in (p, ϕ) coordinates dV^{(p,ϕ)}_FS = dp dϕ/2 and so: ∫ dV_FS q(p, ϕ) = 1.

Figure 1 .
Figure 1. Geometric quantum state q(Z) = Σ_{k=1}^{2^9} p_k δ[Z − Z_k] on CP^1, with coordinates (θ, ϕ). It is a finite sum of 2^9 Dirac measures. Each point is a possible pure state |Z_k⟩ = cos(θ_k/2)|0⟩ + e^{iϕ_k} sin(θ_k/2)|1⟩ in which the system can be, with probability p_k. The specific value of each p_k is encoded in a point's color; see legend.

Figure 2 .
Figure 2. Quantum state space of a qutrit: (Left) A finite-dimensional quantum system with D = 3 represented in 2D. Section III noted that canonically conjugate coordinates allow considering the full quantum state space as a classical 2-simplex ∆_2, which represents the space of classical probability distributions (1 − p_1 − p_2, p_1, p_2). (Right) A two-torus T^2 that accounts for the nontrivial phases (ϕ_1, ϕ_2).

Figure 3 .
Figure 3. Geometric quantum states visited along a single trajectory generated by the Extended Baker's Map with parameters λ_a = λ_b = 0.2, β = 4π/10, and initial condition (p_0, ϕ_0) = (0.32865, 0.98886). N = 10^7 time-steps are plotted on the Bloch square (p, ϕ) ∈ [0, 1] × [0, 2π]. Over time, due to the map's chaotic nature, even a single trajectory covers a (strange) attractor with self-similar (fractal) structure. More specifically, vertically the attractor has a uniform structure; horizontally, it has a self-similar, fractal structure equivalent to a generalized Cantor set. This is demonstrated, going from the left panel to the right, via successively magnifying small subsets of states.

Figure 4 .
Figure 4. Extended Baker's Map information dimension d_I: The estimation incrementally decreases the coarse-graining scale ϵ and, at each step, calculates H(Z_ϵ). Then, excluding initial points to avoid saturation, it performs a least-squares fit to extract the growth rate of H(Z_ϵ) as a function of log 1/ϵ. We estimate d_I = 1.31 ± 0.01. This is fully consistent with the analytical prediction of d_I = 1.31, plotted in red. See Eq. (13) and Ref. [35] for the analytical estimate.

Figure 6 .
Figure 6. (Left) Quasiperiodic orbit generated by the Standard Map at K = 2 with initial condition (p_0, ϕ_0) = (0.2, π). (Right) Numerical estimation of the information dimension, obtained by extracting the growth rate of H(ϵ), shorthand for H(Z_ϵ), as a function of log(1/ϵ). The estimated value is consistent, up to 2 significant digits, with the expected D = 1.

Figure 7 .
Figure 7. (Left) Chaotic orbit generated by the Standard Map at K = 2 with initial condition (p_0, ϕ_0) = (0.1, 4π/10). (Right) Numerical estimation of the information dimension, as above. Again, the estimated value is consistent, up to 2 significant digits, with the expected D = 2.

Figure 8 .
Figure 8. Dimensional quantum entropy of a qubit: geometric construction for quasiperiodic behavior. The construction gives rise to a natural measure of dimension D = 1. The measure's characteristics on CP^1 transfer to a random variable on the unit interval in a way that does not deform the distribution: it keeps intact the ratios ν_γ(γ_i)/ν_γ(γ_j) = µ_γ(I_i)/µ_γ(I_j) ≈ n(γ_i)/n(γ_j). This holds thanks to the fact that D = 1 and therefore the support of the measure is a curve γ on CP^1 that can always be parametrized by γ(s) with s ∈ [0, 1]. The green area represents a fictitious probability density on γ mapped onto [0, 1].

Figure 9 .
Figure 9. Support of the geometric quantum state of the ground state |GS(N_E)⟩ of the Heisenberg defect Hamiltonian with environment size N = 22. The GQS has two separate islands whose internal structure is self-similar. (Left to Middle to Right panels) Progressively magnifying the region around each part of the support reveals the distribution's self-similar support. In the thermodynamic limit its information dimension is estimated to be D ≈ 0.83 ± 0.02.

Figure 10 .
Figure 10. Schematic depiction of numerically extracting D_∞ by examining systems of progressively larger size N, where at each size we estimate two scaling curves. At their overlap a linear increase with − log ϵ is present. This permits estimating D_∞ via the slope of the tangent there. In this ideal case, this is exactly identical to D_∞ for each N. In reality, finite-size effects make the data noisy and the estimation harder; cf. Fig. 11.

Figure 11 .
Figure 11. Information dimension of the geometric quantum state of a qubit interacting with a 1D environment governed by a defective (i.e., non-translation-invariant) Heisenberg model of progressively increasing size; see Eq. (14). The entire system is in its ground state |GS(N)⟩, where the environment size is N ∈ [10, 22]. Each N yields a geometric quantum state whose information dimension we estimate using the box-counting algorithm explained in the text, extracting the slope of H_N[Z_ϵ] via a linear fit. (Inset) A collection of all the data, together with the linear fits. (Overall) The collection of horizontal lines displays all M = 13 estimates, from which we extract the average and standard deviation. The result yields D^(1)_∞ = 0.83 ± 0.02. The shaded areas in green, red, and blue correspond to fluctuations around the average of size σ, 2σ, and 3σ, respectively, where σ = 0.02 is the standard deviation of the sample of slopes.

Figure 13 .
Figure 13. Scaling of the dimensional quantum entropy H[N_E] of the geometric quantum state q as a function of environment size N_E. The estimated growth is linear, H[N_E] ∼ 0.66 N_E, and so the state has an entropy rate of h_est = 0.66.
As a first example, consider a system S that is part of a larger system SE of finite dimension. In this setting S develops correlations with a finite-dimensional environment E. Let d_E and d_S denote the dimensions of the Hilbert spaces H_E and H_S of E and S, respectively. Also, assume the overall system SE is in a pure state |ψ⟩ ∈ H_S ⊗ H_E.
A. Case 1: Finite Environment
where {|i⟩}^{d_S}_{i=1} is a basis of H_S and {|e_α⟩}^{d_E}_{α=1} is a basis of H_E.

Table I .
Quantum information dimensions and dimensional quantum entropies for the geometric quantum states analyzed in Secs. VI A-VI D.