From Vlasov-Poisson to Schrödinger-Poisson: dark matter simulation with a quantum variational time evolution algorithm


(Dated: January 29, 2024)

Cosmological simulations describing the evolution of density perturbations of a self-gravitating collisionless Dark Matter (DM) fluid in an expanding background provide a powerful tool to follow the formation of cosmic structures over wide dynamic ranges. The most widely adopted approach, based on the N-body discretization of the collisionless Vlasov-Poisson (VP) equations, is hampered by an unfavourable scaling when simulating the wide range of scales needed to cover at the same time the formation of single galaxies and of the largest cosmic structures. On the other hand, the dynamics described by the VP equations is limited by the rapid increase of the number of resolution elements (grid points and/or particles) required to simulate an ever growing range of scales. Recent studies showed an interesting mapping of the 6-dimensional+1 (6D+1) VP problem into a more amenable 3D+1 non-linear Schrödinger-Poisson (SP) problem for simulating the evolution of DM perturbations. This opens up the possibility of improving the scaling of time propagation simulations using quantum computing. In this paper, we introduce a quantum algorithm for simulating the Schrödinger-Poisson (SP) equation by adapting a variational real-time evolution approach to a self-consistent, non-linear, problem. To achieve this, we designed a novel set of quantum circuits that establish connections between the solution of the original Poisson equation and the solution of the corresponding time-dependent Schrödinger equation. We also analyzed how nonlinearity impacts the variance of observables. Furthermore, we explored how the spatial resolution behaves as the SP dynamics approaches the classical limit (h/m → 0) and discovered an empirical logarithmic relationship between the required number of qubits and the scale of the SP equation (h/m). This entire approach holds the potential to serve as an efficient alternative to the solution of the Vlasov-Poisson (VP) equation by means of classical algorithms.

I. INTRODUCTION
A number of astrophysical and cosmological observations consistently point toward the definition of the so-called standard cosmological model [1]. In this model, about 70% of the mass-energy content of the Universe consists of an unknown form of Dark Energy (DE), which accounts for the accelerated cosmic expansion; about 25% consists of an unknown form of collisionless non-baryonic Dark Matter (DM); only the remaining ∼5% is made of ordinary baryonic matter. In addition, viable models of galaxy formation require DM particles to be cold (CDM), i.e., with negligible streaming velocities. With the further observational evidence for DE being consistent with a cosmological constant term (Λ) in the Einstein field equations, all this leads to the definition of the standard ΛCDM cosmological model [2]. While the exact nature of the cosmic dark constituents remains so far elusive, it is widely accepted that the gravitational instability of the tiny CDM density perturbations imprinted in the primordial Universe drives the formation of cosmic structures, from the kiloparsec (kpc) scales relevant for galaxy formation to the Gigaparsec (Gpc) scales of the global cosmic web [3]. Describing in detail the evolution of such DM perturbations within a DE-dominated expanding background, and comparing the predictions to observational data, is crucial to shed light on the nature of DM and DE. The most widely adopted approach to the study of the gravitational instability of density perturbations in a collisionless fluid is the N-body discretization of the evolution of the fluid phase-space distribution function described by the Vlasov-Poisson (VP) system of equations [4].
In its most straightforward implementation, the N-body method explicitly computes the gravitational interaction between each pair of the N particles which discretize the fluid, thus implying an N² scaling with the number of resolution elements. While different methods, based on different levels of numerical approximation, have been introduced to speed up these computations, they are still hampered by the unfavorable scaling of the available classical algorithms with respect to system size. Furthermore, we should keep in mind that the N-body discretization of the phase-space structure of the fluid is itself an approximation, adopted to reduce the dimensionality of the problem to a tractable level.
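The quadratic cost of the direct-summation method can be made concrete with a minimal classical sketch. The particle positions, masses, and the softening length `eps` are illustrative choices of ours, not taken from the text; the double loop over pairs is what produces the N² scaling.

```python
import numpy as np

def direct_forces(pos, mass, G=1.0, eps=1e-3):
    """Direct-summation gravitational accelerations: the O(N^2) pairwise
    method. `eps` is a Plummer-like softening length, a common numerical
    device to avoid the divergence at zero separation."""
    N = len(pos)
    acc = np.zeros_like(pos)
    for i in range(N):
        d = pos - pos[i]                      # vectors from particle i, shape (N, 3)
        r2 = (d * d).sum(axis=1) + eps**2     # softened squared distances
        r2[i] = np.inf                        # exclude the self-interaction
        acc[i] = G * (mass[:, None] * d / r2[:, None] ** 1.5).sum(axis=0)
    return acc
```

For N particles this performs N(N − 1) force evaluations per step, which is exactly the scaling that tree and particle-mesh methods were introduced to mitigate.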
A recent work by Mocz et al. [5], showing a numerical correspondence between the 6D+1 Vlasov-Poisson (VP) and the 3D+1 Schrödinger-Poisson (SP) equations for cosmological simulations, revived the interest in simulating and studying various forms of dark matter that can be modelled by the SP equation [6][7][8]. In fact, the SP equation also has a direct physical interpretation in the so-called axion model, which postulates the presence of scalar particles as constituents of dark matter. In the ultra-light particle mass limit, this model is known as fuzzy dark matter (FDM) [9]. This correspondence opens up the possibility of using quantum algorithms (QA) for the investigation of dark matter dynamics, as it has already been demonstrated that QA can reduce the scaling complexity of the solution of quantum mechanical problems in many-body physics and quantum chemistry [10][11][12].
More generally, we propose a scalable quantum algorithm for the simulation of the time propagation of nonlinear Schrödinger-like equations of the form

i ∂_t Ψ(x, t) = H[Ψ] Ψ(x, t),

where H[Ψ] indicates the functional dependence of the Hamiltonian on the system wavefunction.
In this work, we explore the challenges arising in the implementation of cosmological simulations on quantum devices. The dynamics is governed by the SP equation, where a self-gravitating potential introduces nonlinearities in the problem. The mapping of the nonlinear problem onto a quantum device is solved using a hybrid classical-quantum variational algorithm similar to the one proposed in Lubasch et al. [15]. The evolution of the wavefunction is carried out using a variational time evolution (VTE) approach, tailored for nonlinear self-consistent problems defined on a grid, which allows for an exponential saving in computational memory resources through the encoding of N grid points in log₂(N) qubits. Building on [16], we adapt the VTE algorithm to the case where the potential is given by a variational ansatz, proposing quantum circuits for the evaluation of the required matrix elements, whose depth scales polynomially with the number of qubits and for which the number of samples required for a desired accuracy scales polynomially with the number of grid points N.
We investigate the behavior of the spatial resolution as the SP dynamics converges towards the classical limit (h/m → 0). Our investigation unveiled an empirical logarithmic correlation between the required number of qubits and the scale of the SP equation (h/m).
This work is structured as follows. In Section II we describe the mapping of the cosmological SP equation on a quantum computer, including a discussion of the strategies that must be adopted in the latter for the description of non-linear problems.
Section III is devoted to the description of the VTE algorithm for self-consistent nonlinear problems, including a discussion of the quantum circuit implementation. Numerical simulations for a one-dimensional 5-qubit (i.e., 32 grid points) system are given in Section IV. The results include an analysis of the time evolution obtained with different choices of physical parameters, interpolating between the pure quantum regime and the classical, h/m → 0, limit. A study of the resolution convergence in this classical regime is also presented. Finally, we discuss the computational costs of our quantum algorithm and the conditions for potential quantum advantage. We draw our main conclusions in Section V.

A. History of the Schrödinger-Poisson equation
Under the fluid assumption, the phase-space distribution of massive CDM particles at time t is described by the distribution function f(x, v, t), where x, v ∈ R³ are the positions and velocities of the particles, so that f dx dv describes the phase-space density within the 6D volume element dx dv, and the density field in configuration space is given by ρ(x, t) = ∫ f(x, v, t) dv. Under the assumption of a collisionless fluid, the evolution of the distribution function obeys a continuity equation in phase-space, df(x, v, t)/dt = 0. If the fluid is self-gravitating, then the Poisson equation, ∇²U(x, t) = 4πGρ(x, t) (with G Newton's gravitational constant), provides the relationship between the density field and the gravitational potential U [17]. Simulations of cosmic structure formation within a ΛCDM model aim at solving this Vlasov-Poisson system of equations, once initial conditions f(x, v, t₀) on the positions and velocities of the particles are assigned to represent an ensemble realization of a given cosmological model [18]. As such, the VP equations must be solved in 6D+1 dimensions. The high dimensionality of this problem makes it very hard to tackle when high spatial resolution is needed, as is usual in modern cosmological simulations. A widely used approach to reduce the dimensionality of the problem is to model the initial DM distribution as an ensemble of collisionless massive particles interacting only through self-gravity. Such a set of particles formally obeys the Euler-Poisson (EP) equations, a closure of the VP equations obtained by requiring that the distribution function is single-valued in space. Classically, the evolution is carried out using N-body [19][20][21] or fluid approaches [22].
The N-body approach [19, 20, 23] best approximates the analytic solution of the system (each DM particle has a single-valued velocity; particle trajectories can however cross, as the VP equations require) and usually presents no singularities. However, it requires much more computational resources than the fluid one. On the other hand, the fluid method, which directly solves the EP equations, manages to reduce the dimensionality of the problem from 6D+1 to 3D+1, but presents singularities [5, 22]. The limitations of both the N-body and the fluid methods clearly demonstrate that finding an alternative and efficient way to solve the VP equations would provide a significant conceptual and computational benefit for the numerical study of cosmic structure formation. Within this context, the Schrödinger-Poisson (SP) equations, i.e., the coupling of the Schrödinger equation with a self-interacting potential obeying the Poisson equation, have recently been proven to recover, in the classical limit h/m → 0, the dynamics of the VP equations [5, 24, 25]. Such an approach was first introduced in Ref. [26] as the non-relativistic limit of the Einstein field equations with a scalar boson field as source. The procedure, known as the Schrödinger method (SM), maps the initial classical distribution f(x, v, t₀) to the quantum wavefunction Ψ(x, t₀) by means of a nonlocal operation. Details about this procedure are given in [5]; here we just provide a brief overview of the method. We consider two primary scenarios. In instances where the initial distribution function is characterized by a cold or single-valued stream, meaning that a unique velocity corresponds to each point, it is possible to directly reconstruct the phase S of the quantum wavefunction Ψ = √ρ exp(iS/h), where ρ = ∫ f(x, v, t) dv, through the solution of a Poisson problem for the phase, where the scale h/m emerges as an effect of the quantization.
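As an illustrative 1D sketch of the cold (single-stream) mapping, the phase S can be recovered by direct integration of the velocity field instead of solving the Poisson problem used in the 3D case; the function name and the reduction to one dimension are ours.

```python
import numpy as np

def cold_wavefunction(rho, v, dx, lam):
    """Map a cold (single-valued) 1D distribution to Psi = sqrt(rho) exp(iS/lam).

    In 1D the phase follows from v = dS/dx by cumulative integration
    (defined up to an irrelevant constant); `lam` plays the role of h/m.
    This is a toy 1D version of the Schrödinger-method mapping, not the
    full 3D Poisson-based reconstruction described in the text."""
    S = np.cumsum(v) * dx                      # S(x) = integral of v dx
    return np.sqrt(rho) * np.exp(1j * S / lam)
```

By construction |Ψ|² = ρ, so the density is recovered exactly from the squared modulus of the mapped wavefunction.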
In the scenario involving multi-streams or warm initial conditions, where a single grid point may correspond to two or more different velocity values, the situation becomes more complex. This complexity arises because the densities do not precisely coincide, and the quantum wavefunction incorporates interference patterns. In this case, the mapping from the phase-space distribution function to the wavefunction reads

Ψ(x, t₀) ∝ Σ_v √(f(x, v, t₀)) exp[i (m v·x/h + φ_rand,v)],

where we sum over sampled velocities v, each with an associated random phase φ_rand,v ∈ [0, 2π) to ensure uncorrelated phases for each fluid velocity.
The wavefunction then evolves according to the SP system of equations

i h ∂Ψ/∂t = −(h²/2m) ∇²Ψ + m U Ψ,   (4)
∇²U(x, t) = 4πG (ρ − ρ*).   (5)

Here we have chosen to use the density contrast ρ − ρ* as source of the gravitational potential, where ρ* represents the average density over the volume considered. We note that in this approach Eq. (4) describes a density field, not a particle's wavefunction. Note also that the constants h and m are not the Planck constant and the mass of the particle, but are related respectively to the quantum and classical effects (see discussion below in Sect. II B and in Appendix A for details about the scale of the equation).
Once the Schrödinger-Poisson evolution is completed, the distribution function can be extracted from the final wavefunction using the Husimi procedure, which is a smoothed version of the Wigner quasi-probability distribution. A similar approach can be found in [27] for the solution of the Vlasov equation with electromagnetic fields in plasma physics applications. In a 3D context, this operation can be seen as the spatial smoothing of the wavefunction Ψ with a Gaussian filter of width η. Additionally, it involves a Fourier-like transformation to extract momentum information,

Ψ_H(x, p, t) ∝ ∫ exp[−(x − x′)²/(4η²)] exp[−i p·x′/h] Ψ(x′, t) dx′.   (6)

The squared modulus of the wavefunction in Eq. (6) yields a result that closely approximates the desired distribution function [5, 13].
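The Gaussian windowing plus Fourier-like transform can be sketched in 1D as follows. The normalisation and phase conventions here follow the generic Husimi definition and may differ from those of Ref. [5]; the function name and grids are ours.

```python
import numpy as np

def husimi_1d(psi, x, p_grid, eta, lam):
    """Husimi-style phase-space reconstruction in 1D: a Gaussian window of
    width eta centred on each x0, followed by a Fourier-like transform in p.
    Returns |amplitude|^2 on the (x, p) grid, approximating f(x, p)."""
    dx = x[1] - x[0]
    f = np.empty((len(x), len(p_grid)))
    for i, x0 in enumerate(x):
        window = np.exp(-(x - x0) ** 2 / (4 * eta ** 2))
        for k, p in enumerate(p_grid):
            amp = np.sum(window * psi * np.exp(-1j * p * x / lam)) * dx
            f[i, k] = np.abs(amp) ** 2
    return f
```

For a pure plane wave exp(i p₀ x / λ) the reconstruction concentrates, as expected, on the momentum bin closest to p₀.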
As a side note, we recall that the SP equations have already been used in the numerical study of cosmic structure formation, to follow the dynamics of Fuzzy Dark Matter (FDM) perturbations [28, 29]. This class of DM candidates emerges as the ultra-light mass limit of a scalar bosonic field, whose particles are known as axions. In this case h represents the actual Planck constant and m the mass of the axion-like particles. The characteristic scale of the problem is the ratio h/m: at smaller scales the dynamics is influenced by quantum effects such as quantum pressure, while at larger scales this effect becomes negligible and the classical Cold Dark Matter (CDM) limit is recovered.

B. The nonlinear SP equation on quantum computers
We consider a complex wavefunction Ψ(x, t) (with x ∈ R³) defined in such a way that |Ψ|² = ρ/ρ*. The following normalization emerges naturally from the definition of the volume-mean density ρ*:

(1/V) ∫_V |Ψ|² dx = 1.   (7)

The SP equation of interest (see diagram in Fig. 1) assumes the general form

i λ ∂Ψ/∂t = [−(λ²/2) ∇² + V[Ψ]] Ψ,   (8)

with the self-interacting potential V[Ψ] defined as

∇² V[Ψ] = |Ψ|² − 1.   (9)

Here λ = h/m is the intrinsic scale of the problem and V[Ψ] is a redefinition of the self-interacting potential U[Ψ] that renders the Poisson equation dimensionless. We use square brackets, e.g., V[Ψ], to denote functional dependence. Details on how to recover Eqs. (8), (9) from Eq. (4) are given in Appendix A. This set of equations can be seen as a time-dependent Schrödinger-like equation (TDSE), where the self-interacting nature of the potential in Eq. (9) causes the dynamics of the system to be strongly nonlinear. It features two main processes, whose intensities are regulated by the magnitude of λ. We observe that if λ → ∞ the potential term vanishes, leaving only the free Schrödinger equation, which leads to diffusion [30] (although, due to the imaginary coefficient iλ/2, the Schrödinger equation cannot be strictly classified as a diffusion equation). In this case we expect to see a spatial smoothing of the density distribution. In the opposite limit, when λ → 0, the potential term dominates: this should cause the collapse of the distribution followed by a series of peaks and caustics. As such, this can be seen as the onset of the classical regime of gravitational instability [5].
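The dimensionless Poisson equation (9) admits a compact classical solution on a periodic grid via an FFT, which is also the kind of spectral reference used later in Sec. IV. The 1D reduction and the function name are ours.

```python
import numpy as np

def poisson_potential(psi, L):
    """Solve the dimensionless Poisson equation  lap V = |Psi|^2 - 1
    on a periodic 1D grid of length L with a spectral (FFT) method.
    The k = 0 mode is set to zero, fixing the free additive constant;
    this is consistent because the source |Psi|^2 - 1 has zero mean
    under the normalization of Eq. (7)."""
    N = len(psi)
    src = np.abs(psi) ** 2 - 1.0
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers
    src_hat = np.fft.fft(src)
    V_hat = np.zeros_like(src_hat)
    V_hat[1:] = -src_hat[1:] / k[1:] ** 2        # -k^2 V_hat = src_hat
    return np.real(np.fft.ifft(V_hat))
```

A single-mode density contrast ε cos(kx) yields V = −ε cos(kx)/k², exactly as the analytic solution requires.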
While quantum computation has proved to be efficient in solving linear partial differential equations (PDEs) [31][32][33], problems arise when dealing with nonlinear equations, due to the intrinsic linearity of the quantum computation formalism [10, 34, 35]. Two main challenges are associated with the non-linearity of Eq. (8). The first one is related to the fact that quantum states are usually prepared and evolved through unitary operations. This preserves the well-known probability-like normalization of the quantum register: ⟨ψ|ψ⟩ = 1. Thus, the physical wavefunction |Ψ⟩, which solves Eq. (8), and the generic quantum state on the quantum register |ψ⟩ live in two different Hilbert spaces. We will give more details on this subject in Section II C. The second complication is related to the self-consistency of the problem, which forces us to look at alternative time evolution algorithms than Trotter-based expansions [36, 37].
To address both issues, in this work we propose a variational time evolution algorithm specifically adapted to the nonlinearity of the problem, which relies on the development and application of the novel quantum circuits described in Sec. III.

Figure 2. Example of a 3-qubit Ry-CNOT ansatz circuit for the wavefunction |Φ_Ṽ⟩ used to evaluate the potential according to Eq. (18). This circuit has 3 rotational blocks U_rot and 2 entangling blocks U_ent with linear entanglement. The output function is parameterized through the nine real parameters θ such that U(θ)|0⟩ = |Φ_Ṽ(θ)⟩; in this case the number of parameters exceeds the Hilbert space dimension 2³ = 8.

C. The quantum computing approach to the SP equation
A first attempt to solve the nonlinear SP equation was given by Mocz and Szasz [7]. Such a solution is fully variational and makes use of a finite-difference optimization of the potential and of the system wavefunction evaluated at two subsequent time steps. The variational nature of this approach also allows one to bypass the costly solution of the Poisson equation in Fourier space in favour of a variational optimization of the potential as implemented in a separate qubit register.
In this work, we propose a novel set of quantum circuits that enable the implementation of a different strategy, based on an adapted variational time-dependent quantum algorithm for the propagation of the variational parameters defining the system wavefunction (see Section II C 1). This enables a more rigorous implementation of the wavefunction dynamics, avoiding the instabilities implicit in most VQE optimization procedures (e.g., slow convergence due to trapping in local minima and barren plateaus [38, 39]). On the other hand, the VTE algorithm comes at the cost of evaluating additional matrix elements for the solution of the equations of motion for the wavefunction parameters.

Grid-based representation of the system wavefunction
A typical space discretization associated with problems in first quantization [7, 15, 16, 36, 37] approximates a continuous space with a grid. In 1D, a line of length L is divided into N equidistant points. For each grid point x_j we have Ψ_j ≃ Ψ(x_j), with j ∈ {0, 1, ..., N − 1} and periodic boundary conditions Ψ_N = Ψ_0. With an n-qubit quantum register, one can generate a quantum state |ψ⟩ belonging to an N-dimensional Hilbert space, where N = 2ⁿ. Making use of such a logarithmic encoding, only n = log₂ N qubits are needed to describe an N-point grid. A generic state |ψ⟩ can hence be represented on a quantum register as a superposition of computational basis states,

|ψ⟩ = Σ_{j=0}^{N−1} ψ_j |bin(j)⟩,   (10)

where bin(j) is the binary representation of the grid position j and ψ_j ∈ C is the associated amplitude or weight, such that the probability distribution of measuring the different basis states (i.e., different positions on the grid) is normalized as ⟨ψ|ψ⟩ = Σ_{j=0}^{N−1} |ψ_j|² = 1. By combining this relation with the discretization of Eq. (7), we can establish a correspondence between the approximated physical wavefunction at the grid point x_j and the corresponding coefficient of the j-th basis state |bin(j)⟩ in Eq. (10), such that

Ψ_j = √N ψ_j.   (11)

The dynamics of the system wavefunction is described by means of a time-dependent variational approach [40]. To this end, we define a quantum trial state |ψ(θ(t))⟩, parameterized by a set of (time-dependent) variables θ(t) = {θ_1(t), ..., θ_{M_p}(t)}, which evolve according to well-defined equations of motion [40]. The initial state is prepared through a suitable choice of a parameterized unitary (quantum circuit) U(θ(0)). An explicit circuit example is shown in Fig. 3. Using the previous relation between Ψ_j and ψ_j, we can describe the time evolution of the physical state using the updated parameters θ(t) (see II C 2).
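The logarithmic encoding and the rescaling Ψ_j = √N ψ_j can be illustrated with a short classical sketch, here using the sinusoidal initial condition of Sec. IV (the concrete grid size is an illustrative choice of ours):

```python
import numpy as np

n = 5                       # qubits
N = 2 ** n                  # N = 32 grid points encoded in log2(N) qubits
x = np.linspace(0, 8, N, endpoint=False)

Psi = 1 + 0.6 * np.sin(np.pi * x / 4)     # physical wavefunction on the grid

# Quantum-register amplitudes: unit-norm vector, <psi|psi> = 1.
psi = Psi / np.linalg.norm(Psi)

# Recover the physical field in the volume-mean convention of Eq. (7):
# Psi_j = sqrt(N) * psi_j, so that (1/N) sum_j |Psi_j|^2 = 1.
Psi_rescaled = np.sqrt(N) * psi
```

The rescaled field automatically satisfies the discretized version of the normalization (7), which is what allows the unit-norm register state and the physical density field to coexist.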

Variational time propagation with nonlinearities
The trial wavefunction |ψ(θ(t))⟩ is evolved by adapting the VTE algorithm proposed in Ref. [16] to the case where the potential is self-consistent with the wavefunction and needs to be re-evaluated at each timestep. In VTE, the dynamics is tracked on the manifold spanned by the time-dependent parameters θ(t) used to describe the trial wavefunction.
For a system evolving under the action of a Hamiltonian H, we derive, from the McLachlan variational principle [16, 40], a set of equations of motion (EOM) of the form

Σ_l M_{k,l} θ̇_l = V_k,   (12)

where

M_{k,l} = ℜ⟨∂_{θ_k}Ψ|∂_{θ_l}Ψ⟩,   (13)
V_k = ℑ⟨∂_{θ_k}Ψ|H|Ψ⟩,   (14)

with H the Hamiltonian defined in Eq. (8). The dependence of Ψ on the parameters θ(t) is implicit. Note that, to capture the exact evolution inclusive of nonlinear effects, the terms in Eqs. (13) and (14) are rescaled according to Eq. (11). The main obstacle to the application of such a method is the evaluation of the term ℑ⟨∂_{θ_k}Ψ|H|Ψ⟩ in Eq. (14). The difficulty lies in the fact that the self-consistent potential does not have a standard form, but depends on the system wavefunction. The evaluation of this term is made possible by the novel quantum circuit scheme discussed in Sec. III.

Algorithm 1. VTE for a self-consistent potential.
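Once the matrix and vector elements of Eqs. (13) and (14) are measured, one parameter update reduces to a regularized linear solve followed by an Euler step. A minimal sketch (the function name and regularisation defaults are ours; the stabilisation choices mirror the ones discussed in Sec. IV):

```python
import numpy as np

def vte_euler_step(theta, M, Vvec, dt, rcond=1e-8, eps=0.0):
    """One explicit-Euler update of the McLachlan EOM  M . theta_dot = V.

    M and Vvec hold the measured elements of Eqs. (13)-(14); `rcond`
    truncates small singular values in the least-squares solve and `eps`
    adds a Tikhonov-like diagonal regularisation, both guarding against
    an ill-conditioned M."""
    Mreg = M + eps * np.eye(len(theta))
    theta_dot, *_ = np.linalg.lstsq(Mreg, Vvec, rcond=rcond)
    return theta + dt * theta_dot
```

In a full simulation this step alternates with the re-optimization of the potential, since V_k depends on the self-consistent V[Ψ] at the current time.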

Optimization of the potential
As anticipated in Sec. II B, the functional dependence of the potential on the system wavefunction, Ψ, brings a further level of complexity into the dynamics of the system. While classically the solution of the Poisson equation (9) for a generic wavefunction Ψ can easily be found using a spectral method in Fourier space [7], such a strategy is not practical on near-term quantum computers, as it would require rather deep circuits [41]. We instead resort to a variational approach. Hence, we introduce a second set of parameters ϕ(t) = {ϕ_V(t), φ_1(t), ..., φ_L(t)} describing a quantum state |Φ_Ṽ(φ(t))⟩ such that the potential can be obtained as

V_j(ϕ(t)) = ϕ_V Ṽ_j(φ(t)),  with  Ṽ_j(φ) = ⟨bin(j)|Φ_Ṽ(φ)⟩,   (17)

where the index j in V_j(ϕ(t)) labels the grid position x_j associated with the bit string bin(j). In Eq. (17) the parameter ϕ_V ensures the normalization of the potential,

Σ_j |Ṽ_j(φ)|² = 1.   (18)

The potential can therefore be interpreted as a functional of the circuit parameters, V_j(ϕ(t)). The parameters are iteratively updated to minimize the distance between the parameterized potential and the one arising from Eq. (9):

ϕ(t) = argmin_ϕ Σ_j |V_j(ϕ(t)) − V(x_j, t)|².   (19)

Details about the terms appearing in Eq. (19) are given in Appendix B. When the optimization converges, the function V_j(ϕ(t)) approximates the exact potential V(x, t), with x ∈ {x_j}, corresponding to the parameterized wavefunction Ψ(θ(t)) at a specific time t.
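The structure of the cost in Eq. (19) can be sketched classically. Here the "ansatz" is a toy surrogate of ours, a cosine series with an overall scale standing in for ϕ_V and the circuit parameters φ, chosen only to illustrate the least-squares optimisation loop and not the actual quantum circuit:

```python
import numpy as np

def potential_cost(params, V_target):
    """Least-squares distance between a parameterised trial potential and
    the target potential solving the Poisson equation (role of Eq. (19)).

    params = [scale, c_1, c_2, ...]: `scale` mimics phi_V, the Fourier
    coefficients c_m mimic the circuit parameters phi. Purely classical
    stand-in for the variational circuit ansatz."""
    N = len(V_target)
    x = np.arange(N)
    scale, *coeffs = params
    trial = scale * sum(c * np.cos(2 * np.pi * (m + 1) * x / N)
                        for m, c in enumerate(coeffs))
    return np.sum((trial - V_target) ** 2)
```

In practice this kind of cost is what a classical optimizer (e.g. COBYLA followed by BFGS, as used in Sec. IV) minimizes at every timestep of Algorithm 1.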

III. THE ALGORITHM
The problem of self-consistency is solved, as anticipated in Section II B, by alternating the solution of the TDSE (VTE) with the optimization of the potential (Pot. Opt.). The intrinsic nonlinear nature of the SP equation is reconciled with the requirements of a quantum circuit implementation by imposing the correct normalization of the physical wavefunction and potential, as given by Eq. (11) and Eq. (17), respectively. A scheme of this algorithm is reported in Alg. 1, where {θ_{t_i}} and {ϕ_{t_i}} refer to the parameter sets at time t_i, i ∈ {0, 1, ..., N_t − 1}. For conciseness, in Alg. 1 we use the notation Ψ_i ≡ Ψ(θ_{t_i}).

A. Circuit implementation
The trial quantum states for both the wavefunction and the potential are implemented using a heuristic local ansatz [7, 15, 16, 37, 42, 43] that alternates single-qubit rotation layers U_rot(θ) and entangling layers U_ent (see example in Fig. 2),

U(θ) = U_rot(θ_{D+1}) ∏_{ξ=1}^{D} U_ent U_rot(θ_ξ),   (20)

where D is the number of entangling layers and θ_ξ a subgroup of parameters. The solution of the EOM in Eq. (12) requires the evaluation of the matrix elements in Eqs. (13) and (14). In the following, we propose an efficient implementation of the circuits for the evaluation of the terms with derivatives in Eqs. (13) and (14). In particular, we provide a detailed procedure for the calculation of those matrix elements that have a functional dependence on the nonlinear potential, such as ⟨∂_{θ_k}ψ| H(V(ϕ)) |ψ⟩.
Given the structure of the ansatz in Eq. (20) and θ_k in the subset θ_ξ, the derivative ∂_{θ_k} leaves the unitary unchanged, with the exception of the target rotational layer:

∂_{θ_k} R_{α_j}(θ_{ξ,j}) = −(i/2) α_j R_{α_j}(θ_{ξ,j}),   (21)

where θ_{ξ,j} ∈ θ_ξ and α_j ∈ {X, Y, Z} is a Pauli matrix, the generator of single-qubit rotations. Combining Eqs. (20), (21) and |ψ(θ)⟩ = U(θ)|Ξ⟩, one gets for the partial derivative

|∂_{θ_k} ψ(θ)⟩ = −(i/2) W_k(θ) |Ξ⟩,   (22)

for a generic quantum state |Ξ⟩. Here W_k(θ) is a modified version of U(θ) in which the single-qubit rotation R_α(θ_k) is preceded by its own generator [16, 44].
In the search for an efficient quantum circuit able to reproduce the matrix and vector elements of the McLachlan equation of motion of Eq. (12), the main obstacle is to produce a quantum state with the following structure:

(1/√2) (U_1|Ξ⟩ ⊗ |0⟩ + U_2|Ξ⟩ ⊗ |1⟩),   (23)

where U_1, U_2 are generic unitaries and the second quantum register (a single qubit) is used to evaluate the value of the matrix element. In the specific case at study, these unitaries should be expressive enough to enable a suitable parameterization of the wavefunction and its derivatives (Eq. (22)). Given the structure of the circuit W_k, by controlling only the Pauli matrix that implements the derivative, it is possible to prepare, for a given reference state |Ξ⟩, the quantum states

(1/√2) (U(θ)|Ξ⟩ ⊗ |0⟩ + W_k(θ)|Ξ⟩ ⊗ |1⟩),   (24)
(1/√2) (W_l(θ)|Ξ⟩ ⊗ |0⟩ + W_k(θ)|Ξ⟩ ⊗ |1⟩),   (25)

where F_{k,l}(θ) and F_k(θ) refer to the unitaries preparing the states with the different derivatives (see Fig. 3). Fig. 4 summarizes all quantum circuits relevant for the evaluation of the terms in Eqs. (13), (14). A brief discussion on how to evaluate them on a QC will follow, starting with the overlaps ℑ⟨∂_{θ_k}ψ|ψ⟩ and ℜ⟨∂_{θ_k}ψ|∂_{θ_j}ψ⟩. One can notice from Eqs. (24), (25) that, upon applying an H gate, measuring ⟨σ_z⟩ on the ancillary qubit returns the desired quantities. Furthermore, there is no need to evaluate the real part to compute the product of the overlaps in Eq. (13), since the term ⟨∂_{θ_k}ψ|ψ⟩ is purely imaginary. The circuits used to do so are shown in Figs. 4a, 4b.
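The measurement at the core of these circuits, apply H to the ancilla of a state of the form (23) and read ⟨σ_z⟩, can be verified with a small statevector emulation (a classical sketch of ours, not the actual circuit implementation):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def hadamard_test(u1, u2):
    """Statevector emulation of the ancilla measurement: starting from
    (|u1>|0> + |u2>|1>)/sqrt(2), apply H to the ancilla and return
    <sigma_z> = P(0) - P(1), which equals Re<u1|u2>."""
    d = len(u1)
    state = np.concatenate([u1, u2]) / np.sqrt(2)   # ancilla ⊗ register
    full = np.kron(H, np.eye(d)) @ state            # H on the ancilla
    p0 = np.sum(np.abs(full[:d]) ** 2)
    p1 = np.sum(np.abs(full[d:]) ** 2)
    return p0 - p1
```

With u1 = U(θ)|Ξ⟩ and u2 = W_k(θ)|Ξ⟩ this reproduces the real part of the overlap; imaginary parts follow in the standard way from an extra ancilla phase gate.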
The potential part, ℑ⟨∂_{θ_k}ψ| Ṽ(φ) |ψ⟩, is what actually connects the solution of the TDSE and the Poisson equation. Ṽ(φ) is given in Eq. (17) and is prepared using the parameters φ resulting from the minimization of Eq. (19). The circuit in Fig. 4c is the one used for the evaluation of this linking term, where the series of n Toffoli gates provides a pointwise multiplication between the wavefunction and the potential registers (i.e., Σ_k Ṽ_k ψ_k).
Figure 4. Quantum circuits for the evaluation of the VTE matrix elements in Eqs. (13), (14) through the measurement of the ancilla qubit ⟨σ_z⟩. The correspondence between expectation value and measurement is reported under the respective scheme. The unitaries F_{k,l} and F_k are both reported in Fig. 3; they are used to produce the wavefunction and its derivatives. The Toffoli gate in panel (c) represents a Toffoli ladder: n Toffoli gates linking the wavefunction and the potential registers qubit by qubit. (d) F_k^{(1/0)} denotes F_k with different control states (|0⟩ or |1⟩) and A is the adder circuit [15] (see Appendix C for more details on the adder circuit).

Concerning the term ℑ⟨∂_{θ_k}ψ| ∇² |ψ⟩, a few considerations are needed. For systems of cosmological relevance, we expect accurate simulations to require a spatial resolution fine enough to resolve all spatial features. Therefore, using a finite-difference approach, as also proposed in Ref. [15], is justified, since the discretization error should become irrelevant at higher resolutions. In this framework, an approximation of the Laplacian term is given by

⟨∂_{θ_k}ψ| ∇² |ψ⟩ ≈ (⟨∂_{θ_k}ψ|ψ_+⟩ − 2⟨∂_{θ_k}ψ|ψ⟩ + ⟨∂_{θ_k}ψ|ψ_−⟩) / Δx²,   (26)
with the positively (and negatively) shifted wavefunctions |ψ_±⟩ = Σ_{j=0}^{N−1} ψ_{j±1} |bin(j)⟩. These are obtained using the adder circuit A [15], whose action on the j-th basis state is |bin(j)⟩ → |bin(j − 1)⟩; in combination with the unitary F_k(θ) of Eq. (24) with a different control state, this allows us to evaluate the shifted overlaps in Eq. (26). A scheme of the circuits needed to perform these operations is presented in Fig. 4d.
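The action of the adder on the amplitude vector, and the resulting finite-difference Laplacian overlap, can be emulated classically with cyclic shifts (a sketch of ours; on hardware the shift is the adder circuit A, not a classical roll):

```python
import numpy as np

def shift_state(psi, direction=+1):
    """Classical emulation of the adder acting on the amplitude vector:
    |psi_+> has amplitude psi_{j+1} at position j (direction=+1),
    |psi_-> has amplitude psi_{j-1} (direction=-1), with periodic
    boundaries."""
    return np.roll(psi, -direction)

def laplacian_overlap(dpsi, psi, dx):
    """Finite-difference estimate of <dpsi| lap |psi> built from the
    shifted overlaps, following the structure of Eq. (26)."""
    psi_p = shift_state(psi, +1)
    psi_m = shift_state(psi, -1)
    return (np.vdot(dpsi, psi_p) - 2 * np.vdot(dpsi, psi)
            + np.vdot(dpsi, psi_m)) / dx ** 2
```

For a discrete plane wave the diagonal element reduces to the standard stencil eigenvalue (2 cos kΔx − 2)/Δx², approaching −k² in the fine-grid limit.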
See Appendix D for more details about the functioning of the circuits in Figs. 4c and 4d.

IV. RESULTS AND DISCUSSION
Before addressing the setup used in our simulations, some considerations about the characteristic scales appearing in the SP equation and the corresponding units are needed.
Given the invariance of the SP Eqs. (8), (9) under the scaling transformation

{x, t, ρ − ρ*, λ} → {αx, βt, β⁻²(ρ − ρ*), α²β⁻¹λ},

λ emerges as an intrinsic scale of the problem [5], as its scaling law combines changes in both the spatial and time domains (i.e., systems with different box dimensions or evolution times will display different dynamics).
Concerning the dimensions of the physical quantities appearing in the problem, we used arbitrary units. See Appendix A for details on the arbitrary values chosen for the density normalization ρ* and the constant G in the transition from Eq. (4) to Eqs. (8), (9).
As a final remark, we would like to emphasize that in this preliminary study all simulations were performed in an idealized setting, without considering gate errors and sampling shot noise.

A. Numerical simulations
We consider a one-dimensional system of length L = 8 with periodic boundary conditions. As anticipated above, we use arbitrary units for both the spatial coordinate and the time variable. The choice of L and of the total simulation time is made in such a way that, once we fix λ = 1, the self-interacting potential of Eq. (9) exactly balances the diffusion associated with the Schrödinger time evolution. In order to compare our results with those from Ref. [7], we used as initial condition a sinusoidal distribution of the form Ψ(x, 0) = 1 + 0.6 sin(πx/4), evolved according to Eqs. (8) and (9). This specific initial condition is a well-known standard test case. It corresponds to one of the Fourier components typically found in initial distributions for the VP equations, such as Gaussian random fields. It is widely used as it makes it easy to observe the effects of shell crossing. We will delve deeper into the concept of shell crossing in Sect. IV B.
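The classical FFT reference against which the quantum results are compared can be sketched with a split-step spectral integrator for the dimensionless SP system: a "kick" with the self-consistent potential followed by a "drift" with the kinetic term in Fourier space. This is our sketch of the spectral method of Ref. [7]; the operator ordering is a simple (not symmetrised) splitting, and the function name is ours.

```python
import numpy as np

def split_step_sp(psi0, L, lam, dt, steps):
    """Classical split-step reference solver for Eqs. (8)-(9) on a
    periodic 1D grid: solve lap V = |psi|^2 - 1 spectrally, apply the
    potential phase (kick), then the kinetic phase in k-space (drift)."""
    N = len(psi0)
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
    psi = psi0.astype(complex)
    for _ in range(steps):
        # potential from lap V = |psi|^2 - 1 (zero-mean, periodic)
        src_hat = np.fft.fft(np.abs(psi) ** 2 - 1.0)
        V_hat = np.zeros_like(src_hat)
        V_hat[1:] = -src_hat[1:] / k[1:] ** 2
        V = np.real(np.fft.ifft(V_hat))
        psi *= np.exp(-1j * V * dt / lam)                 # kick: i lam dpsi/dt = V psi
        psi = np.fft.ifft(np.exp(-1j * lam * k ** 2 * dt / 2)
                          * np.fft.fft(psi))              # drift: free Schrödinger step
    return psi
```

Both sub-steps are pointwise phase multiplications (in real and Fourier space, respectively), so the scheme conserves the norm of ψ by construction; a uniform |ψ|² = 1 state is an exact fixed point, consistent with the zero-mean source of Eq. (9).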
Table I. State fidelity F (between the classical reference and the evolved state at t = 3) for different VTE simulations. Hyperparameters: Dψ and DV, the number of rotation layers in the wavefunction and potential ansatz, respectively; Mp, the total number of parameters in the wavefunction ansatz; Nt, the number of timesteps used in the simulation; rc, the cutoff for singular values, used to determine the effective rank of the matrix M in Eq. (12) (more information is available in the SciPy documentation [45]); ϵ, the regularization factor added to the diagonal of the matrix M in Eq. (13).

For this proof-of-principle numerical implementation, the parameters θ0 reproducing the initial quantum state are obtained by optimizing the state fidelity F(ψ(θ), ψ) between the variational trial state |ψ(θ)⟩ and a target state |ψ⟩. In this work we refer to F as the state fidelity between two quantum states [46] (i.e., the state normalization is 1). In the situation where |ψ1⟩ and |ψ2⟩ are pure states, we have F(ψ1, ψ2) = |⟨ψ1|ψ2⟩|². This value will also be used to measure the convergence of the states obtained with the variational method to the ones obtained classically. We point out that this has nothing to do with the convergence to the actual solution of the physical problem (i.e., it does not take into consideration the grid discretization error). The classical optimization of the potential (Pot. Opt. in Algorithm 1) is performed using a combination of the COBYLA (to start the optimization) and BFGS (to find the best solution) algorithms, as implemented in SCIPY v1.9.0. All simulations were performed in Qiskit [47] within the statevector framework, i.e., using a matrix representation of the quantum circuit and a vector representation of the quantum state. The equations of motion in Eq. (12) are integrated using an explicit Euler method with fixed timestep for a total of Nt steps. Here, it is important to mention that, in general, the inversion of the matrix M in Eq. (13) may become ill-defined. To reduce the resulting instabilities of the dynamics, we used the SCIPY least-squares solver [45] with a suitable choice of the corresponding hyperparameters: the cutoff rc, used to determine the effective rank of the matrix in Eq. (12), such that singular values smaller than rc · Λmax are set to zero (here Λmax is the singular value of largest magnitude), and the regularization factor ϵ, added to the diagonal of the matrix M in Eq. (13).
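A single regularized parameter update of the kind described above can be sketched as follows, using SciPy's least-squares solver. The function name and signature are hypothetical and do not reproduce the paper's implementation; only the rc cutoff and ϵ regularization mirror the text.

```python
import numpy as np
from scipy.linalg import lstsq

def vte_euler_step(theta, M, V, dt, rc=1e-8, eps=1e-4):
    """One explicit-Euler update of the variational parameters theta.

    Solves (M + eps * I) theta_dot = V in the least-squares sense, with
    singular values below rc * Lambda_max discarded (SciPy's `cond` cutoff),
    to stabilize the ill-conditioned inversion of M.
    """
    M_reg = M + eps * np.eye(M.shape[0])       # diagonal regularization
    theta_dot, *_ = lstsq(M_reg, V, cond=rc)   # SVD cutoff at rc * Lambda_max
    return theta + dt * theta_dot
```

For a well-conditioned M the cutoff and regularization are inert and the update reduces to a plain Euler step in parameter space.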
In order to determine the quality of the results, we should also consider the expressivity of the variational ansatz used to encode the system wavefunction and the potential. To achieve accurate results, one would need, in principle, a number of circuit parameters θ(t) for the wavefunction that approaches the size of the Hilbert space. On the other hand, the number of terms in the matrices and vectors used in the equations of motion, Eqs. (13) and (14), scales as Mp² and Mp, respectively, as shown in Tab. II, where Mp is the number of parameters. Reducing the number of parameters significantly reduces the total number of circuit evaluations. This, however, translates into a lower accuracy of the dynamics, as the ansatz may not enable a thorough description of the sector of interest of the full Hilbert space. Similarly, a large number of parameters will enable a more accurate description of the self-consistent potential, at the price of a more cumbersome (classical) optimization process and an increased circuit depth.
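The quadratic-versus-linear scaling of these measurement terms can be made concrete with a small counting helper. The assumption that only the upper triangle of the symmetric matrix M needs to be evaluated is ours, introduced for illustration.

```python
# Rough count of distinct circuit evaluations per timestep implied by the
# equations of motion: the matrix of Eq. (13) has O(Mp^2) entries and the
# vector of Eq. (14) has O(Mp) entries.
def circuits_per_step(Mp):
    matrix_terms = Mp * (Mp + 1) // 2   # symmetric matrix: upper triangle
    vector_terms = Mp                   # one term per parameter
    return matrix_terms + vector_terms
```

Doubling Mp roughly quadruples the measurement budget, which is the trade-off against ansatz expressivity discussed above.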
Table II. Number of different circuits used to evaluate the terms in Eq. (12), with the respective number of qubits needed for the implementation. Here Mp is the number of variational parameters in the wavefunction ansatz and n = log2 N the number of qubits used for the discretization. [Table columns: Term, No. circuits, No. qubits.]

To assess the quality of our implementation (including the adjustment of the hyperparameters), we performed two series of simulations. The first one is a classical spectral method based on the FFT, as in [7]; results obtained from this approach are used as a reference. The actual implementation of our proposed quantum algorithm consists, instead, of repeated cycles of circuit optimization and VTE steps (Algorithm 1). When comparing its outcomes with the exact ones (Fig. 5 and Tab. I), we observe that the quantum approach correctly captures the qualitative behaviour of the wavefunction, although the probability distribution obtained from the VTE is not as smooth as the exact one.
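For reference, the classical FFT-based spectral method can be sketched as a split-step propagator that alternates kinetic steps in Fourier space with a spectral Poisson solve for the self-consistent potential. The equation form assumed below, iλ ∂tψ = −(λ²/2) ∂xxψ + Vψ with ∂xxV = |ψ|² − ⟨|ψ|²⟩, and its couplings are illustrative assumptions and do not reproduce Eqs. (8) and (9) exactly.

```python
import numpy as np

def sp_split_step(psi, L, lam, dt, steps):
    """Schematic split-step Fourier propagation of a 1D Schroedinger-Poisson
    type system, in the spirit of the classical reference method of Ref. [7].
    """
    N = psi.size
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)        # Fourier wavenumbers
    half_kin = np.exp(-1j * lam * k ** 2 * dt / 4.0)    # half kinetic step
    for _ in range(steps):
        psi = np.fft.ifft(half_kin * np.fft.fft(psi))
        # Spectral Poisson solve: d^2V/dx^2 = |psi|^2 - mean(|psi|^2)
        rho_hat = np.fft.fft(np.abs(psi) ** 2 - np.mean(np.abs(psi) ** 2))
        V_hat = np.zeros_like(rho_hat)
        V_hat[1:] = -rho_hat[1:] / k[1:] ** 2           # zero mode dropped
        V = np.real(np.fft.ifft(V_hat))
        psi = np.exp(-1j * V * dt / lam) * psi          # full potential step
        psi = np.fft.ifft(half_kin * np.fft.fft(psi))
    return psi
```

Both sub-steps are unitary, so the total probability is conserved to machine precision, a useful sanity check when comparing against the VTE results.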

B. Interpretation of the SP results
Fig. 6 shows the time evolution of the initial sinusoidal distribution, as given in Eq. (28), over a time span of approximately 6 time units for two different choices of the parameter λ (left: λ = 1; right: λ = 0.25). The lower panels depict the same dynamics as a two-dimensional surface plot of the time-dependent wavefunction. The larger the value of λ, the more pronounced the quantum nature of the dynamics; in fact, in the limit λ → 0, the SP dynamics converges towards the classical VP dynamics [5]. Physically, the collapse and splitting of the probability distribution (left panels in Fig. 6) is an effect of the self-interacting potential, which is regulated by the scale of the problem λ. However, as stated in the preamble of Sect. IV, what really matters is not the absolute value of λ, but its value relative to the box size and simulation time (e.g., if instead of L = 8 we had L = 1, we would need to change λ to λ/64 accordingly). In the classical limit h/m → 0, the quantum effects are suppressed, the potential can no longer counter the diffusion, and secondary peaks arise, as in the classical VP solution. In this scenario, the effects of shell-crossing are more pronounced.
The term shell-crossing can be better understood in the context of the study of the collapse of a spherical density perturbation in a self-gravitating collisionless fluid [48]. Following accretion due to the expansion of the Universe, spherical shells of matter collapse under the influence of gravity, until they intersect and a singularity arises. Subsequently, the term has been repurposed in the context of dark matter [49, 50] due to its non-collisional nature. More generally, shell-crossing happens whenever the orbits of two, or more, fluid elements intersect.
An example is shown in the right column of Fig. 6. Starting from the initial sinusoidal distribution, the gravitational attraction induces the concentration of the matter density in a first peak (around time t = 3), which then collapses under the effect of gravitational damping. This process repeats a few more times, giving rise to a multitude of sub-peaks as a result of repeated episodes of shell-crossing.

C. Scaling of required resources
Figure 7. (a) Convergence (see Eq. (29)) as a function of the resolution, given by the number of qubits n, for different values of λ (the scale for quantum effects). To match the number of points between the two systems, extra points are taken on the line connecting two adjacent points in the n-qubit discretization. (b) Minimum number of qubits required to obtain a fixed arbitrary convergence C(13) as a function of λ.

The largest cosmological simulations nowadays describe the evolution of boxes having a size of several Gigaparsecs, using of the order of a trillion resolution elements (particles) [51]. While simulations of this size are beyond the reach of what can be achieved on current quantum computers, the possibility of efficiently running large suites of simulations with ∼ 10^10 particles each is still highly valuable, allowing one to carry out a number of useful calibrations of observational quantities and to explore the parameter space of cosmological models [52, 53]. We thus consider a situation of possible cosmological interest to be a 3D simulation with a resolution of 2048 = 2^11 grid points per dimension. Thanks to the logarithmic encoding, a total of 2^33 grid points can be obtained with ntot = 33 qubits. In Tab. II we report the number of qubits needed for every term of Eq. (12) and the corresponding number of different circuits used. In this exploratory work we used a heuristic number of parameters Mp and timesteps Nt for our simulations; we are thus not in a position to provide an accurate estimate of the number of parameters, or timesteps, required for a relevant cosmological simulation. What we can say is that such a simulation would require a maximum of 2n + 1 qubits, used in the evaluation of the potential term.
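The qubit counting above can be summarized in a short helper. Whether the 2n + 1 bound of Tab. II applies to the full 3D register, as assumed here, is our extrapolation from the 1D analysis.

```python
import math

# Back-of-the-envelope resource count for the logarithmic grid encoding:
# n qubits per dimension encode 2**n grid points, and the potential term
# (the most demanding entry of Tab. II) uses at most 2n + 1 qubits.
def qubits_for_grid(points_per_dim, dims=3):
    n = math.ceil(math.log2(points_per_dim))  # qubits per dimension
    return {
        "per_dim": n,
        "total_grid": dims * n,
        "max_circuit": 2 * dims * n + 1,
    }

res = qubits_for_grid(2048, dims=3)
# 2048 = 2**11 points per dimension -> 33 qubits encode all 2**33 grid points
```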
The implementation of error mitigation protocols for near-term hardware experiments with noisy devices does not significantly affect the estimated number of resources (e.g., number of qubits and two-qubit gates).In particular, noise mitigation schemes such as probabilistic error cancellation [54] (PEC) and probabilistic error amplification [55] (PEA) only require additional single qubit operations to implement Pauli twirling [54] (for the conversion of coherent to incoherent noise) and dynamical decoupling, with no effect on the overall resource scaling.On the other hand, a significant increase in the number of measurements is expected for both PEC and PEA approaches.
From Table I we can retrieve some useful insights about the required timestep (to ensure numerical stability) and about the scaling of the target error with the system size. It is worth mentioning that the following observations concern the scenario in which the EOM (12) is integrated with an explicit first-order Euler method. Firstly, we note that to precisely describe the full Hilbert space, the number of parameters Mp should increase by a factor of two with the addition of each qubit. Furthermore, increasing the spatial resolution (number of qubits) necessitates a higher number of timesteps to maintain the desired level of accuracy in describing the dynamics. This phenomenon is analogous to what occurs in classical numerical integration problems, such as spectral methods or N-body simulations. On the other hand, when the fidelity F is held constant, the expected number of timesteps Nt decreases as the number of variational parameters Mp increases. This trend can be attributed to the fact that the equation being integrated (Eq. (12)) operates within parameter space, whereas the original dynamics (i.e., the Hamiltonian in Eq. (15)) is only reflected in the vector term (as per Eq. (14)). Moreover, the variational approach enables the use of a parameter count smaller than the Hilbert space dimension. Consequently, capturing the same dynamics within a submanifold, which offers less flexibility in terms of parameter evolution, requires a finer timestep.
In particular, to span the entire Hilbert space, we would need Mp = 2N variational parameters. As Mp deviates from this value, our ability to capture dynamical fluctuations diminishes, necessitating more timesteps to accurately track the wavefunction evolution. This provides an explanation for the lower fidelity values observed in Table I when a larger number of qubits is employed. In such cases, either Mp or the number of timesteps does not increase in alignment with the scaling necessary to maintain fidelity at a stable level.

Figure 8. Density distribution at a fixed time frame for different resolutions (i.e., numbers of qubits n). In the top (a) and bottom (b) panels the scale λ is set to 1/2 and 1/16, respectively. One can notice how a higher resolution is needed to resolve a more classical system (lower λ).
It is important to further note that this principle applies when Mp > Mmin, where Mmin represents the minimum number of variational parameters required to reproduce the target function within a specified accuracy. This minimum parameter count can change over the course of the wavefunction evolution, depending on its complexity.

Space resolution and classical limit
In this preliminary study, we only performed numerical tests on relatively small-scale systems, for which numerical simulations of our quantum algorithm were possible with the available computational resources. However, it is essential for us to confirm that the resolution we employed is sufficient to accurately capture the dynamics of the system. As we approach the classical limit, the space resolution needed to capture the right dynamical behaviour increases. This is clear in the left panel of Fig. 7, where the convergence of the probability distribution is shown as a function of the spatial resolution for simulations with different scales λ. We observed that with decreasing λ accurate results require a finer representation of the space coordinate. This is mainly due to the appearance of peaked structures in the dynamics (see Fig. 6), which are harder to resolve than in the case of larger λ values. It is worth mentioning that the increase in space resolution also requires a corresponding decrease of the simulation timestep (Table I). In the right panel of Fig. 7 the resolution is shown as a function of the scale λ for different convergence values. From an empirical fit we showed that the number of qubits necessary to resolve the dynamics of a system scales as O(log(λ)). To quantify convergence we used the L2 norm, at a fixed time frame, between the n-qubit probability distribution fn, mapped onto the 13-qubit grid, and the 13-qubit probability distribution f13 (Eq. (29)). In detail, the scaling law is fitted with a logarithmic function n(λ, C(13)) = K log(λ) + q(C(13)), where K = −1.44 and q(C(13)) is the resolution needed to obtain the desired convergence factor C(13) at λ = 1. Here C(13) indicates a reference value of Cn, chosen a priori, which therefore does not depend on n.
To determine, from a qualitative standpoint, what value of C(13) is needed to obtain convergence in resolution, we plot in Fig. 8 the probability distribution at a fixed timestep, for different resolutions and different λ. Comparing the images of this plot with the curves in Fig. 7 tells us what convergence level is associated with a given numerical value of C(13). We observed that the right behaviour is captured as soon as the various density distributions start overlapping. More precisely, this happens at 6 qubits for λ = 0.5 and at 8-9 qubits for λ = 0.0625. It is fair to assume that an L2 distance of O(10^−1) is enough to resolve the dynamics. We hence gather, from both the fit and the previous remarks, that a one-dimensional resolution of 11 qubits can be enough to resolve a simulation approaching the classical limit, with λ down to O(10^−3). Moreover, it is possible to show that for the simulations reported in Figure 5, with λ = 1, a resolution of 32 grid points (equivalent to 5 qubits) is sufficient to converge the primary features of the dynamics.
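The empirical scaling law n(λ) = K log(λ) + q can be reproduced, in spirit, by a least-squares fit in log λ. The data points below are hypothetical, constructed for demonstration and not taken from the paper's measurements, and the base of the logarithm in the reported K = −1.44 is assumed here to be natural.

```python
import numpy as np

# Hypothetical (lambda, minimum-qubits) pairs of the kind shown in the right
# panel of Fig. 7: each halving of lambda costs about one extra qubit.
lam = np.array([1.0, 0.5, 0.25, 0.0625])
n_min = np.array([5.0, 6.0, 7.0, 9.0])

# Linear fit of n against log(lambda): slope K, intercept q = n at lambda = 1.
K, q = np.polyfit(np.log(lam), n_min, 1)
```

With these constructed data the fit returns K = −1/ln 2 ≈ −1.44, coincidentally close to the value reported in the text, and q = 5, i.e., the resolution at λ = 1.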

Sampling and system size
Also important is the study of the convergence of the results as a function of the number of measurements (Ns) needed to accurately evaluate the elements in Eqs. (13) and (14). Measurements introduce a statistical noise in the solution of the equations of motion for the propagation of the wavefunction parameters, which has an impact on the overall dynamics. Building on [15], we investigate this behaviour in the case of the newly introduced term ⟨∂θk ψ| H |ψ⟩. The potential part is directly proportional to the measurement of the ancilla qubit ⟨σzV⟩, thus the variance of the measurements can be estimated by the following expression, where the value of σzV is intended in the limit Ns → ∞ and the norm of the potential ϕV(n) scales with the number of qubits as 2^(n/2) (this can easily be seen by applying the spectral method proposed in Ref. [7] to obtain the potential, where the wavefunction is normalized as in Eq. (11)). The fact that the number of shots scales exponentially with the number of qubits is related to the nonlinear nature of the problem. Precisely, it is a consequence of the factorization of the physical wavefunction and the potential (cf. Eqs. (10) and (17)), where the normalization factor appears as an additional parameter that depends on the number of grid points.
The kinetic part is given by a linear combination of three different sets of measurements, see Eq. (26). The variance is estimated with a sum in quadrature, where the factor 2^(2n) emerges from the term 1/∆x² required by the finite-difference method. We observe that in both cases the number of measurements required for an arbitrary accuracy increases with the number of qubits.
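A rough shot-budget estimate follows from the scalings above: the variance of the potential term grows as 2^n (from the 2^(n/2) norm), and the kinetic term carries an extra 2^(2n) factor from 1/∆x². The prefactors in this sketch are illustrative assumptions.

```python
import math

# Number of shots N_s needed to keep the statistical error of a term below
# target_err, given the exponential growth of the estimator variance with
# the number of qubits n (prefactors set to 1 for illustration).
def shots_for_error(n, target_err, kinetic=False):
    var_scale = 2 ** n                  # potential term: ||phi_V||^2 ~ 2**n
    if kinetic:
        var_scale *= 2 ** (2 * n)       # extra 1/dx^2 factor, squared
    return math.ceil(var_scale / target_err ** 2)
```

This makes explicit why, for a fixed target accuracy, the measurement cost grows polynomially with the number of grid points N = 2^n, as discussed in the conclusions.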

V. CONCLUSIONS
In this paper, we tackled the simulation of a many-body system of collisionless self-gravitating particles interacting only through a potential. In a cosmological context, this describes, e.g., the gravitational instability of a cold dark matter fluid in an expanding background. Our analysis builds on the possibility of recovering the dynamics of the Vlasov-Poisson (VP) equations by mapping them to a framework more suited for quantum computing (QC), namely the Schrödinger-Poisson (SP) equations.
We proposed a variational time-evolution (VTE) algorithm for the solution of the corresponding non-linear time-dependent Schrödinger-like equation (TDSE) in which, at each timestep, the potential, which is a functional of the time-evolved system wavefunction, is obtained by optimizing a suitable parameterized unitary on the quantum register.
The proposed quantum algorithm was developed with the aim of scaling up to system sizes which are, in principle, much less favourable for classical computers than for quantum computers. To this end, we used a compact (i.e., logarithmic) encoding of the spatial grid (i.e., n qubits describing 2^n grid points), while enabling the representation of any self-consistent potential that can be described by combining a parameterized unitary circuit with classical normalization factors. In particular, working with a circuit depth that scales polynomially with the number of qubits, we were able to reach a final state fidelity of approximately 0.96 in a 5-qubit simulation. Concerning the scaling of the VTE circuit, the number of terms required to evolve the wavefunction over a single timestep scales quadratically with the number of variational parameters. However, the number of timesteps required to achieve a given fidelity increases as the ratio between the number of variational parameters and the Hilbert space dimension decreases, as shown in Tab. I. This behaviour might be related to the heuristic ansatz used in our implementation (e.g., Figs. 2, 20). We leave to future investigations whether ansätze based on tensor networks (e.g., Matrix Product States), as proposed in [15, 56], can bring improvements.
In addition, the number of measurements required to reach a desired accuracy shows a polynomial scaling with the number of grid points.We point out that this behaviour is not specifically related to our proposed VTE algorithm, but to the approach chosen to tackle the nonlinear nature of the problem, namely factorizing the potential and the wavefunction into unitary circuits followed by classical normalization.
Moreover, using classical simulations we investigated how the required resolution changes as we approach the classical limit h/m → 0 in a 1D scenario.The proposed empirical log-scaling law opens up new interesting perspectives for the use of QC in the propagation of the SP equation in more general settings, including the 3D case.
In conclusion, we consider this work as a first step towards the use of QC in the solution of the dynamics of a self-gravitating collisionless fluid. While scaling up the quantum approach to system sizes that may be relevant for cosmological predictions in 3D seems unlikely before the advent of fault-tolerant quantum computing, there may be interesting studies (e.g., of static and dynamic phase transitions) which occur already in low dimensions (1D) and which can become classically hard because of the complexity of the quantum description in the SP formulation (e.g., because of the growing entanglement). A similar strategy was recently implemented in the domain of lattice gauge theory (see [57]). It is also worth pointing out that, while this study was inspired by the cosmological problem of gravitational instability of a collisionless fluid, our results are general and can be applied to other domains, including the study of plasma dynamics in a Tokamak fusion reactor.
At the current state of development, and when run on near-term, noisy quantum devices with a number of qubits ∼ 100 [58], our QC algorithm is clearly not competitive, in terms of accessible dynamic range, with classical methods, both in cosmology and in plasma physics. On the other hand, developments that can make our approach more noise-resilient can still be foreseen, including more efficient integration methods and physically motivated variational ansätze.
A particularly intriguing prospect is the incorporation of non-variational methods within the time-evolution algorithm for solving the Poisson equation (or any other equation where the potential depends on the wavefunction). This approach holds the potential to deliver more precise results at the cost of a significant increase in circuit depth, likely requiring a fault-tolerant quantum computing implementation [59-61]. We therefore look with a good deal of optimism to the future developments of this very promising application domain for QC.

ACKNOWLEDGMENTS
We thank Guglielmo Mazzola for insightful discussions and feedback, and the anonymous referees for their constructive comments. This work is supported by the Fondazione ICSC National Recovery and Resilience Plan (PNRR) Project ID CN-00000013 "Italian Research Center on High-Performance Computing, Big Data and Quantum Computing", funded by MUR Missione 4 Componente 2 Investimento 1.4: "Potenziamento strutture di ricerca e creazione di campioni nazionali di R&S (M4C2-19)" - Next Generation EU (NGEU). We acknowledge the use of IBM Quantum services for this work. IBM, the IBM logo, and ibm.com are trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. The current list of IBM trademarks is available at https://www.ibm.com/legal/copytrade.

Figure 1. Mapping, M, of the classical N-body Vlasov problem into the corresponding quantum Schrödinger-Poisson formulation, obtained through nonlocal manipulation (e.g., Husimi smoothing [13]). Details on the mapping M and its inverse are given in [5, 14].

Figure 2. Example of a 3-qubit Ry-CNOT ansatz circuit for the wavefunction |Φ Ṽ⟩ used to evaluate the potential according to Eq. (18). This circuit has 3 rotation blocks Urot and 2 entangling blocks Uent with linear entanglement. The output function is parameterized through nine real parameters θ, such that U(θ)|0⟩ = |Φ Ṽ(θ)⟩; in this case the number of parameters exceeds the Hilbert space dimension 2³ = 8.

In Fig. 3 we show the typical circuits used to encode the wavefunction |ψ(θ)⟩, while Fig. 2 reports the one used for the potential |Φ Ṽ(ϕ)⟩. The latter consists of just RY(θ) rotations and CX gates, since the target potential function is real-valued. The quantum part of the evolution algorithm resides in the measurement of the expectation values in Eqs. (

Figure 3. Quantum circuits used to prepare: (a) the trial wavefunction; (b) the unitary matrix Fk that generates states like the one in Eq. (24); (c) the unitary matrix Fk,l that generates states like the one in Eq. (25).

Figure 5. Comparison between probability distributions at different times for a 5-qubit system with λ = 1. The left panel (a) is the classical reference, obtained with a spectral method [7]. The middle panel (b) presents the results obtained through a VTE simulation (using the algorithm in Fig. 1). In the right panel (c) we compare the classical probability distribution at t = 3 with the results obtained from VTE simulations with different hyperparameters (more details in Tab. I). The hyperparameters chosen for the simulation in the middle panel (b) are Nt = 2 · 10^4, ϵ = 10^−4, rc = 10^−8, Dψ = 6, DV = 6.

Figure 6. Classical evolution of the 1-D probability distribution under the effect of the gravitational potential, for different values of λ. Both simulations have been carried out with a spectral method [7]. In the top row, probability distributions are shown at fixed time frames. In the bottom row, the same evolution is shown in a 2-D perspective as a heatmap: the x axis represents the spatial coordinate, the y axis the time, and the probability distribution magnitude is represented through a colour gradient. The difference between the two simulations is given by the intensity of the quantum pressure term: in the left column (a), λ = 1 and the quantum effects balance the diffusion; in the right column (b), with λ = 1/4, the dynamics is similar to the classical one (VP).