Neural-Shadow Quantum State Tomography

Quantum state tomography (QST) is the art of reconstructing an unknown quantum state through measurements. It is a key primitive for developing quantum technologies. Neural network quantum state tomography (NNQST), which aims to reconstruct the quantum state via a neural network ansatz, is often implemented via a basis-dependent cross-entropy loss function. State-of-the-art implementations of NNQST are often restricted to characterizing a particular subclass of states, to avoid an exponential growth in the number of required measurement settings. To provide a more broadly applicable method for efficient state reconstruction, we present "neural-shadow quantum state tomography" (NSQST), an alternative neural network-based QST protocol that uses infidelity as the loss function. The infidelity is estimated using the classical shadows of the target state. Infidelity is a natural choice for training loss, benefiting from the proven measurement sample efficiency of the classical shadow formalism. Furthermore, NSQST is robust against various types of noise without any error mitigation. We numerically demonstrate the advantage of NSQST over NNQST at learning the relative phases of three target quantum states of practical interest, as well as the advantage over direct shadow estimation. NSQST greatly extends the practical reach of NNQST and provides a novel route to effective quantum state tomography.


I. INTRODUCTION
Efficient methods for state reconstruction are essential in the development of advanced quantum technologies. Important applications include the efficient characterization, readout, processing, and verification of quantum systems in a variety of areas ranging from quantum computing and quantum simulation to quantum sensors and quantum networks [1][2][3][4][5][6]. However, with physical quantum platforms growing larger in recent years [7], reconstructing the target quantum state through brute-force quantum state tomography (QST) has become much more computationally demanding due to an exponentially increasing number of required measurements. To address this issue, various approaches have been proposed that are efficient in both the number of required measurement samples and in the number of parameters used to characterize the quantum state. These include classical shadows [8] and neural network quantum state tomography (NNQST) [9]. The goal of NNQST is to produce a neural network representation of a complete physical quantum state that is close to some target state. In contrast, the classical shadows formalism does not aim to reconstruct a full quantum state, but rather to obtain a reduced classical description that allows for efficient evaluation of certain observables.
A neural network quantum state ansatz has been shown to have sufficient expressivity to represent a wide range of quantum states [10][11][12][13] using a number of model parameters that scales polynomially in the number of qubits. Furthermore, as methods for training neural networks have long been investigated in the machine learning community, many useful strategies for neural network model design and optimization have been directly adopted for NNQST [14][15][16]. Following the introduction of neural network quantum states [17], Torlai et al. proposed the first version of NNQST, an efficient QST protocol based on a restricted Boltzmann machine (RBM) neural network ansatz and a cross-entropy loss function [9]. NNQST has been applied successfully to characterize various pure states, including W states, the ground states of many-body Hamiltonians, and time-evolved many-body states [9,18,19]. Despite the promising results of NNQST in many use cases, the protocol faces a fundamental challenge: An exponentially large number of measurement settings is required to identify a general unknown quantum state (although a polynomial number is sufficient in some examples [20]). During NNQST, a series of measurements is performed in random local Pauli bases B for n qubits (B = (P_1, P_2, . . . , P_n), where P_i ∈ {X, Y, Z}). Because this set is exponentially large, some convenient subset of all possible B must be selected for a large system, but this subset may limit the ability of NNQST to identify certain states. An important example is the phase-shifted multi-qubit GHZ state, relevant to applications such as quantum sensing. In this case, the relative phase associated with non-local correlations cannot be captured by measurement samples from almost-diagonal local Pauli bases, i.e., bases with m ≪ n indices i for which P_i = X or P_i = Y. Nonetheless, this limited set of almost-diagonal Pauli bases is widely used in NNQST implementations to avoid an exponential cost in classical post-processing [9,18].
To address this challenge, we use the classical shadows of the target quantum state to estimate the infidelity between the model and target states. This is in contrast with approaches that use the conventional basis-dependent cross-entropy as the training loss for the neural network. This choice is motivated by two main factors. First, infidelity is a natural candidate for a loss function compared to cross-entropy; the magnitude of the basis-dependent cross-entropy loss is in general not indicative of the distance between the neural network quantum state and the target state. Additionally, infidelity is the squared Bures distance [21], a measure of the statistical distance between quantum states that enjoys metric properties such as symmetry and the triangle inequality. The infidelity is therefore a better behaved objective function for optimization. Second, the classical shadow formalism of Huang et al. was originally developed to address precisely the scaling issues of brute-force QST [8]. Instead of reconstructing the unknown state, shadow-based protocols, first proposed by Aaronson [22], predict certain properties of the quantum state with a polynomial number of measurements. Therefore, classical shadows provide the following two main advantages in our work: (i) they are provably efficient in the number of required measurement samples for predicting various observables (e.g., the infidelity), and (ii) there is no choice of measurement bases required and therefore no prior knowledge of the target state is assumed.
Our new pure-state QST protocol, "neural-shadow quantum state tomography" (NSQST), reconstructs the unknown quantum state in a neural network quantum state ansatz by using classical shadow estimations of the gradients of infidelity for training (Fig. 1b). In our numerical experiments, NSQST demonstrates clear advantages in three example tasks: (i) reconstructing a time-evolved state in one-dimensional quantum chromodynamics, (ii) reconstructing a time-evolved state for an antiferromagnetic Heisenberg model, and (iii) reconstructing a phase-shifted multi-qubit GHZ state. Moreover, the natural appearance and inversion of a depolarizing channel from randomized measurements in the classical shadow formalism makes NSQST noise-robust without any calibration or modifications to the loss function, while one of these two extra steps is required in noise-robust classical shadows [23,24]. We numerically demonstrate NSQST's robustness against two of the most dominant sources of noise across a wide range of physical implementations: two-qubit CNOT errors and readout errors. The rest of this paper is organized as follows: In Sec. II, we summarize the methods used in our numerical simulations, including the neural network quantum state ansatz, NNQST, classical shadows, NSQST, and NSQST with pre-training. In Sec. III and Sec. IV, we provide numerical simulation results for NNQST, NSQST, and NSQST with pre-training in three useful examples, both noise-free and in the presence of noise. In particular, we provide a comparison to direct shadow estimation in Sec. III D. Finally, Sec. V summarizes the key advantages of NSQST and some possible future directions. We provide additional technical details and suggestions for further improvements to NSQST in the appendices.

II. METHODS
In this section, we describe existing methods for characterizing quantum states and then introduce and describe two variants of NSQST. We introduce neural network quantum states in Sec. II A. State-of-the-art NNQST implementations and the classical shadow protocol are summarized in Sec. II B and Sec. II C, respectively. Our proposed NSQST protocol is described in Sec. II D. In addition, a modified NSQST protocol with pre-training is described in Sec. II E.

A. Neural network quantum state
Our pure-state neural network quantum state ansatz is adopted from Ref. [18]. The parameterized model is based on the transformer architecture [15], widely used in natural language processing and computer vision [25,26].
As compared to older architectures such as the RBM, the transformer is superior in modelling long-range interactions and allows for more efficient sampling of the encoded probability distribution due to its autoregressive property [18].
The transformer neural network quantum state ansatz takes a bit-string s = (s_1, . . . , s_n) ∈ {0, 1}^n corresponding to the computational basis state |s⟩, and produces a complex-valued amplitude ⟨s|ψ_λ⟩ = ψ_λ(s) parameterized by λ = (λ_1, λ_2) as

ψ_λ(s) = √(p_{λ_1}(s)) e^{iφ_{λ_2}(s)},   (1)

where λ_1 and λ_2 are vectors of real-valued model parameters for the normalized probability amplitudes p_{λ_1}(s) and the phases φ_{λ_2}(s) of the neural network quantum state. These amplitudes and phases may be parameterized by the neural network quantum state in various fashions.
One approach is to use two completely disjoint models, independently parameterized by λ_1 and λ_2 for the amplitude and phase values, respectively [9,27]. Another approach is to use a single model parameterized by λ to encode both the amplitude and phase outputs, either via complex-valued model parameters [17,28] or by using real-valued model parameters with two disjoint layers of output neurons connected to a common preceding neural network [18]. In our numerical experiments, we use the latter parameterization for NNQST and NSQST, but the modified NSQST protocol with pre-training in Sec. II E uses two separately parameterized neural networks. See Appendix A for a more detailed account of the transformer architecture.
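To make the amplitude/phase split of Eq. (1) concrete, the following minimal numpy sketch builds a toy state from two stand-in network heads (random vectors here, not the paper's transformer): a softmax head producing a normalized distribution p_{λ1}(s) and a phase head producing φ_{λ2}(s). All names and sizes are illustrative assumptions.

```python
import numpy as np

# Toy stand-in for a two-headed neural network quantum state:
# <s|psi_lambda> = sqrt(p_lambda1(s)) * exp(i * phi_lambda2(s)).
rng = np.random.default_rng(0)
n = 3                                        # hypothetical toy system size
dim = 2 ** n

logits = rng.normal(size=dim)                # stand-in amplitude-head output
p = np.exp(logits) / np.exp(logits).sum()    # softmax -> normalized p_lambda1(s)
phi = rng.uniform(0, 2 * np.pi, size=dim)    # stand-in phase-head output

psi = np.sqrt(p) * np.exp(1j * phi)          # complex amplitudes psi_lambda(s)

# The state is normalized by construction, regardless of the phases.
print(np.isclose(np.vdot(psi, psi).real, 1.0))  # True
```

Because the probability head is normalized on its own, sampling s ~ p_{λ1}(s) never requires knowledge of the phases, which is what makes the pre-training split of Sec. II E possible.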
Given a trained neural network quantum state, observables and other state properties of interest can be predicted by drawing (classical) samples from the neural network model. The number of samples required to predict the expectation value of an arbitrary Pauli string (independent of its weight) and fidelity to a computationally tractable state with bounded additive error is independent of the system size [29]. Computationally tractable states include stabilizer states and neural network quantum states. Thus, if sampling can be performed efficiently, the prediction errors from neural network quantum states are primarily due to imperfect training. We also note that not every neural network quantum state ansatz has this property of efficient observable and fidelity prediction; an important example is a class of generative models trained on informationally complete positive-operator valued measures (IC-POVMs) [30,31].

B. Neural network quantum state tomography (NNQST)
NNQST (Fig. 1a) aims at obtaining a trained neural network representation that closely approximates an unknown target quantum state. The training is done by iteratively adjusting the neural network parameters along a loss gradient estimated from the measurement samples in various local Pauli bases (obtained by applying single-qubit rotations before performing measurements in the computational basis) [18]. We denote a local Pauli basis as B = (P_1, P_2, . . . , P_n), with P_i ∈ {X, Y, Z}. If a measurement sample s ∈ {0, 1}^n is obtained after performing rotations to the Pauli basis B, we store the pair (s, B) as a training sample, corresponding to a product state |s, B⟩.
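The collection of one training pair (s, B) can be sketched as follows, using a dense statevector as a toy stand-in for the experimental target state. Measuring in the X basis corresponds to applying H before a computational-basis measurement, and in the Y basis to applying H·S† (a standard convention; the paper does not spell out its gate choice).

```python
import numpy as np

# Sketch of collecting one NNQST training pair (s, B): rotate each qubit into
# the local Pauli basis B, then sample a bit-string from the Born distribution.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
SDG = np.diag([1.0, -1j])                        # S^dagger
ROT = {"Z": np.eye(2), "X": H, "Y": H @ SDG}

def sample_in_basis(state, basis, rng):
    u = np.array([[1.0 + 0j]])
    for p in basis:                              # tensor-product basis rotation
        u = np.kron(u, ROT[p])
    probs = np.abs(u @ state) ** 2
    probs /= probs.sum()                         # guard against float drift
    s = int(rng.choice(len(state), p=probs))
    return s, basis

rng = np.random.default_rng(5)
plus = np.ones(2) / np.sqrt(2)
state = np.kron(np.kron(plus, plus), plus)       # |+++>
# Measuring |+++> in the all-X basis must always give s = 0 (all outcomes +1).
outcomes = {sample_in_basis(state, ("X", "X", "X"), rng)[0] for _ in range(50)}
print(outcomes == {0})
```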
After choosing a subset B of Pauli bases for collecting measurement samples, we estimate a loss function that represents the distance between the target state |Φ⟩ and the neural network quantum state |ψ_λ⟩. The loss function in NNQST is based on the cross-entropy of the measurement outcome distributions for the target and neural network states in each basis B, which is then averaged over the set of bases B. Ignoring a λ-independent contribution arising from the average entropy of the target-state measurement distribution, this procedure gives the cross-entropy loss function for NNQST [18]:

L_λ = −(1/|B|) Σ_{B∈B} Σ_s p_Φ(s, B) log p_{ψ_λ}(s, B).

Here, p_Φ(s, B) is the probability of measuring the outcome s from the target state |Φ⟩ after rotating to the Pauli basis B, and p_{ψ_λ}(s, B) is defined as

p_{ψ_λ}(s, B) = |⟨s, B|ψ_λ⟩|² = |Σ_t ⟨s, B|t⟩⟨t|ψ_λ⟩|²,

where the overlap between the Pauli product state and the neural network quantum state requires a summation over the computational basis states |t⟩ that satisfy ⟨s, B|t⟩ ≠ 0. Note that the number of these states |t⟩ is 2^K, with K being the number of positions i where P_i ≠ Z. This implies that an efficient and exact calculation of p_{ψ_λ}(s, B) requires the projective measurements to be in almost-diagonal Pauli bases for a generic neural network quantum state |ψ_λ⟩ [18].
Using the law of large numbers, the cross-entropy loss can be approximated via a finite training data set D_T as

L_λ ≈ −(1/|D_T|) Σ_{(s,B)∈D_T} log p_{ψ_λ}(s, B).   (4)

An approximation for the gradient ∇_λ L is then directly found from Eq. (4). During training, the gradient is provided to an optimization algorithm such as stochastic gradient descent (SGD) or one of its variants (e.g., the Adam optimizer [16]). In this paper, we exclusively use the Adam optimizer.
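As a sanity check, the empirical cross-entropy loss can be sketched in the simplest setting: the all-Z basis, where p_{ψλ}(s, B) reduces to |ψ_λ(s)|² and no 2^K summation is needed. Dense random vectors stand in for the target and model states; this toy replaces the neural network entirely.

```python
import numpy as np

# Empirical NNQST-style loss in the all-Z basis: mean of -log p_model(s)
# over samples s drawn from the target state's Born distribution.
rng = np.random.default_rng(1)
dim = 2 ** 3

target = rng.normal(size=dim) + 1j * rng.normal(size=dim)
target /= np.linalg.norm(target)
model = rng.normal(size=dim) + 1j * rng.normal(size=dim)
model /= np.linalg.norm(model)

# "Measurement samples" s ~ |<s|Phi>|^2 from the target state.
samples = rng.choice(dim, size=5000, p=np.abs(target) ** 2)

# Empirical cross-entropy loss over the finite data set.
loss = -np.mean(np.log(np.abs(model[samples]) ** 2))

# By Gibbs' inequality the exact cross-entropy is lower-bounded by the
# target distribution's entropy, so the empirical loss should (up to
# sampling noise) sit above it.
p_t = np.abs(target) ** 2
entropy = -np.sum(p_t * np.log(p_t))
print(loss >= entropy - 0.1)  # True up to sampling noise
```

The gap between the converged loss and the target entropy is exactly the λ-independent contribution dropped in Eq. (4), which is why the loss magnitude alone says little about the distance to the target state.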

C. Classical shadows
Shadow tomography relies on the ingenious observation that a polynomial number of measurement samples is sufficient to predict certain observables for quantum states of arbitrary size [22]. The classical shadow protocol further exploits the efficiency of the stabilizer formalism, making this procedure ready for practical experiments [8,32-34]. In this paper, we focus on estimating linear observables of the form Tr(Oρ) for a pure state ρ = |Φ⟩⟨Φ|. An important example (for O = |ψ_λ⟩⟨ψ_λ|) is the fidelity between the target state |Φ⟩ and a reference state |ψ_λ⟩. The first step in the protocol is to collect the so-called classical shadows of |Φ⟩. To obtain a single classical shadow sample, we apply a randomly-sampled Clifford unitary U_i ∈ Cl(2^n) to the quantum state and measure all n qubits in the computational basis, resulting in a single bit-string |b_i⟩. The stabilizer states |ϕ_i⟩ = U_i^† |b_i⟩ contain valuable information about ρ. Using representation theory [35], it can be shown that the density matrix obtained from an average over both the random unitaries and the measured bit-strings, M(|Φ⟩⟨Φ|) := E_{U∼Cl(2^n), b∼P_Φ(b)} [|ϕ⟩⟨ϕ|], coincides with the outcome of a depolarizing noise channel:

M(|Φ⟩⟨Φ|) = D_{n,f}(|Φ⟩⟨Φ|) = f |Φ⟩⟨Φ| + (1 − f) I/2^n,  with f = 1/(2^n + 1),   (5)

where D_{n,f} denotes the n-qubit depolarizing noise channel of strength f. The original state can then be recovered as an average over classical shadows by inverting the above formula, |Φ⟩⟨Φ| = E[M^{−1}(|ϕ⟩⟨ϕ|)]. We emphasize that the original state can only be recovered after sampling from a prohibitively large number of Clifford unitaries, and for each of them sampling an exponentially large number of bit-strings. The classical shadows are therefore defined by [8]:

ρ_i = M^{−1}(|ϕ_i⟩⟨ϕ_i|) = (2^n + 1) |ϕ_i⟩⟨ϕ_i| − I.

More generally, in the presence of a gate-independent, time-stationary, and Markovian noise channel E afflicting the segment of the circuit between the preparation of the state ρ and the measurements, this definition extends to [23]:

ρ_i = M_E^{−1}(|ϕ_i⟩⟨ϕ_i|) = (1/f(E)) |ϕ_i⟩⟨ϕ_i| − ((1 − f(E))/f(E)) I/2^n.

Here, f(E) is the strength of a depolarizing noise channel D_{n,f(E)} comprised of the combined effects of the channel in Eq. (5) and the twirling of the additional noise by the random Clifford unitaries, effectively imposing further depolarization. Koh and Grewal [23] derived an analytic expression for f(E) as

f(E) = (fid(E) − 1)/(4^n − 1),

where

fid(E) = Σ_{s∈{0,1}^n} ⟨s| E(|s⟩⟨s|) |s⟩

is the sum of fidelities for the noise channel E acting on each of the computational basis states |s⟩ (fid(E) ∈ [0, 2^n]; at the lower bound, fid(E) = 0, the depolarizing parameter becomes negative, f(E) < 0, but the associated depolarizing channel remains physical [23]). When the noise channel is not exactly known, extra calibration procedures are required in noise-robust classical shadow protocols [24].
Once the classical shadow samples {ρ_i}_{i=1}^N are collected, we calculate ô(i) := Tr(ρ_i O) for each of the N classical shadows and obtain an estimator for the observable from an average over the N samples (alternatively, the median-of-means can be used to improve the success rate of the protocol; see Ref. [8] for more details). The key advantage of classical shadows is the bounded variance of observable estimations which, in turn, provides a bound on the number of classical shadow samples required to predict linear observables of the quantum state within a target precision. Indeed, as shown in Refs. [23,24], the number of classical shadow samples N required to estimate M arbitrary linear observables {O_j}_{j=1}^M, up to an additive error ε, scales as

N = O( max_{1≤j≤M} Tr(O_j²) log M / ((2^n + 1)² f(E)² ε²) ).

In the case of noise-free Clifford tails, E = I, we have fid(I) = 2^n, f(I) = (1 + 2^n)^{−1}, and we recover the sample complexity O(max_{1≤j≤M} Tr(O_j²) log M / ε²) presented in [8], which is indeed independent of the system size n. The variance of observable estimators is also bounded in this case as Var(ô) ≤ 3 Tr(O²), independent of the system size.
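The noise-free pipeline (random unitary, one computational-basis shot, invert the channel, average) can be simulated end-to-end at a toy size. One liberty taken here: Haar-random unitaries replace random Cliffords. Both form a unitary 2-design, so the same inversion formula ρ_i = (2^n + 1)|ϕ_i⟩⟨ϕ_i| − I applies; real experiments use Cliffords for efficiency.

```python
import numpy as np

# Noise-free classical-shadow estimate of the fidelity Tr(O rho) with
# O = |Phi><Phi| and rho = |Phi><Phi| (so the true value is 1).
rng = np.random.default_rng(2)
dim = 2 ** 2                                  # n = 2 qubits

phi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
phi /= np.linalg.norm(phi)

def haar_unitary(d):
    """Haar-random unitary via QR with phase correction."""
    z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

estimates = []
for _ in range(20000):
    u = haar_unitary(dim)
    probs = np.abs(u @ phi) ** 2              # Born probabilities of outcomes b
    probs /= probs.sum()
    b = rng.choice(dim, p=probs)              # one measured bit-string
    stab = u.conj().T[:, b]                   # |phi_i> = U^dagger |b>
    # Tr(O rho_i) with rho_i = (2^n+1)|phi_i><phi_i| - I:
    estimates.append((dim + 1) * np.abs(np.vdot(phi, stab)) ** 2 - 1)

print(abs(np.mean(estimates) - 1.0) < 0.05)
```

The single-shot estimates scatter over [−1, 2^n], but their bounded variance is exactly what makes the average converge at a rate independent of system size.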

D. Neural-shadow quantum state tomography (NSQST)
Given a pure target state |Φ⟩, our goal in NSQST (Fig. 1b) is to progressively adjust the model parameters λ, such that the associated pure state |ψ_λ⟩ (see Eq. (1)) approaches |Φ⟩ during optimization. We approximate the fidelity between the model and target states using the classical shadow formalism described in Sec. II C, taking O_λ = |ψ_λ⟩⟨ψ_λ| as a linear observable, averaged with respect to ρ = |Φ⟩⟨Φ|. The number M of observables we predict during optimization therefore coincides with the number of descent steps taken by the optimizer, as updating λ changes the observable |ψ_λ⟩⟨ψ_λ| in every iteration. By collecting N classical shadows, we can approximate our loss function (the infidelity) via

L(λ) = 1 − ⟨ψ_λ| ρ |ψ_λ⟩ ≈ 1 − (1/N) Σ_{i=1}^N [ |⟨ϕ_i|ψ_λ⟩|²/f(E) − (1 − f(E))/(2^n f(E)) ].   (11)

In the noise-free case E = I, this expression simplifies to

L(λ) ≈ 2 − (2^n + 1) (1/N) Σ_{i=1}^N |⟨ϕ_i|ψ_λ⟩|².

We see that, independent of the specific form of E, training the model is simply equivalent to increasing the average overlap between the random stabilizer states and the model quantum state. The next step in NSQST requires classical post-processing to estimate the overlaps ⟨ϕ_i|ψ_λ⟩. For certain states |ψ_λ⟩ (e.g., stabilizer states), the overlap can be calculated efficiently. Many states of interest do not fall into this class, leading to a potential exponential overhead. However, we can obtain a Monte-Carlo estimate of the overlap by sampling from the model quantum state |ψ_λ⟩.
In the model, we associate a probability p_λ(s) = |ψ_λ(s)|² to each computational basis state |s⟩. Therefore,

⟨ϕ_i|ψ_λ⟩ = Σ_s ⟨ϕ_i|s⟩⟨s|ψ_λ⟩ = E_{s∼p_λ}[ ⟨ϕ_i|s⟩ / ψ_λ*(s) ].

It is now straightforward to provide a Monte Carlo estimate of the above quantity [36] (see Appendix D for an alternative approach). For each sample s from the neural network, we have direct access to the exact complex-valued amplitudes ψ_λ(s) of the neural network quantum state in the computational basis. Moreover, we can compute the stabilizer state projections ⟨ϕ_i|s⟩ in polynomial time, in view of the Gottesman-Knill theorem [37][38][39]. Note that decomposing a randomly sampled Clifford operator into primitive unitary gates (e.g., with Hadamard, S, and CNOT gates) still takes O(n³) time [39,40]. However, this is a one-time procedure to be run for each U_i and can be done in advance of state tomography.
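This importance-sampled overlap estimate can be verified numerically. Below, dense random vectors stand in for both the stabilizer state |ϕ_i⟩ and the model state |ψ_λ⟩ (in the real protocol, ⟨ϕ_i|s⟩ comes from the stabilizer formalism and ψ_λ(s) from the network).

```python
import numpy as np

# Monte Carlo estimate of <phi|psi> = E_{s ~ p}[ <phi|s> / psi(s)^* ],
# using p(s) = |psi(s)|^2 as the sampling distribution.
rng = np.random.default_rng(3)
dim = 2 ** 4

psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)
phi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
phi /= np.linalg.norm(phi)

p = np.abs(psi) ** 2
s = rng.choice(dim, size=200000, p=p)          # samples from the model
mc = np.mean(phi.conj()[s] / psi.conj()[s])    # Monte Carlo average

exact = np.vdot(phi, psi)                      # exact overlap <phi|psi>
print(abs(mc - exact) < 0.02)
```

A nice feature of this particular estimator is that its second moment is E[|⟨ϕ|s⟩/ψ*(s)|²] = Σ_s |⟨ϕ|s⟩|² = 1 for normalized |ϕ⟩, so the per-sample variance is bounded regardless of how the two states overlap.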
For first-order optimization methods (such as SGD and Adam), it is the gradient of the loss function rather than the loss function itself that must be estimated. From Eq. (11) and using the log-derivative trick, we obtain the gradient

∇_λ L = −(2/(N f(E))) Σ_{i=1}^N Re[ ⟨ψ_λ|ϕ_i⟩ ⟨ϕ_i| D_λ |ψ_λ⟩ ],   (15)

where we define the diagonal operator D_λ as

D_λ = Σ_s ∇_λ ln ψ_λ(s) |s⟩⟨s|.

A simple but important observation is that the noise enters Eq. (15) only in the overall prefactor ∝ 1/f(E). Thus, the noise may affect the learning rate, but it will not affect the direction of the gradient. This suggests that gradient-based optimization schemes can yield an accurate neural network quantum state without any noise calibration or mitigation. This is despite the fact that this same noise generally biases the estimated infidelity (see Eq. (11)).
We now discuss a possible limitation of our approach to classical post-processing. Given N classical shadows collected experimentally, the number L of Monte Carlo samples collected from the neural network quantum state must be L ∼ O(1/(N² f(E)²)) in order to guarantee a bounded standard error in the approximation of the gradient from Eq. (15). Since f(E) ≤ 1/(1 + 2^n), this suggests that there may be an exponential cost in performing the Monte Carlo estimations. We emphasize that this potential exponential cost in classical post-processing does not affect the required number of classical shadows from measurements. As system sizes grow significantly larger, it will eventually become hopeless to perform an exact sum over all 2^n computational basis states, but the Monte Carlo average may still lead to successful convergence with only a sub-exponential number of samples in some cases (further details and an alternative approach to performing the Monte Carlo average are discussed in Appendix D). In our numerical simulations with six qubits (see Sec. III), having 2^6 = 64 computational basis states, we have evaluated Eq. (15) using 5000 Monte Carlo samples. With this many samples, the Monte Carlo estimation error is negligible and statistical fluctuations in the gradient are predominantly due to the finite number of classical shadows collected.

E. NSQST with pre-training
Along with the standard NSQST protocol, we also outline a modified NSQST protocol we call "NSQST with pre-training", which combines the resources used in NNQST and the standard NSQST protocol. NSQST with pre-training aims to find a solution with a lower infidelity than either of the other protocols alone. In this protocol we train two models with disjoint sets of parameters λ_1 and λ_2. We call these models the probability amplitude model and the phase model. Figure 2 provides a visual overview of the protocol for NSQST with pre-training.
First, the parameters λ_1 are optimized to produce an accurate distribution p_{λ_1}(s) ≃ p_Φ(s, B) from measurements performed exclusively in the computational basis, B = (Z, Z, . . . , Z). Note that we can efficiently evaluate the loss function, Eq. (4), and its gradient in this case, as they depend only on the probabilities and not on the phases. Next, we perform NSQST to train the model parameters λ_2, learning the phases φ_{λ_2}(s). However, unlike the case of standard NSQST, to perform a Monte Carlo estimate of the gradient, Eq. (15), here we select random samples from a set of computational basis states s according to the pre-trained distribution p_{λ_1}(s).
Since the NSQST Monte Carlo approximations do not follow the model λ_2 in an on-policy fashion, re-sampling in every iteration is no longer necessary. Nevertheless, we still re-sampled in our numerical experiments (described below) to reduce the sampling bias, and because the classical sampling procedure was not computationally costly in our examples. NSQST with pre-training resembles coordinate descent optimization, with λ_1 and λ_2 being the two coordinate blocks. Optimizing λ_1 first and fixing it for the optimization of λ_2 reduces the dimension of the parameter space for the optimizers throughout the training. However, this does not guarantee convergence to a better local minimum in the loss landscape. We do not intend to demonstrate a clear advantage for NSQST with pre-training over the standard NSQST protocol, as the former uses more computational resources both experimentally (by requiring more measurements) and classically (in the form of the memory, time, and energy consumed to train the neural network quantum state). See Appendix C for a comparison of the number of model parameters and the number of measurements used for each of the two approaches. Another motivation for introducing this modified NSQST protocol is to provide new perspectives on the differences between learning the probability amplitudes and the phases of a target quantum state, as well as to inspire other useful hybrid protocols in the future.
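The two-stage idea can be caricatured with exact states in place of neural networks: pretend stage one recovered the target probabilities exactly, then descend on a dense table of phases alone to reduce the exact infidelity 1 − |⟨Φ|ψ⟩|². Plain gradient descent replaces the paper's Adam-trained phase model; everything here is an illustrative assumption.

```python
import numpy as np

# Toy two-stage "pre-training" sketch: amplitudes fixed, phases optimized.
rng = np.random.default_rng(4)
dim = 2 ** 4

target = rng.normal(size=dim) + 1j * rng.normal(size=dim)
target /= np.linalg.norm(target)

p = np.abs(target) ** 2                    # stage 1: amplitudes assumed learned
phases = rng.uniform(0, 2 * np.pi, dim)    # stage 2: phase table to be trained

def infidelity(phases):
    psi = np.sqrt(p) * np.exp(1j * phases)
    return 1 - np.abs(np.vdot(target, psi)) ** 2

start = infidelity(phases)
for _ in range(500):
    psi = np.sqrt(p) * np.exp(1j * phases)
    overlap = np.vdot(target, psi)
    # d/dphi_s of the infidelity = -2 Re[ conj(<Phi|psi>) * i * conj(Phi_s) psi_s ]
    grad = -2 * np.real(overlap.conj() * 1j * target.conj() * psi)
    phases -= 0.5 * grad

print(infidelity(phases) < 0.01 and infidelity(phases) < start)
```

With the amplitudes pinned to the target distribution, the phase-only landscape behaves like a mean-field synchronization problem and converges quickly, which loosely mirrors why pre-training starts the NSQST stage at a lower infidelity.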

III. NUMERICAL SIMULATIONS WITHOUT NOISE
In this section, we first demonstrate the advantage of our NSQST protocols over the NNQST protocol in three physically relevant scenarios, then demonstrate advantages over direct shadow estimation. Specifically, we consider a model from high-energy physics (time evolution for one-dimensional quantum chromodynamics), a model from condensed-matter physics (time evolution of a Heisenberg spin chain), and a model relevant to precision measurements and quantum information science (a phase-shifted GHZ state).
For all three physical settings, we compare the performance of NNQST, NSQST, and NSQST with pre-training by measuring the exact infidelity of the trained model quantum states with respect to the target state, averaged over the last 100 iterations of training (or epochs for NNQST; see Appendix C). For NNQST's basis selection, since none of our target states is known to be the ground state of a k-local Hamiltonian (i.e., a Hamiltonian with each term acting non-trivially on at most k qubits), we simply use all of the almost-diagonal and nearest-neighbour local Pauli bases (i.e., Pauli bases with at most two neighbouring terms being non-Z). The number of these bases scales linearly with the system size (4n − 3 bases). All NSQST protocols use only N = 100 re-sampled classical shadows per iteration for model parameter updates. We perform ten independent trials of each protocol (NNQST, NSQST, and NSQST with pre-training) in each of the three examples.
Finally, we adopt an improved pre-training strategy described in Appendix D and fix N = 200 Clifford shadows without re-sampling to demonstrate advantages of NSQST over direct shadow estimation with Clifford shadows or Pauli shadows.

A. Time-evolved state in one-dimensional quantum chromodynamics
Quantum chromodynamics (QCD) studies the fundamental strong interaction responsible for the nuclear force [41]. Lattice gauge theory, an important non-perturbative tool for studying QCD, discretizes spacetime into a lattice, and the continuum results can be obtained through extrapolation [42]. Although lattice gauge theory has been extremely successful in QCD studies, simulations of many important physical phenomena such as real-time evolution are still out of reach due to the sign problem in current simulation techniques. Quantum computers are envisioned to overcome this barrier in lattice gauge theory-based QCD simulations and they may open the door to new discoveries in QCD [43][44][45][46].
We consider a Trotterized time evolution with the gauge group SU(3) and aim to reconstruct the time-evolved quantum state after a given amount of time. To this end, we use the qubit formulation in Ref. [47] and study a single unit cell of the lattice. This corresponds to n = 6 qubits representing three quarks (red, green, blue) and three antiquarks (anti-red, anti-green, anti-blue), as shown in Fig. 3a. The Trotterized time evolution starts from the initial state |Φ_0⟩ = |↓↓↓⟩ |↑↑↑⟩, which is known as the strong-coupling baryon-antibaryon state. The Hamiltonian governing the evolution is given in Ref. [47]; it is parameterized by the dimensionless quantities m̃ = am and x = 1/(ga)², where m is the bare quark mass, g is the gauge coupling constant, and a is the lattice spacing. We use two Trotter steps in our simulation, each for time t = 1.8. See Appendix B and Ref. [47] for more details on the circuit and the physical significance of this time evolution. Figure 3 shows the results of simulating tomography on the time-evolved state |Φ_SU(3)⟩ using NNQST, NSQST, and NSQST with pre-training. Note that, although the NSQST protocols (with and without pre-training) are run for 2000 iterations, we use increments of ten iterations in the plot to provide a visual comparison with NNQST (which is run for 200 epochs due to faster convergence of L_λ in optimization). For NSQST with pre-training, we display the optimization progress curve only after the probabilities p_{λ_1}(s) have been pre-trained. This explains the lower initial infidelity for NSQST with pre-training. See Appendix C for further details on the simulation hyperparameters.
Based on Fig. 3, NSQST and NSQST with pre-training both result in a lower final-state infidelity relative to NNQST, and both predict the mean kinetic energy values better than NNQST. Figure 3c further depicts the optimization progress curves of a typical trial. We see that for NNQST, the cross-entropy loss L_λ quickly converges with very little fluctuation, despite the continued fluctuations of the state infidelity in the lower plot near L_λ ≃ 1, indicating a very small overlap with the target state. On the other hand, standard NSQST and NSQST with pre-training both converge to a final state very close to the target state, despite fluctuations in the loss function caused by the finite number of classical shadows in each iteration. Moreover, we notice that NSQST with pre-training not only starts with a state of lower infidelity after pre-training, but also converges to a solution of lower infidelity than standard NSQST, with much more stable convergence in the end. One unexpected outcome is that NSQST with pre-training does not have a better kinetic energy prediction than standard NSQST despite having lower infidelity. However, this may be an artifact of insufficient statistics given only ten trials. The predicted total energy and mass are also plotted in Fig. 13 (Appendix E), where NSQST and NSQST with pre-training yield significantly better predictions of the total energy but not of the local observable H_m.
In Fig. 4, the amplitudes and phases are displayed for typical final neural network quantum states for each protocol. To highlight the dominant contributions, the phase output is truncated for states s with p_{λ_1}(s) < 0.1. For better visualization, the overall (global) phase of each neural network quantum state is chosen by aligning the phase of the most probable computational-basis state to that of the target state; the dashed line corresponds to a phase of 2π, since we choose our phase predictions to be in the range [0, 2π]. Panels b and c show typical final states from NSQST and NSQST with pre-training. We see that the NNQST protocol fails at learning the phase structure of the target state, despite accurately learning the probability distribution. This observation is consistent with NNQST's convergence to a poor local minimum in the lower plot of Fig. 3c, with the infidelity values stuck at around 1.0. On the other hand, standard NSQST and NSQST with pre-training are both successful at learning the phase structure of the target state, while NSQST with pre-training also learns the probability distribution better.

B. Time-evolved state for a one-dimensional Heisenberg antiferromagnet
The Heisenberg model describes magnetic systems quantum mechanically. Understanding the properties of the quantum Heisenberg model is crucial in many fields, including condensed matter physics, material science, and quantum information theory [48][49][50]. In this example, we perform tomography on a state that has evolved in time under the action of the one-dimensional antiferromagnetic Heisenberg (AFH) Hamiltonian. We use four Trotter time steps to approximate the time evolution.
The one-dimensional AFH model Hamiltonian is

H_AFH = J Σ_{i=1}^{n−1} (X_i X_{i+1} + Y_i Y_{i+1} + Z_i Z_{i+1}),  with J > 0.

We choose n = 6 for our simulation and we take open boundary conditions. The 6-qubit initial state is set to the classical Néel-ordered state |Φ_0⟩ = |↑↓↑↓↑↓⟩ and our target state occurs after evolving under the Heisenberg Hamiltonian up to time t = 0.8. The circuit describing the Trotterized time evolution is given in Appendix B. Figure 5 shows the simulation results for performing tomography on the time-evolved AFH state using NNQST, NSQST, and NSQST with pre-training. We see that NSQST and NSQST with pre-training both reach a lower final state infidelity than NNQST in Fig. 5a. Figure 5b displays the mean staggered magnetization (along x) at each site. We observe that NSQST with pre-training results in a tighter spread of values about the exact result, relative to NNQST or NSQST, across all sites. Comparing NNQST and standard NSQST, we observe that the standard NSQST protocol has significantly worse predictions at sites 3 and 4 than NNQST, with a mean more than one standard error away from the exact value, despite having a significantly lower final state infidelity. This is likely due to the fact that NNQST is trained using nearly-diagonal measurement data, providing direct access to the staggered magnetization observable of interest, whereas NSQST was trained using the infidelity loss. This result demonstrates that reaching a lower infidelity does not necessarily imply a better prediction of local observables, although we can improve the standard NSQST protocol by using more classical shadows per iteration. Finally, the typical optimization progress curves in Fig. 5c are consistent with the statistical results shown in Fig. 5a. For NNQST, the infidelity does not converge stably, despite the convergence of its loss function.
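A minimal dense-matrix sketch of this setup, at a reduced toy size of n = 4 and assuming J = 1, builds the open-boundary AFH Hamiltonian H = J Σ_i (X_i X_{i+1} + Y_i Y_{i+1} + Z_i Z_{i+1}) and checks that exact evolution from the Néel state is unitary (the paper uses a Trotterized circuit instead; see its Appendix B).

```python
import numpy as np

# Open-boundary 1D antiferromagnetic Heisenberg Hamiltonian and exact
# time evolution from the Neel state, via the eigenbasis of H.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0 + 0j, -1.0 + 0j])
I2 = np.eye(2, dtype=complex)

def two_site(n, i, P):
    """P_i P_{i+1} on n qubits, identity elsewhere."""
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, P if k in (i, i + 1) else I2)
    return out

def afh(n, J=1.0):
    return J * sum(two_site(n, i, P) for i in range(n - 1) for P in (X, Y, Z))

n = 4
H = afh(n)
neel = np.zeros(2 ** n, dtype=complex)
neel[0b0101] = 1.0                               # |up down up down>

w, v = np.linalg.eigh(H)                         # exact e^{-iHt} via eigenbasis
psi_t = v @ (np.exp(-1j * 0.8 * w) * (v.conj().T @ neel))
print(np.allclose(H, H.conj().T), np.isclose(np.linalg.norm(psi_t), 1.0))
```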
The probability amplitudes and phases obtained after training on a time-evolved AFH state are shown in Fig. 6. Here, the two highest peaks in p_λ(s) correspond to the two Néel states |↑↓↑↓↑↓⟩ and |↓↑↓↑↓↑⟩ [51]. Figure 6 further confirms the advantage of NSQST with pre-training. Not only does NSQST with pre-training find more accurate phases, it also finds a better description of the probability distribution, since the pre-training involves many measurement samples from the all-Z basis (whereas NNQST splits the same number of measurement samples over multiple bases).

C. The phase-shifted GHZ state
In this last example, we consider the tomography of a phase-shifted GHZ state. Here, our target is a 6-qubit GHZ state with a relative phase of π/2. A GHZ state is a maximally entangled state that is highly relevant to quantum information science due to its non-classical correlations [52]. Moreover, the GHZ state is the only n-qubit pure state that cannot be uniquely determined from its associated (n − 1)-qubit reduced density matrices [53], indicating genuine multipartite entanglement.
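As a minimal illustration (not part of the protocol itself), the target state, its infidelity with an unshifted GHZ state, and the stabilizer expectation used later in Fig. 9b can be written down directly:

```python
import numpy as np

def phase_shifted_ghz(n, phi):
    """(|0...0> + e^{i phi} |1...1>) / sqrt(2)."""
    psi = np.zeros(2**n, dtype=complex)
    psi[0] = 1 / np.sqrt(2)
    psi[-1] = np.exp(1j * phi) / np.sqrt(2)
    return psi

n, phi = 6, np.pi / 2
psi = phase_shifted_ghz(n, phi)

# Infidelity to the unshifted GHZ state: 1 - |<GHZ|GHZ_phi>|^2 = sin^2(phi/2)
ghz = phase_shifted_ghz(n, 0.0)
infid = 1 - abs(np.vdot(ghz, psi)) ** 2
print(infid)  # 0.5 for phi = pi/2

# Expectation of XX...XY (X on the first n-1 qubits, Y on the last):
# this evaluates to sin(phi), so it equals 1 exactly at phi = pi/2 while
# vanishing for an unshifted GHZ state.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Yp = np.array([[0, -1j], [1j, 0]], dtype=complex)
O = np.array([[1.0 + 0j]])
for j in range(n):
    O = np.kron(O, Yp if j == n - 1 else X)
print(np.vdot(psi, O @ psi).real)  # 1.0 at phi = pi/2
```

The relative phase is invisible in computational-basis statistics, which is precisely why a phase-sensitive loss (or basis rotations) is needed to learn it.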
As shown in Fig. 7a, both NSQST and NSQST with pre-training result in a significantly lower average final state infidelity than NNQST. The optimization progress curves displayed in Fig. 7b confirm this result: the infidelity of the NNQST state fluctuates rapidly during training, quite distinctly from the previous two examples. Given that we have employed a widely used adaptive optimizer and chosen a reasonable initial learning rate (5 × 10^{-3}), such a divergence likely occurs because the NNQST loss function does not incorporate tomographically complete information about the target GHZ state.

D. Comparison with direct shadow estimation
So far we have only compared the performance of NNQST, NSQST, and NSQST with pre-training, but an important question is whether any of these methods has an advantage over direct shadow estimation. In this subsection, we compare the performance of NSQST with pre-training and direct shadow estimation for a one-dimensional QCD time-evolved state from Sec. III A. In addition, we perform a scalability study of the phase-shifted GHZ state with up to 40 qubits, comparing NSQST to direct shadow estimation. To minimize the number of Clifford shadows used for training, we fix 200 Clifford shadows as training data and do not re-sample in every iteration. In addition to the original pre-training protocol from Sec. II E, we adopt the improved pre-training strategy described in Appendix D. A typical optimization progress curve with the improved pre-training strategy is shown in Fig. 14 in Appendix E, where very few iterations are required for convergence. Once training is completed, we compare the prediction errors of NSQST with pre-training, Clifford shadows, and Pauli shadows.
As shown in Fig. 8, we compare the absolute prediction error of NSQST with pre-training (with and without the improved strategy) against two types of direct shadow estimation. For a fair comparison, each method uses the same number (1200) of randomly sampled measurements. For shadow reconstruction, 1200 Clifford shadows or 1200 Pauli shadows were used; for NSQST with pre-training, 1000 computational-basis measurements were used for pre-training and 200 shadows were used for training. In Fig. 8a, we observe that, using the improved strategy, NSQST with pre-training achieves a significantly smaller prediction error than either Pauli shadows or Clifford shadows. This is expected, as the kinetic-energy Hamiltonian H_kin in Eq. (18) contains high-weight Pauli strings, and Pauli shadows are provably efficient at predicting only local observables [8]. On the other hand, Clifford shadows have an exponentially growing variance bound for any Pauli observable irrespective of locality (since Tr(O^2) = 2^n in Eq. (10)), which explains the large prediction error for the kinetic energy. In Fig. 8b, we report the prediction error in the fidelity to the ideal time-evolved state. The Pauli shadows yield the largest prediction error in fidelity estimation due to the exponentially growing variance bound for non-local observables. Finally, in Fig. 8c, we apply the four methods to the problem of predicting a single Pauli string of increasing weight, where the identity matrix is changed to Pauli-X at each additional site as the weight increases. The non-local observable of interest ⟨X...⟩ corresponds to the Wilson loop operator in lattice gauge theory with Z_2 symmetry [54]. Since predicting high-weight observables is a hard task for both shadow protocols, the prediction error of NSQST with pre-training is much lower than that of either Clifford or Pauli shadows for most of the observables. We also observe a consistently increasing prediction error for the Pauli shadows as the weight increases.
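A toy simulation, written for this note rather than taken from the paper, illustrates the locality trade-off of Pauli shadows: a weight-w Pauli string is estimated only from snapshots whose random local bases match on its support (probability 3^{-w}), with the single-snapshot estimator rescaled by 3^w to stay unbiased:

```python
import numpy as np

rng = np.random.default_rng(7)

H2 = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Sdg = np.array([[1, 0], [0, -1j]], dtype=complex)  # S-dagger
# Rotations mapping each Pauli eigenbasis onto the computational (Z) basis
ROT = {"X": H2, "Y": H2 @ Sdg, "Z": np.eye(2, dtype=complex)}

def measure_random_pauli(psi, n):
    """One Pauli-shadow snapshot: random local bases plus a sampled bit-string."""
    bases = rng.choice(["X", "Y", "Z"], size=n)
    phi = psi
    for j, b in enumerate(bases):
        U = np.array([[1.0 + 0j]])
        for k in range(n):
            U = np.kron(U, ROT[b] if k == j else np.eye(2))
        phi = U @ phi
    probs = np.abs(phi) ** 2
    outcome = rng.choice(2**n, p=probs / probs.sum())
    bits = [(outcome >> (n - 1 - j)) & 1 for j in range(n)]
    return bases, bits

def pauli_estimator(bases, bits, pauli_string):
    """Single-snapshot estimate of <P> for P given as e.g. 'ZZI'."""
    est = 1.0
    for j, p in enumerate(pauli_string):
        if p == "I":
            continue
        if bases[j] != p:
            return 0.0  # basis mismatch: this snapshot carries no information
        est *= 3.0 * (1 - 2 * bits[j])  # eigenvalue +1 for bit 0, -1 for bit 1
    return est

# 3-qubit GHZ state; <Z1 Z2> = 1 exactly
n = 3
psi = np.zeros(2**n, dtype=complex)
psi[0] = psi[-1] = 1 / np.sqrt(2)

estimates = [pauli_estimator(*measure_random_pauli(psi, n), "ZZI")
             for _ in range(3000)]
print(np.mean(estimates))  # close to 1
```

For a weight-w target, only a 3^{-w} fraction of snapshots contribute, so the variance of the estimator grows exponentially in w — the behavior seen for Pauli shadows in Fig. 8c.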
A natural question that arises is the scalability of NSQST's advantages over direct shadow estimation. While a trained neural network quantum state closely approximating the target state has more predictive power than classical shadows alone, there is no guarantee of successful convergence during training. For instance, learning a general multi-qubit probability distribution without any prior knowledge in the pre-training step is hard [55] and would eventually require exponentially growing resources.
The key to a scalable advantage is to leverage prior knowledge of the prepared target state and to find ways to impose these known constraints on the neural network ansatz and the loss function [36,56,57]. As a proof of concept, we numerically study the sample complexity of learning a phase-shifted GHZ state using NSQST with pre-training and the improved strategy from Appendix D. With 3000 measurements in the computational basis, 200 Clifford shadows, and 5000 Monte Carlo samples for each system size, we investigate the scaling of the final infidelity. As shown in Fig. 9a, the final infidelity does not grow as the system size increases. This is expected, as the multi-qubit GHZ state is sparse in the computational basis and only a single relative phase needs to be determined. Although learning the GHZ state using a neural network ansatz is a trivial example, the sparseness property of the GHZ state is not exploited by direct shadow estimation, and the collected Clifford shadows used as training data would not be sample-efficient at predicting Pauli observables. In Fig. 9b, we observe that direct shadow estimation fails to predict the expectation value ⟨XX...XY⟩ accurately, yielding only values of zero as the system size increases. The results presented in Fig. 9 suggest that there is hope of reconstructing sufficiently sparse states with sub-exponentially growing resources using NSQST, and potentially non-sparse states as well, given enough known constraints. Finally, we emphasize that, compared to the neural network quantum state ansatz trained on IC-POVM data in Ref. [30], our chosen state ansatz, explained in Sec. II A, is sample-efficient at predicting Pauli string observables of arbitrary weight and the fidelity to any classically tractable state.

FIG. 9. Infidelity and predicted expectation value of the phase-shifted GHZ state as the system size grows. For each system size, we generate 3000 computational-basis measurements and 200 Clifford shadows. In panel a, with independently sampled measurement data and 5000 Monte Carlo samples, we run NSQST with pre-training using the improved strategy for five trials and report the individual final infidelities. Note that the number of Monte Carlo samples used during training is much smaller than the number of basis states, which is not an issue if the target state is sufficiently sparse, as is the case for the multi-qubit GHZ state. In panel b, we plot the predicted expectation value of the Pauli string XX...XY, which is one of the target state's stabilizers. For comparison, the expectation values predicted from five trials of direct shadow estimation are plotted, each with 3200 independently sampled Clifford or Pauli shadows. The data points are slightly shifted relative to the ticks of the x-axis for better display.
We make a final remark on the predictive power of randomized measurements alone versus a trained variational pure state. While one may generally expect the trained pure state to inherit the features of the training data, this may not hold in specific cases for NSQST and Clifford shadows, where the trained pure state's predictive power mainly depends on the global reconstruction error (quantum infidelity) rather than on an estimator for a particular observable. Moreover, the variational training framework of NSQST is not limited to a neural network quantum state with Clifford shadows. Other variational ansatzes, such as matrix product states, and other randomized measurement schemes, such as Hamiltonian-driven shadows, should be explored with proper locality adjustments, for practical advantages such as hardware-aware measurements and scalable classical post-processing [58][59][60][61].

IV. NUMERICAL SIMULATIONS WITH NOISE
We now numerically investigate the noise robustness of our NSQST protocol, focusing on the same phase-shifted GHZ state as in Sec. III C. We consider two different sources of noise affecting the Clifford circuit used to evaluate our infidelity-based loss function via classical shadows (see Sec. II D). The first noise model (a particular case of the model already introduced in Sec. II C) describes either measurement (readout) errors or gate-independent, time-stationary Markovian (GTM) noise. The second noise model describes imperfect two-qubit entangling gates in the Clifford circuit. In the following, we introduce both noise models and discuss their effects on the fidelity of the reconstructed state.
Our first model is an amplitude damping channel applied before measurements. The amplitude damping channel is suitable for investigating the effect of measurement errors in the computational basis. The n-qubit amplitude damping noise channel AD_{n,p} with channel parameter p is defined as
\[
\mathrm{AD}_{n,p} = \mathrm{AD}_{1,p}^{\otimes n},
\]
where AD_{1,p} is the single-qubit amplitude damping channel with Kraus operators K_0 = |0⟩⟨0| + √p |1⟩⟨1| and K_1 = √(1 − p) |0⟩⟨1|, so that the noise strength grows with 1 − p. Apart from modeling measurement noise, this noise channel also serves as a suitable model for studying gate-independent, time-stationary, and Markovian (GTM) noise [24]. In this case, each gate that appears in the Clifford circuit U_i is subject to the same noise map. The resulting noisy random Clifford circuit Ũ_i can be decomposed as E U_i, with E being a noise channel applied after the ideal Clifford unitary.
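The single-qubit building block can be sketched as follows; the Kraus form below is an assumed standard parameterization, chosen so that (consistent with the text) the noise strength grows with 1 − p:

```python
import numpy as np

def amplitude_damping(rho, p):
    """Single-qubit amplitude damping channel AD_{1,p} (assumed Kraus form).

    K0 = |0><0| + sqrt(p) |1><1|,   K1 = sqrt(1 - p) |0><1|
    The n-qubit channel AD_{n,p} applies this map to every qubit.
    """
    K0 = np.array([[1, 0], [0, np.sqrt(p)]], dtype=complex)
    K1 = np.array([[0, np.sqrt(1 - p)], [0, 0]], dtype=complex)
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

p = 0.9
rho1 = np.array([[0, 0], [0, 1]], dtype=complex)  # excited state |1><1|
out = amplitude_damping(rho1, p)
print(out.real)  # excited-state population decays 1 -> p; trace stays 1
```

Because K0†K0 + K1†K1 = I, the map is trace-preserving; the only effect on computational-basis statistics is a biased decay of |1⟩ outcomes toward |0⟩, which is why it doubles as a readout-error model.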
To demonstrate the noise robustness of our NSQST protocol, we first perform tomography on a phase-shifted GHZ state (having a relative phase of π/2). Despite the presence of the amplitude damping noise E = AD_{n,p}, we simulate NSQST using the noise-free gradient expression ∇_λ L(I) in Eq. (15). As discussed in Sec. II D, the noise-free gradient expression in NSQST still yields an estimate directed along the true gradient, modified only by an overall prefactor. In contrast, the noise-free loss function L(I) and the true loss L(E) are related nontrivially in the presence of noise, as expressed in Eq. (22). This means that our estimated loss function no longer converges to zero, even as the infidelity between the neural network quantum state and the target state approaches zero during training.
Figure 10a shows the simulation results for the effects of an amplitude damping noise channel applied before measurement. First, we observe that the average exact infidelity over the last 100 iterations remains small despite the growing noise channel strength. The increasing loss function value (red curves) is evidence of the growing variance in our gradient estimations, which will eventually cause the optimizer to fail to converge to a state close to the target state. Intuitively, since the classical shadows method only uses the measured bit-string, and not the phase, for post-processing, only the diagonal bit-flip errors in Eq. (21) contribute to the noise model, and these are twirled into depolarizing noise by the random Clifford circuits. Finally, the agreement between the exact infidelity (blue curve) and the transformed loss function (the right-hand side of Eq. (22), represented by the orange curve) validates our theoretical account. Here, we have used Eq. (8) to find the depolarizing noise channel strength f(E). Note that f(E) may be hard to estimate in practice. However, since this parameter does not affect the direction of the estimated gradient, we expect training to converge to the same optimal parameters λ with or without noise. It is therefore not necessary to compensate for noise by computing the linear transformation in Eq. (22), as long as we can verify the successful convergence of training.
We proceed now with a discussion of the second noise model, which assumes that entangling gates are the dominant source of error. For numerical simulations, we decompose each random Clifford unitary U_i into CNOT, Hadamard, and phase gates. A local two-qubit noise map is then applied after each CNOT gate in U_i. We consider the depolarizing noise channel with n = 2 and a fixed noise strength 1 − f. This noise model is not GTM and no longer has an analytic noisy shadow expression. However, we still expect NSQST to be fairly noise robust, based on the numerically demonstrated robustness of classical shadows against many non-GTM errors, such as pulse miscalibration noise [24].
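A minimal sketch of this noise map, assuming the standard convention in which the state is kept with probability f and replaced by the maximally mixed state otherwise (so the strength grows with 1 − f, as in the text):

```python
import numpy as np

def depolarize(rho, f, dim=4):
    """Two-qubit (dim = 4) depolarizing channel: rho -> f*rho + (1-f)*I/dim."""
    return f * rho + (1 - f) * np.trace(rho) * np.eye(dim, dtype=complex) / dim

# Example: apply to a two-qubit Bell-state projector
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho = np.outer(bell, bell.conj())

out = depolarize(rho, f=0.95)
purity = np.trace(out @ out).real
print(purity)  # < 1: the state is mixed by the channel
```

In the simulations described above, a channel of this form would act after every CNOT in the decomposed Clifford circuit, so the effective error grows with the CNOT count of U_i.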
As shown in Fig. 10b, NSQST exhibits some measure of noise robustness even in the presence of this more realistic non-GTM noise model. This is reflected in the positive curvature of the blue curve for decreasing noise, 1 − f → 0, leading to a weak-noise limit where the exact infidelity (blue curve) is small relative to the estimated infidelity (red curve). Our randomly sampled six-qubit unitary U_i has an average of 21 CNOT gates (see Appendix C), leading to a substantial accumulation of errors. Thus, the noise parameter 1 − p controlling one-time measurement errors is not comparable to the parameter 1 − f controlling the noise on the individual CNOT gates. A transformed loss function curve is not presented in Fig. 10b because our local (two-qubit) depolarizing noise model does not yield an analytic f(E) expression. The robustness of the classical shadows formalism against many other non-GTM noise models (with an extra calibration step) has been well studied in Ref. [24], and our NSQST protocol exhibits similar noise robustness without any extra calibration step.

V. CONCLUSIONS AND OUTLOOK
In this work, we have proposed a new QST protocol, neural-shadow quantum state tomography (NSQST). We have demonstrated its clear advantages over state-of-the-art implementations of neural network quantum state tomography (NNQST) in three relevant settings, as well as its advantages over direct shadow estimation. We have further shown that NSQST is noise robust. Our study of the benefits of NSQST suggests that the choice of infidelity as a loss function has great potential to broaden the applicability of neural network-based tomography methods to a wider range of quantum states.
In Appendix D we describe technical developments (re-use of classical shadows and alternative Monte Carlo methods) that can be pursued to further enhance the performance of NSQST. Another direction for future work would be to tailor NSQST more closely to emerging quantum hardware platforms. This can be done by exploring NSQST with alternative shadow protocols. In particular, it would be interesting to investigate hardware-aware classical shadows that use the native interactions of the quantum device [58,59,62]. In addition, future work should extend NSQST to mixed-state protocols [31].
Relative to classical shadow protocols, which only allow for efficient fidelity and local observable predictions but no efficient state reconstruction, NSQST achieves the goal of reconstructing a physical state that approximates a target quantum state via a variational ansatz. The variational ansatz in NSQST comes with the convenience of a quantum state and can be used to predict many global observables of interest beyond the reach of direct shadow estimation [63]. Moreover, NSQST inherits the advantages of NNQST. For example, we can incorporate symmetry constraints of the target state, reducing the required computational resources [36,56,64]. Once trained, relative to the large number of classical shadows that must be collected, the variational ansatz in NSQST may yield a more efficient classical representation of the state. Finally, as demonstrated in NSQST with pre-training, the trained variational ansatz approximating the target state can be fed into a second round of optimization, performed with respect to a new loss function. This possibility provides great flexibility in addressing a variety of tasks, including, for example, error mitigation in classical post-processing [18].
NSQST is an efficient state reconstruction method. It will be useful as a benchmarking tool, an important element for testing the performance of near-term quantum devices as they scale up. In particular, NSQST can be used to construct a "digital twin" [65] of the prepared target state, where a neural network quantum state can be used for experimentally relevant simulation [66,67], cross-platform verification [68], error mitigation [18], and other uses. Having access to a digital twin of the target quantum state will become increasingly relevant for accelerating the development of quantum technologies. Further down the road, we also foresee great potential for NSQST as a stepping stone for interfacing classical probabilistic graphical models and quantum circuits, where data stored in quantum circuits can be transferred to classical memory and vice versa, leading to new hybrid computing approaches.

Appendix A: Neural network quantum state ansatz

For NNQST and NSQST, we use the transformer-based neural network quantum state ansatz directly adopted from Ref. [18]. A central component of the ansatz is the transformer layer, which consists of a self-attention block followed by a linear layer. With a bit-string s = (s_1, . . ., s_n) ∈ {0, 1}^n as input, s is extended to (0, s_1, . . ., s_n) by prefixing a zero bit. Each bit s_j is then encoded into a D-dimensional representation space using a learned embedding governed by f_{jd}, yielding the encoded bit e^{(0)}_{jd} with j ∈ {0, . . ., n} and d ∈ {1, . . ., D}. The encoded input is then processed using K transformer layers.
We outline the parameters involved in a single transformer layer, indexed by k, and refer the reader to Ref. [18] for more details. Here, σ(ℓ_{j−1}) = 1/(1 + e^{−ℓ_{j−1}}) is the logistic sigmoid function. Since the outcome at index j is conditional only on the bits at preceding indices j′ < j, we can efficiently draw unbiased samples from the probability distribution p_λ(s) by proceeding one bit at a time. The phase output φ_λ(s) is obtained by first concatenating the output of the final transformer layer into a vector of length n and then projecting this vector to a single scalar value using a linear layer (separate from the linear layer used in obtaining p_λ(s)).
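The bit-by-bit sampling procedure is generic to any autoregressive model, not specific to the transformer of Ref. [18]. In this minimal sketch, a random linear map `logit` is a toy stand-in for the network; the structure (zero-bit prefix, sigmoid conditionals, sequential sampling) mirrors the description above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Toy stand-in for the transformer: any function mapping the bits seen so far
# to a logit for the next bit defines a valid autoregressive model.
W = rng.normal(size=(n, n + 1))

def logit(prefix_bits):
    """Logit l_{j-1} for bit s_j, given the prefix (0, s_1, ..., s_{j-1})."""
    j = len(prefix_bits)  # index of the next bit, counting s_1..s_n
    x = np.zeros(n + 1)
    x[:j] = prefix_bits
    return W[j - 1] @ x

def sigmoid(l):
    return 1.0 / (1.0 + np.exp(-l))

def sample():
    """Draw one bit-string s ~ p(s), one bit at a time."""
    bits = [0]  # prefixed zero bit, as in the ansatz
    for _ in range(n):
        p1 = sigmoid(logit(bits))
        bits.append(int(rng.random() < p1))
    return bits[1:]

def prob(s):
    """Exact p(s) as a product of sigmoid conditionals."""
    bits, p = [0], 1.0
    for sj in s:
        p1 = sigmoid(logit(bits))
        p *= p1 if sj else 1.0 - p1
        bits.append(sj)
    return p

# Autoregressive models are normalized by construction:
total = sum(prob([(k >> (n - 1 - i)) & 1 for i in range(n)]) for k in range(2**n))
print(total)  # 1.0 up to floating-point error
```

The built-in normalization is what makes the Monte Carlo estimates used throughout the paper unbiased: samples are drawn exactly from p_λ(s) with no Markov chain or partition function required.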
For NSQST with pre-training, our p_λ1(s) is parameterized in the same way as in standard NSQST, except that we remove the phase output layer. The phase output φ_λ2(s) is encoded in a separate transformer-based neural network ansatz, where we remove the other linear layer (the one producing scalar-valued logits representing the probability amplitudes). Thus, the encoded quantum state in NSQST with pre-training has its probability distribution and phase output separately parameterized by model parameters λ_1 and λ_2, respectively.

state ansatz may remove or alleviate this issue, and numerical experiments at larger system sizes should be done to explore NSQST's limitations in the future.

An additional complication can arise when estimating the inner product shown in Eq. (14) from a finite number of Monte Carlo samples. In practice, the overlap is estimated in terms of a subset S of distinct bit-strings s using
\[
\langle \phi_i | \psi_\lambda \rangle \simeq \sum_{s \in S} \phi_i^*(s)\, \frac{P(s)}{\psi_\lambda^*(s)}. \tag{D1}
\]
Here, P(s) = f_s/N_s ≃ p_λ(s) = |ψ_λ(s)|² is determined from the frequency f_s of the bit-string s among N_s samples drawn according to the probability distribution p_λ(s). For a transformer-based neural network architecture, the samples can be generated efficiently bit-by-bit using the procedure described in Appendix A. Up to a constant factor, the right-hand side of Eq. (D1) can be interpreted as the exact overlap between a stabilizer state |φ_i⟩ and a fictitious state |Ψ_λ⟩ with wavefunction
\[
\Psi_\lambda(s) = \frac{1}{A_S}\, \frac{P(s)}{\psi_\lambda^*(s)}, \qquad A_S = \left( \sum_{s \in S} \frac{P(s)^2}{|\psi_\lambda(s)|^2} \right)^{1/2}. \tag{D2}
\]
The normalization constant approaches A_S = 1 when P(s) = p_λ(s) = |ψ_λ(s)|² (e.g., when the sample set S includes all s). However, a problem arises when we sample only over a subset of possible bit-strings s. In this case, it may be that |ψ_λ^*(s)| ≪ √P(s) for some s, leading to A_S ≫ 1. Estimating the infidelity from classical shadows to obtain the NSQST loss function (Eq. (11)) through Monte Carlo samples as in Eq. (D1) requires the estimated overlaps ⟨φ_i|ψ_λ⟩ ≃ A_S ⟨φ_i|Ψ_λ⟩. When an incomplete sample set is taken, the factor A_S can become very large, leading to an unphysical blow-up, with estimated overlaps potentially much greater than 1. In this limit, the Monte Carlo estimate is meaningless. A simple solution could be to truncate the set S → S′, keeping only bit-strings s for which |ψ_λ(s)|/√P(s) exceeds some threshold value, and replacing the normalization constant A_S → A_{S′} accordingly. For a given task, however, it may be difficult to establish truncation thresholds that maintain convergence to an accurate state. In the rest of this appendix, we give an alternative procedure that does not exhibit the ill-conditioned "blow-up" arising from a finite Monte Carlo sample size, while avoiding predetermined truncation thresholds.
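A toy numerical illustration of this blow-up (written for this note, assuming the fictitious-state form Ψ_λ(s) ∝ P(s)/ψ_λ*(s) with A_S its normalization over the sampled set):

```python
import numpy as np

# Toy 3-qubit model state: nearly uniform, but with one almost-vanishing amplitude
eps = 1e-4
psi = np.full(8, 1.0, dtype=complex)
psi[0] = eps
psi /= np.linalg.norm(psi)

# Empirical distribution P(s): pretend finite sampling badly over-weights s = 000
P = np.abs(psi) ** 2
P[0] = 0.01
P /= P.sum()

def A(S):
    """Normalization of the fictitious state Psi(s) ~ P(s) / psi*(s) over the set S."""
    return np.sqrt(sum(P[s] ** 2 / np.abs(psi[s]) ** 2 for s in S))

a_full = A(range(8))
# Truncate away bit-strings where |psi(s)| is tiny compared to sqrt(P(s))
S_trunc = [s for s in range(8) if np.abs(psi[s]) ** 2 >= 1e-2 * P[s]]
a_trunc = A(S_trunc)
print(a_full, a_trunc)  # a_full >> 1 (blow-up); a_trunc ~ 1 after truncation
```

A single bit-string whose model amplitude nearly vanishes, yet carries a nonzero empirical weight, dominates A_S and inflates every estimated overlap; the threshold 1e-2 above is an arbitrary choice, illustrating why picking a reliable truncation is itself a problem.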
To avoid the pitfalls of the Monte Carlo average in Eq. (D1), we consider a hybrid NSQST protocol in which the classical shadows are only used to learn the phases φ_λ2(s). The probability amplitudes p_λ1(s) are learned using NNQST from measurements performed in the computational basis (similar to NSQST with pre-training): p_λ1(s) ≃ p_Φ(s, B) with B = (Z_1, Z_2, . . ., Z_n). The difference between this hybrid protocol and NSQST with pre-training lies in how the phases are learned. To train the phase model, we calculate the gradient of the loss function using an alternative approximation for the inner product, based on a sparse approximation |ψ̃_λ⟩ of the neural network quantum state supported on the sampled bit-strings. The estimated loss function in this hybrid protocol is the shadow-estimated infidelity between the target state |Φ⟩ and the state |ψ̃_λ⟩, and its gradient with respect to the phase parameters λ_2, given in Eq. (D5), is used to optimize the phases. We can efficiently evaluate the right-hand side of Eq. (D5) exactly for a sub-exponential number of distinct bit-strings s ∈ S. The optimization procedure is then limited only by the expressivity and accuracy of the sparse approximation ψ̃_λ(s) to the neural network quantum state ψ_λ(s), arising from the finite number of samples. This hybrid NSQST protocol does not suffer from the "blow-up" described above, and it may converge with a sub-exponential number of samples, especially when |ψ_λ⟩ is sufficiently sparse in the computational basis. This alternative strategy was unnecessary in most of our numerical experiments, given the very small system size n = 6 and the very large number of Monte Carlo samples N_s = 5000.

Appendix E: Additional plots
In this section, we provide additional plots relevant to the numerical simulation results presented in Sec.III.
In Fig. 13, the total energy and mass predicted by the three protocols are plotted. We see in Fig. 13a that NNQST fails to yield a better prediction of the total energy than NSQST. However, as shown in Fig. 13b, NNQST predicts the mass H_m, a local observable from Eq. (18), more accurately than NSQST.
In Fig. 14, a typical optimization progress curve is plotted for the numerical results presented in Sec. III D. The iteration number is not adjusted, and pre-training is repeated for every trial.

FIG. 1. Overview of the NNQST and NSQST protocols. Panel a shows the NNQST protocol with the cross-entropy loss function L_λ. The training data determine p_Φ(s, B), the measured probability distribution of measurement outcomes s for measurements of the target state |Φ⟩ performed in the local Pauli basis B. The feedback loop on the right-hand side indicates the iterative first-order optimization for neural network training. Panel b displays the NSQST protocol described in Sec. II D, where the training data set consists of classical shadows only and where the network parameters λ are trained via an infidelity loss function L_λ. The expression ρ_i(U†_i, b_i) is the stored classical shadow of the target state |Φ⟩ with the Clifford unitary U†_i and bit-string |b_i⟩.

FIG. 2. Overview of the NSQST with pre-training protocol. The neural networks learning p_λ1(s) and φ_λ2(s) are separately parameterized, and p_λ1(s) is pre-trained using NNQST with training data derived from measurements in the computational basis only.

FIG. 3. Tomography of the quantum state following a one-dimensional QCD time evolution. Two Trotter steps are used for a total evolution time of t = 1.8. Panel a displays the average final-state infidelity for each of the three protocols. In each trial, we extract the (exactly calculated) average state infidelity, averaged over the last 100 iterations (and further averaged over ten trials). The error bar is the standard error in the mean calculated over the ten trials. The embedded schematic shows the qubit encoding for a unit cell, containing up to three quarks (filled circles) and up to three antiquarks (striped circles). Panel b compares the expectation value of the kinetic energy, evaluated for the neural network quantum state found in the last iteration of each trial and averaged over ten trials for each protocol. Panel c displays the optimization progress curves for a typical trial, where the adjusted iteration refers to epochs for NNQST but indicates increments of ten iterations for the two NSQST protocols (a total of 2000 iterations were run in these cases). The top plot shows the NNQST loss L_λ (blue); the middle plot shows the estimated NSQST (infidelity) loss function L_λ with and without pre-training (green and red, respectively), whose fluctuations are dominated by the finite number N = 100 of classical shadows taken for each estimate; and the lower plot shows the exact infidelity at every (adjusted) iteration for all three protocols.

FIG. 4. Typical neural network quantum states following optimization, approximating the state after a one-dimensional QCD time evolution. Panel a displays a typical state found in the last iteration of NNQST training: the left plot shows the square root of the probability of the final neural network quantum state compared to the exact target state, and the right plot shows the phase output of the state over the set of computational basis states s. To highlight the dominant contributions, the phase output has been truncated for states s with p_λ1(s) < 0.1. For better visualization, the overall (global) phase of the neural network quantum state is chosen by aligning the phase of the most probable computational-basis state to that of the target state. The dashed line corresponds to a phase of 2π, since we choose our phase predictions to lie in the range [0, 2π]. Panels b and c show typical final states from NSQST and NSQST with pre-training. We observe that both NSQST protocols succeed at learning the phase structure while NNQST fails at the same task.

FIG. 5. Training and results for a simulation of tomography on the time-evolved state of a one-dimensional AFH model. Four Trotter steps are used for a total evolution time of t = 0.8. Panel a compares the final state infidelity, averaged over ten trials for the three protocols, following the same procedure used for the one-dimensional QCD time evolution. Panel b compares the predicted mean staggered magnetization in the x-direction (where S^x_j = σ^x_j / 2), following a Trotterized time evolution under the AFH model, for all three protocols. Panel c shows optimization progress curves for a typical run, with the NNQST loss L_λ (blue) in the top plot, the NSQST loss functions (with and without pre-training, green and red, respectively) in the middle plot, and the exact infidelity at every adjusted iteration for all three protocols in the lower plot.

FIG. 6. Typical final neural network quantum states, trained on the time-evolved state of the one-dimensional AFH model. Panel a displays a typical final state from the last iteration of NNQST optimization, generated using the same procedure as in Fig. 4. Panels b and c show typical final states from NSQST and from NSQST with pre-training, respectively.

FIG. 7. Simulation of tomography on a six-qubit phase-shifted GHZ state. Panel a compares the final state infidelity, averaged over ten trials for each of the three protocols. Panel b shows typical optimization progress curves for NNQST (blue), NSQST (red), and NSQST with pre-training (green).

FIG. 8. Comparison of NSQST with pre-training to direct shadow estimation. In each trial of NSQST with pre-training, 200 Clifford shadows are used as training data, without re-sampling in every iteration, and 1000 measurements in the computational basis are used in pre-training. For direct shadow estimation, 1200 Clifford shadows and 1200 Pauli shadows are used. Panel a compares the absolute error in the predicted kinetic energy, averaged over ten trials for each of the four protocols. Panel b compares the absolute error in the predicted fidelity to the ideal time-evolved state, averaged over ten trials for each of the four protocols. Panel c compares the absolute error in the predicted expectation value of Pauli string observables of various weights in the Pauli-X basis, averaged over ten trials for each of the four protocols. The data points are slightly shifted relative to the ticks of the x-axis for a better display of the error bars.

FIG. 10. Simulation of tomography for a phase-shifted GHZ state in the presence of noise. Panel a displays the average loss function (red) defined in Eq. (23) and the exact infidelity (blue) for the amplitude damping channel AD_{n,p} (the loss function is averaged over the last 100 iterations for each trial, and then the average is taken over ten trials). The strength of the noise increases with increasing 1 − p. The noiseless infidelity loss function L(I) is transformed into an estimated infidelity for the noisy case L(E) using Eq. (22) for the amplitude damping channel E = AD_{n,p}, yielding the transformed cost function. The error bars represent the standard error in the mean over ten trials. Panel b shows the average loss function and exact infidelity for the local depolarizing noise model, with a two-qubit depolarizing channel applied after every CNOT gate in the appended random Clifford circuit U_i. The channel parameter 1 − f characterizes the growing strength of the noise. We do not plot the transformed loss function in this case because the CNOT-dependent local depolarizing noise model does not have an analytic noisy shadow expression.

2. A matrix to process the output of the self-attention heads, O^{(k)}_{de}, with d, e ∈ {1, . . ., D}.

3. A weight matrix and a bias vector of the linear layer, W^{(k)}_{de} and b^{(k)}_d, with d, e ∈ {1, . . ., D}.

Once we have passed through the final transformer layer, scalar-valued logits ℓ_j are obtained using an extra linear layer. The conditional probabilities directly used in sampling are then given by
\[
p_\lambda(s_j = 1 \,|\, s_0, \ldots, s_{j-1}) = \sigma(\ell_{j-1}).
\]

FIG. 13. Additional plots for the quantum state following a one-dimensional QCD time evolution. Panel a displays the expectation value of the total energy, evaluated for the neural network quantum state found in the last iteration of each trial and averaged over ten trials for each protocol. Panel b displays the expectation value of the mass Hamiltonian.

FIG. 14. Typical optimization progress curve from NSQST with pre-training and fixed Clifford shadows. Unlike in the other plots, the iteration number on the x-axis is not adjusted and corresponds to every gradient update during optimization.
Simulation circuit for the one-dimensional QCD model. The initial state preparation circuit (before the barrier) and a single Trotter step (after the barrier) are drawn using Qiskit [69]. In our numerical experiments, an evolution for time t = 1.8 is decomposed into two Trotter steps.

Simulation circuit for the one-dimensional AFH model. The initial state preparation circuit (before the barrier) and a single Trotter step (after the barrier) are drawn using Qiskit [69]. In our numerical experiments, an evolution for time t = 0.8 is decomposed into four Trotter steps.