Designing quantum many-body matter with conditional generative adversarial networks

The computation of dynamical correlators of quantum many-body systems remains a critical open challenge in condensed matter physics. While powerful methodologies have emerged in recent years, covering the full parameter space remains unfeasible for most many-body systems with a complex configuration space. Here we demonstrate that conditional Generative Adversarial Networks (GANs) allow simulating the full parameter space of several many-body systems, accounting both for controlled parameters and for stochastic disorder effects. After training with a restricted set of noisy many-body calculations, the conditional GAN algorithm provides the whole dynamical excitation spectrum of a Hamiltonian almost instantly, with an accuracy comparable to the exact calculation. We further demonstrate how the trained conditional GAN automatically provides a powerful method for learning a Hamiltonian from its dynamical excitations and for flagging non-physical systems via outlier detection. Our methodology puts forward generative adversarial learning as a powerful technique to explore complex many-body phenomena, providing a starting point for designing large-scale quantum many-body matter.


I. INTRODUCTION
The dynamical properties of quantum many-body models remain one of the critical problems in condensed matter physics, lying at the heart of problems ranging from correlated superconductivity [1] to quantum spin liquid physics [2,3]. Even with the appearance of powerful new methodologies in recent years [4,5], tackling specific regimes of quantum many-body models is an outstanding problem [6,7], and quickly covering the full parameter space of a many-body Hamiltonian is a nearly unfeasible task. This huge complexity is not unique to quantum many-body physics; it is also well known in many problems of image, voice, and video recognition [8-10]. In these fields, a new family of algorithms known as Generative Adversarial Networks (GANs) [11] has made it possible to tackle some of those intractable problems with high accuracy [12-14].
While supervised and unsupervised learning has been widely applied to quantum problems [15-27], generative adversarial learning remains relatively unexplored [28-30]. The advantages of GANs over simple (supervised or unsupervised) neural-network (NN) models are their ability to learn the underlying distribution of a complex data set (e.g., images) and to generate new samples with the same statistics using only input noise (and additional conditional parameters) [31,32]. The generated output is of such high accuracy, e.g., photo-realistic images, that it cannot be achieved similarly with other generative models [33]. Moreover, GANs naturally incorporate noise in the generative network architecture, which makes it possible to account for both uncertainty and diversity in the model. This includes multi-modal learning, where one input can correspond to several correct outputs, which cannot be achieved by classical machine-learning algorithms that generally learn a one-to-one mapping [34].
Here we show how conditional GANs (cGANs) allow simulating dynamical excitations of many-body Hamiltonians and furthermore provide efficient Hamiltonian learning and outlier detection. Taking as training examples a finite set of noisy many-body dynamical calculations, we demonstrate that the conditional GAN quickly learns to generate dynamical results for the whole parameter space (as illustrated in Fig. 1). Once the GANs are trained, their computational and generalization power over traditional methods comes into play: even for new many-body Hamiltonians of large system size, the outputs of the GAN are produced almost instantaneously, with an accuracy rivaling the exact calculations, enabling a detailed mapping of complex many-body systems without the need to calculate every parameter combination. Besides realizing a powerful simulator, the trained GAN automatically provides two additional features by exploiting the trained discriminator. First, the parameters of the Hamiltonian can be directly inferred from the simulated dynamical data by using the cGAN discriminator, a methodology providing a cGAN-based Hamiltonian learning algorithm. Second, the trained discriminator allows detecting non-physical results, such as those stemming from wrongly computed dynamical many-body systems. Our work provides a first step towards designing quantum many-body matter with deep generative models, opening a pathway to address complex quantum many-body landscapes and ultimately to combine theoretical and experimental data.
The manuscript is organized as follows. Sec. II introduces the general concept of cGANs and the quantum many-body methodology for computing dynamical correlators with tensor networks. As a first demonstration, Sec. III exemplifies our cGAN methodology for a family of single-particle models. Sec. IV demonstrates the cGAN methodology for three families of quantum many-body systems, including a gapless many-body model featuring spinons, a model with topological order, and a fermionic Hubbard model. In Sec. V, we show the extrapolation capability of our algorithm and give a quantitative benchmark of the cGAN. Sec. VI demonstrates how the trained cGAN provides both a methodology for Hamiltonian learning and outlier detection. Finally, Sec. VII summarizes our conclusions. Information about the GAN architecture and training data generation is given in App. A and App. B, and in App. C we provide a supplementary analysis of the generator and discriminator networks.

II. GENERATIVE ADVERSARIAL NETWORKS AND DYNAMICAL CORRELATORS

A. Generative Adversarial Networks
Generative Adversarial Networks were proposed in 2014 as deep generative models in the context of unsupervised Machine Learning (ML) [11]. They are generally built by combining two neural networks, the generator G and the discriminator D, which compete against each other in a min-max game. This allows the generator to become very accurate in mapping from a latent-space vector z (i.e., a random input vector) to the data distribution of the real images. The generator network tries to trick the discriminator, whose job is to distinguish between real and generated images. During the training process, the parameters of both networks are updated simultaneously, optimizing the generator term log[1 − D(G(z))] and the discriminator term log[D(x)] of the GAN value function

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right] \quad (1)$$

The input data x contains the information of the real images, p_data is the distribution of the input images which we want to learn, and p_z is the (normal) distribution of the latent space. During training, the parameters of the generator (discriminator) are updated in order to minimize (maximize) the expectation value of the value function V(D, G).
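As a numerical illustration of Eq. 1, the value function can be estimated by Monte-Carlo sampling. The toy one-dimensional data distribution, logistic discriminator, and shift generator below are hypothetical stand-ins, not the networks used in this work:

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(x, w=1.0, b=-1.5):
    # Toy logistic discriminator D(x) in (0, 1)
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

def generator(z, shift=0.0):
    # Toy generator: maps latent noise z ~ N(0, 1) to samples by a shift
    return z + shift

# "Real" data distribution (illustrative): N(2, 0.5)
x_real = rng.normal(2.0, 0.5, size=10_000)
z = rng.normal(0.0, 1.0, size=10_000)

def value_function(shift):
    # Monte-Carlo estimate of V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))]
    return (np.mean(np.log(discriminator(x_real)))
            + np.mean(np.log(1.0 - discriminator(generator(z, shift)))))

# For a fixed discriminator, a generator whose samples match the data
# distribution (shift = 2) lowers V, which is exactly what G minimizes.
print(value_function(0.0), value_function(2.0))
```

For a fixed D, the generator update direction is the one that decreases V, while the discriminator update increases it; the printed values illustrate this asymmetry.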
GAN applications in computer science usually target image generation, typically building on convolutional neural networks, and have shown great success in generating images from random inputs [31,35,36]. These random inputs, however, prevent us from controlling the output of the algorithm. An extension of the usual GAN are cGANs, which give additional information to the neural networks in order to gain some control over the output of the algorithm without losing the generative power of the method [37]. Some applications include, e.g., image-to-image translation [12] or image editing [8,32,38,39]. The computational power of (conditional) GANs has already found its way to physics, starting in high-energy physics with the simulation of 2D particle-jet images [40] and 3D particle showers [41], in cosmology with emulations of cosmological maps [42], and in selected problems of quantum and condensed matter physics, including the simulation of correlated quantum walks [28] and of the 2D Ising model near the critical temperature [29]. Recently, conditional GANs have also been successfully applied to quantum state tomography and the reconstruction of density matrices [30].
In particular, conditional GANs allow for the incorporation of prior knowledge about a system while simultaneously accounting for a degree of diverse randomness in the output. This architecture corresponds to cGANs, which take a vector of labels (y) in addition to the training data as input of the generator and discriminator. In the specific case of our manuscript, the conditional labels are given by the different energy scales of a general Hamiltonian. Figure 2 shows the general architecture of the cGAN used in this work. This architecture is inspired by conventional GANs, yet with the key difference that conditional parameters are included as input for the generator and discriminator (shown in orange). The value function is also very similar to the one of Eq. 1 [37]

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\left[\log D(x|y)\right] + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z|y))\right)\right] \quad (2)$$

with the conditional constraints y entering the input of the discriminator and generator in their corresponding terms of the value function. In the case of image generation, the auxiliary labels of the cGAN have discrete class values. In our case, we use continuous labels, which allows us to cover the full parameter space of a given Hamiltonian with a continuous cGAN. In contrast to conventional GANs, we now have the ability to simulate many-body systems with conditional parameters in the Hamiltonian.
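The conditioning mechanism of Eq. 2 is structurally simple: the continuous labels y are concatenated to the inputs of both networks. A minimal sketch (all layer sizes and the one-layer "generator" stand-in are illustrative assumptions, not the architecture of this work):

```python
import numpy as np

rng = np.random.default_rng(1)

latent_dim, label_dim, out_dim = 8, 2, 16  # sizes are arbitrary choices

def generator_input(z, y):
    # cGAN conditioning: the continuous labels y (e.g. Hamiltonian couplings)
    # are concatenated to the latent noise vector z.
    return np.concatenate([z, y])

z = rng.normal(size=latent_dim)
y = np.array([0.1, 0.15])        # e.g. hypothetical (N_y, B_x) in units of J

W = rng.normal(size=(out_dim, latent_dim + label_dim))
sample = np.tanh(W @ generator_input(z, y))  # one-layer "generator" stand-in

print(sample.shape)  # (16,)
```

Because y enters as an ordinary input, the same trick conditions the discriminator: it receives the image together with y and judges whether they are consistent.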

B. Dynamical correlators with tensor-networks
Here we summarize the many-body method used to generate the training data. We will be interested in computing the dynamical correlator of a many-body Hamiltonian, taking the form

$$S_{AB}(\omega) = \langle \text{GS} | \hat{A}^\dagger \, \delta(\omega - \hat{H} + E_{\text{GS}}) \, \hat{B} | \text{GS} \rangle \quad (3)$$

where Â, B̂, Ĥ are many-body operators, |GS⟩ is the many-body ground state, and E_GS is the ground-state energy. This spectral function corresponds to the dynamical spin structure factor for a spin system and to the electronic many-body density of states for an electronic system. We now elaborate on the dynamical correlator S^z(ω, n), which corresponds to the local spin structure factor [43-45]. From the physical point of view, the dynamical spin structure factor signals the existence of spin excitations at a specific energy in a material [43-45]. From the experimental point of view, such excitations can be directly measured via inelastic spectroscopy with a scanning tunneling microscope [46-50]. The spin excitations are directly probed by tunneling electrons: an electron with spin up tunnels into the magnetic system, flips its spin, creating a spin excitation, and tunnels out of it [46]. This process gives rise to a step in the differential conductance dI/dV [46], which in turn appears directly as a peak in d²I/dV² [50]. The spin excitations computed in our manuscript are therefore directly measurable experimentally, as demonstrated in a variety of experiments with scanning tunneling microscopes [45,48-50].
The dynamical correlator is computed using the tensor-network kernel polynomial algorithm [51-58]. The many-body states and Hamiltonians are represented in terms of a tensor network, using the matrix-product state formalism [59-61]; the ground state is computed with the density-matrix renormalization group algorithm [4]; and the Hamiltonian is scaled to the interval (−1, 1) to perform the Chebyshev expansion [51]. Denoting the scaled Hamiltonian as H and its scaled spectral function as χ, the expansion takes the form

$$\chi(\omega) = \frac{1}{\pi\sqrt{1-\omega^2}} \left[ \alpha_0 + 2 \sum_{n=1}^{\infty} \alpha_n T_n(\omega) \right] \quad (4)$$

with T_n(x) the Chebyshev polynomials and α_n the coefficients of the expansion, computed recursively and damped with the Jackson kernel [62]. Finally, we note that while we focus here on the tensor-network representation of the states, an analogous procedure can be performed with neural-network quantum states [63].
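The kernel polynomial machinery of Eq. 4 can be illustrated on a small single-particle matrix standing in for the scaled many-body Hamiltonian; the system size, number of moments, and scaling margin below are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy tight-binding matrix as a stand-in for the many-body Hamiltonian
N, t = 64, 1.0
H = np.diag(np.full(N - 1, t), 1) + np.diag(np.full(N - 1, t), -1)

# Scale the spectrum into (-1, 1) for the Chebyshev expansion
E = np.linalg.eigvalsh(H)
a = (E[-1] - E[0]) / (2 * 0.99)
b = (E[-1] + E[0]) / 2
Hs = (H - b * np.eye(N)) / a

# Moments mu_m = <v|T_m(Hs)|v> via T_m(x) = 2x T_{m-1}(x) - T_{m-2}(x),
# using a normalized random vector (stochastic trace estimate)
M = 200
v = rng.normal(size=N)
v /= np.linalg.norm(v)
vm2, vm1 = v, Hs @ v
mu = np.zeros(M)
mu[0], mu[1] = v @ vm2, v @ vm1
for m in range(2, M):
    vm2, vm1 = vm1, 2 * (Hs @ vm1) - vm2
    mu[m] = v @ vm1

# Jackson kernel coefficients damping the Gibbs oscillations
ms = np.arange(M)
g = ((M - ms + 1) * np.cos(np.pi * ms / (M + 1))
     + np.sin(np.pi * ms / (M + 1)) / np.tan(np.pi / (M + 1))) / (M + 1)

# Reconstruct chi(x) = [g_0 mu_0 + 2 sum_m g_m mu_m T_m(x)] / (pi sqrt(1-x^2))
x = np.linspace(-0.999, 0.999, 2000)
Tmx = np.cos(np.outer(ms, np.arccos(x)))  # T_m(x) = cos(m arccos x)
chi = g[0] * mu[0] * Tmx[0] + 2 * np.sum((g[1:] * mu[1:])[:, None] * Tmx[1:], axis=0)
chi /= np.pi * np.sqrt(1 - x ** 2)

print(chi.min(), chi.sum() * (x[1] - x[0]))  # non-negative, integrates to ~1
```

The Jackson damping makes the reconstructed spectral function non-negative, and its integral equals the weight μ₀ carried by the random vector; in the tensor-network algorithm the same recursion is carried out with matrix-product states instead of dense vectors.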

III. SINGLE-PARTICLE SYSTEMS
While ultimately we will explore our generative algorithm for quantum many-body systems, it is instructive to first explore its applicability for a family of single-particle models that can easily be solved. As a first proof of concept, we test our cGAN for a one-dimensional single-particle tight-binding system. The Hamiltonian of these systems in second quantization is given by

$$H = t \sum_n \left( c^\dagger_n c_{n+1} + \text{h.c.} \right) + \sum_n \left[ \mu + m (-1)^n + \xi_n \right] c^\dagger_n c_n \quad (5)$$

where t is the hopping amplitude, μ a uniform on-site energy, m a staggered on-site term, and ξ_n hidden random fluctuations that would stem from residual perturbations in an experimental setup and were not accounted for by the theoretical model. We computed 4000 real systems and extended the training set to 32 000 examples with the data-enhancement method presented in App. B [64]. This training set size is therefore on the order of the MNIST data set of handwritten digits [65]. The parameters μ and m are the conditional parameters of the cGAN and are defined in the intervals μ ∈ [1.7, 2.3] t and m ∈ [−0.3, 0.3] t.
The idea is to train the generator to map from (μ, m) to the (local) density of states (DOS) A(ω, n), defined as

$$A(\omega, n) = \sum_k |\langle n | \Psi_k \rangle|^2 \, \delta(\omega - E_k) \quad (6)$$

where H |Ψ_k⟩ = E_k |Ψ_k⟩, with H the tight-binding matrix defined by Eq. 5, and δ the Dirac delta function.
The density of states A(ω, n) corresponds to the spectrum of charge excitations of the system [44]. In particular, it directly corresponds to the probability of an electron with a specific energy to tunnel into a specific location [66,67]. A non-zero density of states at a certain frequency signals that a single electron is able to tunnel into the material at that energy [66]. From the experimental point of view, the electron spectral function can be directly probed via scanning tunneling spectroscopy [67-69]. In particular, the differential conductance dI/dV gives direct access to the electron spectral function of a material [66,69], directly corresponding to the quantity computed in our manuscript. The density of states has been directly measured in a variety of setups, and in particular directly allows probing the spatial distribution of quantized modes [70-72].
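A spatially resolved DOS map of the kind described above can be sketched by exact diagonalization with a Lorentzian broadening of the delta function in Eq. 6; the function name, system size, and parameter values below are illustrative assumptions:

```python
import numpy as np

def dos_map(N=24, t=1.0, mu=2.0, m=0.2, eta=0.05):
    # Tight-binding chain with uniform on-site energy mu and staggered term m
    # (a minimal stand-in for the single-particle model discussed above).
    H = np.diag(np.full(N - 1, t), 1) + np.diag(np.full(N - 1, t), -1)
    H += np.diag(mu + m * (-1.0) ** np.arange(N))
    E, V = np.linalg.eigh(H)
    w = np.linspace(E.min() - 0.5, E.max() + 0.5, 300)
    # A(w, n) = sum_k |<n|k>|^2 * Lorentzian(w - E_k)
    lor = eta / np.pi / ((w[:, None] - E[None, :]) ** 2 + eta ** 2)
    return w, lor @ (np.abs(V.T) ** 2)

w, A = dos_map()
print(A.shape)  # (frequencies, sites)
```

Increasing `mu` rigidly shifts the map in frequency, while a finite `m` produces the odd/even site imbalance discussed for Fig. 3, so the two conditional parameters leave qualitatively distinct fingerprints that the cGAN must learn.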
Figure 3 shows the value of the DOS (z-axis) depending on the site (x-axis) and frequency (y-axis). We show the spatially resolved DOS for 3 different conditional parameter combinations (μ, m) and compare the simulations of the cGAN in Fig. 3 (a,c,e) with the exact calculations in Fig. 3 (b,d,f). As observed in the figure, there is no visual difference between the real and generated DOS for any of the three parameter choices, a feature observed for generic examples. In particular, in Fig. 3 (c,d), the increase of μ gives rise to a frequency shift of 0.3 t compared to Fig. 3 (a,b), which is very well captured by the generated DOS of the cGAN in (c). Similar results can be seen in Fig. 3 (e,f), where the increased m-parameter induces a site imbalance between odd and even sites in the chain. In conclusion, the simulations of the algorithm capture the effects of both conditional parameters on the DOS with high accuracy and for arbitrary magnitude. The trained generator is able to generate new systems with arbitrary parameter choices of (μ, m) within the boundaries of the training interval, and even slightly outside, almost instantaneously and with very high precision. In the next section, the same algorithm is applied to three different many-body systems, which are computationally more demanding than this single-particle proof of principle.

IV. MANY-BODY SYSTEMS
In contrast to the single-particle case of the previous section, calculations of many-body systems with high accuracy are computationally much more demanding. This affects the training of the cGAN because creating an arbitrarily large training set becomes one of the major bottlenecks. The idea is to use the minimal amount of data to train the network accurately and to use data-enhancement methods to enlarge the training set (see App. B). This minimizes the computational effort and takes full advantage of the generative power of the algorithm. In this section, we test our cGAN algorithm for three different one-dimensional many-body systems, including an S = 1/2 chain, a topologically non-trivial S = 1 system, and a doped Hubbard model [73]. For each Hamiltonian, we have chosen specific conditional parameters and added hidden parameters that, e.g., account for residual perturbations in an experimental setup.

A. Gapless many-body S = 1/2 spin model
We start with the simplest many-body system we studied, an interacting S = 1/2 Heisenberg model realizing a quantum-disordered ground state. The Hamiltonian for the one-dimensional S = 1/2 system is given by

$$H = J \sum_n \mathbf{S}_n \cdot \mathbf{S}_{n+1} + N_y \sum_n (-1)^n S^y_n + B_x \sum_n S^x_n + \sum_n \left( \xi^x_n S^x_n + \xi^y_n S^y_n \right) \quad (7)$$

with S_n = (S^x_n, S^y_n, S^z_n) the S = 1/2 many-body spin operators. The parameter J denotes the Heisenberg exchange coupling, N_y a local alternating Neel magnetic field in the y-direction, and B_x a uniform Zeeman field in the x-direction. In the absence of Neel, Zeeman, and disorder fields, this model realizes a well-understood isotropic Heisenberg model. In this limit, the system features gapless S = 1/2 spinon excitations [74], hosts a spin-singlet ground state with zero local magnetization in the thermodynamic limit [43], and can be analytically solved via the Bethe ansatz [75]. In the presence of finite Neel and Zeeman terms, the ground state of the system develops a finite order in the x and y directions, ⟨S^x_n⟩ ≠ 0 and ⟨S^y_n⟩ ≠ 0, yet hosts zero local order in the z-direction, ⟨S^z_n⟩ = 0. For our cGAN, the parameters N_y and B_x are the conditional parameters, defined in the intervals N_y ∈ [0.0, 0.2] J and B_x ∈ [0.0, 0.2] J, while ξ^x_n and ξ^y_n introduce randomness (up to 0.05 J) in the training data generation.
We now focus on the spin excitations in real space, computed with the dynamical spin correlator defined as

$$S^z(\omega, n) = \langle \text{GS} | \hat{S}^z_n \, \delta(\omega - \hat{H} + E_{\text{GS}}) \, \hat{S}^z_n | \text{GS} \rangle \quad (8)$$

where Ŝ^z_n is the local spin operator on site n, |GS⟩ the many-body ground state, and E_GS the ground-state energy. This correlator directly probes many-body spin excitations in the spin chain and can be directly measured experimentally in real space [45] using inelastic spectroscopy [49,50,76] and electrically driven paramagnetic resonance with scanning tunneling microscopy [77-81]. We train the cGAN to map from the conditional parameters to the correlator in real space, (N_y, B_x) → S^z(ω, n).
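For very small chains, the correlator of Eq. 8 can be evaluated exactly through its Lehmann representation, which is useful for building intuition (the training data in this work come from much larger tensor-network calculations). The following exact-diagonalization sketch assumes a hypothetical 6-site chain and a Lorentzian broadening:

```python
import numpy as np
from functools import reduce

# S = 1/2 spin operators (Pauli matrices / 2) and identity
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def site_op(op, n, N):
    # Embed a single-site operator at site n of an N-site chain
    ops = [I2] * N
    ops[n] = op
    return reduce(np.kron, ops)

def sz_correlator(N=6, J=1.0, Ny=0.1, Bx=0.1, eta=0.05):
    # H = J sum S_n.S_{n+1} + Ny sum (-1)^n S^y_n + Bx sum S^x_n
    # (a small exact-diagonalization stand-in for the spin chain above)
    dim = 2 ** N
    H = np.zeros((dim, dim), dtype=complex)
    for n in range(N - 1):
        for op in (sx, sy, sz):
            H += J * site_op(op, n, N) @ site_op(op, n + 1, N)
    for n in range(N):
        H += Ny * (-1) ** n * site_op(sy, n, N) + Bx * site_op(sx, n, N)
    E, V = np.linalg.eigh(H)
    gs, E0 = V[:, 0], E[0]
    w = np.linspace(0, 3 * J, 200)
    S = np.zeros((len(w), N))
    for n in range(N):
        # Lehmann weights |<m|S^z_n|GS>|^2 at frequencies E_m - E0
        amp = np.abs(V.conj().T @ (site_op(sz, n, N) @ gs)) ** 2
        lor = eta / np.pi / ((w[:, None] - (E - E0)[None, :]) ** 2 + eta ** 2)
        S[:, n] = lor @ amp
    return w, S

w, S = sz_correlator()
```

The resulting map S(ω, n) is the small-system analogue of the panels of Fig. 4; for the training data, the same quantity is obtained from the Chebyshev tensor-network algorithm of Sec. II B at sizes far beyond exact diagonalization.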
For the training, we used 2250 many-body calculations with arbitrary conditional parameter combinations and applied data-enhancement methods (shown in App. B) to increase the training set size to 36 000.
The results for the S = 1/2 system for 3 different parameter combinations are shown in Fig. 4 (a,b), Fig. 4 (c,d), and Fig. 4 (e,f). We compare the simulated systems in Fig. 4 (a,c,e) with the real many-body calculations in Fig. 4 (b,d,f). The parameter combinations cover different areas of the parameter space of the Hamiltonian of Eq. 7 and are chosen randomly. The cGAN simulates the spin excitations in the z-direction with high accuracy and captures the important features, including the spatial profile of the many-body modes, in the full frequency range. Differences between the 3 parameter combinations occur in the form of a shift of the lowest excitation and the location and number of higher many-body modes. In Fig. 4 (a,b), the lowest excitation is at 0.25 J, which is captured well in the generated system in (a). Especially for N_y = 0.1 J and B_x = 0.1 J, the simulation (Fig. 4 (c)) is very close to the real spectrum (Fig. 4 (d)), both in the energy onset of the excitation at around 0.5 J and in the relative magnitudes of higher many-body excitations. The same applies to the third parameter combination of N_y = 0.2 J and B_x = 0.2 J in Fig. 4 (e) and Fig. 4 (f), respectively. Differences between the simulations and real images can be related to, first, the induced noise of up to 0.05 J and the random sampling of the cGAN from this noise distribution, i.e., every system generated by the cGAN shows small but observable differences. The second source of error is connected with the small amount of real training data, which implies that for each arbitrary combination of conditional parameters only a small number of training examples exists in the vicinity in parameter space. Despite these features, the results for these and arbitrary parameter combinations are very precise considering the comparatively small amount of training data in terms of GANs (we used only 2250 real data points, compared to, e.g., the MNIST data set of 60 000 examples) and the instantaneous generation of the spectra.

B. Interacting many-body system with topological order
The one-dimensional S = 1 spin chain is a topologically non-trivial system that shows spin fractionalization in the form of S = 1/2 excitations below the bulk gap at the edges of the chain [82-86]. This model represents one of the simplest examples of many-body fractionalization stemming from topological order. The system shows robust topological edge modes, resilient to perturbations that do not break the spin-rotational symmetry of the model [84,87], and has been realized both in natural compounds [88] and in artificial designer platforms [89]. In stark contrast with the model of the previous section, the dynamical spectra of this topological model show persistent edge excitations together with bulk modes, providing substantially different qualitative behavior.
The Hamiltonian of the spin S = 1 Heisenberg model we consider is given by

$$H = \sum_n \left[ J + \Delta_J (-1)^n + \xi^J_n \right] \mathbf{S}_n \cdot \mathbf{S}_{n+1} + \sum_n \left[ J_2 + \xi^{J_2}_n \right] \mathbf{S}_n \cdot \mathbf{S}_{n+2} \quad (9)$$

with S_n = (S^x_n, S^y_n, S^z_n) the many-body spin operators for S = 1. In comparison to Eq. 7, we have now chosen the dimerization of the nearest-neighbor exchange (∆_J) and the second-nearest-neighbor exchange (J_2) as conditional parameters. We note that external magnetic fields would break the protection of the low-energy topological excitations of the fractionalized spins, which we want to study, and are therefore not included. In turn, we introduce two noise terms in the model which respect the topological class, in particular spatially dependent fluctuations of the exchange, ξ^J_n, and of the second-neighbor exchange, ξ^{J2}_n. These two random fluctuations account for small spatial inhomogeneities of the system in an experimental realization [88,89] stemming from local defects. The conditional parameters are defined in the intervals J_2 ∈ [−0.2, 0.2] J and ∆_J ∈ [−0.2, 0.2] J, and ξ^J_n and ξ^{J2}_n introduce randomness up to 0.05 J. In this case, the cGAN learns a mapping from (J_2, ∆_J) to the full spin correlator

$$S(\omega, n) = \sum_{\alpha = x,y,z} \langle \text{GS} | \hat{S}^\alpha_n \, \delta(\omega - \hat{H} + E_{\text{GS}}) \, \hat{S}^\alpha_n | \text{GS} \rangle \quad (10)$$

that denotes spin excitations in real space. It is worth noting that, due to the spin isotropy of Eq. 9, the correlator of Eq. 10 is proportional to Eq. 8 for the considered S = 1 model and can be measured analogously in engineered spin chains with scanning tunneling probes [45,49,50,76-81].
In Fig. 5 we have chosen three arbitrary parameter combinations in order to compare the real spin excitations in Fig. 5 (b,d,f) with the simulations of the cGAN in Fig. 5 (a,c,e). For all parameter combinations, the fractionalized S = 1/2 excitations emerge at the edges of the chain close to ω = 0 J and are mostly unaffected by variations of the first- and second-neighbor interactions. Due to finite-size effects, the fractionalized excitations have a non-zero magnitude even in the middle of the chain for all parameter combinations, with a magnitude that depends on the topological gap set by the parameters J_2 and ∆_J and is therefore different for different choices of J_2 and ∆_J. In the case of ∆_J = 0.03 J and J_2 = 0.19 J (Fig. 5 (e) and (f)), the S = 1/2 excitations appear mostly close to the edges at sites n = 0 and n = 17. This behavior is captured well by the generated system in Fig. 5 (e). A stronger first-neighbor dimerization as well as second-neighbor exchange interaction (Fig. 5 (a) and Fig. 5 (b)) results in closer-lying excitations above the bulk gap at around 0.7 J. This effect is very accurately captured by the cGAN prediction in Fig. 5 (a). The values of the energy levels, as well as the relative magnitudes, are predicted with high accuracy in comparison to the exact tensor-network calculations for all arbitrarily chosen parameter combinations in the defined intervals. The visual accuracy obtained for this model even surpasses that of the S = 1/2 system, which can be related to the spectra themselves showing more pronounced and separated features. To summarize this section, the cGAN is able to simulate the S = 1 system with high accuracy almost instantaneously within the range of the introduced randomness, using the same minimal amount of training data as for the S = 1/2 case.

C. Interacting fermionic systems
We now move on to an interacting model with richer many-body phenomena, in particular incorporating both charge and spin degrees of freedom. The third system we study with the cGAN is the doped Hubbard model, described by the Hamiltonian

$$H = t \sum_{n,s} \left( c^\dagger_{n,s} c_{n+1,s} + \text{h.c.} \right) + U \sum_n \left( \rho_{n,\uparrow} - \tfrac{1}{2} \right) \left( \rho_{n,\downarrow} - \tfrac{1}{2} \right) + \mu \sum_{n,s} \rho_{n,s} + \sum_n (-1)^n \xi_n S^y_n \quad (11)$$

where ρ_{n,s} = c†_{n,s} c_{n,s} and S^y_n = Σ_{s,s'} σ^y_{s,s'} c†_{n,s} c_{n,s'}, with σ^y_{s,s'} the spin Pauli matrix. This Hamiltonian is well known to feature a widely rich phase diagram away from half-filling [90] and provides a paradigmatic example of spin-charge separation [91]. In particular, for μ = 0 and U ≫ t the electronic system is half-filled, and the spin sector of Eq. 11 maps to a Heisenberg model for local S = 1/2 degrees of freedom with an exchange coupling given by J ∼ t²/U. That limit corresponds to the model presented in Eq. 7. In the general case away from half-filling, however, the model shows much more complex spin excitations than Eq. 7. The conditional parameters are the onsite Hubbard interaction U, chosen in the interval U ∈ [0.5, 1] t, and μ ∈ [0.0, 0.5] t, which parametrizes the chemical potential of the system. The randomness is created with an external staggered magnetic field in the y-direction with parameter ξ_n that alternates sign between neighboring sites (ξ_n ∈ [0.0, 0.05] t). We will focus on addressing the many-body spin excitations of this interacting fermionic model, as given by the dynamical spin correlator

$$S^z(\omega, n) = \langle \text{GS} | \hat{S}^z_n \, \delta(\omega - \hat{H} + E_{\text{GS}}) \, \hat{S}^z_n | \text{GS} \rangle \quad (12)$$

now written with the fermionic many-body operators, Ŝ^z_n = (ρ_{n,↑} − ρ_{n,↓})/2 with ρ_{n,s} = c†_{n,s} c_{n,s}.
The cGAN learns the mapping (U, μ) → S^z(ω, n), and the results are presented in Fig. 6, showing the many-body excitations S^z(ω, n) on the corresponding site (x-axis) in the frequency range between 0 and 5 t (y-axis). For the three different combinations of the conditional parameters (U, μ) shown in Fig. 6 (a,b), Fig. 6 (c,d), and Fig. 6 (e,f), the generated spectra of the cGAN, Fig. 6 (a,c,e), show very good agreement with the tensor-network calculations, Fig. 6 (b,d,f). Varying the onsite interaction U between 0.7 t and 0.9 t, as well as the chemical potential between 0.1 t and 0.35 t, does not affect the accuracy of the generated spin excitations. The features and changes for the corresponding parameter combinations, including the energy gap as well as the location and intensity of states, are all well captured in the simulations of the cGAN.
Considering the results for the three studied many-body systems, we observe that cGANs are able to capture dynamical correlators of one-dimensional systems with high precision. This methodology can easily be extended to different systems without further modifications of the network architecture. The almost instantaneous simulations provide a huge advantage over the numerically costly tensor-network calculations and make it possible to study the full parameter space of a Hamiltonian without additional computational effort. Despite the relatively small amount of training data (about one order of magnitude less than for conventional training of GANs, as mentioned earlier in this section), the accuracy remains high, and the benefits of the cGAN algorithm outweigh the computational costs of creating the training data. As seen in these examples, the cGAN algorithm provides faithful results for substantially different many-body systems, suggesting that this methodology can be readily extended to other many-body systems.

V. BENCHMARKING THE CONDITIONAL GAN

We now partition the training and test set as depicted in Fig. 13 in App. C, where the cases contained in the orange squares are excluded from the training. Afterward, the performance of the cGAN is computed in the whole phase space (Fig. 7 (b,d,f)). The measure of similarity between two images is in this case the structural similarity index measure (SSIM) [92-95]. In contrast to other techniques, e.g., the MSE, the SSIM does not compute absolute errors; it compares the structural information of two given images, taking into account spatial correlations among pixels. The SSIM is defined in the interval between 0 and 1, where the maximum of 1 is only reached for two identical images. As a reference, in Fig. 7 (a,c,e) we compare real data with different hidden variables to generated data of the cGAN trained on the full parameter space. In Fig. 7 (b,d,f), real data with random hidden variables is compared with generated data of the cGAN trained on a restricted subspace. As shown in Fig. 7 (b,d,f), the similarity index between real and generated dynamical correlators shows a high value over the whole parameter space, quantitatively similar to Fig. 7 (a,c,e). Most importantly, the cGAN trained in the restricted space (Fig. 7 (b,d,f)) shows similar performance inside and outside the training region, even in direct comparison with the cGAN trained on data including the restricted area (Fig. 7 (a,c,e)). This highlights that the cGAN, restricted to a subset of the parameter space, is capable of generating data outside its trained region. Another reason for the different landscapes of the similarity maps is the cGAN training process, in which the weights are randomly initialized. This always leads to a slightly different training outcome, which in this case can be seen in slight differences in the similarity maps. This, however, does not affect the averaged accuracy over the full parameter space.
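A simplified, single-window version of the SSIM illustrates the structure of the measure (the full SSIM averages the same formula over local sliding windows; the constants c1, c2 follow the standard definition):

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    # Single-window SSIM: luminance term (means) times a combined
    # contrast/structure term (variances and covariance).
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return (((2 * mx * my + c1) * (2 * cov + c2))
            / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

rng = np.random.default_rng(0)
img = rng.random((32, 32))                # stand-in for a dynamical correlator
noisy = np.clip(img + 0.05 * rng.standard_normal((32, 32)), 0, 1)

print(ssim_global(img, img))              # identical images -> 1.0
print(ssim_global(img, noisy))            # noise lowers the similarity
```

Because the covariance enters the numerator, SSIM rewards matching spatial structure rather than matching absolute pixel values, which is why it is better suited than the MSE for comparing dynamical correlators with different hidden-noise realizations.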
In App. C, we study the SSIM map in the vicinity of the validation areas in more detail (see Fig. 14). To summarize the results, we observe that the restricted cGAN gives results whose similarity is analogous to that of the cGAN trained on the full parameter space. Most importantly, we observe that the SSIM takes similar values inside and outside the excluded areas, indicating that the cGAN learns to faithfully generate data in parts of the phase diagram not used for the training.
We now focus on the statistical analysis of the benchmarks. We show in Fig. 8 the analysis for the S = 1/2 chain, in Fig. 9 the analysis for the S = 1 chain, and in Fig. 10 the analysis for the Hubbard model. In particular, we compare the similarity index obtained for the network trained in the full space (Fig. 8 (a,c), Fig. 9 (a,c), and Fig. 10 (a,c)) and in the restricted space (Fig. 8 (b,d), Fig. 9 (b,d), and Fig. 10 (b,d)). It is clearly observed that, for the three considered models, the two cGANs have nearly the same performance, highlighting the extrapolation capability of the restricted cGAN. This finding clearly shows that the cGAN effectively learns to extrapolate the dynamical data. This is clearly seen in Fig. 15 in App. C, where we show a comparison between a real dynamical correlator and a generated one at a point in phase space in which the network was not trained, showing faithful agreement. Finally, to emphasize the role of hidden parameters, we show in Fig. 10 (e) the similarity index between generated images with different randomness. This shows that the small differences between images with the same parameters are well captured by the cGAN, and in particular have the same order of magnitude of fluctuations as the SSIM in the whole phase space.

VI. HAMILTONIAN INFERENCE AND DATA ASSESSMENT WITH THE GENERATIVE MODEL
In this section, we demonstrate how our conditional generative adversarial model allows us to tackle two additional tasks by exploiting the trained discriminator network as an automatic byproduct of the trained algorithm. The focus of this section is not the generator, which was responsible for the generation of the spectra in the previous sections, but the discriminator, which up to this point has only been used during the cGAN training process. The discriminator is trained to distinguish between real, physical systems and unrealistic ones. This feature provides the fundamental ingredient to perform parameter inference and anomaly detection.

A. Hamiltonian learning with the generative model
Here we show how the discriminator network of the generative model allows us to directly extract the physical parameters of a given dynamical correlator. The estimation of physical parameters from data is commonly referred to as Hamiltonian learning and has been explored with a variety of machine learning techniques [96-101]. While those methodologies are usually developed specifically for this purpose, conditional generative algorithms provide this functionality as a direct consequence of their training.
The discriminator learns to assess whether a certain dynamical correlator corresponds to the physical parameters given as conditional inputs. Since this evaluation is essentially instantaneous, this functionality can be directly applied to ask the discriminator whether a dynamical correlator corresponds to every single possible Hamiltonian. This procedure allows extracting the confidence that the discriminator has for the whole set of parameters, as shown in Fig. 11. The probability estimations are shown in Fig. 11 (b), Fig. 11 (d), and Fig. 11 (f) for the three studied many-body systems.
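In practice, this inference loop amounts to scanning the grid of candidate conditional parameters y and recording the discriminator confidence D(x|y) for the measured spectrum x. A minimal Python sketch, where `discriminator` is a toy stand-in peaking at the true parameters (its name, the Gaussian confidence profile, and the grid are our own illustrative assumptions, not the paper's code):

```python
import numpy as np

def discriminator(spectrum, params, true_params=(0.5, 0.3)):
    # Toy confidence: a Gaussian around the true parameters; a trained
    # cGAN discriminator would return its real/fake probability instead.
    d = np.sum((np.asarray(params) - np.asarray(true_params)) ** 2)
    return float(np.exp(-d / 0.02))

def infer_parameters(spectrum, grid_a, grid_b):
    """Return the confidence map D(x|y) over all candidate parameters y."""
    conf = np.array([[discriminator(spectrum, (a, b)) for b in grid_b]
                     for a in grid_a])
    ia, ib = np.unravel_index(conf.argmax(), conf.shape)
    return conf, (grid_a[ia], grid_b[ib])

grid = np.linspace(0, 1, 21)
conf_map, best = infer_parameters(None, grid, grid)
print(best)  # parameters with the highest discriminator confidence
```

The full confidence map, rather than only its maximum, is what is plotted in Fig. 11 (b,d,f): broad ridges in the map signal parameters that barely affect the spectrum.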
For the S = 1/2 many-body dynamical correlator in Fig. 11 (a), the discriminator is able to predict the conditional parameter Ny with high accuracy, yet yields a whole range for Bx. This is due to the fact that Ny has a very strong impact on the dynamical correlator, whereas Bx leaves the spectrum almost unchanged. Therefore, the discriminator detects a strong dependence on Ny, which also appears in the exact tensor-network calculations. Interestingly, the consistency of that dynamical correlator with several Hamiltonians simultaneously would represent a challenge for parameter extraction purely based on supervised learning due to the non-unique parent Hamiltonian [102], representing an advantage of generative-based parameter estimation.
We now move on to the gapped S = 1 chain. The parameter predictions for the S = 1 spectrum, shown in Fig. 11 (d), provide a single maximum in the parameter assessment, determining the real parameters with good precision. In comparison with the S = 1/2 chain, for the S = 1 chain the parameters are uniquely determined by the provided dynamical spin correlator. This enhanced accuracy can also be rationalized from the existence of both bulk and edge excitations, which provide potentially complementary information to the discriminator.
Finally, we move on to the interacting fermionic Hubbard model. The predictions for the Hubbard model, shown in Fig. 11 (f), estimate the area of the exact conditional parameters well and provide a unique maximum for the estimated parameters. In comparison with the S = 1 model, the estimated area is broader and less sharply peaked, which can be related to the higher complexity of the spectra of this many-body model. In particular, the features of the spectra are not as clear and distinct as in the case of the S = 1 model. Considering that we use the same amount of training data for each many-body system, the differences in accuracy may be related to the higher complexity of the dynamical correlator of the Hubbard model in comparison with the S = 1 chain.
Here, we elaborate on the advantage of the cGAN over traditional methods to estimate the phase space of a Hamiltonian. From the theoretical point of view, computing the dynamical correlator requires solving an interacting quantum many-body Hamiltonian. As a reference, computing the dynamical correlator of an S = 1/2 chain with exact dense diagonalization takes 1 second for 10 sites, 10 hours for 15 sites, and would take 900 years for 20 sites. Many-body tensor networks [4,103-105] provide a dramatic speed-up with respect to exact diagonalization, allowing the dynamical correlator of an S = 1/2 chain with 20 sites to be computed in 30 minutes. However, for more complex models such as the Hubbard model, computing the dynamical correlator [54,105,106] for 20 sites already takes 10 hours. All the times above correspond to a single Hamiltonian. In practice, if we are interested in exploring the full phase diagram of a Hamiltonian, computing dynamical correlators even with tensor networks quickly becomes unfeasible. This becomes especially critical for procedures that unavoidably involve exploring the full combination of parameters in the Hamiltonian, as is the case for Hamiltonian learning. In contrast, with the cGAN algorithm demonstrated in our manuscript, we can generate the dynamical correlators of a given Hamiltonian almost instantaneously. Even accounting for the required training data, the cGAN approach offers a significant speedup over the previously mentioned computational methods, especially for studies covering a large region of the parameter space. In practice, this implies that a Hamiltonian inference problem that requires 10 days with many-body tensor networks can be performed with our cGAN algorithm in 30 seconds. This dramatic speed-up demonstrates that cGANs are highly valuable algorithms for studying quantum many-body systems, for a task that can be directly applied to experimentally measured data.

FIG. 12. Outlier detection of faulty unphysical many-body dynamical spectra, emulated by using insufficient bond dimension in tensor-network calculations for the S = 1/2 (a), S = 1 (b), and Hubbard model (c). The green rectangle marks the bond dimension for which the dynamical correlator is considered as real by the discriminator of the cGAN.
We would like to note that the existence of a wide confidence region is a direct consequence of the nature of our problem. In particular, the models we are considering have a set of known parameters, acting as conditional parameters of the GAN, and, most importantly, a set of parameters unknown to the algorithm that enter as the generative noise of the GAN. This is directly observed in Fig. 7 (a,c,e), which corresponds to the similarity between real images with different hidden variables. Physically, this implies that we are focusing on systems for which we know some physical parameters, but which potentially contain a variety of hidden variables not considered experimentally that can slightly modify the final result. This would be the typical situation in an actual experiment, in which the hidden parameters account for disorder, extra terms in the Hamiltonian, or perturbations to the system that are not directly controllable. In other words, experimental Hamiltonians can have extra terms not originally considered in the model, and this uncertainty is explicitly incorporated in our algorithm through the noise of the generative network. As a result, the confidence in the parameter extraction of Fig. 11 directly reflects the fact that the hidden variables can slightly impact the dynamical correlators of the system, giving rise to an intrinsic uncertainty in the parameter extraction. In particular, in the absence of the generative noise, the confidence regions of Fig. 11 would be much narrower, as no hidden variables would be included in the model.
For the three considered models, the accuracy of the estimation shows small variations across the different parameter realizations and noise levels, yet overall gives a good estimation of the vicinity of the exact conditional parameters for each of the studied systems. While we performed parameter extraction here solely with the dynamical correlators, it is worth noting that an analogous procedure can be extended by training a generative model with combined time-dependent [101,107] or ground-state observables [108,109]. Finally, it is worth noting that while here we focused on simulated dynamical correlators, this procedure can be readily applied to experimentally measured spin excitations [49,50], providing a procedure for experimental Hamiltonian extraction with conditional generative adversarial networks.

B. Generative model as a many-body assessor
When observing complex phenomena in a quantum system, a key question is whether the observed behavior corresponds to the targeted physical state of the system or reflects an undesired artifact of the underlying methodology or setup [110-112]. In particular, many-body calculations often require a degree of controlled accuracy that is model- and method-dependent. However, estimating whether a certain many-body phenomenon represents a physical system solely from the observation of the dynamical excitations represents an outstanding challenge even for human experts. Here we address how the discriminator provides a direct algorithm to assess whether a certain dynamical correlator corresponds to a physically meaningful system, or rather reflects an artifact of the underlying methodology.
The trained discriminator can directly assess whether a certain input corresponds to a real result, as a direct consequence of the competitive training with the generator. This procedure provides discriminator-based outlier detection at no extra cost after the training of the generative model. To study this effect, we generate different dynamical correlators computed with different degrees of accuracy, which in our tensor-network formalism is directly controlled by the bond dimension of the matrix product state. The discriminator is then used to detect spectra with insufficient accuracy, corresponding to ill-converged results. Figure 12 shows the results of the outlier detection for each many-body system. For the S = 1/2 system in Fig. 12 (a), the discriminator detects outliers for a bond dimension lower than d_bond = 15; for the S = 1 system the numerical accuracy becomes insufficient below d_bond = 30; and for the Hubbard model, the bond dimension has to be close to d_bond = 40 in order to be considered a physically meaningful result by the discriminator. The increasing bond dimension required to pass the discriminator test is a direct consequence of the increasing local Hilbert space of the underlying models and reflects the higher entanglement of the respective many-body states.
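The acceptance test above reduces to thresholding the discriminator probability as a function of bond dimension. A minimal sketch, where the score table is a hypothetical stand-in for the trained discriminator's outputs (the numbers and the threshold are illustrative assumptions mimicking the trend of Fig. 12, not the paper's values):

```python
# Hedged sketch: flag ill-converged spectra by thresholding the
# discriminator's real/fake probability as a function of bond dimension.

def flag_converged(scores, threshold=0.5):
    """scores: {d_bond: discriminator probability}; returns accepted d_bond values."""
    return sorted(d for d, p in scores.items() if p >= threshold)

# Illustrative probabilities: confidence grows with the bond dimension
# of the underlying tensor-network calculation.
scores = {10: 0.05, 15: 0.20, 20: 0.40, 30: 0.70, 40: 0.90}
accepted = flag_converged(scores)
print(accepted)  # -> [30, 40]: only well-converged spectra pass
```

The model dependence discussed below then shows up as the threshold-crossing bond dimension shifting upward from the S = 1/2 chain to the Hubbard model.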
Here, we elaborate on the reason behind the degradation of the data as d_bond is lowered. The tensor-network calculations were performed with a bond dimension d_bond high enough to provide fully-converged dynamical correlators for the three models considered. As the bond dimension is lowered, the quality of the calculation degrades, yet the degradation is model-dependent. In particular, for the S = 1/2 model (Fig. 12 (a)) a bond dimension d_bond = 30 still provides dynamical correlators of reasonable quality, whereas for the Hubbard chain (Fig. 12 (c)) the quality decreases much faster as the bond dimension is reduced. A more detailed study of the Hubbard model (Fig. 12 (c)) in the bond-dimension range d_bond = 30 − 40 is provided in App. C.
It is worth noting that all of the previous assessment is performed including noisy ξα terms in the Hamiltonian, demonstrating that the generative model distinguishes between physical noise in the Hamiltonian parameters and artifacts stemming from the computational procedure. Furthermore, while in this section we focused on simulated data, an analogous procedure can be extended to experimental data, providing a methodology to assess experimental measurements using generative adversarial learning.
For the models considered in this work, we have not explored the possibility of using the GAN algorithm to extract the critical points of the model. This would certainly be a possibility of great interest in the future. From the practical point of view, a procedure analogous to that of Nature Physics 13, 435 (2017) could be implemented with the dynamical correlators using learning by confusion. Let us consider the specific case of a phase transition from a quantum-disordered magnet to a magnetically ordered state. In this situation, the ordered magnet would host magnon excitations [43] with a well-defined energy versus momentum relation, which would be directly reflected in the dynamical correlator. In stark contrast, for a quantum-disordered magnet, spin excitations would correspond to two-spinon modes [43] that lead to a continuum of S = 1 excitations lacking a well-defined energy-momentum dispersion. This qualitative difference between the two phases directly signals the nature of the ground state. We note that this is analogous to the magnetization observable used in Nature Physics 13, 435 (2017). As a result, using the strategy of learning by confusion with the discriminator would automatically allow extracting the critical point.

VII. CONCLUSIONS
To summarize, we demonstrated how continuous conditional generative adversarial networks allow simulating dynamical correlators for many-body systems that are almost indistinguishable from exact many-body calculations. In stark contrast with conventional supervised algorithms, our methodology allows us to simultaneously account for hidden variables unknown to the model, intrinsically account for randomness in the models, reduce by about one order of magnitude the many-body data required for the training, and exploit the discriminator for Hamiltonian inference and anomaly detection.
In particular, we have demonstrated our methodology with three different types of many-body Hamiltonians: a gapless S = 1/2 model featuring spinon excitations, an interacting system with topological order and topological boundary modes, and an interacting fermionic system at arbitrary electron fillings. After the training process, the cGAN algorithm is able to simulate these systems instantaneously for arbitrary combinations of conditional Hamiltonian parameters. Furthermore, the trained cGAN is not only able to simulate these systems with the generator; the trained discriminator can also be utilized for estimating the parameters of a Hamiltonian from data and for the detection of outliers and wrongly-labeled data without the requirement of additional training. These two features can be directly extended with the trained algorithm to determine unknown underlying Hamiltonians and detect artifacts, and ultimately they can be directly applied to experimental data. Furthermore, we achieved a significant increase in the performance of Hamiltonian learning over conventional methods, which demonstrates the power of cGANs to study quantum many-body problems very efficiently.
Our results establish a first step towards exploiting generative adversarial machine learning to simulate and design many-body matter. Beyond the results demonstrated here, it is worth noting that the trained cGAN algorithm can be used as a tool in experiments to investigate underlying unknown Hamiltonians of systems, either with the generator or the discriminator, considering the speed-up for computations of large system sizes and the possibility of simulating Hamiltonians with arbitrary parameters. It is also worth noting that our results use a fully-connected deep neural network for each of the generator and discriminator, leaving plenty of room for future optimizations with deep convolutional neural networks for image compression and feature extraction. Those improvements would allow increasing the accuracy of the cGAN, combining different systems into one single algorithm, and increasing the system size by pre-training on smaller systems which are computationally more feasible to generate.

Appendix A: GAN architecture
For each many-body system of Sec. II we train a separate cGAN. The explicit network architecture and training parameters can be found in Tables I and II, as well as in the code [73]. In general, we use fully-connected deep neural networks for the generator and discriminator with a maximal layer dimension of 4048. For the hidden layers, we use the LeakyReLU activation function [113]. Kernel regularizers are included in every dense layer, applying penalties on layer parameters during the training. The output activation function of the discriminator is the sigmoid function, and for the generator the output function is the tanh function [11]. Additional Gaussian noise added into the discriminator (see Table II) helps to stabilize the training process.
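For concreteness, the forward pass of such a fully-connected conditional generator can be sketched in a few lines of numpy. The layer widths below are illustrative placeholders, not the exact values of Tables I and II:

```python
import numpy as np

# Minimal numpy sketch of the fully-connected cGAN generator described above
# (layer sizes are illustrative, not the paper's exact architecture).

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0, x, alpha * x)

def dense(x, w, b):
    return x @ w + b

rng = np.random.default_rng(0)
noise_dim, cond_dim, out_dim = 100, 2, 900  # 30x30 spectrum, flattened

# Generator: (noise, conditional parameters) -> spectrum in [-1, 1] via tanh
w1 = rng.normal(0, 0.02, (noise_dim + cond_dim, 512))
b1 = np.zeros(512)
w2 = rng.normal(0, 0.02, (512, out_dim))
b2 = np.zeros(out_dim)

def generator(z, y):
    h = leaky_relu(dense(np.concatenate([z, y]), w1, b1))
    return np.tanh(dense(h, w2, b2))

spectrum = generator(rng.normal(size=noise_dim), np.array([0.5, 0.3]))
print(spectrum.shape)  # (900,)
```

The discriminator has the mirrored structure, taking the flattened spectrum concatenated with the conditional parameters and ending in a single sigmoid unit.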
Training — We train a separate cGAN for each system, using a batch size of 100 and 40 epochs for the training. Choosing a smaller batch size of 50 for 5-10 additional epochs may increase the accuracy of the networks. For the generator, we use the Adam optimizer with a learning rate of 0.001, and for the discriminator stochastic gradient descent. The images of the real-space excitation spectra are flattened into a 1D array of size 900 as input for the fully-connected neural network of the generator. For each many-body system, we created 2250 spectra with random conditional parameter combinations. We calculated the dynamical correlators as described in Sec. II B for the many-body Hamiltonians introduced in Sec. IV. In order to minimize the required calculations for the creation of the training set, we use two methods to enhance the number of examples without performing further tensor-network calculations. First, we mirror the 1D systems of 18 sites around site 9, thereby doubling the number of systems. Second, we mimic the background noise of the random parameters without affecting the conditional parameters. We change the intensities depending on the frequency regime by defining a function that adds fluctuations around each energy level seen in the spectra of Sec. IV. The data augmentation function is defined with random numbers ξ1 ∈ [0.04, 0.1] and ξ2 = 2πn/2.5 with n ∈ [1, 4]. This function adds a different amount of intensity to the dynamical correlator depending on the frequency ω. In an experiment, this procedure would mimic any potential form factor affecting the intensity of a many-body transition intrinsic to the measurement setup. This method allows adding an arbitrary number of training examples with the same conditional parameters but with different noise values. In our case, we could enlarge the training set from 2250 original tensor-network calculations to 36 000 examples, which represent our full training set. The pre-processing procedure can be found in [73] and consists of a re-scaling of the input spectra between 0 and 1, as well as a separate re-scaling of the conditional parameters.
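The augmentation step above can be sketched as follows. The exact functional form is given in the code [73]; the sinusoidal frequency modulation below is our own illustrative assumption, using only the stated ranges for ξ1 and ξ2:

```python
import numpy as np

# Illustrative data augmentation (the exact functional form is in the code [73]):
# add a frequency-dependent intensity modulation with random amplitude
# xi1 in [0.04, 0.1] and frequency xi2 = 2*pi*n/2.5, n in {1, ..., 4}.

def augment(spectrum, omega, rng):
    """spectrum: (n_omega, n_sites) dynamical correlator; omega: frequency grid."""
    xi1 = rng.uniform(0.04, 0.1)
    n = rng.integers(1, 5)               # n in {1, 2, 3, 4}
    xi2 = 2 * np.pi * n / 2.5
    noise = xi1 * np.abs(np.sin(xi2 * omega))
    return spectrum + noise[:, None]     # same conditional params, new noise

rng = np.random.default_rng(42)
omega = np.linspace(0, 2.5, 30)
spectrum = np.zeros((30, 18))            # 30 frequencies x 18 sites
augmented = augment(spectrum, omega, rng)
print(augmented.shape)                   # (30, 18)
```

Repeatedly applying such a function to the same tensor-network spectrum yields many training examples with identical conditional parameters but different noise realizations.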
As a first demonstration, we show in Fig. 14 a quantitative benchmark of the two cGANs, one trained on the full parameter space and one trained on a restricted subset. The quality factor compares the similarity between real data with random hidden variables and the cGAN-generated data, for the cGAN trained on the full parameter space (Fig. 14 (a,c,e)) and for the one trained on a subset of parameters (Fig. 14 (b,d,f)). The parameter space inside the orange and red squares is not included in the training of the restricted cGAN (see also Fig. 13), but it is included in the validation to benchmark the extrapolation capabilities of the network. In particular, we compute D_full(λ) = D[S_real(χ, λ), S_cGAN^full(χ′, λ)] and D_subset(λ) = D[S_real(χ, λ), S_cGAN^subset(χ′, λ)], where S_real and S_cGAN are the dynamical correlators computed with the tensor-network formalism and with the cGAN, χ and χ′ are sets of random hidden variables, and λ are the physical parameters. The functional D measures the similarity between two images and is taken as the structural similarity index measure (SSIM) [92-95]. The similarity between two sets of data is therefore given by the SSIM, where the closer the value is to 1, the greater the agreement. We elaborate on the SSIM below.
The maps of Fig. 14 (a,c,e) show D_full for the three models of this manuscript, and the maps of Fig. 14 (b,d,f) show D_subset for each model. In particular, the cGAN trained on the full parameter space in Fig. 14 (a,c,e) shows small fluctuations between both images, similar to those of the subset cGAN of Fig. 14 (b,d,f). The fluctuations appear due to the hidden variables and should therefore be taken as the reference similarity to benchmark the subset cGAN in the unknown parameter space inside the squares. For each point of the phase space, the values of the hidden variables are taken randomly for the computed Hamiltonian. The comparison of the similarity measure inside the unknown parameter space highlights the extrapolation capability of the cGAN. It is observed that the restricted cGAN gives results whose similarity is analogous to the fully-trained cGAN. Most importantly, it is observed that the SSIM takes similar values inside and outside the excluded areas, signaling that the cGAN learns to faithfully generate data in parts of the phase diagram not used for the training.

FIG. 15. Examples of predictions on the validation set for the network trained in the subspace. It is clearly observed that the generated images faithfully correspond to the real ones even in the region of phase space in which the network was not trained.
We now elaborate on the reason for choosing the structural similarity index measure instead of a plain pixel-wise difference. Given the diversity in the dynamical spectra for a given set of parameters due to the hidden variables, spectra corresponding to the same parameters will have small differences. These small differences include small shifts in the peaks and are a direct consequence of the hidden variables. A pixel-wise difference would yield a huge error even for images whose peaks differ only slightly, and therefore would not be a faithful measure of the similarity between these images. An algorithm accounting for the similarity between images must therefore not consider such small shifts as a source of error, and rather focus on the overall features of the image. The structural similarity index [92-95] allows measuring the similarity between two images by incorporating perceptual phenomena [95], in particular focusing on the spatial correlations of the image, which correspond to the features that allow assessing the quality of the generative network. It is important to note that a simple pixel-wise difference would not be able to account for the strong inter-dependencies of spatially close pixels. The structural similarity index allows accounting for the similarities between images that differ only in the hidden variables, in particular through the luminance-masking and contrast-masking terms [95] that arise due to the peaks of the dynamical correlators.
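To make the comparison concrete, a single-window form of the SSIM already illustrates how it responds to a small peak shift. Note that the references [92-95] use a local, windowed version; this global variant is a simplified sketch:

```python
import numpy as np

# Simplified single-window SSIM (the paper uses the windowed SSIM of [92-95]).
def ssim(x, y, c1=1e-4, c2=9e-4):
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    luminance = (2 * mx * my + c1) / (mx**2 + my**2 + c1)
    structure = (2 * cov + c2) / (vx + vy + c2)
    return luminance * structure

# Two identical Gaussian peaks, one shifted by two pixels, mimicking the
# effect of a hidden variable on a dynamical-correlator peak.
i = np.arange(900)
x = np.exp(-(i - 400) ** 2 / (2 * 3.0 ** 2))
y = np.exp(-(i - 402) ** 2 / (2 * 3.0 ** 2))

print(ssim(x, x))  # identical spectra -> 1.0
print(ssim(x, y))  # small shift: SSIM remains high (~0.9)
print(np.mean((x - y) ** 2) / np.mean(x ** 2))  # relative pixel-wise error (~0.2)
```

The shifted peak keeps an SSIM near 0.9 while the normalized pixel-wise error is already around 20%, which is exactly the tolerance to hidden-variable fluctuations argued for above.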

Discriminator
The cGAN algorithm is trained with high-quality many-body data, with d_bond large enough that all results are converged. The training examples contain, however, different values of the hidden variables, which give rise to small differences in the dynamical correlators. While both the hidden variables and under-convergence in d_bond give rise to small differences, distinguishing them is a highly non-trivial task. Results with different hidden values are physically significant, whereas results ill-converged in d_bond are outliers that should be flagged as defective data. The key point of our outlier detection is that our algorithm learns to distinguish fluctuations arising from the hidden variables from fluctuations coming from an ill-converged result. While it is easy for a human to see that two images show small fluctuations, determining whether the fluctuations correspond to different hidden variables (i.e. acceptable data) or to ill-convergence (i.e. defective data) is a remarkably challenging problem even for human experts. To the best of our knowledge, no algorithm exists for performing this distinction.
To demonstrate that the outlier detection also allows flagging unphysical results in the Hubbard model, we show in Fig. 16 the dynamical correlators for the Hubbard chain with higher bond dimensions, in particular from d_bond = 30 − 40. It is clearly observed that, while the dynamical correlators show a high similarity displaying only small fluctuations, the discriminator is capable of distinguishing the small fluctuations coming from an ill-converged calculation from the fluctuations of the hidden variables.

FIG. 1. Comparison of the dynamical correlator computed with an exact many-body formalism (left) and a trained conditional generative adversarial neural network (right). After training, the cGAN allows generating many-body spin correlators of analogous quality to the many-body formalism in the whole parameter space. The trained cGAN accounts simultaneously both for controlled Hamiltonian parameters and hidden disorder effects.

FIG. 2. Illustration of the cGAN architecture used in this work. The cGAN consists of two deep neural networks, the generator G and discriminator D, playing a min-max game against each other. The generator learns the (labeled) data distribution from the real images and the discriminator tries to distinguish between real and generated images. The network parameters are updated during the training via the D-loss and G-loss.

FIG. 3. Real space DOS of a one-dimensional tight-binding system of length n = 18 generated with the cGAN (a,c,e) in comparison with the exact tight-binding calculations (b,d,f). The x-axis labels the site, the y-axis the frequency, and the z-axis the (local) DOS (Eq. 6). Shown are 3 combinations of the conditional parameters (µ, m) in (a,b), (c,d), and (e,f), as defined in Eq. 5.

FIG. 4. Real space S z spin correlators of a one-dimensional S = 1/2 spin chain of length n = 18 generated with the cGAN (a,c,e) in comparison with exact tensor-network calculations (b,d,f). The x-axis labels the site, the y-axis the frequency, and the z-axis the full spin correlator (Eq. 8). Shown are 3 combinations of the conditional parameters (Ny, Bx) in (a,b), (c,d), and (e,f), as defined in Eq. 7.

FIG. 5. Real space S z spin correlators of a one-dimensional S = 1 spin chain of length n = 18 generated with the cGAN (a,c,e) in comparison with exact tensor-network calculations (b,d,f). The x-axis labels the site, the y-axis the frequency, and the z-axis the full spin correlator (Eq. 10). Shown are 3 combinations of the conditional parameters (∆J, J2) in (a,b), (c,d), and (e,f), as defined in Eq. 9.

FIG. 6. Real space S z spin correlators of a one-dimensional Hubbard model of length n = 18 generated with the cGAN (a,c,e) in comparison with exact tensor-network calculations (b,d,f). The x-axis labels the site, the y-axis the frequency, and the z-axis the spin correlator in the z-direction (Eq. 12). Shown are 3 combinations of the conditional parameters (U, µ) in (a,b), (c,d), and (e,f), as defined in Eq. 11.

FIG. 7. Similarity maps of the considered parameter space for the S = 1/2 model (a,b), the S = 1 model (c,d), and the Hubbard model (e,f). Panels (a,c,e) are obtained comparing the similarity between different real data and the cGAN trained on the full parameter space. Panels (b,d,f) are obtained comparing real data with the cGAN trained on a restricted subset. The training of the restricted cGAN was performed excluding examples inside the orange (and red) region. It is observed that the similarity between both cGANs does not show significant differences besides small fluctuations due to the intrinsic randomness.

FIG. 8. Statistics of the SSIM for the S = 1/2 model. Panels (a) and (c) correspond to the cGAN trained on the total parameter space; panels (b) and (d) correspond to the cGAN trained in the restricted space. Panels (a) and (b) are the SSIMs for the whole parameter space (randomly sampled for 400 parameter combinations) and panels (c) and (d) are the SSIMs for the validation set only. It is observed that the two cGANs have nearly the same performance, highlighting the extrapolation capability of the restricted cGAN.

FIG. 9. Statistics of the SSIM for the S = 1 model. Panels (a) and (c) correspond to the cGAN trained on the total parameter space; panels (b) and (d) correspond to the cGAN trained in the restricted space. Panels (a) and (b) are the SSIMs for the whole parameter space (randomly sampled for 400 parameter combinations) and panels (c) and (d) are the SSIMs for the validation set only. It is observed that the two cGANs have nearly the same performance, highlighting the extrapolation capability of the restricted cGAN.

FIG. 10. Statistics of the SSIM for the Hubbard model. Panels (a) and (c) correspond to the cGAN trained on the total parameter space; panels (b) and (d) correspond to the cGAN trained in the restricted space. Panels (a) and (b) are the SSIMs for the whole parameter space (randomly sampled for 400 parameter combinations) and panels (c) and (d) are the SSIMs for the validation set only. The two colors in (c,d) correspond to the two different excluded areas. It is observed that the two cGANs have nearly the same performance, highlighting the extrapolation capability of the restricted cGAN. Panel (e) shows the SSIM for a single parameter set, averaged over 200 different generated images of the cGAN, showing that the similarity changes with each simulation of the cGAN due to the intrinsic randomness.

FIG. 11. Hamiltonian learning with the discriminator network of the cGAN for the many-body spectra of a one-dimensional S = 1/2 system (a), S = 1 system (c), and the Hubbard model (e). The dynamical correlators S(ω, n) are defined in Sec. IV. The corresponding probability predictions of the discriminator, defined as D(x|y) (see Eq. 2) with x the input spectra and y the conditional parameters, are shown in (b,d,f) on a squared scale. The black crosses mark the exact location of the conditional parameters of the input spectra.

FIG. 13. Training-validation split in the full parameter range. The orange areas correspond to regions for which no example is provided to the cGAN algorithm. The orange regions were chosen randomly and no strong dependence of the results on their location is found.

FIG. 14. Similarity maps for the S = 1/2 model (a,b), the S = 1 model (c,d), and the Hubbard model (e,f). Panels (a,c,e) are obtained comparing the similarity between different real data and the cGAN trained on the full parameter space. Panels (b,d,f) are obtained comparing real data with the cGAN trained on a restricted subset. The training of the restricted cGAN was performed excluding examples inside the orange (and red) region. It is observed that the similarity between both cGANs does not show significant differences besides small fluctuations due to the intrinsic randomness. The quality of the generations of both cGANs is furthermore analogous inside and outside the region excluded from the training.

TABLE II. Architecture of the discriminator network.