Learning the constitutive relation of polymeric flows with memory

We develop a learning strategy to infer the constitutive relation for the stress of polymeric flows with memory. We make no assumptions regarding the functional form of the constitutive relations, except that they should be expressible in differential form as a function of the local stress- and strain-rate tensors. In particular, we use a Gaussian Process regression to infer the constitutive relations from stress trajectories generated from small-scale (fixed strain-rate) microscopic polymer simulations. For simplicity, a Hookean dumbbell representation is used as a microscopic model, but the method itself can be generalized to incorporate more realistic descriptions. The learned constitutive relation is then used to perform macroscopic flow simulations, allowing us to update the stress distribution in the fluid in a manner that accounts for the microscopic polymer dynamics. The results using the learned constitutive relation are in excellent agreement with full Multi-Scale Simulations, which directly couple micro/macro degrees of freedom, as well as the exact analytical solution given by the Maxwell constitutive relation. We are able to fully capture the history dependence of the flow, as well as the elastic effects in the fluid. We expect the proposed learning/simulation approach to be used not only to study the dynamics of entangled polymer flows, but also for the complex dynamics of other Soft Matter systems, which possess a similar hierarchy of length- and time-scales.


I. INTRODUCTION
Polymeric materials are ubiquitous in our modern industrial societies, having transformed our food, infrastructure, and modes of transportation. Not surprisingly, the 20th century has been dubbed the "Polymer Age" by Rubinstein and Colby [1]. There is a growing demand for producing more high-functioning polymeric products, and for doing so in a cost-effective way. Unfortunately, there is still much we do not understand about polymer physics, particularly with regard to the fabrication of sophisticated polymer products. At present, one of the preferred manufacturing methods for polymeric materials is polymer melt processing, where a molten polymer is extruded or molded into the desired shape, before being allowed to cool and solidify [2]. To reliably control the resultant properties of the product, we need to understand not only the macroscopic flow behavior during processing, but also the microscopic dynamics of the polymer chains. However, it is not easy to understand such properties using only experimental observations, due to the hierarchy of length- and time-scales needed to characterize the microscopic polymer dynamics and the macroscopic flow. Computer simulations, which provide an alternative and complementary approach, have become an indispensable tool for studying such systems.
Molecular Dynamics (MD) simulations provide access to the detailed dynamics of polymer chains, but they are unable to deal with the macroscopic flow, because of their prohibitive cost. Computational Fluid Dynamics (CFD) simulations, with an appropriate constitutive equation for the stress tensor of the polymeric material [3], can predict the macroscopic flow behavior, but cannot provide any microscopic information on the constituent polymer chains. To address this issue, and elucidate the microscopic origin of the flow problems at hand, Multi-Scale Simulation (MSS) methods, which make it possible to simultaneously consider the dynamics at both scales, have been extensively developed over the last twenty years. Within the MSS approach, the macroscopic and microscopic degrees of freedom are coupled through the stress and strain-rate tensor fields. Originally proposed by Laso and Öttinger in 1993 [4,5], with the so-called CONNFFESSIT model, these types of approaches represent the state-of-the-art in polymer rheology, as they provide a rigorous connection between the microscopic and macroscopic degrees of freedom [6-16]. However, given their computational complexity, they have been limited to simple flow geometries and small system sizes, and have yet to be widely adopted within industry.
In this paper, we demonstrate how to leverage the power of Machine-Learning techniques to accelerate MSS to the point where they are competitive with existing macroscopic descriptions, without any significant loss of accuracy. In particular, we will show that it is possible to learn the constitutive relation from training data generated from small-system-size microscopic simulations. To this end, we adopt a simple microscopic description, which models the polymers as non-interacting Hookean dumbbells. We note that the proposed method is applicable to any microscopic polymer model; the Hookean dumbbell model has been chosen for its simplicity. As is well known, in the limit when the number of dumbbells goes to infinity, the time-evolution equation for the stress of such an ensemble converges to an analytic expression. This makes it possible to give a stringent assessment of our proposed ML approach. We then perform simulations at fixed strain-rates, in order to measure the time evolution of the stress. This information is used as the training data for a Gaussian Process (GP) regression, in order to learn the corresponding constitutive relation. A GP approach avoids over-fitting of the data and allows us to incorporate unknown and/or noisy data within a Bayesian framework, in a convenient and efficient manner [17-20]. The learned constitutive relations are then used in macroscopic flow simulations, opening the possibility of probing length- and time-scales that would be unreachable with standard MSS techniques. Previous work by Zhao et al. has used a similar approach to learn the constitutive relation of generalized Newtonian fluids [21], but as proposed, the method cannot be applied to non-Newtonian viscoelastic materials that exhibit a history-dependent flow. A recent extension of this method has used GP to learn the effective viscosity and relaxation time needed to parameterize a given viscoelastic constitutive relation [22]. In both cases, however, the functional form of the constitutive equation is a fixed input of the method. Here we show that this restriction can be lifted, and that the constitutive relation itself can be learned.
Compared to full MSS (using Hookean dumbbells), we obtain speedups of around two orders of magnitude, and we expect this will only increase when more realistic (computationally expensive) polymer models are used, as the cost of performing the macroscopic flow simulations remains constant. Finally, we note that the proposed learning strategy, which is here used to learn the constitutive relation of polymeric flows from microscopic data, is not limited to polymeric materials. In fact, we envision similar approaches being used to bridge between length- and time-scales in other Soft-Matter systems, such as colloidal dispersions or cellular tissues.

II. MULTI-SCALE SIMULATIONS

A. Macroscopic Model
In order to consider the memory effects inherent to polymer flows, we adopt a Lagrangian particle description for the dynamics of the fluid. This allows us to account for the convection of the polymer chains and the corresponding strain-rate history dependence of their dynamics. In particular, we use the Smooth Particle Hydrodynamics (SPH) method (see Appendix A for details) [23]. The system is discretized into fluid particles, carrying mass, momentum, and all relevant hydrodynamic variables for the problem under consideration. Let x_i and v_i be the position and velocity of particle i; their time evolution is determined by the following equations
$$\frac{\mathrm{d} x_i}{\mathrm{d} t} = v_i, \qquad (1)$$
$$\rho_i \frac{\mathrm{d} v_i}{\mathrm{d} t} = \left[\nabla\cdot\sigma - \nabla p\right]_i + F(x_i), \qquad (2)$$
where ρ_i is the density of the fluid particle and F(x_i) is any external force acting on the particle (at position x_i). The first term on the right-hand side of Eq. (2) corresponds to the forces on the particle due to internal stresses in the fluid (with p the pressure field). The stress σ comes from the time-dependent state of the polymer chains, i.e., the orientation and stretching of the dumbbells. The pressure p_i is defined via the following quasi-incompressible equation of state
$$p_i = \frac{c_s^2 \rho_0}{\gamma}\left[\left(\frac{\rho_i}{\rho_0}\right)^{\gamma} - 1\right], \qquad (3)$$
with c_s the speed of sound and ρ_0 the initial density. Since we are interested in low-Reynolds-number flows, we use γ = 1 [24]. All that remains is to specify how σ is computed. The simplest approach would be to adopt a constitutive equation (e.g., Oldroyd-B); instead, within a MSS approach, we place microscopic polymer simulators inside each fluid particle, in order to directly couple the microscopic and macroscopic degrees of freedom [8,9,13-15]. The choice is then reduced to that of defining an appropriate microscopic model for the polymeric fluid.
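As a minimal illustration of the equation of state, the following Python sketch evaluates the particle pressure from its density. The Tait-like functional form and all parameter values are our assumptions for illustration; with γ = 1 it reduces to p = c_s²(ρ − ρ_0).

```python
import numpy as np

def pressure(rho, rho0=1.0, cs=10.0, gamma=1.0):
    """Quasi-incompressible (Tait-like) equation of state assumed here:
    p = cs^2 * rho0 / gamma * ((rho/rho0)^gamma - 1).
    With gamma = 1 this reduces to p = cs^2 * (rho - rho0)."""
    return cs**2 * rho0 / gamma * ((rho / rho0) ** gamma - 1.0)

# At the target density the pressure vanishes; small density
# fluctuations produce a restoring pressure of either sign.
print(pressure(1.0))
print(pressure(1.03))  # a 3% compression with cs = 10
```

A stiff (large) c_s keeps density fluctuations small, mimicking incompressibility while retaining an explicit time-stepping scheme.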

B. Microscopic Model
To describe the rheological properties of the polymeric fluid, we choose the simplest possible microscopic model, that of non-interacting Hookean dumbbells. Thus, we place N_p polymer chains inside each of the N_f fluid particles, with each polymer chain represented by two point particles connected by a Hookean spring (Fig. 1). While more realistic microscopic models are known, such as the finitely extensible (FENE) dumbbell model [25], the Rouse model [26], the Kremer-Grest bead-spring model [27], the Doi-Edwards reptation model [28], and the slip-link models [29-33], the basic Hookean dumbbell model offers one main advantage over the others: in the limit when the number of dumbbells goes to infinity, the exact analytical constitutive equation for the stress is known, and corresponds to that of a Maxwell viscoelastic fluid. This provides us with analytical results against which we can test our proposed learning strategy. However, the learning method we propose is not contingent on this choice, and in fact will be most useful when considering more sophisticated polymer models, for which full MSS can become prohibitively expensive.
Since we do not consider interactions between dumbbells, the configuration of the system can be described solely in terms of the distance vector r between the two beads composing the dumbbell. The dynamics of the chains are then determined by the following Langevin equation for r [25,34]
$$\zeta\left(\frac{\mathrm{d} r}{\mathrm{d} t} - \kappa\cdot r\right) = -2 H r + \xi(t), \qquad (4)$$
where H is the spring constant, κ the velocity gradient tensor, ζ the friction coefficient, and ξ a random force satisfying the fluctuation-dissipation theorem,
$$\langle \xi(t) \rangle = 0, \qquad \langle \xi(t)\,\xi(t') \rangle = 4 k_B T \zeta\, I\, \delta(t - t'), \qquad (5)$$
with I the unit tensor. The polymer contribution to the stress tensor can be expressed in Kramers' form as [35]
$$\sigma = n H \langle r r \rangle - n k_B T I, \qquad (6)$$
where n is the number density of dumbbells, k_B the Boltzmann constant, and T the temperature. From Eqs. (4)-(6), the following constitutive equation for the stress tensor can be derived
$$\sigma + \lambda\left(\frac{\mathrm{d}\sigma}{\mathrm{d} t} - \kappa\cdot\sigma - \sigma\cdot\kappa^{T}\right) = n k_B T \lambda \left(\kappa + \kappa^{T}\right), \qquad (7)$$
which corresponds to the upper-convected Maxwell model, with relaxation time λ = ζ/(4H).
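A minimal sketch of this microscopic simulator, under our reading of Eqs. (4)-(6): Euler-Maruyama integration of the connector vector r, followed by the Kramers stress. All parameter values (H, ζ, k_BT, the ensemble size, and the imposed shear rate) are illustrative, not those of the paper.

```python
import numpy as np

# Euler-Maruyama integration of dr/dt = kappa.r - (2H/zeta) r + noise,
# with <noise noise'> fixed by the fluctuation-dissipation theorem,
# followed by the Kramers stress sigma = n H <r r> - n kT I.
def simulate_dumbbells(kappa, n_dumbbells=20000, t_max=10.0, dt=0.01,
                       H=1.0, zeta=4.0, kT=1.0, n_density=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # start from the equilibrium Gaussian, <r_a r_b> = (kT/H) delta_ab
    r = rng.normal(0.0, np.sqrt(kT / H), size=(n_dumbbells, 3))
    amp = np.sqrt(4.0 * kT * dt / zeta)   # noise amplitude per step
    for _ in range(int(t_max / dt)):
        drift = r @ kappa.T - (2.0 * H / zeta) * r
        r += dt * drift + amp * rng.normal(size=r.shape)
    # Kramers form of the polymer stress (ensemble average of r r)
    return n_density * H * (r[:, :, None] * r[:, None, :]).mean(axis=0) \
        - n_density * kT * np.eye(3)

# Simple shear kappa_xy = gdot: the steady state of the upper-convected
# Maxwell model gives sigma_xy = n kT lambda gdot, lambda = zeta/(4H).
gdot = 1.0
kappa = np.zeros((3, 3)); kappa[0, 1] = gdot
sigma = simulate_dumbbells(kappa)
print(sigma[0, 1])  # close to n kT lambda gdot = 1
```

Running to t_max = 10λ lets the initial transient decay, so the measured σ_xy can be compared against the analytic Maxwell steady state.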

C. Nondimensionalized Equations
To facilitate the simulation and analysis, we rewrite the previous equations in non-dimensional form, introducing basic units for both the macroscopic and microscopic descriptions. These two sets are labeled with an uppercase (M) and lowercase (m) superscript, for macroscopic and microscopic units, respectively. At the macroscopic level, the characteristic scales are set by the length L and velocity U of the flow problem under consideration, and the corresponding fluid-advection time scale τ^(M) = L/U. Together with the shear viscosity of the fluid η_s, and the initial (target) density ρ_0, we use these characteristic scales to set the units of density, time, length, and stress to be ρ_0^(M) = ρ_0, t_0^(M) = τ^(M), l_0^(M) = L, and σ_0^(M) = η_s/τ^(M) = η_s U/L, respectively. With these units, the equations governing the macroscopic dynamics, Eqs. (1)-(3), become Eqs. (8)-(10), where a tilde (~) denotes an adimensional variable. Here, the control parameters are the Reynolds number, defined as Re = ρ_0 U L/η_s = ρ_0 U²/σ_0^(M), and the dimensionless artificial sound speed Cs² = c_s² ρ_0^(M) t_0^(M)/η_s. At the microscopic level, the characteristic scales are given by the equilibrium length of the dumbbells l_eq = (3 k_B T/H)^{1/2}, the dumbbell relaxation time τ^(m) = λ = ζ/(4H), and the shear viscosity of the Maxwell fluid η_s ≡ n k_B T λ. To facilitate comparisons between the microscopic and macroscopic models, we use the same time and stress units for both, t_0^(m) = t_0^(M) and σ_0^(m) = σ_0^(M) = n k_B T τ^(m)/τ^(M), while the microscopic unit of length is taken to be the equilibrium dumbbell length, l_0^(m) = l_eq. Thanks to the coupling between the macroscopic flow and the microscopic chain dynamics, the Deborah number, defined as the ratio of the time scales associated to the microscopic dumbbell relaxation and the macroscopic fluid advection, De = τ^(m)/τ^(M), has appeared in the definition of the unit stress, σ_0^(m) = σ_0^(M) = n k_B T De. With these microscopic units, Eqs. (4)-(7) become Eqs. (11)-(12), with the Deborah number De as the only control parameter. Note that κ̃ = κ t_0^(m) and ξ̃ = (t_0^(m)/(4 k_B T ζ))^{1/2} ξ, with ⟨ξ̃⟩ = 0 and ⟨ξ̃(t̃) ξ̃(t̃')⟩ = I δ(t̃ − t̃'). Finally, we stress that, while we have chosen τ^(M) as the unit of time, it can also be useful to use the dumbbell relaxation time τ^(m) ≡ λ as reference. Scaled time values using both units are directly related by the Deborah number, t̃ = t/τ^(M) = (t/λ) De.
In summary, the MSS method amounts to solving Eqs. (8)-(10) and (11)-(12), with the Reynolds number Re, the Deborah number De, and the dimensionless artificial sound speed Cs as the main control parameters. Unless stated otherwise, we focus on the Deborah number regime 10^{-3} ≤ De ≤ 10, at low Reynolds numbers Re ≪ 1, with the sound speed Cs = 10 chosen to keep the density variations within 3% [24], while still allowing for a relatively large simulation time step Δt̃ ≈ 10^{-6}. The strain-rate tensor is evaluated at the macroscopic level and used as input for the microscopic simulators, in order to evolve the configuration of the polymers (dumbbells). Then, the polymer contribution to the stress is computed and used as input to the macroscopic flow simulation, in order to update the fluid velocity, and the process is repeated (see Figure 1). This method has been successfully used to study a polymer melt-spinning process [14], as well as flows of well-entangled polymer melts in a contraction-expansion channel (with the dumbbell model replaced by the Doi-Takimoto slip-link model) [15]. Unfortunately, while the predictive capabilities of such a bottom-up approach represent the current state-of-the-art in the field of polymer simulations, their heavy computational cost has mostly limited them to relatively simple flow geometries in 2D. To address this issue, we propose a learning strategy based on Gaussian Processes, in order to uncouple the microscopic and macroscopic degrees of freedom in the governing equations. However, our method remains multi-scale, in the sense that the microscopic model is used to generate the training data used to learn the appropriate constitutive equation, i.e., how to evolve σ. This will allow us to consider the time evolution of the stress at the macroscopic level, but in a way that satisfies the dynamics of the underlying microscopic polymer model.

III. LEARNING STRATEGY
A. Gaussian Processes (GP)
Formally, a Gaussian Process is defined as "a collection of random variables, any finite number of which have a joint Gaussian distribution" [19]. It provides a (prior) probability distribution over functions f, allowing us to use known values of the function, the so-called "training" data, to infer the values of the function at new "test" positions. Let f(x) be a function from R^D to R, which is sampled at N values of the input x, x_i ∈ R^D (i = 1, ..., N). We denote by X = (x_1, ..., x_N) the D × N design matrix, and f = f(X) = (f_1, ..., f_N) the corresponding output vector, with f_i = f(x_i). The f_i are considered to be correlated random variables, where the correlation between any two of them, f_i and f_j, is assumed to be a function only of the input values, x_i and x_j. The joint distribution for the f_i is a multi-variate Gaussian, specified in terms of an average function µ(x) and a correlation function k(x, x').
The probability of observing function values f at X, given µ and k, is
$$p(f \mid X) = (2\pi)^{-N/2}\, |K(X,X)|^{-1/2} \exp\left(-\tfrac{1}{2}\, \delta f^{T} K(X,X)^{-1} \delta f\right), \qquad (14)$$
where δf = f − µ(X), and K(X, X) denotes the N × N correlation matrix, whose (i, j)-th entry is defined as K(X, X)_{ij} = k(x_i, x_j). Note that a GP is uniquely defined in terms of its average and correlation functions,
$$\mu(x) = \langle f(x) \rangle, \qquad k(x, x') = \langle \delta f(x)\, \delta f(x') \rangle. \qquad (15)$$
Without loss of generality, and in the absence of any information about f, one can take µ(x) = 0, which leaves only k to be specified. Note that no assumptions have been made regarding the functional form of f; the correlation function k only determines the higher-order properties of the family of functions from which f is drawn, such as continuity, differentiability, and periodicity. Following Rasmussen and Williams [19], we also use the following shorthand notation to specify a GP,
$$f \sim \mathcal{N}(\mu, K), \qquad (16)$$
which should be interpreted according to Eq. (14). The fact that the function values at different positions, f(x) and f(x'), are correlated is what allows us to make predictions. Basically, the data for f(x), measured at the "training" points x, allows us to "learn" how the data is correlated. This is done by inferring the hyper-parameters Θ of the correlation function k, given the training data set. These hyper-parameters Θ determine the precise shape of k, and thus the properties of the random functions f drawn from the GP. Once we have learned how, and to what degree, the function values are correlated with each other, we use this information, together with the known values of f at the training points x, to predict the values of the function f at new "test" locations x'. Assuming the N input values consist of n training points and m test points, we partition the input and output data into training and test data sets, to arrive at the following (prior) joint distribution
$$\begin{pmatrix} f \\ f' \end{pmatrix} \sim \mathcal{N}\left( \begin{pmatrix} \mu \\ \mu' \end{pmatrix}, \begin{pmatrix} K(X,X) & K(X,X') \\ K(X',X) & K(X',X') \end{pmatrix} \right), \qquad (18)$$
where X = (x_1, ..., x_n) is now the design matrix for the training points and X' = (x'_1, ..., x'_m) that of the test points. Thus, the correlation sub-matrices K(X, X), K(X, X') = ᵗK(X', X), and K(X', X') have dimensions n × n, n × m, and m × m, respectively. We stress the fact that Eqs. (17) and (18) are equivalent; at this point we have simply relabeled the points as belonging to either the training or test data sets. In addition, if the training data is not known exactly, but subject to noise, we can account for this by adding the corresponding contribution to the covariance sub-matrix. For example, in the presence of Gaussian noise with variance σ_ε², we would simply replace K(X, X) with K(X, X) + σ_ε² I_{n×n}.

[FIG. 2 caption, continued: (3) The probability distribution for the constitutive relation at new "test" inputs (κ', σ') is then specified by a conditional GP, i.e., σ̇' | σ̇ ∼ N(ν, Σ). We take the average value to be the best estimate, σ̇' = ν, and use this within a macroscopic simulation, in order to update the stress at each point in the fluid. The prediction uncertainty is given by the corresponding covariance Σ, and it can be used to perform on-the-fly diagnostics [21].]
The benefit of using GP comes from the Gaussian form of the distributions, since this allows us to perform most of the calculations analytically. In particular, the conditional distribution for the function values at the test points, conditioned on the training data, can be obtained from Bayes' rule as p(f' | f) = p(f', f)/p(f), and it results in yet another GP [19],
$$f' \mid f \sim \mathcal{N}(\nu, \Sigma), \qquad (19)$$
with
$$\nu = \mu' + K(X', X)\, K(X, X)^{-1}\, \delta f, \qquad \Sigma = K(X', X') - K(X', X)\, K(X, X)^{-1}\, K(X, X').$$
The prediction for the function values at the test points is then given by the mean ν, with the covariance matrix Σ providing a measure of the uncertainty at each point. Conceptually, one can interpret this prediction as the result of drawing random functions from the prior f ∼ N(µ, K), and keeping only those that are consistent with the measured training data. The average and variance of the functions that remain will coincide with ν and Σ, i.e., f' ∼ N(ν, Σ).
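The conditional GP above can be sketched in a few lines of NumPy. For concreteness we use a squared-exponential kernel with zero prior mean and a small noise term for numerical stability; the kernel hyper-parameters and the toy target function are illustrative.

```python
import numpy as np

# Posterior GP of Eq. (19), zero prior mean:
#   nu    = K(X*, X) K(X, X)^{-1} f
#   Sigma = K(X*, X*) - K(X*, X) K(X, X)^{-1} K(X, X*)
def sq_exp(a, b, gamma=1.0, ell=0.2):
    return gamma * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp_posterior(x_train, f_train, x_test, noise=1e-8):
    K = sq_exp(x_train, x_train) + noise * np.eye(x_train.size)
    Ks = sq_exp(x_test, x_train)
    nu = Ks @ np.linalg.solve(K, f_train)                  # posterior mean
    Sigma = sq_exp(x_test, x_test) - Ks @ np.linalg.solve(K, Ks.T)
    return nu, Sigma

x = np.linspace(0.0, 1.0, 8)
f = np.sin(2.0 * np.pi * x)
nu, Sigma = gp_posterior(x, f, x)  # predict back at the training points
# With (near) noise-free data the posterior mean interpolates the
# training values, and the predictive variance collapses there.
print(float(np.max(np.abs(nu - f))))
```

Adding a finite noise variance to K(X, X), as described in the text, turns this exact interpolation into a smoothing regression.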
We have used a squared-exponential kernel for all our GP regressions. For a 1D regression problem, the kernel is defined as
$$k(x, x') = \Gamma \exp\left(-\frac{(x - x')^2}{2 l^2}\right). \qquad (20)$$
Here, Θ = (Γ, l) are the hyper-parameters that must be inferred from the data, with Γ specifying the amplitude of the function variance and l the characteristic length scale over which the function varies in the x-dimension. This commonly used squared-exponential kernel results in GPs that are infinitely differentiable, but other choices are possible, and might be more suitable for a given learning problem. Kernels for higher-dimensional function spaces can be defined by taking sums or products of 1D kernels [19,36]. As an example, the following are valid GP kernels for 2D functions, with input x = (x_1, x_2),
$$k_{\rm sum}(x, x') = \Gamma\left[\exp\left(-\frac{(x_1 - x'_1)^2}{2 l_1^2}\right) + \exp\left(-\frac{(x_2 - x'_2)^2}{2 l_2^2}\right)\right], \qquad (21)$$
$$k_{\rm prod}(x, x') = \Gamma \exp\left(-\frac{(x_1 - x'_1)^2}{2 l_1^2}\right) \exp\left(-\frac{(x_2 - x'_2)^2}{2 l_2^2}\right). \qquad (22)$$
In principle, we have independent sets of hyper-parameters for each dimension, but for simplicity, when using additive kernels we will assume that the amplitude of the variance is equal along all dimensions, i.e., Γ = Γ_1 = Γ_2.
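The sum and product constructions can be sketched directly; the hyper-parameter values below are illustrative placeholders for Θ = (Γ, l_1, l_2).

```python
import numpy as np

# Sums and products of 1D squared-exponential kernels give valid 2D
# kernels, as in Eqs. (21)-(22).
def k1d(u, v, ell):
    return np.exp(-0.5 * ((u - v) / ell) ** 2)

def k_sum(x, xp, Gamma=1.0, l1=1.0, l2=1.0):
    return Gamma * (k1d(x[0], xp[0], l1) + k1d(x[1], xp[1], l2))

def k_prod(x, xp, Gamma=1.0, l1=1.0, l2=1.0):
    return Gamma * k1d(x[0], xp[0], l1) * k1d(x[1], xp[1], l2)

a = np.array([0.0, 0.0])
b = np.array([1.0, 2.0])
print(k_prod(a, b))  # exp(-0.5) * exp(-2.0) = exp(-2.5)
print(k_sum(a, b))   # exp(-0.5) + exp(-2.0)
```

The product kernel correlates points only when they are close in every input dimension, which is why it is the natural choice for the 2D regression over (σ_xy, κ_xy) used below.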

B. GP Accelerated Multi-Scale Simulations
The idea of learning a constitutive relation for polymer flows from microscopic data is not entirely new. Indeed, previous work by Zhao et al. [21] has considered this precise problem, in order to perform macroscopic polymer flow simulations without having to introduce an arbitrary constitutive relation. In that work, they assumed a generalized Newtonian model, and used simple-shear simulations to learn the apparent viscosity η^(app) = σ_xy/γ̇ of (inelastic) non-Newtonian fluids as a function of the shear rate γ̇. When performing the macroscopic flow simulations, the maximum shear rate (computed from the second invariant of the strain-rate tensor) is taken as the local shear rate, and used within a GP regression scheme in order to obtain η^(app), and thus the local shear stress σ_xy = η^(app) γ̇. In practice, this amounts to placing a GP prior on the stress tensor itself, σ_xy(γ̇) ∼ GP(µ, k). However, as acknowledged by the authors, this excludes many interesting rheological properties of (viscoelastic) non-Newtonian fluids, as it does not allow for any type of history dependence in the flow, and relies on a separation of time scales between the microscopic and macroscopic dynamics. This history dependence, which arises from the internal stresses, is one of the most significant features of polymeric flows. Subsequent work by Zhao et al. [22] has considered viscoelastic flows, but the learning is restricted to parameterizing a constitutive relation with a predetermined functional form. Therefore, the goal of the current work is to generalize the learning strategy, so that it can be used to model viscoelastic polymeric fluids without having to specify any constitutive relation. Following Zhao et al. [21,22], we also use small-scale microscopic simulations of polymer chains at fixed strain-rates to generate the training data necessary to learn the constitutive relation. However, we now place a GP prior on the time derivative of the stress tensor, σ̇(κ, σ) ∼ GP(µ, k), not on the stress tensor itself (or the effective viscosity). This difference allows us to consider the time-dependent memory effects crucial to describe the dynamics of polymer chains. Even in the absence of polymer entanglement, this memory effect is non-negligible, thanks to the finite relaxation time of the polymer stretching and reorientation. No assumptions are made regarding the form of the constitutive equation, except for the fact that it should be expressible in differential form, as a function of the local instantaneous stress σ and strain-rate κ tensors. This includes most commonly used differential models, such as the Maxwell, Jeffreys, and Oldroyd fluids [3]. Finally, while it is possible to consider correlated output, we use a separate GP for each independent component of the stress tensor, resulting in D(D + 1)/2 GP regressions in D dimensions.
Here, we have assumed a separation of length scales between the microscopic and macroscopic descriptions, such that at the microscopic level the system can be considered to be homogeneous, i.e., all field gradients are effectively zero. It is only at the macroscopic level where field-gradient-induced phenomena are incorporated. The appropriate microscopic length scale needed for this approximation to be valid will depend on the precise flow problem being studied; for the systems presented here it is given by the characteristic size of the polymer chains. It is still possible (in principle) to apply the same type of learning strategy to microscopic models in which field gradients are not homogeneous. In such cases, the constitutive relations would be functions of the local stresses, strains, and their gradients. However, such considerations lie outside the scope of the current work.
A schematic diagram of the three-step learning strategy, consisting of data generation, learning, and prediction steps, is given in Fig. 2. For the first step, we performed fixed strain-rate simulations of the 3D microscopic model, corresponding to either simple-shear or planar elongational flow,
$$\kappa_{\rm shear} = \begin{pmatrix} 0 & \dot\gamma & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \qquad \kappa_{\rm elong} = \begin{pmatrix} \dot\epsilon & 0 & 0 \\ 0 & -\dot\epsilon & 0 \\ 0 & 0 & 0 \end{pmatrix},$$
with γ̇ and ε̇ the shear and elongational flow rates, respectively. These particular flows are chosen because the learned constitutive relations will be used in simulations with an imposed 2D flow, where the z-direction is assigned to be neutral. However, if one wants to apply this strategy to a more complex situation, additional applied flows, with different deformation-rate tensors, will need to be used during the learning process. All simulations were started from a random and isotropic initial configuration of the polymer chains (σ = 0). The simulations were performed until a steady state was reached, such that σ̇ ≈ 0. Each simulation provides us with a trajectory (t, σ(t)), from which we can compute σ̇(t). We randomly chose a fraction of these tuples (κ, σ, σ̇) to serve as training data for the GP regression, with input x = (κ, σ) and output f(x) = σ̇. To compute σ̇ we use a finite-difference approximation over a characteristic time scale Δt^(c). To reduce the noise in the measurements for σ and/or σ̇, it is recommended to apply a data-smoothing operation, the details of which are given below.
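This post-processing of the raw trajectories can be sketched as follows. The synthetic "measured" trajectory (a Maxwell-type stress growth plus noise), the filter width, and all other values are illustrative assumptions, not the paper's actual data.

```python
import numpy as np

# Smooth a noisy fixed-strain-rate stress trajectory sigma_xy(t) with a
# Gaussian filter, then estimate d(sigma_xy)/dt by finite differences.
dt = 1e-3
t = np.arange(0.0, 5.0, dt)
lam, kap = 1.0, 1.0
rng = np.random.default_rng(1)
sigma = lam * kap * (1.0 - np.exp(-t / lam)) + 0.02 * rng.normal(size=t.size)

s = int(5e-2 / dt)                          # filter width in samples
w = np.exp(-0.5 * (np.arange(-4 * s, 4 * s + 1) / s) ** 2)
w /= w.sum()                                # normalized Gaussian window
sigma_smooth = np.convolve(sigma, w, mode="same")
sigma_dot = np.gradient(sigma_smooth, dt)   # finite-difference derivative

# training tuples (kappa, sigma, sigma_dot) for the GP regression
train = np.column_stack([np.full(t.size, kap), sigma_smooth, sigma_dot])
print(train.shape)
```

Without the smoothing step, the finite-difference derivative amplifies the measurement noise in σ; the filter width trades this noise against a small bias in σ̇.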
The second step is to train the model, i.e., to determine the hyper-parameters Θ of the kernel function k(x, x'), as well as any unknown uncertainties in the training data for σ̇. The posterior probability distribution for the hyper-parameters, given the model and measured training data, is determined from Bayes' theorem as
$$p(\Theta \mid f, X) \propto p(f \mid X, \Theta)\, p(\Theta),$$
where the likelihood p(f | X, Θ) is given by the corresponding GP, Eq. (14), and p(Θ) is a suitably chosen prior for the hyper-parameters. A full Bayesian treatment, in which we integrate out the hyper-parameters and propagate the uncertainties during the simulation, is possible, but for simplicity we will consider only a point-wise solution. For the 1D problem considered below, we take the posterior average estimated from Hamiltonian Monte-Carlo (HMC) simulations. For the 2D problem, we instead use a stochastic gradient-based optimization method [37] to maximize the log-posterior, and find the "optimal" value
$$\Theta_0 = \arg\max_{\Theta} \ln p(\Theta \mid f, X).$$
Finally, the optimized Θ_0 are used to parameterize the conditional distributions for the test data f' | f, as defined in Eq. (19). In practical terms, we use the (conditional) mean ν = σ̇' as our prediction for the constitutive relation, such that the stress of each fluid particle is updated according to
$$\sigma_i(t + \Delta t) = \sigma_i(t) + \dot\sigma_i\, \Delta t, \qquad (28)$$
where σ̇_i is a function of the training data x = (κ, σ), as well as the instantaneous (test) strain-rate and stress tensors at the position of particle i, x' = (κ', σ') = (κ_i(t), σ_i(t)). We refer to this method, which uses a constitutive relation learned through a Gaussian Process regression scheme within a macroscopic flow simulation, as a GP-accelerated Multi-Scale Simulation, or GP-MSS. We are mainly interested in learning this constitutive relation from microscopic polymer simulations, but to check the consistency of this approach, we will also consider learning from the constitutive relations themselves (e.g., from the upper-convected Maxwell model).

C. Algorithmic Complexities
To understand the benefits of our proposed GP-MSS approach with respect to a standard MSS, we should consider the complexities of both algorithms. Since both methods employ the same macroscopic description for the flow, any gains or losses will be found in the calculation of the stresses. For the MSS, this is done by solving for the microscopic dynamics of the polymer chains. Assuming a coarse-grained entanglement model, the time and memory requirements will scale as O(N_f × N_p × z), with z the number of entanglement points per chain (for the case of non-interacting dumbbells we have z = 1). Our experience shows that N_p × z should be of the order of 10^4-10^5 or larger, in order to obtain reliable stress measurements. The complexity associated to the GP learning procedure is divided in two: the training step and the prediction step. The former is the most expensive of the two, scaling in time as O(n²_training), and in memory as O(n_training) [38,39]. However, we note that this procedure only needs to be done once; the resulting constitutive relation can then be used to perform arbitrarily complex flow simulations. To evaluate the performance of the GP-MSS, we focus then on the cost of making new predictions to update the stresses, as given by Eq. (28). This involves evaluating the average of the conditional GP at the test positions X', corresponding to the values of κ and σ for each fluid particle. From Eq. (19), we see that we must compute expressions of the form K(X', X) • (K(X, X)^{-1} • δf(X)), where the term in parenthesis depends only on the training points X. This term, which evaluates to a vector of length n_training, can also be precomputed. The prediction step can then be reduced to a simple matrix-vector multiplication, which scales as O(N_f × n_training). However, the memory requirements are still only O(n_training).
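The precomputation described above can be sketched as follows; the toy training function, kernel, and sizes are illustrative assumptions.

```python
import numpy as np

# Precompute alpha = K(X, X)^{-1} f once after training; each stress
# update is then a single matrix-vector product nu = K(X*, X) @ alpha,
# costing O(N_f * n_training) time and O(n_training) memory.
def kern(A, B, ell=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(2)
n_train, n_fluid = 200, 1000
X = rng.uniform(-1, 1, size=(n_train, 2))    # training inputs (kappa, sigma)
f = np.sin(X[:, 0]) * X[:, 1]                # toy training outputs
alpha = np.linalg.solve(kern(X, X) + 1e-8 * np.eye(n_train), f)  # once

Xs = rng.uniform(-1, 1, size=(n_fluid, 2))   # one test point per fluid particle
nu = kern(Xs, X) @ alpha                     # cheap per-step prediction
print(nu.shape)
```

Note that the O(n³) linear solve happens only once; every subsequent macroscopic time step pays only for the kernel evaluations and the matrix-vector product.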
The asymptotic ratio of the GP-MSS to MSS stress-calculation time is then O(n_training/(N_p × z)), which highlights the importance of generating a minimal high-quality training data set. The rule of thumb is to have the number of training points be smaller than the number of degrees of freedom in the microscopic polymer chain simulator. For the MSS simulations considered here, we need N_p ≈ 10^5 dumbbells to obtain reliable flow predictions. Using a random sampling protocol to generate the training data, we can achieve this same level of accuracy using n_training ≈ 10^3 training points. Although this does not account for the constant proportionality factors associated to each of the calculation costs, this two-order-of-magnitude speedup is in agreement with the results of our simulations.
The issue of generating an optimal training data set, which in this case will maximize the time savings with respect to the MSS, while keeping the same level of accuracy, is at the heart of most ML problems. It is quite likely that such a protocol will have to be tailored to the specific microscopic model one wishes to study, as well as the macroscopic flow regimes to be simulated. For the systems studied here, random sampling proved to be enough, but it is not clear whether this will generalize to more complex setups.

IV. RESULTS
We will consider two basic flow problems in order to validate our proposed learning strategy (see Fig. 3): (1) simple-shear flow and (2) pressure-driven flow. Given their symmetry, the flow in both systems is effectively one-dimensional, but a complete description of their dynamics requires that we account for all components of the stress, and their coupling to the flow. Furthermore, if this approach is to be applied to general geometries, it should be capable of learning the appropriate form of the constitutive relation for the stress without any simplifying assumptions (although it can be useful to introduce this additional information in specific cases). To show how our learning procedure can be extended as the dimensionality of the system increases, we will study (1) the simple-shear flow problem in 1D and (2) the pressure-driven flow problem in 2D.

FIG. 3. Schematic representation of the two standard flow problems we have used to test our learning strategy: simple-shear flow and pressure-driven flow. For the former, we consider a simplified 1D description, while for the latter we take into account the full 2D nature of the stress and strain-rate tensors.
For the simple-shear flow case we assume that v_y = 0 and that the strain-rate and stress tensors have only one nonzero component, i.e., the xy component. The learning problem is then 2D, since the constitutive relation is of the form σ̇_xy(κ_xy, σ_xy). For the pressure-driven case we consider the full 2D nature of the stress and strain-rate dependence, such that the learning problem is now seven-dimensional, σ̇_αβ(κ_xx, κ_xy, κ_yx, κ_yy, σ_xx, σ_xy, σ_yy). For the flows we are considering, κ_αz = κ_zα = 0, and since Tr(κ) = 0, we have κ_xx = −κ_yy. Instead of explicitly introducing such relationships into the model, we have preferred to learn them directly from the training data. As we have chosen a system of non-interacting Hookean dumbbells for our microscopic polymer model, we are able to compare our results with the exact analytical constitutive equation, Eq. (13). Thus, we can check the convergence and sensitivity of our results, both in terms of the number of dumbbells N_p used in the microscopic simulations, and the number of training points n_training used in the GP regression. Finally, since the absolute value of the local stress tensor is not a physically measurable quantity, as opposed to the forces or velocities, we prefer to evaluate the accuracy of the learning strategy by comparing the error in the macroscopic predictions for the forces in the fluid and the flow velocities.

A. Simple-Shear Flow (2D learning)
To arrive at an effective 1D description of this simple-shear flow problem we have assumed that the stress is a function of y only, and that the system starts from a relaxed state, σ_αβ(t = 0) = 0. The Maxwell constitutive relation, Eq. (13), then reduces to the form given in Eq. (29), where σ_xx(t) = σ_yy(t) = σ_zz(t) = σ_xz(t) = σ_yz(t) = 0.
Microscopic training data was generated by performing fixed shear-rate simulations for a system of N_p = 10^3, 10^4, 10^5 non-interacting dumbbells in 3D, for two different Deborah numbers, De = 1 and 10. In each case, we used nine different values of the non-dimensionalized shear-rate κ_xy within the range [−150, 150], including κ_xy = 0. The time step was set to Δt = 10^−4 and the simulations were performed up to a maximum time of t_max = 5 De. The trajectory of the stress σ_xy(t) was then smoothed using a Gaussian filter with a width of 5 × 10^−3 τ^(m), in order to reduce the noise in the estimates for σ̇_xy. The time derivative of the stress was computed from the smoothed data using a finite-difference approximation. The resulting (t, σ_xy, σ̇_xy) data is then partitioned into ten (possibly overlapping) randomly selected intervals over σ_xy, of width (max(σ_xy) − min(σ_xy))/10. Averaging over the points contained in each interval allows us to define a corresponding pair of input x = (κ_xy, σ_xy) and output f(x) = σ̇_xy training points. The variance ε_i² in σ̇_xy, within each interval, is used as an estimate of the measurement error, and added to the corresponding covariance sub-matrix K(X, X), in order to perform the GP regression (see Eq. 18), i.e., K(X, X)_ij = k(X_i, X_j) + ε_i² δ_ij. We use a product of 1D kernels for this 2D regression problem, x = (x_1, x_2) = (σ_xy, κ_xy). We thus have three hyper-parameters, Γ and the two length scales for σ_xy and κ_xy, so that Θ = (Γ, l_1, l_2). We use independent and uniform logarithmic priors, p(ln Θ_i) ∝ const, which are scale-invariant. To train the model, we performed Hamiltonian Monte-Carlo simulations, using the No-U-Turn Sampler (NUTS), as implemented in the PyMC3 python package [40–42]. The optimal parameters Θ_0 are then taken to be the averages over the posterior distribution, Θ_avg, and used to define the conditional GP for the constitutive relation. Fig.
4 shows the results of this learning procedure for the case of De = 10. We are able to learn the Maxwell constitutive relation, Eq. (29), providing excellent predictions over a wide range of parameters. However, the prediction error increases the farther we get from the training points, as expected. This highlights the importance of generating a well-chosen training data set, representative of the region in function space one is interested in. It is reassuring to note that even with the naive random-sampling strategy outlined above, we are able to obtain precise predictions for the constitutive relation. While this is due to the simplicity of the function, we show below that this approach also generalizes to more complicated functional forms and higher dimensions. Using this learned constitutive relation, we performed flow simulations under simple shear, at Re = 10, and compared the results to those obtained from standard MSS (with microscopic dumbbell simulators), as well as from the Maxwell constitutive equation.
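The data-generation pipeline described above (smoothing, finite-difference derivatives, and interval averaging) can be sketched as follows. The dumbbell trajectory is replaced here by a hypothetical stand-in of the form σ_xy(t) = κ_xy De (1 − e^(−t/De)) plus noise, since the actual microscopic simulator is not reproduced:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Stand-in stress trajectory at one fixed shear rate (hypothetical form,
# NOT the microscopic simulator): sigma = kappa*De*(1 - exp(-t/De)) + noise.
dt, De, kappa_xy = 1e-4, 10.0, 50.0
t = np.arange(0.0, 5.0 * De, dt)[:50_000]      # truncate for a quick demo
rng = np.random.default_rng(0)
sigma_xy = kappa_xy * De * (1 - np.exp(-t / De)) + rng.normal(0, 0.5, t.size)

# Smooth with a Gaussian filter of width 5e-3 (in units of tau^(m)),
# then estimate the time derivative of the smoothed signal.
sigma_s = gaussian_filter1d(sigma_xy, sigma=5e-3 / dt)
dsigma = np.gradient(sigma_s, dt)

# Partition into 10 randomly placed intervals over sigma_xy; average within
# each interval to get one (kappa_xy, sigma_xy) -> dsigma training point,
# keeping the per-interval variance as the measurement-error estimate.
width = (sigma_s.max() - sigma_s.min()) / 10
X, y, err2 = [], [], []
for lo in rng.uniform(sigma_s.min(), sigma_s.max() - width, size=10):
    mask = (sigma_s >= lo) & (sigma_s < lo + width)
    if mask.sum() > 1:
        X.append((kappa_xy, sigma_s[mask].mean()))
        y.append(dsigma[mask].mean())
        err2.append(dsigma[mask].var())
print(len(X), "training points")
```

The per-interval variances collected in err2 would be the ε_i² terms added to the diagonal of the covariance matrix.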
Considering the one-dimensional nature of the flow problem, one can use a simple Eulerian description instead of a Lagrangian one. Thus, the system was discretized in the vertical y direction, using 128 grid points, with the velocities of the top and bottom walls set to v = 1 and v = 0, respectively. Starting from a quiescent fluid, a velocity wave starts at the top wall and propagates through the channel, bouncing back and forth at the walls, before a steady-state linear velocity profile is obtained [43]. This transient regime is more pronounced at high De, as evidenced by the three kicks in the time evolution of the stress at De = 10, corresponding to the arrival of the wave-front. For comparison, at De = 1 only one kick is observed (see Fig. 5 (a-b)). The excellent agreement obtained between the MSS and GP-MSS predictions for the stress is further evidence of the success in learning the constitutive relation. The stress fluctuations obtained from the MSS at long times are a consequence of the large statistical fluctuations that come from using a finite number of dumbbells. It is encouraging to see that the GP-MSS predictions do not show such behavior. This is because our learning strategy properly accounts for the measurement error in the training data, allowing us to infer the "true" function. To further test our ability to capture the history dependence of the flow, we have also considered an oscillatory shear flow (not shown), and obtained a similar level of agreement between MSS and GP-MSS predictions. We note that a generalized Newtonian approach, which assumes the stress σ is a function of κ only, would not be able to capture this memory effect [44].
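A minimal version of such an Eulerian startup-shear computation can be sketched as below. The non-dimensionalization (Re ∂v/∂t = ∂σ/∂y for momentum, and σ̇ = κ − σ/De for the stress, with κ = ∂v/∂y) is our assumption for illustration and need not match the paper's Eq. (29):

```python
import numpy as np

# Minimal 1D Eulerian startup-shear solver. The non-dimensional equations
# Re dv/dt = d(sigma)/dy and d(sigma)/dt = kappa - sigma/De (kappa = dv/dy)
# are an assumed Maxwell-type closure for illustration only.
Ny, Re, De, dt = 128, 10.0, 1.0, 1e-4
dy = 1.0 / (Ny - 1)
v = np.zeros(Ny)            # quiescent initial state
sigma = np.zeros(Ny - 1)    # shear stress on the cell faces

for _ in range(100_000):    # integrate to t = 10 (well past the transient)
    kappa = np.diff(v) / dy               # local shear rate
    sigma += dt * (kappa - sigma / De)    # Maxwell stress update
    v[1:-1] += (dt / Re) * np.diff(sigma) / dy
    v[0], v[-1] = 0.0, 1.0                # no-slip bottom / moving top wall

print(f"centre velocity ~ {v[Ny // 2]:.3f}")   # approaches 0.5 (linear profile)
```

A staggered layout (velocities at nodes, stresses at faces) keeps the finite differences compact; during the transient, the velocity wave launched at the top wall can be seen propagating through the channel before the linear profile sets in.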
Fig. 5 (c-d) shows the maximum absolute error in the predicted velocities, among all points in the system, as a function of the number of dumbbells N_p used in the microscopic simulations. This error is evaluated with respect to the velocities obtained from macroscopic simulations with the exact constitutive relation. Not surprisingly, given the good agreement in the stress profiles, the velocities also coincide. There is, however, a small offset in the transient regime [43], with the GP-MSS velocity wave showing a slight delay (advance) with respect to the MSS or Maxwell predictions at De = 1 (10). This is the main source of error reported in Fig. 5 (c-d), and is due to the small number of training data around this particular point in (κ, σ) space. Two aspects deserve to be highlighted. First, in all cases we have considered, the error of the GP-MSS results is of the same order of magnitude as that of the MSS results. While the error can be two times larger at times near the beginning of the startup flow, it can also be considerably smaller, particularly at steady state. This is most obvious for the case with a small number of dumbbells, N_p = 10^3. Second, the error decreases considerably as N_p → ∞, as this decreases the statistical errors in both the MSS and the training data used to learn the constitutive relations. However, we note that the accuracy of the GP-MSS is also affected by the quality of the training points. In the case of De = 10, for example, we happened to obtain slightly better results for N_p = 10^4 than for N_p = 10^5.

B. Pressure-Driven Flow (7D learning)
For the pressure-driven flow problem we consider the full 2D nature of the system. The learning problem then consists of three GP regressions, one each for σ̇_xx, σ̇_xy, and σ̇_yy, all of them functions in a seven-dimensional space x = (κ_xx, κ_xy, κ_yx, κ_yy, σ_xx, σ_xy, σ_yy). Given the increased complexity with respect to the effective 1D flow considered previously, which resulted in a 2D learning problem, we have simplified the procedure used to generate the training data. This is not directly related to the type of flow, but rather to the dimensionality of the problem. Microscopic 3D simulations for N_p ≈ 10^5 non-interacting dumbbells at fixed strain-rate are performed. We considered both simple-shear and planar elongational flows within the range |κ_αβ| ≤ 150. We chose 11 different values for each of κ_xx, κ_xy, and κ_yy; in addition, results for κ_yx were obtained by taking the transpose of those at κ_xy. The simulation time step was fixed to Δt = 10^−6, the maximum time was set to t_max = 10 De, and the corresponding σ(t) trajectories were saved and used to generate the training data. For this, we randomly selected N ≈ 10^3 points (t, κ_γδ, σ_αβ), and included the initial state σ_αβ(t = 0) = 0. Then, we use each point to define a time interval of width 0.1 τ^(m), with τ^(m) the polymer relaxation time. For the training data, we take the stress to be the average stress within the time interval, whereas the time derivative of the stress σ̇_αβ is taken to be the difference at the two end-points, i.e., using a coarse-grained time step of Δt^(c) ≈ 0.05 De. The measurement error for σ̇_αβ cannot be ignored, but it is also not easy to evaluate in this high-dimensional space. Therefore, we introduce this error ε_αβ as an additional hyper-parameter that should be learned from the data. It is in such situations that the benefit of adopting a Bayesian approach pays off.
To summarize, we place a GP prior on each σ̇_αβ, such that σ̇_αβ = f(x) ∼ N(0, K(X, X)). For the covariance matrix, we have K(X, X)_ij = k(X_i, X_j) + ε² δ_ij, with the measurement error ε assumed to be constant, equal for all x. Here, we have used an additive first-order kernel, a sum of 1D kernels k^(Λ), with Λ = 1, ..., 7 specifying one of the seven possible input components of x = (κ_xx, κ_xy, κ_yx, κ_yy, σ_xx, σ_xy, σ_yy). The choice of an additive kernel is motivated by the fact that it allows for non-local interactions, making it less susceptible to the curse of dimensionality and allowing for better extrapolation in regions far away from the training data. In contrast, a multiplicative kernel will rapidly revert to the mean away from the training data [36, Ch. 2.4 and 6.1]. This is not an issue for the 2D learning problem considered previously, as it was easy to generate a large enough sample of training points. For each 1D kernel k^(Λ) we have one associated hyper-parameter, the length-scale l_Λ, for a total of seven length-scale hyper-parameters.
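A numpy sketch of such an additive first-order kernel, together with the resulting GP posterior mean, might look as follows; the squared-exponential form of the 1D kernels and the symbol names are our assumptions:

```python
import numpy as np

# Additive first-order kernel: a sum of seven 1D squared-exponential kernels,
# one per input component, each with its own length-scale ls[L]; gamma is the
# overall variance. The SE form and symbol names are assumptions.
def additive_kernel(X1, X2, gamma, ls):
    K = np.zeros((X1.shape[0], X2.shape[0]))
    for L in range(X1.shape[1]):
        d = X1[:, L, None] - X2[None, :, L]
        K += np.exp(-0.5 * (d / ls[L]) ** 2)
    return gamma * K

def gp_posterior_mean(Xtr, ytr, Xte, gamma, ls, eps2):
    # Standard GP regression mean with homoscedastic noise eps2
    K = additive_kernel(Xtr, Xtr, gamma, ls) + eps2 * np.eye(len(Xtr))
    return additive_kernel(Xte, Xtr, gamma, ls) @ np.linalg.solve(K, ytr)

rng = np.random.default_rng(0)
Xtr = rng.uniform(-1, 1, (50, 7))
ytr = Xtr.sum(axis=1)                # an additive target, exactly representable
mu = gp_posterior_mean(Xtr, ytr, Xtr[:5], gamma=1.0, ls=np.ones(7), eps2=1e-6)
print(np.abs(mu - ytr[:5]).max())    # small: the model near-interpolates
```

Because the kernel is a sum over components, the prior is a sum of seven 1D GPs, which is what gives the additive structure its resistance to the curse of dimensionality.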
Together with Γ and the (unknown) measurement error ε_αβ associated with the training data for σ̇_αβ, this results in a total of nine hyper-parameters needed to learn each σ̇_αβ. Note that we assume the measurement error to be constant, i.e., it does not depend on x, although it can differ between the components of the constitutive relation. The optimal values were obtained by maximizing the log-posterior, Eq. (25), assuming a constant prior for the hyper-parameters (p(Θ) = const). For this, we use ADAM [45], a stochastic gradient-descent algorithm [37, 46], as implemented in GPyTorch [38, 39].
This learning procedure results in three distinct functions, σ̇_xx, σ̇_xy, and σ̇_yy, of seven variables each. The most interesting, non-trivial components of the constitutive relations (for the flows we are considering) are the xx and xy components, visualized in Fig. 6 for three different numbers of training points, n_training = 1 × 10^3, 3 × 10^3, and 6 × 10^3, generated from microscopic simulations of N_p = 10^5 dumbbells [47]. As for the 1D problem considered above, we compare our results with data generated from the exact Maxwell constitutive relation (N_p → ∞), and, as expected, obtain better agreement as n_training increases. Finally, for comparison purposes, we have also learned the constitutive relation from training data generated from the exact solution, i.e., the Maxwell model (N_p → ∞).
We used the learned constitutive relations to perform 2D simulations of the pressure-driven flow problem, comparing our GP-MSS results with those of the conventional MSS and the exact Maxwell constitutive relation. We performed SPH simulations for all three cases, in a channel whose width L_x = 0.6 (along the flow direction) is about half its height L_y = 1. The fluid is discretized using 540 particles, initially arranged on a regular lattice within the system, corresponding to an 18 × 30 (width × height) array of particles. The initial distance between particles is b = 1/30, the smoothing length is h = 1.5 b, and the cut-off length is R_c = 3 h. We performed simulations at Re = 10^−2, for De ≤ 10^−2, which is enough to observe elastic effects in the flow. Fig. 7 shows the time evolution of the velocity at the center of the channel (y = 0.5) for De = 1 × 10^−3, 2 × 10^−3, and 10 × 10^−3, as obtained from GP-MSS using the constitutive relation learned from exact training data (N_p → ∞), SPH simulations using the Maxwell constitutive equation, and a corresponding Newtonian fluid (De = 0). The default parameters for these 2D (SPH) MSS and GP-MSS are summarized in Table I.
As expected, at steady state the system is indistinguishable from a Newtonian fluid. However, at short times, t ≲ 0.1, effects due to the elastic energy stored in the dumbbells are visible. This is seen in the velocity oscillations around the Newtonian values, for which the overshoot can result in speeds that are more than three times the steady-state value. These elastic effects become more important as De is increased, and even result in large negative (transient) velocities at De = 10^−2. The differences between the GP-MSS results, using the learned constitutive equation, and those from the exact constitutive equation are negligible. While these GP-MSS relied on a constitutive relation learned from exact noiseless data (N_p → ∞), using a finite number of dumbbells N_p to generate the training data yields a similar level of agreement, as shown below.
First, although there is a small difference between the learned constitutive relation and the exact solution (see Fig. 6), particularly for the σ_xx component, the macroscopic predictions are in excellent agreement. This can be seen when looking at the forces in the fluid, as shown in Fig. 8 for three different positions along the height of the channel. Indeed, GP-MSS results using constitutive equations learned (n_training = 10^3) from the exact solution (N_p → ∞) or from microscopic simulations (N_p = 10^5) are indistinguishable from each other at this scale, and they coincide with macroscopic simulation results using the Maxwell constitutive relation, although there is a small lag in the forces for the N_p = 10^5 MSS case. Second, we show that increasing the number of training points results in more accurate constitutive relations, and thus more reliable macroscopic flow simulations. We used the three constitutive relations of Fig. 6, generated from n_training = 1 × 10^3, 3 × 10^3, and 6 × 10^3 training points, to perform GP-MSS, and compared the predicted velocity profiles with the exact solution, as given by SPH simulations using the Maxwell constitutive relation [48]. Fig. 9 shows the maximum absolute error in the velocity as a function of time. Increasing the number of training points dramatically reduces the error in the simulations. Furthermore, the simulations using the constitutive relation learned from N_p = 10^5 dumbbells give the same level of accuracy as those using the constitutive relation learned from the exact solution. All things being equal, increasing the number of training points will give better results; however, what ultimately matters is the quality of the training data set. This is the reason why the best results are obtained with the constitutive relations learned from the exact data, even though only a relatively small number of training points is used.

V. CONCLUSIONS
We have developed a learning strategy that is able to infer the constitutive relation of polymer melt flows from a small number of microscopic or coarse-grained polymer simulations. For this, we have used a Bayesian learning approach based on Gaussian Process (GP) regression. GPs provide a probability distribution over functions, allowing us to infer the most likely values, given known training data. In addition, we can estimate the uncertainty in the predictions, as well as incorporate unknown or incomplete data (e.g., measurement errors). Previous work has shown how one can use this type of approach to learn the effective viscosity of a polymer melt flow, as a function of the local shear-rate [21], or to parameterize a viscoelastic constitutive relation [22]. Here, we demonstrate that a learning scheme that includes memory effects can be developed, which is crucial in order to describe the flow dynamics of entangled polymers in complex flow geometries. This method has great potential for polymer processing, as it will allow us to consider flow problems relevant to industrial settings. In addition, we believe that similar learning strategies can be designed for other soft-matter systems, where the presence of multiple length- and time-scales gives rise to complex dynamical behavior that is expensive to simulate directly.
To validate the method, we have adopted the simplest possible microscopic polymer model, that of an ensemble of non-interacting Hookean dumbbells, since the exact constitutive equation is known in this case. This model was used in fixed strain-rate κ simulations, under simple shear and planar elongation, to generate the required training data. For this, the time evolution of the stress σ(t) was used to estimate the time derivative σ̇. Assuming that the constitutive relation can be written in differential form, as a function of the local strain-rate and stress, the goal is to learn the function σ̇(κ, σ). Thus, the training data consists of input points (κ, σ) and the corresponding output σ̇. We randomly selected a subset of 10^3 points and used them within a GP regression scheme in order to determine the posterior distribution p(σ̇′ | κ, σ, κ′, σ′) for the constitutive equation σ̇′ at new input test points (κ′, σ′), for which σ̇′ is not known. This conditional probability distribution is a GP, with average and covariance that are functions of both the training and test points.
We take the average of σ̇ over the posterior distribution to be our best prediction for the constitutive relation, and this function was then used within a macroscopic simulation in order to predict the flow behavior. In this way, we were able to carry out all our simulations at the macroscopic level, without having to impose any constitutive relation. Again, all we assumed was that the time derivative of the stress is a function of the local stress and strain-rate tensors. The appropriate constitutive equation is learned from a (relatively) small number of microscopic simulations. The resulting method, which we refer to as GP-MSS, gives results that are as accurate as conventional MSS, at a fraction of the cost. With our non-optimized python+numpy code, the difference in runtime between the full MSS (using 10^5 dumbbells) and the GP-MSS (with n_training = 10^3 training points) is ∼ 100 times for the 2D pressure-driven flow problem. We note that the GP-MSS run only marginally slower than simulations using the Maxwell constitutive relation. Thus, we get the best of both worlds, achieving run times comparable to macroscopic simulations, without sacrificing the accuracy provided by a microscopic polymer description. We expect the speedup afforded by the GP-MSS to improve dramatically when more realistic, and complex, microscopic polymer models are used. However, the reliability and efficiency of the GP-MSS will depend on the quality and size of the training dataset. Thus, care should be taken when devising the data-generating protocol. As an added benefit of our approach, we note that it is also possible to maintain (consistent) information on the microscopic degrees of freedom, for example, by adopting a multi-fidelity representation [49, 50]. In this case, a small number of microscopic simulators could be introduced in order to provide accurate (localized) stress measurements, which are then fused together with the approximate predictions provided by the learned
constitutive relation. In future work we will apply this learning approach to tackle the problem of entangled polymer melt flows. This will allow us to consider 3D flows in complex geometries, which have so far remained out of reach for standard MSS techniques.

Appendix A: Smoothed Particle Hydrodynamics

The Smoothed Particle Hydrodynamics (SPH) method, a particle-based method originally developed to solve astrophysics problems, provides a computationally convenient way to solve the Navier-Stokes equation in a Lagrangian framework [23]. The fluid is discretized into a number N_f of fluid particles that carry mass, momentum, and energy. Any function of the system can then be expressed as an interpolation over the (disordered) fluid particles, which serve as interpolation points. Consider a scalar quantity A and a vector quantity V, which depend on position. The value of A(x) or V(x), at a given position x, which need not correspond to the position of any of the fluid particles, is given by the kernel-weighted integral A(x) = ∫ A(x′) W(x − x′, h) dx′, with similar expressions for higher-order tensor fields.
Here, W(x, h) is a smoothing or interpolating kernel that should integrate to unity and tend to a delta function in the limit where the smoothing length h goes to zero. In this work, we adopt a Gaussian interpolating kernel. We note that derivatives of these functions can be easily evaluated, as the derivative operator can be transferred to the kernel, for which analytic results can be computed in advance. Numerically, these integrals are replaced by sums over the fluid particles, i.e., the interpolating points, A(x) ≈ Σ_j (m_j/ρ_j) A_j W(x − x_j, h), with m_i and ρ_i the mass and density of the i-th fluid particle. To reduce the computational burden, these sums are truncated to only include fluid particles within a cutoff region R(x) centered at x, such that |x_j − x| ≤ R_c = 3h. For notational simplicity, in what follows we will denote these truncated sums using a primed summation symbol. As an example, the density of each particle can be computed by taking A = ρ and evaluating the function at x = x_i, which (with the mass constant) gives ρ_i = Σ′_j m_j W(x_i − x_j, h).
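The summation interpolant and the density estimate can be sketched as follows; the 2D normalization 1/(πh²) of the Gaussian kernel is our assumption, as the text only states that a Gaussian kernel is used:

```python
import numpy as np

# SPH summation interpolant with a Gaussian kernel in 2D. The normalization
# 1/(pi h^2) makes W integrate to unity in 2D and is our assumption.
def W(r, h):
    return np.exp(-(r / h) ** 2) / (np.pi * h ** 2)

def density(x, pos, m, h, Rc):
    # rho(x) = sum'_j m_j W(|x - x_j|, h), truncated at |x_j - x| <= Rc
    r = np.linalg.norm(pos - x, axis=1)
    near = r <= Rc
    return np.sum(m[near] * W(r[near], h))

# Particles on a uniform lattice of spacing b carrying mass rho0*b^2:
# the summed density at the centre should recover rho0.
b, rho0 = 1 / 30, 1.0
h, Rc = 1.5 * b, 3 * (1.5 * b)
xs = np.arange(-10, 11) * b
pos = np.array([(x, y) for x in xs for y in xs])
m = np.full(len(pos), rho0 * b ** 2)
rho = density(np.zeros(2), pos, m, h, Rc)
print(f"rho at lattice centre ~ {rho:.4f}")    # close to rho0 = 1
```

The small deficit with respect to rho0 comes from the mass discarded beyond the cutoff R_c = 3h, which for a Gaussian kernel is of order e^−9.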
To solve the equations of motion for the fluid particles, Eqs. (1)-(2), we need to evaluate the forces acting on each particle, which involves computing the divergence of the stress tensor. Instead of directly discretizing ∇ · σ, the "golden rules" of SPH state that formulas should be rewritten to place the density inside the differential operators [?]. In this way, the force per unit mass on particle i takes the standard symmetrized form Σ′_j m_j (σ_i/ρ_i² + σ_j/ρ_j²) · ∇_i W_ij, which gives the time derivative of the fluid particle velocity v_i, Eq. (2). Finally, the equations of motion for the SPH particles are discretized in time and integrated using a second-order scheme. Rigid boundaries, be they fixed walls or moving particles, can be incorporated within the SPH framework by discretizing them with boundary or wall particles. However, using the standard SPH method introduced above, an unnatural flow is observed in the presence of such boundaries. To remove these artifacts, the values of the first and second derivatives of the physical quantities of interest can be used within the weighted averages [51], to arrive at the so-called Modified SPH (MSPH). The appropriate expressions can be obtained by starting from the second-order Taylor expansion of the quantity of interest. For a scalar quantity we have A(x_j) ≈ A_i + x_ji · ∇A_i + (1/2) x_ji x_ji : ∇∇A_i (Eq. (A10)), with x_ji = x_j − x_i. Multiplying both sides of Eq. (A10) by the volume element m_j/ρ_j times the smoothing function W_ij, and summing over all particles, we obtain Eq. (A11), which relates A to its first- and second-order derivatives. To solve this equation, we then need the corresponding relations for ∇A and ∇∇A, which are given in Eqs. (A12)-(A13), obtained in the same way using the derivatives of W_ij. Eqs.
(A11)-(A13) can be conveniently expressed in the matrix form B_i · f_i = t_i, where f_i = (A_i, ∇A_i, ∇∇A_i)ᵀ is a vector whose entries are formed from A_i and its derivatives. In 2D this results in six independent components, thanks to the commutativity of the partial derivatives, with commas used to denote partial derivatives. Likewise, the vector t_i is composed of the weighted averages of A_j against W_ij and its derivatives, and the B_i matrix is built from the corresponding moments of W_ij and its derivatives, with Θ^L_ij the components of the Θ_ij vector. Within the MSPH method, we use f_i = B_i^{-1} · t_i to define the physical quantity A_i of particle i at position x_i. Similar expressions can be defined for vector quantities and higher-order tensors.
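A 1D sketch of this reconstruction, solving jointly for (A_i, A′_i, A″_i) by weighting the Taylor expansion with the kernel and its first two derivatives, is given below; the Gaussian kernel and the explicit moment construction are our assumptions:

```python
import numpy as np

# 1D MSPH reconstruction sketch: solve B_i f_i = t_i for f_i = (A, A', A'')
# at x_i, where the rows of B_i and t_i are the Taylor-expansion moments
# weighted by the kernel W and its first two derivatives. Gaussian kernel
# and the explicit moment construction are our assumptions.
def msph_reconstruct(xi, xj, Aj, vol, h):
    x = xj - xi
    W = np.exp(-((x / h) ** 2)) / (np.sqrt(np.pi) * h)
    Wp = -2 * x / h ** 2 * W                       # kernel derivative
    Wpp = (4 * x ** 2 / h ** 4 - 2 / h ** 2) * W   # second derivative
    B = np.empty((3, 3))
    t = np.empty(3)
    for row, w in enumerate((W, Wp, Wpp)):         # same weight in B and t
        B[row] = [np.sum(vol * w),
                  np.sum(vol * w * x),
                  0.5 * np.sum(vol * w * x ** 2)]
        t[row] = np.sum(vol * w * Aj)
    return np.linalg.solve(B, t)                   # (A_i, A'_i, A''_i)

xj = np.linspace(-0.5, 0.5, 101)
vol = np.full(xj.size, xj[1] - xj[0])              # uniform volume elements
A, Ap, App = msph_reconstruct(0.0, xj, xj ** 2, vol, h=0.05)
print(A, Ap, App)    # A(x) = x^2 gives (0, 0, 2) up to round-off
```

Because a quadratic A satisfies the second-order Taylor expansion exactly, the reconstruction is exact here regardless of the particle arrangement, which is precisely what restores accuracy near boundaries.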

Boundary Conditions
To enforce the no-slip boundary condition at the fluid/solid interface, we use a virtual-particle method [15, 52]. The virtual particles are placed by reflecting the outermost layer of wall particles with respect to the boundary. Then, the velocity vector at the positions of these virtual particles, v_i^(virtual), is computed using the MSPH method, and the velocities of the symmetric wall particles are set to the opposite value, v_i^(wall) = −v_i^(virtual). In this way, the weighted average of the particle velocities at the boundary is guaranteed to be zero.

FIG. 1. Schematic representation of the MSS method used to study the micro/macro coupling of polymeric flows. The fluid is discretized into fluid particles carrying mass and momentum, as well as microscopic polymer simulators. Solving for the dynamics of the polymers, under the macroscopically obtained velocity gradient κ, allows us to compute the microscopic polymer contribution to the local stress σ. The resultant stress distribution is then used to solve for the macroscopic flow dynamics.

FIG. 2. Schematic representation of the GP-MSS strategy used to learn the constitutive relation of polymeric flows from microscopic data. (1) We perform small-scale microscopic simulations at fixed strain-rate κ, in order to obtain the input (κ, σ) and output σ̇ training data. (2) Placing a GP prior on the constitutive relation, σ̇ ∼ N(µ, K), the training data is used to learn the hyper-parameters Θ (specifying the function variance and length-scales) of the GP, by maximizing the posterior distribution for Θ. (3) The probability distribution for the constitutive relation at new "test" inputs (κ′, σ′) is then specified by a conditional GP, i.e., σ̇′ | σ̇ ∼ N(ν, Σ). We take the average value to be the best estimate, σ̇′ = ν, and use this within a macroscopic simulation, in order to update the stress at each point in the fluid. The prediction uncertainty is given by the corresponding covariance Σ, and can be used to perform on-the-fly diagnostics [21].

FIG. 4. (color online) The learned constitutive relation for the 1D simple-shear flow problem at De = 10, using N_p = 10^3 dumbbells for the microscopic model. (top) Training points, GP prediction, and the exact Maxwell constitutive equation (N_p = ∞ limit). (middle) Color map showing the exact constitutive relation σ̇_xy^(Maxwell), as a function of κ_xy and σ_xy. (bottom) Color map showing the absolute error between the GP prediction and the exact solution. The markers in the bottom two graphs show the location of the training data set.

FIG. 5. (color online) (a-b) Time evolution of the stress obtained from simple-shear flow simulations (Re = 10) with the learned constitutive equation (GP-MSS), as well as from the full MSS, using microscopic dumbbell simulators (N_p = 10^3). Dark (light) colored lines correspond to small (large) values of the channel height y. At t = 0, a velocity wave starts at y = 1 and propagates down the channel, bouncing back at the lower wall. (c-d) Maximum absolute error in the velocity, for all points in the channel, obtained from the MSS and GP-MSS predictions. The error is computed with respect to the results of the (exact) Maxwell constitutive equation, |v_x − v_x^(Maxwell)|. Results for different numbers of dumbbells, N_p = 10^3, 10^4, 10^5, used in the microscopic simulations are shown.