Slow growth of magnetic domains helps fast evolution routes for out-of-equilibrium dynamics

Cooling or heating a system as fast as possible is a crucial problem in science, technology, and industry; choosing the best thermal protocol to reach a desired temperature or energy is not a trivial task. Notably, we find that phase transitions may speed up thermalization in systems without conserved quantities. In particular, we show that the slow growth of magnetic domains shortens the overall time that the system needs to reach a desired final state. To prove this statement, we use intensive numerical simulations of a prototypical many-body system, namely the 2D Ising model.


I. INTRODUCTION
Nonequilibrium relaxation processes have been extensively studied during the last decades. One important goal in this research is shortening the duration of the heating (or cooling) transient that precedes thermalization. Indeed, annealing techniques are a popular tool to accelerate the evolution towards the equilibrium or stationary state through a slow temperature decrease (not only in physics: the famous simulated annealing algorithm mimics in a computer the annealing strategy followed in the laboratory [1]). Thus, it is not surprising that recent extensions of the counterintuitive Mpemba effect [2], which allow the hotter of two systems to cool down faster (or the cooler one to heat up faster), have stirred considerable attention. Indeed, we now understand the general conditions that allow for faster cooling, or faster heating, in Markovian systems [3,4], granular matter [5,6], spin glasses [7], water [8], the quantum Ising spin model [9] and, very recently, the generalization to Markovian open quantum systems [10]. On these grounds, Amit and Raz have designed a novel strategy, useful in systems with time-scale separation, in which precooling the system results in a faster heating [11].
Yet, time-scale separation is not possible at a second-order phase transition, where critical slowing down [12,13] evinces a continuum of time scales [see, e.g., Eq. (3) below]. Under these circumstances, unraveling the mechanism that drives the dynamics is the key to potentially controlling the evolution. The growth of the ordered domains when the system enters the symmetry-broken phase [14-16] emerges as a natural candidate for this mechanism: energy is stored in the ordered domains, whose size and growth arrest or accelerate the dynamics. In particular, the relevance of domain growth for the dynamic slowdown in spin glasses is now clear [17-24].
Here we show that the overall relaxation time can be shortened in the absence of time-scale separation by manipulating the system's internal structure of ordered domains. We focus on the ferromagnetic two-dimensional (2D) Ising spin model. In particular, we study through numerical simulations an unexplored out-of-equilibrium heating protocol, in which the bath temperature starts below the critical temperature and is later raised above the critical point. Surprisingly, we find that this manipulation of the bath temperature induces a speed-up in the energy evolution of the system, which is due to a slow domain-growth process. The mechanism is illustrated by comparing different initial preparations of the system. This paper is organized as follows. In Sec. II we recall some essential facts about the 2D Ising model and the quantities of interest. The one- and two-step thermal protocols are described in Sec. III, where we also give the number of simulations performed for each protocol. In Sec. IV, we discuss the isothermal evolution of a system in both the ferromagnetic and paramagnetic phases. Section V is devoted to showing how the leading time corrections can be canceled out in the two-step protocol. In Sec. VI, we discuss the speed-up of the equilibration procedure for the two-step protocol. The conclusions are contained in Sec. VII. The most technical parts of our work are explained in two Appendixes: in Appendix A, we show how we have implemented the Monte Carlo (MC) dynamics, and in Appendix B, we explain how to perform the spatial integrals of the two-point correlation function.

II. MODEL AND QUANTITIES OF INTEREST
We consider the ferromagnetic Ising model in two dimensions, one of the most thoroughly studied models in statistical mechanics [25]. Specifically, the spins σ_x = ±1 occupy the nodes x of a square lattice of size L × L with periodic boundary conditions. In our Hamiltonian, the spins interact with their lattice nearest neighbors,

H = −J Σ_{⟨x,y⟩} σ_x σ_y .   (1)

The thermal bath is described through the (dimensionless) inverse temperature κ = J/(k_B T); we set J = 1. A second-order phase transition at κ_c = log(1 + √2)/2 separates the paramagnetic phase at κ < κ_c from the ferromagnetic phase at κ > κ_c.
We simulate two dynamic rules for the model in Eq. (1): the Metropolis and heat bath (HB) algorithms (see, e.g., [26,27]). Both belong to the so-called model A dynamic universality class [28], where conserved quantities are lacking. We choose L = 4096 in our simulations [29], large enough to represent the thermodynamic limit. Our time step corresponds to a full-lattice sweep. The technical details about how we have implemented both dynamics are contained in Appendix A.
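To fix ideas, the single-spin-flip Metropolis dynamics can be sketched as follows. This is a minimal Python illustration with explicit loops and a sequential site ordering; our production code instead uses the multispin-coding, checkerboard implementation described in Appendix A.

```python
import math
import random

def metropolis_sweep(spins, L, kappa, rng):
    """One full-lattice Metropolis sweep; kappa = J/(k_B T), with J = 1."""
    for x in range(L):
        for y in range(L):
            # Sum of the four nearest neighbors (periodic boundary conditions).
            h = (spins[(x + 1) % L][y] + spins[(x - 1) % L][y]
                 + spins[x][(y + 1) % L] + spins[x][(y - 1) % L])
            dE = 2 * spins[x][y] * h  # energy change if sigma_x is flipped
            if dE <= 0 or rng.random() < math.exp(-kappa * dE):
                spins[x][y] = -spins[x][y]

def energy_density(spins, L):
    """Energy per site; equals -2 for a fully ordered configuration."""
    E = 0
    for x in range(L):
        for y in range(L):
            E -= spins[x][y] * (spins[(x + 1) % L][y] + spins[x][(y + 1) % L])
    return E / (L * L)
```

Heat-bath dynamics would differ only in the acceptance step, flipping with probability 1/(1 + e^{κΔE}) regardless of the sign of ΔE.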
Special attention will be paid to the time evolution of the coherence length ξ(t), namely, the typical linear size of the ferromagnetic domains at time t. Note that the correlation length ξ_corr indicates the spatial range of correlations within a domain; only in the paramagnetic phase does ξ = ξ_corr. The classification of length scales in the ferromagnetic phase would be even subtler in the presence of Goldstone bosons [30].
Our second important quantity is the energy density E(t), a thermometric quantity [7] for which the equilibrium value E_eq ≡ E(t → ∞) is given by the well-known Onsager result. Both E(t) and ξ(t) are obtained from the correlation function

C(r; t) = (1/L²) Σ_x ⟨σ_x(t) σ_{x+r}(t)⟩ ,   (2)

where ⟨. . .⟩ indicates the average over N_R independent trajectories, or replicas, obtained with the same thermal protocol (see the next section for the number of replicas used in practice). Indeed, E(t) = −2C(r•; t), where r• = (0, 1) [or (1, 0), because these are the two vectors spanning the square lattice], and ξ(t) is computed from space integrals of C(r; t) [31] (see also Appendix B).
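As an illustration of these definitions, the following sketch estimates C(r; t) along the two lattice axes for a single configuration and reads off the energy density as E = −2C(r•; t); in production, C is additionally averaged over the N_R replicas.

```python
def axis_correlation(spins, L, r):
    """C(r; t) averaged over the two axis directions, single configuration:
    C(r) = (1/2L^2) sum_x sigma_x [sigma_{x+(r,0)} + sigma_{x+(0,r)}]."""
    c = 0.0
    for x in range(L):
        for y in range(L):
            c += spins[x][y] * (spins[(x + r) % L][y] + spins[x][(y + r) % L])
    return c / (2 * L * L)
```

For a fully ordered configuration C(r) = 1 at all distances, so E = −2C(1) = −2, the ground-state energy density.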

III. THERMAL PROTOCOLS
We consider two distinct thermal protocols, and each of them is simulated twice (with the Metropolis and HB dynamics). In our one-step protocol, a fully disordered spin configuration (corresponding to infinite temperature) is put in contact with a thermal bath at inverse temperature κ at the initial time t = 0. The coherence length ξ(t) grows with time (see Fig. 1) until (in the paramagnetic phase) its t → ∞ limit is approached.

[FIG. 1. Coherence length ξ as a function of time t, as computed with the Metropolis algorithm for our one-step protocol and several values of κ (the width of the curves is twice the statistical error; κ increases from bottom to top). Only in the paramagnetic phase, κ < κ_c ≈ 0.44068679, does the coherence length reach its equilibrium value at long times. In the ferromagnetic phase, ξ(t) ∼ √t at long times (dashed line). Inset: comparison of the HB and Metropolis dynamics in the paramagnetic phase. Although ξ(t) grows significantly faster for Metropolis, the equilibrium limit at long times is the same for both dynamics.]
In the two-step protocol, a fully disordered spin configuration is initially placed at an inverse temperature in the ferromagnetic phase, κ_start > κ_c, where it evolves until ξ(t) reaches a target value ξ_start. At that point, which corresponds to our initial time t = 0, the bath temperature is instantaneously raised to enter the paramagnetic phase, κ_end < κ_c, and kept fixed afterwards. Note that ξ_start may be larger than ξ_end ≡ ξ(t → ∞; κ_end), and that a one-step protocol with κ < κ_c is a particular case of the two-step protocol with ξ_start = 0 and κ_end = κ.
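Schematically, the two-step protocol amounts to the following driver loop. This is only a sketch: `sweep` and `measure_xi` are hypothetical stand-ins for the actual MC dynamics and coherence-length estimator.

```python
def two_step_protocol(system, kappa_start, kappa_end, xi_start, sweep,
                      measure_xi, n_sweeps_after):
    """Stage 1: grow domains at kappa_start until xi >= xi_start (t < 0).
    Stage 2: instantaneous quench to kappa_end at t = 0, then evolve."""
    while measure_xi(system) < xi_start:
        sweep(system, kappa_start)
    history = []
    for _ in range(n_sweeps_after):
        sweep(system, kappa_end)
        history.append(measure_xi(system))
    return history
```

With ξ_start = 0 the first loop never runs and the one-step protocol is recovered, as stated above.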
We take κ_end = 0.435 and 0.4378, for which ξ_end is very large (see Fig. 1). Indeed, the product (κ_c − κ_end) ξ_end(κ_end) remains constant as κ_end → κ_c [25]. This behavior is explained by the fact that the correlation length coincides with the coherence length ξ in the paramagnetic phase, together with the value ν = 1 of the thermal critical exponent (recall that ξ_end is the coherence length at κ_end at long times, when equilibrium is reached). Of course, this scale invariance is found only close enough to the critical point (in the so-called scaling region).

[FIG. 2. The two-step protocol with κ_start = 0.46, κ_end = 0.4378, and ξ_start = 120, where we simulate 1024 replicas.]
We have chosen to simulate a larger number of replicas in the paramagnetic phase in order to reduce the statistical error in the correlation function C(r; t). This error reduction is fundamental for computing the spatial integrals of C(r; t), from which we compute the coherence length (see Appendix B).

IV. THE ISOTHERMAL EVOLUTION
As shown in Fig. 1, the dynamic behavior in the two phases is very different. In the paramagnetic phase, κ < κ_c, both ξ(t) and E(t) approach their equilibrium values exponentially [12,32]:

O(t; κ) = O_eq(κ) + ∫ dτ ρ_O(τ, κ) e^{−t/τ} ,   (3)

where O(t; κ) stands for E(t; κ) or ξ(t; κ), and ρ_O(τ, κ) is a continuous distribution of autocorrelation times τ. The largest time scale τ_κ (which diverges at κ_c) ensures that finite-time corrections decay as e^{−t/τ_κ}, or faster. Although using Metropolis or HB does make a difference (see the inset of Fig. 1), the equilibrium value at large t is independent of the dynamics.
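When only the leading term of Eq. (3) survives, τ_κ can be extracted from a log-linear fit of O(t) − O_eq, for instance as follows. This is a sketch on synthetic, noise-free data; the actual analysis fits the Ansatz of Sec. V to the measured energy.

```python
import math

def fit_tau(times, values, O_eq):
    """Least-squares fit of log|O(t) - O_eq| = log|A| - t/tau.
    Returns (|A|, tau) of the leading exponential correction."""
    xs = list(times)
    ys = [math.log(abs(v - O_eq)) for v in values]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    A = math.exp(ybar - slope * xbar)
    return A, -1.0 / slope
```

In practice one restricts the fitting window to long times, where the subleading terms of Eq. (3) have already decayed.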
Instead, in the ferromagnetic phase, the largest time scale exists only as a finite-size effect. Indeed, when κ > κ_c and barring short-time corrections, domains grow as ξ ∼ √t [14] until ξ ∼ L (see Fig. 1). Now, it is well known that in the ferromagnetic phase, and excluding fast initial relaxations, E(t) and ξ(t) are tightly connected [12]: E(t) − E_eq ∝ 1/ξ(t), because the excess energy is located at the boundaries of the magnetic domains (and, moreover, the lower critical dimension is 1 for Ising systems). This is the behavior found for κ > κ_c: see Fig. 3(a), and the lower three curves in the inset of that figure. Perhaps more surprisingly, we find that this strong connection extends to the paramagnetic phase: close to equilibrium, if one represents E(t) as a function of 1/ξ(t) for κ < κ_c, the data from the Metropolis and HB dynamics fall on a single curve (yet the two time behaviors are remarkably different; see the inset of Fig. 1). Furthermore, notice the null slope of this curve at the equilibrium endpoint [see Fig. 3(b)], which implies that the growth of the magnetic domains does not affect the energy. The connection between E(t) and ξ(t) suggests an intriguing possibility: given that ξ(t) grows faster in the ferromagnetic phase (see Fig. 1), it is conceivable that E(t) could equilibrate faster in the paramagnetic phase through an excursion to the ferromagnetic phase.

[FIG. 3. Energy density E(t) vs 1/ξ(t). (a) The only free parameter of the fit is the slope b, and we include only data with ξ(t) > 200. Inset: data for several values of κ, whose value increases from top to bottom. Notice that, for a given value of κ, the Metropolis and HB data sets fall on the same curve. The continuous lines for values of κ in the ferromagnetic phase are fits to the Ansatz E(t; κ) = E_eq(κ) + b_κ/ξ(t; κ). Again, the only free parameters are the slopes b_κ, and in each fit we include only data with ξ(t) > 20. (b) Data for κ = 0.435 (paramagnetic phase). The solid horizontal line is the exact Onsager solution. The error bars are smaller than the symbols. Notice the null slope at equilibrium, which implies that the growth of the magnetic domains does not affect the energy.]

V. CANCELING LEADING TIME CORRECTIONS
As suggested by Eq. (3), in the paramagnetic phase the two-step protocol behaves at long times as (see Fig. 4)

E(t) − E_eq ≈ A(ξ_start) e^{−t/τ_κ} ,   (4)

where A(ξ_start) is an amplitude (the one-step protocol is recovered for ξ_start = 0). Interestingly enough, A(ξ_start) changes sign as the initial coherence length grows. This phenomenon is clearly seen in Figs. 4(a)-4(c) and 5(a)-5(c). It follows that there exists a ξ*_start such that A(ξ*_start) = 0, which entails an exponential speed-up. It turns out that ξ*_start is independent of κ_start [see Figs. 4(b)-4(d) and 5(b)-5(d)]. However, computing ξ*_start is difficult because of the statistical uncertainty in A(ξ_start). In order to estimate ξ*_start, we have taken several values of ξ_start around our initial guess for ξ*_start, obtained from Fig. 4 (Metropolis) and Fig. 5 (HB). The value of τ_κ in Eq. (4) is estimated by fitting our most accurate data, namely that with ξ_start = 0, to the Ansatz (4). We then estimate A(ξ_start) using the same Ansatz (4) with τ_κ fixed to the previously obtained value. We have chosen the fitting window 0 ≤ e^{−t/τ_κ} ≤ 0.2.

[Figure: Excess energy E(t) − E_eq vs e^{−t/τ_κ}, as obtained with the HB dynamics for our two-step protocol.]
Our strategy consists in finding an interval (ξ+_start, ξ−_start) that contains ξ*_start (see Tables I and II). We require A to be positive (resp. negative) with high probability (i.e., larger in absolute value than twice its statistical error ΔA) at ξ+_start (resp. ξ−_start). Finally, we estimate the value of ξ*_start as the average of ξ+_start and ξ−_start, and its error as their semi-difference (see Table III). Here ΔA is the statistical error obtained from the fit to Eq. (4), and the value of τ_κ used in the fits is 433(14) for Metropolis and 1470(50) for HB. The final results for ξ*_start show compatibility between both dynamics (see Table III). The values of ξ*_start should be compared with the coherence length at equilibrium, ξ_end(κ), namely ξ_end(0.435) = 69(1) and ξ_end(0.4378) = 135(2), both for the Metropolis dynamics. Furthermore, when κ_end is varied, both ξ*_start and the equilibrium coherence length ξ_end scale as 1/(κ_c − κ_end).
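The bracketing procedure just described can be condensed into a few lines (a sketch; the names and the two-sigma significance criterion follow the text above):

```python
def bracket_xi_star(xi_values, A, dA, n_sigma=2.0):
    """Bracket the zero of A(xi_start): xi+ is the largest xi with A
    significantly positive, xi- the smallest with A significantly negative.
    Returns (xi_star, error) = (midpoint, semi-difference) of the bracket."""
    xi_plus = max(x for x, a, e in zip(xi_values, A, dA) if a > n_sigma * e)
    xi_minus = min(x for x, a, e in zip(xi_values, A, dA) if a < -n_sigma * e)
    return 0.5 * (xi_plus + xi_minus), 0.5 * (xi_minus - xi_plus)
```

Values of ξ_start whose amplitude is compatible with zero are simply ignored; they tighten neither side of the bracket.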
These two traits, namely the independence of κ_start and the correct scaling dimension for ξ*_start, are hallmarks of universality. Indeed, we speculate that the scale-invariant ratio ξ*_start/ξ_end that we find for the Metropolis dynamics will be common to all models with a scalar order parameter in the model A dynamic universality class [28]. Our results for the HB dynamics are indeed consistent with this speculation.

VI. EQUILIBRATION SPEED-UP
Let us put the existence of an exponential speed-up to use. From now on, we shall refer to t_total, namely the time spent by the system at both κ_start and κ_end. We operationally define the equilibration time t_eq^{0.1%} as the time such that E(t_total) differs from E_eq by less than 0.1% for any t_total > t_eq^{0.1%}; see Fig. 6 [33]. In other words, the equilibration time t_eq^{0.1%} corresponds to the last time the energy crosses either the line 1.001 E_eq or the line 0.999 E_eq (see Fig. 7). We need this operational definition because, strictly speaking, thermal equilibrium is unreachable in any finite time. Now, let us consider the situation for the two-step protocol with ξ_start < ξ*_start, where A(ξ_start) > 0 [cf. Eq. (4)]. In this range of small ξ_start we encounter ξ_start = ξ_start^{Kovacs}, where the energy at the time of the temperature change equals E_eq (this is the appropriate situation to study the Kovacs effect [18,34]). Now, for those protocols with ξ_start^{Kovacs} < ξ_start < ξ*_start, the energy at the time the temperature changes is smaller than E_eq [35] but, at long times, E(t_total) tends to E_eq from above because A(ξ_start) > 0. It follows that E(t_total) cannot be monotonic. In fact (see Fig. 6), E(t_total) for those protocols first grows to a local maximum, then decreases towards E_eq. The local maximum of E(t_total) entails a discontinuity of the equilibration time t_eq^{0.1%} (cf. Fig. 7) at the value of ξ_start for which the height of the local maximum of E(t_total) coincides with the upper limit of the 0.1% band around E_eq [36]. Therefore, the equilibration time t_eq^{0.1%} for the two-step protocols just after the discontinuity can be shorter than its one-step counterpart by a factor of three.
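The operational definition of t_eq^{0.1%}, the last time the energy leaves the 0.1% band around E_eq, translates directly into code; here is a sketch, exercised on a toy non-monotonic trajectory:

```python
def t_eq(times, energies, E_eq, band=1e-3):
    """Operational equilibration time: the last time |E - E_eq| exceeds
    the relative band (0.1% by default) around E_eq."""
    last = 0
    for t, E in zip(times, energies):
        if abs(E - E_eq) > band * abs(E_eq):
            last = t
    return last
```

For a trajectory with a local maximum just below the band's upper edge, shrinking the band slightly makes the maximum poke out and t_eq jumps discontinuously, exactly the mechanism discussed above.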
We can repeat the above computations for the HB dynamics. Representing the energy density vs the total simulation time t_total yields Fig. 8, and the equilibration time t_eq^{0.1%} as a function of ξ_start is given in Fig. 9. The behavior of these two quantities is analogous to that found for Metropolis; the main difference is the time scale. Indeed, Metropolis approaches equilibrium faster than HB.

VII. CONCLUSIONS
We have shown that precooling may result in faster equilibration at higher temperatures in a prototypical system without time-scale separation (namely, the ferromagnetic 2D Ising model at its critical point). The driving mechanism is the tight connection between the internal energy and the size of the ferromagnetic domains, characteristic of a broken-symmetry phase [12]. Indeed, the size of the domains can be manipulated to our advantage through a nonequilibrium protocol. In particular, we have found an exponential speed-up in the equilibration of the energy. The speed-up arises when the size of the magnetic domains formed during the excursion to the low-temperature phase is a well-defined fraction of the equilibrium correlation length at the higher temperature. We have found numerical evidence for the universality of this fraction (presumably, the universality is restricted to model A dynamics with a scalar order parameter [28]). In short, we have shown how to design a practical protocol that exploits this universal mechanism.

Appendix A: Implementation of the Monte Carlo dynamics

Nowadays, many CPUs support 256-bit words in their streaming extensions. This allows us to perform basic Boolean operations (AND, XOR, etc.) in parallel on all 256 bits, in a single clock cycle. We can take advantage of this parallelization by codifying 256 distinct spins in the same 256-bit word. This strategy is called multispin coding [37]. Furthermore, since we simulate a single system, we pack 256 spins from the same lattice. This is known as MUlti-SIte (MUSI) multispin coding (or synchronous multispin coding [38]). The main problem with MUSI multispin coding is generating 256 independent random numbers for the 256 spins coded in a word. A careless approach to the generation of these 256 random numbers would break the parallelism, thus making MUSI multispin coding useless.
For the sake of clarity, we explain first our geometrical set up, and then describe how we solved the problem of the independent random number for each spin in the 256-bit word.

Our multispin coding geometry
In our MUSI simulation, we packed 256 spins from the same lattice in a single 256-bit word, forming a superspin [39]. Specifically, we employed the packing introduced in Ref. [32]. We present the packing for a generic word of M² bits (M = 16 in our case). The physical coordinates (x, y) and the superspin coordinates (i_x, i_y) are related by

x = i_x + b_x (L/M) ,  y = i_y + b_y (L/M) ,   (A1)

with

b_x, b_y = 0, 1, · · · , M − 1 ;  i_x, i_y = 0, 1, · · · , L/M − 1 .   (A2)

As a consequence, M² physical coordinates (x, y) are assigned to the very same superspin coordinates (i_x, i_y). The bit index i_b = M b_y + b_x is the position inside the superspin (so 0 ≤ i_b < M²), and unambiguously identifies the physical coordinates (see Fig. 10 for a graphical example of the packing process for an L = 16 square lattice using M = 4).
There is a crucial feature of our chosen geometry. Take a superspin (i x , i y ). Each of the M 2 spins packed in the superspin (i x , i y ) has a nearest neighbor along the (say) forward x-direction in the original lattice. All these M 2 neighboring spins are themselves packed in the same superspin, namely, the nearest neighbor in the forward x-direction in the superspin lattice (see Fig. 10).
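The packing of Eqs. (A1) and (A2) and its inverse can be sketched as follows for generic L and M (a Python illustration; the neighbor and parity properties discussed in the text are easy to verify with it):

```python
def pack(x, y, L, M):
    """Physical site (x, y) -> (superspin coords (i_x, i_y), bit index i_b)."""
    S = L // M                      # linear size of the superspin lattice
    i_x, b_x = x % S, x // S
    i_y, b_y = y % S, y // S
    return (i_x, i_y), M * b_y + b_x

def unpack(i_x, i_y, i_b, L, M):
    """Inverse map: superspin coordinates and bit index -> physical (x, y)."""
    S = L // M
    b_x, b_y = i_b % M, i_b // M
    return i_x + b_x * S, i_y + b_y * S
```

Away from the lattice edge, increasing x by one increases i_x by one while leaving the bit index untouched, which is the crucial neighbor property; at the edge, periodic boundaries require an additional rotation of the word.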
Another important property of Eqs. (A1) and (A2) is that the parity of the superspin site, i_x + i_y, coincides with the parity of the original site, x + y (provided that L/M = 2n, with n ∈ N). Note that the square lattice is bipartite: with our nearest-neighbor interactions, sites of even parity interact only with odd-parity sites (and vice versa). This feature suggests a checkerboard updating scheme, in which all even sites are updated in the first step and odd sites are updated afterwards. Indeed, keeping the (say) odd spins fixed, the ordering of the update of the even spins is immaterial. In particular, we may simultaneously update the 256 spins that we pack in a superspin.
We define our time unit as a full-lattice sweep, in which the even sublattice is updated first and the odd superspins are updated afterwards.

Random numbers
For clarity, we shall always refer to a single bit, s_x(t), in the superspin [s_x(t) = 1 if the spin it codes is 1 at time t, while the bit is zero if the spin is −1]. All the Boolean operations that we explain below are performed simultaneously on the 256 bits s_x(t) contained in the superspin.
The goal of the game is obtaining a change bit B_x(t), which is one if (and only if) the spin at site x is to be flipped at MC time t:

s_x(t + 1) = s_x(t) XOR B_x(t) .   (A3)

The computation of B_x(t) is naturally decomposed in two steps (the first is deterministic, while the second involves randomness):

1. In the first step, we compute the energy change ΔE that flipping spin s_x(t) would cause. Now, ΔE can take five values, ΔE ∈ {−8, −4, 0, 4, 8}. Hence, using standard Boolean operations, one computes five bits {c_x^{−8}(t), c_x^{−4}(t), c_x^{0}(t), c_x^{4}(t), c_x^{8}(t)} in such a way that only the bit with superscript equal to ΔE is one. The remaining four bits are, of course, zero.

2. The second step depends on the dynamics, either Metropolis or HB, and determines whether or not the spin flip is allowed. This is represented by five further bits {b_x^{−8}(t), b_x^{−4}(t), b_x^{0}(t), b_x^{4}(t), b_x^{8}(t)}, which are set to one with probability p(ΔE), as we explain below.
We obtain B_x(t) from these bits as (for brevity, we omit the dependence on x and t):

B = (c^{−8} AND b^{−8}) OR (c^{−4} AND b^{−4}) OR (c^{0} AND b^{0}) OR (c^{4} AND b^{4}) OR (c^{8} AND b^{8}) .   (A4)

a. The probabilities. The probabilities for setting the random bits b to one depend on the precise algorithm we are using:

a) Metropolis: p(ΔE) = min[1, e^{−κΔE}] .   (A5)

b) Heat bath: p(ΔE) = 1/(1 + e^{κΔE}) .   (A6)

b. Some remarks on statistical independence. Equation (A4) tells us that we are going to use only one of the five bits b, although we cannot know in advance which one. Indeed, the AND conditions make irrelevant those bits b whose superscript differs from ΔE. Hence, the five b bits do not need to be mutually independent (instead, the b bits corresponding to different x or t must be statistically independent). We can use this observation to reduce the number of b bits that we need to compute (for instance, for Metropolis p(ΔE) = 1 when ΔE ≤ 0, so b^{−8}, b^{−4} and b^{0} are identically one and only b^{4} and b^{8} need to be generated).

c. Generating independent random bits with arbitrary probability. As we have seen, we need to generate a stream of independent random bits which are set to one with a probability p given by Eq. (A5) or (A6). The textbook solution [38] fails the independence requirement (unless p can be exactly represented with a small number of bits). On the other hand, the rather high critical temperature of our problem makes the Gillespie method [39] inefficient, because p is too large. We solve this problem by adapting a strategy [40] that somewhat interpolates between the textbook and the Gillespie methods.
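The flip probabilities of Eqs. (A5) and (A6) and the bit selection of Eq. (A4) can be illustrated in a few lines (a sketch using Python integers as stand-ins for the 256-bit words; the `dE -> word` dictionaries are an illustrative device, not the production data layout):

```python
import math

def p_metropolis(dE, kappa):
    """Eq. (A5): accept the flip with probability min(1, exp(-kappa dE))."""
    return min(1.0, math.exp(-kappa * dE))

def p_heat_bath(dE, kappa):
    """Eq. (A6): flip with probability 1/(1 + exp(kappa dE))."""
    return 1.0 / (1.0 + math.exp(kappa * dE))

def change_bit(c, b):
    """Eq. (A4): B = OR over dE of (c[dE] AND b[dE]); c and b map each
    dE value to an integer bit-word."""
    B = 0
    for dE in (-8, -4, 0, 4, 8):
        B |= c[dE] & b[dE]
    return B
```

Note that the heat-bath flip probability is independent of the current spin value, and that p(−ΔE) = 1 − p(ΔE) for HB, which is another route to economizing b bits.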
To simplify the explanation, let us consider only one of the five bits {b^{−8}, b^{−4}, b^{0}, b^{4}, b^{8}}, for example b^{4}. We obtain b^{4} as b^{4} = d_1 OR d_2, where d_1 and d_2 are two independent random bits which are set to one with probabilities p_1 and p_2, respectively. It is easy to check that the probability p(4) for having b^{4} = 1 is p(4) = p_1 + p_2(1 − p_1). Hence, if we choose p_1 in some convenient way (see below), we need to set p_2 as

p_2 = [p(4) − p_1] / (1 − p_1) .   (A7)

The overall idea is the following: if we can efficiently generate d_1 with a probability p_1 very close to (but smaller than) p(4), we will find ourselves with a p_2 small enough for an efficient use of the Gillespie method. Specifically, we require that p_1 be exactly representable with m bits,

p_1 = k*(4)/2^m ,  k*(4) = ⌊2^m p(4)⌋ .   (A8)

We obtain d_1 by generating an integer-valued random number k, 0 ≤ k < 2^m, with uniform probability, and setting d_1 = 1 if k < k*(4), and d_1 = 0 otherwise. Notice that m determines the efficiency of the algorithm. On the one hand, enlarging m can be detrimental, because we generate k by obtaining m independent random bits which are set to one with 50% probability. On the other hand, a large m allows us to have p_1 very close to p(ΔE), and hence a very small p_2. A tradeoff needs to be found by minimizing the total number of calls to our random number generator.
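The decomposition b = d_1 OR d_2 is easy to check numerically (a sketch; `split_probability` is an illustrative helper name, and p < 1 is assumed):

```python
def split_probability(p, m):
    """Split a target probability p (< 1) as p = p1 + p2 (1 - p1), where p1 is
    the best m-bit approximation of p from below [cf. the d1 OR d2 trick]."""
    k_star = int(p * (1 << m))   # threshold: d1 = 1 iff the random k < k_star
    p1 = k_star / (1 << m)       # exactly representable with m bits
    p2 = (p - p1) / (1.0 - p1)   # small remainder, suitable for Gillespie
    return p1, p2, k_star
```

For m = 10 the remainder p_2 is below 2^{−10} ≈ 10^{−3}/(1 − p_1), so the Gillespie step is triggered only rarely.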
An important simplification is that we are allowed to use the same random integer k for all the ΔE; only the threshold k*(ΔE) varies. In this way, we obtain the bits d_1 for both b^{8} and b^{4} at once.
As for the second bit d_2, we have two different probabilities p_2, one for b^{8} [p_2(8)] and another for b^{4} [p_2(4)]. Let p_max = max{p_2(4), p_2(8)} and p_min = min{p_2(4), p_2(8)}. We implement the Gillespie method for p_max, which gives the bits d_2 for the ΔE with the larger p_2 probability. The bit d_2 corresponding to p_min is set to one if two conditions are met: (1) the bit d_2 corresponding to p_max is set to one, and (2) an independent random number r, extracted with uniform probability in [0, 1), turns out to be smaller than p_min/p_max.
Finally, we need to discuss our computation of the random integers k. In fact, we need to generate 256 independent k numbers, because we codify 256 spins in our superspins. After some reflection, we decided to use the xoshiro256++ generator [41], which ensures the same level of randomness in each of its 64 bits. We employed a 256-bit streaming extension to implement a parallel version of xoshiro256++, composed of four independent xoshiro256++ random sequences. Hence, each call to our generator produces 256 independent random bits which are 1 with 50% probability, and m calls to xoshiro256++ produce 256 independent k random numbers.
In our simulations, the optimal value of m has turned out to be m ≈ 10. In this way, we have found it possible to compute the spin-flip bits for 1024 spins with approximately 42 calls to our random number generator (namely, 40 calls for d_1 and two calls for the computation of d_2).
Appendix B: Space integrals of the correlation function C(r; t)

At long distances, the correlation function C(r; t) decays as f(r/ξ(t))/r^a, where ξ is the coherence length, f(x) is a cut-off function, and a is an exponent. Out of equilibrium, the function f(x) decays super-exponentially while, at equilibrium, the decay is only exponential. Unfortunately, the exponent a and the precise form of the function f(x) are known only at equilibrium. Nevertheless, following Ref. [19], we can obtain ξ(t) from an integral estimator.
Hereafter, we shall use C(r; t) as a shorthand for C(r; t) with r = (r, 0) or r = (0, r). Introducing the integrals

I_k(t) = ∫_0^∞ dr r^k C(r; t) ,   (B1)

we can define ξ_{k,k+1}(t) ≡ I_{k+1}(t)/I_k(t), which is proportional to ξ(t). In our case, we have chosen as an estimator of ξ(t) the ratio ξ_{1,2}(t), which has been well studied in the literature (see, e.g., Refs. [22,32,42], and references therein). The difficulty in the computation of these integrals arises from the large relative errors [ΔC(r; t)/C(r; t)] present in the large-r tail of C(r; t) (this problem is well known in the context of the analysis of autocorrelation times in MC simulations [26]). Our solution, which is inspired by Refs. [22,32,42], combines two strategies: (1) reduction of the relative error in the correlation function by considering a sufficiently large number of replicas (see Sec. III), and (2) a smart way of estimating I_k(t). Our estimate of I_k(t) is the sum of two contributions: the numerical integral of our measured C(r; t) up to a noise-dependent cut-off, and a tail contribution estimated by using a smooth extrapolation function, namely F(r) = A e^{−(r/ξ_F)^β}/r^b with b = 1/2.
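As a sanity check of the estimator, for a purely exponential correlation C(r) = e^{−r/ξ} one has I_1 = ξ² and I_2 = 2ξ³, so ξ_{1,2} = 2ξ. The following sketch verifies this proportionality numerically (an idealized check under the assumption of a noise-free, purely exponential C):

```python
import math

def xi_12(C, r_max, dr=0.01):
    """Trapezoidal estimate of xi_{1,2} = I_2/I_1, with the integrals
    I_k = int_0^{r_max} dr r^k C(r) truncated at r_max."""
    I1 = I2 = 0.0
    n = int(r_max / dr)
    for i in range(n):
        r0, r1 = i * dr, (i + 1) * dr
        I1 += 0.5 * dr * (r0 * C(r0) + r1 * C(r1))
        I2 += 0.5 * dr * (r0 ** 2 * C(r0) + r1 ** 2 * C(r1))
    return I2 / I1
```

The proportionality constant depends on the shape of C(r), which is why ξ_{1,2} tracks ξ only up to a smooth prefactor; for noisy measured data the tail must be handled with the extrapolation described below.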
Strictly speaking, the functional form of F(r) is only valid in the paramagnetic phase [42]. In the ferromagnetic phase, the exponent of the algebraic term r^b is not known. However, because the exponential term decays very fast (β ≈ 2), the precise value of b is not very relevant. On the other hand, when equilibrium is reached in the paramagnetic phase, where β = 1, the slow decay of C(r; t) makes the tail contribution very relevant (this is why we have needed a large number of replicas in order to obtain a good estimate of the tail contribution in the paramagnetic phase).
The complete procedure to compute I_k is as follows:
1. We obtain a spatial cut-off determined by the size of our statistical errors. The noise-dependent distance r_cut is the shortest distance such that C(r_cut + 1; t) is smaller than three times its error. Of course, r_cut is time-dependent.
2. We estimate the region [r_min, r_max] employed in the fit to F(r). We start by locating the position r* of the maximum of the quantity r²C(r; t); the value of this maximum is ρ = r*² C(r*; t). Next, we determine r_min and r_max, which must verify the inequalities r* < r_min < r_max < r_cut. We define r_min (resp. r_max) as the first r where r²C(r; t) < a_min ρ (resp. r²C(r; t) < a_max ρ). We take a_min = 0.9 and a_max = 1/3. We regard a failure to meet the condition r_max < r_cut as a telltale sign that the number of replicas needs to be increased.
3. Finally, we calculate the integrals. There are two possibilities (in both cases we estimate errors with the jackknife method [43,44]): (a) If r_max − r_min > 8, we fit C(r; t) in the interval [r_min, r_max] to F(r) (the fit parameters are the amplitude A, the length scale ξ_F, and the exponent β). We estimate the integral as the sum of two contributions, namely the integral of C(r; t) from r = 0 to r_max and the integral of F(r) from r_max to r = 20 ξ_F. In both terms we use a second-order Gaussian quadrature (when an interpolation of C(r; t) is needed, we employ a cubic spline).
(b) If the region [r_min, r_max] is small (i.e., r_max − r_min ≤ 8), we simply integrate C(r; t) numerically up to r_cut.