Coarse-grained Second Order Response Theory

While linear response theory, embodied in the fluctuation-dissipation theorem, can be applied at any length scale, nonlinear response theory is fundamentally of microscopic nature. We develop an exact theoretical framework for analyzing the nonlinear (second order) response of coarse-grained observables to time-dependent perturbations, using a path-integral formalism. The resulting expressions involve correlations of the observable with coarse-grained path weights. The time-symmetric part of these weights depends on the paths and the perturbation protocol in a complex manner, and, furthermore, the absence of Markovianity prevents slicing of the coarse-grained path integral. Despite this, we show that the response function can be expressed in terms of path weights corresponding to a single-step perturbation. This formalism thus leads to an extrapolation scheme, which circumvents these difficulties, and in which measuring linear responses of coarse-grained variables suffices to determine their second order response. We illustrate the validity of the formalism with the examples of an exactly solvable four-state model and the near-critical Ising model.


I. INTRODUCTION
Many systems of practical and scientific relevance are intrinsically stochastic, with properties dominated by fluctuations, e.g., colloidal particles, protein folding networks, molecular motors, or stochastic heat engines [1]. Such systems lend themselves to descriptions by statistical physics, of which a variety exist for in- and out-of-equilibrium scenarios: The famous Jarzynski equation [2] and Crooks theorem [3] concern the work done while driving the system far from equilibrium. In contrast, (nonlinear) response theory treats arbitrary observables, starting near equilibrium with the fluctuation-dissipation theorem, which relates the linear response to equilibrium fluctuations [4,5]. Higher orders in the perturbation have also been derived, e.g., for Markov jump processes [6], using path integrals [7], or in terms of correlation functions [8-15]. Nonlinear response theory has also been applied experimentally, enabling measurement of the second order response from an equilibrium average [16].
The mentioned approaches to nonlinear response theory typically rest on the assumption that all relevant degrees of freedom (d.o.f.) are known and measurable. As this is in many cases not the experimental reality, the additional challenge of coarse graining arises.
Taking the example of a colloidal particle in a simple solvent, the bath d.o.f. can easily be integrated out, because they relax fast compared to the colloidal timescales and can thus be assumed to be in an equilibrium state [17,18]. Approaches such as Mori-Zwanzig projection operators formalise this idea by identifying a subset of slow d.o.f. as relevant and integrating out the fast d.o.f. [19-21]. Indeed, fluctuation relations and response theory have been shown to hold approximately under the assumption that subsystems reach a local equilibrium [22-24]. Other types of coarse graining preserve fluctuations [25] or use other physical or computational restrictions, as done in polymer physics [26,27] or biophysics [28,29]. Adding a second colloidal particle to our example illustrates the next level of complexity: If the position of one colloidal particle is unknown, experimental estimation of potentials, entropy production, and probability distributions may be incorrect, as shown experimentally in Ref. [30]. This is, for example, the case if a driving protocol acts on the unknown degree of freedom. Such questions, in relation to entropy production, work, and other thermodynamic notions in stochastic processes, have been analyzed under coarse graining, both theoretically [23,24,31,32] and experimentally [30,33]. But what about the nonlinear (second order) response in coarse-grained systems? As detailed below, nonlinear orders remain challenging in coarse-grained systems, even if entropy productions are found correctly.
* fenna.mueller@theorie.physik.uni-goettingen.de † urna@rri.res.in ‡ matthias.kruger@uni-goettingen.de
Ref. [34] developed second order response theory for a system coarse grained to a finite number of states, proposing and verifying an extrapolation scheme that obtains the second order response from linear contributions. Notably, this approach does not rely on a separation of time scales, as demonstrated explicitly for a model system [34]. While Ref. [34] is restricted to perturbations which remain constant after an initial, instantaneous jump, in this manuscript we generalize the approach to arbitrary time dependence.
Starting from microscopic response theory in terms of path integrals, we derive a response theory for a finite number of coarse-grained states. Coarse graining the path integrals yields coarse-grained path weights, including the entropy production, but also the more difficult time-symmetric part of the corresponding weights, from which the second order response can be found.
These formal expressions can be used in practice, e.g., via an extrapolation scheme. In this scheme, performing a linear response experiment (or simulation) is sufficient to obtain the second order response. We show how to measure the second order response for any protocol from linear perturbations with one step only, thereby greatly facilitating the measurement. This concept is illustrated and verified in an analytically solvable jump process and in simulations of the 2d Ising model.

II. SYSTEM AND NONLINEAR RESPONSE THEORY
In this section we present nonlinear (second order) response theory, starting from the microscopic description, which is then coarse grained to macroscopic observables.

A. Microscopic description
Consider a classical system of interacting degrees of freedom, e.g., a fluid, with its phase state at time s denoted by x_s ∈ Γ, which is in general high dimensional. Assuming that the state x_s is of sufficient microscopic resolution, (x_s)_{s∈[0,t]} is a Markov process. In the absence of perturbations, the system is in equilibrium at temperature T = 1/(k_B β), with Boltzmann constant k_B. When perturbed, the system is out of equilibrium, which is the situation we aim to analyze here. We therefore start by reviewing an expansion of the system around equilibrium in terms of path integrals [7,35].
We introduce a volume form on the space of paths, p(ω)Dω, so that p(ω) is the probability (density) to find the path ω = {x_s}_{s∈[0,t]}. The average of a state observable O(x_t), which depends on the state of the system at time t, is then given by Eq. (1), an integral over paths ending at time t, weighted by p. Consider a perturbation by a potential ν(x), acting on the system for times s ≥ 0, carrying as prefactors a dimensionless perturbation strength ε and a dimensionless protocol h(s) of order unity, so that the full perturbation is given by εh(s)ν(x). The aim of response theory, as for instance developed in Refs. [7,35], is to express the path probability in the perturbed non-equilibrium system, p_ε,h(ω), in terms of the equilibrium path probability p_eq(ω) and orders of the perturbation strength ε. This is done via a Radon-Nikodym derivative, which relates different probability measures according to the Radon-Nikodym theorem [35]. Here, it relates the probability densities [35] via p_ε,h(ω) = e^{−a_ε,h(ω)} p_eq(ω), Eq. (2), introducing an action a that quantifies the deviation from equilibrium. It is illustrative to consider the time-reversed process, described by backward paths. These are given by θω = π{x_{t−s}}, where π refers to the kinematical sign reversal, such as flipping the sign of velocities, and evolve under the reversed protocol h̄(s) = h(t − s). Integration over the backward path weight, p_ε,h̄(θω), yields Eq. (3), where ⟨O⟩_eq denotes the equilibrium average. Eq. (3) uses that the system is in equilibrium at time t = 0, see Appendix A 1 for details. Eq. (3) inspires a decomposition of the action, a = d − s/2, into its time-symmetric and time-antisymmetric parts d and s, respectively. The time-antisymmetric part, s_ε,h = log[p_ε,h(ω)/p_ε,h̄(θω)], is called the entropy production. For the potential perturbation given above, it takes the form of Eq. (4), as shown for specific examples in Ref. [7] and quite generally in Ref. [35].
The time-symmetric part, d_ε,h = −(1/2) log[p_ε,h(ω) p_ε,h̄(θω)/p_eq(ω)²], sometimes called the dynamical activity, depends on more details. No explicit form can be given without specifying the system's dynamics [7].
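For orientation, the decomposition just introduced can be collected in one display (a restatement of the definitions given in the text; we write the backward weight with the reversed protocol h̄, consistent with the definition of backward paths above):

```latex
a_{\varepsilon,h} = d_{\varepsilon,h} - \tfrac{1}{2}\, s_{\varepsilon,h},
\qquad
s_{\varepsilon,h}(\omega) = \log\frac{p_{\varepsilon,h}(\omega)}{p_{\varepsilon,\bar h}(\theta\omega)},
\qquad
d_{\varepsilon,h}(\omega) = -\tfrac{1}{2}\,\log\frac{p_{\varepsilon,h}(\omega)\,p_{\varepsilon,\bar h}(\theta\omega)}{p_{\mathrm{eq}}(\omega)^{2}} .
```

Indeed, d − s/2 = −log[p_ε,h(ω)/p_eq(ω)], so that the Radon-Nikodym relation p_ε,h(ω) = e^{−a_ε,h(ω)} p_eq(ω) is recovered.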
An expansion in terms of the perturbation strength ε, subtracting the path integral over backward paths as considered in Eq. (3), yields Eq. (5). We introduced the notation f' = df/dε|_{ε=0}, so that s' is immediately found from Eq. (4). The derivative d' of the time-symmetric component is given in terms of the derivative p' in Eq. (6) [35]. Examples for different dynamics may be found in Refs. [7,35]. We finally introduce a notation for the n-th order response of the non-equilibrium average ⟨O⟩ in Eq. (7) [36], which we analyze up to n = 2 in this manuscript.

B. Coarse-grained description
We now turn to a coarse-grained version of the stochastic process introduced above. This is inspired by the circumstance that experimental resolution is naturally limited, so that in general only coarse grained observables can be monitored. Furthermore, developing nonequilibrium thermodynamics for macroscopic variables is an important goal of statistical physics. The coarse graining as performed here allows for a practical extrapolation scheme, as detailed below.
We thus consider a countable number of coarse-grained, discrete (stochastic) states X_s ∈ Γ', with a function ϕ uniquely mapping Γ to Γ', i.e., X_s = ϕ(x_s).

Figure 1. Illustration of coarse graining: A continuous microscopic phase space is coarse grained into states 1, 2, 3, 4. The coarse-grained description (e.g., mimicking experimental resolution) is unable to distinguish the two microscopic example states indicated by a square and a pentagon, but can distinguish the star from the other two.

Figure 1 illustrates this mapping of a microscopic continuous state space Γ to Γ' = {1, 2, 3, 4}, consisting of four coarse-grained states. Note that this approach does not rely on a separation of time scales of slow and fast variables and thus remains valid even when the coarse-grained process is not Markovian. The spirit of this coarse graining is hence distinct from the idea that underlies typical approximations based on local equilibrium.
A crucial physical assumption or requirement is that the perturbation potential ν acts on the coarse level as well; in other words, ν is a function of coarse-grained states. In the introductory example of colloids, it means that the perturbation acts on those colloids whose positions are monitored. To emphasize this, we introduce a potential V(X) acting on coarse-grained states, so that ν(x) = V(X) for all x satisfying ϕ(x) = X. Importantly, under this assumption, the entropy production is a functional of coarse-grained paths Ω = {X_s}_{s∈[0,t]} (see Ref. [23] for a statement of similar spirit). We define the coarse-grained entropy production in Eq. (8); the index Ω indicates that the integral runs over all micropaths belonging to the coarse-grained path Ω, and P_eq(Ω) is the equilibrium weight of path Ω. Eq. (8) follows directly from Eq. (4), noting that ν(x) = V(X). Coarse graining the linear order yields Eq. (9), which demonstrates that linear response theory can be applied at any length (or coarse-graining) scale. In order to obtain the second order response, we coarse grain the second order response of Eq. (5), making use of the coarse-grained entropy production given in Eq. (8); this yields Eq. (10). We identified the coarse-grained D' as the average of d' over micropaths belonging to Ω, analogous to the coarse-grained entropy production of Eq. (8). In Eq. (10), we split the integration over microscopic paths by first integrating over paths belonging to a given coarse-grained path Ω and then integrating over the latter. This is expressed by a coarse-grained path integral, which can be written as in Eq. (11), by discretizing time into N lattice points and making use of the discrete nature of X. Other ways of representing DΩ can be found in Appendix A 2. In Eq. (10), we used once more that the entropy production takes the same value for all microscopic paths ω belonging to the same Ω. Thereby, the macroscopic parts of the action factorize, so that Eq. (10) takes a form similar to Eq. (5). D'(Ω) in Eq. (11) can be written, using Eq. (6), as in Eq. (13), where we introduced (derivatives of the) non-equilibrium weight P_h(Ω). One important difference between Eqs. (13) and (6) lies in the Markov property of x, which is absent for X: While p_ε,h(ω) can be cut into pieces according to the Chapman-Kolmogorov equation, this is not possible for P_ε,h(Ω).
The main challenge that remains is the determination of D'(Ω). How D'(Ω) can be handled in practice will be analyzed in Section III by decomposing the time dependence of the protocol into discrete steps. Section IV verifies and illustrates these findings via analytical solutions of a four-state model, and Section V employs an extrapolation scheme for the Ising model.

III. FROM STEPWISE PERTURBATION TO THE SECOND ORDER SUSCEPTIBILITY
Eq. (10) describes the second order response O^(2) in terms of a path integral containing at most linear contributions. However, evaluating the path integral holds the challenge of finding D'(Ω) in Eqs. (11) or (13). In this section we demonstrate that, starting with protocols with a finite number of discrete steps, D'(Ω) turns into a tensor of finite order. We discuss the simplifications arising if the coarse-grained process is Markovian in Appendix A 4.

A. A single step perturbation
The case of a perturbation with a single step in time, i.e., h = Θ_0 as given in Eq. (14), was considered in Ref. [34]; for the sake of completeness, we repeat the derivation here. The entropy production in this case reads as in Eq. (15), where we introduced i = X_0 and j = X_t, the states at times 0 and t. This form of S' reduces the path integral in Eq. (10) to a sum of terms, Eq. (16) [34]. Here, ∫_ij DΩ is a path integral with fixed start and end states i and j, which also yields the joint probability P_ij, Eq. (17). According to Eq. (13), the time-symmetric component is given by Eq. (18). For the step perturbation of Eq. (14), the protocol equals its reverse, and the protocol reversal appearing in Eq. (13) is obsolete. We note that, for a single step, D'(Ω) turns into a matrix D'_ij, which is related to the linear response of the coarse-grained probability P_ij. The latter can be measured easily, giving rise to the extrapolation scheme introduced in Ref. [34], as also discussed in more detail in Section V below.
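As a minimal numerical sketch of this single-step structure: since the entropy production for a step depends only on the boundary states i and j through the coarse potential, it can be tabulated as a matrix. The sign convention below follows the Ising example of Sec. V, S_01 = β(V(1) − V(0)); the function name and the explicit step-size prefactor ∆h are our own choices for illustration.

```python
import numpy as np

def entropy_production_step(V, beta, dh=1.0):
    """Linear-order entropy production matrix for a single step of
    size dh: S[i, j] = beta * dh * (V[j] - V[i]).  It depends only
    on the states at times 0 and t, as stated in the text."""
    V = np.asarray(V, dtype=float)
    return beta * dh * (V[None, :] - V[:, None])
```

For the two-state potential V = (−1, +1) of the Ising example this reproduces S_01 = 2β; the matrix is antisymmetric, reflecting that the entropy production is odd under time reversal.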

B. A two step perturbation
We add one more step to the protocol at time 0 ≤ τ ≤ t, introducing the corresponding state k = X_τ. Denoting the step sizes ∆h_0 and ∆h_1, the protocol is then given by Eq. (20). Recalling the definitions of S'_ij and S'(Ω) in Eqs. (15) and (8) yields the entropy production for two steps, Eq. (21). Similarly to Eq. (16), the path integral turns into sums over the states at times 0, τ, and t, Eq. (22). Consistent with the notation above, the path integral ∫_ikj DΩ is restricted to the states i, k, j, such that it yields the probability P_ikj to be in state i at time s = 0, in k at time τ, and in state j at the time of the measurement t, Eq. (23). Applying this notation, we can identify the time-symmetric contribution from integrating out Eq. (13), Eq. (24), where we introduced the probability P̄_ikj under time and protocol reversal. More specifically, P̄_ikj is the probability to measure j at time s = 0, k at time s = t − τ, and i at time t. In order to arrive at P̄ we have swapped time reversal and the integration over coarse-grained paths. By construction, D' must be linear in the protocol h, so that it can be decomposed as in Eq. (25), where D'_ikj[Θ_s] corresponds to the situation of a single perturbation step at time s. We have thus obtained a tensor D' with three indices, which is connected to coarse-grained probabilities P, as before [37].
C. Second order susceptibility for any protocol

In this section, we consider the response to a general protocol h by deriving a formula for the second order susceptibility in terms of "one-step probabilities". On grounds of time-translational symmetry, we express the second order response for a protocol h in terms of the second order susceptibility χ, Eq. (26) [38]. Additionally, the so-defined χ(t_1, t_2) can be determined from the second order responses under protocols with one and two steps, since their time derivatives correspond to δ distributions at the jump times. We may thus find χ from the relations given in the previous subsections.
Comparing the definition of the second order susceptibility, Eq. (26), to the response formula given in terms of indices, Eq. (22), and using the linearity of D' as in Eq. (25), yields Eq. (27), valid for 0 ≤ τ ≤ t. Cases with arbitrary time arguments χ(t_1, t_2) are obtained by inserting τ = t_1 − t_2 and t = t_1.
Eq. (27) is an intermediate result: It gives the second order response function χ for any arguments in terms of sums over indices of the tensors S'_ij and D'_ikj obtained in Eqs. (A4), (A5), and (15).
As a final simplification, we note that the entropy production in Eq. (27) carries only two indices, so that we can sum over the remaining index. This sum eliminates one index from the expressions D'_ikj P_eq,ikj, which always occur jointly (compare Eqs. (A4) and (A5)). One term is given by summing over the center index k in the probabilities, Σ_k P_ikj(τ, t) = P_ij(t). We extend the notation to include time and protocol, as these are varied below. For example, P_ikj[Θ_s1](τ, t) denotes the probability to measure state i at time s = 0, state k at time τ, and state j at time t, under a perturbation switched on at time s_1. With this notation in mind, the summation over k yields Eq. (28), where we used the linearity of D' in the protocol; summing over the first index yields Eq. (29). Notably, the perturbation in Eq. (29) starts at the negative time −τ < 0, and the probabilities cover a time interval of t − τ between measurements, due to integrating out the first state. After renaming indices, we finally obtain the second order susceptibility, Eq. (30). This expression is symmetric under exchange of its arguments t_1 = t and t_2 = t − τ, as expected from the definition of the susceptibility, Eq. (26). For the single-step perturbation, the second order response is given by the response function at equal time arguments, O^(2)[Θ_0] = χ(t, t), i.e., setting τ = 0. This is consistent with Eq. (16), since the addends of the second order susceptibility become equal for equal time arguments.

IV. ILLUSTRATION AND VERIFICATION: THE FOUR STATE MODEL
In this Section we use a simple example system, namely, a (driven) four state model which can be solved analytically, to verify and illustrate the concepts introduced in Section III.

A. Model and coarse graining
The second order response can be expressed in terms of the entropy production and a time-symmetric component, as in Eqs. (10) and (27). As a proof of concept, we consider a Markov jump process with four states, Γ = {A, B, C, D}, see Fig. 2, which is then coarse grained to a two-state one. Such Markov jump processes may be used to describe a variety of systems, see, e.g., Ref. [35]. The conditional probability p_αδ(s, t) for occupying state δ at time t if occupying α at an earlier time s is described by the master equation, Eq. (31) (an equivalent equation holds for occupation densities), with the rate q_αδ(t) for the transition from state α to δ, and setting q_αα = −Σ_{δ≠α} q_αδ in order to ensure probability conservation for incremental times, cf. Ref. [35]. More explicitly, we use time-independent rates q_AB = q_BA = q_CD = q_DC = r for the side links, while the center link has rates q_BC(t) = e^{εh(t)} and q_CB = 1 (with all other rates being 0). The only time-dependent rate, that of the center link, q_BC(t), will be used to drive the system. The rate matrix q(s) is given explicitly in the appendix, Eq. (B1).
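To make the setup concrete, the rate matrix and the master equation can be sketched as follows. This is an illustrative re-implementation, not the authors' code: the function names are ours, and a simple Euler integrator stands in for the analytical solution or time-ordered exponential used in the text.

```python
import numpy as np

def rate_matrix(r, eps=0.0, h=0.0):
    """Generator of the four-state model, states ordered (A, B, C, D).
    Side links A<->B and C<->D carry rate r; the center link has
    q_BC = exp(eps*h) and q_CB = 1.  Diagonal entries enforce
    vanishing row sums (probability conservation)."""
    A, B, C, D = 0, 1, 2, 3
    Q = np.zeros((4, 4))
    Q[A, B] = Q[B, A] = Q[C, D] = Q[D, C] = r
    Q[B, C] = np.exp(eps * h)
    Q[C, B] = 1.0
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

def propagate(p0, Q, t, dt=1e-3):
    """Integrate the master equation dp/dt = p Q for constant rates
    with an Euler scheme (sufficient for illustration)."""
    p = np.array(p0, dtype=float)
    for _ in range(int(t / dt)):
        p = p + dt * (p @ Q)
    return p
```

For a step protocol, one propagates with the ε = 0 generator up to the step and with the perturbed generator afterwards; at ε = 0 the uniform distribution over the four states is stationary.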
This system is coarse grained into two states, X = 0 and X = 1, by assigning ϕ(A) = ϕ(B) = 0 and ϕ(C) = ϕ(D) = 1. The two coarse-grained states are connected by the center link BC and the associated rates q_BC and q_CB of the underlying Markov process, which yields a non-Markovian two-state process. Notably, in the limit r ≫ 1, the resulting two-state process is Markovian, while it is strongly non-Markovian in the opposite limit r ≪ 1. Choosing r = 0.1 for Figures 4 and 5 places the system in the latter regime.
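The lack of Markovianity of the coarse process can be checked numerically: with these rates, the chain is not lumpable into the blocks {A, B} and {C, D} (the exit rates out of a block differ between its members), so the coarse-grained joint probabilities violate the Chapman-Kolmogorov factorization. The following self-contained sketch is our own construction, with r = 0.1, ε = 0, and times chosen for illustration:

```python
import numpy as np

# Four-state generator for r = 0.1, eps = 0; states ordered (A, B, C, D).
r = 0.1
Q = np.array([[-r,        r,        0.0, 0.0],
              [ r, -(r + 1.0),      1.0, 0.0],
              [0.0,      1.0, -(1.0 + r),  r],
              [0.0,      0.0,        r,  -r]])

dt, tau = 1e-3, 1.0
# Propagator over time tau, approximated by repeated Euler steps.
T = np.linalg.matrix_power(np.eye(4) + dt * Q, int(tau / dt))

blocks = {0: [0, 1], 1: [2, 3]}   # coarse graining: {A,B} -> 0, {C,D} -> 1
p_eq = np.full(4, 0.25)           # uniform equilibrium at eps = 0

def joint(a, b, c):
    """P(X_0 = a, X_tau = b, X_2tau = c) of the coarse process."""
    return sum(p_eq[i] * T[i, k] * T[k, j]
               for i in blocks[a] for k in blocks[b] for j in blocks[c])

# Chapman-Kolmogorov would require P_abc = P_ab * P_bc / P_b:
P_ab = sum(joint(0, 0, c) for c in (0, 1))
P_b = sum(joint(a, 0, c) for a in (0, 1) for c in (0, 1))
P_bc = sum(joint(a, 0, 1) for a in (0, 1))
deviation = abs(joint(0, 0, 1) - P_ab * P_bc / P_b)
```

The deviation is strictly positive for r of order 0.1 and shrinks as r grows, in line with the Markovian limit r ≫ 1 mentioned in the text.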
The associated two-state potential is given by V(0) = 0 and V(1) = 1, fulfilling q_BC(t)/q_CB = e^{−βεh(t)(ν(C)−ν(B))}, called the microscopic reversibility condition [3] or local detailed balance [35]. This is a sufficient condition for an entropy production of the form given in Eq. (4) [35]. For single-step perturbations, the master equation can be solved analytically. For more complex protocols, the solution is formally given by a time-ordered exponential, which may be expanded in orders of ε using a Dyson expansion. This allows us to illustrate analytically our approach of computing the second order susceptibility from linear quantities.

B. One, two, and three steps
We compute the second order susceptibility χ(t, t − τ) from the linear contributions S'_ij and D'_ij P_eq,ij for perturbations switched on at different times ±τ, according to Eq. (30). For the average in the coarse-grained two-state system with O(j) = j, Eq. (30) reduces to Eq. (32). The contributing entropy production is S'_01 = 1, and the relevant time-symmetric components D'_01 for the different perturbations are shown in Fig. 3. The explicit form of the second order susceptibility in the four-state model is given in the appendix, Eq. (B2). Employing this function and using Eq. (26) enables prediction of the second order response O^(2) for arbitrary protocols. Here, we demonstrate this by means of a protocol h = Θ_0 + Θ_{1/2} + Θ_{5/2} with three steps. As shown in Figure 4, the second order response formula coincides with the explicit solution. This is an example of employing the second order susceptibility obtained from linear contributions to correctly predict the second order response under a protocol with several steps.

C. Continuous protocol: Exact and discretized
As noted above, Eq. (26) readily describes any protocol, which we further illustrate using a sinusoidal protocol of the form of Eq. (33). The resulting response is shown in Fig. 5. In addition to the response corresponding to the protocol of Eq. (33), we show the responses to discretized versions of the protocol in the upper part of Fig. 5. This illustrates the possibility of an additional coarse graining along the time axis of a given protocol, which is one natural way of implementing Eq. (30) in practice (see also Sec. V below). How fine a discretization is needed? As seen in the graph, discretizing with an increment of unity (resulting in n = 5 steps in the given time range) yields pronounced deviations from the exact result. Discretizing with an increment of 1/3, resulting in n = 15 steps, yields more precise results. This can be understood from the curves in Fig. 4, where the (shortest) relaxation time, or the response time, is of the order of unity. This analysis suggests that the time increment should be small compared to that response time in order to accurately resolve the perturbation protocol.
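The time discretization used here can be sketched generically: a continuous protocol is replaced by a staircase of steps Θ_{t_k} with increments ∆h_k, which can then be fed into the step-based formulas. The following minimal helper is our own (the specific sinusoid of Eq. (33) is not reproduced; any array-capable h(s) works):

```python
import numpy as np

def discretize_protocol(h, t_max, dt):
    """Approximate a continuous protocol h(s) by a staircase
    h_step(s) = sum_k dh[k] * Theta(s - t[k]), with step times
    t_k = k*dt and increments chosen so that h_step(t_k) = h(t_k)."""
    t = np.arange(0.0, t_max, dt)
    values = h(t)
    dh = np.diff(np.concatenate(([0.0], values)))
    return t, dh
```

By construction the increments sum telescopically back to h(t_k), so refining dt makes the staircase converge to the protocol; dt = 1/3 corresponds to the n = 15 step case discussed above.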

V. EXTRAPOLATION: ISING MODEL
In this section we illustrate the validity of the time-dependent coarse-grained response theory for an interacting system with many degrees of freedom, using the example of a near-critical Ising model. Let us consider a 2d lattice of size L × L with periodic boundaries; each lattice site i carries a spin η_i = ±1 which interacts with its nearest neighbouring spins. Let the coarse-grained variable X correspond to a single site, say site k, so that X = (1 + η_k)/2. In other words, all spins except spin k are coarse grained away and play the role of a complex (non-Markovian) bath. This scenario may mimic the experimental situation where a system is perturbed and monitored at a local position in space. We thus introduce a magnetic field which acts on spin k, i.e., a potential V(X) = η_k = 2X − 1. The Hamiltonian describing the system at any time s is given in Eq. (34) (setting the spin coupling to unity). The explicit time dependence, attributed to the magnetic field via the protocol h(s), gives rise to a perturbation of the system from its equilibrium state.
In the absence of the magnetic field, i.e., with g = h(s) = 0, the system shows a para- to ferromagnetic transition at temperature T_c ≈ 2.269 in the limit of thermodynamically large size L (having set the Boltzmann constant to unity). Here we consider a system of size L = 16 at a slightly super-critical temperature T = 2.45. This finite-sized system shows a non-zero magnetization at this temperature, which randomly flips its sign on a slow time scale. We thus expect the resulting bath for the tagged spin η_k to be highly non-Markovian.
For the sake of simplicity, we take the two-step protocol introduced in Sec. III B [see Eq. (20)] with ∆h 0 = ∆h 1 = 1 and consider the response of the observable O(X) = X. We also use a time independent offset magnetic field of strength g = 2.0, which renders the equilibrium system non-symmetric, yielding a finite second order response.
Unlike for the four-state model, the susceptibility and response function cannot be calculated analytically here, and we take recourse to Monte-Carlo simulations. To be specific, we use Glauber dynamics, where a randomly selected spin flips with rate min{1, e^{−β∆H}}, with ∆H the change in energy due to the proposed flip and β = 1/T the inverse temperature of the system. One Monte-Carlo step consists of L² attempted flips, which defines the unit of time.

Figure 6. The relevant time-symmetric components D'_01 contributing to the second order response of the Ising model, measured in numerical simulations. Here we have considered a fixed τ = 20, and the linear response is calculated using ε = 0.05. The blue curve is obtained using the best fit for P^ε_10[θ_{t−τ}], while the grey curve shows the original data (see the main text and Appendix B 2 for details).
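The update rule described above can be sketched as follows. This is an illustrative re-implementation, not the authors' simulation code: the helper names and the single-site field argument are our own, and the flip is accepted with probability min{1, e^{−β∆H}} as stated in the text.

```python
import numpy as np

def mc_sweep(spins, beta, field_site=None, field=0.0, rng=None):
    """One Monte-Carlo step (L^2 attempted flips) for the 2d Ising
    model with periodic boundaries and coupling J = 1.  A proposed
    flip is accepted with probability min(1, exp(-beta*dH)); an
    optional magnetic field of strength `field` acts on the single
    tagged site `field_site`."""
    if rng is None:
        rng = np.random.default_rng()
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dH = 2.0 * spins[i, j] * nb          # nearest-neighbour coupling
        if field_site == (i, j):
            dH += 2.0 * field * spins[i, j]  # tagged-site field term
        if dH <= 0 or rng.random() < np.exp(-beta * dH):
            spins[i, j] *= -1
    return spins
```

The tagged-site field of strength f contributes 2 f η_k to ∆H when site k is selected, corresponding to the perturbing potential V(X) = η_k.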
To demonstrate the validity of the response formalism, we compare the response O^(2)(t) predicted by Eq. (26) with O^(2)_per(t), obtained from directly applying a larger perturbation. The latter is extracted accurately from Eq. (35), where ⟨·⟩_ε denotes the expectation value in the presence of the perturbation protocol of Eq. (20) with strength ε, and ⟨X⟩_eq is the expectation value in equilibrium. We use measurements with strengths ±ε to avoid errors of O(ε³) [34].
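The ±ε combination described above can be sketched as a central difference. This is a sketch under the assumption that the average expands as ⟨O⟩_ε = ⟨O⟩_eq + ε O^(1) + ε² O^(2) + O(ε³); the function name and this normalization of O^(2) are our own, and a different convention (e.g., including a factor 1/2!) would rescale the result.

```python
def second_order_from_pm(avg_plus, avg_minus, avg_eq, eps):
    """Estimate O^(2) from measurements at +eps, -eps, and equilibrium.
    Odd orders cancel in the symmetric combination, so the leading
    truncation error is O(eps^2) rather than O(eps)."""
    return (avg_plus + avg_minus - 2.0 * avg_eq) / (2.0 * eps ** 2)
```

For ε = 0.05, as used in the simulations, the O(ε²) truncation error is small provided the higher-order response coefficients are of comparable magnitude.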
On the other hand, the response theory predicts the second order susceptibility via Eq. (26). For the protocol of Eq. (20) with ∆h_0 = ∆h_1 = 1, it reduces to Eq. (36), where χ(t_1, t_2) is given by Eq. (30). As mentioned before, for O(X) = X, the sum reduces to a single term, namely i = 0, j = 1. Moreover, in this case, S'_01 = β(V(1) − V(0)) = 2β, and we only need to measure the linear parts of D'_ij under single-step perturbations at times 0 and ±τ.
Using the Monte-Carlo simulations and applying a (small) perturbation of strength ε = ±0.05, we measure the linear responses of the relevant path probabilities P_ij[h](t). The corresponding matrices D' are computed using Eq. (B4) in Appendix B 2. Figure 6 shows plots of the contributing D'_01, evaluated for the three different protocols as needed, compare Fig. 3. As mentioned above, qualitative differences to Fig. 3 result from the fact that here, a finite second order response remains in the long-time limit. The presence of a slow time scale is visible in the slow relaxation of the curves in Fig. 6.
For the particular case of P_ji[θ_{t−τ}], the statistics are poor, and the derivative is obtained by fitting P^{±ε}_ji[θ_{t−τ}] to a compressed exponential form and taking the difference of the fitted functions; see Appendix B 2 for more details. The dark blue curve in Fig. 6 shows the D'_01[θ_{t−τ}] obtained using this fit; the light grey curve shows the original data.
The second order response is obtained using Eq. (36) along with Eq. (30). Figure 7 compares the directly obtained response O^(2)_per(t) (symbols) with the predicted response O^(2)(t) (solid lines) for two different values of τ. At late times, t → ∞, the susceptibility reaches a stationary value which is independent of τ and is nothing but the equilibrium second order response to a perturbation ε(∆h_0 + ∆h_1)V(X) = 2ε(2X − 1). This can be calculated by a series expansion of the Boltzmann weight and turns out to be 8β²⟨X⟩_eq(1 − 2⟨X⟩_eq)(1 − ⟨X⟩_eq), as shown in detail in Appendix B 3. This value is indicated by a black dashed line in the figure.
It is worth mentioning that the procedure used in this section generalizes the extrapolation scheme introduced in Ref. [34] to arbitrary time-dependent perturbations: the second order response, which becomes relevant for comparatively stronger perturbations, can be predicted from measuring path probabilities close to equilibrium (i.e., within the linear response regime).

VI. CONCLUSIONS
We developed a second order response theory for coarse grained observables, which is valid for perturbation protocols of arbitrary time dependence, thereby advancing over Ref. [34]. One application of this theory is an extrapolation scheme, where measurements within the linear regime can predict the second order. The mentioned linear measurements can thereby be performed for the simple perturbation protocol of a single switch-on event, and the second order for arbitrary protocols follows.
The necessary spatial resolution, i.e., the degree of coarse graining possible in this approach, is set by the perturbation. Returning to the introductory example of two colloidal particles: If the perturbation acts only on one of the two colloids, the other one can be coarse grained away, i.e., its position does not have to be monitored. An important difference to approaches based on fast and slow variables is thus that, in the presented scheme, the coarse-grained variables are allowed to be non-Markovian. The scheme can be applied to any time dependence of the protocol. As for the spatial resolution, it is the protocol which sets the (experimental) time resolution required to apply the scheme; as found in the explicit examples, a temporal resolution that is fine compared to the response time of the coarse-grained variables is sufficient.
Technically, this scheme relies on resolution of the entropy production, so that the entropy production and the time-symmetric part of the action decouple upon coarse graining. This work is thus naturally in agreement with (macroscopic and stochastic) thermodynamics and with the known fluctuation relations. Its new contribution lies in the description of the non-thermodynamic, time-symmetric part of the action. Future work will consider higher orders of perturbation, as well as possibilities of combining this scheme with approaches that rely on a separation of fast and slow time scales. It may also be insightful to combine this approach with estimates of the entropy production for cases where the potential acts on partly inaccessible d.o.f. [39].

Expanding in orders of ∆h_0 and ∆h_1 yields Eq. (A3). The two forms appearing in Eq. (25) and Eq. (A3) are given by Eqs. (A4) and (A5). Here, we expanded the probability under the backwards protocol, Θ̄_τ = Θ_0 − Θ_{t−τ}, by using its linearity in ∆h_i, Eq. (A6). These equations are the basis for integrating out one index, for example the state k at time τ_1 in Eq. (A4) and the initial state i in Eq. (A5). Summing over possible states in a joint probability yields Σ_j P(X_{t_1} = i, X_{t_2} = j, X_{t_3} = k) = P(X_{t_1} = i, X_{t_3} = k), thus yielding probabilities P_ij for different protocols, see Eqs. (28) and (29), respectively.

Markov case
The results derived in Sections II B and III do not rely on a Markov property of the coarse grained variables. There might be practical cases, however, where the degrees of freedom under consideration are Markovian, for example if a local equilibrium approximation for the integrated degrees is justified. In that case, $(X_s)_{s\in[0,t]}$ is a Markov process, and hence obeys the formulas of the microscopic response formalism in Section II A (see specifics in Refs. [7,13]). Notably, the linear contribution of the time symmetric part is given as a superposition of instantaneous values, denoted $d(x_s)$, as explicated in Ref. [13]. We can thus decompose the time symmetric part, and likewise the probability, in terms of the conditional probability $p_{kj}$ introduced before Eq. (31). Eq. (A9) is in contrast to the case of non-Markovian processes, where states at different times couple due to memory effects. With only the quantity $D_{ik}[\Theta_0]$ appearing in Eq. (A4) (evaluated at different times), the Markov case thus takes on the complexity of the single-step protocol described in Sec. III A. This simplifies the extrapolation scheme introduced in Section V, as only $P_{ij}[\Theta_0]$ and $P^{\mathrm{eq}}_{ij}$ need to be measured in order to find the second order response for any protocol.

The rates are chosen so that the row sums are 0 and, as explained, $r$ is a dimensionless parameter. For $r \ll 1$ this system exhibits much slower rates within the macrostates than those connecting the two coarse-grained states, which are of order 1. Still, our extrapolation technique succeeds (Fig. 4). This illustrates that our method does not rely on a separation of time scales, as also demonstrated in Ref. [34]. For the average of the coarse-grained observable $O(X) = X$, the second order susceptibility is computed from Eq. (32).
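As an illustration of such a rate matrix, the sketch below constructs a hypothetical 4-state generator (the actual rates of the model are not reproduced here): states {0, 1} and {2, 3} form the two macrostates, intra-macrostate rates scale with $r$, inter-macrostate rates are of order 1, the row sums vanish as for any rate matrix, and the stationary distribution is the left null vector:

```python
import numpy as np

r = 0.01  # dimensionless parameter; r << 1 means slow intra-macrostate rates

# Hypothetical 4-state generator: W[i, j] is the rate i -> j for i != j.
# States {0, 1} and {2, 3} are the two coarse-grained macrostates.
W = np.array([
    [0.0, r,   1.0, 0.0],
    [r,   0.0, 0.0, 1.0],
    [1.0, 0.0, 0.0, r  ],
    [0.0, 1.0, r,   0.0],
])
W -= np.diag(W.sum(axis=1))          # diagonal fixed so that row sums are 0

# Stationary distribution pi satisfies pi @ W = 0 (left null vector of W)
vals, vecs = np.linalg.eig(W.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals))])
pi /= pi.sum()
print(pi)                             # uniform here, by the symmetry of W
```

With this symmetric choice the stationary state is uniform; the point of the construction is only the scale separation $r$ versus 1, which can be tuned freely without affecting the vanishing row sums.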
As mentioned in the main text, for the particular case of $P_{ij}[\theta_{t-\tau}](t)$, instead of calculating the derivative directly from the numerically measured path probabilities, we use a functional fit. We first fit $P^{\varepsilon}_{10}[\theta_{t-\tau}](t) - P^{\varepsilon}_{10}[\theta_{t-\tau}](\tau)$ to a functional form $a\left(1 - \exp[-b(t-\tau)^c]\right)$ (remember that the path probability is zero for $t < \tau$ in this case), with $a$, $b$, $c$ as fitting parameters. The derivative is then calculated using Eq. (B3) along with these fitted functions. For the sake of completeness, we provide the values of these fitting parameters in Table I.
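The fitting step can be sketched as follows. The synthetic data, noise level, and parameter values below are assumptions for illustration, standing in for the measured path probabilities:

```python
import numpy as np
from scipy.optimize import curve_fit

# Functional form used to smooth the measured path probability difference
def model(t, a, b, c):
    return a * (1.0 - np.exp(-b * t**c))

# Analytical derivative of the fitted form, replacing a noisy numerical one
def model_deriv(t, a, b, c):
    return a * b * c * t**(c - 1) * np.exp(-b * t**c)

rng = np.random.default_rng(0)
t = np.linspace(0.1, 10.0, 50)        # times t - tau after the step
true = model(t, 0.8, 0.5, 1.2)        # stand-in for the measured curve
data = true + 1e-3 * rng.normal(size=t.size)  # simulated measurement noise

popt, _ = curve_fit(model, t, data, p0=[1.0, 1.0, 1.0])
a, b, c = popt
print(model_deriv(1.0, a, b, c))      # fitted derivative at t - tau = 1
```

Differentiating the fitted closed form rather than the raw data avoids amplifying the measurement noise, which is the reason for this step.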

Static response in the Ising model
The long-time limiting value of the second order response in the Ising model can be computed from the equilibrium Boltzmann distribution. Under the perturbation protocol (20), in the long-time limit the system reaches an equilibrium state characterized by configuration weights
$$ p_\varepsilon(x) = \frac{1}{Z_\varepsilon}\, e^{-\beta\left[H_0(x) - \varepsilon(\Delta h_0 + \Delta h_1)V(x)\right]}, $$
where $Z_\varepsilon$ is the equilibrium partition function and $H_0$ is the Hamiltonian in the absence of the perturbation. The second order response of any observable $O$ can be calculated by expanding the above weight around $\varepsilon = 0$, multiplying by $O$, and summing over all possible configurations. This straightforward exercise leads to the formal expression
$$ O^{(2)} = \frac{\beta^2 (\Delta h_0 + \Delta h_1)^2}{2}\left[\langle OV^2\rangle - \langle O\rangle\langle V^2\rangle + 2\langle O\rangle\langle V\rangle^2 - 2\langle OV\rangle\langle V\rangle\right], $$
with all averages taken in the unperturbed equilibrium state. For the case $O(X) = X$ and $V(X) = 2X - 1$ with $\Delta h_0 = \Delta h_1 = 1$, the above expression simplifies to
$$ O^{(2)} = 8\beta^2 \langle X\rangle_{\mathrm{eq}}\left(1 - 2\langle X\rangle_{\mathrm{eq}}\right)\left(1 - \langle X\rangle_{\mathrm{eq}}\right) \quad \text{(B6)} $$
where we have used the fact that $X^2 = X$.
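Expression (B6) can be checked numerically for a single binary degree of freedom $X \in \{0, 1\}$, where the cancellation from $X^2 = X$ is exact. The two energies below are an arbitrary choice that merely fixes $\langle X\rangle_{\mathrm{eq}}$:

```python
import numpy as np

beta = 1.0
E = np.array([0.0, 1.0])   # assumed energies of the states X = 0 and X = 1
X = np.array([0.0, 1.0])   # observable O(X) = X
V = 2 * X - 1              # perturbation potential V(X) = 2X - 1

def avg_X(eps):
    # equilibrium average under the total field eps*(dh0 + dh1) = 2*eps
    w = np.exp(-beta * (E - 2 * eps * V))
    return np.sum(X * w) / np.sum(w)

# O^(2) is the coefficient of eps^2, i.e. half the second derivative at eps = 0
eps = 1e-4
O2_num = (avg_X(eps) - 2 * avg_X(0.0) + avg_X(-eps)) / eps**2 / 2

p = avg_X(0.0)
O2_formula = 8 * beta**2 * p * (1 - 2 * p) * (1 - p)
print(O2_num, O2_formula)  # the two values agree
```

The finite-difference estimate reproduces the closed form (B6), confirming the combinatorial factors in the expansion.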