Cost and Precision of Brownian Clocks

Brownian clocks are biomolecular networks that can count time. A paradigmatic example is a protein that goes through a cycle, thus regulating some oscillatory behaviour in a living system. Typically, such a cycle requires free energy, often provided by ATP hydrolysis. We investigate the relation between the precision of such a clock and its thermodynamic costs. For clocks driven by a constant thermodynamic force, a given precision requires a minimal cost that diverges as the uncertainty of the clock vanishes. In marked contrast, we show that a clock driven by a periodic variation of an external protocol can achieve arbitrary precision at arbitrarily low cost. This result constitutes a fundamental difference between processes driven by a fixed thermodynamic force and those driven periodically. As a main technical tool, we map a periodically driven system with a deterministic protocol to one subject to an external protocol that changes in stochastic time intervals, which simplifies calculations significantly. In the non-equilibrium steady state of the resulting bipartite Markov process, the uncertainty of the clock can be deduced from the calculable dispersion of a corresponding current.


I. INTRODUCTION
Periodic behavior is ubiquitous in living systems, from neural oscillations [1] to circadian cycles [2,3]. An example of a well-studied biochemical oscillation is the phosphorylation-dephosphorylation cycle of the KaiC protein [3-7]. This cycle functions as a circadian clock, allowing a cyanobacterium to tell time [4], i.e., to oscillate in synchrony with day-night changes. Another example of a biochemical oscillation related to a phosphorylation-dephosphorylation cycle of a protein occurs in the activator-inhibitor model recently analyzed in [8]. More generally, biochemical oscillations are typically associated with a protein that goes through a cyclic sequence of states. Any such protein can be taken as an example of a Brownian clock.
The question we investigate in this paper concerns the relation between precision and dissipation in Brownian clocks. Given that the clock should have a certain precision, what is the minimal energy budget required to run a clock with this precision?
We model a Brownian clock as an inhomogeneous biased random walk on a ring. The different states of the clock can be interpreted as different states of a protein that influences a biochemical oscillation; changes in these states would correspond to, e.g., conformational changes or phosphorylation steps. We consider two classes of clocks. First, we analyze a clock driven by a constant thermodynamic force that can be generated by, for example, ATP hydrolysis. For this class, the general thermodynamic uncertainty relation we obtained in [43] (see also [44-48]) establishes the best precision that can be obtained with a given energy budget. Within this class a precise clock requires a minimal energy dissipation.
The second class is represented by a clock that is driven by a periodic external protocol. Systems driven by such protocols reach a periodic steady state and are known as "stochastic pumps" [49][50][51][52][53][54][55][56][57][58][59]. Experimental examples of such systems are the generation of rotational motion in an artificial molecular motor driven by an external protocol [60] and the pumping of ions across membranes in red blood cells driven by an oscillating electric field [61]. We show that a clock in this class can achieve high precision with an arbitrarily small energy budget. Hence, a clock in this second class is fundamentally different from a clock driven by a fixed thermodynamic force.
The mathematical treatment of systems that reach a periodic steady state, which are driven by deterministic protocols, is typically difficult. In particular, calculating the dispersion associated with the clock can be quite challenging [62]. For our investigation on the fundamental differences between the two classes we consider a generic theoretical framework for which the protocol changes at random time intervals [63]. Such protocols have been realized in experiments [64,65]. Within this theoretical framework the system, i.e., the clock, and the external protocol together form a bipartite Markov process [20,36,37,66,67]. This property considerably simplifies calculations; in particular, it allows us to calculate analytically the dispersion of the clock. Using these analytical tools we find the optimal parameters that lead to a clock that can achieve high precision with arbitrarily low dissipation. With this proper tuning in hands, we confirm numerically that the corresponding clock with a deterministic protocol can also achieve high precision with vanishing dissipation.
For protocols that change at stochastic times, we prove that given a periodic steady state with a certain probability distribution, it is always possible to build a steady state of a bipartite Markov process, which comprises the system and the external protocol, that has the same probability distribution.
This paper is organized as follows. In Sec. II we discuss a clock driven by a fixed thermodynamic force. Our main result comes in Sec. III, where we show that a clock driven by an external protocol can combine high precision with arbitrarily low dissipation. We conclude in Sec. IV. Appendix A contains the thermodynamics of systems driven by external stochastic protocols. In Appendix B we prove the equivalence between a periodic steady state and a steady state of a bipartite process composed of both system and external protocol. More details for the model analyzed in Sec. III are given in Appendix C.

II. BROWNIAN CLOCK DRIVEN BY A FIXED THERMODYNAMIC FORCE
The simplest model of a Brownian clock is a biased random walk on a ring with N states and arbitrary, possibly different, rates [68], as illustrated in Fig. 1 for N = 4. The transition rate from state i to state i + 1 is k^+_i, whereas the transition rate from i to i − 1 is k^−_i. Time is counted by the number of full revolutions of the pointer. Whenever the pointer undergoes the transition from state N to state 1, one unit of clock "time" has passed. Since the clock is stochastic, a backward step from state 1 to state N can also happen. If, in the next step, the pointer moves forward again from N to 1, one should not attribute the passing of a second time unit to this sequence of events. Hence, one counts a backward step from 1 to N as (−1) to prevent such over-counting. The stochastic variable that counts time thus is a fluctuating current X that increases by one if there is a transition from N to 1 and decreases by one if there is a transition from 1 to N.
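This counting convention can be illustrated with a short simulation (not from the paper; the rates, ring size, and running time are chosen purely for illustration): a Gillespie-type simulation of a uniform biased ring, counting X as the net number of clockwise crossings of the N → 1 link, recovers the stationary current (k^+ − k^−)/N of the uniform ring quoted below.

```python
import random

def simulate_clock(N=4, kp=2.0, km=1.0, T=20000.0, seed=1):
    """Gillespie simulation of a uniform biased ring; X counts net N->1 crossings."""
    rng = random.Random(seed)
    i, X, t = 1, 0, 0.0
    ktot = kp + km
    while t < T:
        t += rng.expovariate(ktot)       # waiting time until the next jump
        if rng.random() < kp / ktot:     # clockwise step i -> i+1
            if i == N:                   # crossing N -> 1: one unit of clock "time"
                X += 1
            i = i % N + 1
        else:                            # anticlockwise step i -> i-1
            if i == 1:                   # crossing 1 -> N is counted as -1
                X -= 1
            i = N if i == 1 else i - 1
    return X / T                         # empirical current <X>/T

J_est = simulate_clock()
J_exact = (2.0 - 1.0) / 4               # (k+ - k-)/N for the uniform ring
print(J_est, J_exact)
```

With the chosen parameters the relative statistical error of the estimate is of order one percent, so the empirical current agrees well with the analytic value.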
In the stationary state, the average of X is given by the probability current J through

⟨X⟩ = J T,   (1)

where T is the total running time of the clock. The relative uncertainty of the clock is

ǫ² ≡ (⟨X²⟩ − ⟨X⟩²)/⟨X⟩² = 2D/(J²T),   (2)

where we have introduced the diffusion coefficient

D ≡ lim_{T→∞} (⟨X²⟩ − ⟨X⟩²)/(2T).   (3)

The clock is driven in the clockwise direction by, for example, a chemical potential difference A that is related to the transition rates by the generalized detailed balance condition [30]. This condition for this clock reads

Γ₊/Γ₋ = e^A,   (4)

where Γ± ≡ ∏_{i=1}^N k^±_i and we set Boltzmann's constant k_B multiplied by the temperature T to k_BT = 1 in the equations throughout the paper. Each revolution of the clock costs an amount A of free energy. Hence, running the clock for a total time T costs an average free energy

C ≡ A⟨X⟩ = A J T.   (5)

The uncertainty of the clock, the cost of running it, and its number of states N are constrained by a universal thermodynamic uncertainty relation [43], which we discuss in the following.
For a biased random walk with uniform rates k^+ and k^−, the current is J = (k^+ − k^−)/N and the diffusion coefficient is D = (k^+ + k^−)/(2N²) [68]. For this case, the cost C in Eq. (5) times ǫ² in Eq. (2) gives Cǫ² = 2DA/J = (A/N) coth[A/(2N)], where we used Eq. (4), which implies A/N = ln(k^+/k^−). It turns out that for a fixed affinity A this product is indeed minimized by such uniform rates [43], leading to the uncertainty relation

Cǫ² ≥ 2.   (6)

We note that this bound is saturated, with Cǫ² = 2, for a clock close to equilibrium, i.e., in the linear response regime with small A. The implications of Eq. (6) for the design, precision, and cost of such a Brownian clock can best be illustrated by comparing two clocks using familiar notions. Suppose we want to measure reliably, say with a precision ǫ = 10⁻², a time of one hour with either a "slow" clock that takes one minute for a revolution or a "fast" clock that takes only one second. The mean of the stochastic variable X will be 60 or 3600, respectively. First, since coth(x) > 1, the inequality (6) with (5) implies a structural constraint on the minimal number of states N_min = (ǫ²⟨X⟩)⁻¹ required for a cycle, which turns out to be 167 and 3 for the slow and the fast clock, respectively. The crucial quantity thus is the product N⟨X⟩, i.e., the number of elementary steps taken for the measurement. For a precision of 10⁻², a clock has to undergo at least 10⁴ elementary steps. A clock counting "minutes" rather than "seconds" is not necessarily less precise, provided its cycle consists of sufficiently many elementary steps. Second, for a given design, i.e., N, the affinity driving the clock has to be at least

A_min = 2/(ǫ²⟨X⟩).   (7)

For the slow clock, A_min ≃ 333, and for the fast one A_min ≃ 5.55. The overall cost of measuring one hour with this precision is bounded from below by C_min = A_min⟨X⟩ = 2/ǫ² = 20000 for both types.
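The numbers in this comparison follow directly from the bounds above; a small script (illustrative, not part of the paper) reproduces them from N_min = ⌈1/(ǫ²⟨X⟩)⌉ and A_min = 2/(ǫ²⟨X⟩), and checks that the product (A/N) coth[A/(2N)] saturates the bound 2 in linear response.

```python
import math

def product_uniform(A, N):
    """C*eps^2 = (A/N) coth(A/(2N)) for the uniform biased ring."""
    x = A / (2 * N)
    return (A / N) / math.tanh(x)

eps = 1e-2
for label, X in [("slow", 60), ("fast", 3600)]:
    N_min = math.ceil(1.0 / (eps**2 * X))  # structural constraint from C*eps^2 > A/N
    A_min = 2.0 / (eps**2 * X)             # from C*eps^2 >= 2 with C = A*<X>
    cost = A_min * X                       # = 2/eps^2, independent of the design
    print(label, N_min, round(A_min, 2), cost)

# the bound saturates in linear response: (A/N) coth(A/(2N)) -> 2 as A -> 0
print(product_uniform(1e-8, 3))
```

The printed costs are 20000 for both designs, illustrating that neither the slow nor the fast clock is energetically preferable.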
From an energetic point of view, neither the slow nor the fast design is preferable. In a biochemical network, free energy is typically provided by ATP hydrolysis, which under physiological conditions liberates approximately 20 k_BT. The universal result Cǫ² ≥ 2 implies that small uncertainty always has an energetic price associated with it: since C ≥ 2/ǫ² and one ATP provides about 20 k_BT, an uncertainty ǫ requires the consumption of at least 1/(10ǫ²) ATP molecules. As we show next, the situation for a clock driven by an external protocol is fundamentally different, since there high precision does not require a minimal energy budget.

III. BROWNIAN CLOCK DRIVEN BY AN EXTERNAL PROTOCOL

A. Model Definition
For a Brownian clock driven by an external time-dependent protocol we also consider a ring geometry with N states. The forward transition rates k_{i,i+1}(t) and the backward transition rates k_{i,i−1}(t) depend on the time t with a period τ. The energy of site i is denoted E_i(t), whereas the energy barrier between sites i and i + 1 is B_i(t). Using the parameters E_i(t) and χ_i(t) ≡ e^{−B_i(t)}, we fix the rates in the Arrhenius form

k_{i,i+1}(t) = χ_i(t) e^{E_i(t)},   k_{i,i−1}(t) = χ_{i−1}(t) e^{E_i(t)}.

For fixed t the rates fulfill the detailed balance relation k_{i,i+1}(t)/k_{i+1,i}(t) = e^{E_i(t)−E_{i+1}(t)}. Hence, if the rates are time independent, there is no probability current in the ring and the clock cannot count time. A current can be generated by a periodic variation of both the energies E_i and the energy barriers B_i. A simple and symmetric choice for such a protocol is as follows; see Fig. 2. The full period τ of the external protocol is divided into N parts. In the first part of the period, from t = 0 to t = τ/N, the rates take their base values,

k_{i,i+1}(t) = k^+_i ≡ χ_i e^{E_i},   k_{i,i−1}(t) = k^−_i ≡ χ_{i−1} e^{E_i}.

In the second part of the period, from t = τ/N to t = 2τ/N, the energies and energy barriers are shifted one step in the clockwise direction, i.e., the rates change to k_{i,i+1}(t) = k^+_{i−1} and k_{i,i−1}(t) = k^−_{i−1}, where for the variable labeling a state i a sum i + j is understood modulo N. In general, for t ∈ [(n−1)τ/N, nτ/N] with n = 1, 2, …, N, the transition rates are given by k_{i,i+1}(t) = k^+_{i−n+1} and k_{i,i−1}(t) = k^−_{i−n+1}. Besides the variable i = 1, 2, …, N we also consider a variable α = 1, 2, …, N, which is convenient for our calculations. Whereas the variable i marks a position in the clock, the variable α is determined by the energy of the state, E_α. If the external protocol changes during the period, the transition rates for the variable i rotate in the clockwise direction, whereas the variable α undergoes an effective backward transition, as illustrated in Fig. 2.
The random variable X is the same as for the previous clock: X counts the number of transitions between i = N and i = 1 in the clockwise direction minus the number of transitions in the anticlockwise direction. Analytical calculations for the above model, which reaches a periodic steady state, turn out to be complicated. In particular, a method to calculate the diffusion coefficient (3) for arbitrary N is not available. However, if we consider a protocol that changes at stochastic times with a rate γ = N/τ, analytical calculations become simpler. In Appendix A, we explain a general theory for such stochastic protocols, along the lines of [63]. We show that an analytical expression for the diffusion coefficient D can be obtained in this case. Furthermore, in Appendix B we show that, given a periodic steady state arising from a continuous deterministic periodic protocol, it is always possible to build a bipartite process comprising the system and the stochastic protocol that has the same probability distribution as the periodic steady state [71].
For the clock with stochastic protocol, the energies and energy barriers change at stochastic times with a rate γ = N/τ. The precise definition of the model for general N is presented in Appendix C. Here in the main text we discuss the case N = 4, which is represented in Fig. 3. It turns out that the full bipartite process can be reduced to a Markov process with only four states. In this reduced description we use the variable α. Jumps with rate γ correspond to one rotation of the transition rates. Effectively, such a rotation corresponds to a backward jump of the α variable, as illustrated for the deterministic protocol in Fig. 2 and explained in more detail in Appendix C.

B. Optimal Time-Scales and Energy Barriers
As explained in Appendix C, we can calculate the current J, entropy production rate σ, and diffusion constant D analytically for this clock with the stochastic protocol, which leads to the product Cǫ² = 2Dσ/J² as a function of the transition rates. The entropy production is equal to the rate of work done on the system due to the periodic variation of the external protocol. As for the previous clock driven by a fixed thermodynamic force, if this clock runs for a time T, the energetic cost is C = σT and the uncertainty is ǫ² = 2D/(J²T).

FIG. 3. Effective network for a clock driven by an external protocol that changes at stochastic times with N = 4 states. The green backward arrows represent a jump with rate γ = N/τ. A backward jump is equivalent to a forward rotation of the rates represented in Fig. 2.
For the simplest clock with N = 3, the minimum value of the product turns out to be Cǫ² ≃ 1.33651, which is smaller than the universal limit 2 for systems driven by a fixed thermodynamic force. We have obtained this product as a function of the transition rates up to N = 6. Minimizing Cǫ² numerically, we find that the minimum decreases with N and that the transition rates at the minimum have the properties χ₁ = χ₂ = … = χ_{N−1} = χ ≫ γ and χ_N → 0. Thus, in this limit, the energy barrier between states N and 1 becomes infinite, effectively blocking transitions between these states. Moreover, the internal transitions are much faster than changes in the protocol, i.e., the system equilibrates before the next change in the external protocol happens, which is a common situation in studies of periodically driven systems [49-52]. For this clock, the product Cǫ² is minimized in the far-from-equilibrium regime, in contrast to the clock from Sec. II, for which the minimum occurs in the linear response regime.
In this limit, the expressions for the current J and the diffusion coefficient D become

J = (γ/N)(1 − N P₁)   (12)

and

D = [γ/(2N²)][1 + N(N − 2) P₁],   (13)

where P₁ ≡ e^{−E₁}/Z is the equilibrium probability of the highest-energy state and Z ≡ Σ_{α=1}^N e^{−E_α}. These expressions can be obtained by mapping the model in this special limit onto a biased random walk, as explained in Appendix C. The basic idea behind this mapping is to consider the position of the particle, i.e., the state of the clock, in relation to the barrier. If the barrier moves and the particle is in state α = 1, then the particle crosses the barrier and moves to state α = N, corresponding to a backward step of size N − 1 of the random walk. Otherwise, the particle moves one step closer to the barrier, i.e., from state α to α − 1, corresponding to a forward step of size 1.
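In this limit the clock thus reduces to an i.i.d. jump process: at each protocol jump (rate γ) the walker advances by +1, unless it sits in the highest-energy state, with equilibrium probability P₁, in which case it crosses the barrier and falls back by N − 1. The following sketch (illustrative parameters, not from the paper) samples this walk and compares the estimated current and dispersion with the limiting expressions (12) and (13); since the jump times are Poissonian, the dispersion involves the full second moment of the step.

```python
import math, random

def limit_J_D(E, gamma=1.0):
    """Limiting current (12) and diffusion coefficient (13) for energies E[0..N-1],
    with E[0] the highest-energy state behind the barrier."""
    N = len(E)
    Z = sum(math.exp(-e) for e in E)
    P1 = math.exp(-E[0]) / Z
    J = gamma * (1 - N * P1) / N
    D = gamma * (1 + N * (N - 2) * P1) / (2 * N**2)
    return J, D

def sample_walk(E, steps=400000, gamma=1.0, seed=7):
    """i.i.d. steps: +1 with prob 1-P1, -(N-1) with prob P1, at Poisson rate gamma."""
    N = len(E)
    Z = sum(math.exp(-e) for e in E)
    P1 = math.exp(-E[0]) / Z
    rng = random.Random(seed)
    d = [(-(N - 1) if rng.random() < P1 else 1) for _ in range(steps)]
    mean = sum(d) / steps
    m2 = sum(x * x for x in d) / steps   # second moment: Poisson timing adds drift^2
    return gamma * mean / N, gamma * m2 / (2 * N**2)   # (J_est, D_est) in revolutions

E = [3.0] + [0.0] * 7                    # simple profile, N = 8, E = 3
J, D = limit_J_D(E)
J_est, D_est = sample_walk(E)
print(J, J_est, D, D_est)
```

With 4 × 10⁵ samples the Monte Carlo estimates agree with (12) and (13) to within a few percent.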
The entropy production σ is calculated with the expression in Eq. (A12), which gives

σ = (γ/Z) Σ_{α=1}^N (E_{α−1} − E_α) e^{−E_α},   (14)

with E₀ ≡ E_N. This expression for the entropy production, which is the rate of work done on the system, can be understood as follows. If there is a jump that changes the external protocol, the work done on the system is given by the energy change of the system after the jump. If the system is in a state α, this energy change is E_{α−1} − E_α. Therefore, the rate of work done on the system in Eq. (14) is γ times a sum over all states α of this energy difference multiplied by the probability of the system being in state α before an external jump, which is Z⁻¹e^{−E_α}. In marked contrast to the clock driven by a fixed thermodynamic force, the cost C = σT for this periodically driven clock is, in general, not proportional to the current J given in Eq. (12).

C. Dissipation-less Clock I: Simple Profile
Before discussing the optimal energy profile that minimizes the product Cǫ² we consider the simple profile

E_α = E δ_{α,1},   (15)

where δ_{α,1} is the Kronecker delta and E > 0. In this case, using Eqs. (12), (13), and (14), the product Cǫ² = 2Dσ/J² becomes

Cǫ² = E(1 − e^{−E})[1 + N(N − 2)P₁] / [Z(1 − NP₁)²],   (16)

with P₁ = e^{−E}/Z and Z = e^{−E} + N − 1. By choosing E and N in such a way that e^E ≫ N ≫ E, the product (16) can reach an arbitrarily small value. For example, for N = 64 and E = 5.7 we obtain Cǫ² ≃ 0.11. The fact that it is possible to build a clock that has small uncertainty and dissipates arbitrarily low energy is the main result of this paper. Such a dissipation-less clock is in stark contrast with a clock driven by a fixed thermodynamic force, which is constrained by the thermodynamic uncertainty relation Cǫ² ≥ 2.
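Combining the limiting expressions (12), (13), and (14), the product Cǫ² for the simple profile can be evaluated directly; the sketch below (illustrative, not part of the paper) reproduces the quoted value Cǫ² ≈ 0.11 for N = 64 and E = 5.7 and shows the decrease with N at fixed E.

```python
import math

def product_simple_profile(N, E, gamma=1.0):
    """C*eps^2 = 2*D*sigma/J^2 for the simple profile E_alpha = E*delta_{alpha,1},
    in the limit of fast internal transitions and an infinite barrier."""
    Z = math.exp(-E) + (N - 1)
    P1 = math.exp(-E) / Z                # probability of the single excited state
    J = gamma * (1 - N * P1) / N
    D = gamma * (1 + N * (N - 2) * P1) / (2 * N**2)
    # work is done only when the system sits just behind the moving barrier
    sigma = gamma * E * (1 - math.exp(-E)) / Z
    return 2 * D * sigma / J**2

print(product_simple_profile(64, 5.7))   # ~0.11
print([round(product_simple_profile(N, 5.7), 3) for N in (16, 32, 64, 128)])
```

The second line shows the product falling roughly as E/N once e^E ≫ N, in line with the scaling argument given below.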
A physical explanation for this result is as follows. Let us consider the case where E is large enough that the particle is practically never at position α = 1 when the barrier moves forward. This condition amounts to e^E ≫ N. In this case, the position of the particle with respect to the energy barrier always diminishes by one when the barrier moves. The current is then given by the velocity of the barrier, J ≃ γ/N, and the dispersion is D ≃ γ/(2N²), which is the dispersion of the random walk performed by the barrier that has only forward transitions with rate γ. Work is done on the system only if the particle is in state α = 2 when the barrier moves, which happens with probability close to 1/(N − 1). For large N, the entropy production is then given by σ ≃ γE/N. The product of cost and uncertainty becomes Cǫ² = 2Dσ/J² ≃ E/N. The condition N ≫ E guarantees a small dissipation, leading to a product Cǫ² that can be arbitrarily close to 0. The mechanism that allows for this scaling of the product Cǫ² with N is the large energy barrier that determines the current J and the dispersion D. Such a mechanism cannot be realized with the clock driven by a fixed thermodynamic force from Sec. II.

D. Dissipation-less Clock II: Optimal Profile
In the limit where the expressions (12), (13), and (14) are valid, the minimum of Cǫ² is achieved with an optimal energy profile {E_α} that depends on N, as shown in Fig. 4. The depth of the minimum of this energy profile grows as N², and for larger N the profile becomes flatter in the middle. Hence, for large N, the probability P₁ of the state with the highest energy goes to zero and, from expressions (12) and (13), J → γ/N and D → γ/(2N²), respectively. Current and diffusion are then determined by the unidirectional random walk performed by the barrier, as is the case for the simple profile from Eq. (15) with a large E.
We verified numerically that for this optimal profile the entropy production rate behaves as σ ∼ N⁻². The product Cǫ² = 2Dσ/J² ∼ N⁻² can then become arbitrarily small for large N. For example, for a clock with N = 64 states and with an optimal energy profile, we get Cǫ² ≃ 0.0047. Hence, with this clock, an uncertainty ǫ = 10⁻² costs approximately 47 k_BT, which is much less than the minimal cost of 20000 k_BT found above for a clock with the same precision driven by a fixed thermodynamic force.
This clock with an optimal energy profile also relies on the mechanism of a large barrier that controls the dispersion and current of the clock, with the difference that the energy dissipation can be suppressed as N −2 .
A dissipation-less and precise clock can also be obtained with a deterministic protocol. We have confirmed with numerical simulations up to N = 8, using the optimal energy profile from Fig. 4, that for a deterministic protocol J and σ are the same as given by (12) and (14), while D becomes smaller. Such a smaller diffusion comes from the fact that the deterministic protocol does not have the randomness associated with the waiting times for a change in the protocol. Therefore, the product Cǫ 2 is even smaller in this case and also vanishes for large N .

E. Numerical Case Study
For illustrative purposes we compare a specific clock driven by an external protocol with the results for clocks driven by a fixed thermodynamic force. In Fig. 5, we show a contour plot of the product Cǫ² for N = 3. The energies of the clock are set to E₁ = 0, E₂ = −1.21938, and E₃ = −1.43550, which is the optimal profile for N = 3. The parameters B and x determine the other transition rates in the following way. The parameters related to the energy barriers are set to χ₁ = χ₂ = 1 and χ₃ = 10^{−B}. The rate of change of the protocol is set to γ = 10^{−x}. Hence, for large B and x, the product Cǫ² reaches its minimal value for N = 3,

Cǫ² ≃ 1.33651.   (17)

This externally driven clock can be compared to an optimal clock driven by a fixed thermodynamic force A with the same number of states N = 3. The product Cǫ² for the optimal clock driven by a fixed affinity A saturates the inequality (6), i.e., for N = 3 this optimal clock follows the relation Cǫ² = (A/3) coth(A/6), which is an increasing function of the affinity. Close to equilibrium, A → 0, this product reaches the minimal value Cǫ² = 2. Hence, a clock driven by a fixed thermodynamic force cannot have a better tradeoff between cost and precision than the externally driven clock inside the region limited by the line A → 0 in Fig. 5. Increasing the affinity A leads to a larger region in which the externally driven clock has a smaller product Cǫ².

FIG. 5. Product Cǫ² for a clock driven by an external protocol. The parameters of the clock are set to χ₁ = χ₂ = 1, χ₃ = 10^{−B}, γ = 10^{−x}, E₁ = 0, E₂ = −1.21938, and E₃ = −1.43550. Below the lines, the product Cǫ² is smaller than (A/3) coth(A/6), which is the optimal value of this product for a clock driven by a fixed affinity A and N = 3.
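The minimal value Cǫ² ≃ 1.33651 for N = 3 quoted above can be checked with the limiting expressions for J, D, and σ; the sketch below (illustrative, not part of the paper) evaluates the product at the quoted optimal energies and verifies that nearby profiles do worse.

```python
import math

def product_from_profile(E, gamma=1.0):
    """C*eps^2 = 2*D*sigma/J^2 from the limiting expressions (fast internal
    transitions, infinite barrier between states N and 1); E[0] is the top state."""
    N = len(E)
    w = [math.exp(-e) for e in E]
    Z = sum(w)
    P1 = w[0] / Z
    J = gamma * (1 - N * P1) / N
    D = gamma * (1 + N * (N - 2) * P1) / (2 * N**2)
    # entropy production: protocol-jump work, with E[-1] playing the role of E_0 = E_N
    sigma = gamma * sum((E[a - 1] - E[a]) * w[a] for a in range(N)) / Z
    return 2 * D * sigma / J**2

E_opt = [0.0, -1.21938, -1.43550]        # quoted optimal profile for N = 3
best = product_from_profile(E_opt)
print(best)                               # ~1.33651

# small perturbations of the profile increase the product
for shift in (0.05, -0.05):
    for a in (1, 2):
        E = list(E_opt); E[a] += shift
        assert product_from_profile(E) > best
```

The evaluated product agrees with the quoted minimum to the precision of the listed energies.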

IV. DISCUSSION AND CONCLUSION
We have shown that a Brownian clock driven by an external protocol can achieve small uncertainty in a dissipation-less manner. This result constitutes a fundamental difference between systems driven by a fixed thermodynamic force and systems driven by an external protocol. For the first case, small uncertainty does have a fundamental cost associated with it, which is determined by the thermodynamic uncertainty relation from [43].
More realistic models related to biochemical oscillations do not typically have a simple space of states like the ring geometry considered in this paper. However, this feature does not represent a limitation of our fundamental bounds. First, the thermodynamic uncertainty relation Cǫ² ≥ 2 is not limited to the ring geometry but is valid even for multicyclic networks of states [43,45]. Second, we have shown that it is possible to reach Cǫ² → 0 with a specific model, which is sufficient to prove that systems driven by an external periodic protocol can, in principle, achieve high precision with vanishingly small dissipation.
The main features of the protocol that achieves high precision in a dissipation-less manner are internal transitions much faster than changes in the external protocol, a large number of states, and a large energy barrier that effectively blocks transitions between one pair of states. This third property does not allow for cycle completions without a change in the external protocol. It remains to be seen whether further classes of protocols that also lead to Cǫ² → 0 exist. In particular, a quite different externally driven system, known as a hidden pump, that leads to a finite current with an arbitrarily low entropy production has been proposed in [72]. It would be worthwhile to verify whether such hidden pumps can also be used to build a clock that reaches a finite precision with arbitrarily low dissipation.
The theoretical framework for systems driven by a protocol that changes at stochastic times considered here was crucial to obtain our main result. Within this theory the system and external protocol together form a bipartite Markov process, and quantities like the diffusion coefficient can be calculated with standard methods for steady states. This represents a major advantage over standard deterministic protocols that lead to a periodic steady state, for which a similar method to calculate the diffusion coefficient is not available.
It is possible to consider a stochastic protocol that also has reversed jumps. In this case, the entropy production associated with generating the external protocol is finite. This well-defined quantity can then be taken into account in a way consistent with thermodynamics [63]. If one chooses to also consider the entropy production due to the changes in the external protocol as part of the thermodynamic cost, then the thermodynamic uncertainty relation from Sec. II is again valid. This result follows from the fact that the uncertainty relation from [44] is valid for any Markov process, including the full bipartite process of system and protocol together. From a physical perspective, this observation is not surprising. If we also take the cost of generating the stochastic protocol into account, then our full bipartite process is a thermodynamic system driven by a fixed force, which obeys the thermodynamic uncertainty relation. For example, this cost of the external protocol would be of interest if the external protocol is driven by some chemical reaction [73]. However, if the protocol is directed by some truly external process, e.g., day-light changes that influence a circadian clock or an external field applied to a system, then the entropic cost of the external protocol is irrelevant, independent of whether the protocol is deterministic or stochastic. It is in this case that our definition of cost for a system driven by an external protocol is meaningful.
Finally, the experimental confirmation of both the thermodynamic uncertainty relation for systems driven by a fixed thermodynamic force and the limit of high precision in the output with small dissipation for a system driven by an external periodic protocol remains an open challenge. Promising candidates for the experimental realization of a Brownian clock are single molecules, colloidal particles, and small electronic systems.
Appendix A: External protocols that change at stochastic times

In this appendix, we consider a theoretical framework for systems driven by periodic protocols that change at stochastic times.

Two state model
As a simple example of a periodic steady state we consider a two-state system. The "lower" level has energy 0, while the "upper" level has a time-dependent periodic energy

E(t) = E(t + τ),   (A1)

where τ ≡ 2π/ω is the period. The transition rates fulfill the detailed balance relation k₊(t)/k₋(t) = e^{−E(t)}. The master equation reads

(d/dt) R(t) = k₊(t)[1 − R(t)] − k₋(t) R(t),   (A2)

where R(t) is the probability that the level with energy E(t) is occupied. With the particular choice k₊(t) = k₋(t)⁻¹ = e^{−E(t)/2} and the initial condition R(0) = 0, the solution of this equation reads

R(t) = ∫₀ᵗ dt′ k₊(t′) exp{−∫_{t′}^{t} ds [k₊(s) + k₋(s)]}.   (A3)

This solution has the property that, for large t, the system reaches a periodic steady state independent of initial conditions that fulfills the relation R_PS(t) = R_PS(t + τ). The function R_PS(t) in a period τ obtained from Eq. (A3) is shown in Fig. 6.

Instead of an energy that changes continuously and deterministically with time, we now consider discontinuous changes that take place at random times, as shown in Fig. 7. In particular, the transition rates for changes in the state of the system are now written as k^n_±, where n plays a role similar to t in Eq. (A1). The detailed balance condition for jumps changing the state of the system reads k^n_+/k^n_− = e^{−E^n}. The period τ is partitioned into L pieces, leading to E^n = E(t = nτ/L). The energy E^n can change to E^{n+1} with jumps that take place with a rate γ, where for n = L − 1 the jump is to E⁰. The reversed transition leading to an energy change from E^{n+1} to E^n is not allowed. The external protocol and the system together form a bipartite Markov process that has 2 × L states (see Fig. 7). Furthermore, the external protocol alone is a unicyclic Markov process with irreversible transitions. To match the protocol in Eq. (A1), the rate γ is set to γ = L/τ. The full Markov process of system and protocol together reaches a stationary state, with the joint probability that the protocol is in state n and the system is in a generic state i denoted by P^n_i.
The marginal probability of the state of the protocol is P^n ≡ Σ_i P^n_i. For the present case P^n = 1/L. Comparing the periodic steady state with the stationary state, the quantity analogous to the probability R_PS(t) is the conditional probability P(u|n) ≡ P^n_u/P^n, where u denotes the state with energy E^n. This conditional probability is compared to R_PS(t) in Fig. 6. Clearly, for larger L the conditional probability of the steady state tends to the probability in the periodic steady state. More generally, in Appendix B we prove that for any periodic steady state it is possible to construct a steady state of a bipartite process with a stationary probability that converges to the probability of the periodic steady state in the limit L → ∞.
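The construction above can be checked directly by building the 2 × L bipartite generator and relaxing it to its stationary state. The sketch below is illustrative and not from the paper; in particular the waveform E(t) = cos(ωt) is an assumption, since the explicit form of Eq. (A1) is not reproduced here. It verifies that the protocol marginal is exactly uniform, P^n = 1/L, and that for slow driving the conditional P(u|n) tracks the quasi-static equilibrium occupation.

```python
import math

def stationary_two_state(L=32, tau=50.0, E0=1.0, iters=10000, dt=0.02):
    """Stationary state of the bipartite chain: system s in {0,1} (1 = upper level
    with energy E^n), protocol n in {0..L-1} jumping n -> n+1 at rate gamma = L/tau."""
    gamma = L / tau
    E = [E0 * math.cos(2 * math.pi * n / L) for n in range(L)]
    kp = [math.exp(-e / 2) for e in E]   # 0 -> 1, detailed balance kp/km = e^{-E}
    km = [math.exp(+e / 2) for e in E]   # 1 -> 0
    P = [[1.0 / (2 * L)] * L for _ in range(2)]
    for _ in range(iters):               # Euler relaxation; fixed point solves W P = 0
        Q = [[0.0] * L for _ in range(2)]
        for n in range(L):
            m = (n - 1) % L
            flip = kp[n] * P[0][n] - km[n] * P[1][n]
            Q[0][n] = P[0][n] + dt * (-flip + gamma * (P[0][m] - P[0][n]))
            Q[1][n] = P[1][n] + dt * (+flip + gamma * (P[1][m] - P[1][n]))
        P = Q
    return P, E

P, E = stationary_two_state()
L = len(E)
marg = [P[0][n] + P[1][n] for n in range(L)]          # P^n, uniform 1/L
cond = [P[1][n] / marg[n] for n in range(L)]          # P(u|n)
quasi = [1 / (1 + math.exp(E[n])) for n in range(L)]  # equilibrium occupation
print(max(abs(m - 1 / L) for m in marg))
print(max(abs(c - q) for c, q in zip(cond, quasi)))
```

The uniform marginal is exact because the protocol jumps at a state-independent rate; the small residual between P(u|n) and the quasi-static occupation is the lag that produces the dissipation discussed next.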
For both protocols the system is out of equilibrium due to the time variation of the energy levels. For the periodic steady state, the average rate of work done on the system is

ẇ_PS = (1/τ) ∫₀^τ dt R_PS(t) Ė(t).   (A4)

The integrand is just the probability of being in the upper state with energy E(t) multiplied by the rate of energy change Ė(t). The expression for the rate of work done on the system for the model with stochastic jumps in the protocol is

ẇ = γ Σ_{n=0}^{L−1} P^n P(u|n) (E^{n+1} − E^n).   (A5)

The sum over n corresponds to the integral over t in Eq. (A4), P^n = 1/L is the average fraction of time that the protocol spends in state n during a period, P(u|n) is equivalent to R_PS(t), and E^{n+1} − E^n is related to Ė(t) in Eq. (A4).
In Fig. 8 we compare ẇ_PS with ẇ. For large L, they become the same, which is a consequence of the convergence of the corresponding probabilities shown in Fig. 6. Even if for smaller L the quantitative discrepancy between ẇ_PS and ẇ is noticeable, the qualitative behavior is still similar, i.e., in all cases the rate of work done on the system is an increasing function of ω.

General theory
We now consider the general case of an arbitrary network of states beyond the ring geometry of the models in the main text, similar to the framework from [63]. The system and the external protocol together form a Markov process with states labeled by the variables i = 1, 2, …, N for the state of the system and n = 0, 1, …, L − 1 for the state of the external protocol. This full Markov process is bipartite, i.e., a transition changing both variables is not allowed [37]. A state of the system i with the external protocol in state n has free energy E_i^n. The transition rates for a change in the state of the system fulfill the generalized detailed balance relation [30]

k_{ij}^n / k_{ji}^n = exp(E_i^n − E_j^n + A^n d_{ij}), (A6)

where A^n is a thermodynamic force or affinity and d_{ij} is a generalized distance. For example, if the transition from i to j is related to a chemical reaction, then A^n is the chemical potential difference driving the reaction and d_{ij} is the number of molecules consumed in the reaction. A jump changing the external protocol from (i, n) to (i, n+1) takes place with rate γ_n, while the reversed jump is not allowed. The master equation for the full bipartite process then reads

d/dt' P_i^n(t') = Σ_j [P_j^n(t') k_{ji}^n − P_i^n(t') k_{ij}^n] + γ_{n−1} P_i^{n−1}(t') − γ_n P_i^n(t'), (A7)

where P_i^n(t') is the probability that the system is in state i and the external protocol in state n at time t'. We use the variable t' in this master equation in order to stress the difference with the variable t used for the periodic steady state. In the following we consider only the stationary distribution, which is simply denoted P_i^n. The entropy production, which characterizes the rate of dissipated heat in an isothermal system, is defined as

σ ≡ (1/2) Σ_n Σ_{ij} (P_i^n k_{ij}^n − P_j^n k_{ji}^n) ln[(P_i^n k_{ij}^n)/(P_j^n k_{ji}^n)] ≥ 0. (A8)

The above inequality is demonstrated in [37]. This entropy production does not include jumps that lead to a change in the external protocol. The mathematical expression for the entropy production of the full Markov process also contains a contribution that comes from these jumps.
This contribution is related to the entropy production due to the external protocol [63] (see also [73]). As usual for thermodynamic systems driven by an external protocol, we do not take this contribution, which is irrelevant for the second law in Eq. (A8), into account. The first law reads

ẇ = Ė + q̇, (A9)

where ẇ is the rate of work done on the system, Ė is the rate of increase of the internal energy, and q̇ is the rate of dissipated heat. Since k_B T = 1, the rate of dissipated heat is q̇ = σ. In the stationary state

Ė = (d/dt') Σ_{i,n} E_i^n P_i^n(t') = 0, (A10)

which, with Eq. (A7), leads to the equation

Σ_n Σ_{ij} P_i^n k_{ij}^n (E_j^n − E_i^n) + Σ_n γ_n Σ_i P_i^n (E_i^{n+1} − E_i^n) = 0. (A11)

In the stationary state the first law then reads ẇ = q̇. Using Eq. (A11) we can rewrite the entropy production (A8) in the form

σ = (1/2) Σ_n P^n Σ_{ij} J_{ij}^n A^n d_{ij} + Σ_n γ_n Σ_i P_i^n (E_i^{n+1} − E_i^n), (A12)

where J_{ij}^n ≡ P(i|n) k_{ij}^n − P(j|n) k_{ji}^n is a probability current. The second term on the right-hand side of this equation is the work done by the external variation of the protocol. The first term is the work related to the affinity A^n; this term would be present even if the protocol were constant in time. For the model considered in Sec. III of the main text only the second term is present.
We now compare expression (A12) with the expression for the entropy production of a standard periodic steady state. The master equation for the periodic steady state is

d/dt R_i(t) = Σ_j [R_j(t) k_{ji}(t) − R_i(t) k_{ij}(t)], (A13)

where R_i(t) is the probability of the system being in state i at time t. The generalized detailed balance relation (A6) in this case reads

k_{ij}(t)/k_{ji}(t) = exp[E_i(t) − E_j(t) + A(t) d_{ij}], (A14)

where the time-dependent quantities have a period τ. We assume that for large t Eq. (A13) reaches a periodic steady state with the property R_i^PS(t) = R_i^PS(t + τ). From the average energy

E(t) ≡ Σ_i R_i^PS(t) E_i(t), (A15)

that is also periodic, i.e., E(t) = E(t + τ), we obtain

(1/τ) ∫_0^τ dt { Σ_{ij} R_i^PS(t) k_{ij}(t) [E_j(t) − E_i(t)] + Σ_i R_i^PS(t) Ė_i(t) } = 0. (A16)

This equation is equivalent to Eq. (A11). The standard entropy production rate from stochastic thermodynamics [30] for this periodic steady state is

σ_PS = (1/2τ) ∫_0^τ dt Σ_{ij} J_{ij}(t) ln[(R_i^PS(t) k_{ij}(t))/(R_j^PS(t) k_{ji}(t))], (A17)

where J_{ij}(t) ≡ R_i^PS(t) k_{ij}(t) − R_j^PS(t) k_{ji}(t). This expression is analogous to the entropy production (A12).
The problem of determining a periodic steady state probability analytically is typically complicated, whereas finding the probability distribution of a steady state in the case of stochastic changes in the external protocol can be much easier. This framework should therefore also be useful for analyzing systems driven by a deterministic external protocol, whenever the qualitative behavior is preserved under a discretized stochastic protocol.

Diffusion coefficient
A main advantage of the stochastic protocols considered here is that we can determine the diffusion coefficient defined in Eq. (3). For a general model defined by the master equation (A7), we calculate the diffusion coefficient associated with an elementary current between states a and b: the random variable X in Eq. (3) increases by one if there is a jump from a to b and decreases by one if there is a jump from b to a.
This random variable is a standard probability current of a steady state; therefore, the method from Koza [74] (see also [43,44]) can be used to calculate the current and diffusion coefficient in the following way. The N-dimensional matrix L^n(z), where z is a real variable, is defined as

[L^n(z)]_{ij} = k_{ji}^n e^{z(δ_{ja}δ_{ib} − δ_{jb}δ_{ia})} for i ≠ j, [L^n(z)]_{ii} = −Σ_{j≠i} k_{ij}^n, (A19)

i.e., the generator of the system at fixed protocol state n, with the element for the jump a → b multiplied by e^z and the element for the reversed jump multiplied by e^{−z}. The modified generator [74,75] associated with the current X is a matrix with dimension N × L given in block form by

[L(z)]_{n'n} = δ_{n',n} [L^n(z) − Γ_n] + δ_{n',n+1} Γ_n, (A20)

with n + 1 taken modulo L, where Γ_n is the identity matrix with dimension N multiplied by γ_n. As explained in [43,44], we can obtain J and D, defined in Eqs. (1) and (3), respectively, from the coefficients C_m(z) of the characteristic polynomial associated with L(z), which are defined through the relation

det[L(z) − λ 1] = Σ_m C_m(z) λ^m. (A21)

The current and diffusion coefficient are given by [74]

J = −C_0'/C_1 (A22)

and

D = −[C_0'' + 2(C_1' + C_2 J) J]/(2 C_1), (A23)

where the lack of dependence on z indicates evaluation of the function at z = 0 and the primes denote derivatives with respect to z.
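As a concrete illustration of Eqs. (A22) and (A23), outside the clock model of the main text, the sketch below applies Koza's recipe to a uniform biased ring (forward rate p, backward rate q; an illustrative example), counting jumps across a single bond. The coefficients C_m(z) come from the characteristic polynomial of the tilted generator, and the z-derivatives are taken by finite differences:

```python
import numpy as np

def tilted_generator(N, p, q, z):
    """Tilted (modified) generator for a uniform ring with forward rate p and
    backward rate q; the counting field z sits on the bond 0 -> 1.
    N == 1 is the single-state (pure Poisson jump) case; otherwise use N >= 3
    so that the counted bond is a distinct pair of states."""
    if N == 1:
        return np.array([[p * (np.exp(z) - 1.0) + q * (np.exp(-z) - 1.0)]])
    L = np.zeros((N, N))
    for i in range(N):
        L[(i + 1) % N, i] = p          # forward jump i -> i+1
        L[(i - 1) % N, i] = q          # backward jump i -> i-1
    L -= np.diag(L.sum(axis=0))        # escape rates on the diagonal
    L[1, 0] *= np.exp(z)               # counted jump 0 -> 1
    L[0, 1] *= np.exp(-z)              # counted jump 1 -> 0
    return L

def current_and_diffusion(N, p, q, h=1e-4):
    """J and D from Koza's formulas, with C_m(z) taken as the coefficients of
    det(lambda*1 - L(z)); the global sign relative to det(L(z) - lambda*1)
    cancels in the ratios. z-derivatives are central finite differences."""
    C = lambda z: np.real(np.poly(tilted_generator(N, p, q, z)))[::-1]
    c0, cp, cm = C(0.0), C(h), C(-h)
    dC = (cp - cm) / (2.0 * h)           # C_m'(0)
    d2C = (cp - 2.0 * c0 + cm) / h**2    # C_m''(0)
    J = -dC[0] / c0[1]
    C2 = c0[2] if N > 1 else 0.0
    D = -(d2C[0] + 2.0 * (dC[1] + C2 * J) * J) / (2.0 * c0[1])
    return J, D

# single state: Poisson jumps with J = p - q and D = (p + q)/2
print(current_and_diffusion(1, 2.0, 1.0))
# ring with N = 4: the per-bond stationary current is (p - q)/N
print(current_and_diffusion(4, 2.0, 1.0))
```

For the single-state case the results reduce to the known Poisson values, which provides a quick consistency check of the sign conventions.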
Appendix B: Proof of the equivalence between periodic steady state and steady state of a bipartite process

In this appendix we prove that for any given periodic steady state it is possible to construct a bipartite process that has a stationary distribution corresponding to the distribution of the periodic steady state.
We consider a periodic steady state following the master equation (A13), which can be written in the form

d/dt R(t) = M(t) R(t), (B1)

where the stochastic matrix M(t) has period τ, i.e., M(t) = M(t + τ), and R(t) is the probability vector with N states. The periodic steady state is denoted R^PS(t).
The period τ is discretized into L small intervals, so that in each time interval the transition rates can be taken as time-independent. In the nth time interval the system then follows the master equation with time-independent transition rates

d/dt R(t) = M_n R(t), (B2)

where M_n ≡ M(nτ/L) and R_n ≡ R^PS(nτ/L). The formal solution of this equation is

R_n^{(f)} = e^{M_n ε} R_n^{(i)}, (B3)

where ε ≡ τ/L and the superscript i (f) denotes the initial (final) distribution of the system in the time interval [nτ/L, (n+1)τ/L]. Using the relation

R_{n+1}^{(i)} = R_n^{(f)}, (B4)

we rewrite Eq. (B3) for n + 1 as

e^{−M_{n+1} ε} R_{n+1}^{(f)} = R_n^{(f)}, (B5)

where we have multiplied the equation by exp(−M_{n+1} ε).
Expanding to first order in ε we obtain

(1 − M_{n+1} ε) R_{n+1}^{(f)} = R_n^{(f)}. (B6)

We now construct a bipartite process with a steady state corresponding to the periodic steady state R^PS(t). The Markov process including both the system and the external protocol has N × L states, which is the dimension of the stationary distribution vector P. The stochastic matrix that fulfills the relation LP = 0 can be written in the block form

[L]_{n'n} = δ_{n',n} (L_n − Γ) + δ_{n',n+1} Γ, (B7)

where Γ is the identity matrix with dimension N multiplied by γ, and L_n is the matrix in Eq. (A19) with z = 0 and γ_n = γ. From (A20), the stationary master equation can be written as

(L_{n+1} − Γ) P^{n+1} + Γ P^n = 0, (B8)

where P^n is a vector that contains the N states of the system for the protocol in state n. This equation is valid for n = 0, 1, …, L − 1, where if n = L − 1 then n + 1 = 0. Eq. (B8) implies

P^n = L̃_{n+1} P^{n+1}, (B9)

where L̃_n ≡ 1 − L_n γ^{−1}. Hence, P^n is the eigenvector of L̃_{n+1} L̃_{n+2} … L̃_{L−1} L̃_0 … L̃_n associated with the eigenvalue 1. Comparing (B6) with (B9), we obtain that the choices L_n = M_n and γ = ε^{−1} = L/τ lead to P^n ∝ R_n^{(f)}. These two quantities are not exactly the same due to a different normalization, i.e., Σ_i P_i^n = 1/L. Therefore, the steady state of the stochastic matrix (A20) in the limit L → ∞, with γ = L/τ and L_n = M(nτ/L), is equivalent to the periodic steady state from Eq. (B1).
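The stationary relation (B9) behind this construction holds exactly and can be verified directly. The sketch below builds the bipartite generator from randomly chosen two-state generators M_n (an illustrative choice), computes its stationary state, and checks that each protocol block satisfies P^{n−1} = (1 − M_n/γ) P^n:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, gamma = 2, 5, 3.0

# random two-state generators M_n (columns sum to zero)
Ms = []
for n in range(L):
    k01, k10 = rng.uniform(0.5, 2.0, 2)   # rates 0 -> 1 and 1 -> 0
    Ms.append(np.array([[-k01, k10], [k01, -k10]]))

# full bipartite generator: blocks (M_n - gamma*1) on the diagonal,
# gamma*1 coupling protocol state n to n+1 (mod L)
W = np.zeros((N * L, N * L))
for n in range(L):
    W[n*N:(n+1)*N, n*N:(n+1)*N] = Ms[n] - gamma * np.eye(N)
    m = (n + 1) % L
    W[m*N:(m+1)*N, n*N:(n+1)*N] += gamma * np.eye(N)

# stationary state: null vector of W
w, v = np.linalg.eig(W)
P = np.real(v[:, np.argmin(np.abs(w))])
P /= P.sum()
Pn = [P[n*N:(n+1)*N] for n in range(L)]

# exact stationary relation: P^{n-1} = (1 - M_n/gamma) P^n for every n
err = max(np.max(np.abs(Pn[n-1] - (np.eye(N) - Ms[n]/gamma) @ Pn[n]))
          for n in range(L))
print(err)
```

The residual is at the level of the eigensolver's accuracy, confirming that the relation is an exact property of the stationary state rather than a first-order approximation.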
Appendix C: Details for the model from Sec. III

In this appendix we define more precisely the model from Sec. III, with changes in the energies and energy barriers that take place at random times, and explain how we calculate J, D, and σ.
The clock and external protocol together form a bipartite Markov process. The model is defined by the stochastic matrix for this bipartite process. This matrix is of the form (A20) with

(L_n)_{i+1,i} = χ_{i−n} ε_{i−n},
(L_n)_{i−1,i} = χ_{i−1−n} ε_{i−n},
(L_n)_{i,i} = −(χ_{i−n} + χ_{i−1−n}) ε_{i−n},

where the other elements of the matrix are 0. For this model the number of protocol states is L = N. Due to the symmetry of the external protocol, the fluctuating current between states N and 1, which we label X, is the same as the fluctuating current between any pair of states i and i + 1. The random variable X is then the sum of all these currents divided by N. The statistics of this random variable can be described by a matrix of dimension N instead of the full matrix for the bipartite process, which has dimension N². This reduction can be demonstrated in the following way. Instead of changing the transition rates between states after a jump with rate γ, we consider that the states rotate in the anticlockwise direction. In this case the label α = 1 refers to the state that has transition rate ε_1 χ_1 for a jump to state α = 2 and transition rate ε_1 χ_N for a jump to state α = N. This label α, which marks the state with certain transition rates, is different from the label i, which marks a position in the ring. The sum of the currents between the states with the labels i is the same as the sum of currents between states with label α. In terms of the label α, a jump with rate γ, which is related to a change in the external protocol, implies a jump from α to α − 1. Therefore, instead of a stochastic matrix of the form (A20), the time evolution of the probability vector of the states α = 1, 2, …, N is described by the stochastic matrix L* defined by the following non-zero elements:

L*_{α+1,α} = χ_α ε_α,
L*_{α−1,α} = χ_{α−1} ε_α + γ,
L*_{α,α} = −(χ_α + χ_{α−1}) ε_α − γ.
With this reduction, the system and protocol together are described by a matrix with dimension N. The modified generator (A20) is likewise reduced to an N-dimensional matrix L*(z). Its non-zero elements include L*(z)_{α+1,α} = χ_α ε_α e^{z/N}. The current J and the diffusion coefficient D are given by relations (A22) and (A23), respectively, with the coefficients C_m(z) given by

det[L*(z) − λ 1] = Σ_m C_m(z) λ^m.

The entropy production σ is calculated with relation (A12).
We now consider the model in the limit χ_N = 0, χ_1 = χ_2 = … = χ_{N−1} = χ, and χ ≫ γ. The condition χ ≫ γ means that the system reaches an equilibrium distribution P*_α before a jump with rate γ takes place. This equilibrium distribution is given by

P*_α = e^{−E_α}/Z,

where Z = Σ_{α=1}^N e^{−E_α}. With this distribution we can calculate the entropy production rate σ given in Eq. (14), using Eq. (A12).
The total current X is the sum of the currents between all states divided by N. Denoting the current between α and α + 1 by X_{α,α+1}, we obtain X = (X_{1,2} + X_{2,3} + … + X_{N,1})/N. The fluctuating current through the links associated with the rate γ that leave state α is denoted by Y_α. The average value of this unidirectional current is γP*_α. From Kirchhoff's law for the fluctuating currents, the random variable X can be viewed as a biased random walk that takes a step of size 1/N forward if the protocol changes while the clock is in a state α ≠ 1, and a step of size (N − 1)/N backward if the protocol changes while the clock is in state α = 1. The master equation for this random walk reads

d/dt P(X, t) = k_+^eff P(X − 1/N, t) + k_−^eff P(X + (N−1)/N, t) − (k_+^eff + k_−^eff) P(X, t),

where k_+^eff ≡ γ Σ_{α=2}^N P*_α and k_−^eff ≡ γ P*_1. Using the Laplace transform

P̃(z, t) ≡ Σ_X P(X, t) e^{Xz}, (C8)

we obtain

d/dt P̃(z, t) = [k_+^eff e^{z/N} + k_−^eff e^{−(N−1)z/N} − (k_+^eff + k_−^eff)] P̃(z, t).
The solution of this differential equation with boundary condition P̃(0, t) = 1 is P̃(z, t) = e^{ψ(z)t}, with

ψ(z) = k_+^eff (e^{z/N} − 1) + k_−^eff (e^{−(N−1)z/N} − 1).

From this solution we obtain

J = ψ'(0) = [k_+^eff − (N − 1) k_−^eff]/N

and

D = ψ''(0)/2 = [k_+^eff + (N − 1)² k_−^eff]/(2N²),

which are the expressions given in Eqs. (12) and (13) of the main text, respectively.
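The cumulants of X follow from ψ(z) by differentiation at z = 0. A minimal numeric sketch, with an illustrative choice of energies E_α = α/N (not fixed by the text), compares finite-difference derivatives of ψ with the closed forms obtained by differentiating by hand:

```python
import numpy as np

# Effective rates of the biased random walk in the limit chi >> gamma:
# k_eff_+ = gamma * sum_{alpha >= 2} P*_alpha, k_eff_- = gamma * P*_1,
# with the equilibrium distribution P*_alpha = exp(-E_alpha)/Z.
N = 6
E_alpha = np.arange(1, N + 1) / N      # illustrative energies, not from the text
P_eq = np.exp(-E_alpha) / np.exp(-E_alpha).sum()
gamma = 1.0
k_plus = gamma * P_eq[1:].sum()
k_minus = gamma * P_eq[0]

# psi(z) = k+ (e^{z/N} - 1) + k- (e^{-(N-1)z/N} - 1)
psi = lambda z: (k_plus * (np.exp(z / N) - 1.0)
                 + k_minus * (np.exp(-(N - 1) * z / N) - 1.0))

h = 1e-4
J = (psi(h) - psi(-h)) / (2.0 * h)                   # J = psi'(0)
D = (psi(h) - 2.0 * psi(0.0) + psi(-h)) / (2.0 * h**2)  # D = psi''(0)/2

# closed forms from differentiating psi by hand
J_exact = (k_plus - (N - 1) * k_minus) / N
D_exact = (k_plus + (N - 1)**2 * k_minus) / (2.0 * N**2)
print(J, J_exact)
print(D, D_exact)
```

The finite-difference values agree with the closed forms to high accuracy, so the scaled cumulant generating function ψ(z) indeed carries both the current and the dispersion of the clock.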