Universal bounds on the performance of information-thermodynamic engine

We investigate fundamental limits on the performance of information processing systems from the perspective of information thermodynamics. We first extend the thermodynamic uncertainty relation (TUR) to a subsystem. Specifically, for a bipartite composite system consisting of a system of interest X and an auxiliary system Y, we show that the relative fluctuation of an arbitrary current for X is lower bounded not only by the entropy production associated with X but also by the information flow between X and Y. As a direct consequence of this bipartite TUR, we prove universal trade-off relations between the output power and efficiency of an information-thermodynamic engine in the fast relaxation limit of the auxiliary system. In this limit, we further show that the Gallavotti-Cohen symmetry is satisfied even in the presence of information flow. This symmetry leads to universal relations between the fluctuations of information flow and entropy production in the linear response regime. We illustrate these results with simple examples: coupled quantum dots and coupled linear overdamped Langevin equations. Interestingly, in the latter case, the equality of the bipartite TUR is achieved even far from equilibrium, which is a very different property from the standard TUR. Our results will be applicable to a wide range of systems, including biological systems, and thus provide insight into the design principles of biological systems.


I. INTRODUCTION
Biological systems maintain their functions by acquiring or using information about fluctuating environments. For example, E. coli regulates its flagellar motors by processing information about external ligand concentrations to adapt to the environment [1][2][3][4][5][6]. A gene network senses a sudden increase in protein concentration and then suppresses mRNA transcription to maintain protein levels [7][8][9][10]. While these systems rely on a negative feedback mechanism that suppresses intrinsic noise by using information about fluctuating environments, some molecular machines can even convert information into output work. One such example is FoF1-ATP synthase, where the F1 motor converts energy and information provided by the Fo motor into the synthesis of ATP molecules [11][12][13]. To elucidate the general design principles underlying biological systems, it is necessary to investigate the fundamental limits on the performance of such information processing systems.
Stochastic thermodynamics has revealed various fundamental limits on the thermodynamic behavior of such fluctuating mesoscale systems [14][15][16][17]. For example, the thermodynamic uncertainty relation (TUR) states that suppressing the relative fluctuation of an arbitrary time-integrated current Ĵ_T necessarily involves a thermodynamic cost [18][19][20]:

Var[Ĵ_T]/⟨Ĵ_T⟩² ≥ 2/∆σ,   (1)

where ⟨Ĵ_T⟩ and Var[Ĵ_T] denote the mean and variance of Ĵ_T, and ∆σ denotes the total entropy production up to time T. While the validity of the TUR in its original form (1) is limited to steady-state currents in Markov jump processes and overdamped Langevin dynamics, TUR-type inequalities have revealed that there is a fundamental limit to the performance of a thermodynamic heat engine: a heat engine with finite output power cannot achieve the Carnot efficiency as long as the fluctuation of the output power is finite [21][22][23][24]. Furthermore, for a stationary cross-transport system with input and output currents, which can be regarded as fuel (positive entropy) and load (negative entropy), respectively, the input-output fluctuation inequalities hold in the linear response regime [25,26]. These inequalities state that the fluctuation of the output current is smaller than that of the input current, while the relative fluctuation of the output current is larger than that of the input current.
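As a minimal numerical illustration of Ineq. (1), the following sketch estimates the relative fluctuation of the net displacement of a biased random walk, whose entropy production is known analytically. All rate values and the observation time are illustrative choices, not taken from the examples in this paper; the net number of steps is sampled exactly as a difference of two independent Poisson counts.

```python
import numpy as np

rng = np.random.default_rng(0)

# biased random walk on Z: forward rate kp, backward rate km; over [0, T] the
# net number of steps J is the difference of two independent Poisson counts
kp, km, T, n = 3.0, 1.0, 20.0, 100_000
J = rng.poisson(kp * T, n) - rng.poisson(km * T, n)

sigma = (kp - km) * np.log(kp / km) * T      # total entropy production over [0, T]
q = J.var() / J.mean() ** 2                  # relative fluctuation of the current
print(q, 2.0 / sigma)                        # TUR: q >= 2 / sigma
```

Here the exact values are ⟨Ĵ_T⟩ = (kp − km)T and Var[Ĵ_T] = (kp + km)T, so the bound is satisfied but not saturated away from equilibrium.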
In this paper, we aim to find similar fundamental limits for information processing systems, in particular for an information-thermodynamic engine that converts information into output work. Information thermodynamics, which is essentially stochastic thermodynamics for subsystems, is a thermodynamic framework for information flow between two interacting subsystems, either autonomous or nonautonomous [27]. This theory reveals that the information flow between subsystems can significantly affect the thermodynamic constraints of each subsystem. While information thermodynamics has its origins in the thought experiment of Maxwell's demon, it has recently been applied to information processing at the cellular level in biological systems [5,[28][29][30][31] and even to fully developed fluid turbulence [32].
Here, we consider a composite system consisting of a system of interest X and an auxiliary system Y , described by continuous-time Markov jump processes or diffusion processes with only even variables and parameters under time reversal. Our main results can be summarized as follows.
(i) Bipartite TUR.-We first extend the standard TUR (1) to a subsystem. For an arbitrary time-integrated current Ĵ_T for X with arbitrary observation time T, we prove that [cf. Ineq. (26)]

Var[Ĵ_T]/⟨Ĵ_T⟩² ≥ 2(1 + δ_J)²/(∆S^X_tot − ∆I^X).

Here, ∆S^X_tot denotes the entropy production associated with X, and ∆I^X denotes the time-integrated information flow, which is the amount of information exchanged with the auxiliary system Y. The additional term δ_J reflects the contribution of the interaction with Y. This bipartite TUR states that the relative fluctuation of the current for the subsystem X is lower bounded not only by the entropy production associated with X, but also by the information transfer between X and Y. In particular, if Y evolves much faster than X, we can further show that δ_J → 0 in the steady state. In this case, the bipartite TUR gives a tighter bound than the standard TUR (1). While here we derive the bipartite TUR in the steady state, this relation is valid even for systems under arbitrary time-dependent driving from arbitrary initial states (see Appendix A).
(ii) Trade-off relations.-As a consequence of the bipartite TUR, we show that there are fundamental limits on the performance of an information-thermodynamic engine.
When the system of interest X acts as a steady-state information-thermodynamic engine, its performance can be quantified, e.g., by the negative entropy production rate |Ṡ^X_env| in the environment and the information-thermodynamic efficiency η^X_S := |Ṡ^X_env|/|İ^X|, which quantifies how efficiently the engine converts information into negative entropy production. In the typical case where the auxiliary system Y evolves much faster than the engine X, we prove universal trade-off relations between |Ṡ^X_env| and η^X_S [cf. Ineqs. (74) and (76)]:

|Ṡ^X_env| ≤ D_S (1 − η^X_S)/η^X_S

and

|Ṡ^X_env| ≤ D_I η^X_S (1 − η^X_S),

where D_S and D_I denote the fluctuations of the stochastic medium entropy production and the time-integrated stochastic information flow, respectively. These inequalities state that an information engine with a finite negative entropy production rate cannot achieve η^X_S = 1 as long as the fluctuations D_S and D_I are finite. In order to achieve a finite negative entropy production rate with η^X_S = 1, the fluctuations D_S and D_I must diverge.
(iii) Gallavotti-Cohen symmetry.-In the fast relaxation limit of the auxiliary system Y, we prove that the Gallavotti-Cohen symmetry is satisfied even in the presence of information flow.

(iv) Input-output fluctuation inequalities.-As a direct consequence of the Gallavotti-Cohen symmetry, we show that the input-output fluctuation inequalities hold even in the case where information flow is regarded as an input or output current. That is, in the linear response regime where X acts as a steady-state information-thermodynamic engine, we prove that [cf. Ineqs. (122) and (123)]

D_S ≤ D_I   and   D_S/(Ṡ^X_env)² ≥ D_I/(İ^X)².

These inequalities state that the fluctuation of the output current (negative entropy production) is smaller than that of the input current (information flow), while the relative fluctuation of the output current is larger than that of the input current.
We illustrate these results with two simple examples: coupled quantum dots and coupled linear overdamped Langevin equations. Interestingly, the latter provides an example where the equality of the bipartite TUR is achieved even far from equilibrium. This is in contrast to the standard TUR (1), where the equality is guaranteed only in the near-equilibrium limit [20,36]. While the bipartite TUR is generally not valid for systems with broken time-reversal symmetry, such as underdamped Langevin dynamics [37][38][39][40][41][42][43], many relevant biological systems are described by continuous-time Markov jump processes or diffusion processes with only even variables and parameters under time reversal. Therefore, these results will be applicable to a wide range of systems, including biological systems, and thus shed new light on our understanding of the design principles of biological systems.

This paper is organized as follows. In Sec. II, we introduce important information-theoretic quantities and briefly review the framework of information thermodynamics in a general setup. In Sec. III A, we describe the bipartite TUR, which is the first main result of this paper. The detailed derivation of the bipartite TUR is presented in Sec. III B. In Sec. III C, we show that the bipartite TUR reduces to the form of the standard TUR if the auxiliary system evolves much faster than the system of interest. We discuss the equality condition of the bipartite TUR in Sec. III D. In Sec. IV, we show that the bipartite TUR gives universal bounds on the performance of an information-thermodynamic engine, which is the second main result of this paper. In Sec. V A, as the third main result of this paper, we prove that the Gallavotti-Cohen symmetry holds even in the presence of information flow in the fast relaxation limit of the auxiliary system.
As a corollary of this symmetry, in Sec. V B we show that the input-output fluctuation inequalities remain valid even when information flow is regarded as an input or output current. In Sec. VI, we illustrate our results with two examples. In Sec. VII, we conclude this paper with some remarks.

II. SETUP
We consider a composite system that consists of two subsystems, X (system of interest) and Y (auxiliary system), whose time evolution is described by Markov jump processes or overdamped Langevin equations. Let x_t and y_t be the states of X and Y at time t, respectively. We assume that the system satisfies the bipartite property: the transition probability p(x_{t+dt}, y_{t+dt}|x_t, y_t) satisfies

p(x_{t+dt}, y_{t+dt}|x_t, y_t) = p(x_{t+dt}|x_t, y_t) p(y_{t+dt}|x_t, y_t)   (8)

for dt → 0+. This property means that X and Y do not jump simultaneously in the case of Markov jump processes and that the noises acting on X and Y are uncorrelated in the case of diffusion processes. In this paper, we focus mainly on Markov jump processes, while the extension to the overdamped Langevin case is straightforward. Let p_t(x, y) be the probability of state (x, y) at time t. The time evolution of p_t(x, y) is described by the master equation

∂_t p_t(x, y) = Σ_{x'} [w^y_{xx'} p_t(x', y) − w^y_{x'x} p_t(x, y)] + Σ_{y'} [w^{yy'}_x p_t(x, y') − w^{y'y}_x p_t(x, y)],

where w^y_{xx'} denotes the transition rate from state (x', y) to (x, y), and w^{yy'}_x denotes the transition rate from (x, y') to (x, y). The rate matrix is assumed to be irreducible to ensure the uniqueness of the stationary distribution p_ss(x, y). Note that X and Y can affect each other's transition rates, although they cannot jump simultaneously.
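The bipartite structure of the master equation can be sketched numerically as follows. The four-state model and all rate values below are hypothetical, chosen only to illustrate how the generator is assembled from X-jumps and Y-jumps and how the stationary distribution p_ss is obtained.

```python
import numpy as np

# hypothetical 4-state bipartite model: states (x, y) with x, y in {0, 1}
# wx[(x, x', y)] = w^y_{xx'} (X jumps x' -> x at fixed y)
# wy[(y, y', x)] = w^{yy'}_x (Y jumps y' -> y at fixed x)
wx = {(1, 0, 0): 1.0, (0, 1, 0): 2.0, (1, 0, 1): 3.0, (0, 1, 1): 0.5}
wy = {(1, 0, 0): 2.0, (0, 1, 0): 1.0, (1, 0, 1): 0.4, (0, 1, 1): 1.5}

idx = lambda x, y: 2 * x + y
W = np.zeros((4, 4))
for (x, xp, y), r in wx.items():       # X-jumps: y is unchanged
    W[idx(x, y), idx(xp, y)] += r
for (y, yp, x), r in wy.items():       # Y-jumps: x is unchanged
    W[idx(x, y), idx(x, yp)] += r
np.fill_diagonal(W, -W.sum(axis=0))    # each column sums to zero

# stationary distribution: normalized null vector of the generator
vals, vecs = np.linalg.eig(W)
p_ss = np.real(vecs[:, np.argmin(np.abs(vals))])
p_ss /= p_ss.sum()
print(p_ss, W @ p_ss)                  # W p_ss = 0 up to round-off
```

By construction no simultaneous jump of X and Y appears in the generator, which is exactly the bipartite property (8).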

A. Information-theoretic quantities
We introduce important information-theoretic quantities. The strength of the correlation between X and Y can be quantified by the mutual information [44]:

I[X : Y] := Σ_{x,y} p_t(x, y) ln [p_t(x, y)/(p^X_t(x) p^Y_t(y))],

where p^X_t(x) = Σ_y p_t(x, y) and p^Y_t(y) = Σ_x p_t(x, y) denote the marginal distributions for X and Y, respectively. The mutual information is nonnegative and is equal to zero if and only if X and Y are independent.
The directional information from one variable to the other can be quantified by information flow [45], which is defined as

İ^X := Σ_{x>x', y} [w^y_{xx'} p_t(x', y) − w^y_{x'x} p_t(x, y)] ln [p_t(y|x)/p_t(y|x')],
İ^Y := Σ_{y>y', x} [w^{yy'}_x p_t(x, y') − w^{y'y}_x p_t(x, y)] ln [p_t(x|y)/p_t(x|y')],

where p_t(y|x) = p_t(x, y)/p^X_t(x) and p_t(x|y) = p_t(x, y)/p^Y_t(y) denote the conditional probabilities. From the bipartite property, the sum of İ^X and İ^Y gives the time derivative of the mutual information [46]:

d_t I[X : Y] = İ^X + İ^Y.

In the steady state, İ^X and İ^Y have opposite signs because d_t I[X : Y] = 0. If İ^X > 0, the correlation between X and Y increases due to transitions in X; in other words, X gains information about Y. If İ^X < 0, in contrast, X_{t+dt} is less correlated with Y_t than X_t is. In this case, the information is destroyed or exploited by X.
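Continuing with a hypothetical four-state bipartite model (all rate values are illustrative), the sketch below evaluates İ^X and İ^Y from the stationary distribution and checks that they sum to d_t I[X : Y] = 0 in the steady state.

```python
import numpy as np

# illustrative bipartite 4-state model: states (x, y), x, y in {0, 1}
wx = {(1, 0, 0): 1.0, (0, 1, 0): 2.0, (1, 0, 1): 3.0, (0, 1, 1): 0.5}  # w^y_{xx'}
wy = {(1, 0, 0): 2.0, (0, 1, 0): 1.0, (1, 0, 1): 0.4, (0, 1, 1): 1.5}  # w^{yy'}_x

idx = lambda x, y: 2 * x + y
W = np.zeros((4, 4))
for (x, xp, y), r in wx.items():
    W[idx(x, y), idx(xp, y)] += r
for (y, yp, x), r in wy.items():
    W[idx(x, y), idx(x, yp)] += r
np.fill_diagonal(W, -W.sum(axis=0))

vals, vecs = np.linalg.eig(W)
p = np.real(vecs[:, np.argmin(np.abs(vals))])
p /= p.sum()
P = p.reshape(2, 2)                       # P[x, y] = p_ss(x, y)
px, py = P.sum(axis=1), P.sum(axis=0)     # marginals

# information flows in the steady state (sums over edges x' -> x and y' -> y)
iX = sum((wx[(1, 0, y)] * P[0, y] - wx[(0, 1, y)] * P[1, y])
         * np.log((P[1, y] / px[1]) / (P[0, y] / px[0])) for y in (0, 1))
iY = sum((wy[(1, 0, x)] * P[x, 0] - wy[(0, 1, x)] * P[x, 1])
         * np.log((P[x, 1] / py[1]) / (P[x, 0] / py[0])) for x in (0, 1))
print(iX, iY)   # d_t I = iX + iY = 0 in the steady state
```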

B. Second law of information thermodynamics
Here, we formulate the second law of information thermodynamics. To this end, we impose the local detailed balance condition to ensure that the system is thermodynamically consistent [15,17,47]. Then, the entropy change in the environment due to transitions in X and Y is identified as

Ṡ^X_env := Σ_{x>x', y} [w^y_{xx'} p_t(x', y) − w^y_{x'x} p_t(x, y)] ln (w^y_{xx'}/w^y_{x'x}),
Ṡ^Y_env := Σ_{y>y', x} [w^{yy'}_x p_t(x, y') − w^{y'y}_x p_t(x, y)] ln (w^{yy'}_x/w^{y'y}_x).

The average rate of change of the system entropy is identified as the time derivative of the system's Shannon entropy S[X, Y] := −Σ_{x,y} p_t(x, y) ln p_t(x, y). Then, the total entropy production rate σ̇ is given by

σ̇ := d_t S[X, Y] + Ṡ^X_env + Ṡ^Y_env ≥ 0,

where the nonnegativity is proved by using ln a ≤ a − 1 (a ≥ 0). The nonnegativity of the total entropy production rate is a manifestation of the second law of thermodynamics and is sometimes called the second law of stochastic thermodynamics [17]. From the bipartite property, σ̇ can be decomposed into two parts:

σ̇ = σ̇^X + σ̇^Y.

Here, σ̇^X and σ̇^Y denote the partial entropy production rates due to transitions in X and Y, respectively [48]:

σ̇^X := Ṡ^X_tot − İ^X,   (18)
σ̇^Y := Ṡ^Y_tot − İ^Y,   (19)

where Ṡ^Z_tot (Z = X, Y) can be interpreted as the entropy production rate associated with Z, which consists of the time derivative of Z's Shannon entropy S[Z] := −Σ_z p^Z_t(z) ln p^Z_t(z) and the entropy change in the environment due to transitions in Z:

Ṡ^Z_tot := d_t S[Z] + Ṡ^Z_env.

From the definition of the partial entropy production rates (18) and (19), it immediately follows that σ̇^X and σ̇^Y are individually nonnegative:

Ṡ^X_tot − İ^X ≥ 0,   Ṡ^Y_tot − İ^Y ≥ 0.   (22)

This is the so-called second law of information thermodynamics (see also Fig. 1). The important point here is that Ṡ^X_tot (Ṡ^Y_tot) can be negative if İ^X (İ^Y) is negative. This apparent violation of the second law of thermodynamics caused by information flow lies at the heart of the mechanism of Maxwell's demon [27]. In this case, X acts as an information-thermodynamic engine that converts information into output work or negative entropy production.
FIG. 1. In the case where X (blue) acts as a steady-state information-thermodynamic engine, X converts information (İ^X < 0) into negative entropy production (Ṡ^X_tot < 0). Although only a single thermal bath is depicted here, our results hold even in the presence of multiple thermal baths.
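The decomposition σ̇ = σ̇^X + σ̇^Y and the individual nonnegativity of the partial entropy production rates can be checked numerically. The sketch below uses a hypothetical four-state bipartite model (illustrative rates), computes Ṡ^X_env, Ṡ^Y_env, İ^X, and İ^Y in the steady state, and verifies that σ̇^X and σ̇^Y are nonnegative and sum to the total entropy production rate.

```python
import numpy as np

wx = {(1, 0, 0): 1.0, (0, 1, 0): 2.0, (1, 0, 1): 3.0, (0, 1, 1): 0.5}  # w^y_{xx'}
wy = {(1, 0, 0): 2.0, (0, 1, 0): 1.0, (1, 0, 1): 0.4, (0, 1, 1): 1.5}  # w^{yy'}_x

idx = lambda x, y: 2 * x + y
W = np.zeros((4, 4))
for (x, xp, y), r in wx.items():
    W[idx(x, y), idx(xp, y)] += r
for (y, yp, x), r in wy.items():
    W[idx(x, y), idx(x, yp)] += r
np.fill_diagonal(W, -W.sum(axis=0))
vals, vecs = np.linalg.eig(W)
p = np.real(vecs[:, np.argmin(np.abs(vals))])
p /= p.sum()
P = p.reshape(2, 2)
px, py = P.sum(axis=1), P.sum(axis=0)

# net fluxes, environmental entropy flows, and information flows (steady state)
Jx = {y: wx[(1, 0, y)] * P[0, y] - wx[(0, 1, y)] * P[1, y] for y in (0, 1)}
Jy = {x: wy[(1, 0, x)] * P[x, 0] - wy[(0, 1, x)] * P[x, 1] for x in (0, 1)}
sX_env = sum(Jx[y] * np.log(wx[(1, 0, y)] / wx[(0, 1, y)]) for y in (0, 1))
sY_env = sum(Jy[x] * np.log(wy[(1, 0, x)] / wy[(0, 1, x)]) for x in (0, 1))
iX = sum(Jx[y] * np.log((P[1, y] / px[1]) / (P[0, y] / px[0])) for y in (0, 1))
iY = sum(Jy[x] * np.log((P[x, 1] / py[1]) / (P[x, 0] / py[0])) for x in (0, 1))

# partial entropy production rates (steady state: S^Z_tot = S^Z_env)
sigma_x = sX_env - iX
sigma_y = sY_env - iY
sigma_tot = (sum(Jx[y] * np.log(wx[(1, 0, y)] * P[0, y] / (wx[(0, 1, y)] * P[1, y])) for y in (0, 1))
             + sum(Jy[x] * np.log(wy[(1, 0, x)] * P[x, 0] / (wy[(0, 1, x)] * P[x, 1])) for x in (0, 1)))
print(sigma_x, sigma_y, sigma_tot)
```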

III. THERMODYNAMIC UNCERTAINTY RELATION FOR BIPARTITE SYSTEMS
In this section, we explain our first main result, which can be regarded as an extension of the standard TUR (1) to bipartite systems. Hereafter, we assume that the whole system is in the steady state. See Appendices A and B for time-dependent cases.
A. Bipartite TUR

Let Ĵ_T be a generalized time-integrated current for the subsystem X with an arbitrary antisymmetric weight d^y_{xx'} = −d^y_{x'x}:

Ĵ_T := Σ_{x>x', y} d^y_{xx'} (n̂^y_{xx'} − n̂^y_{x'x}),

where n̂^y_{xx'} denotes the number of transitions from the state (x', y) to (x, y) during the time interval [0, T]. For example, the choice d^y_{xx'} = ln(w^y_{xx'}/w^y_{x'x}) yields the stochastic entropy change in the environment due to transitions in X during [0, T]. In the steady state, the time-integrated stochastic information flow can also be expressed in this form [see (81)]. We remark that when there are multiple environments with the label ν = 1, 2, ..., the weight d^y_{xx'} can depend on ν. The ensemble average of Ĵ_T reads

⟨Ĵ_T⟩ = T Σ_{x>x', y} d^y_{xx'} [w^y_{xx'} p_ss(x', y) − w^y_{x'x} p_ss(x, y)].

Our first main result is the following inequality:

Var[Ĵ_T]/⟨Ĵ_T⟩² ≥ 2(1 + δ_J)²/(∆S^X_tot − ∆I^X)   (26)

for arbitrary observation time T, where ∆S^X_tot := ∫_0^T dt Ṡ^X_tot and ∆I^X := ∫_0^T dt İ^X denote the entropy production and time-integrated information flow associated with X, respectively. Here, δ_J := ⟨Ĵ_T⟩_q/⟨Ĵ_T⟩, and ⟨Ĵ_T⟩_q is defined as

⟨Ĵ_T⟩_q := ∫_0^T dt Σ_{x>x', y} d^y_{xx'} [w^y_{xx'} q_t(x', y) − w^y_{x'x} q_t(x, y)],

where q_t is determined by a linear evolution equation with the initial condition q_0 = 0. This additional current term ⟨Ĵ_T⟩_q reflects the contribution of the interaction with Y. Indeed, if the transition rate for X and the weight are independent of Y, i.e., w^y_{xx'} = w_{xx'} and d^y_{xx'} = d_{xx'}, we can prove that ⟨Ĵ_T⟩_q = 0. For the derivation, see Appendix B, where we consider the bipartite TUR in a more general case, applicable to a transient state. If, in addition, X and Y are independent and thus ∆I^X = 0, then the bipartite TUR (26) reduces to the standard TUR (1): the relative fluctuation of the current for X is lower bounded by the entropy production associated with X. In the general case where X and Y are correlated, the bipartite TUR (26) states that the relative fluctuation of the current for the subsystem X is lower bounded not only by the entropy production associated with X, but also by the information transfer between X and Y.
There are two important cases where the additional current term ⟨Ĵ_T⟩_q can be ignored and thus δ_J → 0. The first case is the short time limit T → 0. Since q_0 = 0, it immediately follows that ⟨Ĵ_T⟩_q → 0 as T → 0, while the information flow ∆I^X remains finite in general. The second case is the long time limit T → ∞ with a separation of time scales between X and Y, i.e., when the observation time is long and Y evolves much faster than X. The proof of δ_J → 0 in this case is described in detail in Sec. III C. Since this case is typically realized when X acts as an information-thermodynamic engine, we will mainly focus on it in the following sections.

B. Derivation of the bipartite TUR
Here, we prove the bipartite TUR (26) by using the generalized Cramér-Rao inequality [20,36,44,49]. We remark that the bipartite TUR can also be proved more directly from the master equation or Langevin equation [50] (see also Appendix A for the direct derivation for overdamped Langevin equations). We consider an auxiliary dynamics parameterized by θ with the initial condition p^θ_0 = p_ss. Here, w^y_{xx'}(θ) denotes the parameterized transition rate for X, with w^y_{xx'}(0) = w^y_{xx'}, while the rates for Y are left unchanged. Let P_θ(Γ) be the parameterized path probability for the trajectory Γ = {x_t, y_t}^T_{t=0}, where n̂^{yy'}_x denotes the number of transitions from the state (x, y') to (x, y) during the time interval [0, T], and τ^y_x denotes the empirical dwell time in state (x, y), defined as the total amount of time spent in state (x, y) along the trajectory Γ:

τ^y_x := ∫_0^T dt δ_{x x_t} δ_{y y_t},

where δ_{x x_t} (δ_{y y_t}) denotes the Kronecker delta, which is 1 if x = x_t (y = y_t) and zero otherwise. We denote by I(θ) := −⟨∂²_θ ln P_θ(Γ)⟩_θ the corresponding Fisher information [44], where ⟨·⟩_θ denotes the average with respect to P_θ. The generalized Cramér-Rao inequality then yields [20,36,44,49]

Var_θ[Ĵ_T] ≥ (∂_θ⟨Ĵ_T⟩_θ)²/I(θ).

Here, I(0) can be bounded as

I(0) ≤ (∆S^X_tot − ∆I^X)/2,

where we have used the inequality 2(√a − √b)² ≤ (a − b) ln(a/b) for a, b > 0. By substituting this bound into the Cramér-Rao inequality at θ = 0, we find that

Var[Ĵ_T] ≥ 2(∂_θ⟨Ĵ_T⟩_θ|_{θ=0})²/(∆S^X_tot − ∆I^X).

Then, ∂_θ⟨Ĵ_T⟩_θ|_{θ=0} can be calculated as

∂_θ⟨Ĵ_T⟩_θ|_{θ=0} = ⟨Ĵ_T⟩ + ⟨Ĵ_T⟩_q.

We thus arrive at the inequality (26).

C. Fast relaxation limit of Y
Here, we show that ⟨Ĵ_T⟩_q ≪ ⟨Ĵ_T⟩ in the long time limit T → ∞ if Y relaxes much faster than X. Let τ_X and τ_Y be the time scales of X and Y, respectively. We assume a separation of time scales, τ_Y ≪ τ_X, i.e., the auxiliary system Y evolves much faster than the system of interest X. This situation is typically realized when Y acts as a Maxwell's demon, i.e., when Y measures the state of X and performs feedback control [45]. We introduce a dimensionless slow time τ := t/τ_X and a small parameter ε := τ_Y/τ_X ≪ 1. Correspondingly, we nondimensionalize the transition rates as w̃^y_{xx'} := τ_X w^y_{xx'} and w̃^{yy'}_x := τ_Y w^{yy'}_x. We first take the long time limit T → ∞, i.e., T ≫ τ_X, and assume that q_t(x, y) reaches a stationary solution q_ss(x, y). Then, p_ss satisfies

0 = Σ_{x'} [w̃^y_{xx'} p_ss(x', y) − w̃^y_{x'x} p_ss(x, y)] + ε^{−1} Σ_{y'} [w̃^{yy'}_x p_ss(x, y') − w̃^{y'y}_x p_ss(x, y)],   (38)

and q_ss satisfies the corresponding stationary equation (39). We now assume that p_ss and q_ss have asymptotic expansions in terms of the asymptotic sequence {ε^n}^∞_{n=0} as ε → 0:

p_ss = Σ_n ε^n p^(n)_ss,   q_ss = Σ_n ε^n q^(n)_ss.

Here, we impose the normalization conditions Σ_{x,y} p_ss(x, y) = 1 and Σ_x q^X_ss(x) = 0, where we have introduced q^X_ss(x) := Σ_y q_ss(x, y).
By substituting these expansions into (38) and (39), we find that the leading order yields

0 = Σ_{y'} [w̃^{yy'}_x p^(0)_ss(x, y') − w̃^{y'y}_x p^(0)_ss(x, y)],

and similarly for q^(0)_ss. Let π_ss(y|x) be the normalized zero-eigenvector that satisfies Σ_{y'} [w̃^{yy'}_x π_ss(y'|x) − w̃^{y'y}_x π_ss(y|x)] = 0. Due to the irreducibility of the rate matrix, this normalized zero-eigenvector is unique for each x. Then, from the normalization condition, p^(0)_ss(x, y) = p^X_ss(x) π_ss(y|x) and q^(0)_ss(x, y) = q^X_ss(x) π_ss(y|x), where p^X_ss(x) := Σ_y p^(0)_ss(x, y). The subleading order of (38) and (39) yields Eqs. (48) and (49). Note that (48) and (49) are linear equations for p^(1)_ss and q^(1)_ss with the matrix w̃^{yy'}_x, which has the left zero-eigenvector 1 because Σ_y w̃^{yy'}_x = 0 (with the diagonal convention w̃^{y'y'}_x := −Σ_{y≠y'} w̃^{yy'}_x). This property guarantees that the solutions p^(1)_ss and q^(1)_ss exist only under the solvability conditions

0 = Σ_{x'} [w̄_{xx'} p^X_ss(x') − w̄_{x'x} p^X_ss(x)],   0 = Σ_{x'} [w̄_{xx'} q^X_ss(x') − w̄_{x'x} q^X_ss(x)],

which correspond to (48) and (49) summed over y, respectively. Here, we have introduced the effective transition rate w̄_{xx'} := Σ_y w̃^y_{xx'} π_ss(y|x'). Then, from the Perron-Frobenius theorem, q^X_ss can be expressed as q^X_ss = N p^X_ss, where N denotes a constant. Because q^X_ss satisfies the normalization condition Σ_x q^X_ss(x) = 0, we obtain N = 0. Thus, in the fast relaxation limit of Y, we have q^(0)_ss = 0. Therefore, the additional current term ⟨Ĵ_T⟩_q appearing in the bipartite TUR (26) is much smaller than ⟨Ĵ_T⟩ in the long time limit T → ∞:

δ_J = ⟨Ĵ_T⟩_q/⟨Ĵ_T⟩ = O(ε).

To summarize, in the fast relaxation limit of Y, the bipartite TUR (26) reduces to a form similar to the standard TUR (1) in the long time limit:

J²/D_J ≤ Ṡ^X_tot − İ^X,   (55)

where D_J := lim_{T→∞} Var[Ĵ_T]/2T denotes the fluctuation of Ĵ_T, and J := lim_{T→∞} ⟨Ĵ_T⟩/T denotes the mean current. Note that (55) gives a tighter lower bound on the fluctuation of a current than the standard TUR, because Ṡ^X_tot − İ^X is smaller than or equal to the total entropy production rate σ̇. If the partial entropy production of Y is zero, then (55) can also be obtained from the standard TUR.
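The emergence of the effective transition rates w̄_{xx'} = Σ_y w^y_{xx'} π_ss(y|x') can be illustrated numerically: for a hypothetical four-state model (illustrative rates), the exact stationary X marginal approaches the stationary distribution of the effective two-state dynamics as ε = τ_Y/τ_X → 0.

```python
import numpy as np

# hypothetical bipartite 4-state model; Y-rates are sped up by a factor 1/eps
wx = {(1, 0, 0): 1.0, (0, 1, 0): 2.0, (1, 0, 1): 3.0, (0, 1, 1): 0.5}  # (x, x', y)
wy = {(1, 0, 0): 2.0, (0, 1, 0): 1.0, (1, 0, 1): 0.4, (0, 1, 1): 1.5}  # (y, y', x)
idx = lambda x, y: 2 * x + y

def x_marginal(eps):
    """Exact stationary X marginal for time-scale ratio eps = tau_Y / tau_X."""
    W = np.zeros((4, 4))
    for (x, xp, y), r in wx.items():
        W[idx(x, y), idx(xp, y)] += r
    for (y, yp, x), r in wy.items():
        W[idx(x, y), idx(x, yp)] += r / eps
    np.fill_diagonal(W, -W.sum(axis=0))
    vals, vecs = np.linalg.eig(W)
    p = np.real(vecs[:, np.argmin(np.abs(vals))])
    p /= p.sum()
    return np.array([p[idx(0, 0)] + p[idx(0, 1)], p[idx(1, 0)] + p[idx(1, 1)]])

def pi_ss(x):
    """Stationary distribution of the fast Y dynamics at fixed x."""
    up, down = wy[(1, 0, x)], wy[(0, 1, x)]
    return np.array([down, up]) / (up + down)

# effective rates wbar_{xx'} = sum_y w^y_{xx'} pi_ss(y|x') and the resulting marginal
wbar10 = sum(wx[(1, 0, y)] * pi_ss(0)[y] for y in (0, 1))
wbar01 = sum(wx[(0, 1, y)] * pi_ss(1)[y] for y in (0, 1))
px_eff = np.array([wbar01, wbar10]) / (wbar01 + wbar10)

err = {eps: np.abs(x_marginal(eps) - px_eff).max() for eps in (0.1, 1e-3)}
print(err)   # the discrepancy shrinks as eps -> 0
```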

D. Equality condition
The equality of the bipartite TUR in the fast relaxation limit of Y (55) can be achieved even far from equilibrium. This nontrivial fact will be shown with a simple example in Sec. VI B. This property stands in stark contrast to the standard TUR, where the equality is guaranteed only in the near-equilibrium limit [20,36]. Here, before presenting the example in Sec. VI B, we discuss a possible scenario for achieving the equality of the bipartite TUR (26) in a somewhat abstract manner.
We first consider the equality condition of the generalized Cramér-Rao inequality (34) at θ = 0. Because the generalized Cramér-Rao inequality is based on the Cauchy-Schwarz inequality, the equality condition is satisfied if and only if the following relation holds [36]:

∂_θ ln P_θ(Γ)|_{θ=0} = C (Ĵ_T − ⟨Ĵ_T⟩),   (56)

where C is a constant. The right-hand side of (56) is the deviation of the current from its mean, while the left-hand side can be evaluated by decomposing the current Ĵ_T into two parts [50]:

Ĵ_T = Ĵ^I_T + Ĵ^II_T.

Note that ⟨Ĵ^I_T⟩ = 0 and ⟨Ĵ^II_T⟩ = ⟨Ĵ_T⟩. By comparing the two sides of (56), we expect that d^y_{xx'} = C Z^y_{xx'} is the optimal choice, where Z^y_{xx'} denotes the weight appearing in ∂_θ ln P_θ(Γ)|_{θ=0}. However, due to the presence of Ĵ^II_T − ⟨Ĵ_T⟩, the equality condition is generally not satisfied.
In the standard TUR, it is known that the equality can be achieved by including a generalized time-integrated static observable in addition to the current Ĵ_T [20,50,51]. Here, we consider the following generalized time-integrated static observable:

Ô_T := ∫_0^T dt ρ^{y_t}_{x_t},

where ρ^y_x is an arbitrary weight that depends on the state (x, y). Even for the observable Ĵ_T + Ô_T instead of Ĵ_T, by following the same argument described in Sec. III B, we can derive a bipartite-correlation TUR [Ineqs. (62) and (63)], where ⟨Ô_T⟩_q is defined analogously to ⟨Ĵ_T⟩_q. In this case, the equality condition of the inequality (62) is given by Eq. (64). Then, we find that the choice (66) and (67) satisfies this condition, where we have used the fact that Ô_T = −Ĵ^II_T for this choice. Thus, the equality of (62) is achieved for this choice of Ĵ_T and Ô_T. However, even if we can achieve the equality of (62), the equality condition of the second inequality (63) is not satisfied in general. Still, in the overdamped Langevin case, the inequality I(0) ≤ (∆S^X_tot − ∆I^X)/2 becomes an equality (see [52] for the detailed discussion). Therefore, the equality of the bipartite-correlation TUR is achieved in the overdamped Langevin case even far from equilibrium for the choice (66) and (67):

Var[Ĵ_T + Ô_T]/(⟨Ĵ_T⟩ + ⟨Ĵ_T⟩_q + ⟨Ô_T⟩_q)² = 2/(∆S^X_tot − ∆I^X),   (69)

where we have used ⟨Ô_T⟩_q = −⟨Ĵ_T⟩_q. In the long time limit T → ∞, (69) can be rewritten as

J²/D^I_J = Ṡ^X_tot − İ^X,   (70)

where D^I_J := lim_{T→∞} Var[Ĵ^I_T]/2T denotes the fluctuation of Ĵ^I_T. Note that D^I_J is generally different from D_J, and thus the equality (70) does not generally correspond to the equality of the bipartite TUR in the fast relaxation limit of Y (55). To put it another way, if D^I_J = D_J in the fast relaxation limit, then the equality of (55) is achieved.
Therefore, in the fast relaxation limit of Y, the difference between D^I_J and D_J is given by

D_J − D^I_J = D^II_J,

where D^II_J := lim_{T→∞} Var[Ĵ^II_T]/2T. Thus, D^II_J = 0 is a sufficient condition for (55) to hold with equality. In Sec. VI B, we give an example that satisfies this sufficient condition.

IV. TRADE-OFF RELATIONS
In this section, we focus on the regime where Y evolves much faster than X and show that the bipartite TUR in this regime (55) provides trade-off relations for the performance of information processing systems. In Sec. IV A, we consider the situation where the subsystem X can be regarded as a steady-state information-thermodynamic engine, while the auxiliary system Y plays the role of the memory of Maxwell's demon. From the second law of information thermodynamics (22), this situation corresponds to the case where 0 < −Ṡ^X_env ≤ −İ^X. While this situation may be typical in the fast relaxation regime of Y, we can also consider the case where the slow system X plays the role of a memory and measures the state of the fast system Y, i.e., 0 < İ^X ≤ Ṡ^X_env. Even for this case, we can show that the bipartite TUR (55) provides trade-off relations on the performance of the memory, which will be described in Sec. IV B.
A. Information-thermodynamic engine

Here, we show that the bipartite TUR gives several universal bounds on the performance of information-thermodynamic engines. In this case, both the entropy production and the information flow associated with X are negative and satisfy the relation 0 < −Ṡ^X_env ≤ −İ^X. Then, the performance of an information-thermodynamic engine can be quantified by, e.g., the information-thermodynamic efficiency [45]:

η^X_S := |Ṡ^X_env|/|İ^X|,

which satisfies 0 ≤ η^X_S ≤ 1 as a direct consequence of the second law of information thermodynamics. This efficiency quantifies how efficiently the engine X converts information into negative entropy production. In addition to this information-thermodynamic efficiency, the negative entropy production rate itself is an important indicator characterizing the performance of an information-thermodynamic engine. Here, we show that there is the following trade-off relation between η^X_S and |Ṡ^X_env|:

|Ṡ^X_env| ≤ D_S (1 − η^X_S)/η^X_S,   (74)

where D_S denotes the fluctuation of the stochastic medium entropy production ∆Ŝ^X_env:

D_S := lim_{T→∞} Var[∆Ŝ^X_env]/2T.

This inequality states that an information engine with a finite negative entropy production rate cannot achieve η^X_S = 1 as long as the fluctuation D_S is finite. In order to achieve a finite negative entropy production rate with η^X_S = 1, the fluctuation D_S must diverge. We can also prove a similar trade-off relation where the negative entropy production rate is bounded by the fluctuation of the time-integrated stochastic information flow ∆Î^X instead of D_S:

|Ṡ^X_env| ≤ D_I η^X_S (1 − η^X_S),   (76)

where

D_I := lim_{T→∞} Var[∆Î^X]/2T.

The inequalities (74) and (76) are the second main results of this paper. In Sec. IV A 1, we provide a detailed proof of these inequalities. In Sec. IV A 2, we briefly discuss which of the two inequalities (74) and (76) gives a tighter bound on the negative entropy production rate. In Sec. IV A 3, we derive a trade-off relation in terms of power, i.e., output work produced per unit time, instead of the negative entropy production.
This relation can be regarded as a direct extension of the trade-offs for heat engines [21,22] to information-thermodynamic engines.

Derivation of (74) and (76)

Here, we derive the trade-off relations (74) and (76) by using the bipartite TUR in the fast relaxation limit of Y (55), which can be rewritten as follows:

J²/D_J ≤ Ṡ^X_tot − İ^X.   (78)
Let us choose the stochastic medium entropy production associated with X as the current Ĵ_T in (78):

∆Ŝ^X_env := Σ_{x>x', y} (n̂^y_{xx'} − n̂^y_{x'x}) ln(w^y_{xx'}/w^y_{x'x}),
which satisfies ⟨∆Ŝ^X_env⟩ = ∆S^X_env. Then, noting that Ṡ^X_tot = Ṡ^X_env in the steady state, we immediately obtain the trade-off between entropy production and efficiency (74).
Another type of trade-off relation, (76), can be derived by choosing the time-integrated stochastic information flow as the current Ĵ_T in (78). Here, the instantaneous stochastic information flow is defined as the partial rate of change of the stochastic mutual information I(x_t : y_t) := ln[p_t(x_t, y_t)/(p^X_t(x_t) p^Y_t(y_t))] due to jumps in X, where t_n denotes the time at which X jumps from x_{t_n^−} to x_{t_n^+}, and w̄_{xx'} := Σ_y w^y_{xx'} p_t(y|x') denotes the effective transition rate. In the steady state, the last two terms vanish, so that the time-integrated stochastic information flow reads

∆Î^X = Σ_{x>x', y} (n̂^y_{xx'} − n̂^y_{x'x}) ln[p_ss(y|x)/p_ss(y|x')],   (81)

which satisfies ⟨∆Î^X⟩ = ∆I^X. Substituting Ĵ_T = ∆Î^X into the bipartite TUR (78), we obtain the inequality (76). Note that we can also obtain a trade-off relation between the information flow and efficiency:

|İ^X| ≤ D_I (1 − η^X_S).
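As a numerical sanity check of the bound (78) with Ĵ_T = ∆Ŝ^X_env, the sketch below works directly in the fast relaxation limit with a hypothetical two-state X coupled to a two-state Y through posited conditional probabilities π(y|x); D_S is obtained from the second derivative of the scaled cumulant generating function computed with a tilted generator. The parameters are illustrative and do not place X in the engine regime; the inequality itself holds in either regime.

```python
import numpy as np

# fast relaxation limit: X jumps between 0 and 1 through two channels y = 0, 1
# with rates w^y_{xx'}, weighted by a posited conditional distribution pi(y|x)
w = {0: {(1, 0): 2.0, (0, 1): 0.5},   # channel y = 0: rate for x' -> x
     1: {(1, 0): 0.3, (0, 1): 1.5}}   # channel y = 1
pi = {0: (0.7, 0.3), 1: (0.2, 0.8)}   # pi[x] = (pi(y=0|x), pi(y=1|x))

k10 = sum(w[y][(1, 0)] * pi[0][y] for y in (0, 1))   # effective rate 0 -> 1
k01 = sum(w[y][(0, 1)] * pi[1][y] for y in (0, 1))   # effective rate 1 -> 0
p = (k01 / (k01 + k10), k10 / (k01 + k10))           # steady state of X

# channel-resolved currents, entropy flow rate, and information flow rate
Jy = {y: w[y][(1, 0)] * pi[0][y] * p[0] - w[y][(0, 1)] * pi[1][y] * p[1] for y in (0, 1)}
sdot = sum(Jy[y] * np.log(w[y][(1, 0)] / w[y][(0, 1)]) for y in (0, 1))
idot = sum(Jy[y] * np.log(pi[1][y] / pi[0][y]) for y in (0, 1))

def mu(lam):
    """Scaled cumulant generating function of the entropy current (tilted generator)."""
    W = np.zeros((2, 2))
    for y in (0, 1):
        r = w[y][(1, 0)] / w[y][(0, 1)]              # per-jump entropy factor
        W[1, 0] += w[y][(1, 0)] * pi[0][y] * r**lam
        W[0, 1] += w[y][(0, 1)] * pi[1][y] * r**(-lam)
    W[0, 0], W[1, 1] = -k10, -k01                    # untilted escape rates
    return np.linalg.eigvals(W).real.max()

h = 1e-3
D_S = (mu(h) - 2.0 * mu(0.0) + mu(-h)) / (2.0 * h**2)    # D_S = mu''(0)/2
print(sdot**2 / D_S, "<=", sdot - idot)                  # bipartite TUR, Eq. (78)
```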

Tightness of the bounds
Here, we consider which of the two inequalities (74) and (76) gives a tighter bound on the negative entropy production rate. The difference between the two upper bounds reads

D_S (1 − η^X_S)/η^X_S − D_I η^X_S (1 − η^X_S) = (1 − η^X_S)[D_S − D_I (η^X_S)²]/η^X_S.

Therefore, the bound (74) is tighter than (76) when √(D_S/D_I) < η^X_S ≤ 1, while (76) becomes tighter than (74) when 0 ≤ η^X_S < √(D_S/D_I). Note that D_S/D_I may depend on η^X_S. In the linear response regime with Ṡ^X_env ≤ 0 and İ^X ≤ 0, we can prove the input-output fluctuation inequality D_S ≤ D_I (for the derivation, see Sec. V B). Beyond the linear response regime, however, the input-output fluctuation inequality can be violated, i.e., D_S can become larger than D_I [26].

Trade-off between power and efficiency
While we have focused on the negative entropy production rate to characterize the performance of an information-thermodynamic engine, we can also derive a trade-off relation in terms of power, i.e., output work produced per unit time. To define power, we assume that the transition rates satisfy the local detailed balance condition of the following form [17]:

w^y_{xx'}/w^y_{x'x} = exp[−β(ϵ_{xy} − ϵ_{x'y} − ∆^y_{xx'})],

where β = (k_B T)^{−1} denotes the inverse temperature, ϵ_{xy} denotes the energy of the state (x, y), and ∆^y_{xx'} denotes the energy provided by an external agent during the transition (x', y) → (x, y). Then, the average rate of heat absorbed by X from the environment is identified as

Q̇^X := Σ_{x>x', y} J^y_{xx'} (ϵ_{xy} − ϵ_{x'y} − ∆^y_{xx'}),

where J^y_{xx'} := w^y_{xx'} p_t(x', y) − w^y_{x'x} p_t(x, y) denotes the net transition flux. Similarly, the average rate of work done by the external agent on X is identified as

Ẇ^X := Σ_{x>x', y} J^y_{xx'} ∆^y_{xx'}.

Finally, the average rate of change of the internal energy due to transitions in X reads

Ė^X := Σ_{x>x', y} J^y_{xx'} (ϵ_{xy} − ϵ_{x'y}).

If we regard x as an externally manipulated control parameter driving Y, then Ė^X can also be identified as the power delivered from X to Y [12,53]:

Ẇ^{X→Y} := Ė^X.

Similarly, we can define Ẇ^Y, Q̇^Y, and Ẇ^{Y→X}. Then, the first law of stochastic thermodynamics for each subsystem can be expressed as follows (in an averaged form):

Ė^Z = Q̇^Z + Ẇ^Z   (Z = X, Y).

By using these relations, we can rewrite the second law of information thermodynamics in the steady state as

βẆ^X − βẆ^{X→Y} − İ^X ≥ 0,   βẆ^Y − βẆ^{Y→X} − İ^Y ≥ 0.

Here, we have assumed that both X and Y are each in contact with a thermal bath at temperature T, while the extension to the case of different temperatures is straightforward [13]. Note that Ẇ^{X→Y} = −Ẇ^{Y→X} and İ^X = −İ^Y in the steady state. Therefore, Ẇ^X and Ẇ^Y cannot both be negative. Now, suppose that X operates as an information-thermodynamic engine, i.e., Ẇ^Y > 0 and Ẇ^X < 0. In this case, we can introduce the following efficiency:

η^X_W := β|Ẇ^X|/(βẆ^{Y→X} + İ^Y),

which satisfies 0 ≤ η^X_W ≤ 1, as can be seen from the second law of information thermodynamics. The denominator βẆ^{Y→X} + İ^Y = −βẆ^{X→Y} − İ^X ≥ 0 is called the transduced capacity [11,53], because it constrains the conversion of the input power Ẇ^Y into the output power |Ẇ^X| as βẆ^Y ≥ βẆ^{Y→X} + İ^Y ≥ |βẆ^X|.
The efficiency η^X_W quantifies how efficiently X converts the transduced capacity into the output power |Ẇ^X|. Now we derive a trade-off relation between the output power and the efficiency η^X_W by using the bipartite TUR (78). Let us choose the stochastic output work as the current Ĵ_T in (78):

Ĵ_T = β Ŵ^X_T := β Σ_{x>x', y} ∆^y_{xx'} (n̂^y_{xx'} − n̂^y_{x'x}).

Then, the bipartite TUR gives

β|Ẇ^X| ≤ D_W (1 − η^X_W)/η^X_W,   (95)

where D_W denotes the fluctuation of the output work defined by

D_W := lim_{T→∞} Var[β Ŵ^X_T]/2T.

The inequality (95) states that an information engine with a finite output power cannot achieve η^X_W = 1 as long as the fluctuation D_W is finite.
Although we have assumed that X evolves much slower than Y , there may be a situation where X measures the state of Y , i.e., İ X > 0. Even in this case, we can prove similar trade-off relations concerning the performance of the memory X. We first note that both the entropy production and the information flow associated with X are positive and satisfy the relation 0 < İ X ≤ Ṡ X env . Then, we can introduce the information-thermodynamic efficiency η X I := İ X /Ṡ X env , which satisfies 0 ≤ η X I ≤ 1. In contrast to η X S , this efficiency quantifies how efficiently X gains information about Y relative to the energy dissipation, i.e., the thermodynamic cost. Now we choose the time-integrated stochastic information flow as the current Ĵ T in the bipartite TUR (78). By noting the positivity of Ṡ X env and İ X , the bipartite TUR then gives an inequality stating that a memory with a finite information flow can never attain η X I = 1 as long as D I is finite.
If we choose the stochastic entropy production as the current, Ĵ T = ∆Ŝ X env , then we obtain a similar trade-off relation in which the information flow İ X is bounded by the fluctuation of the stochastic entropy production D S instead of D I .

V. GALLAVOTTI-COHEN SYMMETRY AND INPUT-OUTPUT FLUCTUATION INEQUALITIES
In this section, we prove that the Gallavotti-Cohen symmetry [33][34][35] is satisfied in the fast relaxation limit of Y . As a consequence of this symmetry, we can further show that the input-output fluctuation inequalities hold in the linear response regime even in the presence of an information flow.

A. Gallavotti-Cohen symmetry
Let µ(λ S , λ I ) be the scaled cumulant generating function of the time-integrated currents ∆Ŝ X env and ∆Î X , defined by µ(λ S , λ I ) := lim T →∞ (1/T ) ln⟨e λ S ∆Ŝ X env +λ I ∆Î X ⟩, where λ S and λ I are the counting fields for ∆Ŝ X env and ∆Î X , respectively. In this section, we prove that µ(λ S , λ I ) satisfies the Gallavotti-Cohen symmetry (116) in the fast relaxation limit of Y . To prove this, we first note that µ(λ S , λ I ) can be rewritten in terms of the generating function conditioned to a final state (x, y): G T (x, y) := ∫ d∆S X env d∆I X p T (x, y, ∆S X env , ∆I X ) e λ S ∆S X env +λ I ∆I X , where p T (x, y, ∆S X env , ∆I X ) denotes the joint probability density such that the state of the system at time T is (x, y) and the entropy production and information flow generated up to that time are ∆S X env and ∆I X , respectively. Therefore, the properties of the scaled cumulant generating function µ(λ S , λ I ) are encoded in the time evolution equation of the generating function G T (x, y). This evolution equation can be obtained from the time evolution equation of p T (x, y, ∆S X env , ∆I X ), where we use the dimensionless slow time τ := T /τ X and the dimensionless transition rates w̄ y xx′ := τ X w y xx′ and w̄ yy′ x := τ Y w yy′ x .
Then, we find that the time evolution of G τ (x, y) is described by a tilted dynamics generated by the tilted generators L X λ S ,λ I and L Y λ S ,λ I . We now assume that G τ has an asymptotic expansion in terms of the asymptotic sequence {ϵ n } ∞ n=0 as ϵ → 0, and we impose a normalization condition on the leading term. By substituting this expansion into (105), we find that the leading order, together with the Perron-Frobenius theorem and the normalization condition, fixes the form of G (0) τ . The subleading order of (105) yields the first correction; from the solvability condition for G (1) τ , we obtain the effective dynamics for G X τ (x), generated by the effective tilted generator L X λ S ,λ I . Importantly, this effective tilted generator is mapped to its matrix transpose (denoted by ⊤) under a shift of the counting fields. Because the scaled cumulant generating function is equal to the largest eigenvalue of this effective tilted generator, the Gallavotti-Cohen symmetry (116) follows from this property.
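The mechanism underlying this proof — a tilted generator that is mapped to its transpose under a shift of the counting field, so that its largest eigenvalue is symmetric — can be checked numerically in a minimal single-current setting: a two-level system exchanging particles with two reservoirs, tilted by the medium entropy production. The following sketch is ours; all rates are illustrative assumptions, not parameters taken from the text.

```python
import numpy as np

def tilted_generator(lam, rates):
    """Tilted generator of a two-state Markov chain coupled to two
    reservoirs, counting the medium entropy production per jump."""
    L = np.zeros((2, 2))
    for k01, k10 in rates:            # (rate 0->1, rate 1->0) per reservoir
        s = np.log(k01 / k10)         # entropy flow per 0 -> 1 jump
        L[1, 0] += k01 * np.exp(lam * s)
        L[0, 1] += k10 * np.exp(-lam * s)
    L[0, 0] = -sum(k01 for k01, _ in rates)   # escape rates on the diagonal
    L[1, 1] = -sum(k10 for _, k10 in rates)
    return L

def scgf(lam, rates):
    """Scaled cumulant generating function = largest real eigenvalue."""
    return max(np.linalg.eigvals(tilted_generator(lam, rates)).real)

# two reservoirs with different affinities break detailed balance
rates = [(2.0, 0.5), (0.3, 1.2)]
assert abs(scgf(0.0, rates)) < 1e-12           # mu(0) = 0
for lam in (0.25, 0.7, 1.3):
    assert abs(scgf(lam, rates) - scgf(-1.0 - lam, rates)) < 1e-10
```

One can verify directly that shifting the counting field transposes the generator, L(−1−λ) = L(λ)⊤, which is why the eigenvalues (and hence the scaled cumulant generating function) coincide.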

B. Input-output fluctuation inequalities
In the linear response regime, where the scaled cumulant generating function can be approximated by a quadratic form [17], the Gallavotti-Cohen symmetry (116) constrains this form to depend on only three constants a, b, c. From the convexity of µ(λ S , λ I ), these coefficients satisfy a ≥ 0, c ≥ 0, and ac − b 2 /4 ≥ 0. By noting the relations between (Ṡ X env , İ X ) and the coefficients, the latter are further constrained by the second law of information thermodynamics to satisfy a + b + c ≥ 0.
1. Information-thermodynamic engine

We consider the case of −İ X ≥ Ṡ X env , i.e., c ≥ a, which includes the case where X acts as an information-thermodynamic engine with 0 < −Ṡ X env ≤ −İ X . These relations between the coefficients a, b, c lead to the following input-output fluctuation inequalities: D S ≤ D I and D S /(Ṡ X env ) 2 ≥ D I /(İ X ) 2 . These inequalities state that the fluctuation of the output current (negative entropy production) is smaller than that of the input current (information flow), while the relative fluctuation of the output current is larger than that of the input current.
We can also derive input-output fluctuation inequalities when −İ X ≤ Ṡ X env , i.e., c ≤ a, which includes the case where X plays the role of a memory with 0 < İ X ≤ Ṡ X env . In this case, the information flow İ X corresponds to the output current while the entropy production rate Ṡ X env corresponds to the input current. Obviously, we then have the mirrored relations D I ≤ D S and D I /(İ X ) 2 ≥ D S /(Ṡ X env ) 2 . (125)

VI. EXAMPLES
In this section, we illustrate our results, the trade-offs for information-thermodynamic engines and the input-output fluctuation inequalities, using two simple examples. The first example is coupled quantum dots, which is one of the simplest models of an autonomous Maxwell's demon [45,54]. As a second example, we consider coupled linear overdamped Langevin equations, which ubiquitously appear in biological contexts via the linear noise approximation [5,55-57]. Interestingly, the equality condition of the trade-offs (74) and (76) is satisfied even far from equilibrium in this case.
A. Coupled quantum dots

Model
We consider a system composed of two single-level quantum dots X and Y . Let x ∈ {0, 1} and y ∈ {0, 1} be occupation variables, where x = 1 (x = 0) represents that the site of X is filled (empty), and similarly for y. The energy of X is ϵ X when it is filled with a particle and zero when it is empty. The single particle site of X exchanges particles with two particle reservoirs ν = L, R at temperature T and chemical potential µ ν . We assume that ∆µ := µ L − µ R > 0. Let p t (x, y) be the probability of state (x, y) at time t. The time evolution of p t (x, y) is described by a master equation with x ′ := 1 − x and y ′ := 1 − y. Here, w (ν)y xx′ denotes the time-independent transition rate from x ′ to x induced by the reservoir ν, which satisfies the local detailed balance condition: w (ν)y 10 /w (ν)y 01 = exp(−β(ϵ X − µ ν )). (127) We suppose that the transition rates are proportional to the Fermi distribution function f ν := [exp(β(ϵ X − µ ν )) + 1] −1 , with a positive coupling strength Γ X or Γ̃ X . Below, we focus on the case where Γ̃ X ≪ Γ X . This form of the transition rates implies that the coupling strength of the R (L) reservoir changes from Γ X to Γ̃ X when Y is filled (empty) with a particle. The transition rates associated with Y are parameterized by a coupling strength Γ Y and an error probability ε with 0 ≤ ε ≤ 1.
In this model, the subsystem Y acts as Maxwell's demon when ε is sufficiently small. To understand this point intuitively, let us regard the state of Y as representing the position of a wall inserted between the single site of X and one of the reservoirs. When y = 0 (y = 1), the wall is inserted between the site of X and the L (R) reservoir and suppresses the transitions due to that reservoir by changing the coupling strength from Γ X to Γ̃ X (see Fig. 2). As a result, particles are transferred from the R to the L reservoir against the chemical potential difference.
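This demon mechanism can be made concrete by writing down the four-state master equation directly. The sketch below is ours: the Y rates (Y tracks x at rate Γ Y with error probability ε) and all numerical parameter values are illustrative assumptions. It checks that the demon pumps particles against ∆µ and that, for Γ Y ≫ Γ X , the steady state factorizes as described in the next subsection.

```python
import numpy as np

beta, eps_X = 1.0, 0.0
mu_L, mu_R = 0.5, -0.5            # Delta mu = mu_L - mu_R > 0
G, Gt, GY = 1.0, 0.01, 200.0      # tilde-Gamma_X << Gamma_X << Gamma_Y
err = 0.05                        # error probability epsilon

f = {"L": 1/(np.exp(beta*(eps_X - mu_L)) + 1),   # Fermi factors
     "R": 1/(np.exp(beta*(eps_X - mu_R)) + 1)}

def coupling(nu, y):
    # the wall blocks the L (R) reservoir when y = 0 (y = 1)
    blocked = (nu == "L" and y == 0) or (nu == "R" and y == 1)
    return Gt if blocked else G

idx = {(x, y): 2*x + y for x in (0, 1) for y in (0, 1)}
L = np.zeros((4, 4))              # generator, column = source state
for y in (0, 1):
    for nu in ("L", "R"):
        g = coupling(nu, y)
        L[idx[1, y], idx[0, y]] += g * f[nu]        # particle enters the dot
        L[idx[0, y], idx[1, y]] += g * (1 - f[nu])  # particle leaves the dot
for x in (0, 1):   # assumed demon rates: y tracks x with error err
    L[idx[x, x], idx[x, 1 - x]] += GY * (1 - err)
    L[idx[x, 1 - x], idx[x, x]] += GY * err
np.fill_diagonal(L, -L.sum(axis=0))

w, v = np.linalg.eig(L)           # steady state = null vector of L
p = np.abs(np.real(v[:, np.argmin(np.abs(w))]))
p /= p.sum()

# net particle current from the L reservoir into the dot (negative: pumping R -> L)
J_L = sum(coupling("L", y) * (f["L"]*p[idx[0, y]] - (1 - f["L"])*p[idx[1, y]])
          for y in (0, 1))
assert J_L < 0
for x in (0, 1):   # conditional statistics of y approach (1 - err, err)
    assert abs(p[idx[x, x]]/(p[idx[x, 0]] + p[idx[x, 1]]) - (1 - err)) < 0.02
```

With these parameters the current flows against the chemical potential difference, and π ss (y = x|x) ≈ 1 − ε holds to within O(Γ X /Γ Y ).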

Fast relaxation limit of Y
Hereafter, we focus on the case where Y is faster than X, i.e., Γ Y ≫ Γ X ≫ Γ̃ X . By performing a perturbation expansion following Sec. III C, we can show that p t (x, y) ≃ p X t (x)π ss (y|x) with π ss (y = x|x) = 1 − ε, (133) π ss (y = 1 − x|x) = ε. (134) The effective dynamics for X is then governed by the effective transition rates w (ν) xx′ := Σ y w (ν)y xx′ π ss (y|x ′ ). Thus, in the fast relaxation limit of Y , the system X can be considered as an autonomous system whose coupling strengths to the reservoirs change autonomously. More specifically, when x = 1, the coupling strength of the R reservoir changes from the original strength Γ X to the smaller value Γ̃ X , while that of the L reservoir remains unchanged. In contrast, when x = 0, the coupling strength of the L reservoir becomes small while that of the R reservoir remains at the original strength Γ X . This autonomous control is probabilistic and has the error probability ε.
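The effective rates above can be written down explicitly and used to locate the demon regime. The following sketch is ours, with illustrative parameter values; it computes the net particle current and shows that it changes sign between small ε (demon regime) and ε = 1/2 (no usable information).

```python
import numpy as np

beta, eps_X, mu_L, mu_R = 1.0, 0.0, 0.5, -0.5
G, Gt = 1.0, 0.01                    # Gamma_X and tilde-Gamma_X

f = lambda mu: 1/(np.exp(beta*(eps_X - mu)) + 1)
fL, fR = f(mu_L), f(mu_R)

def current_LR(err):
    """Net particle current J_X from L through the dot, using the effective
    rates w^(nu)_{xx'} = sum_y w^(nu)y_{xx'} pi_ss(y|x')."""
    # entering (x' = 0): y = 0 with prob 1-err, so L is blocked, R is open
    wL_in  = fL * (Gt*(1 - err) + G*err)
    wR_in  = fR * (G*(1 - err) + Gt*err)
    # leaving (x' = 1): y = 1 with prob 1-err, so R is blocked, L is open
    wL_out = (1 - fL) * (G*(1 - err) + Gt*err)
    wR_out = (1 - fR) * (Gt*(1 - err) + G*err)
    w_in, w_out = wL_in + wR_in, wL_out + wR_out
    p1 = w_in / (w_in + w_out)       # stationary occupation of the dot
    return wL_in * (1 - p1) - wL_out * p1

assert current_LR(0.01) < 0          # demon regime: particles pumped R -> L
assert current_LR(0.5) > 0           # no information: current follows Delta mu
```

The sign change between the two asserts is the critical error probability ε * discussed below.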

Trade-off between power and efficiency
We first consider the trade-off between the negative entropy production rate and the information-thermodynamic efficiency (74). Note that (74) corresponds to the trade-off between power and efficiency (95), because Ṡ X env = βẆ X in this model.
We first calculate the average rate of chemical work Ẇ X . We note that b xx′ µ ν corresponds to the energy provided by the particle reservoir ν during the transition (x ′ , y) → (x, y). Then, the average rate of chemical work reads Ẇ X = ∆µ J X , where we have used p ss (x ′ , y) ≃ π ss (y|x ′ )p X ss (x ′ ) in the fast relaxation limit of Y , and J X denotes the net particle current from L to R, which is conjugate with the chemical potential difference ∆µ. The net particle current J X becomes negative when ε is smaller than a critical value ε * . Note that ε * < 1/2, which follows from the condition ∆µ = µ L − µ R > 0. Similarly, the information flow can be expressed as İ X = F I J I , where F I denotes the information affinity defined as F I := ln[π ss (0|0)π ss (1|1)/(π ss (0|1)π ss (1|0))] and J I denotes the probability current that is conjugate with F I . The tight-coupling condition J X = J I is satisfied in the limit Γ̃ X /Γ X ≪ 1. Since ε * < 1/2, the information flow İ X also becomes negative when ε < ε * . The fluctuation of the chemical work can be calculated by considering the tilted dynamics (see Appendix C). The result reads D W = (∆µ) 2 D n , where D n denotes the fluctuation of the net particle current. We now focus on the case of ε < ε * , where the system X acts as an information-thermodynamic engine with Ẇ X < 0. The corresponding information-thermodynamic efficiency reads η X W ≃ F X /F I , where F X := β∆µ denotes the thermodynamic affinity conjugate with J X . The ε-dependence of the efficiency η X W and the output power |Ẇ X | is shown in Fig. 3(a) and (b), respectively. From this figure, we can see that the output power does not remain finite as η X W → 1. This result is consistent with the trade-off between power and information-thermodynamic efficiency (95), as illustrated in Fig. 3(b).
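The fluctuation D n can be obtained numerically as the curvature of the scaled cumulant generating function of the tilted effective dynamics. The sketch below is ours and uses illustrative parameter values; as a sanity check it verifies the standard TUR-type bound J X 2 ≤ D n σ for the effective two-state process, with σ the entropy production rate of that process.

```python
import numpy as np

beta, eps_X, mu_L, mu_R = 1.0, 0.0, 0.5, -0.5
G, Gt, err = 1.0, 0.01, 0.05                  # illustrative parameters
f = lambda mu: 1/(np.exp(beta*(eps_X - mu)) + 1)
fL, fR = f(mu_L), f(mu_R)

# channel-resolved effective rates in the fast relaxation limit of Y
wL_in,  wR_in  = fL*(Gt*(1-err) + G*err),    fR*(G*(1-err) + Gt*err)
wL_out, wR_out = (1-fL)*(G*(1-err) + Gt*err), (1-fR)*(Gt*(1-err) + G*err)

def mu_scgf(lam):
    """Largest eigenvalue of the generator tilted by the L-channel jumps."""
    M = np.array([[-(wL_in + wR_in),           wL_out*np.exp(-lam) + wR_out],
                  [wL_in*np.exp(lam) + wR_in, -(wL_out + wR_out)]])
    return max(np.linalg.eigvals(M).real)

h = 1e-4
J = (mu_scgf(h) - mu_scgf(-h)) / (2*h)                     # mean current
D = (mu_scgf(h) - 2*mu_scgf(0) + mu_scgf(-h)) / (2*h**2)   # Var / 2T

p0 = (wL_out + wR_out) / (wL_in + wR_in + wL_out + wR_out)
sigma = sum((wi*p0 - wo*(1 - p0)) * np.log(wi*p0/(wo*(1 - p0)))
            for wi, wo in ((wL_in, wL_out), (wR_in, wR_out)))

assert sigma > 0 and J < 0       # demon regime, dissipative steady state
assert J**2 <= D * sigma         # TUR for the effective dynamics
```

The same finite-difference trick applied with the information-flow weight yields D I .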
We next consider the trade-off relation (76), in which the negative entropy production is bounded by the fluctuation of the time-integrated stochastic information flow D I . In terms of the power Ẇ X , it can be expressed as (150). The fluctuation of the information flow can also be calculated by using the tilted dynamics as D I = F I 2 D n , which satisfies β√(D W /D I ) = F X /F I ≃ η X W . Therefore, from (83), it follows that the upper bound of (150) is exactly the same as that of (95) for ε < ε * .
For comparison, we also plot the information flow and its upper bound (82) in Fig. 3(c). As in the case of the output power, the information flow also vanishes as the efficiency η X S (= η X W ) approaches 1. We note that |İ X | → ∞ as ε → 0, because the information affinity F I diverges.

Input-output fluctuation inequalities
We now consider the input-output fluctuation inequalities for ε < ε * , where the entropy production (Ṡ X env = βẆ X ) and the information flow correspond to the output and input currents, respectively. Since F X /F I ≤ 1, we can easily confirm that D S ≤ D I and D I /(İ X ) 2 = D S /(Ṡ X env ) 2 . Thus, the input-output fluctuation inequalities are satisfied even beyond the linear response regime in this model. Furthermore, the equality is achieved for the inequality regarding the relative fluctuations.
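The tight-coupling structure makes these inequalities transparent: a single net current J carries both affinities, so D S = F X 2 D n and D I = F I 2 D n . A minimal sketch (ours; the numerical values of J and D n are illustrative assumptions, while F I follows the definition given above):

```python
import numpy as np

# tight-coupling structure of the dot model: one net current J conjugate to
# the thermodynamic affinity F_X = beta*dmu and the information affinity F_I
beta, dmu, err = 1.0, 1.0, 0.05           # illustrative values
F_X = beta * dmu
F_I = np.log((1 - err)**2 / err**2)       # ln[pi(0|0)pi(1|1)/(pi(0|1)pi(1|0))]
J, D_n = -0.16, 0.05                      # illustrative current statistics

S_env, I_flow = F_X * J, F_I * J          # entropy production and info flow
D_S, D_I = F_X**2 * D_n, F_I**2 * D_n     # tight-coupling fluctuations

assert F_X / F_I <= 1
assert D_S <= D_I                                      # output vs input
assert np.isclose(D_I / I_flow**2, D_S / S_env**2)     # equal relative fluct.
```

The relative-fluctuation equality holds identically here because both currents are proportional to the same net current J.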
B. Coupled linear overdamped Langevin equations

Fast relaxation limit of Y

In the fast relaxation limit ϵ → 0, the joint probability density p τ (x, y) can be approximated as p τ (x, y) ≃ p X τ (x)π ss (y|x), where π ss (y|x) denotes the conditional Gaussian distribution of the fast variable, and the resulting effective dynamics for X is the one-dimensional Langevin equation (161).

Trade-off between negative entropy production and efficiency

We first consider the trade-off between the negative entropy production and efficiency (74). Note that, unlike the previous example, this trade-off is not the same as the trade-off between power and efficiency (95), because no external work is applied in this system. In the steady state with the fast relaxation limit of Y , the entropy production rate associated with X can be written as a steady-state average involving the Stratonovich product (denoted by •). We note that Ṡ X env is induced by the fast variable y t , which does not appear in the effective dynamics for X (161). In other words, Ṡ X env is an entropy production invisible from the effective dynamics, which is called hidden entropy production [62,63]. Similarly, the information flow İ X can be calculated. In the context of the Brownian gyrator, we can show that there is a torque that remains finite even in the fast relaxation limit of Y ; both the medium entropy production rate Ṡ X env and the information flow İ X are proportional to this "hidden" torque. The fluctuation of the entropy production can be calculated by considering the tilted dynamics (see Appendix D for the derivation). We now focus on the case where ω XY ω Y X > 0 and ω XY D Y < ω Y X D X . In this case, both the entropy production rate and the information flow become negative, i.e., X acts as an information-thermodynamic engine, and the corresponding information-thermodynamic efficiency η X S can be computed explicitly. Combining (165) and (164), we find that the upper bound on the negative entropy production rate in (74) is attained. Thus, the equality condition is satisfied even far from equilibrium in this case. This is in contrast to the standard long-time TUR, where the equality is guaranteed only in the near-equilibrium limit.
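The steady-state averages appearing above are built from the stationary covariance of the coupled linear Langevin model, which solves a Lyapunov equation AΣ + ΣA⊤ + 2D = 0. A minimal sketch (ours): the drift uses the couplings ω XY , ω Y X of the text, while the numerical values and the helper names are assumptions.

```python
import numpy as np

# illustrative parameters of a coupled linear Langevin model:
#   dx = (-x + w_xy * y) dt + sqrt(2 D_X) dW_x
#   tau_Y dy = (w_yx * x - y) dt + sqrt(2 tau_Y D_Y) dW_y   (fast as tau_Y -> 0)
w_xy, w_yx, D_X, D_Y, tau_Y = 0.5, 0.8, 1.0, 0.2, 0.05

A = np.array([[-1.0, w_xy],
              [w_yx / tau_Y, -1.0 / tau_Y]])
D = np.diag([D_X, D_Y / tau_Y])

# stationary covariance S from the Lyapunov equation A S + S A^T + 2 D = 0,
# solved via the Kronecker-product vectorization of the linear system
n = A.shape[0]
M = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
S = np.linalg.solve(M, (-2 * D).reshape(-1)).reshape(n, n)

assert np.all(np.linalg.eigvals(A).real < 0)        # stable drift
assert np.allclose(A @ S + S @ A.T + 2 * D, 0)      # Lyapunov residual
assert S[0, 0] > 0 and np.linalg.det(S) > 0         # valid covariance
```

Quantities such as Ṡ X env and İ X are quadratic in (x, y) and can therefore be evaluated directly from Σ.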
We next consider the trade-off relation (76), in which the negative entropy production is bounded by the fluctuation of the time-integrated stochastic information flow D I . The fluctuation of the information flow D I can also be calculated by using the tilted dynamics, and it satisfies √(D S /D I ) = η X S . Therefore, from (83), it follows that the upper bound of (76) is exactly the same as that of (74). This implies that the trade-off between the information flow and efficiency (82) also achieves the equality in this case. We now consider the possibility of achieving a finite negative entropy production even when η X S → 1. We first note that the negative entropy production can be expressed in terms of η X S . Since 0 < ω XY ω Y X < 1, we find that |Ṡ X env | → 0 as η X S → 1 as long as ω X is finite. In contrast, if ω X is scaled as ω X = ω 0 /(1 − η X S ), the negative entropy production can remain finite even in the limit η X S → 1. As can be seen from the trade-off relation (166), the fluctuation of the entropy production blows up as η X S → 1 in this case (see Fig. 4(a)). Similarly, the information flow can also remain finite in the limit η X S → 1, at the expense of a blow-up of the fluctuation of the information flow (see Fig. 4(b)).

Equality condition of bipartite TUR for this model
Here, we discuss the reason why the equality of the trade-offs (166) and (168) is achieved in this model. We first recall that these trade-offs are special cases of the bipartite TUR in the fast relaxation limit of Y (55). In this model, the time-integrated generalized current Ĵ T for the subsystem X can be expressed with an arbitrary weight function g(x, y). As in Sec. III D, the current can be decomposed as Ĵ T = Ĵ I T + Ĵ II T , where the symbol · denotes the Ito product and W X t denotes the Wiener process. We now show that this model satisfies the sufficient condition, described in Sec. III D, for the bipartite TUR in the fast relaxation limit of Y (55) to hold with equality. First, the weight of the current should be proportional to that of the partial entropy production (178). The time-integrated stochastic information flow ∆Î X in the steady state with the fast relaxation limit of Y is an example that satisfies this condition. Second, for this choice of the current, the fluctuation of Ĵ II T must vanish in the fast relaxation limit of Y . In this model, we can confirm that this condition is indeed satisfied by explicitly calculating D II J (see Appendix D 2). As a result, the equality of (55) is achieved for a current that satisfies the condition (178) in the fast relaxation limit of Y (181). Note that the condition described above is only a sufficient condition. In fact, the equality (181) holds for more diverse types of currents that do not even satisfy the condition (178) in this model. To see this, note that a current Ĵ T satisfying the condition (178) can be expressed as a time integral plus a boundary term (182). The important point here is that the second term is a boundary term and can be ignored when considering the long-time statistical properties of Ĵ T . (For the effect of such a boundary term on the large deviation, see [64].) Therefore, any current Ĵ T that has the same long-time statistical properties as (182) satisfies the equality (181).
An example is the stochastic medium entropy production ∆Ŝ X env (183). Hence, the choice Ĵ T = ∆Ŝ X env also satisfies the equality (181), although it does not satisfy the condition (178). Indeed, we can show that the corresponding relation holds for D I S := lim T →∞ Var[Ĵ I T ]/2T , the fluctuation of Ĵ I T with Ĵ T = ∆Ŝ X env .

Input-output fluctuation inequalities
We finally consider the input-output fluctuation inequalities for the case of ω XY ω Y X > 0 and ω XY D Y < ω Y X D X , where the entropy production and the information flow correspond to the output and input currents, respectively. From the relation √(D S /D I ) = η X S , it immediately follows that D S ≤ D I and D I /(İ X ) 2 = D S /(Ṡ X env ) 2 . Thus, as in the previous example, the input-output fluctuation inequalities are satisfied even beyond the linear response regime in this model, and the equality is achieved for the inequality regarding the relative fluctuations.

VII. CONCLUDING REMARKS
In this paper, we have obtained several fundamental limits for information processing systems. Specifically, we have derived a TUR-type inequality for bipartite systems that provides a universal lower bound on the relative fluctuation of an arbitrary current for a system of interest by the associated partial entropy production, which includes the information flow. This bipartite TUR includes the standard TUR as a special case and incorporates the effect of the interaction with external auxiliary systems. As a corollary to this inequality, we have derived universal trade-off relations between the negative entropy production rate and the information-thermodynamic efficiency, which can be regarded as an extension of the trade-offs for heat engines [21,22] to information-thermodynamic engines. Furthermore, in the fast relaxation limit of the auxiliary system, we have shown that the Gallavotti-Cohen symmetry holds even in the presence of information flow. From this symmetry, we can show that the input-output fluctuation inequalities are also valid for information processing systems. We have illustrated our results with two simple examples: coupled quantum dots and coupled linear overdamped Langevin equations. In particular, we have seen that the latter provides an example where the equality of the bipartite TUR is achieved even far from equilibrium.
Here, we provide some remarks on previous studies related to our results. We first note that the bipartite TUR in the short-time limit T → 0 was already proven in [52] using the Cauchy-Schwarz inequality. Our first main result (26) can be regarded as an extension of the short-time bipartite TUR to an arbitrary observation time T . TUR-type inequalities including measurement and feedback have also been derived from fluctuation theorems [65,66]. While these relations include a contribution of the information induced by measurement and feedback processes, this contribution appears in the form of the total entropy production rather than the partial entropy production. Therefore, our bipartite TUR can provide more stringent bounds on the precision of currents under measurement and feedback control. The standard TUR has also been discussed as a tool for inferring the entropy production [52,67-71]. In this context, the bipartite TUR proved here may provide a promising approach to estimating a partial entropy production, especially an information flow.
Next, we remark on the range of validity of the bipartite TUR. While here we have presented the bipartite TUR in the steady state, this relation is valid even for systems under arbitrary time-dependent driving from arbitrary initial states. In Appendix A, we provide a proof of the bipartite TUR in a general form for the case of overdamped Langevin equations. It should also be noted that the bipartite TUR is generally not valid for systems with broken time-reversal symmetry, such as underdamped Langevin dynamics [37][38][39][40][41][42][43], as in the standard TUR. However, many relevant biological systems are often described by continuous-time Markov jump processes or diffusion processes with only even variables and parameters under time reversal. Therefore, the results described in this paper will be applicable to a wide range of systems, including biological systems.
In this study, we have focused mainly on the case where an auxiliary system evolves much faster than the system of interest. Such a separation of time scales allows the dynamics of a composite system to be reduced to the effective dynamics of the system of interest, and thus various universal relations similar to those found for a single system hold. While we expect such a separation of time scales to be ubiquitous in biological systems, extending our results to cases where there is no clear time-scale separation would be important for elucidating the design principles of biological systems.
∂ t′ p(x, t|x ′ , t ′ ) = −∂ t p(x, t|x ′ , t ′ ), and thus, by integrating by parts, we obtain the desired expression.

Appendix C: Coupled quantum dots

In this section, we provide a detailed calculation of the fluctuation of the chemical work D W in the fast relaxation limit of Y for the coupled quantum dots introduced in Sec. VI A. The fluctuation of the information flow D I can be calculated in a similar way. The stochastic chemical work is defined with the weight b xx′ := 1 for (x, x ′ ) = (1, 0) and b xx′ := −1 for (x, x ′ ) = (0, 1). (C2) The fluctuation of the stochastic chemical work is defined as D W := lim T →∞ Var[∆Ŵ X ]/2T , which can be obtained from the scaled cumulant generating function. As described in Sec. V A, the scaled cumulant generating function can be calculated by considering the generating function conditioned to a final state (x, y): G T (x, y) := ∫ d∆W X p T (x, y, ∆W X )e λ∆W X . (C5) The time evolution of G T (x, y) is governed by the tilted generators L X λ and L Y λ , where we have used the dimensionless slow time τ := Γ X T and the dimensionless transition rates w̄ (ν)y xx′ := w (ν)y xx′ /Γ X and w̄ yy′ x := w yy′ x /Γ Y , with the small parameter ϵ := Γ X /Γ Y ≪ 1 (do not confuse ϵ with the error probability ε).
Since we are interested in the fast relaxation limit of Y , we can consider the effective tilted dynamics for G X τ := Σ y G τ . By performing a perturbation expansion as in Sec. V A, we obtain an effective tilted generator L X λ , which can be expressed in terms of the effective transition rates. The second derivative of its largest eigenvalue θ max then gives the fluctuation D W , where D n denotes the fluctuation of the net particle current.

Appendix D: Coupled linear overdamped Langevin equations

For the coupled linear Langevin model, we consider the weight function g(x, y) := (−x + ω XY y)/D X .
The fluctuation D S can be obtained from the scaled cumulant generating function. To compute it, we introduce the generating function conditioned to an initial state (x 0 , y 0 ) = (x, y), defined by G T (x, y) := ⟨e λ∆Ŝ X env |x, y⟩.
The time evolution of G T is described by the Feynman-Kac formula [78] with a tilted generator L † λ , where F X (x, y) := −x + ω XY y and F Y (x, y) := ω Y X x − y denote the dimensionless drift terms. The largest eigenvalue of this tilted generator gives the scaled cumulant generating function.
Since we are interested in the fast relaxation limit of Y , we can further simplify the problem by considering the effective tilted generator for X, as follows. We first assume that G τ has an asymptotic expansion in terms of the asymptotic sequence {ϵ n } ∞ n=0 as ϵ → 0. Here, we impose the normalization condition ∫ dy π ss (y|x)G (0) τ (x, y) = ∫ dy π ss (y|x)G τ (x, y), where π ss denotes the zero-eigenfunction of L Y 0 . By substituting this expansion into (D6), we consider the leading order; since L Y † λ = L Y † 0 , whose zero-eigenfunction is 1, the Perron-Frobenius theorem and the normalization condition imply that G (0) τ depends only on x. The current Ĵ II T can be written as Ĵ II T = ∫ 0 T f (x τ , y τ )dτ , where f (x, y) := C(y − ω Y X x)(−x + ω XY y). The fluctuation D II J can be obtained from the scaled cumulant generating function, which corresponds to the largest eigenvalue of the corresponding tilted generator [78]. The effective tilted generator is then given by L X† λ := ∫ dy π ss (y|x)L † λ = −(1 − ω XY ω Y X )x∂ x + D X ∂ 2 x + λC(−ω Y X D X + ω XY D Y ).
By performing a similar calculation as in the previous section, we finally obtain a scaled cumulant generating function that is linear in λ. Thus, we find that D II J := lim T →∞ Var[Ĵ II T ]/2T = 0.