Truncated stochastically switching processes

There are a large variety of hybrid stochastic systems that couple a continuous process with some form of stochastic switching mechanism. In many cases the system switches between different discrete internal states according to a finite-state Markov chain, and the continuous dynamics depends on the current internal state. The resulting hybrid stochastic differential equation (hSDE) could describe the evolution of a neuron's membrane potential, the concentration of proteins synthesized by a gene network, or the position of an active particle. Another major class of switching systems is a search process with stochastic resetting, where the position of a diffusing or active particle is reset to a fixed position at a random sequence of times. In this case the system switches between a search phase and a reset phase, where the latter may be instantaneous. In this paper, we investigate how the behavior of a stochastically switching system is modified when the maximum number of switching (or reset) events in a given time interval is fixed. This is motivated by the idea that each time the system switches there is an additive energy cost. We first show that in the case of an hSDE, restricting the number of switching events is equivalent to truncating a Volterra series expansion of the particle propagator. Such a truncation significantly modifies the moments of the resulting renormalized propagator. We then investigate how restricting the number of reset events affects the diffusive search for an absorbing target. In particular, truncating a Volterra series expansion of the survival probability, we calculate the splitting probabilities and conditional mean first passage times (MFPTs) for the particle to be absorbed by the target or to exceed a given number of resets, respectively.


I. INTRODUCTION
There are a wide range of stochastic processes in cell biology that involve the coupling between continuous and discrete random variables (stochastic hybrid systems) [1]. The continuous process could represent the concentration of proteins synthesized by a gene [2-7], the membrane voltage of a neuron [8,9,11-16], the position of a swimming bacterium [17-20], or the position of a molecular motor [21-25]. The corresponding discrete process could represent the activation state of the gene, the conformational state of an ion channel, or the velocity state of an active particle. Let (X(t), N(t)) denote the state of the system at time t with X(t) ∈ R^d and N(t) ∈ Γ, where Γ is a discrete set. Assuming that N(t) = n, the continuous variables typically evolve according to a hybrid stochastic differential equation (hSDE) of the form dX = A_n(X) dt + √(2D) dW, where W is a vector of independent Wiener processes and A_n is an n-dependent drift term. (The diffusivity could also depend on n.) The discrete variable switches between the different discrete states according to a continuous-time Markov chain whose matrix generator could itself depend on X(t). In the limit D → 0, the dynamics reduces to a so-called piecewise deterministic Markov process [26].
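As a concrete illustration, the dynamics just described can be simulated by combining an Euler-Maruyama update for the continuous variable with a Bernoulli switching step for the Markov chain. The sketch below is a minimal example, not a scheme specified in the text: the drift A_n(x) = −x + v_n, the rates, and all parameter values are illustrative choices.

```python
import numpy as np

def simulate_hsde(T=10.0, dt=1e-3, D=0.1, alpha=1.0, beta=1.0, x0=0.0, n0=0, seed=0):
    """Simulate a two-state hybrid SDE dX = A_n(X) dt + sqrt(2D) dW.

    Illustrative choice A_n(x) = -x + v_n with v_0 = 1, v_1 = -1; the discrete
    state switches 0 -> 1 at rate beta and 1 -> 0 at rate alpha.
    """
    rng = np.random.default_rng(seed)
    v = (1.0, -1.0)
    x, n = x0, n0
    for _ in range(int(T / dt)):
        x += (-x + v[n]) * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
        rate = beta if n == 0 else alpha
        if rng.random() < rate * dt:   # switch with probability rate*dt << 1
            n = 1 - n
    return x, n

x, n = simulate_hsde()
assert n in (0, 1) and abs(x) < 5.0   # trajectory stays confined by the drift
```

For small switching rates the continuous variable relaxes toward the fixed point v_n of the current discrete state, which is the piecewise deterministic behavior recovered as D → 0.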
In many applications of hSDEs, there is a separation of time scales, whereby the switching between discrete states of the Markov chain is fast compared to the relaxation dynamics of the continuous process. Suppose that τ is the characteristic time scale of the relaxation dynamics and ετ is the characteristic time scale of the Markov chain for some small positive parameter ε. Taking the limit ε → 0 then leads to an effective continuous dynamical system that is obtained by averaging the piecewise dynamics with respect to the corresponding unique stationary measure of the Markov chain (assuming the latter exists). In the weak-noise regime 0 < ε ≪ 1, various approaches have been used to study noise-induced transitions between metastable states of the averaged system. These include large deviation theory [27-30], WKB approximations and matched asymptotics [5,6,11,14,15], and stochastic hybrid path integrals [31-33].
Another important example of a randomly switching process is a search process with stochastic resetting. (See the review [36] and references therein.) The simplest version of a resetting protocol is to instantaneously reset the position of a diffusing particle to some fixed point x_r at a constant rate r [37-39]. One of the characteristic properties of a search process with stochastic resetting is that the mean time for a Brownian particle to find a hidden target in an unbounded domain is finite, and has an optimal value as a function of the resetting rate r. This is a consequence of the fact that the mean first passage time (MFPT) to find the target diverges in the limits r → 0 and r → ∞. Analogous behavior has been observed in other search processes with resetting, including nondiffusive search processes such as Lévy flights [40], active run-and-tumble particles (RTPs) [41,42] and directed velocity jump processes [43,44], diffusion in potential landscapes [45,51] or switching environments [46-48], resetting followed by a refractory period [49,50], resetting with finite return times [51-56], and encounter-based models of absorbing targets [57-59].
In this paper we consider a different aspect of stochastically switching systems, namely, conditioning the process on the maximum number of switching events that can occur. That is, if M(t) denotes the number of switching events in the interval [0, t], then we impose the condition M(t) ≤ µ < ∞ for all t. One motivation for such a construction is that state transitions in an hSDE tend to cost energy, so that the maximum number of such transitions could be limited. Alternatively, conditioning on the number of transitions provides another type of statistic that could be measured experimentally. For example, in the case of gene networks, transitions from the inactive to the active state often result in some form of bursting. In the case of search processes, the cost of stochastic resetting has been explored in a recent paper [60], which assumes that the cost is additive, and that the contribution of each reset is a function of the distance a particle must travel to the reset position x_r. These authors focus on the mean cost accrued by a search process that is terminated when the target is found. In contrast, we take the cost to be equal to the number of reset events, and terminate the search process as soon as one or other of the following occurs: the particle finds the target or the number of reset events crosses some threshold.
The structure of the paper is as follows. In Sect. II we give a general definition of an hSDE and write down the evolution equation for the associated propagator. In Sect. III we construct an integral equation for the propagator, which is expanded as a Volterra series whose individual terms correspond to fixing the number of state transitions. Truncating the Volterra series is then equivalent to restricting the maximum number of allowed state transitions. We use this to define a renormalized propagator and its associated moments. The theory is illustrated in Sect. IV using the example of an Ornstein-Uhlenbeck (OU) process with random drift, which has previously been used to model the motion of an RTP with diffusion in a harmonic potential [34,35] and protein synthesis in a two-state gene network [3,4]. We use the corresponding diagrammatic expansion to calculate moments of the hSDE that are conditioned on the maximum number of switching events. In Sect. V, we develop the analogous theory for a diffusive search process with stochastic resetting. In this case, we expand the standard last renewal equation for the survival probability as a Volterra series in the number of resetting events. Truncating the series now corresponds to restricting the maximum number of resets. We use this to calculate the splitting probabilities and conditional MFPTs for the particle to be absorbed by the target or to exceed a given number of resets, respectively.

II. HYBRID SDE IN R^d
Consider a system whose states are described by a pair of stochastic variables (X(t), N(t)), with X(t) ∈ R^d and N(t) ∈ {0, …, K − 1}. When the discrete state is N(t) = n, the system evolves according to the SDE

  dX = A_n(X) dt + √(2D) dW,  (2.1)

where W is a vector of d independent Wiener processes. The discrete stochastic variable N(t) evolves according to a K-state continuous-time Markov chain with a K × K matrix generator Q that is taken to be independent of X(t). It is related to the corresponding transition matrix W according to

  Q_{nm} = W_{nm} − δ_{n,m} Γ_n,  Γ_n = Σ_{k≠n} W_{kn}.  (2.2)

Given the definition of Γ_m, we can introduce the decomposition W_{nm} = P_{nm} Γ_m with Σ_n P_{nm} = 1. The positive quantity Γ_m is the rate at which a transition from the state m occurs and P_{nm} is the probability that such a transition is to the state n. We assume that the generator is irreducible so that there exists a stationary density ρ for which Σ_m Q_{nm} ρ_m = 0. In the case of a two-state hSDE (n = 0, 1), the matrix generator takes the form

  Q = ( −β   α
         β  −α ),  (2.3)

so that Γ_0 = β and Γ_1 = α. Given the initial conditions X(0) = x_0, N(0) = n_0, we introduce the propagator G_{nn_0}(x, t|x_0, 0) with

  G_{nn_0}(x, t|x_0, 0) dx = P[X(t) ∈ (x, x + dx), N(t) = n | x_0, n_0],  (2.4)

and G_{nn_0}(x, 0|x_0, 0) = δ_{n,n_0} δ(x − x_0). The propagator G_{nn_0} evolves according to the forward differential Chapman-Kolmogorov (CK) equation

  ∂G_{nn_0}/∂t = −∇·[A_n(x) G_{nn_0}] + D∇²G_{nn_0} + Σ_m Q_{nm} G_{mn_0}.  (2.5)

The first two terms on the right-hand side represent the probability flow associated with the SDE for a given n, whereas the third term represents jumps into or out of the discrete state n. In the absence of switching with n fixed, the system reduces to a single SDE whose corresponding FP equation takes the form

  ∂p_n/∂t = −∇·[A_n(x) p_n] + D∇²p_n,  (2.6)

and p_n(x, 0|x_0, 0) = δ(x − x_0). We will refer to p_n as the bare (no switching) propagator.
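The algebraic constraints on the two-state generator (each column of Q sums to zero, and Qρ = 0 for the stationary density) can be checked numerically. A minimal sketch, with illustrative rate values:

```python
import numpy as np

# Two-state generator with convention d(rho)/dt = Q rho; columns sum to zero.
# The rates alpha, beta below are illustrative values only.
alpha, beta = 2.0, 0.5
Q = np.array([[-beta, alpha],
              [beta, -alpha]])
assert np.allclose(Q.sum(axis=0), 0.0)          # probability conservation

rho = np.array([alpha, beta]) / (alpha + beta)  # candidate stationary density
assert np.allclose(Q @ rho, 0.0) and np.isclose(rho.sum(), 1.0)
```

The stationary weights ρ_0 = α/(α + β), ρ_1 = β/(α + β) reflect the mean residence times 1/β and 1/α in the two states.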

III. INTEGRAL EQUATION AND VOLTERRA SERIES EXPANSION
The propagator G_{nm} satisfies an integral equation of the form

  G_{nm}(x, t|x_0, 0) = δ_{n,m} e^{−Γ_m t} p_m(x, t|x_0, 0) + Σ_{l≠n} W_{nl} ∫_0^t dτ ∫ dy e^{−Γ_n(t−τ)} p_n(x, t|y, τ) G_{lm}(y, τ|x_0, 0).  (3.1)

The first term on the right-hand side is the contribution from all paths that never switch in the interval [0, t], which only occurs if n = m. The probability of no switching from the state m is e^{−Γ_m t}. The second term on the right-hand side represents the sum over all trajectories that switch at least once, with the final transition occurring at the time τ. Iterating the integral Eq. (3.1) generates a Volterra series representation of the propagator:

  G_{nm}(x, t|x_0, 0) = δ_{n,m} e^{−Γ_m t} p_m(x, t|x_0, 0) + W_{nm} ∫_0^t dτ ∫ dy e^{−Γ_n(t−τ)} p_n(x, t|y, τ) e^{−Γ_m τ} p_m(y, τ|x_0, 0)
    + Σ_l W_{nl} W_{lm} ∫_0^t dτ ∫_0^τ dτ′ ∫ dy ∫ dy′ e^{−Γ_n(t−τ)} p_n(x, t|y, τ) e^{−Γ_l(τ−τ′)} p_l(y, τ|y′, τ′) e^{−Γ_m τ′} p_m(y′, τ′|x_0, 0) + ⋯.  (3.2)

The jth term in the series expansion, j ≥ 0, has the following interpretation: it specifies the contribution to the propagator from paths that undergo exactly j switching events. For example, if n ≠ m then the j = 1 term has a factor P_{nm} Γ_m e^{−Γ_n(t−τ)} e^{−Γ_m τ}, after setting W_{nm} = P_{nm} Γ_m. The probability that the first transition occurs in the time interval [τ, τ + dτ] is Γ_m e^{−Γ_m τ} dτ, the probability that m → n is P_{nm}, and the probability that there are no transitions from the state n is e^{−Γ_n(t−τ)}.
Hence, the total probability that there is a single transition m → n in the time interval [0, t] is

  P_{nm} Γ_m ∫_0^t dτ e^{−Γ_n(t−τ)} e^{−Γ_m τ}.  (3.3)

Similarly, the probability that there are two transitions m → l → n in the interval [0, t] is

  P_{nl} Γ_l P_{lm} Γ_m ∫_0^t dτ ∫_0^τ dτ′ e^{−Γ_n(t−τ)} e^{−Γ_l(τ−τ′)} e^{−Γ_m τ′}.  (3.4)

In addition, integrating Eq. (3.2) with respect to x, summing over n, and then using the unit normalization of the propagator shows that

  Σ_{j≥0} P[M(t) = j] = 1,  (3.5)

where M(t) denotes the number of transitions in [0, t]. For the sake of illustration, consider a two-state hSDE with matrix generator (2.3). Suppose that the system starts in the state n_0 = 0. Then

  G_{00}(x, t|x_0, 0) = e^{−βt} p_0(x, t|x_0, 0) + α ∫_0^t dτ ∫ dy e^{−β(t−τ)} p_0(x, t|y, τ) G_{10}(y, τ|x_0, 0),  (3.6a)

  G_{10}(x, t|x_0, 0) = β ∫_0^t dτ ∫ dy e^{−α(t−τ)} p_1(x, t|y, τ) G_{00}(y, τ|x_0, 0),  (3.6b)

with analogous equations (3.6c,d) for G_{11} and G_{01} when n_0 = 1. The first term on the right-hand side of Eq. (3.6a) represents the contribution from all trajectories that never switch to the state n = 1. The latter occurs with probability e^{−βt}. On the other hand, the integral term sums over all trajectories that switch at least once, with the last switch 1 → 0 occurring at a rate α at a time τ, 0 < τ < t. Similar interpretations apply to Eqs. (3.6b-d). Iterating Eq. (3.6a) gives

  G_{00}(x, t|x_0, 0) = e^{−βt} p_0(x, t|x_0, 0) + αβ ∫_0^t dτ ∫_0^τ dτ′ ∫ dy ∫ dy′ e^{−β(t−τ)} p_0(x, t|y, τ) e^{−α(τ−τ′)} p_1(y, τ|y′, τ′) e^{−βτ′} p_0(y′, τ′|x_0, 0) + ⋯.  (3.7)

Since the initial and final discrete states are the same, the number of switches has to be even. Using similar arguments, we obtain analogous series expansions of G_{11}, G_{01} and G_{10}. For example, contributions to G_{11} involve sequences of transitions of the form 1 → 0 → 1, whereas contributions to G_{10} involve the transition 1 → 0 followed by additional transitions of the form 0 → 1 → 0. The first few terms in the diagrammatic expansions of G_{00} and G_{10} are shown in Fig. 1.

FIG. 1. First few terms in the diagrammatic expansions of the full propagators G_{00}(x, t|x_0, 0) and G_{10}(x, t|x_0, 0) in terms of the bare propagators p_n(x, t|x_0, t_0) for the two-state Markov chain. Time flows from right to left.
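The total probability of a single transition described above can be evaluated in closed form for the two-state chain starting in n_0 = 0, namely β(e^{−βt} − e^{−αt})/(α − β), and checked by numerical quadrature. A minimal sketch with illustrative rates:

```python
import numpy as np

alpha, beta, t = 2.0, 0.5, 1.0   # illustrative rates Gamma_1, Gamma_0 and time

# P[M(t) = 1] starting from state 0: switch 0 -> 1 at tau, no further switch
tau = np.linspace(0.0, t, 100001)
f = beta * np.exp(-beta * tau) * np.exp(-alpha * (t - tau))
h = tau[1] - tau[0]
P1_quad = h * (f.sum() - 0.5 * (f[0] + f[-1]))   # trapezoid rule

P1_exact = beta * (np.exp(-beta * t) - np.exp(-alpha * t)) / (alpha - beta)
P0 = np.exp(-beta * t)                            # no-switch probability
assert abs(P1_quad - P1_exact) < 1e-8
assert 0 < P0 + P1_exact < 1                      # remaining mass: >= 2 switches
```

The probabilities of j = 0, 1, 2, … transitions sum to unity, so the deficit 1 − P0 − P1 is the weight carried by paths with two or more switches.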
A few comments are in order. First, as we show in Sect. IV, the series expansion (3.7) is not uniformly convergent due to the presence of secular terms involving powers of αt and βt. Thus one cannot interpret Eq. (3.7) as a perturbation expansion in the slow switching limit α, β → 0. On the other hand, as we have already highlighted, the terms in Eq. (3.7) have a natural probabilistic interpretation based on the number of state transitions. In particular, truncating the series is equivalent to conditioning the propagator with respect to the maximum number of transitions. For a general hSDE, let G^{(µ)}_{nm}(x, t|x_0, 0) denote the contribution to the propagator from paths that have a maximum of µ transitions, which is given by the first µ + 1 terms in the corresponding diagrammatic expansion. Taking the random variable M(t) to denote the number of transitions over the interval [0, t], it follows that

  P[M(t) ≤ µ] = Σ_n ∫ G^{(µ)}_{nm}(x, t|x_0, 0) dx.  (3.8)

We then introduce a renormalized propagator that is conditioned to undergo a maximum of µ transitions:

  𝒢^{(µ)}_{nm}(x, t|x_0, 0) = G^{(µ)}_{nm}(x, t|x_0, 0)/P[M(t) ≤ µ].  (3.9)
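The normalization factor P[M(t) ≤ µ] can be computed for the two-state chain by augmenting the master equation with a transition counter. The sketch below uses a simple forward-Euler integration and illustrative rates; the convention (0 → 1 at rate β, 1 → 0 at rate α) matches the two-state generator above.

```python
import numpy as np

def transition_count_dist(alpha, beta, t, jmax, nsteps=100000):
    """Evolve q[n, j](t) = P[N(t) = n, M(t) = j] for the two-state chain
    (0 -> 1 at rate beta, 1 -> 0 at rate alpha), starting from (n, j) = (0, 0),
    by forward-Euler integration of the counter-augmented master equation."""
    q = np.zeros((2, jmax + 1))
    q[0, 0] = 1.0
    dt = t / nsteps
    for _ in range(nsteps):
        dq = np.zeros_like(q)
        dq[0, :] = -beta * q[0, :]
        dq[1, :] = -alpha * q[1, :]
        dq[1, 1:] += beta * q[0, :-1]    # a 0 -> 1 switch increments j
        dq[0, 1:] += alpha * q[1, :-1]   # a 1 -> 0 switch increments j
        q += dt * dq
    return q

q = transition_count_dist(alpha=2.0, beta=0.5, t=1.0, jmax=30)
assert abs(q[0, 0] - np.exp(-0.5)) < 1e-4       # j = 0 term equals e^{-beta t}
Pmu = np.cumsum(q.sum(axis=0))                  # P[M(t) <= mu] versus mu
assert np.all(np.diff(Pmu) >= -1e-12) and abs(Pmu[-1] - 1.0) < 1e-6
```

Dividing the truncated propagator by Pmu[µ] implements the renormalization described above.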

IV. OU PROCESS WITH RANDOM DRIFT
In this section we illustrate the theory by considering the particular example of an OU process with random drift. This has previously been used to model an RTP with diffusion in a harmonic potential [34,35] and protein synthesis in a gene network [3,4]. In the former case, X(t) ∈ R represents the position of the RTP at time t, whereas N(t) = n ∈ {0, 1} specifies the current velocity state v_n of the particle. If v_0 = v and v_1 = −v, then the motion becomes unbiased when the mean time spent in each velocity state is the same (α = β). On the other hand, in the case of the gene network, X(t) represents the concentration of synthesized protein and N(t) specifies whether the gene is active or inactive. That is, v_n is the rate of synthesis with v_0 > v_1 ≥ 0. In both examples, the variable X(t) evolves according to the piecewise SDE

  dX = (−κ_0 X + v_n) dt + √(2D) dW,  (4.1)

where κ_0 represents an effective "spring constant" for an RTP in a harmonic potential, whereas it corresponds to a protein degradation rate in the case of a gene network. Comparison with Eq. (2.1) implies that A_n(x) = −κ_0 x + v_n. One major difference between an RTP and a gene network is that the continuous variable X(t) has to be positive in the latter case. However, we will assume that the effective "harmonic potential" for v_0 > v_1 ≥ 0 restricts X(t) to positive values with high probability, so that we do not have to impose the condition X(t) ≥ 0 explicitly. (Alternatively, when D = 0 the CK equation can be restricted to the finite interval Σ = [v_1/κ_0, v_0/κ_0] with reflecting boundary conditions at the ends; in this case, the steady-state CK equation can be solved explicitly [2-4].)

A. Bare propagator

First suppose that there is no switching (α = β = 0). The FP equation for the bare propagator p_n is

  ∂p_n/∂t = ∂/∂x [(κ_0 x − v_n) p_n] + D ∂²p_n/∂x².  (4.2)

One way to determine the propagator p_n(x, t|x_0, 0) is to use the fact that we have a Gaussian process, so we only need to determine the first and second moments of X(t).
Taking expectations of both sides of Eq. (4.1) and using ⟨dW(t)⟩ = 0 yields the deterministic differential equation

  d⟨X⟩/dt = −κ_0 ⟨X⟩ + v_n.  (4.3)

This has the solution

  ⟨X(t)⟩ = x_0 e^{−κ_0 t} + (v_n/κ_0)(1 − e^{−κ_0 t}).  (4.4)

Similarly, using ⟨X(t) dW(t)⟩ = 0 and dW(t)² = dt, we have

  ⟨X(t + dt)²⟩ = ⟨X(t)²⟩ + 2[−κ_0⟨X(t)²⟩ + v_n⟨X(t)⟩] dt + 2D dt + O(dt²).  (4.5)

Subtracting ⟨X(t)²⟩ from both sides, dividing through by dt and taking the limit dt → 0 leads to the second-order moment equation

  d⟨X²⟩/dt = −2κ_0⟨X²⟩ + 2v_n⟨X⟩ + 2D,  (4.6)

which has the solution

  ⟨X(t)²⟩ = ⟨X(t)⟩² + (D/κ_0)(1 − e^{−2κ_0 t}).  (4.7)

It immediately follows that

  Var[X(t)] = ⟨X(t)²⟩ − ⟨X(t)⟩² = (D/κ_0)(1 − e^{−2κ_0 t}) ≡ σ(t)².  (4.8)

Hence, the bare propagator p_n has the explicit solution

  p_n(x, t|x_0, 0) = (2πσ(t)²)^{−1/2} exp(−[x − µ_n(t)]²/2σ(t)²),  µ_n(t) = x_0 e^{−κ_0 t} + (v_n/κ_0)(1 − e^{−κ_0 t}).  (4.9)
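These closed-form moments are easily checked against a direct Euler-Maruyama simulation of Eq. (4.1); the parameter values in the sketch below are illustrative.

```python
import numpy as np

kappa0, vn, D, x0, t = 1.0, 0.5, 0.2, 2.0, 1.5   # illustrative parameters

# Closed-form first moment and variance of the bare OU process
mean_exact = x0 * np.exp(-kappa0 * t) + (vn / kappa0) * (1 - np.exp(-kappa0 * t))
var_exact = (D / kappa0) * (1 - np.exp(-2 * kappa0 * t))

# Seeded Euler-Maruyama ensemble as an independent numerical check
rng = np.random.default_rng(1)
npaths, nsteps = 100000, 600
dt = t / nsteps
x = np.full(npaths, x0)
for _ in range(nsteps):
    x += (-kappa0 * x + vn) * dt + np.sqrt(2 * D * dt) * rng.standard_normal(npaths)

assert abs(x.mean() - mean_exact) < 0.01
assert abs(x.var() - var_exact) < 0.01
```

The agreement confirms that the bare propagator is the Gaussian with mean µ_n(t) and variance σ(t)².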

B. Conditional moments
When switching is included, the moments of the hSDE are given by the full propagator:

  M^{(k)}_{nn_0}(t) = ∫ x^k G_{nn_0}(x, t|x_0, 0) dx.  (4.10)

Setting

  P[M(t) ≤ µ] = Σ_n ∫ G^{(µ)}_{nn_0}(x, t|x_0, 0) dx,  (4.11)

we define the conditional moments according to

  M^{(k,µ)}_{nn_0}(t) = ∫ x^k 𝒢^{(µ)}_{nn_0}(x, t|x_0, 0) dx,  (4.12)

with 𝒢^{(µ)} defined in Eq. (3.9).
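A conditional moment such as M^{(1,µ)}_{00}(t) can also be estimated by brute force: simulate the hSDE, count the switching events along each path, and average X(t) over paths with M(t) ≤ µ and N(t) = 0. The sketch below does this for the symmetric case v_0 = −v_1 = v with illustrative parameters, checking only the qualitative prediction that the conditioning biases the mean toward the right-moving state.

```python
import numpy as np

rng = np.random.default_rng(5)
kappa0, v, D, alpha, beta = 1.0, 1.0, 0.05, 1.0, 1.0   # illustrative values
t, dt, mu, npaths = 2.0, 1e-3, 2, 20000

x = np.zeros(npaths)
nstate = np.zeros(npaths, dtype=int)
switches = np.zeros(npaths, dtype=int)
for _ in range(int(t / dt)):
    vel = np.where(nstate == 0, v, -v)       # v_0 = +v, v_1 = -v
    x += (-kappa0 * x + vel) * dt + np.sqrt(2 * D * dt) * rng.standard_normal(npaths)
    rate = np.where(nstate == 0, beta, alpha)
    flip = rng.random(npaths) < rate * dt    # switching step
    nstate[flip] = 1 - nstate[flip]
    switches += flip

cond = (switches <= mu) & (nstate == 0)
M_cond = x[cond].mean()          # estimate of the conditional first moment
M_full = x[nstate == 0].mean()   # unconditional counterpart
assert M_cond > M_full           # restricting switches biases toward v_0 > 0
```

Paths with few switches spend a larger fraction of time in the right-moving state, which is why the conditional moment exceeds the unconditional one.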

If v_0 = −v_1 = v, then our result for M^{(1,2)}_{00} is consistent with Taylor expanding the exact expression derived in Ref. [35]. However, the truncated expansion does not yield a good approximation of M^{(1)}_{00} unless t ≪ 1/α; similarly for the zeroth moments.
Finally, substituting Eqs. (4.15), (4.16) and (4.18) into Eq. (4.13) yields an explicit expression for the conditional first moment given a maximum of two transitions. Note that

  lim_{t→∞} M^{(1,2)}_{00}(t) = v_0/κ_0.

The fact that this limit is independent of the leftward velocity v_1 reflects the fact that restricting the dynamics to two switching events means that the fraction of time spent in the right-moving state approaches unity in the limit t → ∞. Note, however, that the conditional and bare moments differ significantly for finite t. In particular, the conditional moment takes much longer to approach the steady state, and tends to be a non-monotonic function of t, as illustrated in Fig. 2(a). In Fig. 2(b), we compare the conditional moment M^{(1,2)}_{00}(t) with the unconditional moment M^{(1)}_{00}(t) given by Eq. (4.19) for v_0 = −v_1 = v. The latter approaches a smaller asymptotic limit as t → ∞. As expected, the difference between the two moments increases with α.

V. DIFFUSIVE SEARCH WITH STOCHASTIC RESETTING

We now turn to another example of a randomly switching process, namely, a search process with stochastic resetting [36]. Consider a particle (searcher) subject to Brownian motion in Ω ⊆ R^d, and resetting to a fixed point x_r at a constant rate r. Suppose that there exists some target U ⊂ Ω whose boundary ∂U is totally absorbing and x_r ∉ U, see Fig. 3. The probability density p_r(x, t|x_0) for the particle to be at position x at time t given the initial position x_0 evolves according to the master equation

  ∂p_r(x, t|x_0)/∂t = D∇²p_r(x, t|x_0) − r p_r(x, t|x_0) + r δ(x − x_r) Q_r(x_0, t),  (5.1)

where Q_r(x_0, t) is the survival probability of a particle that started at x_0:

  Q_r(x_0, t) = ∫_{Ω\U} p_r(x, t|x_0) dx.  (5.2)

Eq. (5.1) is supplemented by the absorbing boundary condition p_r(x, t|x_0) = 0 for all x ∈ ∂U and the reflecting boundary condition J_r(x, t|x_0) = 0 for all x ∈ ∂Ω.
Here J_r(x, t|x_0) = −D∇p_r(x, t|x_0) · n, with n the outward unit normal on ∂Ω. Let

  J_r(x_0, t) = −D ∫_{∂U} ∇p_r(x, t|x_0) · n_0 dx,

with n_0 the unit normal into U, see Fig. 3. Integrating Eq. (5.1) with respect to x ∈ Ω\U and using the divergence theorem shows that

  ∂Q_r(x_0, t)/∂t = −J_r(x_0, t).  (5.3)

Let T denote the FPT for absorption at ∂U. The MFPT can be expressed in terms of Q_r according to

  T_r(x_0) ≡ E[T] = ∫_0^∞ Q_r(x_0, t) dt.  (5.4)

We have used the fact that the FPT density is f_r(x_0, t) = −dQ_r(x_0, t)/dt. It is well known that Q_r is related to the survival probability without resetting, Q_0, according to a last renewal equation [36-38]:

  Q_r(x_0, t) = e^{−rt} Q_0(x_0, t) + r ∫_0^t e^{−rτ} Q_0(x_r, τ) Q_r(x_0, t − τ) dτ.  (5.5)

The first term on the right-hand side represents trajectories with no resettings. The integrand in the second term is the contribution from trajectories whose last reset occurred at time t − τ, and consists of the product of the survival probability starting from x_0 with resetting up to time t − τ and the survival probability starting from x_r without any resetting over the remaining interval of duration τ. Eq. (5.5) is the natural analog of the integral Eq. (3.1). The standard method for solving the renewal Eq. (5.5) is to introduce the Laplace transform

  Q̃_r(x_0, s) = ∫_0^∞ e^{−st} Q_r(x_0, t) dt,  (5.6)

and use the convolution theorem. Thus, Laplace transforming Eq. (5.5) and rearranging shows that

  Q̃_r(x_0, s) = Q̃_0(x_0, r + s)/(1 − r Q̃_0(x_r, r + s)).  (5.7)

The MFPT to reach the target is then given by

  T_r(x_0) = Q̃_r(x_0, 0) = Q̃_0(x_0, r)/(1 − r Q̃_0(x_r, r)).  (5.8)
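The MFPT formula (5.8) can be checked numerically for diffusion on the half-line, where the no-resetting survival probability is the error function Q_0(x, t) = erf(x/√(4Dt)) (derived in Sect. V B). The sketch below Laplace transforms Q_0 by quadrature and compares (5.8) with the classic closed-form result; parameters are illustrative.

```python
import numpy as np
from math import erf

r, D, xr = 1.0, 1.0, 1.0   # illustrative resetting rate, diffusivity, reset point

# Numerical Laplace transform Q0_tilde(xr, r) = int_0^inf exp(-rt) Q0(xr, t) dt
t = np.linspace(1e-8, 50.0, 500001)
Q0 = np.array([erf(xr / np.sqrt(4 * D * s)) for s in t])
g = np.exp(-r * t) * Q0
h = t[1] - t[0]
Q0_lap = h * (g.sum() - 0.5 * (g[0] + g[-1]))    # trapezoid rule

T_renewal = Q0_lap / (1 - r * Q0_lap)            # renewal-equation MFPT, x0 = xr
T_exact = (np.exp(np.sqrt(r / D) * xr) - 1) / r  # classic closed-form result
assert abs(T_renewal - T_exact) < 1e-4
```

The two expressions agree to quadrature accuracy, illustrating how the renewal structure converts the no-resetting survival probability into the resetting MFPT.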

A. Splitting probabilities and conditional MFPTs
Following our analysis of hybrid SDEs, we now consider a truncated version of the search process, in which the maximum number of resets is fixed. This is equivalent to truncating the Volterra series expansion of the renewal equation, which in the time domain takes the form

  Q_r(x_0, t) = e^{−rt} Q_0(x_0, t) + Σ_{ℓ=1}^{∞} Q_{r,ℓ}(x_0, t).  (5.9)

The corresponding expansion in Laplace space is a geometric series in powers of r Q̃_0(x_r, r + s). The ℓth term in the series (5.9), ℓ ≥ 0, is the joint probability Q_{r,ℓ}(x_0, t) that the particle hasn't been absorbed and has reset exactly ℓ times:

  Q_{r,ℓ}(x_0, t) = r^ℓ e^{−rt} [Q_0(x_r, ·)^{ℓ⊗} ⊗ Q_0(x_0, ·)](t),  (5.10)

where Q_0^{ℓ⊗} ⊗ Q_0 denotes the ℓth order convolution. The probability that there are ℓ reset events in the interval [0, t] is given by the Poisson distribution

  P_ℓ(t) = (rt)^ℓ e^{−rt}/ℓ!.  (5.11)

Hence, Q_{r,ℓ}(x_0, t)/P_ℓ(t) is the survival probability conditioned on exactly ℓ reset events in [0, t]. In Ref. [60] the joint probability distribution for the number of resets, the time of absorption, and a general cost was calculated. One result from that analysis was the probability distribution P(N|x_0) for N resets up to the time of absorption with x_r = x_0. In our notation,

  P(N|x_0) = r ∫_0^∞ dt ∫_0^t dτ Q_{r,N−1}(x_0, τ) e^{−r(t−τ)} f_0(x_0, t − τ).  (5.12)

This equation can be interpreted as follows. First, we suppose that the Nth reset occurs at time τ with the particle not yet absorbed, which happens with probability density r Q_{r,N−1}(x_0, τ). The probability density that there are no more resets and the particle is absorbed at time t is then e^{−r(t−τ)} f_0(x_0, t − τ). Integrating with respect to τ and t then yields P(N|x_0). We can rewrite the right-hand side of Eq. (5.12) using Laplace transforms so that

  P(N|x_0) = r Q̃_{r,N−1}(x_0, 0) f̃_0(x_0, r) = [r Q̃_0(x_0, r)]^N [1 − r Q̃_0(x_0, r)],  (5.13)

which recovers the result obtained in Ref. [60].
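The geometric law for the number of resets before absorption can be probed by an event-driven simulation on the half-line: between resets the motion is free Brownian motion, whose hitting time of the origin from x_0 can be sampled exactly as x_0²/(2DZ²) with Z a standard normal. In the sketch below (illustrative parameters), z = r Q̃_0(x_0, r) = 1 − e^{−√(r/D) x_0} uses the half-line expression for Q̃_0 from Sect. V B.

```python
import numpy as np

rng = np.random.default_rng(3)
r, D, x0, n = 1.0, 1.0, 1.0, 200000           # illustrative parameters
z = 1 - np.exp(-np.sqrt(r / D) * x0)          # z = r * Q0_tilde(x0, r)

N = np.zeros(n, dtype=int)                    # number of resets per sample
active = np.ones(n, dtype=bool)
while active.any():
    m = int(active.sum())
    t_hit = x0**2 / (2 * D * rng.standard_normal(m) ** 2)  # exact BM hitting time
    t_reset = rng.exponential(1.0 / r, size=m)             # next reset epoch
    reset_first = t_reset < t_hit
    idx = np.flatnonzero(active)
    N[idx[reset_first]] += 1                  # a reset occurred before absorption
    active[idx[~reset_first]] = False         # absorbed: sample complete

for k in range(3):
    assert abs(np.mean(N == k) - z**k * (1 - z)) < 0.01   # geometric law
```

Each reset "leg" independently ends in absorption with probability 1 − z, which is exactly the geometric structure of the Laplace-space series.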
In contrast to Ref. [60], we assume that Brownian motion is killed when either (a) the particle reaches ∂U or (b) it resets for the (µ + 1)th time. The unconditional FPT density is then

  f^{(µ)}_r(x_0, t) = −dQ^{(µ)}_r(x_0, t)/dt,  (5.14)

where Q^{(µ)}_r(x_0, t) is the corresponding survival probability:

  Q^{(µ)}_r(x_0, t) = Σ_{ℓ=0}^{µ} Q_{r,ℓ}(x_0, t).  (5.15)

Since lim_{t→∞} Q^{(µ)}_r(x_0, t) = 0, the FPT density has unit normalization. Using similar arguments to previous examples, the unconditional MFPT is

  T^{(µ)}_r(x_0) = ∫_0^∞ Q^{(µ)}_r(x_0, t) dt = Σ_{ℓ=0}^{µ} Q̃_{r,ℓ}(x_0, 0).  (5.16)

If we wish to distinguish between the two types of killing events, then we need to determine the splitting probabilities and conditional MFPTs. Let p_{r,ℓ}(x, t|x_0) denote the joint probability density for the particle position at time t and the number ℓ of resets in the interval [0, t]. The forward equation for p_{r,ℓ} is

  ∂p_{r,ℓ}/∂t = D∇²p_{r,ℓ} − r p_{r,ℓ} + r δ(x − x_r) Q_{r,ℓ−1}(x_0, t),  (5.17)

with Q_{r,−1} ≡ 0. Integrating with respect to x ∈ Ω\U implies that

  dQ_{r,ℓ}(x_0, t)/dt = −J_{a,ℓ}(x_0, t) − J_{b,ℓ}(x_0, t) + r Q_{r,ℓ−1}(x_0, t).  (5.18)

Here J_{a,ℓ}(x_0, t) is the probability flux into the surface ∂U,

  J_{a,ℓ}(x_0, t) = −D ∫_{∂U} ∇p_{r,ℓ}(x, t|x_0) · n_0 dx,  (5.19)

whereas J_{b,ℓ}(x_0, t) = r Q_{r,ℓ}(x_0, t) is the probability flux associated with resetting. Let π^{(µ)}_a(x_0) and π^{(µ)}_b(x_0) denote, respectively, the splitting probabilities for absorption at U and for resetting for the (µ + 1)th time. Then

  π^{(µ)}_a(x_0) = Σ_{ℓ=0}^{µ} ∫_0^∞ J_{a,ℓ}(x_0, t) dt = Σ_{ℓ=0}^{µ} J̃_{a,ℓ}(x_0, 0)  (5.20)

and

  π^{(µ)}_b(x_0) = ∫_0^∞ J_{b,µ}(x_0, t) dt = r Q̃_{r,µ}(x_0, 0).  (5.21)

In order to determine the Laplace transformed flux J̃_{a,ℓ}(x_0, s), we Laplace transform Eq. (5.17) under the initial condition p_{r,ℓ}(x, 0|x_0) = δ(x − x_0)δ_{ℓ,0}. This yields the equation

  D∇²p̃_{r,ℓ} − (r + s)p̃_{r,ℓ} = −δ(x − x_0)δ_{ℓ,0} − r δ(x − x_r) Q̃_{r,ℓ−1}(x_0, s).  (5.22)

Introduce the Green's function G(x, s|y) with

  D∇²G(x, s|y) − s G(x, s|y) = −δ(x − y),  (5.23)

together with the boundary conditions

  ∇G · n = 0 for all x ∈ ∂Ω,  G(x, s|y) = 0 for all x ∈ ∂U.  (5.24)

We can then write the solution for p̃_{r,ℓ} as

  p̃_{r,ℓ}(x, s|x_0) = G(x, s + r|x_0)δ_{ℓ,0} + r G(x, s + r|x_r) Q̃_{r,ℓ−1}(x_0, s).  (5.25)

Combining with the Laplace transform of Eq. (5.19), we have

  J̃_{a,ℓ}(x_0, s) = δ_{ℓ,0} J̃_0(x_0, s + r) + r J̃_0(x_r, s + r) Q̃_{r,ℓ−1}(x_0, s),  (5.26)

where J̃_0(x_0, s) = −D ∫_{∂U} ∇G(x, s|x_0) · n_0 dx can be identified with the Laplace transform of the probability flux into the target in the absence of resetting. Finally, substituting this solution into Eq.
(5.21) gives

  π^{(µ)}_a(x_0) = J̃_0(x_0, r) + r J̃_0(x_r, r) Σ_{ℓ=0}^{µ−1} Q̃_{r,ℓ}(x_0, 0).  (5.27)

Let T^{(µ)}_a(x_0) be the FPT for the particle to be absorbed at ∂U having started at x_0. Since there is a nonzero probability that the particle never exits through ∂U, due to resetting for the (µ + 1)th time prior to absorption, it follows that the unconditional MFPT satisfies E[T^{(µ)}_a(x_0)] = ∞. This motivates the introduction of the conditional MFPT

  T^{(µ)}_a(x_0) = E[T^{(µ)}_a | T^{(µ)}_a < ∞].  (5.28)

The conditional FPT density for absorption is

  f^{(µ)}_a(x_0, t) = (1/π^{(µ)}_a(x_0)) Σ_{ℓ=0}^{µ} J_{a,ℓ}(x_0, t),  (5.29)

so that

  T^{(µ)}_a(x_0) = −(1/π^{(µ)}_a(x_0)) Σ_{ℓ=0}^{µ} [∂J̃_{a,ℓ}(x_0, s)/∂s]_{s=0}.  (5.30)

Similarly,

  T^{(µ)}_b(x_0) = −(r/π^{(µ)}_b(x_0)) [∂Q̃_{r,µ}(x_0, s)/∂s]_{s=0}.  (5.31)

B. Diffusion on the half-line
Consider a diffusing particle on the half-line [0, ∞) with an absorbing target at x = 0. For simplicity, we set x_r = x_0. In the absence of resetting the Laplace transformed survival probability Q̃_0(x, s) satisfies the equation

  D d²Q̃_0(x, s)/dx² − s Q̃_0(x, s) = −1,  (5.32)

together with the boundary condition

  Q̃_0(0, s) = 0.  (5.33)

The solution takes the form [37,38]

  Q̃_0(x, s) = (1 − e^{−√(s/D) x})/s,  (5.34)

which can be inverted to give the error function

  Q_0(x, t) = erf(x/√(4Dt)).  (5.35)

Eq. (5.8) then implies that

  T_r(x_r) = (1/r)(e^{√(r/D) x_r} − 1).  (5.36)

Note that in the limit r → 0, the MFPT diverges as T_r ∼ 1/√r, which recovers the result that the MFPT of a Brownian particle without resetting to return to the origin is infinite. One also finds that T_r diverges in the limit r → ∞, since the particle resets to x_r so often that it never has the chance to reach the origin. Finally, the MFPT has a finite and unique minimum at an intermediate value of the resetting rate r [38,39]. If we restrict the maximum number of resets, then the blow-up of T_r at r → ∞ no longer occurs. This suggests that the unconditional MFPT T^{(µ)}_r(x_r) may no longer be unimodal. This is indeed found to be the case. In particular, Eq. (5.16) implies that

  T^{(µ)}_r(x_r) = Σ_{ℓ=0}^{µ} r^ℓ Q̃_0(x_r, r)^{ℓ+1} = Q̃_0(x_r, r) (1 − [r Q̃_0(x_r, r)]^{µ+1})/(1 − r Q̃_0(x_r, r)).  (5.37)
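The truncated MFPT implied by Eq. (5.16) can be verified by an exact event-driven simulation: the free Brownian hitting time of the origin from x_r is sampled as x_r²/(2DZ²) with Z a standard normal, and the process is killed at absorption or at the (µ + 1)th reset. The sketch below (illustrative parameters) assumes the geometric forms T^{(µ)}_r = (z/r)(1 − z^{µ+1})/(1 − z) and π^{(µ)}_b = z^{µ+1} with z = r Q̃_0(x_r, r) = 1 − e^{−√(r/D) x_r}.

```python
import numpy as np

rng = np.random.default_rng(4)
r, D, xr, mu, n = 1.0, 1.0, 1.0, 3, 100000   # illustrative parameters
z = 1 - np.exp(-np.sqrt(r / D) * xr)         # z = r * Q0_tilde(xr, r)

T = np.zeros(n)                              # time until either killing event
resets = np.zeros(n, dtype=int)
active = np.ones(n, dtype=bool)
while active.any():
    m = int(active.sum())
    t_hit = xr**2 / (2 * D * rng.standard_normal(m) ** 2)  # exact BM hitting time
    t_reset = rng.exponential(1.0 / r, size=m)
    T[active] += np.minimum(t_hit, t_reset)
    reset_first = t_reset < t_hit
    idx = np.flatnonzero(active)
    resets[idx[reset_first]] += 1
    active[idx[~reset_first]] = False        # absorbed at the target
    active &= resets <= mu                   # killed at the (mu+1)th reset

pi_b = np.mean(resets > mu)                  # reset threshold exceeded first
T_pred = (z / r) * (1 - z ** (mu + 1)) / (1 - z)
assert abs(pi_b - z ** (mu + 1)) < 0.01
assert abs(T.mean() - T_pred) < 0.05
```

Since each leg independently ends in a reset with probability z, the splitting probability for exceeding the reset threshold is simply z^{µ+1}, consistent with the truncated geometric sum for the MFPT.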
In Fig. 4(a) we plot T^{(µ)}_r(x_r) as a function of the resetting rate r for various values of µ and fixed x_r. For sufficiently small µ, the MFPT is a monotonically decreasing function of r, whereas as µ increases, T^{(µ)}_r(x_r) develops a local minimum but is not unimodal. As expected, the probability π^{(µ)}_b that the particle resets for the (µ + 1)th time before being absorbed decreases as the maximum reset threshold µ is increased. On the other hand, it is an increasing function of r. The conditional MFPT T^{(µ)}_b for exceeding the reset threshold µ is a monotonically decreasing function of r and a monotonically increasing function of µ. This is consistent with the idea that, all other things being equal, a faster reset rate reduces the time to reach µ + 1 resets. In Fig. 6 we show corresponding plots of π^{(µ)}_a and T^{(µ)}_a.

VI. CONCLUSION

In this paper we explored the effects of restricting the maximum number of switching events in a stochastic hybrid system, under the assumption that switching costs energy. We considered two distinct classes of switching dynamics: (i) an hSDE and (ii) diffusion with stochastic resetting. In the former case, we truncated a Volterra series expansion of the particle propagator, and used this to define a renormalized propagator in which the maximum number of switching events is fixed. We illustrated the theory by calculating the renormalized moments of an OU process with random drift. In case (ii), we truncated a Volterra series expansion of the survival probability of a Brownian particle searching for an absorbing target. This led to a modified FPT problem in which the search is terminated when either the particle finds the target or the number of resets exceeds a fixed threshold. We calculated the splitting probabilities and conditional MFPTs for these mutually exclusive events.

There are a number of natural extensions of the current work. The first is to calculate renormalized propagators for hSDEs beyond the example of a one-dimensional OU process with random drift. One of the challenges is that there are few examples where the bare propagators p_n
are known exactly. Moreover, in many cases, the matrix generator Q depends on the continuous state X(t) at time t. One notable example is a gene network that is regulated by its own protein product [2]. Suppose that the promoter has a single operator site OS_1 for binding protein X. The gene is assumed to be OFF when X is bound to the promoter and ON otherwise. A second example is protein concentration gradient formation during a particular stage of cell polarization in C. elegans zygotes. Experimentally, it is found that the underlying mechanism relies on space-dependent switching between fast and slow diffusion [61]; see also the theoretical studies of Refs. [62,63]. Another future direction would be to consider other examples of truncated search processes with stochastic resetting. This could involve modifying the underlying stochastic search dynamics (e.g., active particles, Lévy flights, etc.) or introducing delays such as refractory periods and finite return times. Finally, it would be interesting to modify the additive rule for the energy cost along the lines of Ref. [60] by taking the cost of each reset to depend on the distance the particle has to travel to the reset point. This would imply that the threshold µ for the number of resets before the search process is killed is itself a random variable that depends on the history of previous resets.

FIG. 3. Domain Ω ⊂ R^d containing a single target U with a totally absorbing surface ∂U. The particle starts at x_0 and resets to the point x_r at a constant rate r.

FIG. 4. Plot of the MFPT T^{(µ)}_r(x_r) as a function of r for a Brownian particle on the half-line that is killed either by reaching the boundary x = 0 or by resetting for the (µ + 1)th time. (a) Various µ for x_r = 1. (b) Various x_r for µ = 100 (solid curves) and µ = ∞ (dashed curves). We set D = 1.

Corresponding plots for various reset positions x_r and fixed µ are shown in Fig. 4(b). It can be seen that the value of r where truncation starts to have a noticeable effect decreases as x_r increases. Turning to the splitting probabilities and conditional MFPTs, we use the identities π^{(µ)}_a(x_r) + π^{(µ)}_b(x_r) = 1 and π^{(µ)}_b(x_r) = r Q̃_{r,µ}(x_r, 0) = [r Q̃_0(x_r, r)]^{µ+1}. Example plots of π^{(µ)}_b(x_r) and T^{(µ)}_b(x_r) are shown in Fig. 5.

FIG. 5. (a) Plot of the splitting probability π^{(µ)}_b as a function of r for the particle to reset for the (µ + 1)th time before being absorbed at ∂U. (b) Corresponding plots of the conditional MFPT T^{(µ)}_b. We set x_r = 1 and D = 1.

FIG. 6. Corresponding plots of the splitting probability π^{(µ)}_a and the conditional MFPT T^{(µ)}_a.

If O_0 and O_1 denote the unbound and bound promoter states, then the corresponding state transitions are O_0 → O_1 at rate βx and O_1 → O_0 at rate α, where x is the concentration of X. Eq. (4.1) still holds but the matrix generator becomes x-dependent:

  Q(x) = ( −βx   α
            βx  −α ).