Noise-Induced Bistable States and Their Mean Switching Time in Foraging Colonies

We investigate a type of bistability where noise not only causes transitions between stable states, but also constructs the states themselves. We focus on the experimentally well-studied system of ants choosing between two food sources to illustrate the essential points, but the ideas are more general. The mean time for switching between the two bistable states of the system is calculated. This suggests a procedure for estimating, in a real system, the critical population size above which bistability ceases to occur.

Bistable systems, as their name implies, are systems which may reside in one of two states. Typically, these states are extremely stable, with rare transitions only occurring through the effects of noise (intrinsic or extrinsic) or external perturbations.
The standard theoretical approach used to investigate bistability is to begin by modeling the system deterministically through a set of differential or difference equations. In the deterministic system there can be no transitions between steady states without the addition of noise to move the system from one state to the other. The theoretical literature examining this effect is enormous, with very many variants of this basic scenario having been investigated in considerable detail [1]. The majority of these theoretical studies, however, fail to use the noise structure appropriate to the system under consideration, and reverse the logical sequence of model building: the deterministic equations, together with the correct form of the noise, should follow from a model constructed at the microscale (see for instance [2] or [3]).
A bottom-up approach such as this is required to understand unexpected and non-intuitive results such as those seen when a chemical system with a single stable fixed point is driven to bistability at low molecule numbers [4]. This recently discovered mechanism for bistability, so far only investigated in the context of biochemical reactions, is a result of the non-linear nature of the intrinsic noise [4-8]. In this type of bistability, the noise is responsible for the existence of the bistable states, as well as causing the transitions between them, in contrast to the conventional picture of bistability in which the role of the noise is simply to induce transitions. A distinguishing feature of these noise-induced bistable states is the presence of a critical system size, N_c, above which bistability does not occur. Evidence for the effect was first found numerically in a study of autocatalytic reactions in a cell [4]. Subsequent analytical studies proposed that the phenomenon is due to the multiplicative nature of the noise [5], and this was later confirmed by the estimation of the critical system size, N_c [6]. The theory has been applied to the study of an enzymatic cycle [7]. A recent and more rigorous analysis can be found in [8].
An experimentally testable biological system that exhibits bistability may be found in the foraging behavior of an ant colony. Here we consider a classic experiment, in which a colony of ants is exposed to two identical sources of food. The foraging ants, rather than distributing equally between the two sources, instead favor only one source [9,10]. After a period of time they appear to turn their attention to the other option, so that the majority of ants then start to collect their food from the other source [10,11]. The models initially used to explain this result were typically rather detailed [9]. However, Kirman [11] observed that analogous behavior also occurs in other systems involving populations, for instance queuing [12] and stock market trading [13]. This suggests a common mechanism depending only on shared properties of the different systems. It is generally agreed that the autocatalytic dynamics present in all of these systems is a key ingredient required for their bistability [11,14].
In this Letter we propose that the underlying mechanism for the bistability observed in the experiment described above is the same as that found in the biochemical reactions previously mentioned [4,5]. To study this, we use a simple model of autocatalytic recruitment and review the estimation of the critical system size, N_c, using stationary analysis, for our system. However, the expression obtained for N_c is not easy to test experimentally in our system. We therefore extend our analysis to study the time-dependent behavior of the system, by calculating the mean switching time between the two bistable states for different population sizes. This provides a means to measure N_c experimentally and can be used to test our hypothesized mechanism for bistability.
Our model consists of a colony of N ants collecting food from two identical sources, labeled 1 and 2. Ants which collect food from source 1 are denoted by X_1 and those which collect food from source 2 by X_2. The fraction of ants which choose source i is denoted by x_i, i = 1, 2. An ant collecting food from one source can be recruited by an ant collecting food from the other. The recruitment of ants is thus autocatalytic, in that the more ants collecting from any particular source, the higher the rate of recruitment to that source. An ant may also spontaneously choose to use the other source. We may summarize the model through the following reaction scheme:

    X_1 + X_2 → 2X_1   (rate r),
    X_1 + X_2 → 2X_2   (rate r),
    X_1 → X_2,   X_2 → X_1   (rate ε).   (1)

This model is already known in the context of chemical reactions [5], obtained as a simplification of the Togashi-Kaneko scheme [4]. Ant recruitment is dominant, so that 0 < ε ≪ r, and we assume r = 1 without loss of generality by noting that ε may always be rescaled, as discussed in the supplementary material (SM). We note that the number of ants is conserved, so that x_1 + x_2 = 1 for all time, and hence the system is fully described by a single independent variable.
To fully specify the model we now give the transition rate, T(a|b), from state b to state a. Invoking mass action [15], these are

    T(x_1 + 1/N, x_2 − 1/N | x_1, x_2) = r x_1 x_2 + ε x_2,
    T(x_1 − 1/N, x_2 + 1/N | x_1, x_2) = r x_1 x_2 + ε x_1.   (2)

We use the transition rates to write down the master equation for the probability density function (PDF), P(x_1, x_2, t):

    dP(x, t)/dt = Σ_{x′ ≠ x} [T(x|x′) P(x′, t) − T(x′|x) P(x, t)].   (3)

The scheme of reactions (1) was simulated using the Gillespie algorithm [16] and a typical time series for z = x_1 − x_2 is shown in Fig. 1. Regardless of the initial condition, the system settles into one of the steady states z ≈ ±1, indicating that the majority of ants favor one food source. After some time, the system then switches to the other state, z ≈ ∓1, where the majority of ants favor the other source.
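For readers who wish to reproduce this behavior, scheme (1) can be simulated in a few lines. The sketch below is a minimal Gillespie simulation of the two-source model; it is our own illustration rather than the authors' code, and the function name `gillespie` and the parameter values are arbitrary choices:

```python
import math
import random

def gillespie(N, eps, r=1.0, steps=1_000_000, seed=1):
    """Gillespie simulation of scheme (1): X1 + X2 -> 2X1 and
    X1 + X2 -> 2X2 at rate r, X1 <-> X2 at rate eps.
    Returns the fraction of time spent with |z| > 0.5 (z = x1 - x2)
    and the largest |z| visited."""
    rng = random.Random(seed)
    n1 = N // 2                        # start with the colony split evenly
    t = time_polarized = 0.0
    zmax = 0.0
    for _ in range(steps):
        n2 = N - n1
        a_plus = r * n1 * n2 / N + eps * n2    # n1 -> n1 + 1
        a_minus = r * n1 * n2 / N + eps * n1   # n1 -> n1 - 1
        a0 = a_plus + a_minus
        dt = -math.log(1.0 - rng.random()) / a0  # exponential waiting time
        z = (2 * n1 - N) / N
        if abs(z) > 0.5:
            time_polarized += dt
        zmax = max(zmax, abs(z))
        t += dt
        n1 += 1 if rng.random() * a0 < a_plus else -1
    return time_polarized / t, zmax
```

With N = 50 and ε = 0.01 (so N below N_c = 1/ε = 100, as discussed below), the colony spends most of its time strongly polarized towards one source, as in Fig. 1; for N well above N_c the same code produces a time series fluctuating around z = 0.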
Unlike other forms of bistability (for example, a Brownian particle in a double-well potential [1]), this type of bistability cannot be understood from the fixed points of the corresponding deterministic equations. Indeed, if we take the limit N → ∞ [15] to eliminate stochastic effects, we obtain the equation ż = −2εz (see SM). This equation has a unique stable fixed point at z* = 0, which is not seen in simulations of the full system. Thus the bistability observed in the stochastic system is not reflected in the deterministic equations.
To understand the origin of the bistability, we expand the master equation (3) in powers of the inverse population size, N⁻¹ (see SM). After rescaling time, τ = 2εt/N, we find that the system is approximated by the following stochastic differential equation (SDE) [17]:

    dz/dτ = −z + √[(N_c/N)(1 + 2ε − z²)] η(τ),   (4)

where N_c ≡ 1/ε and η(τ) is Gaussian white noise with zero mean and correlator ⟨η(τ)η(τ′)⟩ = δ(τ − τ′). As shown in [6], Eq. (4) underlies a broad class of systems featuring an autocatalytic network and a slow linear reaction. The variable z = x_1 − x_2 ranges over the interval [−1, 1], whose extrema correspond to all ants collecting food from a single source. Equation (4) for ε = 0 is equivalent to the Wright-Fisher model with mutation, under the change of variable x = (1 + z)/2 [18]. We see from Eq. (4) that the strength of the intrinsic system noise is proportional to √(1 + 2ε − z²). The noise therefore has maximum strength at the deterministic steady state z = z* = 0, pushing the system away from this point and towards z = ±√(1 + 2ε). Since z is defined in the interval [−1, 1], the system cannot cross these boundaries. Bistability originates from the dependence of the noise strength on the variable z. At z = ±1 the noise term is at a minimum, whilst the deterministic term −z attracts the system back towards z*. As the trajectory leaves z = ±1 the noise term regains strength and once again kicks the system towards one of the bistable steady states z = ±1. These combined effects are seen in the dynamics of Fig. 1.
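The qualitative content of Eq. (4) is easy to check numerically. Below is a simple Euler-Maruyama integration of the SDE; this is our own illustrative sketch, the clipping of z to [−1, 1] is a crude stand-in for the hard boundaries of the ant system, and all parameter values are arbitrary:

```python
import math
import random

def simulate_sde(N, eps, T=200.0, dt=1e-3, seed=2):
    """Euler-Maruyama integration of
    dz = -z dtau + sqrt((N_c/N)(1 + 2*eps - z^2)) dW, with N_c = 1/eps.
    z is clipped to [-1, 1] to mimic the hard boundaries."""
    rng = random.Random(seed)
    Nc = 1.0 / eps
    z = 0.0
    path = []
    for _ in range(int(T / dt)):
        amp = math.sqrt((Nc / N) * max(1.0 + 2.0 * eps - z * z, 0.0))
        z += -z * dt + amp * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        z = max(-1.0, min(1.0, z))     # hard walls at z = +/-1
        path.append(z)
    return path
```

For N < N_c the trajectory hugs z ≈ ±1 with occasional switches; for N > N_c it fluctuates around the deterministic fixed point z* = 0.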
A distinguishing characteristic of noise-induced bistable states is the existence of a critical system size, above which bistability ceases to occur. This should be contrasted with conventional bistability, in which the system moves between two deterministic fixed points due to the presence of noise, and where varying the noise strength merely affects the characteristic time spent in each bistable state. We may therefore predict that if the bistable states are noise-induced then there should exist a critical population size above which the bistable behavior ceases to occur.
As shown in previous studies [5-8,18], the transition between the regime which shows bistable behavior and the one that does not can be understood from the Fokker-Planck equation corresponding to Eq. (4). Taking ∂_t P = 0 and imposing zero-flux boundary conditions at z = ±1 [1], we obtain the stationary probability distribution

    P_s(z) = C_0 (1 + 2ε − z²)^(N/N_c − 1),   (5)

where C_0 is a normalisation constant, found by requiring that the integral of P_s(z) over the interval [−1, 1] is unity. The stationary distribution predicts the normalised long-time frequency histogram of z and is plotted against simulation data in Fig. 2 for different population sizes. For N < N_c, P_s(z) has a U-shape, diverging at z = ±√(1 + 2ε). Below the critical population size, the system therefore spends most of the time close to the bistable states. In contrast, for N > N_c, the steady state distribution, P_s(z), has an inverted U-shape, centred on the deterministic fixed point z = z* = 0. This latter regime is the only one that is captured by the linear noise approximation technique (the van Kampen expansion) [15,17,19].
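The shape change of Eq. (5) at N = N_c can be seen directly by tabulating the distribution, as in the short sketch below (our own illustration; the normalisation uses a simple trapezoidal rule):

```python
def stationary_pdf(N, eps, num=2001):
    """Tabulate P_s(z) = C_0 (1 + 2*eps - z^2)**(N/N_c - 1) on [-1, 1],
    with N_c = 1/eps, normalised by the trapezoidal rule."""
    zs = [-1.0 + 2.0 * k / (num - 1) for k in range(num)]
    raw = [(1.0 + 2.0 * eps - z * z) ** (N * eps - 1.0) for z in zs]
    h = zs[1] - zs[0]
    norm = sum(0.5 * (a + b) * h for a, b in zip(raw, raw[1:]))
    return zs, [p / norm for p in raw]
```

For N < N_c the exponent N/N_c − 1 is negative and the tabulated values rise towards z = ±1 (U-shape); for N > N_c the exponent is positive and the maximum sits at z = 0.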
To estimate the critical population size requires knowledge of the parameters r and ε (recall that we set r = 1 by rescaling ε). However, these reaction constants are difficult to measure experimentally. An alternative way to estimate N_c is provided by calculating the time taken for the system to move from one bistable state (z = −1, say) to the other (z = 1). This time is a stochastic variable whose mean (over many realizations) is denoted by T. Using Eq. (4) we may find this mean switching time [1] (see the SM for details). In the rescaled time variable, τ, this is given by

    T = [4N / ((1 + 2ε) N_c)] ₂F₁(1/2, 1 − N/N_c; 3/2; 1/(1 + 2ε)) ₂F₁(1/2, N/N_c; 3/2; 1/(1 + 2ε)),   (6)

where the function ₂F₁ is the hypergeometric function [20]. Equation (6) agrees with simulations of the reaction scheme (1) only for N in the neighborhood of N_c (Fig. 3) and for N > N_c (this latter result is not shown). Results are shown for different values of ε using different symbols. Note that for small N the simulation results merge, so that the mean time is independent of ε. Since time was rescaled by ε, however, an ε dependence is retained in the definition of τ. At small population sizes, as the simulation results become independent of ε, Eq. (6) breaks down and does not capture the system behavior. The failure of Eq. (6) in this regime is due to assumptions made in the derivation of Eq. (4), which is no longer representative of the system at small population sizes. Instead the terms neglected in the expansion of the master equation must be retained.
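Equation (6) is straightforward to evaluate with standard special-function libraries, and can be cross-checked by direct quadrature of the backward equation with the stated boundary conditions. The sketch below does both (our own illustration, using SciPy's `hyp2f1`; `lam` denotes N/N_c):

```python
import numpy as np
from scipy.special import hyp2f1

def switch_time_closed(lam, eps):
    """Mean switching time (in units of tau) from Eq. (6), lam = N/N_c."""
    x = 1.0 / (1.0 + 2.0 * eps)
    return (4.0 * lam / (1.0 + 2.0 * eps)
            * hyp2f1(0.5, 1.0 - lam, 1.5, x)
            * hyp2f1(0.5, lam, 1.5, x))

def switch_time_quadrature(lam, eps, n=20001):
    """The same quantity by direct quadrature of the backward equation:
    T = int_{-1}^{1} 2 lam (1+2eps-z^2)^(-lam)
            * int_{-1}^{z} (1+2eps-y^2)^(lam-1) dy dz."""
    z = np.linspace(-1.0, 1.0, n)
    f = (1.0 + 2.0 * eps - z**2) ** (lam - 1.0)
    inner = np.concatenate(([0.0],
                            np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(z))))
    outer = 2.0 * lam * (1.0 + 2.0 * eps - z**2) ** (-lam) * inner
    return float(np.sum(0.5 * (outer[1:] + outer[:-1]) * np.diff(z)))
```

The two routes agree to well under a percent for moderate parameter values, and the closed form grows monotonically with N/N_c, as expected for a switching time.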
Indeed, in our derivation, the noise strength in Eq. (4) diverges as N → 0, so that the time taken to move from one bistable state to the other shrinks to zero. In contrast, the simulated switching times do not go to zero as N → 0. However, we see from Fig. 4 that the range of N where our prediction holds differs for different values of ε. The agreement improves for smaller ε, suggesting that the limiting value of Eq. (6) as ε → 0 may capture the system dynamics at small population sizes.
Taking ε → 0 (see SM), Eq. (6) reduces to:

    T_0 = [2π(N/N_c) / (1 − 2N/N_c)] cot(πN/N_c).   (7)

Equation (7) agrees well with simulation data for small population sizes (Fig. 5). Since the mean switching time depends strongly on ε for larger population sizes (Fig. 3), we do not expect T_0 to accurately predict the simulation data for larger N. Indeed, as N → N_c, Eq. (7) diverges and thus does not capture the behavior of the system (see SM). Thus we have found two expressions for the mean time to move from one bistable state to the other. Equation (6) is valid for larger population sizes and captures the dependence of the system on ε in this regime. Equation (7) is valid for small population sizes and does not have any explicit dependence on ε. These equations may be used to estimate both ε and the critical population size, N_c. To facilitate this estimation we first linearize Eq. (7) for small N to obtain T_0 ≈ 4N/N_c + 2.
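The ε → 0 limit and its small-N linearization can be checked numerically, as in this short sketch (our own illustration; `lam` again denotes N/N_c):

```python
import math

def switch_time_eps0(lam):
    """Eq. (7): T_0 = 2*pi*lam*cot(pi*lam) / (1 - 2*lam), for lam = N/N_c
    in (0, 1), evaluated away from the removable point lam = 1/2."""
    return 2.0 * math.pi * lam / math.tan(math.pi * lam) / (1.0 - 2.0 * lam)
```

As N/N_c → 0 this approaches 2 + 4N/N_c, the linearization used in the fitting procedure; as N/N_c → 1 it diverges. The apparent singularity at N/N_c = 1/2 is removable, the limit there being π²/2.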
Since T_0 is measured in units of τ = 2εt/N, and ε is unknown, we may plot experimental results for t/N against N and observe that we would expect to obtain a straight line for small values of N. The y-intercept is then given by ε⁻¹, whilst the gradient will be 2/(N_c ε). The value obtained for ε may then be checked by taking larger population sizes and using Eq. (6). Note, however, that the value of ε found is the ratio of the two reaction constants, ε and r, since ε has been rescaled in order to take r = 1.
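To illustrate the estimation procedure, the sketch below generates synthetic "measurements" of t/N from Eq. (7) (no real data; the values ε = 0.005 and N_c = 200 are invented for the demonstration) and recovers ε and N_c from the intercept and gradient of a least-squares line:

```python
import math

def T0(lam):
    # Eq. (7): mean switching time in units of tau = 2*eps*t/N, lam = N/N_c
    return 2.0 * math.pi * lam / math.tan(math.pi * lam) / (1.0 - 2.0 * lam)

true_eps, true_Nc = 0.005, 200.0           # invented "experimental" parameters
Ns = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]  # small populations, N << N_c
ys = [T0(N / true_Nc) / (2.0 * true_eps) for N in Ns]  # synthetic t/N values

# least-squares fit of the line y = a + b*N
n, sx, sy = len(Ns), sum(Ns), sum(ys)
sxx = sum(N * N for N in Ns)
sxy = sum(N * y for N, y in zip(Ns, ys))
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = (sy - b * sx) / n

eps_est = 1.0 / a               # y-intercept is 1/eps
Nc_est = 2.0 / (b * eps_est)    # gradient is 2/(N_c * eps)
```

With t/N ≈ 1/ε + 2N/(N_c ε) for small N, the intercept and gradient recover ε and N_c to within a few percent here; the curvature of Eq. (7) sets the systematic error of the linear fit.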
In this Letter we have presented a way to experimentally determine the critical population size in a system with noise-induced bistable states. Using time-dependent analysis, we have investigated the mean time taken for the system to move between the two bistable states and found that two regimes exist. For small population sizes, the mean switching time is independent of ε and Eq. (7) is representative of the system behavior. Conversely, for large population sizes the value of ε becomes important and we must use Eq. (6). The mean switching time is an experimentally measurable quantity that may be used to confirm or reject the hypothesis that noise-induced bistable states may explain the empirical results seen in the experiments on ant foraging.
The analysis may be further extended by considering the full distribution of times to move between the bistable states, rather than using only the mean time. In this way it would be possible to assess any skewness of the distribution and determine how representative the mean time is of the full distribution.
Our results do not only apply to the model described here, as Eq. (4) is the reduced one-dimensional equation for many stochastic systems, such as the Togashi-Kaneko model [6]. We believe that the mechanism for noise-induced bistability, in which the changing noise strength at different system states leads to substantially different behavior from the deterministic approximation, will be applicable to a wide variety of systems.
TB acknowledges partial financial support from the EPSRC (UK) and LD was supported under EPSRC grant EP/H02171X.

The derivation of the equation for the z variable
The model is defined by the two transition rates:

    T(x_1 + 1/N, x_2 − 1/N | x_1, x_2) = r x_1 x_2 + ε x_2,
    T(x_1 − 1/N, x_2 + 1/N | x_1, x_2) = r x_1 x_2 + ε x_1.   (8)

We rewrite the master equation using the step operators, E_i^±, which represent the creation or destruction of a molecule of species X_i (i = 1, 2):

    dP(x_1, x_2, t)/dt = (E_1^− E_2^+ − 1)[T(x_1 + 1/N, x_2 − 1/N | x_1, x_2) P(x_1, x_2, t)]
                       + (E_1^+ E_2^− − 1)[T(x_1 − 1/N, x_2 + 1/N | x_1, x_2) P(x_1, x_2, t)].   (9)

Taylor expanding in 1/N, the inverse of the population size,

    E_i^± f(x_i) = f(x_i) ± (1/N) ∂f/∂x_i + (1/(2N²)) ∂²f/∂x_i² + O(1/N³),   (10)

where f(x_i) is a general function of the fraction of the i-th species, x_i. The master equation (9) can be approximated using Eq. (10) to give

    ∂P/∂t = (1/N)(∂/∂x_1 − ∂/∂x_2)[(T^− − T^+) P] + (1/(2N²))(∂/∂x_1 − ∂/∂x_2)²[(T^+ + T^−) P],   (11)

neglecting terms of O(1/N³), where T^± denote the rates in (8) for the transitions x_1 → x_1 ± 1/N.
Rescaling time by t/N → t and inserting the expressions of the transition rates (8) gives the Fokker-Planck equation

    ∂P/∂t = −Σ_i ∂/∂x_i [A_i P] + (1/(2N)) Σ_{i,j} ∂²/(∂x_i ∂x_j) [B_ij P],   (12)

where A_1 = −A_2 = ε(x_2 − x_1) and B_ij = (2r x_1 x_2 + ε(x_1 + x_2))(−1)^(i+j). This is equivalent to the following system of SDEs [1], in which the noises have zero mean:

    ẋ_i = A_i + (1/√N) ξ_i(t),   ⟨ξ_i(t) ξ_j(t′)⟩ = B_ij δ(t − t′).   (13)

We make the transformation

    ξ_i(t) = Σ_j g_ij η_j(t),   where the matrix G = (g_ij) satisfies B = G Gᵀ.   (14)

Hence the new noises are delta-correlated, that is, ⟨η_i(t) η_j(t′)⟩ = δ_ij δ(t − t′). This can be proved using the expression of the correlator for ξ_i and the fact that B = G Gᵀ. System (13) then becomes:

    ẋ_i = A_i + (1/√N) Σ_j g_ij η_j(t).   (15)

We now introduce new variables, w = x_1 + x_2 and z = x_1 − x_2, which satisfy equations obtained by summing and subtracting the equations for x_1 and x_2:

    ẇ = 0,
    ż = −2εz + √[2(2r x_1 x_2 + ε(x_1 + x_2))/N] (η_1 − η_2).   (16)

The z equation can be simplified as follows. First, we use the sum rule for Gaussian variables, so that η_1 − η_2 = √2 η, where η is normalised Gaussian white noise [1]. Then, we rescale time by 2εt → t. Note that the coefficient which multiplies the noise scales with a square root law, as expected [1]. The overall time scaling is given by τ = 2εt/N and we obtain

    z′ = −z + √[2(2r x_1 x_2 + ε(x_1 + x_2))/(εN)] η(τ),

where the prime sign indicates the time derivative with respect to τ. Without loss of generality we set r = 1, since we may rescale ε to absorb r. Since the transition rates (8) do not alter the total number of ants, N and w are conserved quantities, with w = 1. Hence, using 4x_1 x_2 = w² − z² = 1 − z², we obtain

    z′ = −z + √[(N_c/N)(1 + 2ε − z²)] η(τ),   (17)

where N_c = 1/ε.
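The decomposition B = G Gᵀ used above is easy to verify for this model. Since B has rank one, a valid (non-unique) choice of G uses a single noise source; the sketch below checks this, together with the cancellations that conserve w (our own illustration):

```python
import math
import numpy as np

def drift_and_diffusion(x1, x2, r=1.0, eps=0.01):
    """A_1 = -A_2 = eps*(x2 - x1);
    B_ij = (2 r x1 x2 + eps (x1 + x2)) * (-1)**(i + j)."""
    A = np.array([eps * (x2 - x1), eps * (x1 - x2)])
    b = 2.0 * r * x1 * x2 + eps * (x1 + x2)
    B = b * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return A, B

def noise_matrix(x1, x2, r=1.0, eps=0.01):
    """One choice of G with B = G G^T; G is not unique because B is rank one."""
    g = math.sqrt(2.0 * r * x1 * x2 + eps * (x1 + x2))
    return np.array([[g, 0.0], [-g, 0.0]])
```

Because ξ_1 = −ξ_2 for this choice, the noise cancels in the equation for w = x_1 + x_2, consistent with the conservation of the total number of ants.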

The mean switching time
We wish to find the mean time for the system to leave z = −1 and reach z = 1. To derive this quantity we consider the mean time, T, for a system starting at z to leave the interval [−1, 1]. This is derived from G(z, t), the probability that a system beginning at z has not left the interval [−1, 1] by time t. Then G(z, t) satisfies the backward Fokker-Planck equation corresponding to Eq. (17) [1],

    ∂G/∂t = −z ∂G/∂z + [(1 + 2ε − z²)/(2λ)] ∂²G/∂z²,   (18)

where λ = N/N_c, with a reflecting boundary condition at z = −1 and an absorbing boundary condition at z = 1. Now the probability density function for the system beginning at z and reaching the boundary at z = 1 at time t (where it is thus removed from the interval) is given by −∂_t G [1]. Hence the mean switching time is given by

    T(z) = ∫₀^∞ t (−∂_t G(z, t)) dt = ∫₀^∞ G(z, t) dt,   (19)

where the second equality follows on integrating by parts, assuming G(z, t) is well behaved as t → ∞. Integrating Eq. (18) over t, we obtain

    −z dT/dz + [(1 + 2ε − z²)/(2λ)] d²T/dz² = −1,   (20)

since the system must start in the interval [−1, 1], so that ∫₀^∞ ∂_t G(z, t) dt = −G(z, 0) = −1.
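The boundary-value problem above can also be checked dynamically: simulate Eq. (17) from z = −1 with a reflecting wall at −1 and absorption at +1, and average the first-passage times. The sketch below does this and compares with the closed-form result quoted as Eq. (6) in the main text (our own illustration; the tolerance is generous to allow for discretization and sampling error):

```python
import math
import random
from scipy.special import hyp2f1

def mc_switch_time(lam, eps, trials=400, dt=1e-3, seed=7):
    """Monte Carlo mean first-passage time of
    dz = -z dtau + sqrt((1/lam)(1 + 2*eps - z^2)) dW
    from z = -1 (reflecting) to z = +1 (absorbing); lam = N/N_c."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        z, t = -1.0, 0.0
        while z < 1.0:
            amp = math.sqrt(max(1.0 + 2.0 * eps - z * z, 0.0) / lam)
            z += -z * dt + amp * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            if z < -1.0:
                z = -2.0 - z          # reflect off the wall at z = -1
            t += dt
        total += t
    return total / trials

def closed_form(lam, eps):
    """Eq. (6) of the main text, with x = 1/(1 + 2*eps)."""
    x = 1.0 / (1.0 + 2.0 * eps)
    return (4.0 * lam / (1.0 + 2.0 * eps)
            * hyp2f1(0.5, 1.0 - lam, 1.5, x) * hyp2f1(0.5, lam, 1.5, x))
```

For moderate parameter values the Monte Carlo estimate and the closed form agree to within a few percent, well inside the statistical error of a few hundred trials.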
We may solve Eq. (20) by first writing it as

    d/dz [(1 + 2ε − z²)^λ dT/dz] = −2λ (1 + 2ε − z²)^(λ−1).   (22)

To integrate the right hand side we need the following integral:

    ∫ (1 + 2ε − z²)^μ dz = (1 + 2ε)^μ z ₂F₁(−μ, 1/2; 3/2; z²/(1 + 2ε)) + constant,   (23)

where ₂F₁ is the hypergeometric function [20]. Equality (23) can be seen by expanding (1 + 2ε − z²)^μ as the binomial series, integrating term-by-term and using the series definition for the hypergeometric function. Integrating both sides of Eq. (22):

    (1 + 2ε − z²)^λ dT/dz = −2λ (1 + 2ε)^(λ−1) z ₂F₁(1/2, 1 − λ; 3/2; z²/(1 + 2ε)) + C_1,   (24)

where C_1 is an integration constant, fixed by the reflecting boundary condition dT/dz = 0 at z = −1. Integrating again, imposing the absorbing condition T(1) = 0 and noting that the odd part of the integrand gives no contribution over [−1, 1], we obtain the mean switching time

    T = T(−1) = [4λ/(1 + 2ε)] ₂F₁(1/2, 1 − λ; 3/2; 1/(1 + 2ε)) ₂F₁(1/2, λ; 3/2; 1/(1 + 2ε)),

which is Eq. (6) of the main text with λ = N/N_c.
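The integral identity (23), which drives the whole calculation, can be verified numerically (our own sketch, using SciPy):

```python
from scipy.integrate import quad
from scipy.special import hyp2f1

def lhs(zmax, mu, a):
    """int_0^zmax (a - y^2)**mu dy by adaptive quadrature."""
    val, _ = quad(lambda y: (a - y * y) ** mu, 0.0, zmax)
    return val

def rhs(zmax, mu, a):
    """a**mu * z * 2F1(-mu, 1/2; 3/2; z^2/a), the antiderivative in (23)."""
    return a ** mu * zmax * hyp2f1(-mu, 0.5, 1.5, zmax * zmax / a)
```

Here a plays the role of 1 + 2ε and mu the exponent; the two sides agree to quadrature accuracy for both positive and negative mu.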