Escape from an attractor generated by recurrent exit

Kramers' theory of activation over a potential barrier consists in computing the mean exit time of a randomly perturbed dynamical system through the boundary of a basin of attraction. Here we report that for some systems, crossing the boundary is not enough: stochastic trajectories return inside the basin with high probability several times before escaping far away. This behavior is due to a shallow potential near the boundary. We compute the mean and the distribution of escape times and show how this result explains the broad distribution of interburst durations in neuronal networks.

In Kramers' theory [1][2][3][4], computing the escape time over a potential barrier amounts to computing the mean first passage time (MFPT) of a dynamical system perturbed by a small noise to the boundary of a basin of attraction. The MFPT measures stability and provides insight into the backward binding rate in chemistry [5,6], loss-of-lock for phase controllers in communication theory [7], and the escape of receptors from the post-synaptic density at neuronal synapses; it is also used to evaluate derivatives in financial markets [8]. The full distribution of exit times can be used to characterize both short and intermediate time asymptotics relevant to polymer physics [9], to accelerate chemical reaction simulations [10], or to better characterize the search for a small target in a complex environment [11,12]. In the limit of small noise, a trajectory escapes a basin of attraction with probability one [13], but the escape time is exponentially long, depending on the topology of the noiseless dynamics [14,15] and its behavior at the boundary. In addition, the distribution of exit points peaks at a distance O(√σ) from a saddle-point, where σ is the noise amplitude [2,7,16]. Interestingly, when a focus attractor is located near the boundary of the basin of attraction, the escape time deviates from an exponential distribution because trajectories oscillate inside the attractor before escape [17][18][19][20][21]. In these previous examples, the escape ends the first time a trajectory crosses the separatrix that delimits the basin of attraction. Recurrent returns inside a basin of attraction can be quantified by the Green's function of the inner domain, used in the additive properties of the MFPT [22]. In that specific case, where the escape time is defined by the first crossing of the boundary of the basin of attraction and of a second separatrix, the results show a factor of two between the escape time and the exit time from the basin of attraction.
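As a point of comparison for the classical setting, the MFPT in Kramers' theory can be estimated by direct simulation. The sketch below is a one-dimensional double-well illustration, not the two-dimensional system studied in this letter; the potential, noise level, and absorbing threshold are illustrative choices, and the Euler-Maruyama estimate is compared to the Kramers prediction:

```python
import math
import random

def mfpt_double_well(eps=0.15, n_traj=300, dt=0.01, seed=1):
    """Euler-Maruyama estimate of the mean first passage time from the
    left well of U(x) = x**4/4 - x**2/2 (minimum at x = -1, barrier of
    height 0.25 at x = 0) for dx = -U'(x) dt + sqrt(2*eps) dW."""
    rng = random.Random(seed)
    times = []
    for _ in range(n_traj):
        x, t = -1.0, 0.0
        while x < 0.5:  # absorb once safely past the saddle at x = 0
            x += (x - x**3) * dt + math.sqrt(2.0 * eps * dt) * rng.gauss(0.0, 1.0)
            t += dt
        times.append(t)
    return sum(times) / len(times)

# Kramers' prediction: tau ~ 2*pi / sqrt(U''(-1)*|U''(0)|) * exp(dU/eps)
tau_kramers = 2.0 * math.pi / math.sqrt(2.0) * math.exp(0.25 / 0.15)
tau_sim = mfpt_double_well()
```

With these (hypothetical) values, both estimates agree within statistical error, illustrating the exponential dependence of the escape time on the barrier height over noise ratio.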
In dimension one, a recurrent return can be quantified using a relaxation time computed from the survival probability when it does not converge to zero in the long-time regime [23]. We show here that for some shallow two-dimensional dynamical systems, trajectories can first exit the basin of attraction, then make excursions outside before coming back inside the domain, a behavior that occurs several times before they eventually escape far away. This situation is peculiar and specific to dimensions greater than one, and these recurrent entries need to be taken into account when computing the final escape time. This letter reports such a phenomenon. We present formulas for the mean and the distribution of escape times, and we show that these recurrent re-entries inside the basin of attraction can increase the escape time by a factor between two and three. Finally, we apply these results to explain the origin of long interburst durations found in neuronal network models [24].
Recurrent escape patterns. We start with a generic two-dimensional stochastic system (1), where α ∈ ]0, 1], γ ∈ ]0, α[, ω̇ is a Gaussian white noise and σ its amplitude. The deterministic part B of the field defines an escape cone C∞ (yellow area in fig. 1, situated between the x-nullcline, red, and the h-nullcline, purple). Before reaching C∞, the noise pushes the trajectories back and forth across the boundary of the basin of attraction.
To conclude this part, we summarize the escape dynamics:
1. The distribution of exit points peaks at a distance O(√σ) from the saddle-point (generically satisfied [16]).
2. The shallow field near the separatrix allows the trajectories to reenter with high probability.
3. The peaks of the successive exit-point distributions drift towards the saddle-point S (fig. 1C).
4. When the trajectories enter the escape cone C∞ (yellow surface in fig. 1A-B), where the field increases, they eventually escape to infinity.
Finally, this escape pattern cannot occur in dimension one, since conditions 1 and 3 cannot be satisfied.
Characterizing the escape time. We compute here the total escape time. For that goal, we decompose it into the time to reach the separatrix Γ for the first time plus the time spent going back and forth across Γ before the final escape. Using Bayes' law and conditioning on the number of returns (RT), the mean escape time can be written as

τ_esc = Σ_{k=0}^∞ P_RT(k) τ|k,   (2)

where τ|k (resp. P_RT(k)) is the mean time (resp. probability) to return k times inside the basin of attraction.
To estimate the escape probability p̃ for a trajectory that has crossed Γ to escape to infinity, we ran N = 500 trajectories starting from A and lasting T = 300 s. We first counted the proportion of trajectories re-entering the basin of attraction at least once and obtained 88%. We then iterated this process and counted the proportion of trajectories re-entering the basin of attraction one more time after each RT. We found that this proportion was stable, equal to 88%, leading to p̃ = 0.12. We applied this process for values of the noise amplitude σ ∈ [0.21, 1.05] and found that p̃ did not depend on σ. After T = 300 s, all trajectories had escaped to infinity (for all values of σ), thus choosing a higher value for T would not change the value of p̃. This escape phenomenon can be interpreted as follows: a trajectory has escaped when it reaches a distance far away from the separatrix. To better characterize such a distance outside the basin of attraction, we generated empirical trajectories that will return (have not yet escaped) and estimated their convex hull C (fig. 1D, red, 500 runs). Formally, this is equivalent to looking at trajectories starting at A conditioned on a return to the basin of attraction, thus defining a sort of Brownian bridge. This procedure leads to a bounded domain: any point inside C has a high probability of re-entering the basin of attraction, while points further away will escape to infinity. Due to the strong Markov property, each RT can be considered independent of the previous ones, thus the probability to escape after exactly k RT is given by

P_RT(k) = p̃(1 − p̃)^k,   (3)

and thus the mean escape time is

τ_esc = τ_0 + (1/p̃)(τ_ext + τ_int),   (4)

where τ_ext (resp. τ_int) is the mean time spent outside (resp. inside) the basin of attraction of A for each RT (fig. 2A). When the escape probability p̃ tends to zero, the escape time tends to infinity, corresponding to trajectories that would be trapped in C. In our case, the mean escape time is τ_esc ≈ τ_0 + 8.33(τ_ext + τ_int).
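The return statistics can be reproduced with a minimal sketch: under the strong Markov property, each boundary crossing escapes with the same probability p̃, so the number of RT is geometric. The values p̃ = 0.12, τ_0 ≈ 5.1 s and τ_ext + τ_int ≈ 1 s are those quoted in the text; the simulation itself is purely illustrative:

```python
import random

def sample_rt_counts(p_esc=0.12, n=100_000, seed=2):
    """Number of re-entries (RT) before final escape, assuming each
    boundary crossing escapes independently with probability p_esc."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n):
        k = 0
        while rng.random() > p_esc:  # the trajectory re-enters the basin
            k += 1
        counts.append(k)
    return counts

counts = sample_rt_counts()
mean_rt = sum(counts) / len(counts)   # geometric mean: (1 - p)/p ~ 7.3

# Mean escape time with the values quoted in the text (in seconds)
tau0, tau_rt = 5.1, 1.0               # tau_0 and tau_ext + tau_int
tau_esc = tau0 + tau_rt / 0.12        # ~ 5.1 + 8.33 ~ 13.4 s
```

The simulated mean number of returns is close to the roughly 8 RT reported in the simulations of the full system.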
With the present parameters, τ_0 ≈ 5.1 s and τ_ext + τ_int ≈ 1 s, showing that the escape time is increased by a factor 2.6. Interestingly, the noise amplitude does not influence the number of RT before escape (fig. 2B). For the parameter value γ = 0.6, we found that a trajectory performs 8 RT on average (fig. 2B, inset). These results indicate that the noise amplitude does not directly influence the probability to escape to infinity. We now determine the distribution of escape times

P(τ_esc < t) = Σ_{k=0}^∞ P_RT(k) P(τ_k < t|k),   (5)

where P(τ_k < t|k) is the conditional probability distribution to escape after k RT. Because the RT are i.i.d., this probability is the k-th convolution of the distribution of times of a single RT, f_1(t), with the distribution of escape times without RT, f_0(t), where f(t)^{*k} = f(t) * f(t) * ... * f(t), k times. Thus the pdf of exit times is given by

f_esc(t) = Σ_{k=0}^∞ p̃(1 − p̃)^k (f_0 * f_1^{*k})(t).   (6)

To compare this formula to the results of our numerical simulations, we approximated the distributions f_0 and f_1 by the fitted forms (7), and we could compare it to the corresponding parts of the distribution of escape times obtained from stochastic simulations (fig. 2D).
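The geometric mixture of convolutions for the pdf of exit times can be evaluated numerically on a discrete time grid. In the sketch below, f0 and f1 are placeholder exponential densities (the fitted forms of eq. (7) are not reproduced here), chosen with means comparable to τ_0 and τ_ext + τ_int; only the mixture-of-convolutions structure is the point:

```python
import numpy as np

p_esc = 0.12
dt = 0.01
t = np.arange(0.0, 60.0, dt)
# Placeholder densities: f0 = time to first crossing of the separatrix,
# f1 = duration of one return excursion (hypothetical shapes)
f0 = 0.25 * np.exp(-0.25 * t)   # mean ~ 4 s
f1 = np.exp(-t)                 # mean ~ 1 s
f0 /= f0.sum() * dt             # normalize on the discrete grid
f1 /= f1.sum() * dt

def escape_pdf(f0, f1, p, dt, kmax=60):
    """Geometric mixture of convolutions: sum_k p (1-p)^k (f0 * f1^{*k})(t)."""
    fesc = p * f0.copy()
    conv = f0.copy()
    for k in range(1, kmax + 1):
        conv = np.convolve(conv, f1)[: len(f0)] * dt  # one more RT
        fesc += p * (1.0 - p) ** k * conv
    return fesc

fesc = escape_pdf(f0, f1, p_esc, dt)
norm = fesc.sum() * dt            # ~ 1 up to grid/tail truncation
mean_esc = (t * fesc).sum() * dt  # ~ mean(f0) + (1-p)/p * mean(f1)
```

The resulting mean matches the geometric-sum prediction, and the truncation in k and in t only removes the exponentially small tail of the mixture.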
Interburst durations in a firing excitatory neuronal network. Bursts and interbursts are fundamental network events occurring when network activity is dominated by excitation. Network burst generation could rely on specific spiking frequencies in connected neurons [25], despite a high variability in interspike intervals [26]. Neuronal population bursts separated by long interbursts have been modeled using a two-state synaptic depression [27], or by using the refractory period induced by afterhyperpolarization (AHP), a mechanism leading to a long voltage hyperpolarization transient and generated by various potassium channels [28]. Here we show that the recurrent escape mechanism described above can explain the origin of long interburst intervals without the need for any other mechanism. However, we note that this mechanism does not have to be exclusive, and that long interburst intervals could also be explained in some cases by a combination of mechanisms, such as the recurrent escape pattern presented here and AHP. We start from the depression-facilitation short-term synaptic plasticity mean-field model of network neuronal bursting [29][30][31], which consists of three equations (9) for the mean voltage h, the depression y, and the facilitation x. The depression mechanism describes the depletion of the vesicular pool necessary for neurotransmission following successive action potentials, while the facilitation mechanism corresponds to a transient increase of the release probability mediated by a local calcium accumulation at synapses.
τ ḣ = −h + Jxy h_+ + √τ σ ω̇,
ẋ = (X − x)/t_f + K(1 − x) h_+,   (9)
ẏ = (1 − y)/t_r − Lxy h_+,

where h_+ = max(h, 0) is a linear threshold function of the synaptic current that gives the average population firing rate [29,31,32], and τ is the voltage time scale. The mean number of connections (synapses) per neuron is accounted for by the parameter J, and the term Jxy represents the combined effect of short-term synaptic plasticity (facilitation and depression) on the network activity. The parameters K and L describe how the firing rate is transformed into the molecular events that change the duration (depression) and probability (facilitation) of vesicular release.
The time scales t_f and t_r define the recovery of an averaged synapse from the network activity. Finally, ω̇ is an additive Gaussian noise and σ its amplitude; this additive noise term represents the fluctuations of the mean voltage generated by the average of independent vesicular release events and/or closings and openings of voltage-gated channels. This system has three critical points: one attractor and two saddles. Interestingly, near the attractor A = (0, X, 1), the dynamics is anisotropic (|λ_1| = 12.6 ≫ |λ_2| = 1.11, |λ_3| = 0.34, with the parameters from Table I), and thus we project the system on the two-dimensional plane y = Cst by imposing

ẏ = (1 − y)/t_r − Lxy h_+ = 0 ⟺ y = 1/(1 + t_r Lx h_+),   (10)

leading to the simplified system (11), obtained by substituting this value of y. The deterministic component of this system has three critical points: two attractors and one saddle-point.
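The quasi-steady-state reduction of the depression variable can be checked directly: the value of y given by eq. (10) zeroes the depression equation for any (x, h). A quick numerical sanity check, with hypothetical parameter values (Table I is not reproduced here):

```python
def dy_dt(y, x, h, t_r, L):
    """Depression dynamics: y' = (1 - y)/t_r - L*x*y*h_plus."""
    h_plus = max(h, 0.0)
    return (1.0 - y) / t_r - L * x * y * h_plus

def y_qss(x, h, t_r, L):
    """Quasi-steady-state depression, eq. (10): y = 1/(1 + t_r*L*x*h_plus)."""
    h_plus = max(h, 0.0)
    return 1.0 / (1.0 + t_r * L * x * h_plus)

# Hypothetical parameter values, for illustration only
t_r, L = 0.8, 0.04
for x, h in [(0.28, 8.07), (0.53, 28.8), (0.4, -2.0)]:
    assert abs(dy_dt(y_qss(x, h, t_r, L), x, h, t_r, L)) < 1e-12
```

Note that for h < 0 the threshold h_+ vanishes, so the quasi-steady state reduces to y = 1, consistent with the attractor A = (0, X, 1).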
Attractor A_0. A first equilibrium point is given by h = 0 and x = X. Evaluating the Jacobian at this point with our parameters (Table I) shows that it is an attractor.
Saddle-point S_1. The second critical point is S_1 (h_1 ≈ 8.07; x_1 ≈ 0.28). Its eigenvalues are λ_1 ≈ −5.73 and λ_2 ≈ 1.43: it is a saddle-point.
Attractor A_2. The third critical point is A_2 (h_2 ≈ 28.8; x_2 ≈ 0.53). Its eigenvalues are λ_1 ≈ −11.9 and λ_2 ≈ −1.33: it is another attractor. The two attractors are separated by the 1D stable manifold of the saddle-point S_1 (fig. 3A, solid black curve). The phase-space of system (11), restricted to the region {x ≤ 0.5 and h ≤ 30}, has the same topological properties as system (1): one attractor and one saddle-point, with the separatrix delimiting the basin of attraction given by the stable manifold of S_1 (fig. 3A). The escaping trajectories exit and re-enter the basin of attraction several times before eventually escaping (fig. 3A, orange). Thus, we can now understand that the interburst intervals correspond to the exit times of trajectories from the basin of attraction. Using formula (7) to fit the distribution of exit times, we obtain p̃ ≈ 0.13 (fig. 3B) and

f_0(t) = 0.23 exp(−0.25t) [1 + erf((t − 2.45)/0.43)],   (13)

together with a similar fit (14) for f_1. Finally, as for the generic system (1), the RT number before escape does not depend on the noise amplitude (fig. 3C): trajectories make on average 8 RT before escape (inset). To determine the mean escape time, we use formula (4) and obtain τ_esc ≈ τ_0 + 7.7(τ_ext + τ_int), where τ_0 ≈ 4.35 s and τ_ext + τ_int ≈ 0.7 s (fig. 3D), thus leading to a factor 2.2 increase in the escape time. At this stage, we conclude that long interburst durations generated by excitatory neuronal networks [33] can be explained by the recurrent escape mechanism introduced here.
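The fitted density (13) can be checked numerically: up to rounding of the fitted coefficients, it is an exponentially modified Gaussian and integrates to approximately 1 on the positive time axis. A short midpoint-rule verification:

```python
import math

def f0(t):
    """Fitted pdf of the escape time without re-entry, eq. (13)."""
    return 0.23 * math.exp(-0.25 * t) * (1.0 + math.erf((t - 2.45) / 0.43))

# Midpoint-rule integration on [0, 60]; the tail beyond 60 s is negligible
dt = 0.001
ts = [(i + 0.5) * dt for i in range(int(60 / dt))]
total = sum(f0(u) for u in ts) * dt     # ~ 1: (13) is normalized
mean = sum(u * f0(u) for u in ts) * dt  # mean of the fitted density
```

The computed mean matches the closed-form mean of an exponentially modified Gaussian, μ + 1/λ with λ = 0.25 and μ = 2.45 − λ(0.43/√2)² ≈ 2.43.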
Concluding remarks: We presented an escape mechanism for which reaching the boundary of the deterministic basin of attraction under the influence of noise is not sufficient to escape. After crossing the separatrix, the noise tends to bring trajectories back inside the basin of attraction until they reach a region (the escape cone-like domain C∞), narrow near S, that widens with the distance. The size of the characteristic distance from S (boundary layer) after which trajectories escape is of order λ_+ σ [34]. We derived formulas for the mean escape time and the distribution of escape times, taking into account the excursions inside and outside of the basin of attraction before the final escape.