Dynamic Flux Tubes Form Reservoirs of Stability in Neuronal Circuits

Michael Monteforte* and Fred Wolf Max Planck Institute for Dynamics and Self-Organization (MPIDS), 37077 Goettingen, Germany Bernstein Focus Neurotechnology Goettingen, 37077 Goettingen, Germany Bernstein Center for Computational Neuroscience Goettingen, 37077 Goettingen, Germany Faculty of Physics, University of Goettingen, 37077 Goettingen, Germany (Received 23 February 2011; revised manuscript received 31 May 2012; published 1 November 2012)


I. INTRODUCTION
The dynamics of neuronal networks is fundamental to the brain's ability to perform cognitive functions such as sensory processing, working memory, and decision making. In the cerebral cortex, the part of the mammalian brain that expanded the most during the evolution of humans [1] and that is critically and specifically involved in all of these functions [2], dynamically generated activity patterns are, in general, irregular and spatiotemporally complex [3-8]. The prevailing theoretical explanation for this irregular activity is that it originates from strong input fluctuations that arise from the dynamical balance of excitation and inhibition, known as the balanced state [9-11]. It is a long-standing theme of dynamic brain theories that such complex activity patterns might serve as a rich encoding and processing space for neural computations [12-17]. In particular, if the precise timing of every nerve impulse is used for information encoding, the processing capacity of neuronal circuits might be remarkably rich.
At a fundamental level, brains are network dynamical systems, evolved to process information. It has long been conjectured that dynamical systems operate optimally as information-processing devices near a transition between order and chaos, the so-called edge of chaos. This edge-of-chaos hypothesis was originally raised in studies of cellular automata [18,19]. The dynamics at the edge of chaos is expected to support a relatively rich set of distinct state sequences and, at the same time, to preserve information about initial conditions over intermediate time scales, both of which appear to be important prerequisites for complex computations. In contrast, deep in a chaotic regime, information is rapidly destroyed by the dynamical entropy production, while deep in a stable regime, state sequences are often stereotyped and impoverished. Even in cellular automata, however, the edge of chaos has been found to be neither necessary nor sufficient for optimal performance [20]. In the context of neuronal-circuit dynamics, the notion of computations near the edge of chaos has been emphasized in studies of so-called reservoir computing [21-24]. In such applications, time-varying "sensory" inputs are represented by the instantaneous state of the entire network, which can be "read out" by subsequent processing stages. Many such applications use models of spiking neural networks [22,25-30]. The basic state-space features and information capacity of such networks, however, have been analyzed only for firing-rate models [31,32]. For spiking neural networks, it has so far not been possible to obtain even the most elementary characteristics, such as the repertoire of state sequences or the way in which the size of this repertoire depends on fundamental variables such as the neuron number or synaptic connectivity.
With regard to biological neural networks, it is not obvious that networks of spiking neurons can operate in an edge-of-chaos regime. Recent studies, both experimental [33] and theoretical [34], suggest that chaos in typical irregular firing states can be very intense. In particular, London and co-workers have argued, from the responses to artificially induced single spikes in the rodent barrel cortex, that even a single additional spike in a cortical firing sequence can induce substantial decorrelation of the network's microstate [33]. Studies of the state-space structure and the dynamics of spiking neural networks have uncovered a plethora of exotic behaviors that raise doubts as to whether notions from the theory of smooth dynamical systems, such as the edge of chaos, are generally appropriate to describe such behavior. For instance, it has been found that networks of spiking neurons can robustly exhibit unstable limit-cycle attractors [35], a seeming oxymoron in classical dynamical-systems theory. Other exotic behaviors that arise from the basic structural features of spiking neural networks are time-varying phase-space dimensionality [36], topological speed limits [37], and topologically induced chaotic transients [38]. These and other recent observations indicate that understanding information processing by neuronal-circuit dynamics requires a specific characterization of the collective dynamics of large neural networks.
Here, we analyze the state-space structure of spiking neural networks, namely, sparse random networks of inhibitory leaky integrate-and-fire (LIF) neurons [39,40] in the balanced state [11]. Although these networks are capable of generating complex irregular spike sequences, we show that they actually exhibit negative-definite Lyapunov spectra. The spectra are invariant to the network size; hence, this stable dynamics is extensive and preserved in the thermodynamic limit. These results confirm and extend previous studies that described the occurrence of stable chaos [41,42] in networks of LIF neurons [43-45]. We find that various state perturbations are predicted to decay extremely fast. In particular, in the limit of high connectivity, small perturbations of the membrane potentials of the neurons decay as quickly as in isolated cells. In addition, single-spike perturbations induce only minute responses in the population firing rates that decay on a millisecond time scale. Surprisingly, however, single-spike perturbations always lead to exponential state separation, causing complete decorrelation of the networks' microstates. This rapid decoherence is also established within only a few milliseconds. By examining the dynamics for arbitrary perturbation sizes, we explain this behavior by a picture of tangled, exponentially separating flux tubes composing the networks' phase space.
These dynamic flux tubes form reservoirs of stability around stable trajectories with unique spike sequences and determine the repertoire of state sequences. A perturbation that carries the microstate of the network from one flux tube to another induces a cascade of new spikes that replaces the original spike sequence but obeys the same statistical and dynamical properties. The flux-tube structure of the phase space demonstrates that the fading-memory property (information about differences in the microstate decays over time) and the separation property (distinguishable inputs lead to significantly different states) can coexist in the dynamics of spiking neural networks. We numerically measure the flux-tube radius and estimate the total flux-tube length. Studying the scaling of the flux-tube radius and length with the number of neurons N, the number of synapses per neuron K, and the firing rate ν̄ leads us to surmise that the entropy of distinct multispike sequences grows faster than extensively with the network size. Intriguingly, the flux-tube structure of the phase space calls into question the applicability of notions of classical dynamical systems even in the thermodynamic limit. The decrease of the flux-tube radius with the system size leads to an ambiguity in the characterization of the dynamics in the thermodynamic limit: The dynamics can appear either as suprachaotic or as stable, depending on the order in which the weak-perturbation limit and the large-system limit are taken. To optimize the phase-space structure for computational applications, network structures that increase the flux-tube radius relative to random networks are needed.

II. MODEL AND METHOD
We studied large sparse networks of N LIF neurons arranged on directed Erdős–Rényi random graphs of mean indegree K. The neurons' membrane potentials V_i ∈ (−∞, V_T] with i = 1, ..., N satisfy

    τ_m dV_i/dt = −V_i + I_i(t)    (1)

between spike events. When V_i reaches the threshold V_T = 1, neuron i emits a spike and V_i is reset to V_R = 0. The membrane time constant is denoted τ_m. The synaptic input currents are

    I_i(t) = √K I_0 − (J_0/√K) τ_m Σ_{j ∈ pre(i)} Σ_s δ(t − t_s^(j)).    (2)

They are composed of constant excitatory external currents √K I_0 and inhibitory nondelayed pulses of strength −J_0/√K, received at the spike times t_s^(j) of the presynaptic neurons j ∈ pre(i). The external currents I_0 are chosen to obtain a target network-averaged firing rate ν̄ = (1/N) Σ_i ν_i. For numerical simulations, we used a phase representation of the network dynamics equivalent to the voltage representation, Eqs. (1) and (2). In the phase representation, each neuron's state is described by a phase φ_i ∈ (−∞, 1], defined as the rescaled time to threshold under free evolution of the membrane potential. Between successive spikes in the network (t_{s−1}, t_s], all phases evolve with the constant phase velocity 1/T_free, where T_free is the interspike interval of a neuron driven by the external current alone. When receiving a spike at t_s, a neuron's phase is updated with the phase-transition curve U(φ). The phase-transition curve is related to the phase-response curve by Z(φ) = U(φ) − φ. The resulting map of all neurons' phases reads

    φ_{i*}(t_s) = U(φ_{i*}(t_{s−1}) + (t_s − t_{s−1})/T_free)  for i* ∈ post(j*),
    φ_i(t_s) = φ_i(t_{s−1}) + (t_s − t_{s−1})/T_free           otherwise,    (3)

where i* ∈ post(j*) denotes the postsynaptic neurons of the spiking neuron j* at spike time t_s. The exact map (3) was used for event-based simulations [46-48] and to analytically calculate the single-spike Jacobians describing the evolution of infinitesimal phase perturbations. For the derivation of the single-spike Jacobians (4), we refer to the Supplemental Material [49] (see also Refs. [45-48,50-53] therein). The single-spike Jacobian is determined by the derivative of the phase-transition (phase-response) curve, d(φ) = U′(φ) = 1 + Z′(φ), evaluated at the times t_s^− just before spike reception. The Jacobians were used for numerically exact calculations of all N Lyapunov exponents, as described in [51]. Positive Lyapunov exponents represent the rate of exponential growth of perturbations, characteristic of chaotic dynamics. Negative Lyapunov exponents represent the rate at which perturbations decay exponentially. A stable dynamics is characterized by all Lyapunov exponents being negative. The Lyapunov exponents λ_1 ≥ ··· ≥ λ_N thus comprehensively characterize the network dynamics with respect to infinitesimal perturbations [52].
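The event-based scheme described above can be sketched in a few lines. The following is a minimal, illustrative simulation of Eqs. (1) and (2) in the voltage representation (exact exponential integration between events, instantaneous δ pulses); the function name `simulate_lif_network` and all parameter values are our own choices for illustration, not those used in the paper.

```python
import numpy as np

def simulate_lif_network(N=200, K=20, J0=1.0, I0=1.0, tau_m=10.0,
                         t_max=300.0, seed=0):
    """Event-based simulation of an inhibitory LIF network (sketch).

    Between spikes each voltage relaxes exponentially toward the constant
    drive sqrt(K)*I0; at threshold V_T = 1 the neuron spikes, is reset to 0,
    and delivers an instantaneous pulse -J0/sqrt(K) to its postsynaptic
    targets. Times are in ms.
    """
    rng = np.random.default_rng(seed)
    V_T, V_R = 1.0, 0.0
    V_inf = np.sqrt(K) * I0                      # suprathreshold drive
    # directed Erdos-Renyi-like graph: K presynaptic partners per neuron,
    # stored as postsynaptic lists for fast pulse delivery
    post = [[] for _ in range(N)]
    for i in range(N):
        for j in rng.choice(N - 1, size=K, replace=False):
            post[j if j < i else j + 1].append(i)  # exclude self-coupling
    V = rng.uniform(V_R, V_T, size=N)            # random initial voltages
    t, spikes = 0.0, []
    while t < t_max:
        # exact time for each neuron to reach threshold if no spike arrives
        dt = tau_m * np.log((V_inf - V) / (V_inf - V_T))
        j_star = int(np.argmin(dt))
        t += dt[j_star]
        # advance all voltages to the event time (exact exponential relaxation)
        V = V_inf + (V - V_inf) * np.exp(-dt[j_star] / tau_m)
        V[j_star] = V_R                          # reset the spiking neuron
        V[post[j_star]] -= J0 / np.sqrt(K)       # inhibitory delta pulses
        spikes.append((t, j_star))
    return spikes, V

spikes, V = simulate_lif_network()
rate_hz = 1000.0 * len(spikes) / (200 * 300.0)   # network-averaged rate (Hz)
# for these parameters the rate should be of order I0/(J0*tau_m) = 100 Hz
```

The event-driven update is exact because, between events, each membrane potential follows a closed-form exponential, so no integration time step is needed.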
In the asynchronous, irregular balanced state, neurons are driven by strong input fluctuations that result from a dynamical balance of excitatory and inhibitory inputs. The existence of a balanced-state fixed point in the exclusively inhibitorily coupled networks, Eqs. (1) and (2), is implied by the equation for the average input current [11]. The neurons receive exclusively inhibitory recurrent inputs −J_0/√K from K presynaptic neurons that fire on average with firing rate ν̄. These inputs counteract the constant excitatory current √K I_0. Thus, the average input current reads

    Ī = √K (I_0 − J_0 ν̄ τ_m).

It is easy to see that (I_0 − J_0 ν̄ τ_m) cannot have a nonzero large-K limit. On the one hand, if lim_{K→∞} (I_0 − J_0 ν̄ τ_m) > 0 held, then Ī would diverge to +∞ and the neurons would fire at their maximal rate. The resulting strong inhibition would drive (I_0 − J_0 ν̄ τ_m) negative, a contradiction. If, on the other hand, lim_{K→∞} (I_0 − J_0 ν̄ τ_m) < 0 held, then Ī would diverge to −∞ and the neurons would be completely silenced. The resulting lack of inhibition would drive (I_0 − J_0 ν̄ τ_m) positive, again a contradiction. Thus, neither of the two possibilities for a nonzero limit of (I_0 − J_0 ν̄ τ_m) is self-consistent. Instead, self-consistency requires the balance of excitation and inhibition, I_0 ≈ J_0 ν̄ τ_m. From this, the network-averaged firing rate in the balanced state can be predicted as

    ν̄_bal ≈ I_0 / (J_0 τ_m).    (5)

While the average input current vanishes in the balanced state, the input-current fluctuations remain strong in the large-K limit [11]. This can be calculated from the input-current autocorrelation C(τ) = ∫ δI(t) δI(t + τ) dt. Assuming that inputs from different presynaptic neurons are only weakly correlated, a received compound spike train can be modeled by a Poisson process with average rate K ν̄, yielding C(τ) ≈ J_0² ν̄ τ_m² δ(τ). The input current thus resembles white noise of magnitude σ = J_0 √(ν̄ τ_m).
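The self-consistency argument above can be made concrete with a few numbers. The sketch below solves Ī = √K (I_0 − J_0 ν̄ τ_m) for ν̄ at a fixed O(1) residual mean input Ī, showing the 1/√K approach to the balanced rate ν̄_bal = I_0/(J_0 τ_m); all numerical values (including the residual I_bar) are illustrative assumptions.

```python
import numpy as np

def balanced_rate(K, I0=1.0, J0=1.0, tau_m=0.01, I_bar=0.5):
    """Rate nu implied by I_bar = sqrt(K) * (I0 - J0 * nu * tau_m),
    i.e. nu = I0/(J0*tau_m) - I_bar/(sqrt(K)*J0*tau_m)."""
    return I0 / (J0 * tau_m) - I_bar / (np.sqrt(K) * J0 * tau_m)

nu_bal = 1.0 / (1.0 * 0.01)                # limit I0/(J0*tau_m) = 100 Hz
rates = [balanced_rate(K) for K in (100, 1000, 10000)]
# the residual correction to the balanced rate shrinks like 1/sqrt(K)
```

For K = 100 the implied rate is 95 Hz; by K = 10000 it has closed to 99.5 Hz, illustrating why the balance condition pins ν̄ to I_0/(J_0 τ_m) at large K.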

III. RESULTS
As expected from the construction of the networks, Eqs. (1) and (2), the dynamics converged to a balanced state. Figures 1(a) and 1(b) show a representative spike pattern and voltage trace illustrating irregular and asynchronous firing and strong membrane-potential fluctuations. A second characteristic feature of balanced networks is a substantial heterogeneity in the spike statistics across neurons, indicated by broad distributions of firing rates (ν) and coefficients of variation (cv) in Fig. 1(c). The good agreement of the predicted network-averaged firing rate in the balanced state, Eq. (5), with the numerically obtained firing rate confirms the dynamical balance of excitation and inhibition in the networks [Fig. 1(d)].

A. Stable chaos
Although the network state was temporally irregular, the collective dynamics of the networks was formally stable. The entire spectrum of Lyapunov exponents (disregarding the zero exponent for perturbations tangential to the trajectory) was negative [Fig. 2(a)]. This confirms the occurrence of stable chaos in LIF networks [43-45]. Furthermore, the Lyapunov spectra were invariant to the network size N, which demonstrates that this type of dynamics is extensive and thus presumably preserved in the thermodynamic limit.
For high connectivity, all Lyapunov exponents approached the negative inverse membrane time constant, λ_i ≈ −1/τ_m. This behavior can be understood analytically from a random-matrix approximation of the mean Lyapunov exponent. The mean Lyapunov exponent λ̄ = (1/N) Σ_i λ_i is the rate of phase-space volume compression. Assuming the single-spike Jacobians to be independent and identically distributed random matrices of the form (4), with matrix elements d(φ) = U′(φ) = 1 + Z′(φ) sampled according to the stationary phase distribution P(φ), yields the random-matrix approximation of λ̄ [49]. The asymptotic expansion of d(φ) for large K shows that λ̄ approaches −1/τ_m. This, and the numerical observation that the largest Lyapunov exponent approaches the mean Lyapunov exponent for large K, indicates that in this limit infinitesimal perturbations decay exponentially with the single-neuron membrane time constant [Fig. 2(b)]. As will become clear in the following, however, the assessment of stability to general perturbations in such networks is quite subtle.
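The volume-compression picture can be illustrated numerically: for any sequence of Jacobians, the Lyapunov spectrum computed by QR reorthonormalization (the standard Benettin-type scheme, as in Ref. [51]) sums to the average per-step rate ⟨ln|det J|⟩. Below is a generic sketch in which a contracting random-matrix ensemble stands in for the analytically known single-spike Jacobians; matrix sizes and numerical values are invented for illustration.

```python
import numpy as np

def lyapunov_spectrum(jacobians):
    """All Lyapunov exponents (per step) of a Jacobian sequence via
    QR reorthonormalization (Benettin-style; generic sketch)."""
    n = jacobians[0].shape[0]
    Q = np.eye(n)
    log_r = np.zeros(n)
    for J in jacobians:
        Q, R = np.linalg.qr(J @ Q)
        s = np.sign(np.diag(R))
        Q, R = Q * s, R * s[:, None]   # enforce a positive diagonal of R
        log_r += np.log(np.diag(R))
    return np.sort(log_r / len(jacobians))[::-1]

# stand-in Jacobians: uniform contraction plus weak random coupling
rng = np.random.default_rng(1)
jacs = [0.8 * np.eye(4) + 0.05 * rng.standard_normal((4, 4))
        for _ in range(2000)]
spec = lyapunov_spectrum(jacs)
# negative-definite spectrum, qualitatively as in Fig. 2; its sum equals
# the mean volume compression rate <ln|det J|> per step
```

The identity sum(λ_i) = ⟨ln|det J|⟩ holds exactly here because QR factorization preserves |det|, which is the content of the mean-Lyapunov-exponent argument above.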

B. Single-spike perturbations
In practice, how would one realize a small state perturbation to the dynamics of a cortical network?
Experimentally realizable and well-controlled minimal state perturbations of the dynamics of cortical networks include the addition or suppression of individual spikes. Pioneered by studies of Brecht and co-workers, experimental neuroscientists have, in fact, developed approaches to perform single-neuron, single-spike perturbations in intact cortical networks in vivo and even in awake, behaving animals [54-56]. Such minimalistic neurostimulations can elicit complex animal behaviors [54-56] and can trigger measurable firing-rate responses in the local circuit [33]. We therefore examined theoretically how such single-spike perturbations affect the collective dynamics of networks in the balanced state. In the networks defined by Eqs. (1) and (2), the simplest single-spike perturbation is the suppression of one spike. Figure 3(a) illustrates the response of the network if one spike, occurring at a reference time defined as t = 0, is skipped. The missing inhibition immediately triggered additional spikes in the K postsynaptic neurons, such that the network-averaged firing rate increased abruptly by δν̄ ∼ K ν̄ / N [Fig. 3(b)]. Since the induced extra spikes inhibited neurons in the network, the overshoot in the firing rate quickly settled back to the stationary state within a time of order δt ∼ 1/(K ν̄). The overall number of additional spikes in the network was therefore S_extra = N δν̄ δt ≈ 1 [Fig. 3(c)]. That is, the one skipped spike was immediately compensated by one extra spike.
The compensation of the one skipped spike by one additional spike in the network can be understood from a mean-field approximation of the firing-rate response. In fluctuation-driven neurons, there are generally two distinct input channels contributing to the change in the firing rate, δν̄(t): This rate response results from changes in the mean and in the variance of the input current, respectively [50,57]. In the wake of a spike perturbation, the mean input current deviates from its stationary value by two contributions. The first reflects the average change due to one skipped (extra) spike: an effectively positive (negative) current to the K postsynaptic neurons. The postsynaptic current is assumed to have the form g(t), which is, in our case, a delta pulse δ(t). The second reflects the change in the recurrent inhibition due to the deviating average firing rate δν̄(t) that develops within the network. This rate response δν̄(t) also implies that the magnitude of the input fluctuations, σ(t) = J_0 √((ν̄ + δν̄(t)) τ_m), is not stationary and deviates from the stationary value σ = J_0 √(ν̄ τ_m) for δν̄(t) ≪ ν̄ by δσ(t) ≈ σ δν̄(t)/(2ν̄). To be self-consistent, the rate response to one skipped spike must equal the sum of the convolutions of the mean-input deviation δĪ(t) and the fluctuation deviation δσ(t) with the corresponding linear-response kernels for the mean and variance channels [50,57,58]. This self-consistency equation can be solved for the rate response in the frequency domain. As expected, the rate response, Eq.
(7), vanishes in the large-N limit. In the large-K limit, the rate response can be approximated as δν̄(ω) ≈ ± g(ω)/N and, after transforming back into the time domain, as δν̄(t) ≈ ± g(t)/N. The rate response hence follows the form of a postsynaptic current g(t). Thus, the average number of additional (missed) spikes in the network becomes one: S_extra = N ∫ δν̄(t) dt = ±1. Note that this derivation and the result are in fact independent of the specific form of the response functions, which depend on the neuron model used. We thus generally expect that, in balanced networks with exclusively inhibitory coupling, one skipped (extra) spike induces just one additional (missed) spike, and the network returns essentially immediately to a statistically stationary state.
These results show that the state of the networks, Eqs. (1) and (2), is not only stable to infinitesimal perturbations. The firing-rate dynamics, in addition, is capable of rapidly restoring the stationary firing rate in response to finite single-spike perturbations. A radically different picture emerges, however, when one considers the detailed microstate. Figure 4(a) displays the average distance D(t) = (1/N) Σ_i |φ̃_i(t) − φ_i(t)| between a perturbed trajectory φ̃, in which one spike is skipped at t = 0, and the reference trajectory φ. All perturbed trajectories separate exponentially fast, at a surprisingly high rate. Because this exponential separation of nearby trajectories is reminiscent of deterministic chaos, we call its separation rate the pseudo-Lyapunov exponent λ_p. Varying the network size N, connectivity K, and network-averaged firing rate ν̄, we find that the pseudo-Lyapunov exponent is invariant to the network size and generally positive, which is distinctly different from the classical Lyapunov exponent. With increasing connectivity, it apparently diverges linearly, λ_p ∼ K ν̄ [Figs. 4(b) and 4(c)]. The pseudo-Lyapunov exponent is thus expected to grow to infinity in the high-connectivity limit, which is reminiscent of binary neuron networks that exhibit an infinite Lyapunov exponent in this limit [11].
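The separation rate is extracted from distance curves such as those in Fig. 4(a) by an exponential fit. A minimal sketch of that fitting step on synthetic data (the rate of 0.8/ms and the noise level are invented for illustration):

```python
import numpy as np

def fit_separation_rate(t, D):
    """Least-squares slope of ln D(t), i.e. lambda in D(t) ~ D0*exp(lambda*t)."""
    slope, _intercept = np.polyfit(t, np.log(D), 1)
    return slope

# synthetic average-distance curve with a known separation rate
rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 200)                       # time in ms
D = 1e-6 * np.exp(0.8 * t + 0.05 * rng.standard_normal(t.size))
lambda_p = fit_separation_rate(t, D)
```

Fitting on a log scale turns the exponential into a straight line, so ordinary least squares recovers the rate directly.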

C. Flux-tube structure of phase space
In the same network, we thus find stable dynamics in response to infinitesimal perturbations and rapid restoration of the network's mean firing rate, but yet a highly unstable dynamics of the microstate in response to single-spike perturbations. To characterize the transition between these completely opposite behaviors, we analyze the network states under general finite perturbations. We apply perturbations of variable size ε in random directions perpendicular to the unperturbed reference trajectories. Depending on the perturbation strength ε and direction, the perturbed trajectories either converge back to the reference trajectory or diverge from it exponentially fast [Fig. 5(a)].
The probability that a perturbation of strength ε induces the exponential state separation is very well described by the function P_s(ε) = 1 − exp(−ε/ε_FT), defining the characteristic scale ε_FT [Fig. 5(b)]. For small perturbation sizes ε ≪ ε_FT, the perturbed trajectory almost always converges back to the reference trajectory. Increasing the perturbation size ε increases the probability that the perturbed trajectory starts to diverge exponentially and becomes decorrelated from the reference trajectory. For large ε ≫ ε_FT, this decorrelation process starts almost immediately after the perturbation is applied. The characteristic scale ε_FT, separating stable from unstable dynamics, scales as ε_FT ∼ 1/(√(KN) ν̄ τ_m) [Fig. 5(c)]. Intriguingly, the vanishing flux-tube radius for large N implies that the dynamics in the thermodynamic limit (N → ∞) would be unstable even to arbitrarily small perturbations (ε → 0). However, the analysis of the Lyapunov spectra has shown that taking the limit ε → 0 first and N → ∞ second yields stable dynamics. Thus, the order of these limits is crucial for characterizing the dynamical nature of such balanced networks. The noncommutativity of these limits might be the origin of the contradictory previous predictions of stable [43-45] or suprachaotic [11] dynamics.
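In practice, ε_FT is obtained by fitting P_s(ε) = 1 − exp(−ε/ε_FT) to binary converge/diverge outcomes of many perturbation trials. A sketch of that fitting step on synthetic outcomes with a known radius, using a simple maximum-likelihood grid search (all numbers, and the grid-search approach itself, are illustrative stand-ins for a proper optimizer):

```python
import numpy as np

def estimate_flux_tube_radius(eps, separated, grid):
    """Maximum-likelihood fit of P_s(eps) = 1 - exp(-eps/eps_FT)
    to binary divergence outcomes (sketch: grid search over eps_FT)."""
    eps = np.asarray(eps, float)
    separated = np.asarray(separated, bool)
    best, best_ll = None, -np.inf
    for e_ft in grid:
        p = np.clip(1.0 - np.exp(-eps / e_ft), 1e-12, 1 - 1e-12)
        ll = np.sum(np.where(separated, np.log(p), np.log1p(-p)))
        if ll > best_ll:
            best, best_ll = e_ft, ll
    return best

# synthetic perturbation experiment with a known radius eps_FT = 0.01
rng = np.random.default_rng(3)
true_eft = 0.01
eps = rng.uniform(0.001, 0.05, size=4000)         # trial perturbation sizes
separated = rng.random(4000) < 1.0 - np.exp(-eps / true_eft)
grid = np.linspace(0.002, 0.05, 481)
est = estimate_flux_tube_radius(eps, separated, grid)
```

With a few thousand trials the fitted scale lands close to the true radius, which is the same bookkeeping that underlies Fig. 5(b).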
While single-spike failures always induce exponential state separation, the probability that this happens after a single synaptic failure is relatively low. As expected from the size of such perturbations, ε_syn ∼ 1/√(KN), compared to the flux-tube radius ε_FT ∼ 1/(√(KN) ν̄ τ_m), the probability of exponential state separation after synaptic failures is independent of K and N and increases linearly with ν̄ τ_m [49].

D. Exponential decorrelation by an event cascade
The exponential decorrelation process after a perturbation is initiated and mediated by a different sequence of spikes in the perturbed trajectory compared to the reference trajectory. Taking a closer look at the individual distance measurements with finite perturbation size, one notices that every exponential separation of the trajectories appears to be initiated by a peculiar step of size d_1, followed by subsequent steps of sizes d_n [Fig. 6].

[Fig. 6: (d) Step sizes d_n versus connectivity K for finite perturbations (solid lines) and single-spike perturbations (dotted lines). (e) Histograms of the individual rates of exponential state separation λ = ln(n)/t_n for all measurements (dots) and fits to log-normal distributions (solid lines). (f) Expectation values of the fitted log-normal distributions versus connectivity K for finite perturbations (solid lines) and single-spike perturbations (dotted lines); λ_p (dashed lines) from Fig. 4 is plotted for comparison. Parameters as in Fig. 1 with N = 100 000; averages over ten initial conditions with 100 calculations and 1000 random directions each.]

The step sizes d_n were of approximately constant height across connectivities K, both in measurements with finite perturbation sizes and after single-spike failures [Fig. 6(d)]. The deviations for large K and n can be explained by an increased overlap of the sets of postsynaptic neurons; the steps d_n are therefore smaller than d. However, these results strongly indicate that the decorrelation process is initiated and mediated by steps of constant height, due to the emergence of a new spike sequence in the perturbed trajectory compared to the reference trajectory.
The fact that the exponential decorrelation process is mediated by discrete steps of constant height implies that the rate of these steps must increase exponentially. The single spike that initiates the rapid decorrelation thus triggers a cascade of new spikes replacing the reference spike sequence. This can be seen by assuming D(t) ∝ exp(λt) for the exponential state separation. From the discrete steps, we also know that D(t_n) = n d, where t_n is the time of the nth step relative to the initializing one, and we obtain λ = ln(n)/t_n for each measurement. The histogram of λ for all measurements is shown in Fig. 6(e). The observed convergence with increasing n confirms the occurrence of an exponentially increasing rate of steps. The mean value of the fitted log-normal distributions (solid lines) provides a measure of the average rate of exponential separation over all measurements [Fig. 6(f)]. It agrees very well with the pseudo-Lyapunov exponent λ_p (dashed line) previously obtained from exponential fits to the averaged distance after the single-spike failures depicted in Fig. 4. These results show that a perturbation leading to the change of one spike initiates a geometrically growing cascade of new spikes, leading to a completely new spike sequence. Note that this high sensitivity was visible neither in the Lyapunov spectra (Fig. 2) nor in the rate response after single-spike failures (Fig. 3). The perturbed trajectory belongs to a different dynamic flux tube with a different spike sequence but exhibits the same statistical properties and local stability.
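The estimator λ = ln(n)/t_n can be checked on synthetic step times: if the nth constant-height step occurs at t_n = ln(n)/λ, the per-measurement estimates converge to λ as n grows, even when the step times are jittered. A small sketch with an invented cascade rate of 0.5/ms and invented jitter:

```python
import numpy as np

lam_true = 0.5                       # invented cascade rate (1/ms)
n = np.arange(2, 200)
rng = np.random.default_rng(4)
# step times of a geometric cascade, with additive timing jitter
t_n = np.log(n) / lam_true + 0.05 * rng.standard_normal(n.size)
lam_est = np.log(n) / t_n            # one estimate per step, as in Fig. 6(e)
# the estimates tighten around lam_true as n grows, because the timing
# jitter becomes small relative to t_n
```

This is the same convergence-in-n behavior that is read off the histograms of Fig. 6(e).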
To visualize the exotic phase-space structure generated by the dynamic flux tubes, we scanned a larger piece of phase-space volume. The cross sections of the phase space resemble stained glass [Fig. 7(a)]. For the axes, we chose two random N-dimensional vectors orthonormal to the direction of the trajectory (1, 1, ..., 1), spanning a two-dimensional cross section of the N-dimensional phase space. Each pixel in the picture represents a new initial condition. If trajectories of adjacent initial conditions converged to each other, the pixels were assigned identical colors; otherwise, they were colored differently. Areas of one color can thus be thought of as the cross sections of the basins of attraction of unique stable trajectories, corresponding to individual cells in the stained-glass architecture. When a trajectory hits a wall (threshold boundary) of the phase space, a spike is emitted and the trajectory continues at a different point in the phase space, in a different cell. The black lines visualize the boundaries of adjacent cells and are expected to represent trajectories that lead, at some point in time, to synchronous spikes in pairs of neurons; they hit the edges of the phase space. A symbolic concatenation of the stained-glass cells that are visited by one trajectory yields the flux-tube picture we have developed above. If a perturbation leads to a different flux tube, then the new trajectory diverges exponentially fast from the original one, as described above.
The stained-glass cells composing the dynamic flux tubes differ in shape and size. If they, in fact, represent the flux tubes postulated before, their sizes should match the scale ε_FT derived from random perturbations. We thus repeatedly calculated different phase-space cross sections, as presented in Fig. 7(a), for different parameter sets and determined the areas A of all cross sections that were completely resolved. To obtain a characteristic scale for the radius, we used r = √(A/π), which follows an exponential distribution [Fig. 7(b)]. The typical flux-tube radius r_FT obtained in this way matches the scaling of ε_FT, with some mismatch for low K (open symbols). But this mismatch is actually also visible in Fig. 5(c) for low K, and using the measured value of ε_FT for low K leads to a good agreement of r_FT and ε_FT for all cases of N, K, and ν̄ (full symbols). We can thus conclude that the typical flux-tube radius scales essentially as 1/(√(KN) ν̄ τ_m). This exotic phase-space structure embodies a good partition through which to estimate the entropy of spike sequences in the networks. Our results suggest that the phase space is composed of dynamic flux tubes. Trajectories within a flux tube converge to a unique stable trajectory, while trajectories of adjacent flux tubes separate exponentially fast. The total flux-tube length in the N-dimensional phase space grows accordingly with the number of tubes. This in turn suggests that the entropy of distinct network-state sequences is H ∼ N ln(√(KN) ν̄ τ_m), growing faster than in the extensive case and increasing with larger connectivity K and firing rate ν̄. Expressed in terms of the time needed by the network to display all possible sequences, this time grows exponentially with the entropy H. Already for N = 200 neurons [Fig. 7(a)], it would take the network an astronomically long time, more than 10^200 years, to display all distinct network-state sequences.
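The astronomical repertoire time can be reproduced back-of-the-envelope from the entropy scaling H ∼ N ln(√(KN) ν̄ τ_m), keeping everything in log10 to avoid overflow. The parameter values below (K = 50, ν̄ = 10 Hz, τ_m = 10 ms) are our own illustrative choices, so the result comes out of order 10^189 years rather than exactly the 10^200 quoted above.

```python
import math

def sequence_entropy_nats(N, K, rate_hz, tau_m_s):
    """Flux-tube entropy estimate H ~ N * ln(sqrt(K*N) * rate * tau_m)."""
    return N * math.log(math.sqrt(K * N) * rate_hz * tau_m_s)

def log10_years_to_exhaust(N, K, rate_hz, tau_m_s):
    """log10 of the time ~ e^H / (N * rate) needed to display every
    distinct sequence once, spike by spike (log10 avoids overflow)."""
    H = sequence_entropy_nats(N, K, rate_hz, tau_m_s)
    log10_seconds = H / math.log(10) - math.log10(N * rate_hz)
    return log10_seconds - math.log10(365.25 * 24 * 3600)

# N = 200 neurons, K = 50, 10 Hz, tau_m = 10 ms (assumed values)
ly = log10_years_to_exhaust(200, 50, 10.0, 0.01)   # about 189
```

Even modest networks thus have sequence repertoires that can never be exhausted, which is the point of the estimate in the text.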

IV. CONCLUSION
Our analysis reveals the coexistence of (i) dynamical stability to infinitesimal as well as small finite state perturbations and (ii) a high sensitivity to single-spike and even single-synapse perturbations in networks of inhibitory LIF neurons in the balanced state. The networks exhibit negative-definite and extensive Lyapunov spectra that, at first sight, suggest a well-defined thermodynamic limit of the network dynamics, characterized by stable chaos, as previously proposed in Refs. [43-45]. Confirming the picture developed by London et al. [33], we found that single-spike perturbations induce a cascade of new spikes. Complementary to their proposition that this leads to an excess of extra spikes, we argue that in inhibitory networks the new spike sequence replaces the original spike sequence without a considerable trace in the network statistics, e.g., in the firing-rate response. The microstate of the network nevertheless follows a completely different dynamical path that diverges exponentially from the unperturbed one. We call the rate of exponential state separation in response to single-spike perturbations the pseudo-Lyapunov exponent. It scales as λ_p ∼ K ν̄ and implies practically instantaneous decorrelation of network microstates.
The seemingly paradoxical coexistence of local stability and exponential state separation is reconciled by a unifying picture of dynamic flux tubes in the network's phase space. States within a flux tube are attracted to a stable trajectory with a unique spike sequence. Different flux tubes, however, separate exponentially fast at the rate λ_p. The radius of the dynamic flux tubes scales as ε_FT ∼ 1/(√(KN) ν̄ τ_m). This scaling can be used to estimate the entropy of the network's repertoire of distinct state sequences, H ∼ N ln(√(KN) ν̄ τ_m). Thus, the computation of the flux-tube radius presents a novel numerical approach to estimating the entropy of a neural network. With this approach, the entropy can be calculated with less effort than, e.g., with an approach that identifies all multiple-spike sequences numerically.
The coexistence of local stability and exponential state separation leads to an ambiguous nature of the network dynamics in the thermodynamic limit and shows that the notion of an edge of chaos is not applicable to the networks considered here. On the one hand, taking the large-system limit first, one obtains vanishingly thin flux tubes; the dynamics then becomes unstable even for arbitrarily small perturbations. The resulting sensitivity to initial conditions is described by the positive pseudo-Lyapunov exponent λ_p ∼ K ν̄, which showed no sign of saturation. On the other hand, taking the small-perturbation limit first, one obtains a stable dynamics even when subsequently taking the large-system limit. This dynamics is described by the negative Lyapunov exponents λ ≈ −1/τ_m. These findings suggest that the previously reported infinite Lyapunov exponent on the one hand [11] and the negative Lyapunov exponents on the other [43-45] resulted from the order in which the weak-perturbation limit and the thermodynamic limit were taken. Since there is no phase transition between order and chaos in this case, an edge of chaos does not exist in these networks.
We expect this form of state decorrelation to occur rather generally in similar classes of neural networks, as a result of the noncommutativity of the spike order [59,60]. Trajectories in which at least two neurons would fire synchronous spikes at some instant form the boundaries of the basins of attraction around stable trajectories with unique spike sequences. Infinitesimal perturbations to trajectories with such synchronous spikes lead to trajectories with different spike orders, and the corresponding microstates diverge exponentially. Such strong sensitivity despite local stability, caused by the noncommutativity of δ-like events, might be a general feature of dynamical systems displaying stable chaos [42]. There is evidence that stable chaos is not restricted to the idealized, relatively simple networks studied here. In particular, it was shown that local stability can be preserved when excitatory neurons are included [43], and also when the δ-pulse coupling is replaced with temporally decaying postsynaptic currents [45]. We furthermore found that stable chaos can be preserved when the instantaneous action-potential generation of LIF neurons is replaced by a dynamic, yet extremely rapid, action-potential generation [61].
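The decisive role of spike order can be illustrated with a single LIF neuron, a deliberately simplified toy model (the drive, pulse size, and time constants below are hypothetical, not those of the network studied here): an inhibitory δ-pulse arriving just before the neuron's threshold crossing produces a completely different subsequent spike sequence than the same pulse arriving just after it, because the pulse and the reset do not commute.

```python
def spike_times(pulse_t, J=0.3, t_end=0.05, dt=1e-6,
                tau=0.01, I=2.0, v_th=1.0, v_reset=0.0):
    """Euler-integrated LIF neuron, dv/dt = (I - v)/tau, with one
    inhibitory delta-pulse of size J delivered at time pulse_t.
    Returns the list of spike times."""
    v, spikes = 0.0, []
    pulse_step = round(pulse_t / dt)
    for n in range(int(t_end / dt)):
        v += dt * (I - v) / tau
        if n == pulse_step:
            v -= J                      # inhibitory delta-pulse
        if v >= v_th:
            spikes.append(n * dt)
            v = v_reset
    return spikes

# The unperturbed threshold crossing occurs at tau*ln(2) ~ 6.93 ms.
# Deliver the same pulse ~0.2 ms before vs ~0.2 ms after that crossing:
before = spike_times(0.0067)   # pulse precedes the reset
after = spike_times(0.0071)    # pulse follows the reset
# A 0.4 ms shift of the pulse shifts the entire subsequent spike
# sequence by O(ms): before the reset, the pulse delays the first
# spike; after the reset, it only delays the second.
print(before[:2], after[:2])
```

In the network, an infinitesimal perturbation near such a near-coincidence flips which event comes first, seeding the cascade of replaced spikes described above.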
An important topic for future studies is how the flux-tube structure of phase space is influenced by, e.g., network topology or synaptic plasticity. That the structure of a network's phase space can strongly impact the performance of a reservoir-computing architecture was already pointed out in studies of stable heteroclinic channels [62]. The novel phase-space structure of dynamic flux tubes revealed here may provide a basis for efficient stimulus categorization and segregation: if a perturbation leads to a switch of flux tubes, it can easily be detected through the completely different resulting spike sequence. In the context of reservoir computing, networks exhibiting flux tubes combine the fading-memory and separation properties without the existence of an edge of chaos; the flux-tube radius can be thought of as the statistical border between the two. Applications of LIF neuron networks in reservoir computing may thus strongly benefit from taking the flux-tube structure of the network's phase space into account.

FIG. 1. The balanced state in inhibitory LIF networks: (a) Asynchronous irregular spike pattern of 30 randomly chosen neurons. (b) Fluctuating voltage trace of one neuron (voltage increased to V = 2 at spikes). (c) Broad distributions of the individual neurons' firing rates ν_i and coefficients of variation cv_i. (d) Network-averaged firing rate ν̄ and synchrony measure χ versus the predicted rate ν̄_bal = I₀/(J₀ τ_m). The dotted line is a guide to the eye for ν̄ = ν̄_bal; the synchrony measure is χ = STD([V_i]) / [STD(V_i)], where [·] denotes the population average and STD stands for standard deviation. The parameters are N = 10 000, K = 1000, ν̄ = 10 Hz, J₀ = 1, τ_m = 10 ms.
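The synchrony measure χ in the caption can be computed directly from a set of voltage traces. A sketch using only the standard library (the traces below are random surrogates, not simulation output): for N independent traces χ falls off as roughly 1/√N, while identical traces give χ = 1.

```python
import random
import statistics as st

def synchrony(V):
    """chi = STD([V_i]) / [STD(V_i)]: the standard deviation of the
    population-averaged voltage trace, divided by the population
    average of the individual traces' standard deviations.
    V is a list of per-neuron voltage traces of equal length."""
    n_t = len(V[0])
    pop_avg = [sum(trace[t] for trace in V) / len(V) for t in range(n_t)]
    return st.pstdev(pop_avg) / st.fmean(st.pstdev(trace) for trace in V)

random.seed(0)
# Independent ("asynchronous") traces: chi ~ 1/sqrt(N), small.
V_async = [[random.gauss(0, 1) for _ in range(2000)] for _ in range(400)]
# Identical ("synchronous") traces: chi = 1.
common = [random.gauss(0, 1) for _ in range(2000)]
V_sync = [common[:] for _ in range(400)]
print(synchrony(V_async), synchrony(V_sync))
```

This is why χ near zero in panel (d) signals the asynchronous irregular regime, while χ near one would indicate population-wide voltage synchrony.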

FIG. 3. Weak firing-rate response after single-spike failures: (a) Sample spike pattern of 10 randomly chosen neurons (gray: reference trajectory; black: single spike skipped at t = 0). (b) Network-averaged firing rate of the reference trajectory and after the skipped spike, for different connectivities K and network sizes N. (c) Number of extra spikes S_extra in the entire network versus time rescaled by the input rate K ν̄. The parameters are as in Fig. 1. These are averages over 100 initial conditions with 10 000 perturbations each.

FIG. 4. High sensitivity to single-spike failures: (a) Distance D_φ(t) = (1/N) Σ_i |φ_i(t) − φ̃_i(t)| between the trajectory after a skipped spike and the reference trajectory, in log-lin plots for different connectivities K and average firing rates ν̄. The small bars indicate the distances D_∞ = ∫∫ |φ − φ'| P(φ) P(φ') dφ dφ' of uncorrelated states obtained from self-consistent mean-field solutions P(φ) [49,50]. (b) Pseudo-Lyapunov exponent λ_p from exponential fits D_φ ∝ exp(λ_p t) before reaching saturation. (c) Collapse to a characteristic exponential state separation with rate λ_p ∝ √K ν̄. (The distance has been rescaled with the approximate perturbation strength K J₀/√K; the time has been rescaled with the input rate K ν̄. Inset: different network sizes N for K = 100.) The parameters are as in Fig. 1, with N = 100 000. These are averages over ten initial conditions with 100 perturbations each.
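The exponential fit used to extract λ_p in panel (b) amounts to a linear regression of ln D_φ against t over the pre-saturation window. A sketch on synthetic data (the rate 120 s⁻¹ is an arbitrary test value, not a result of the paper):

```python
import math

def fit_lambda(ts, Ds):
    """Least-squares slope of ln D versus t, i.e., the exponential
    rate lambda in D(t) ~ exp(lambda * t)."""
    n = len(ts)
    xm = sum(ts) / n
    ym = sum(math.log(d) for d in Ds) / n
    num = sum((t - xm) * (math.log(d) - ym) for t, d in zip(ts, Ds))
    den = sum((t - xm) ** 2 for t in ts)
    return num / den

# Synthetic pre-saturation distance curve with a known rate:
lam_true, D0 = 120.0, 1e-6
ts = [i * 1e-3 for i in range(30)]
Ds = [D0 * math.exp(lam_true * t) for t in ts]
print(fit_lambda(ts, Ds))  # recovers 120.0 up to float error
```

In practice the fit window must stop well before D_φ saturates at the uncorrelated-state distance D_∞, since including the plateau biases the slope toward zero.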

FIG. 5. Sensitivity to finite perturbations: (a) Distance D_φ between perturbed and reference trajectories at the spike times of the reference trajectory (projecting out possible time shifts) for perturbation strengths ε = 10⁻⁵, 10⁻³, 10⁻¹, in log-lin plots. The gray lines indicate ten random directions perpendicular to the trajectory; the colored lines indicate averages of the exponentially separating or converging cases. (b) Probability P_s of exponential state separation versus perturbation strength ε, in lin-log plots. The dotted line indicates the characteristic perturbation size ε_FT separating stable from unstable dynamics; the shaded areas indicate single-synapse and single-spike failures. (c) Characteristic perturbation size ε_FT versus network size N, connectivity K, and average firing rate ν̄, in log-log plots. The straight lines indicate fits to ε_FT ∝ 1/(√K N ν̄). (d) Symbolic picture of dynamic flux tubes with radius ε_FT; the boxes indicate the different spike sequences assigned to different flux tubes. (The dynamics inside a flux tube is stable, but adjacent flux tubes separate exponentially.) The parameters are as in Fig. 1, with N = 100 000. These are averages over ten initial conditions with 100 calculations and 100 random directions each.
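One way to extract ε_FT from measurements like those in panel (b): if flux-tube radii are exponentially distributed with mean ε_FT (as Fig. 7 indicates), then P_s(ε) = 1 − exp(−ε/ε_FT), and each measured separation probability yields an estimate ε_FT = −ε/ln(1 − P_s). A Monte Carlo sketch with synthetic separate/converge outcomes (the true radius 3 × 10⁻⁵ is an arbitrary test value, not a result of the paper):

```python
import math
import random

def p_separation(eps, eps_ft):
    """Escape probability for a perturbation of size eps when tube
    radii are exponentially distributed with mean eps_ft:
    P_s = P(radius < eps) = 1 - exp(-eps / eps_ft)."""
    return 1.0 - math.exp(-eps / eps_ft)

def estimate_eps_ft(eps_values, trials, eps_ft_true, rng):
    """Simulate binary separate/converge outcomes at several
    perturbation sizes, then invert the empirical P_s."""
    estimates = []
    for eps in eps_values:
        p_true = p_separation(eps, eps_ft_true)
        hits = sum(rng.random() < p_true for _ in range(trials))
        p_emp = hits / trials
        if 0.0 < p_emp < 1.0:
            estimates.append(-eps / math.log(1.0 - p_emp))
    return sum(estimates) / len(estimates)

rng = random.Random(1)
est = estimate_eps_ft([1e-5, 3e-5, 1e-4], 20_000, 3e-5, rng)
print(est)  # statistically close to the true value 3e-5
```

This inversion is what makes the flux-tube radius measurable from a modest number of perturbation trials, rather than from an exhaustive enumeration of spike sequences.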

FIG. 6. Rapid decorrelation through a cascade of new spikes replacing those in the original spike sequence: (a) Five examples of typical distance measurements D_φ(t) for finite perturbations ε = 0.01. (b) Histogram of the distances of all measurements. (c) Step sizes d_n obtained from histograms as in panel (b) for different ε. The dashed line indicates the distance due to one replaced spike, d = 2 J₀ √K / N. (d) Step sizes d_n versus connectivity K for finite perturbations (solid lines) and single-spike perturbations (dotted lines). (e) Histograms of the individual rates of exponential state separation λ = ln(n)/t_n for all measurements (dots) and fits to log-normal distributions (solid lines). (f) Expectation values of the fitted log-normal distributions versus connectivity K for finite perturbations (solid lines) and single-spike perturbations (dotted lines). λ_p from Fig. 4 (dashed lines) is plotted for comparison. The parameters are as in Fig. 1, with N = 100 000. These are averages over ten initial conditions with 100 calculations and 1000 random directions each.

FIG. 7. Visualization of the phase-space structure: (a) Phase-space cross sections for N = 200 and N = 2000, spanned by two random N-dimensional vectors perpendicular to the trajectory. Flux-tube sections are drawn in one color and separated by solid lines. (b) Exponential distribution of the radii of flux-tube cross sections for different parameter sets. The inset shows a comparison of the average radius r_FT with ε_FT ∝ 1/(√K N ν̄) obtained in Fig. 5(c). The other parameters are K = 100, ν̄ = 10 Hz, J₀ = 1, τ_m = 10 ms.
FIG. 2. Stable dynamics with respect to infinitesimal perturbations: (a) Spectrum of Lyapunov exponents {λ_i} of networks with N = 10 000 neurons and different connectivities K. (b) Largest Lyapunov exponent λ_max = λ_2 and mean Lyapunov exponent λ_mean = (1/N) Σ_i λ_i