Transition to chaos in random neuronal networks

Firing patterns in the central nervous system often exhibit strong temporal irregularity and heterogeneity in their time-averaged response properties. Previous studies suggested that these properties are the outcome of intrinsic chaotic dynamics. Indeed, simplified rate-based large neuronal networks with random synaptic connections are known to exhibit a sharp transition from fixed point to chaotic dynamics when the synaptic gain is increased. However, the existence of a similar transition in neuronal circuit models with more realistic architectures and firing dynamics has not been established. In this work we investigate rate-based dynamics of neuronal circuits composed of several subpopulations and random connectivity. Nonzero connections are either positive, for excitatory neurons, or negative, for inhibitory ones, while single-neuron output is strictly positive, in line with known constraints in many biological systems. Using Dynamic Mean Field Theory, we find the phase diagram depicting the regimes of stable fixed point, unstable dynamics, and chaotic rate fluctuations. We characterize the properties of systems near the chaotic transition and show that dilute excitatory-inhibitory architectures exhibit the same onset of chaos as a network with Gaussian connectivity. Interestingly, the critical properties near the transition depend on the shape of the single-neuron input-output transfer function near the firing threshold. Finally, we investigate network models with spiking dynamics. When synaptic time constants are slow relative to the mean inverse firing rates, the network undergoes a sharp transition from fast spiking fluctuations with static firing rates to a state with slow chaotic rate fluctuations. When the synaptic time constants are finite, the transition becomes smooth and obeys scaling properties, similar to crossover phenomena in statistical mechanics.


I. INTRODUCTION
The firing patterns of circuits in the central nervous system often exhibit a high level of temporal irregularity. The effect can be seen in the inter-spike interval (ISI) distribution, which, except for a short refractory period, is similar to that of a Poisson process [1][2][3]. Intracellular recordings [4] indicate that this irregularity is due to fluctuations in the synaptic input to the neurons, suggesting a dynamic origin and motivating the exploration of underlying neuronal circuit mechanisms. A possibly related issue is the ubiquitous diversity of the time-averaged response properties of single neurons, e.g., their firing rates and tuning modulations, within a local population [1,5].
Several theoretical studies explored the emergence of temporal irregularity, and in particular chaotic dynamics, in neuronal networks. These investigations focused on two types of models: rate-based models with Gaussian connections, and spiking dynamics of sparsely connected excitatory-inhibitory networks. The first class uses a firing rate dynamics in which each unit is characterized by a smooth function that maps the synaptic input into an output firing rate. In its simplest version, the input-output transfer function is tanh(x), where a zero value denotes some reference activity level and 1 and −1 denote saturated firing and quiescent states, respectively. The architecture of the rate model was given by a random connectivity matrix in which each connection is drawn from a Gaussian distribution with zero mean and variance g^2/N, N being the size of the network. It was shown that the system exhibits a transition from a stable zero fixed point state for low values of g to a chaotic state for large g. Furthermore, for large N this transition is sharp and occurs at g_c = 1 [6]. In these models, the emergence of chaos is gradual, as the amplitude of the fluctuations, their inverse time constant, and the Lyapunov exponent all vanish as g → 1+. The chaotic state is asynchronous, in that the correlations between fluctuations of different neurons are weak (and vanish as N → ∞). The second class is motivated by biological reality. To capture the spiking dynamics, these models use either binary {0, 1} neurons or integrate-and-fire spiking neurons, and the connectivity is characterized by randomly sparse connections, where the mean number of connections, K, is much smaller than N. To capture the biological constraints on the sign of the connections, the networks consist of excitatory and inhibitory populations, where the nonzero output connections of excitatory (inhibitory) neurons are positive (negative). The dynamics is dominated by the competition between strong excitatory
and inhibitory connections, leading to a dynamic cancellation of the excitatory and inhibitory inputs. The ensuing balanced state exhibits intrinsically generated Poisson-like stochasticity as well as asynchrony [7][8][9][10]. There is no parameter regime where the state can be characterized as a stable fixed point, and chaos does not emerge gradually as a function of synaptic strength. Instead, temporal irregularity is always strong, and correlation times are short. The origin of the qualitative difference in the behavior of the two types of models was never investigated comprehensively. In particular, it was unclear whether the differences originate from the different dynamics (a smooth rate dynamics vs. spiking dynamics) or from the different architectures: Gaussian distributed synapses in a single, fully connected, and statistically homogeneous population (with mixed excitation and inhibition) vs. a two-population (excitatory and inhibitory) architecture with sparse connectivity. The question of the existence of a bifurcation to chaos is not only interesting from a dynamical systems perspective but may also have important functional consequences. Several studies have highlighted the computational utility of the nonlinear dynamics of random, recurrent networks near the onset of chaos. For instance, a novel 'reservoir computing' model has been proposed, which utilizes the rich intrinsic network dynamics to learn to generate complex temporal trajectories [11][12][13][14][15][16][17]. Reservoir computing is most effective above but near the transition to chaos, due to the emergence of slow dynamical fluctuations, since many applications involve dynamics with time scales of seconds, much larger than the microscopic time scales of a few milliseconds. Additionally, it has been shown that decoding signals from these networks is particularly robust above and near the transition to chaos [18,19]. A recent study by Saxe et al. [20] studies the dynamics of learning in deep
networks. They define a critical point above which infinitely deep networks exhibit chaotic percolating activity propagation, analogous to the chaotic state of recurrent networks. Finally, recent advances in machine learning have generated a resurgence of interest in recurrent networks (primarily for speech and language processing; e.g., [21][22][23] and references therein). Understanding the dynamics of generic recurrent networks will provide insight into these highly interesting computational capacities.
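The single-population rate model with a tanh transfer function discussed above is easy to probe numerically. The following minimal Python sketch (network size, integration step, and seed are illustrative choices, not taken from the text) integrates dh/dt = −h + J tanh(h), with Gaussian J of variance g^2/N, and contrasts the behavior below and above g_c = 1:

```python
import numpy as np

def simulate_tanh_network(g, N=400, T=100.0, dt=0.05, seed=0):
    # Euler-integrate dh/dt = -h + J @ tanh(h); J Gaussian with variance g**2/N
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
    h = rng.normal(0.0, 1.0, size=N)       # random initial state
    for _ in range(int(T / dt)):
        h = h + dt * (-h + J @ np.tanh(h))
    return h

h_sub = simulate_tanh_network(g=0.5)   # below g_c = 1
h_sup = simulate_tanh_network(g=2.0)   # above g_c = 1
print(np.linalg.norm(h_sub) / 20, np.linalg.norm(h_sup) / 20)  # rms per neuron
```

Below g_c the activity decays to the zero fixed point; above it, fluctuations are sustained, consistent with the sharp transition reported in [6].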
In this work, we study the existence and the properties of the transition to chaotic dynamics in a broad range of models that span the two above mentioned model classes.
In section II we introduce a general architecture for random recurrent networks with multiple subpopulations, obeying smooth rate-based dynamics. We show the correspondence between randomly diluted networks in the balanced state and networks with Gaussian distributed connections with the same multiple-population architecture. Section III introduces the mathematical framework of the dynamic mean field theory (DMFT) used in the analytical investigation of the properties of the network state, and extends the theory of the transition from fixed point to chaos, previously derived for a single Gaussian population, to the more general architecture. In section V we apply the theory to the simple case of a single inhibitory neuronal population with a threshold power-law transfer function. The example of a two-population model is studied in section VI. Although the DMFT is more complex than for the single-population network, we show that the two-population network exhibits a transition from fixed point to chaos that is similar to that of the single-population case. The role of the single-neuron nonlinear transfer function is elucidated in section VII. First, we show that for a sufficiently sharp transfer function, chaos exists in large systems at all gain values. Second, the shape of this function determines the critical behavior of the relaxation times and the largest Lyapunov exponent near the transition.
Unlike the rate-based smooth dynamics, in models with spiking dynamics a fixed point state does not exist as long as some of the neurons are firing. Nevertheless, it is interesting to explore the conditions under which the spiking network exhibits a transition from a state with stationary inputs and firing rates to a state where the underlying inputs and rates fluctuate in time. In section VIII we use a Poisson spiking model to show that when the synaptic integration time is much larger than the inverse of the neuronal firing rates, the synaptic current exhibits a sharp transition from a fixed point to chaotic dynamics, similar to that of the rate dynamics. The behavior for large but finite synaptic integration times is analyzed using a scaling analysis, similar to that of a second-order phase transition. The implications of the results for the understanding of the dynamics and computations in cortical circuits are discussed in section IX.

II. MODEL

A. Randomly diluted network with multiple populations
We consider a network of neurons composed of P subpopulations, which are assumed for convenience to have equal size, N (Fig. 1A). The recurrent connections, J^ij_kl, denote the synaptic efficacy from the presynaptic jth neuron of the lth population to the postsynaptic ith neuron of the kth population, where k, l = 1, ..., P and i, j = 1, ..., N. The connectivity is randomly diluted, so that each connection J^ij_kl is nonzero with probability p = K/N and zero otherwise. Thus, the mean number of inputs to each neuron is K. Some of the populations are assumed to be inhibitory. For an inhibitory population l, all J^ij_kl are nonpositive. Conversely, all nonzero outgoing connections of an excitatory population are positive. In addition, each neuron of the k-th population receives a constant, uniform input equal to m0 W_k, where the parameter m0 denotes the mean activity (firing rate) of neurons in the input population, and the W_k are assumed positive. Throughout this work, we focus on networks with a high degree of connectivity, i.e., N, K ≫ 1. Although most previous analytical work on the dynamics of dilute neuronal networks assumed that the connectivity is sparse (i.e., 1 ≪ K ≪ N) [8,10,24,25], here we will assume only that 1 ≪ K < N, allowing for the dense regime as well (in [9,26] densely connected networks were studied, but the focus was on spatial patterns and correlations; the dynamical properties and the temporal autocorrelations were not addressed). In order for the dynamics to be affected by the fluctuations in connectivity, we assume all nonzero connections equal to J_kl/√K, as discussed below. Similarly, the external connections scale as W_k = ω_k √K.
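As an illustration, the connectivity ensemble defined above (nonzero entries J_kl/√K with probability p = K/N, signs fixed by the presynaptic population) can be sampled as follows; the population sizes and coupling values are hypothetical placeholders:

```python
import numpy as np

def dilute_connectivity(N, K, J, signs, seed=0):
    """Random dilute connectivity for P populations of size N each.

    J[k, l]  : coupling scale from population l to population k
    signs[l] : +1 for an excitatory population l, -1 for inhibitory
    Each connection is nonzero with probability p = K/N and then equals
    signs[l] * |J[k, l]| / sqrt(K).
    """
    P = len(signs)
    rng = np.random.default_rng(seed)
    p = K / N
    W = np.zeros((P * N, P * N))
    for k in range(P):           # postsynaptic population (rows)
        for l in range(P):       # presynaptic population (columns)
            mask = rng.random((N, N)) < p
            W[k*N:(k+1)*N, l*N:(l+1)*N] = mask * signs[l] * abs(J[k, l]) / np.sqrt(K)
    return W

# illustrative two-population (E, I) example; the numbers are not from the text
J = np.array([[1.0, 2.0], [1.5, 1.8]])
W = dilute_connectivity(N=500, K=100, J=J, signs=[+1, -1])
```

Columns belonging to the inhibitory population are nonpositive, excitatory columns are nonnegative, and each row receives on average K inputs from each population, as in the text.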

B. Rate dynamics
We first study a firing rate model for the dynamics of neuronal activity. The state of each neuron, say with indices (i, k), at time t, is given by its total synaptic current, h^i_k(t), which obeys the first-order nonlinear dynamics

dh^i_k(t)/dt = −h^i_k(t) + Σ_{l=1..P} Σ_{j=1..N} J^ij_kl φ(h^j_l(t)) + m0 W_k.

The first term is a decay term, the second is the synaptic input from the network, and the last is the synaptic input from the external source. For convenience we set all synaptic time constants to unity. The nonlinear transfer function φ(h) denotes the firing rate of a neuron with input synaptic current h, and is analogous to the neuronal input-current to firing-rate transfer function, known as the f-I curve.
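A simple Euler discretization of this rate dynamics might look as follows (the step size and the particular transfer function are illustrative; all time constants are set to unity, as in the text):

```python
import numpy as np

def step_rates(h, W, h_ext, phi, dt=0.05):
    # one Euler step of dh/dt = -h + W @ phi(h) + h_ext (unit time constants)
    return h + dt * (-h + W @ phi(h) + h_ext)

def simulate(W, h_ext, phi, T=50.0, dt=0.05, seed=1):
    rng = np.random.default_rng(seed)
    h = rng.normal(0.0, 0.5, size=W.shape[0])
    for _ in range(int(T / dt)):
        h = step_rates(h, W, h_ext, phi, dt)
    return h

def relu(h):
    # threshold-linear transfer function (nu = 1)
    return np.maximum(h, 0.0)

# sanity check: with no recurrence the current relaxes to the external input
h_final = simulate(np.zeros((50, 50)), np.ones(50), relu)
print(h_final[:3])   # close to [1. 1. 1.]
```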

C. Effective gaussian connectivity
Past work on random, highly connected systems has shown that, under fairly general conditions, in the limit of large K, the system's behavior depends only on the first two moments of the connectivity matrix [27,28]. Hence the connectivity matrix J^ij_kl can be replaced by a random matrix of Gaussian distributed connections whose mean and variance are matched to those of the dilute network. We thus consider a general dynamics of a multiple-population network with a fully connected connectivity matrix of Gaussian distributed connections, Eq (3), where m_l are the mean population activities. The coefficients J^ij_kl are quenched Gaussian variables with zero mean and variance g^2_kl/N. We refer to g as the gain of the synaptic input. The contribution of the mean connections, ḡ_kl/N, is represented by the third term in the RHS of Eq (3). In the dilute network, the mean connection between two populations equals pJ_kl/√K = √K J_kl/N. Hence the corresponding parameters of the equivalent Gaussian network follow, Eq (6). Note that in the dilute networks, g_kl and ḡ_kl are related through Eq (7). In the theory below, we will analyze the dynamics of the Gaussian network, defined by Eq (3), in its generality; namely, we will consider the parameters describing the mean and the variance of the connections, ḡ_kl and g_kl, as independent parameters. In addition, we will not restrict ourselves to the case in which the mean connections ḡ_kl and h0_k are large. The relation (7), as well as the scaling of ḡ_kl and h0_k with O(√K), will be adopted when applying the theory to the balanced dilute networks. The numerical simulations will use both Gaussian and random dilution and will show their equivalence.
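The moment matching can be checked empirically. A dilute block with entries J_kl/√K present with probability p = K/N has per-entry mean pJ_kl/√K = √K J_kl/N and per-entry variance (J_kl^2/K)·p(1−p) = J_kl^2(1−p)/N, which identifies ḡ_kl = √K J_kl and g^2_kl = J_kl^2(1−p). A quick numerical sketch (parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, Jkl = 2000, 200, 1.3            # placeholder sizes and coupling
p = K / N
# one block of the dilute matrix: J_kl/sqrt(K) present with probability p
entries = (rng.random(N * N) < p) * (Jkl / np.sqrt(K))

mean_th = np.sqrt(K) * Jkl / N        # = gbar_kl / N, with gbar_kl = sqrt(K)*J_kl
var_th = Jkl**2 * (1 - p) / N         # = g_kl**2 / N, with g_kl**2 = J_kl**2*(1-p)

print(entries.mean(), mean_th)
print(entries.var(), var_th)
```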

D. The balanced regime
In the diluted network model, the net input from the l-th population to each neuron of the k-th population is proportional to ḡ_kl and h0_k, and hence scales as √K. On the other hand, the fluctuations scale as g_kl, and hence are of order 1. Thus, it would seem that whenever the degree of connectivity, K, is high, the fluctuations induced by the random dilution will have a negligible effect on the dynamics, reducing the system to one with uniform connections and anomalously strong recurrent and external inputs. In fact, the system avoids this saturated state by dynamically canceling the mean excitatory contributions against the inhibitory ones, resulting in net mean inputs that are of order 1, the same order as the fluctuations. Balanced states in networks with strong excitatory and inhibitory connections were previously studied (see [8,29]) in the context of binary or spiking networks. As we will show below, networks of units with smooth rate-based dynamics and transfer functions with finite gains also settle into this balanced state.

A. Mean field equations for the fixed points
To fully characterize the state of the system, we separate the population-averaged quantities, u_k(t), from the fluctuating ones, δh^i_k(t). The fluctuating inputs obey a linear filtering equation: the spatiotemporal fluctuations in the currents, δh^i_k(t), are low-pass temporal filters of the fluctuating synaptic inputs. In the limit of a large number of inputs per neuron, these quantities obey Gaussian statistics with zero mean and zero spatial correlations, but with temporal correlations that need to be evaluated self-consistently, as explained below.
First, we study the fixed point solution of the network dynamics, corresponding to a time-independent state, where δh^i_k(t) = δh^i_k = δη^i_k are Gaussian distributed with zero mean and variance ∆_k ≡ ⟨(δh^i_k)^2⟩. This quantity is calculated self-consistently as ∆_k = Σ_l g^2_kl C_l, where C_l = ⟨φ^2(h)⟩ are the average autocorrelations of the neuronal activities. The constants u_k are determined self-consistently via Eq (9). Note that in Mean Field Theory all averages denote integration over the Gaussian variables (z in the above equations), which have zero mean and unit variance.

B. The balance equations
In balanced architectures, both ḡ_kl and h0_k are of order √K (see Eq (6)). In this case, the self-consistent equations for the mean activities m_k assume a simple form. This is because, for the system to settle into an unsaturated state, u_k must be of order 1; hence Eq (9) yields, to leading order in √K, the balance equations Σ_l ḡ_kl m_l + h0_k = 0 (Eq (15)). Since the mean rates are non-negative, Eq (15) can be obeyed only for a range of ḡ_kl and h0_k values. Stability of the balanced state (see [8]) further restricts the parameter regimes. We refer to these restrictions on the parameters as the balance conditions. Substituting the solution of the balance equations into Eq (14) yields equations for the residual, order-1 mean inputs, u_k.
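To leading order, the balance equations are therefore a linear system for the mean rates. A toy two-population example (the coupling and input values are hypothetical; the inhibitory signs are folded into the matrix):

```python
import numpy as np

# Balance equations to leading order: sum_l gbar_kl * m_l + h0_k = 0,
# i.e. a linear system for the mean rates m.
Jmat = np.array([[1.0, -2.0],    # onto E: J_EE, -J_EI
                 [1.0, -1.8]])   # onto I: J_IE, -J_II
omega = np.array([1.0, 0.8])     # external weights
m0 = 1.0                         # external rate

m = np.linalg.solve(Jmat, -m0 * omega)
print(m)   # both rates must come out non-negative for a valid balanced state
```

Parameter sets for which the solution has a negative component violate the balance conditions discussed in the text.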

C. Stability of fixed points
Stability of the population-averaged activities

The stability equations for the population-averaged degrees of freedom are determined by considering the response of the system to perturbations in the external fields, δh0_k, which are uniform within the populations. It is convenient to define the uniform linear response, χ. Note that due to the spatial summation, this quantity is effectively averaged over J. Interestingly, the response of the population-averaged activity is coupled to the response of the population variances, described by the susceptibility χ∆. In Appendix B we derive coupled equations for the two sets of susceptibilities in the temporal Fourier domain, Eq (18). Here, χ and χ∆ are both P × P matrices (P being the number of populations), as are the matrices appearing in (18). Thus, the fixed point is stable against population-averaged perturbations provided that all the eigenvalues of the matrix in Eq (19) have negative real parts.

Stability against local perturbations
We now study the stability of the fixed point against small perturbations in the form of infinitesimal local fields h0i_k. It is convenient to define the local susceptibility matrix. The average of the off-diagonal elements of this matrix is zero, and their variance is O(1/N); hence we focus on the mean-square susceptibility matrix, G, defined in the Fourier domain. In Appendix C we derive the expression for G in terms of a P × P matrix M. Thus, the fixed points are stable provided that the real parts of all the eigenvalues of the P-dimensional matrix M are less than 1. In general, M is not symmetric; however, the largest (in absolute value) of its eigenvalues is real. We will call M the stability matrix.
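For threshold-linear units the stability matrix takes the explicit form M_kl = g^2_kl H(−x_l) (used again in sections V and VI), so the fixed-point criterion, max Re eig(M) < 1, is easy to evaluate. A sketch, assuming this form of M (the Gaussian tail H is computed from the complementary error function):

```python
import numpy as np
from math import erfc, sqrt

def H(x):
    # Gaussian tail function: H(x) = integral_x^inf dz exp(-z**2/2)/sqrt(2*pi)
    return 0.5 * erfc(x / sqrt(2.0))

def stability_matrix(g, x):
    # M_kl = g_kl**2 * H(-x_l): column l is scaled by H(-x_l);
    # g is the P x P gain matrix, x the vector of normalized mean inputs
    return (np.asarray(g) ** 2) * np.array([H(-xl) for xl in x])[None, :]

def fixed_point_stable(g, x):
    eigs = np.linalg.eigvals(stability_matrix(g, x))
    return bool(np.max(eigs.real) < 1.0)

# single population at x = 0: M = g**2 / 2, so the criterion reads g < sqrt(2)
print(fixed_point_stable([[1.0]], [0.0]), fixed_point_stable([[1.5]], [0.0]))
```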
In conclusion, we have derived two stability conditions: one related to population-averaged perturbations, Eq (19), and a second related to local perturbations, associated with Eq (22). Note that the mean interactions ḡ do not appear in the latter condition. This is because the contribution to the off-diagonal elements of (21) from the uniform susceptibility is only of order 1/N. For the fixed point to be stable, both conditions must hold. However, the instability associated with each condition has a different implication. The population-averaged instability signals either a runaway (as we will show in a concrete example in Section V) or a transition to another stable fixed point, a stable limit cycle, or some other coherent spatiotemporal state. On the other hand, as we will show below, the instability in (22) signals a transition to an asynchronous chaotic state. Which of the instabilities occurs first when one varies one of the parameters depends on the specific architecture and parameter set. Specific examples will be shown below.

IV. CHAOTIC STATE: DYNAMIC MEAN FIELD THEORY
The chaotic state is an asynchronous state with stationary statistics, governed by the local-field autocorrelation functions ∆_k(τ) = ⟨δh^i_k(t) δh^i_k(t + τ)⟩, where the angular brackets denote averages both over neurons in the population and over time t. The self-consistent equations for these functions, derived from the Dynamic Mean Field Theory (DMFT), are given in Eq (26) (see Appendix A for a detailed derivation), where C_k(τ) are the firing rate autocorrelation functions, which depend on ∆_k(τ). Here, both y and z are Gaussian distributed random variables with zero means and unit variances. The boundary conditions for the solution are: ∂∆_k(0)/∂τ = 0; ∂^2∆_k(∞)/∂τ^2 = 0; and ∆_k(0) = ∆0_k. The first condition stems from the general fact that ∆(τ) is a symmetric, continuous function. The second one comes from the requirement that ∆(τ) converges to a finite value as τ → ∞ for a chaotic solution. Solution of (26) subject to these boundary conditions yields a unique ∆_k(τ) that converges to a fixed value, ∆_k(∞), at long times. Except for networks with h → −h symmetry, the quantity ∆_k(∞) is in general not zero, and represents the variance of the Gaussian distribution of the time-averaged synaptic currents. The fluctuations in these inputs determine the fluctuations in the time-averaged firing rates of individual neurons. Details of the derivation of the DMFT are given in Appendix A.
Alternative formalisms for deriving dynamic mean field equations for such architectures were developed by Faugeras and coworkers using the McKean-Vlasov Fokker-Planck formalism [30], and, for stochastic networks, via path integrals (e.g., [31,32]).

Lyapunov exponent

In the chaotic state, we expect the squared susceptibility to show an exponential divergence with time, with a rate constant given by the largest Lyapunov exponent (LE). Extending previous calculations [6], we show in Appendix C that the long-time behavior of G(τ) is determined by the ground state of a quantum mechanical problem with the Hamiltonian operator H defined in Eq (30). Finally, the LE is determined by the ground-state energy ǫ0 of H. Note that, in general, H is not Hermitian. However, complex values of the Lyapunov exponents would lead to unphysical oscillations of the squared susceptibility, and so the ground-state energy is expected to be real and negative; a direct rigorous proof is still missing. In the regime of a stable fixed point, M is time independent and ǫ0 > 0, recovering the same stability criterion as above. A transition from a stable fixed point to chaotic dynamics occurs as ǫ0 vanishes.

V. SINGLE INHIBITORY POPULATION WITH THRESHOLD-POWER LAW TRANSFER FUNCTION
Solving the DMFT for general networks requires extensive numerics. Many of the salient features are captured by the simplest case of a single inhibitory population (Fig. 1B) driven by a constant external excitatory input. The system's dynamics is characterized by the recurrent inhibitory mean gain parameter ḡ < 0, the gain parameter of the synaptic fluctuations, g^2 = N⟨(J^ij)^2⟩, and the excitatory external input, h0 > 0. The system is further simplified by assuming a transfer function of the threshold power-law form [33,34], φ(h) = [h]_+^ν, where [x]_+ = max(x, 0). For reasons explained below, this monomial form is sufficiently general when one studies the properties of the chaotic instability in networks with continuous threshold transfer functions.

A. Fixed point and its stability
In this model, the mean field equations for the fixed point are particularly simple. First, the mean input and its variance are given, respectively, by u = ḡm + h0 and ∆ = g^2 C, where m is the spatially averaged activity, ⟨φ_i⟩, δh_i = h_i − u, and C is the mean-square activity, ⟨φ^2_i⟩. The mean field equations for m and C express these quantities as Gaussian averages of φ and φ^2 over the distribution of inputs. Here x ≡ u/√∆ represents the mean input in units of the standard deviation, and we have used the homogeneity property of the transfer function. Substituting (37) in Eq (35), the parameter x can be determined from Eq (39), where g̃ = g∆^((ν−1)/2). The values of m and ∆ are then evaluated using Eqs (34), (38), and (36).
As one increases g, the fixed point becomes unstable. Whether the first instability is the chaotic one (Eq (22)) or the uniform one (Eq (18)) will generally depend on the parameter set (ν, ḡ, and h0).

Chaotic stability. A transition to a chaotic state occurs when the fixed point loses local stability, i.e., when the condition in Eq (40) is met. For a threshold-linear transfer function (ν = 1), g̃ = g is independent of ∆, and the critical values g_c and x_c, at which a bifurcation from a fixed point to chaotic dynamics occurs, are determined by Eqs (39) and (40) and are independent of h0. In this case the relevant Gaussian average is H(−x), with H(x) = ∫_x^∞ Dz, and the transition occurs at g_c = √2 and x_c = 0. For nonlinear transfer functions, the critical values x_c and g_c depend on the values of h0 and ḡ.

Uniform stability. The stability condition of a single-population network against uniform perturbations is given by Eq (41) (see Appendix B, where the uniform stability condition, Eq (18), is derived for a single population). For ν = 1, φφ′′ vanishes, and the third term on the LHS of (41) is negative for ḡ < 0; hence the chaotic instability (Eq (40)) always occurs first (i.e., at lower values of g). For ν ≠ 1, whether a uniform instability of the fixed point occurs will depend on the parameters ḡ and h0.
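The critical value g_c = √2 can be recovered by quadrature: for ν = 1, φ′ = Θ, so the Gaussian average of [φ′]^2 equals H(−x), and the local instability condition g^2 H(−x) = 1 evaluated at x_c = 0 gives g_c = √2. A numerical check, assuming this form of the condition (grid resolution is arbitrary):

```python
import numpy as np

# Local instability condition for nu = 1: g_c**2 * <Theta(x + z)**2>_z = 1,
# where the Gaussian average <Theta(x + z)> equals the tail function H(-x).
z = np.linspace(-8.0, 8.0, 200001)
dz = z[1] - z[0]
Dz = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)

def mean_sq_slope(x):
    # <[phi'(x + z)]**2> for threshold-linear phi: reduces to H(-x)
    return np.sum((z + x > 0) * Dz) * dz

g_c = 1.0 / np.sqrt(mean_sq_slope(0.0))
print(g_c)   # close to sqrt(2) ~ 1.4142
```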

B. Chaotic state
In the threshold-linear model, it is useful to define the normalized autocorrelation, q(τ) (Eq (42)), as well as its asymptotic value, q(∞), which measures the normalized variance of the dynamics, namely, the (spatially averaged) temporal variance of the local fields normalized by the squared amplitude. The autocorrelation function obeys Eq (25). Normalizing by ∆0 and using the homogeneity of the transfer function as above, we obtain a Newtonian equation of motion for the normalized autocorrelation (42), Eq (45). The potential V is given in terms of Gaussian integrals, where Dx = dx exp(−x^2/2)/√(2π), and we have used the equality ∆0 = ∆(0) and g̃ ≡ g∆0^((ν−1)/2). The initial conditions for (45) are q(0) = 0, ∂q(0)/∂τ = 0, and ∂^2 q(∞)/∂τ^2 = 0. The second condition stems from the general fact that ∆(τ) is a symmetric, continuous function. The last one comes from the requirement that ∆(τ) converges to a finite value as τ → ∞. These three conditions yield a unique solution and a unique value for the normalized mean input x. Due to the existence of a potential, x can be obtained without explicitly solving for q(τ), as follows. The above boundary conditions imply that the initial energy equals the potential energy, V(0), while the final energy equals the final potential energy, V(q(∞)). Thus, conservation of total energy yields V(0) = V(q(∞)). Finally, for q(∞) to be an equilibrium point, the force must vanish; hence ∂V/∂q = 0 at q(∞). These two equations determine both q(∞) and x. Once x is known, (45) is integrated numerically to yield q(τ). Finally, ∆(τ) is evaluated by solving the mean field equation (36) with ∆ = ∆0.
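The two conditions, V(0) = V(q(∞)) and a vanishing force at q(∞), can be solved numerically for the pair (x, q(∞)). The sketch below uses a toy one-parameter potential, NOT the network's actual V from the text, purely to illustrate the two-condition shooting procedure; for this toy the solution is known in closed form (x = 2/9, q(∞) = 2/3):

```python
import numpy as np

# Toy potential (NOT the network's V): V(q; x) = x*q**2/2 - q**3/3 + q**4/4
def V(q, x):
    return x * q**2 / 2 - q**3 / 3 + q**4 / 4

def dVdq(q, x):
    return x * q - q**2 + q**3

def residual(v):
    x, q = v
    # energy conservation V(q_inf) = V(0) and vanishing force V'(q_inf) = 0
    return np.array([V(q, x) - V(0.0, x), dVdq(q, x)])

def solve_shooting(guess=(0.2, 0.7), tol=1e-12):
    # Newton's method with a forward-difference Jacobian
    v = np.array(guess, dtype=float)
    for _ in range(100):
        F = residual(v)
        if np.max(np.abs(F)) < tol:
            break
        eps = 1e-7
        Jac = np.empty((2, 2))
        for j in range(2):
            vp = v.copy()
            vp[j] += eps
            Jac[:, j] = (residual(vp) - F) / eps
        v = v - np.linalg.solve(Jac, F)
    return v

x_star, q_inf = solve_shooting()
print(x_star, q_inf)   # analytically 2/9 and 2/3 for this toy potential
```

The same two-equation solve, applied to the network's actual potential, yields the x used to integrate (45).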
Stability of the chaotic solution. The existence of a bounded chaotic phase, in which the mean activity of the network does not diverge and the trajectories of the local fields remain bounded, requires stability of the uniform mode. Unfortunately, a theory of uniform stability in a time-dependent dynamical state is lacking. However, in the threshold-linear case, one can find simple arguments for the existence of a chaotic solution. In this case, the normalized mean input x < 0 in the chaotic phase is independent of h0, and one finds that a solution of the mean field equations exists only when the condition in Eq (47) holds; smaller values of |ḡ| entail dynamical instability.
For nonlinear transfer functions, the above argument does not hold. For ν = 2, for example, a solution of the MF equations exists for any value of h0 and ḡ. However, numerical simulations show that an instability in the chaotic phase exists, as can be seen in Fig. 2.
We note that this instability results from the unboundedness of φ. For a φ with a saturation level, the dynamics will always be bounded, but a crossover is expected from fluctuations spanning the linear dynamic range of the neurons, when the net inhibition is large, to 'epileptic' fluctuations, in which neurons fluctuate between their saturated levels, for weak inhibition.
Existence of a fixed point solution. An interesting result implied by Eq (40) is that a stable FP exists only when ν > 1/2. In contrast, when ν ≤ 1/2, the RHS of the FP stability condition, (40), diverges, indicating that no stable fixed point solution exists for finite g; depending on g, the system is either in a stable chaotic state or diverges. This prediction is confirmed by the numerical simulations, Fig. 2A, in which the normalized variance of the fields, q(∞), is plotted as a function of ν. For ν values of 0.5 or smaller, the system is in a chaotic state even for small values of g.
The instability of the fixed point for small ν is due to the presence of positive local fields arbitrarily close to zero. Thus, for a system of finite size, where the positive local fields always have a nonzero minimum value, we expect that a fixed point will be stable at sufficiently small values of g. In the following sections, we focus on networks with threshold-linear (ν = 1) and quadratic (ν = 2) transfer functions, which exhibit a fixed point, a chaotic regime, and a transition between them.
Finally we note that the same arguments hold for the multiple population case as well.

C. Phase diagram
In Figure 2B we show the phase diagram for the threshold-linear case (ν = 1), depicting the regimes of stable fixed point, chaos, and unstable dynamics in the parameter space of g and ḡ. For values of g < √2, the network settles into a fixed point. For larger gain values, if the inhibition is strong enough, i.e., if the condition in (47) holds, then the dynamics is chaotic and bounded to a finite region of phase space. For lower values of the uniform inhibition, the mean activity diverges. Note that due to the semi-linearity of the transfer function, the phase diagram is independent of the magnitude of h0, as long as it is positive. In a diluted network, the phase plane is reduced to a single line; for a sparse network, where K ≪ N, ḡ = −√K g (dashed line in 2E). Thus, for large values of K, as in the balanced network case, the chaotic state at g > √2 is always stable. In Fig. 2C the phase diagram for the semi-quadratic case (ν = 2) is presented for h0 = 1. For low values of |ḡ|, the uniform instability occurs at lower g than the chaotic instability, and no stable chaotic phase exists. For larger values of |ḡ|, a critical transition between a fixed point and chaotic dynamics exists. For still larger values of the gain g, the dynamics always diverges; this instability is shown by numerical simulation in the phase diagram. For a diluted network, where |ḡ| = O(√K), there is always a chaotic transition, as shown in the inset of Fig. 2C.
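The phase diagram can also be explored by brute-force simulation: integrate the single inhibitory threshold-linear network at a point (g, ḡ) and classify the late-time behavior. The classifier below is a rough sketch (network size, thresholds, and the two sample points are illustrative choices; finite-size effects blur the sharp large-N boundaries):

```python
import numpy as np

def classify(g, gbar, N=300, h0=1.0, T=300.0, dt=0.05, seed=2):
    """Crudely classify the late-time behavior of the single inhibitory
    threshold-linear network, Gaussian connectivity N(gbar/N, g**2/N)."""
    rng = np.random.default_rng(seed)
    J = rng.normal(gbar / N, g / np.sqrt(N), size=(N, N))
    h = rng.normal(0.0, 0.5, size=N)
    steps = int(T / dt)
    tail = []
    for t in range(steps):
        h = h + dt * (-h + J @ np.maximum(h, 0.0) + h0)
        if not np.isfinite(h).all() or np.abs(h).max() > 1e6:
            return "diverge"
        if t > steps // 2:
            tail.append(h[:20].copy())         # track a few units
    fluct = np.array(tail).std(axis=0).mean()  # temporal std, averaged over units
    return "fp" if fluct < 1e-3 else "chaos"

print(classify(g=0.8, gbar=-5.0), classify(g=2.0, gbar=-20.0))
```

Sweeping such calls over a grid of (g, ḡ) reproduces the qualitative layout of the phase diagram.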

D. Analytical evaluation of the Lyapunov exponent
In the single-population case, the evaluation of the Lyapunov exponent is also relatively simple. In this case, the single-component Hamiltonian is H = −∂^2/∂τ^2 + W(τ), with the quantum potential W = −∂^2V/∂q^2, where V is the classical potential (Fig 3A), which can be easily evaluated once q(τ) is known (Fig. 3B). Indeed, for the semi-linear network we find that the ground-state energy is positive for g < √2 and negative otherwise, implying that λ_L is negative for g < √2 and changes sign to a positive value for g > √2, as expected in a chaotic state; see Fig. 2C-F. For the nonlinear case with ν = 2, the Lyapunov exponents are calculated near and above the transition in section VII.
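The ground-state energy ǫ0 of H = −∂^2/∂τ^2 + W(τ) can be obtained by a standard finite-difference diagonalization. As a sanity check, the sketch below uses the harmonic potential W(τ) = τ^2, whose exact ground-state energy is 1; the network's W(τ), evaluated from q(τ), would be substituted in practice:

```python
import numpy as np

# Finite-difference ground state of H = -d^2/dtau^2 + W(tau) in a Dirichlet box.
# Sanity check: harmonic W(tau) = tau**2 has exact ground-state energy 1.
L, M = 10.0, 1500
tau = np.linspace(-L, L, M)
d = tau[1] - tau[0]
W = tau**2

H = (np.diag(2.0 / d**2 + W)
     - np.diag(np.ones(M - 1) / d**2, k=1)
     - np.diag(np.ones(M - 1) / d**2, k=-1))
eps0 = np.linalg.eigvalsh(H)[0]
print(eps0)   # close to 1.0
```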

E. Numerical simulations
[Figure: (A) The form of the potential V computed with the exact value of x (solid line) and with x deviating by ±1% from the correct value (dashed lines); x is determined by the requirement that the maximum of the potential at nonzero q equals its value at zero q. (B) Numerical solution for the normalized variance q(∞) as a function of g. (C)-(F) The normalized autocorrelation function q(τ), found by integrating the equations of motion (45) using the correct value of x for two values of g (top), and the corresponding quantum potential W = −∂^2V/∂q^2 (bottom); the ground-state energies ǫ0 (horizontal lines), found numerically, are negative in both cases, implying a positive Lyapunov exponent.]

Numerical integration of the full network equations (Eq (32)) verifies our theoretical predictions. For the threshold-linear network, when g < √2 the network settles into
a fixed point, with the expected mean and variance of the local quenched fields (Fig. 4A). When g > √2, chaos sets in, with temporal fluctuations that increase with g. The population-averaged currents remain almost constant, with small finite-size fluctuations (Fig. 4B and 4C). The chaotic behavior is characterized, first, by the decay of the autocorrelation (Fig. 4D), which agrees with the theoretical q(τ), and, second, by a positive LE. The latter was calculated from simulations using Wolf's algorithm [35]. The resulting values, λ_L = 0.121 and 0.225 for g = 2.2 and 3.0, respectively, agree well with the values 0.126 and 0.232 obtained by numerically calculating the ground state of the potential W(τ) (Fig. 2D). Simulations of a semi-quadratic network also verify our analytical results. Simulation results near and above the chaotic transition are given below, in section VII.
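A largest Lyapunov exponent can also be estimated by evolving a tangent vector along the trajectory and renormalizing (Benettin's method, a simpler stand-in for Wolf's algorithm used in the text). The sketch below applies it to the classical tanh network, where the sign change at g = 1 is well established; all parameters are illustrative:

```python
import numpy as np

def largest_le(g, N=200, T=200.0, dt=0.05, seed=3):
    """Largest Lyapunov exponent of dh/dt = -h + J tanh(h) (Gaussian J,
    variance g**2/N) via tangent-vector evolution with renormalization."""
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
    h = rng.normal(0.0, 1.0, size=N)
    v = rng.normal(0.0, 1.0, size=N)
    v /= np.linalg.norm(v)
    log_growth = 0.0
    for _ in range(int(T / dt)):
        phi_p = 1.0 / np.cosh(h) ** 2            # tanh'(h), evaluated at time t
        v = v + dt * (-v + J @ (phi_p * v))      # linearized (tangent) dynamics
        h = h + dt * (-h + J @ np.tanh(h))       # full dynamics
        nv = np.linalg.norm(v)
        log_growth += np.log(nv)
        v /= nv
    return log_growth / T

print(largest_le(0.5), largest_le(2.0))   # negative below g_c = 1, positive above
```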

Randomly diluted networks
The above numerical results were obtained for Gaussian-distributed synaptic connections. However, as shown above, in the limit of a large mean number of connections per neuron, K, the DMFT is expected to hold also for randomly diluted networks. In the case of a single population with a threshold-linear transfer function, comparing the behaviors of different connectivity schemes (Gaussian, sparse and dense dilution) is relatively simple, since the normalized autocorrelation q(τ) depends only on g, not on ḡ (see above). These expectations are borne out by our numerical simulations, shown in Fig. 5. For simulations performed on networks with the same variance in their connectivity, the calculated normalized autocorrelation q(τ) is identical in the Gaussian network and in randomly diluted networks with both p = K/N = 0.05 (sparse network) and p = 0.8 (dense network).
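The statement that only the first two moments of the connectivity matter can be illustrated by constructing the three ensembles with matched variance. The normalization below, J0 = g/√(Np(1−p)), is our illustrative choice for enforcing N·Var(J_ij) = g² in the diluted case:

```python
import numpy as np

def gaussian_J(N, g, gbar, rng):
    """Gaussian ensemble: mean -gbar/N, standard deviation g/sqrt(N)."""
    return -gbar / N + (g / np.sqrt(N)) * rng.standard_normal((N, N))

def diluted_J(N, g, p, rng):
    """Randomly diluted inhibitory ensemble: entry -J0 with probability p.
    J0 is chosen so that N * Var(J_ij) = g**2, matching the Gaussian case."""
    J0 = g / np.sqrt(N * p * (1.0 - p))
    return -J0 * (rng.random((N, N)) < p)

rng = np.random.default_rng(1)
N, g = 2000, 1.5
ensembles = {
    "gaussian":      gaussian_J(N, g, gbar=1.0, rng=rng),
    "sparse p=0.05": diluted_J(N, g, p=0.05, rng=rng),  # K = pN = 100
    "dense  p=0.8":  diluted_J(N, g, p=0.8, rng=rng),
}
for name, J in ensembles.items():
    print(name, round(N * J.var(), 2))   # all close to g**2 = 2.25
```

Since q(τ) depends only on g, networks built from any of these matrices are predicted to exhibit the same normalized autocorrelation, as Fig. 5 confirms.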

VI. TWO POPULATIONS WITH THRESHOLD-LINEAR TRANSFER FUNCTION
Here we briefly address the application of the general theory to a two-population network with one excitatory (denoted E) and one inhibitory (I) population (Fig. 1C). For simplicity we study the example of the semi-linear transfer function. Generalization to other values of ν is straightforward, as presented in Section III.

A. Fixed point and its stability
The fixed-point equations for the variance and mean of the synaptic current, along with the definition of x_k, follow from the general theory above. The eigenvalues of the stability matrix M_kl = g²_kl H(−x_l) are given by Λ± = ½[Tr M ± √((Tr M)² − 4 det M)] (53). Note that the two eigenvalues are real. For the fixed point to be stable, Λ₊ < 1; thus, the transition to chaos occurs at parameters such that Λ₊ = 1. When this eigenvalue becomes larger than 1, one must solve the DMF equations for the chaotic state.
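The stability computation is easy to sketch numerically. Here H is the Gaussian tail function; the gain matrix and the values of x_k below are placeholders (in the full theory, x_k is determined self-consistently by the fixed-point equations):

```python
import numpy as np
from math import erfc, sqrt

def H(x):
    """Gaussian tail function H(x) = integral of Dz from x to infinity."""
    return 0.5 * erfc(x / sqrt(2.0))

def stability_eigs(g2, x):
    """Eigenvalues of M_kl = g_kl^2 H(-x_l) for a 2x2 matrix g2 of squared
    gains; x = (x_E, x_I) are the normalized mean inputs (placeholders)."""
    Hx = np.array([H(-xl) for xl in x])
    M = g2 * Hx                          # scales column l by H(-x_l)
    return np.linalg.eigvals(M)

# at x = 0, H(0) = 1/2; with unit gains the largest eigenvalue is exactly 1,
# i.e. this parameter point sits exactly at the transition Lambda_+ = 1
eigs = stability_eigs(np.array([[1.0, 1.0], [1.0, 1.0]]), x=(0.0, 0.0))
lam_plus = max(eigs.real)
print(lam_plus)                  # close to 1: marginal stability
print(np.abs(eigs.imag).max())   # 0: both eigenvalues are real
```

Because all entries of M are non-negative, the discriminant (Tr M)² − 4 det M is non-negative for a 2×2 matrix, which is why both eigenvalues are real.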

B. Chaotic state
The DMF equations can be written, for k ∈ {E, I}, in terms of the normalized autocorrelation function q_k(τ) of each population, Eq. (54). Unlike the simple case of a single population, no classical potential function can be defined for the two-particle motion in (54). Nevertheless, the dynamical equations can be integrated numerically, iteratively finding the values of x_k that yield the desired asymptotic behavior of the normalized variances. Likewise, in general, the Hamiltonian governing the Lyapunov susceptibility G(τ) is not Hermitian. However, as stated above, we expect the ground state to be real, since the elements of G(τ) are non-negative by definition, and a complex ground state would imply oscillations around zero (see Appendix C).

C. Numerical results
Below we describe the chaotic state of this network, based on numerical investigations. In these simulations we focus on the balanced regime, in which the mean contribution of each population to the local input is of order √K. In this case, to leading order, excitation and inhibition cancel each other, and the mean activity of each population is determined by the balance equations, where W = [W_E, W_I]^T is the vector of feedforward connections and J is the 2×2 matrix of mean recurrent connections, as defined in the diluted model above. The balance conditions on the parameters require det J < 0. To demonstrate the properties of the two-population system, we fix all the connection values except for a global gain g and a parameter α, which controls the excitatory connections. For α = 0 the network reduces to the single inhibitory network with the parameters used in Section V. In Fig. 6A we show a section of the phase space with the critical line g_c(α). The critical curve is calculated by solving for the eigenvalues of the stability matrix (53) for each value of α. Fig. 6B shows an example of the stability analysis for α = 0.55. Numerical simulations confirm the theoretical results, as seen in Fig. 6C. For convenience of comparison, parameters were chosen to correspond to the same network parameters studied in [8,9]. Unlike the binary network, in which no stable fixed point exists for any g, in the threshold-linear network a transition to chaos occurs at g = 1.21 (Fig. 6B and C). Fig. 6C indicates that the normalized variances of the two populations are very similar to each other (see the discussion in Section VII below).

Figure 7 shows an example of the chaotic fluctuations in the two-population network, with the same parameters as in Fig. 6A and 6B and g = 1.6, within the chaotic phase. As expected, the inputs into neurons from both populations show large fluctuations, while the mean activity of each population is constant up to fluctuations of order 1/√K (Fig. 7A). The autocorrelation of both populations decays monotonically with |τ| to an equilibrium value (Fig. 7B). A signature of a dynamical balanced state is the substantial synchrony in the fluctuations of the excitatory and inhibitory mean activities (Fig. 7C). Consistent with Fig. 6C above, Fig. 7B shows that the autocorrelation functions of the two populations are nearly proportional to each other, implying that the normalized autocorrelation functions q_E(τ) and q_I(τ) are approximately equal (as observed recently also by [36]; see Section VII below).
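The leading-order balance condition referred to above reduces to a linear system. The sketch below solves it for illustrative numbers; the J, W, and I0 values are placeholders (chosen so that both rates come out positive), not the parameters used in the paper:

```python
import numpy as np

# Leading-order balance: the O(sqrt(K)) inputs must cancel,
#   0 = W_k * I0 + sum_l J_kl * r_l   =>   r = -J^{-1} W I0
# Illustrative signed mean-connectivity matrix (columns: E, I) and
# feedforward weights; these numbers are placeholders, not the paper's.
J = np.array([[1.0, -2.0],
              [1.0, -1.5]])
W = np.array([1.0, 0.6])
I0 = 1.0

r = -np.linalg.solve(J, W * I0)
print(r)        # balanced rates; a valid balanced solution must be positive
assert (r > 0).all()
```

Note that the mean rates are fixed by the connectivity alone, independently of g; the gain controls only the fluctuations around this balanced operating point.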

VII. CRITICAL BEHAVIOR AT THE ONSET OF CHAOS
In this section, we analyze the characteristics of the system at the onset of the chaotic state, and ask what features determine the critical properties of this transition.
As will be seen below, these properties depend on the shape of the single-neuron transfer function near the origin. In this respect, the class of power-law functions defined in (33) can represent any continuous threshold function.

Absence of a stable chaotic phase

Before we explore the critical properties near the transition to chaos, we note that not all transfer functions allow a stable chaotic solution. For example, Eq. (25) may not have a solution satisfying all boundary conditions. In these instances the dynamics are either stationary or explosive (i.e., 'epileptic').
An example of such behavior is the exponential transfer function φ(x) = e^x, as shown in Appendix D. On the other hand, all φ(x) that saturate as x → ∞ are expected to exhibit a stable chaotic state for large values of g.
In the following, we focus our study on transfer functions of the type (33) with ν > 1/2, which exhibit a phase transition from a fixed point to stable chaos.

A. Critical properties: A single population
When ν > 1/2, the critical properties near the transition to chaos depend on the value of ν. To see this, we first consider the case of a single inhibitory population, where the critical value of g is given by (39). As the chaotic state approaches the critical g from above, the amplitude of the time-dependent fluctuations becomes small; hence we can expand the MF equations in powers of q(τ) (which is small at all τ). In addition, we expand the equations in the small static parameter δx ≡ x − x_c and in the bifurcation parameter ǫ ≡ g²/g_c² − 1. To leading order, the DMF equation (54) takes the form given in Appendix E, where b and c are parameters of order unity and the exponent ρ is obtained from the expansion of the firing-rate autocorrelation C(τ). The constant term a vanishes at the transition and depends on ǫ. For a nontrivial solution to exist, all the terms in (60) should be of the same order in ǫ; hence the time scale of q(τ) scales as τ_eff ∼ 1/√ǫ. The amplitude of q(τ) scales as ǫ² for ν ≤ 3/2 and as ǫ for larger ν. Finally, a should scale as O(ǫq) (as indeed found by an explicit evaluation of a; see Appendix E), yielding the scaling behavior q(τ) = ǫ² f(τ√ǫ) for ν ≤ 3/2, where f(x) is of order 1. The Hamiltonian in (29) scales correspondingly, Eq. (63), where F(x) is another function of order 1. From (63) it follows that the largest Lyapunov exponent, Eq. (31), obeys the scaling given in (64). We have confirmed the above predictions numerically for the cases ν = 1 (Fig. 8) and ν = 2 (Fig. 9).
In this work, we assume a transfer function of the form (33). In [6], the variance of a network with a sigmoidal transfer function, φ(h) = tanh(h), and h0 = 0 was shown to scale as ∆(τ) = ǫ f(τ√ǫ), and its LE as λ_L = O(ǫ²). This behavior results from the h → −h symmetry of the system, which implies that ∆_0 vanishes at the transition. This symmetry is not expected to hold in biological networks. Thus, the scaling shown in (62) and (64), in which the LE is larger than in the symmetric case, reflects the behavior of generic neuronal networks.
In [37] the authors study the distribution of equilibrium points above criticality for a single population with a sigmoidal transfer function. An interesting result is that the mean (with respect to realizations of J_ij) number of equilibria behaves just like the Lyapunov exponent. Consequently, (64) may also elucidate the topological complexity of the flow above criticality for threshold power-law transfer functions.

B. Multiple populations
The above critical properties were derived for the case of a single population; however, we argue that they hold also for multiple populations. In the case of several populations, the transition to chaos occurs when the largest eigenvalue of the stability matrix M, (22), equals 1. Near the transition (on the chaotic side), this eigenvalue is slightly larger than 1, while the real parts of the rest remain stable. To leading order, the unstable direction R_1 in the space of q_k(τ) = 1 − ∆_k(τ)/∆_k(0) remains the same as the marginally stable eigenvector at the critical point. Hence, near the transition, the dominant direction of temporal fluctuations is along the critical eigenvector, and the dynamic equations collapse into one dimension, as in the one-population case. In general, we expect all q_k to have nonzero components along the critical direction, hence q_k(τ) ≈ q(τ)R_1, and the critical properties are the same as for a single population (see Appendix E). Similar arguments apply to the scaling properties of the quantum Hamiltonian; hence the LE scales as (64). These results are supported by simulations of an excitatory-inhibitory network, defined in Section VI. The simulations displayed in Fig. 10 demonstrate the universality of the critical properties of the transition to chaos. In the specific case of a threshold-linear model, we find for a wide range of parameters R_1 ≈ (1, 1, ..., 1), implying that near the transition the normalized autocorrelations of all populations are not only proportional but also approximately equal to each other, as detailed in Appendix E.

[Figure 10 caption fragment] ... (27), scale as for a single population, Eq. (64). Simulations conducted on networks with a diluted connectivity matrix (N = 3600, K = 600 for each population) and parameters set as in Figs. 7 and 6C.

Note that the one-dimensional character of the chaotic fluctuations is exact only asymptotically close to the transition. Away from the transition, the nonlinearity of C(q) couples the unstable mode to the other modes, inducing fluctuations in all modes. Interestingly, we find numerically that in many cases, even far from the transition, the autocorrelations remain close to equal, q_k(τ) ≈ q(τ), as is apparent in the examples of Figs. 6C and 7B. Similar observations have been made recently in [36].

VIII. TRANSITION TO CHAOS IN SPIKING NETWORKS
In this section, we inquire under what conditions the transition to chaos observed in the rate dynamics occurs also in spiking networks. In contrast to rate models, the membrane potentials of spiking networks are not at a fixed point as long as some of the neurons are active; hence, in general, there is no transition from a fixed point to chaos. An exception is the case of networks with slow synapses. If the synaptic time constant is long compared with the spiking dynamics, it is possible that for low synaptic strength the spiking dynamics is averaged out at the level of the slow synaptic currents, which therefore stay approximately constant. Thus, there might be a critical g at which the state with almost constant synaptic currents undergoes an instability, leading to a chaotic (or at least temporally irregular) state in which temporal fluctuations of the synaptic currents are large even on the scale of the synaptic time constant.
To explore the possibility of a transition to chaos in spiking networks with a slow synaptic time constant, we consider a network of spiking neurons that fire with inhomogeneous Poisson statistics, specified by the instantaneous rate of each neuron, r_i(t) = φ(h_i(t)). We assume that the network is in an asynchronous state [38] and that the typical firing rate is large compared with the inverse synaptic time constant, τ_s^{-1}. Focusing on a single inhibitory population, we write the dynamic equation of the local fields, Eq. (65), where J_ij denotes either Gaussian-distributed connections or the corresponding randomly diluted network, as in Eq. (32) above. The spike train ξ_j(t) = Σ_k δ(t − t_k^j) of the presynaptic neuron j, with spike times t_k^j, represents the random Poisson process of neuron j. The nonlinear rate function is the rectified-linear transfer function φ(x) = τ_0^{-1}[x]_+, where τ_0 is a microscopic time constant related to the inverse slope of the single-neuron f-I curve. Note that in this model the connections have units of time; hence we define g² = N⟨(J_ij)²⟩τ_0^{-2}, so that g (and ḡ) are dimensionless. For simplicity of notation, in the following we measure rates and times in units of τ_0, i.e., we use units such that τ_0 = 1. To understand the effect of the spiking dynamics, we separate the synaptic input into a rate contribution and a spiking-noise contribution, Eq. (66). The auto-covariances of the two terms on the RHS of (66) involve C(τ) = ⟨φ_i(t)φ_i(t + τ)⟩, the autocorrelation of the rate functions given by (26) (in units of τ_0^{-2}), and the Dirac delta function δ(τ). The latter results from the average over the Poisson process, ⟨(ξ_j(t) − φ(h_j(t)))(ξ_j(t + τ) − φ(h_j(t + τ)))⟩ = ⟨φ(h_j(t))⟩δ(τ). Thus, the Poisson noise is equivalent to an additive white noise with amplitude g²r, where r is the population-averaged rate (in units of τ_0^{-1}), and the spiking noise can be incorporated into the dynamic mean-field theory.
A. Perturbation analysis of the spiking noise

As explained above, we are interested in the regime of large τ_s (the synaptic time constant relative to τ_0). This limit can be illuminated by writing the rescaled mean-field dynamics, with time scaled so that t = τ/τ_s. The rescaled equation demonstrates that the contribution of the Poisson noise (proportional to g²r/τ_s) is small relative to the rate contribution (proportional to g²r²), the ratio between the two being of order (rτ_s)^{-1}.
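The spiking model itself is straightforward to simulate. The sketch below is our own Euler discretization of the slow-synapse Poisson dynamics (τ0 = 1, ḡ = h0 = 1 as in the Fig. 11 simulations; network size, time step, and gain are arbitrary choices): below g_c the field fluctuations are pure spiking noise and should shrink as τ_s grows.

```python
import numpy as np

def spiking_net(g, tau_s, gbar=1.0, h0=1.0, N=300, dt=0.02, T=400.0, seed=0):
    """tau_s dh/dt = -h + h0 + sum_j J_ij xi_j(t), where xi_j are
    inhomogeneous Poisson spike trains at rate phi(h_j) = [h_j]_+
    (rates and times measured in units of tau_0 = 1)."""
    rng = np.random.default_rng(seed)
    J = -gbar / N + (g / np.sqrt(N)) * rng.standard_normal((N, N))
    h = np.full(N, h0)
    steps = int(T / dt)
    traj = []
    for step in range(steps):
        spikes = rng.poisson(np.maximum(h, 0.0) * dt)   # counts in [t, t+dt)
        h = h + (dt / tau_s) * (h0 - h) + (J @ spikes) / tau_s
        if step > steps // 2:                           # discard transient
            traj.append(h.copy())
    return np.asarray(traj)

# below g_c = sqrt(2) there is no rate chaos, so the field fluctuations are
# pure spiking noise and shrink roughly as 1/sqrt(r * tau_s)
fast = spiking_net(g=1.0, tau_s=5.0).std(axis=0).mean()
slow = spiking_net(g=1.0, tau_s=50.0).std(axis=0).mean()
print(fast, slow)    # slow-synapse fluctuations are markedly smaller
```

This is the averaging-out mechanism invoked above: the slower the synapse, the more spikes it integrates, and the closer the synaptic currents stay to their static mean-field values.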
For gain values above g_c = √2 and rτ_s ≫ 1, the noise contribution is small compared with the unperturbed rate autocorrelation given by the solution of (45) (Figure 11A). Below g_c, the only time dependence comes from the spikes. To study this regime, we consider the spike contribution as a small perturbation around the static autocorrelation, ∆(t) = ∆_st + ∆_sp(t), where ∆_st is the static solution for ∆ in the rate model. Expanding the equation above to linear order in ∆_sp(t) yields (73), where C′ = H(−x); see Eq. (40). Solving (73) yields (74), where λ is defined following (40). Here x is the unperturbed net input, Eq. (38), and we have substituted back τ = tτ_s. [Note that g is in units of τ_0.] Figure 11B displays the simulation results for the autocorrelation δ∆(τ) = ∆(τ) − ∆(∞) for g below the critical value, together with the theoretical estimate ∆_sp(τ).

[Figure 11 caption] (A) Above the rate transition, the autocorrelation (72) calculated from simulation (rτ_s = 30, solid line) is fully explained by the autocorrelation of the rates, found from (45) (dashed line). (B) For values of g well below the rate transition, the rates have no intrinsic fluctuations, and the fluctuations in the fields are solely due to the Poisson firing; the MFT result (74) (dashed line) agrees with simulations (rτ_s = 30, solid line). At intermediate values of g, rate and spike fluctuations interact nonlinearly. (C) Normalized variance of the local fields, q(∞), from simulations of networks at different gain levels in the transition region. For higher values of rτ_s, the crossover between variance dominated by the Poisson process and variance dominated by the field dynamics becomes sharper. (D) The coefficient of variation (CV), Eq. (76), for different values of the gain. For low values of g, the CV approaches 1, indicating a Poisson process with (almost) constant rates. For higher values of the gain, the CV is larger, as expected from an inhomogeneous Poisson process. The crossover between the two regimes becomes sharper as rτ_s increases. Simulations performed using an inhibitory population with Gaussian-distributed connectivity (N = 3500, K = 700) and ḡ = h0 = 1.

B. Scaling analysis
Approaching the transition point from below, the Poisson contribution (74) is amplified by the divergent response of the field dynamics as λ → 0. For finite rτ_s, however, the autocorrelation remains finite at all g and the transition is smoothed. Indeed, Figure 11C shows that q(∞) increases smoothly as a function of g, but its increase becomes sharper the larger rτ_s is. Figure 11D shows similar behavior for a measure of the fluctuations in the spike times, known as the coefficient of variation, CV, Eq. (76), defined as the ratio between the standard deviation of the ISI distribution, σ_ISI, and its mean, µ_ISI [39]. The CV increases from CV ≈ 1 at small g, the value for a homogeneous Poisson process, to substantially larger values above g = √2, due to the fluctuations in the underlying rates. This increase is smooth but becomes sharper for large values of rτ_s.
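The CV diagnostic is easy to reproduce on synthetic spike trains. In the sketch below, the rate modulation is a hypothetical two-state rate, standing in for the network's slow chaotic rate fluctuations:

```python
import numpy as np

def cv_isi(spike_times):
    """Coefficient of variation of the inter-spike-interval distribution."""
    isi = np.diff(np.asarray(spike_times))
    return isi.std() / isi.mean()

rng = np.random.default_rng(0)

# homogeneous Poisson train: exponential ISIs, so CV = 1
t_hom = np.cumsum(rng.exponential(scale=1.0, size=50_000))

# inhomogeneous Poisson train: the rate switches between two levels,
# mimicking slow rate fluctuations; the rate variance inflates the CV
n = 200_000
rate = np.where(np.sin(0.01 * np.arange(n)) > 0, 1.9, 0.1)   # per grid step
t_inh = np.flatnonzero(rng.random(n) < 0.05 * rate)          # thinned grid

print(round(cv_isi(t_hom), 2))   # close to 1.0
print(cv_isi(t_inh) > 1.0)       # True: rate fluctuations push CV above 1
```

The two regimes of this toy example correspond to the two limits in Fig. 11D: CV ≈ 1 below the transition, CV > 1 above it.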
To study the effect of the small spiking noise on the transition, one needs to perform a nonlinear analysis. Here we use a scaling analysis, similar to that of second-order phase transitions in equilibrium statistical mechanics. The scaling regime is characterized by two variables, rτ_s ≫ 1 and ǫ ≡ g²/g_c² − 1 with |ǫ| ≪ 1. We write the normalized variance near the transition as q(∞) = (rτ_s)^{−α} f(z), where the scaling variable is z = rτ_s |ǫ|^{β} sign(ǫ). Far below the transition, z → −∞ and, according to (74), q_−(∞) ∼ (rτ_s)^{−1}|ǫ|^{−1/2}, so that f(z → −∞) ∝ |z|^{α−1}. Similarly, above the transition, z → ∞ and, from (62), q_+(∞) ∼ ǫ², entailing f(z → ∞) ∝ z^{α} and αβ = 2. It follows that the critical exponents are α = 4/5 and β = 5/2. The effective relaxation time of the network can be treated in a similar manner. In the absence of spiking noise, the effective time constant of the autocorrelation diverges as the transition is approached from above, as τ_eff ∼ ǫ^{−1/2} (Eq. (62) and Fig. 8B). In the presence of small spiking noise this time constant never diverges, and we write τ_eff = (rτ_s)^{γ} F(z), where τ_eff is the effective correlation time in units of τ_s. Here the critical behavior below and above the transition, z → ±∞, is similar (as can be seen from (74)), and γ = 1/5. Note that the above results predict that at the (rate-dynamics) transition (ǫ = 0), the amplitude of the variance vanishes as (rτ_s)^{−4/5} and the effective time constant diverges as (rτ_s)^{1/5}. Simulation results that support these analytical predictions are presented in Fig. 12.
Figure 12. Scaling behavior near criticality. Rescaled form of the normalized variance, (rτ_s)^α q(∞) (A), and of the inverse effective relaxation time, (rτ_s)^γ τ_eff^{−1} (B), plotted against the scaling variable z = rτ_s|ǫ|^β sign(ǫ). The data reveal the underlying scaling functions f(z) and F(z), respectively, consistent with (77) and (80). Dashed lines show the predicted asymptotic behavior (see text). Simulations performed on Gaussian networks of various sizes (N = 2000, 3500 and 6000, h0 = 1) and with various synaptic time scales (see legend). The effective relaxation time was calculated from simulations as τ_eff = ∫τ δ∆(τ)dτ / ∫δ∆(τ)dτ, for τ ≥ 0.

Universality of the transition to chaos across network architectures
The results presented here show the universality of the dynamical transition from a fixed point to chaos in large networks across different network architectures. The theory, as well as the simulations, shows no dependence on the detailed distribution of the synaptic connectivity beyond the first two moments. In particular, a Gaussian network behaves similarly to randomly diluted networks with either sparse or dense connectivity. The two architectures differ in the mechanism by which fluctuations escape suppression by the high connectivity. In the Gaussian networks, the mean of the connection distribution is assumed to be smaller by a factor of 1/√N than their standard deviation. In contrast, in the diluted networks, the mean inputs from each population are strong compared with the fluctuations, and the fluctuations are nevertheless potent due to the dynamic balance of excitation and inhibition. This balancing amplifies not only the temporal fluctuations but also the time-averaged ones. For instance, the neurons in the diluted networks exhibit a broad distribution of time-averaged firing rates. In the threshold-linear case, the distribution of firing rates has a truncated Gaussian shape, with a width related to ∆_k(∞). This balance mechanism is similar to the previously shown balanced state in spiking or binary networks. Here we show that this process is quite general and holds also for smooth dynamics with graded nonlinearity. It should also be noted that most of the previous work on balanced states assumed sparse connectivity, i.e., K ≪ N. In this limit, neurons share only a few common inputs, hence the network state is naturally asynchronous. Interestingly, we show here that even dense networks (where, e.g., K/N is as large as 0.8; see Fig. 5) exhibit the same transition from a fixed point to asynchronous chaos as sparse or Gaussian networks (see also [9,26]). It would be interesting to prove analytically the absence of stable synchronous states in these systems.

The analytical and numerical investigations indicate that the transitions to chaos in a single-population and in a two-population (E-I) network are of the same nature, and the DMFT predicts that this holds also for networks with a general multi-population architecture. The extension of the DMFT to multiple populations is similar in spirit to previous work on stochastic multi-population networks with weak connections (of order 1/K) [40], and remains valid as long as the number of populations is small compared with K. The structure of the DMFT in the multi-population case differs from the single-population case in that the dynamics of the autocorrelation function is not governed by a potential; thus, numerically solving these equations is more challenging. Furthermore, although near the transition the autocorrelations behave qualitatively as in the single-population case, deep in the chaotic phase, depending on the parameter values, they may not decrease monotonically with time but may exhibit damped oscillations. Likewise, in multi-population networks, the 'quantum' operator for the Lyapunov exponent is non-Hermitian, the implications of which need to be further explored.

Stability against uniform perturbations

Finally, we have focused primarily on asynchronous dynamical states and on instabilities driven by the random components of the interactions. However, in the general architectures considered here, the fixed-point state may first undergo an instability associated with population-averaged perturbations. Interestingly, due to the inherent nonlinearity of our system, the uniform and local degrees of freedom are coupled. As a result, the response to uniform perturbations involves equations that couple the susceptibility
of the population mean activities and that of the population activity variances, Eq. (18). This, together with (22), constitutes the stability conditions for the fixed points of our general architecture. These results can also be interpreted as statements about the eigenvalue spectrum of the random matrix of effective couplings, which incorporates the local gains φ′(h_l^i) from linearizing around the fixed point and which is correlated with the random couplings (see also [41]).
In the examples studied here, the uniform instability leads to a runaway of the dynamics. In general, in the multi-population case, the fixed point may instead lose stability to uniform perturbations through an instability leading to coherent oscillatory states, and these states may be further destabilized by the random connectivity, possibly yielding chaotic states in the form of partially synchronized oscillations. The exploration of these states is beyond the scope of this paper. Finally, we emphasize that in the case of diluted networks, the primary focus of our work, the underlying strong inhibition keeps the system away from these instabilities as long as it remains in a state with excitation-inhibition balance. In particular, multiple stable states occur only under special conditions [29,42].

Spatially modulated networks

In the multi-population model, the functional structure of the entire network is given in terms of pairwise connectivity that depends on the subpopulation membership of the pair. In the brain, connection probability depends also on the distance between neurons. This distance can be either physical or functional, namely with respect to the preferred stimulus of a neuron. Likewise, the external input to neurons is in general not homogeneous, but may depend on the physical or functional coordinates of each neuron. Because these location dependencies are broad and smooth, the system can be well described by dividing it into columns, each representing many neurons with similar locations. The total system, a hyper-column, can thus be considered a special case of our general multi-population architecture. In the large-N limit, the sums over columns turn into smooth integrals over the spatial coordinates. In general, the architecture of a sufficiently large neural circuit will consist of both genuinely discrete subpopulations (e.g., excitatory and inhibitory) and continuously varying coordinates. An example is a model of an orientation hyper-column in V1, where the connectivity between neurons depends on the difference between their preferred orientations, and the external input depends on the difference between the preferred orientation and the stimulus orientation [29]. In this case, the balance equations (15) determine the tuning properties of the mean firing rates of the network, while the DMFT describes the statistical properties of the spatiotemporal fluctuations in the network dynamics.
The shape of the nonlinear transfer function

We have found that the existence of a transition from a fixed point to chaos, as well as its critical properties when it exists, depends on the shape of the input-output transfer function near the origin (i.e., near the firing threshold). An important result is that for a transfer function rising as a square root or sharper, i.e., for φ(x) ∝ x^ν with ν ≤ 1/2, there is no stable fixed point and the system is chaotic even for small g. This raises the interesting question of the value of ν in biologically relevant networks. In biological applications of rate models, linear or quadratic transfer functions, or other values of ν larger than 1, have often been used [43-47]. The transfer function φ is often interpreted as reflecting the f-I curve of a spiking neuron. The linear leaky integrate-and-fire (LLIF) model [48,49] exhibits a sharp (1/|log(x)|) rise from zero, corresponding to ν = 0. Our theory predicts that random networks with such f-I curves exhibit chaotic dynamics even at low g. In conductance-based (Hodgkin-Huxley type) models, the behavior of the f-I curve near threshold depends on the type and parameter values of the various nonlinear conductances contributing to spike generation. In Hodgkin-Huxley (HH) Type I models, often used for modeling cortical neurons, the generic behavior of the f-I curve near threshold is a square root [50], making them borderline cases for a transition at finite g. Similar ν = 1/2 behavior is exhibited also by nonlinear integrate-and-fire (NLLIF) models [49]. The presence of an adaptation current with a long adaptation time constant linearizes the f-I curve [51,52], and thus corresponds to ν = 1. Thus, slow adaptation currents in randomly connected networks are predicted to stabilize fixed-point states at low g. It is interesting to note that the fixed-point equations for the population-averaged activities (14) are stable to linear perturbations even for ν < 1/2, due to the smearing of the singularity near threshold by the Gaussian fluctuations. On the other hand, as we have shown here, this smearing is not strong enough to prevent the instability of the fixed point of the local activities and fields. Finally, our prediction that no stable fixed point exists for ν ≤ 1/2 was derived in the limit of large K, where the distribution of local inputs is Gaussian, hence unbounded. However, in a finite system, where K is finite, there is a finite gap between zero and the smallest input (in absolute value). Hence, in a finite system there should be a stable fixed point for sufficiently low values of g, even for low values of ν. Studying this finite-size effect is an interesting topic for future research.

Although the properties of the transition to chaos are determined by the behavior of φ(x) near threshold, the overall shape of φ may also affect the system's behavior. For instance, in the threshold-linear case, the effect of changing the magnitude of the external input is marginal, as it can be scaled out of the equations determining the chaotic behavior, owing to the inherent linearity above threshold (see Eq. (39)). In contrast, when φ(x) saturates to a finite value at large x, a large external input pushes neurons into saturation and suppresses the onset of chaos [53]. Also, the unboundedness of the threshold-linear φ leads to divergent dynamics for sufficiently large g; see Fig. 2.
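The smearing effect mentioned above is easy to verify numerically: averaging φ(x) = [x]_+^ν over a Gaussian input removes the threshold singularity. The quadrature below is a direct check (grid limits and resolution are arbitrary choices):

```python
import numpy as np

def pop_rate(mu, sigma, nu, n=20001):
    """Population-averaged rate <phi(mu + sigma*z)>, z ~ N(0,1), for the
    threshold power law phi(x) = [x]_+ ** nu, via a Riemann sum."""
    z = np.linspace(-8.0, 8.0, n)
    dz = z[1] - z[0]
    w = np.exp(-0.5 * z * z) / np.sqrt(2.0 * np.pi)
    phi = np.maximum(mu + sigma * z, 0.0) ** nu
    return float((w * phi).sum() * dz)

# sanity check against the closed form <[z]_+> = 1/sqrt(2*pi) for nu = 1
print(abs(pop_rate(0.0, 1.0, 1.0) - 1.0 / np.sqrt(2.0 * np.pi)) < 1e-4)

# for nu = 1/2, phi itself has a square-root singularity at threshold, but
# the Gaussian average is a smooth, monotone function of the mean input mu
vals = [pop_rate(mu, 1.0, 0.5) for mu in (-0.2, -0.1, 0.0, 0.1, 0.2)]
print(all(a < b for a, b in zip(vals, vals[1:])))   # True: monotone in mu
```

This is why the population-averaged equations (14) remain linearly stable even for ν < 1/2, while the local fields, which feel the full singular nonlinearity, do not.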
Furthermore, when φ(x) grows exponentially with x, this divergence sets in as soon as the fixed point becomes unstable. Such divergent dynamics do not appear when φ has a finite saturation value.

Spiking dynamics

The question of whether networks with spiking dynamics exhibit a phase transition to chaos at finite synaptic gain extends beyond the issue of the shape of the f-I curve. In contrast to recent claims [25], we have shown that a sharp transition from a fixed point to chaos in such networks is meaningful only when the synaptic time constant is large, so that there is a clear separation between spiking dynamics and rate dynamics. In this limit, depending on the shape of the f-I curve, the underlying rates may be constant at low synaptic gain and become chaotic above a critical gain, similar to the behavior of rate-based dynamics. The general correspondence between smooth rate-dynamics models and the dynamics of synaptic currents in neuronal systems with long synaptic time constants has been studied previously [54]; however, the implications for the transition to chaos in random neuronal networks were not previously addressed. In any realistic system the time constants are finite, hence it is important to assess the smoothing of the transition due to finite synaptic times. Here we have characterized this smoothing by a scaling function with a single crossover exponent. This exponent determines the rate of change from stochastic spiking dynamics with static rates to smooth rate dynamics as the synaptic time constant increases. This scaling analysis predicts relatively strong 'finite-size' effects of the spiking dynamics near the transition of the corresponding rate dynamics. First, the scaling regime is relatively large, given by |g − g_c| ∝ (rτ_s)^{−2/5}, where r is the mean firing rate and τ_s is the synaptic integration time. In addition, the effective time constant of the synaptic fluctuations due to spiking scales as τ_eff ∝ (rτ_s)^{1/5}. Thus, a sharp transition requires rather large values of rτ_s. Concerning the biologically relevant regime, typical mean rates range from the order of 1 Hz to 100 Hz. Fast synaptic currents (AMPA and GABA_A) have decay time constants of the order of 1-10 msec, and so are not expected to exhibit the above transition. Slow synaptic currents (e.g., NMDA, GABA_B, and peptide neurotransmitters) have time constants ranging from a few hundred milliseconds to minutes [39], and thus might be appropriate candidates for exhibiting the transition from spike dynamics to rate fluctuations in systems with moderate rates (such as primate visual and motor cortex). Our analysis of networks of spiking neurons assumed an inhomogeneous Poisson spiking model, which is a well-known and extensively used statistical model of spiking fluctuations [48,55]. However, it is interesting to ask whether our results hold for deterministic spiking models with an appropriately smooth f-I curve. We believe our results regarding a transition to chaos for large K and large rτ_s are valid also for conductance-based spiking dynamics, since they rely only on the separation of time scales between the firing dynamics and the synaptic fluctuations. In particular, we expect that for weak synaptic gain the network will be in an asynchronous state characterized by synaptic inputs and population firing rates whose fluctuation amplitude is small on the scale of τ_s. On the other hand, for strong synaptic gain, these fluctuations will be large even on the scale of τ_s, and their statistics will follow the chaotic dynamics of the smooth rate equations. Beyond its simplicity, the advantage of treating the Poisson model is that it demonstrates the conditions for chaotic rate fluctuations also for stochastically firing neurons. Our analysis of the Poisson model was restricted to a threshold-linear rate function. It is straightforward to extend the DMFT equations to a general rate function, including one with power-law behavior above threshold
with ν ≠ 1. It will be of interest to study in more detail the effect of fast spiking noise on the network dynamics, particularly for the cases where the (noiseless) f-I curve has ν < 1/2. One difference between the Poisson and the deterministic spiking dynamics is the degree of irregularity in the spike times, as measured, e.g., by the standard deviation of the ISI distribution divided by its mean (known as the coefficient of variation, CV). In the Poisson model, the CV is close to 1 in the 'fixed point' regime (as in a homogeneous Poisson process) and increases above 1 in the chaotic regime, due to the induced rate fluctuations. In contrast, in deterministic spiking models the CV is expected to be substantially lower than 1 in the 'fixed point' regime. The detailed study of random networks with deterministic spiking dynamics, associated with a sufficiently smooth f-I curve and slow synaptic time constants, is an interesting topic for future studies.

In this work, we have addressed the effect of spiking dynamics in smearing the transition from a state with static underlying synaptic currents to a state where the currents themselves (and hence the underlying rates) fluctuate in time. However, there can be other types of transitions from non-chaotic to chaotic dynamics in a spiking network, even for fast synaptic time constants. In the case of deterministic spiking networks, such a transition may mark the separation between a limit cycle at low g and a chaotic attractor at high g. In this case, chaos typically has a fast time constant and cannot be described as an instability in the rates. In addition, it is likely that even in the non-chaotic regime of deterministic spiking models, irregular transients (with effective negative Lyapunov exponent, termed 'stable chaos') will persist for a long time, and the convergence time to the limit cycle will grow exponentially with the system size (as in [56][57][58]). Interestingly, the existence of chaos vs. stable chaos in spiking networks has been shown to depend on the details of the spike initiation dynamics [59]. A transition from an irregular but non-chaotic state to a truly chaotic state might be important for the information processing capacity of the system, but it will be hard to observe experimentally, as it is not reflected in the properties of the correlation functions.

Our discussion of the biological relevance of rate-based dynamics focused on identifying the units in the rate-based models as single neurons and invoked temporal averaging as the mechanism underlying the adequacy of a rate-based theory. An alternative scenario in which a rate description of a spiking network might be applicable is when the system consists of clusters of neurons, where spatial averaging can lead to rate dynamics (e.g., [52,60,61]). Under this interpretation, single units in our model represent weakly synchronous neuronal subpopulations, and the random connectivity corresponds to the large-scale effective connectivity between these populations. As such, these models can serve as a useful framework for investigating aspects of the large-scale nonlinear dynamics of the nervous system, as measured by EEG and fMRI signals.
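The CV contrast described above can be illustrated with a short numerical sketch. This is not an analysis from the present work: the rates, the sinusoidal modulation, and the thinning construction below are arbitrary illustrative choices, used only to show that slow rate fluctuations inflate the ISI CV of a Poisson process above 1.

```python
import numpy as np

def isi_cv(spike_times):
    """Coefficient of variation of the inter-spike-interval distribution."""
    isi = np.diff(spike_times)
    return isi.std() / isi.mean()

def poisson_spikes(rate_fn, t_max, rate_max, rng):
    """Inhomogeneous Poisson process generated by thinning a homogeneous one."""
    n_cand = rng.poisson(rate_max * t_max)
    cand = np.sort(rng.uniform(0.0, t_max, n_cand))
    keep = rng.uniform(0.0, 1.0, n_cand) < rate_fn(cand) / rate_max
    return cand[keep]

rng = np.random.default_rng(0)
t_max = 2000.0  # seconds; a long run for good ISI statistics

# Homogeneous Poisson process at 10 Hz: CV close to 1.
cv_hom = isi_cv(poisson_spikes(lambda t: 10.0 + 0.0 * t, t_max, 10.0, rng))

# Rate slowly modulated between 2 and 18 Hz (period 5 s >> mean ISI):
# the induced rate fluctuations push the CV above 1.
rate_mod = lambda t: 10.0 + 8.0 * np.sin(2.0 * np.pi * t / 5.0)
cv_mod = isi_cv(poisson_spikes(rate_mod, t_max, 18.0, rng))

print(f"CV homogeneous: {cv_hom:.2f}, CV rate-modulated: {cv_mod:.2f}")
```

Because the modulation period is long compared with the mean ISI, the process is quasi-static and the ISI distribution becomes a rate-weighted mixture of exponentials, which always has CV above 1.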
This work explored the conditions under which random networks exhibit a transition from a fixed point to chaos, and the rate of development of chaotic fluctuations near the transition. These questions may bear important consequences for the computations that such networks can perform. Recent models [13,14,62] have shown that random recurrent networks can be shaped through learning to generate a broad repertoire of desired trajectories with biologically relevant time scales. These abilities prevail only near the onset of chaos. For substantially lower gain, the activity in the network is strongly suppressed, whereas high above the transition, the intrinsic fluctuations are too fast and erratic to match the smooth and slow desired trajectories. It would be very interesting to study in detail how the results of the present work affect the computational power of recurrent neuronal networks.
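The fixed-point-to-chaos transition discussed throughout can be reproduced in a few lines for the classic unconstrained benchmark: a Gaussian random coupling matrix with a tanh transfer function, for which the transition sits at g = 1. This is a sketch of that simplified model, not of the sign-constrained E-I networks studied in this paper; the values of N, T, and the two gains are arbitrary.

```python
import numpy as np

def simulate(g, N=300, T=200.0, dt=0.1, seed=1):
    """Euler integration of dh/dt = -h + J*phi(h) with J_ij ~ N(0, g^2/N)."""
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
    h = rng.normal(0.0, 1.0, N)  # random initial condition
    for _ in range(int(T / dt)):
        h = h + dt * (-h + J @ np.tanh(h))
    return h

h_sub = simulate(g=0.5)    # below the transition: activity decays to the fixed point
h_super = simulate(g=2.0)  # above it: chaotic fluctuations of order one persist

print(f"final std below: {h_sub.std():.2e}, above: {h_super.std():.2f}")
```

Below the transition the zero fixed point is linearly stable (the spectral radius of J is below 1), so the activity decays; above it, order-one fluctuations persist indefinitely.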
Note added: After the submission of this manuscript we became aware of a manuscript [63] that partially overlaps with the present work.
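For concreteness, the crossover scale quoted in the discussion, |g − g_c| ∝ (rτ_s)^{−2/5} for the width of the scaling regime, can be tabulated for representative biological parameters. Prefactors of order one are omitted, so the numbers below indicate scaling only; the specific rate and time-constant values are illustrative choices.

```python
# Width of the scaling regime around the rate-dynamics transition:
# |g - g_c| ~ (r * tau_s)^(-2/5), up to prefactors of order 1.
rates = [1.0, 10.0, 100.0]        # mean firing rates r (Hz)
tau_s_values = [0.005, 0.1, 1.0]  # synaptic time constants (s): fast AMPA-like to slow NMDA-like

for r in rates:
    for tau_s in tau_s_values:
        x = r * tau_s
        width = x ** (-2.0 / 5.0)  # smearing of the transition in g
        print(f"r={r:6.1f} Hz  tau_s={tau_s:5.3f} s  r*tau_s={x:7.2f}  width ~ {width:5.2f}")
```

Only for rτ_s well above 1 does the width drop below order one, consistent with the claim that fast AMPA/GABA_A synapses are not expected to exhibit a sharp transition while slow currents might.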
and denote by χ(t) the spatial average of χ_i at the fixed point solution. From the equations of motion we obtain (B2), where φ′(h_i) are the derivatives of the activation function evaluated at the fixed points. Let us consider the general term. Because the fixed point values of h_i are independent of a perturbation at time 0, using Eqs. (10) and (11) we have, where ∂_t stands for the time-derivative operator. After averaging and using mean field theory, where ∆(t, t′) = ⟨δh(t)δh(t′)⟩, we take t′ → ∞. Finally, we note that in this limit the factor of 2 was introduced because, at the fixed point, this quantity is the susceptibility of the mean equal-time variance.
Substituting the above results in (B2) and averaging, we obtain the following.

Calculation of χ_∆
From the DMFT we obtain, by differentiation of (B13),

∂u/∂h_0 = ḡχ(t) + δ(t). (B17)

We will assume that t_2 > t_1 > t_0 are all large but that their differences are of order 1.
Let us take t_2 − t_1 → ∞. Then, substituting (B10), one obtains Eqs. (B11) and (B18), which constitute two coupled equations for the response functions of the mean and the variance of the population activity. It is convenient to eliminate the time derivative on the RHS of (B11) by substituting Eq. (B18) into it. The resulting equations can then be written, in Fourier and matrix representation, with e = 2g²⟨φφ′⟩ḡ.

Multiple population network
A straightforward generalization to the multiple population case yields, in this case, the equation for χ_∆(t) (Eq. (B18)), or, in matrix notation, the corresponding matrix form. Likewise, the dynamical equation for χ(t), where in the above we have defined the P × P matrices, and Eqs. (B30) and (B32) can be written using a 2P × 2P matrix in Fourier space. Thus, the fixed point is stable against population-average perturbations provided that all the eigenvalues of this matrix satisfy the stability condition. Furthermore, performing the averaging on the RHS of (C5) and Fourier transforming the result yields the population-average susceptibility, which is the solution of the population-averaged part of the dynamics. Note that all elements of χ̄⁰_kl are of order 1/N, including the diagonal. This is because χ^{0ii}_{kk}, which is of order 1, adds only a 1/N contribution to the spatially averaged quantity (C3).

Statistics of the full susceptibility
We now turn to evaluate the full susceptibility, χ^{ij}_{kl}. The random fluctuations in the susceptibility, i.e., the off-diagonal elements of χ, are of order 1/√N, as will be seen below; i.e., locally they are much larger than χ̄⁰_kl. Their average second-order moments are defined in terms of (C9). To evaluate (C9), we multiply two realizations of Eq. (C2) (differing in the time indices) and take the spatial average over all neurons, leading to (C10). Note that the G_kl are of order 1; the contribution of χ⁰_kl to them is of order 1/N and hence negligible. Furthermore, the uniform components of the interactions (in Eq. (C2)) do not contribute to (C9) to leading order, as they are smaller by a factor of 1/N relative to the random contributions. Defining new time variables as τ = t_a − t_b, τ′ = t_c − t_d, T = t_a + t_b and T′ = t_c + t_d, equation (C10) can be written as

where Ω and ω are the frequencies associated with T and τ, respectively, and

G(Ω, ω) = ∫dT ∫dτ G(T, 0, τ, 0) exp(−iΩT − iωτ). (C16)

Thus, the zero-frequency limit yields a static matrix given (up to a constant) by an expression in which M is defined in (30); fixed point stability requires that all eigenvalues of M have real part less than one. Note that G_kl = N⟨(χ^{ij}_{kl}(ω = 0))²⟩, i.e., the mean square of the local static susceptibility.

Chaotic regime
The largest Lyapunov exponent is the exponential divergence rate of the squared susceptibility N⟨(χ^{ij}_{kl}(t, 0))²⟩ = G_kl(2t, 0, 0, 0) in the chaotic state, which corresponds to T = 2t and T′ = τ = τ′ = 0. Thus, the largest Lyapunov exponent is found as follows. To evaluate it, we first write the time-dependent solution of (C12) as (C18), where ⟨ψ_µ(τ)| and |ψ_µ(τ)⟩ are the left and right µ-th eigenvectors of the Hamiltonian H(τ). We note that in general H(τ) is non-Hermitian and the eigenvectors |ψ_µ(τ)⟩ are not necessarily orthogonal. From (C12), we find that the general solution for Γ_µ(T, T′) is proportional to e^{λ_µ(T−T′)}, where λ_µ = −1 ± √(1 − ǫ_µ) and ǫ_µ is the corresponding eigenvalue of H(τ). For completeness, one must consider the boundary conditions on the solutions, namely that a solution exists only for T > T′, that the second derivative of G is a delta function, and that the first derivative is finite. One obtains G_kl(T, T′, τ, τ′). Thus, the maximal Lyapunov exponent is obtained from ǫ_0, the ground state of the Hamiltonian (29).
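The final step, reading the largest Lyapunov exponent off the ground state of a Schrödinger-type operator, can be sketched numerically: discretize H = −∂²/∂τ² + W(τ) on a grid, take the lowest eigenvalue ǫ_0, and evaluate λ = −1 + √(1 − ǫ_0). The attractive Gaussian well below is an arbitrary placeholder; the actual potential must come from solving the DMFT equations.

```python
import numpy as np

# Grid for the operator H = -d^2/dtau^2 + W(tau)
n, L = 1001, 20.0
tau = np.linspace(-L / 2, L / 2, n)
dtau = tau[1] - tau[0]

# Placeholder potential: an attractive well (NOT the DMFT potential).
W = -2.0 * np.exp(-tau ** 2)

# Finite-difference Laplacian (Dirichlet boundaries) plus the diagonal potential.
H = np.diag(W + 2.0 / dtau ** 2)
off = -np.ones(n - 1) / dtau ** 2
H += np.diag(off, 1) + np.diag(off, -1)

eps0 = np.linalg.eigvalsh(H)[0]    # ground-state energy
lam = -1.0 + np.sqrt(1.0 - eps0)   # largest Lyapunov exponent
print(f"ground-state energy eps0 = {eps0:.3f}, Lyapunov exponent = {lam:.3f}")
```

A negative ground-state energy yields a positive Lyapunov exponent, as illustrated in Fig. 3.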

Appendix D: Exponential transfer function
We show that a single inhibitory population with the architecture studied in Section III and an exponential transfer function, φ(x) = e^x, does not exhibit a chaotic phase. The fixed point of the dynamics is stable when g²C′(x) < 1, where C = ⟨φ²_i⟩, leading to

g²⟨exp(√∆_0 z + u)⟩²_z = g²e^{2u+∆_0} = g²m² < 1. (D1)

The variance, given by ∆_0 = g²C(x), is

∆_0 = g²⟨exp(√∆_0 z + u)⟩²_z = g²e^{2u+∆_0} = g²m².
(D2) It follows from (D1) and (D2) that the fixed point loses its stability at the critical point g_c, where ∆_0 = g²m² = 1. For variance values higher than 1, the dynamics is characterized by the autocorrelation function given by the solution of the dynamical MF differential equation in (45). A stable chaotic solution requires the vanishing of the potential (46) at τ = ∞. Carrying out the Gaussian integrals in (46), one finds a condition that must hold if the potential vanishes. Since every solution for the autocorrelation function requires 0 ≤ ∆(∞) ≤ ∆_0, Eq. (D4) cannot be obeyed for a non-vanishing variance, and no chaotic phase exists. Consequently, when ∆_0 > 1, the mean activity diverges.

(E1) Near and above criticality, C(τ) can be expanded in powers of q(τ) about the fixed point value q(τ) = 0. Since linear analysis is not sufficient to account for the critical phenomena, we keep all terms up to the first nonlinear term in q(τ), where c is a numerical factor of order 1. The Gaussian integral in ⟨φ^{2(ν−n)}⟩ diverges for 2(ν − n) ≤ −1. As a result, the first nonlinear term may be larger than quadratic. The first nonlinear term is ρ = 2 for ν > 3/2 and ρ = 3 otherwise. We write M̄_kl = Σ_µ Λ_µ R^µ_k L^{µ*}_l, where Λ_µ are the eigenvalues ordered according to the decreasing value of their real part, and R, L are the right and left eigenvectors of M̄, respectively. At the transition, Λ_1 = 1 and Re Λ_{µ≠1} ≤ 1. When g_kl ≈ g^c_kl, the unstable eigenvalue is Λ_1 = 1 + ǫ, where |ǫ| ≪ 1, and the sign of ǫ determines which direction in the space of g (away from g^c_kl) is the unstable one. To leading order, the eigenvector R^1 does not change. Note that Λ_1 depends on both g and x. For convenience, we define ǫ via Λ_1(g, x_c), i.e., as the change induced in Λ_1 by a change in g at x = x_c. We are interested in the properties of the system for 0 < ǫ ≪ 1. For multiple subpopulations, equation (E4) takes the form

∂²q_k/∂τ² = a_k(ǫ) + Σ_l (I − M̄)_kl q_l + Σ_l g̃²_kl c_l q^ρ_l, (E15)

where a_k = 1 − Σ_l g̃²_kl
⟨φ²(z + x_k)⟩_z, which vanishes at criticality due to (12), and M̄_kl = g̃²_kl ⟨φ′²(z + x_k)⟩_z. Since the matrix I − M̄ vanishes only in the 1-direction, the dominant component of the vector q is in this direction. To see this, we make the ansatz q_k(τ) ≡ 1 − ∆_k(τ)/∆_{0k} = Σ_µ ζ_µ(τ) R^µ_k and assume that |ζ_1| ≫ |ζ_µ|. We then obtain equations in which â and ĉ are the corresponding factors in the basis of M̄. Here 1 − Λ_1 vanishes at the transition and is of order ǫ, δx, while (1 − Λ_µ) remains of order 1 near the transition. Thus, ζ_µ(τ) scales as ζ_1(τ)^ρ ≪ ζ_1(τ), and the "normal form" of the dynamics of q_k, Eq. (E15), is the same as that of a single population, Eq. (E4). In addition, in the generic case we expect all q_k to have a nonzero projection on R^1. Hence, they will all scale as ζ_1.

Threshold-linear transfer function
In Section V, we have shown that in one population with a threshold-linear transfer function (ν = 1), the excess mean input is zero at the transition (Fig. 2). Our numerical solutions show that in two-population networks with this transfer function, the x_l are no longer exactly zero at the transition; nevertheless, they remain small in a wide region above the transition. Assuming |x_l| ≪ 1, the stability matrix can be written as

M̄_kl = g̃²_kl H(−x_l) ≈ 0.5 g̃²_kl. (E19)

The mean field expression for the variance, (12), can be written as 1 = Σ_l g̃²_kl ⟨φ²(z + x_l)⟩_z ≈ 0.5 Σ_l g̃²_kl. Hence, we find that Σ_l M̄_kl = 1 + O(x_l) for all k, implying that the eigenvector corresponding to the largest eigenvalue of M̄ is uniform to leading order in x_k. It follows that near criticality, when |x_k| is small, R^1 is approximately uniform and all q_k are nearly equal, as can be seen in Figure 10.
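The approximation M̄_kl ≈ 0.5 g̃²_kl rests on the Gaussian average ⟨φ′²(z + x)⟩_z = H(−x), which equals 1/2 at x = 0 because φ′ is a step function for the threshold-linear φ. A quick Monte Carlo check of this identity (the sample size and test values of x below are arbitrary):

```python
import math
import numpy as np

def H(x):
    """Gaussian tail function: H(x) = P(z > x) for z ~ N(0, 1)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

rng = np.random.default_rng(0)
z = rng.standard_normal(1_000_000)

for x in (0.0, 0.3, -0.5):
    # For phi(h) = max(h, 0), phi'(h)^2 is the step function Theta(h).
    avg = np.mean(z + x > 0)
    print(f"x = {x:+.1f}:  <phi'^2>_z = {avg:.4f}   H(-x) = {H(-x):.4f}")
```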

Figure 1. Network schematics. Arrows (circles) denote excitatory (inhibitory) connections. (A) The general network architecture studied here consists of multiple excitatory (E) and inhibitory (I) subpopulations driven by external inputs. Individual connections are randomly diluted or Gaussian distributed. Subpopulations are distinguished by the different statistics of their connectivity. (B) The simplest model consists of a single population with random inhibitory recurrent connections and an external excitatory input. (C) Two interconnected, randomly connected E and I populations.

Figure 2. Phase diagrams for a single inhibitory population. (A) No stable fixed point for sharp transfer functions. Simulation results for the normalized variance q(∞) for networks with g = 0.2 and g = 0.1 and different values of the power law, ν, characterizing the rise of the transfer function, Eq. (33). Below ν = 0.5 the variance is non-zero, implying there are temporal fluctuations even for low values of g. Simulations performed on an inhibitory network (N = 5000) with randomly diluted connections and mean activity m = 0.1. (B) Phase diagram for a threshold-linear (ν = 1) network of inhibitory neurons with mean connectivity ḡ < 0 and variance g². Dashed line shows an example of a transition in a diluted network (with K = 650), where, because ḡ = −√K g (see Eq. (49)), the network lies on a line in the phase diagram. (C) Phase diagram for a threshold-quadratic (ν = 2) network with an excitatory external field h_0 = 1. Dashed line shows the uniform instability of the fixed point and solid line shows the transition to the chaotic phase (shaded area). Dotted (black) line shows the instability of the chaotic phase found by simulations (Gaussian connectivity, N = 6000, h_0 = 1). Inset (D): same as (C) for larger values of |ḡ|. Dashed line shows the existence line for a diluted network as in (B).

Figure 3. Calculation of the largest Lyapunov exponent for a threshold-linear network. (A) Numerical integration of equation (45). The value of x is determined by the requirement that the maximum of the potential at nonzero q equals its value at zero q. Compare the form of the potential with the exact value of x (solid line) with those calculated with x deviating by ±1% from the correct value (dashed lines). (B) Numerical solution for the normalized variance q(∞) as a function of g. (C)-(F) The normalized autocorrelation function q(τ), found by integrating the equations of motion (45) using the correct value of x for two values of g (top), and the corresponding quantum potential W = −∂²V/∂q² (bottom). The ground state energies ǫ_0 (horizontal lines) were found numerically. In both cases they are negative, implying a positive Lyapunov exponent.

Figure 4. Numerical simulations of a single population with threshold-linear transfer function. (A) Distribution of the local fields h_i in the fixed point regime. Thick curve shows the normal distribution with the theoretical mean and variance obtained from the mean field theory. Dashed (red) vertical line marks the firing threshold, which is taken to be zero. (B),(C) Activities of two networks with gains g = 2.1 and g = 2.8, respectively. Bold lines show spatially averaged local fields; thin lines are local fields of a sample of four neurons. (D) The normalized AC function, ∆(τ)/∆_0 = 1 − q(τ), for the two values of the gain parameter as in (B) and (C). Solid black lines are the solutions of the DMFT equation (45), superimposed on simulation results (dashed lines). Simulations performed on a network (N = 6800) with Gaussian distributed connections and ḡ = √K g, where K = 680, and external field h_0 = 1.

Figure 5. Robustness to changes in the synaptic distribution. (A) Autocorrelation function for sparse (p = 0.05), dense (p = 0.8), and Gaussian connectivity networks (N = 7000, h_0 = 1). The gains were chosen such that all networks have the same variance of the connectivity distribution (g = 2.8 for the Gaussian distribution and g√(1 − p) = 2.8 for the randomly diluted networks). (B) Normalized autocorrelation function 1 − q(τ) = ∆(τ)/∆_0 compared with the solution of the DMFT.

Figure 6. Transition to chaos in a system of two populations. Parameters used: J_EE = J_IE = W_E = αg, J_II = −g, J_EI = −1.11g, and W_I = 0.44g. Note that g is a global gain multiplying all the synapses; α denotes the excitatory efficacy. (A) The critical value of g as a function of the excitatory efficacy α. For α = 0 the network is identical to the single inhibitory network, and the phase transition occurs at g = √2. (B) Largest eigenvalue of the stability matrix (53) for a network with α = 0.55 [marked by '^' in panel (A)]. MF predicts a phase transition when Λ_+ = 1 (asterisk). (C) Normalized variance, q_k(∞), for the inhibitory (blue triangles) and excitatory (red circles) populations as a function of the gain, calculated from network simulations with α = 0.55. Calculated from simulations of a balanced diluted network with N_E = N_I = 3500, K = 700, connectivity parameters as in panel (B), and external activity m_0 = 1. Theory predicts a phase transition at g = 1.21 (asterisk), as seen in panel (B).

Figure 7. (Color) Fluctuations of an E-I balanced network with threshold-linear transfer function. Same parameter set as in Figure 6C. (A) Traces of the local fields of six neurons from the excitatory (red) and inhibitory (blue) populations. Dashed lines show the mean input into each population. (B) Time-lagged autocorrelation functions of the two populations, computed from the simulation. (C) Traces of the fluctuations in the spatially averaged activities δm_k(t) ≡ m_k(t) − ⟨m_k(t)⟩_t of both populations [same color code as (A)]. Simulations of a network similar to the one used in Fig. 6B, with g = 1.6.

Figure 9. Critical behavior of a single inhibitory population with quadratic (ν = 2) transfer function. Normalized variance (A), network relaxation time (B), and LE (C) for small gain ǫ ≡ g²/g²_c − 1 above the critical point. Circles show averages over 20 simulations (Gaussian connectivity in the balanced regime, N = 6000, K = 1200, h_0 = 1) and dashed lines show theoretical predictions (see Appendix E).

Multiple populations
In a network composed of P populations, stability is determined by the eigenvalues of the P × P stability matrix (29). Here we define a normalized stability matrix M̄_kl = ∆^{−1}_k(0) M_kl = g̃²_kl ⟨φ′²(z + x_k)⟩_z.
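The criterion above, stability when all eigenvalues of the normalized stability matrix have real part below one, can be checked numerically. The 2 × 2 matrix below uses arbitrary illustrative entries scaled by a global gain g, not parameters taken from the simulations in this paper:

```python
import numpy as np

def is_stable(M_bar):
    """Fixed point is stable if every eigenvalue of the normalized
    stability matrix has real part less than one."""
    return bool(np.max(np.linalg.eigvals(M_bar).real) < 1.0)

# Illustrative base matrix standing in for g~_kl^2 <phi'^2>_z at unit gain.
base = np.array([[0.5, 0.3],
                 [0.4, 0.5]])

for g in (0.8, 1.2):
    M_bar = g ** 2 * base  # a global gain g multiplies all synapses
    lam_max = np.max(np.linalg.eigvals(M_bar).real)
    print(f"g = {g}: max Re(eigenvalue) = {lam_max:.3f}, stable = {is_stable(M_bar)}")
```

Increasing the global gain scales all entries of M̄ by g², so the leading eigenvalue crosses one at a critical gain, reproducing the loss of fixed-point stability.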