Self-organized critical balanced networks: a unified framework

Asynchronous irregular (AI) and critical states are two competing frameworks proposed to explain spontaneous neuronal activity. Here, we propose a mean-field model with simple stochastic neurons that generalizes the integrate-and-fire network of Brunel (2000). We show that the point with balanced inhibitory/excitatory synaptic weight ratio $g_c \approx 4$ corresponds to a second-order absorbing-state phase transition usual in self-organized critical (SOC) models. At the synaptic balance point $g_c$, the network exhibits power-law neuronal avalanches with the usual exponents, whereas for nonzero external field the system displays the four usual synchronicity states of balanced networks. We add homeostatic inhibition and firing rate adaptation and obtain a self-organized quasi-critical balanced state with avalanches and AI-like activity. Our model might explain why different inhibition levels are obtained in different experimental conditions and for different regions of the brain, since at least two dynamical mechanisms are necessary to obtain a truly balanced state, without which the network may hover in different regions of the presented theoretical phase diagram.


INTRODUCTION
Spontaneous brain activity happens in the form of nonlinear waves of action potentials that spread throughout the cortex. These waves are usually characterized by their sizes and durations under the critical brain hypothesis (see [1][2][3] for recent reviews), or by the underlying regularity and global synchronicity displayed by the firings of neuron populations under the balanced network hypothesis [4][5][6]. These two frameworks are often taken as discordant with each other [7], because critical activity implies long-range spatio-temporal correlations [8][9][10], whereas asynchronous irregular (AI) activity stems from Poissonian spike trains of interacting neurons [5,6,11].
Networks working at the critical point have the advantages of optimizing their dynamic range of response [12,13], memory and learning processes [14,15], computational power [16] and their flexibility to process information [17]. However, this highly susceptible state needs to be achieved and maintained spontaneously by an autonomous biological mechanism [18], introducing the problem of self-organized criticality (SOC). The attempts to model SOC in the context of neuronal networks involve mainly adaptive synapses [19,20] and adaptive neuronal gains [21]. Both of these approaches are proven to have theoretically equivalent effects in the system's phase space trajectory [21], resulting in a self-organized quasi-critical (SOqC) system. SOqC is a highly sensitive nearly-critical state able to maintain self-sustained activity, to reproduce the power-law exponents observed in experiments and produce large events (the "dragon kings") [20][21][22]. Other authors claim that the power-laws in the brain are observed because experiments sample its activity long enough to average out its spatial disorder into a Griffiths phase [10,23].
On the other hand, AI activity is intuitively expected to arise if the resting brain presented statistically fair random noise. This state is characterized by the irregular firing of individual neurons and a global lack of synchronicity, leading Brunel [5] to conjecture AI, by visual inspection, as the typical spontaneous activity of the brain. This hypothesis is backed by theoretical evidence that AI states minimize redundancy [24] and speed up the processing of inputs [5,25], whereas excitation/inhibition (E/I) balanced networks may serve to construct high-dimensional population codes [26]. This situation is usually modeled by the so-called balanced networks, in which the system presents a transition from high to low, or even null, average activity [5] when the ratio, g, of inhibitory to excitatory synaptic strength is varied. The transition point, g_c = 4, is called balanced because the excitatory population, corresponding to 80% of the neurons in the network (a fraction estimated from cortical data for glutamate-activated synapses [27]), is balanced by inhibitory synapses four times as strong as the excitatory ones.
Here, we introduce a model of an E/I network and show that the balanced point is a critical point usual for SOC models, unifying both approaches. Our model predicts that the balance point can be shifted away from the usual g_c ≈ 4 in regimes where synaptic couplings and neuronal gains are relatively small, keeping the fraction of excitatory neurons fixed at 80%. The balance point displays power-law avalanches with exponents that match experiments [28]. In fact, our model presents a critical line over the balance parameter when the average input over the network equals the average threshold of the neurons. It also presents all the synchronicity states found by Brunel [5], namely the synchronous regular (SR), synchronous irregular (SI), asynchronous regular (AR), and AI states. We determine the transition lines between these states. We also show that firing rate adaptation together with a homeostatic inhibition mechanism self-organizes the system towards a quasi-critical state, a fact that could explain the prevalence of E/I balanced states in many experiments [26].
Moreover, the introduced dynamic inhibition leading to the critical point may explain the enhancement of the dynamic range observed in experiments when inhibition is present [29], since the critical state is known to optimize the dynamic range of excitable systems [12].
There have been attempts to model E/I networks in the context of criticality [30][31][32][33], and also to connect criticality to a synchronization phase transition [34]. However, all of these models have limitations that are naturally solved in our proposed framework. In what follows, we present our model with all its features in detail, and we close the paper by comparing our framework to the existing ones.

THE MODEL
We use discrete-time stochastic integrate-and-fire neurons [35,36]. A Boolean variable denotes whether a neuron fires (X[t] = 1) or not (X[t] = 0) at time t. The membrane potentials evolve (in discrete time) as:

V^E_i[t+1] = (1 − X^E_i[t]) [ μ V^E_i[t] + I_i[t] + (1/N) Σ_j W^{EE}_{ij} X^E_j[t] − (1/N) Σ_j W^{EI}_{ij} X^I_j[t] ],
V^I_i[t+1] = (1 − X^I_i[t]) [ μ V^I_i[t] + I_i[t] + (1/N) Σ_j W^{IE}_{ij} X^E_j[t] − (1/N) Σ_j W^{II}_{ij} X^I_j[t] ],    (1)

where N = N_E + N_I is the total number of neurons, μ is a leakage parameter and I_i[t] are external inputs. Here the indices E and I denote the excitatory and inhibitory populations.
A link W EI ij means a synapse from an inhibitory neuron to an excitatory neuron (the second index is always the presynaptic one), and the same rule applies to W EE , W II and W IE .
Notice that if some neuron fires at time t, in the next time step its voltage is reset to zero due to the factor (1 − X_i[t]). Also, the inhibitory character of the synapses is given by the negative sign in front of the summations: the W^{EI} and W^{II} are always positive (absolute) values. We notice that our network is a complete graph, instead of the sparse network examined by Brunel [5]. However, we will show that there is no qualitative difference between the phase diagrams of the two models.
The individual neurons fire following a piecewise linear probability function (see Fig. 1a):

Φ(V) = Γ(V − θ) Θ(V − θ) Θ(V_S − V) + Θ(V − V_S),    (2)

where Γ is the neuronal gain, θ is a firing threshold, V_S = θ + 1/Γ is the saturation potential and Θ(x) is the Heaviside step function. The fact that 0 < Φ(V) < 1 in the interval (θ, V_S) means that the spikes are stochastic, a feature intended to model the effects of membrane noise. Notice that the limit Γ → ∞ reproduces the (discrete-time) deterministic integrate-and-fire neuron with hard threshold V_S = θ.

RESULTS
First, we show that the balance point is a critical point, characterizing its critical exponents and its avalanche distributions. We then determine the full phase diagram, locating the synchronicity states (SR, AR, AI, SI). And finally, we introduce an adaptive dynamics in the inhibitory synapses together with firing rate adaptation and show that the system then hovers towards the critical balanced point.

The mean-field limit
In the Methods section, we derive an expression for the mean firing rate of the excitatory and inhibitory neuronal populations to be used here. We approximate the excitatory/inhibitory synaptic weights by their average values, W^{EI} = ⟨W^{EI}_{ij}⟩ (and similarly for all the E/I combinations), and write the firing densities (the fraction of active sites) as ρ_E = (1/N_E) Σ_j X^E_j and ρ_I = (1/N_I) Σ_j X^I_j. We write the fractions of excitatory and inhibitory neurons as p = N_E/N and q = 1 − p = N_I/N, respectively. We also consider only the case with a stationary average input I = ⟨I_i[t]⟩, so that the neurons' membrane voltages evolve as:

V[t+1] = (1 − X[t]) [ μ V[t] + I + p W^{EE} ρ_E[t] − q W^{EI} ρ_I[t] ]    (3)

for the excitatory population (with the analogous expression, using W^{IE} and W^{II}, for the inhibitory one).

Figure 1. a, The piecewise linear firing probability Φ(V). Dashed: in the Γ → ∞ limit, we recover the deterministic leaky integrate-and-fire (LIF) model with a sharp threshold θ = V_S = 1. b, Phase transition for ρ* = ρ_E = ρ_I as a function of W along the line h = 0, for Γ = 1, 10 and 100 and J = 10 (W_c = 1/Γ). b (inset), Phase transition for ρ* as a function of the ratio g along the line Y = 1 for the same Γ and J. The order parameter ρ* is equivalent to the firing frequency ν_0 of the Brunel model [5]. Notice that the FP with ρ* > 1/2 (dot-dashed) represents a cycle-2 due to the refractory period of one time step (Brunel's synchronous regular, SR, state).
The densities ρ E and ρ I of firing neurons are our order parameters (similar to the firing frequency ν 0 of the Brunel model [5]) and the synaptic weights W EE , W EI , W IE and W II are our control parameters. Together with p, I, µ, θ and Γ, we have a nine-dimensional parameter space that will be further reduced.
Analytical results are exact for μ = 0. The case 0 < μ < 1 can be treated numerically for any leakage μ [22,36], and also admits analytical solutions close to the phase transition.
For Brunel's Model A couplings, the firing density follows the map

ρ[t+1] = (1 − ρ[t]) Φ(Wρ[t] + I),    (4)

with the weighted synaptic strength W = pJ − qgJ (see Methods). Its fixed points are given by:

ρ* = Γ (Wρ* + h)(1 − ρ*),    (5)

where we defined the supra-threshold external current, h = I − θ, equivalent to an external field in usual SOC models.
In order to put Brunel's model [5] into the standard SOC framework given in the parameter space (W, h) (Fig. 2a), we introduce the parameter Y = I/θ (the fractional external current), entirely equivalent to Brunel's input ratio ν_ext/ν_thr, where ν_ext is the average of a noisy input current and ν_thr is proportional to θ. We assume θ = 1 without loss of generality. Also, recall that g = p/q − W/(qJ), enabling the study of the system in the balanced-notation phase diagram (g, Y) (Fig. 2b). At this point we have a six-dimensional parameter space {Γ, J, g, Y, μ, p}, but we fix p = 0.8, and hence q = 0.2, from cortical data [27]. We henceforth call this the static model. If any of the parameters presented in this section is allowed to vary with time, then we have the dynamic model, discussed later in this manuscript.

Balanced point as a second order critical point
Considering the case h = I − θ = 0 ≡ h_c in equation (5), we obtain an absorbing state ρ_0 = 0 (the quiescent phase, Q), which is stable for W < W_c ≡ 1/Γ, and an active solution (also called H or L for high or low activity, respectively):

ρ* = (W − W_c)/W,    (6)

stable for W > W_c. This is the transcritical bifurcation usual in SOC models, see Fig. 1b.
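This transcritical bifurcation can be checked by iterating the mean-field density map for μ = 0 and h = 0: below W_c = 1/Γ the activity dies out, above it the iteration converges to ρ* = (W − W_c)/W. A minimal sketch:

```python
def phi(v, Gamma=1.0):
    # linear branch of the firing probability, clipped to [0, 1];
    # the threshold is already absorbed into h, so theta = 0 here
    return min(max(Gamma * v, 0.0), 1.0)

def rho_star(W, Gamma=1.0, h=0.0, rho=0.3, n=5000):
    """Iterate rho -> (1 - rho) * Phi(W*rho + h) to its stable fixed point."""
    for _ in range(n):
        rho = (1.0 - rho) * phi(W * rho + h, Gamma)
    return rho
```

For Γ = 1: `rho_star(0.5)` decays to the absorbing state, while `rho_star(1.25)` converges to (1.25 − 1)/1.25 = 0.2.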
Equation (6) is exact for μ = 0. For μ > 0, h_c = −μ, and a good approximation near the phase transition yields W_c = (1 − μ)/Γ. For W → W_c, the active solution reduces to ρ* ∼ (W − W_c)^β, with β = 1 being the critical exponent associated with the coupling intensity. Analogously, we can isolate h from equation (5) and expand for small ρ (due to small external h) on the critical point W = W_c to obtain ρ* ∼ [h/W_c]^{1/δ_h}, with δ_h = 2 the critical exponent associated with the external field. The susceptibility exponent, χ, is obtained by taking the derivative of ρ with respect to h at h = 0 in equation (5), using Γ = 1/W_c, and expanding for W → W_c. This procedure results in χ ∼ |W − W_c|^{−γ} with γ = 1. These mean-field exponents are compatible with the directed percolation (DP) universality class [37], the framework proposed to govern SOC systems [21,22,36,38]. In DP, the variance of the network activity defines an additional exponent γ′ through Var(ρ) ∼ |W − W_c|^{−γ′}, with γ′ = 0 [37]. This explains the jump in the coefficient of variation of the network activity observed by Brunel [5], since a zero-valued γ′ exponent indicates a discontinuous jump in the variance of ρ.
In the balanced notation, for the external input Y_c = h_c + 1 = 1, and using the relationship between W and g, we can write the critical point as:

g_c = p/q − W_c/(qJ) = 4 − 5(1 − μ)/(ΓJ),

where we chose the usual fractions p = 0.8 and q = 0.2 for cortical neurons. Our result generalizes the known condition g_c ≈ 4: if the term (1 − μ)/(qΓJ) turns out to be significant, the transition shifts towards lower values of g (see the inset of Fig. 1b). This means that when the synaptic strengths, J, or the firing rate gains, Γ, are small, the network will not need much inhibition, hence the smaller g value. This was only observed numerically in [5]. The large ΓJ regime, which approximates the LIF neurons in equation (2), recovers the usual balanced condition, g_c ≈ 4, obtained by Brunel [5].
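The generalized balance condition is a one-line computation; for instance, Γ = 1 and J = 10 give g_c = 3.5, the value used in the avalanche simulations:

```python
def g_c(Gamma, J, mu=0.0, p=0.8):
    """Critical inhibition ratio g_c = p/q - (1 - mu)/(q * Gamma * J);
    approaches the classical value p/q = 4 for large Gamma*J."""
    q = 1.0 - p
    return p / q - (1.0 - mu) / (q * Gamma * J)
```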
Rewriting equation (6) using g_c, we get:

ρ* = (g_c − g)/(p/q − g),

and we recover the phase transition for g → g_c with β = 1 and γ = 1. Nevertheless, the states are flipped along the g-axis: the active state now occurs for g < g_c. Thus, balanced networks share the same second-order phase transition of SOC models.

Avalanches in the balanced critical point
The balanced point (which is now shown to be a DP critical point) displays avalanche-like activity that can be measured from the ρ[t] time series (see the top inset of Fig. 2c). We simulate the static model as a mean-field network of N = 10^6 neurons with Γ = 1, J = 10, g = g_c = 3.5, Y = Y_c = 1, μ = 0 and p = 0.8. We define an avalanche as the sum of all the spikes between two consecutive silent (absorbing) states [39]; a new avalanche is sparked independently whenever the activity reaches zero. We fit truncated power laws to the complementary cumulative distributions (CDF) of avalanche sizes, F(s) ≡ P(S > s) = b + r s^{−τ+1}, and durations, F(T) ≡ P(duration > T) = c + k T^{−τ_t+1} (b, r, c and k are fit constants related to the cutoff of the power laws [40]). This representation provides a clearer visualization of the data, because it is a continuous function of its variables; it also has strongly reduced noise, its precision does not depend on the bin size of the distribution's histogram, and it has a better defined cutoff [40].
A least-squares fit yields τ = 1.46(4) and τ_t = 2.1(1) for the avalanche size and duration exponents, respectively (main plots of Figs. 2c and 2d); we also fit the histogram of sizes directly. The average avalanche size scales with the avalanche duration according to ⟨s⟩ ∼ T^a, with a = (τ_t − 1)/(τ − 1) [37,39,41]. The theoretical value of a in DP is a_DP = 2. In our simulation, we observe a crossover in the s vs. T data (top inset of Fig. 2d): for large, long avalanches, a = 2 (exactly the theoretical value), whereas for small, short avalanches, a = 2.5, due to finite-size deviations in the P(s) and P(T) distributions. The fitted values τ = 1.46 and τ_t = 2.1 result in a_fit = 2.4, which lies between the crossover exponents, hence agreeing with the s vs. T data.
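The avalanche definition and a CDF-based exponent estimate can be sketched generically (here on synthetic power-law data; the fit below is a plain least-squares line in log-log coordinates, simpler than the truncated power-law fit of Ref. [40]):

```python
import numpy as np

def avalanches(spike_counts):
    """Split the total spike count per time step into avalanches separated by
    silent (zero-activity) steps; returns (sizes, durations)."""
    sizes, durations = [], []
    s = d = 0
    for n in spike_counts:
        if n > 0:
            s += n
            d += 1
        elif d > 0:
            sizes.append(s)
            durations.append(d)
            s = d = 0
    if d > 0:
        sizes.append(s)
        durations.append(d)
    return np.array(sizes), np.array(durations)

def tau_from_ccdf(samples):
    """Estimate tau from the complementary CDF, F(s) ~ s^(1 - tau),
    via a least-squares line in log-log coordinates."""
    x = np.sort(np.asarray(samples, dtype=float))
    F = 1.0 - np.arange(x.size) / x.size      # empirical P(S >= x)
    slope = np.polyfit(np.log(x), np.log(F), 1)[0]
    return 1.0 - slope
```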
The synaptic balance vs. external current phase diagram

The solutions to equation (5) for general h (or Y) are:

ρ_± = [Γ(W − h) − 1 ± √Δ] / (2ΓW),  Δ = [1 − Γ(W − h)]² + 4Γ²Wh.    (10)

The balanced notation can be recovered by letting h = Y − 1 (recall that θ = 1) and W = pJ(1 − gγ), where γ = q/p (not to be confused with the critical exponent for the variance of ρ); substituting these relations into equation (10) yields its balanced-notation form, equation (11). Notice that there is no divergence when g = p/q (W = 0), because equation (5) then gives ρ = Γh/(1 + Γh). Solutions of equation (11) are plotted in Figs. 3a and 3b for different values of g and Y. Equation (11) yields the activity states of high and low activity (H and L, the ρ_+ branch), and also the intermediary state (I, the unstable ρ_− branch). However, there is also a stable quiescent solution Q, ρ_0 = 0, for Y ≤ Y_c, due to the step function in equation (4); Q becomes unstable over the critical line Y_c = 1 − μ for g < g_c, due to the transcritical bifurcation at the critical point (horizontal dashed lines in Fig. 2b).
For Y < Y_c, there is a first order transition line where the ρ_+ branch appears and starts to coexist with the stable ρ_0 solution, generating the intermediary unstable state ρ_− through a fold bifurcation (Fig. 3a). The first order transition line, shown in Fig. 3c, is (see Methods, for μ = 0):

Y_1(g) = 1 − [√(ΓW) − 1]²/Γ,  with W = qJ(p/q − g).    (12)

This transition depends on J and Γ, and it ends at Y_1(g_c) = Y_c = 1. For large ΓJ, we recover Brunel's [5] diagram, with an almost vertical Y_1 line at g_c, as shown in Fig. 2b.

Figure 3. a, Firing density as a function of g for many Y [equation (11)], highlighting the activity states: H for high and L for low activity (both corresponding to ρ_+), I for the intermediary and unstable ρ_− (originating in a fold bifurcation, or first order phase transition) and Q for ρ = 0. b, Firing density as a function of Y for many g; the behavior is linear for small Y but saturates for large external current (inset). This plot has no SR behaviour and should be compared to Fig. 1B (right) of Ref. [5]. The dotted lines for ρ_+ in both panels a and b correspond to marginally stable fixed points. c, First order phase transition (fold bifurcation) lines [equation (12)] for J = 10 and many neuronal gains Γ; below the line Y = 1, we have the Q state, whereas to the left of each Y_1 line we also have the states H and I. c (inset), Same diagram as in the main plot, but for Γ = 1 and many synaptic intensities J. d, Bifurcation lines for Γ = 1 towards cycle-2 at ρ_+ = 1/2, equation (13).
For Y > Y_c, the ρ_− (I) solution disappears and the Q phase is unstable. The ρ_+ solution varies continuously from the H to the L activity state. There is no phase transition under changes of g, although a clear change of behavior (large derivative) occurs close to g_c. This happens because, for g < g_c, we have a supercritical phase with self-sustained activity being fueled by the external input (the H state), whereas for g > g_c there is no self-sustained activity (Fig. 3a), making the L state appear entirely due to the external current. Nevertheless, both the H and L states bifurcate (see Fig. 3d), giving rise to distinct oscillatory behaviors to be discussed in the next section.
Even though equation (11) admits H solutions that have ρ + > 1/2 as a stable fixed point, the original system does not. This happens because the neurons defined in equation (3) have a refractory period of 1 time step (since Φ(0) ≡ 0), yielding a maximum firing rate of ρ = 1/2.
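The ρ_± branches follow from the quadratic obtained by expanding the fixed-point equation ρ = Γ(Wρ + h)(1 − ρ) on the linear branch of Φ; a numerical sketch, including the check that ρ_+ is indeed a fixed point:

```python
import numpy as np

def rho_branches(Gamma, W, h):
    """Roots of Gamma*W*rho**2 + [1 - Gamma*(W - h)]*rho - Gamma*h = 0,
    i.e. the rho_± fixed points; returns None below the fold (Delta < 0)."""
    a = Gamma * W
    b = 1.0 - Gamma * (W - h)
    c = -Gamma * h
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None
    root = np.sqrt(disc)
    return (-b + root) / (2 * a), (-b - root) / (2 * a)
```

Solutions with ρ_+ > 1/2 must be capped at 1/2 for the real network, since the one-step refractory period forbids higher instantaneous rates.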
Thus, mean-field solutions that have ρ_+ > 1/2 correspond to a cycle-2 dynamics of the form

ρ[t + 1] = 1 − ρ[t]

for the network. The amplitude of this cycle depends on the initial condition, so the ρ > 1/2 solutions are marginally stable. The transition line towards this cycle-2 dynamics is obtained by setting ρ_+ = 1/2 in the fixed-point equation (see Methods):

W_SR(h) = 2(W_c − h).    (13)

This curve is plotted in Fig. 3d for different values of J. When ΓJ is large, it becomes almost vertical, turning almost all of the H state into cycle-2 activity for g < g_c. This transition is not a standard bifurcation of maps, but rather a particularity of the model raised by the refractory period.
The L state bifurcates through a flip. As a consequence, the ρ[t] map given in equation (4) starts to cycle between negative and positive values, even though the negative ones are turned into ρ = 0 by the step function. The bifurcation line, the condition that the slope of the map at ρ_+ equals −1, is given by (see Methods):

[1 − Γ(W − h)]² + 4Γ²Wh = 4.    (14)

This curve is shown in Fig. 3d for different values of J. It becomes almost vertical for large ΓJ, turning almost all of the L state into oscillations for g > g_c.

Spiking patterns emerging for external input currents
The H and L activity states (both given by ρ_+) can be further split into four different states, depending on the global synchronicity and the local regularity of the firing patterns of the network. The network may or may not be globally synchronized, and the synchronization (or the lack of it) may happen in a locally regular or irregular fashion (meaning that each individual neuron either spikes regularly or not) [5]. Thus, we have four different oscillatory states: globally synchronous and locally regular (SR); globally asynchronous and locally regular (AR); asynchronous and irregular (AI); and synchronous and irregular (SI). To obtain slow oscillations, we simulated a large leakage regime, μ = 0.9, with a small input current, I = 0.001, giving Y = 0.101 (recalling that the critical line is at Y_c = 1 − μ = 0.1), in the inhibition-dominated region of the diagram, g = 4.5. The (g, Y) points for μ = 0 are marked in Fig. 4a, and the results of the simulations are shown in Fig. 4b.
The self-sustained activity regime, when added to the external current, gives rise to regular microscopic behavior, SR and AR. The AR state turns into SR as excitation is increased and the system undergoes the ρ = 1/2 transition (see Fig. 4a). Note that this is not a standard bifurcation for map systems, but rather a particular feature of our model due to its 1 ms refractory period, causing the marginally stable cycle-2 activity.
The addition of an external current to the quiescent regime generates irregular microscopic activity. This happens because interactions are dominated by inhibitory synapses, such that the negative currents transmitted through the network compensate for the average external positive input. The AI state turns into SI as inhibition is increased and the system undergoes a flip bifurcation (see Fig. 4a). From then on, the system is so saturated with inhibition that the activity often dies out, just to be sparked again by the constant input, generating fast oscillations that, globally, look almost like a cycle-2. Differently from Brunel [5], all of our model's randomness is condensed in the stochastic firing of individual neurons. The network is topologically regular (a complete graph), and the synaptic and external inputs are homogeneous parameters. Thus, the only requirements to generate these four behaviors are an intrinsic noise source and a refractory period that is equal to or larger than the synaptic transmission time (which is 1 ms in our model).
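The synchronicity/regularity classification can be quantified with two standard measures, sketched here on toy rasters (these estimators are illustrative choices on our part, not the paper's analysis pipeline):

```python
import numpy as np

def synchrony(raster):
    """Ratio between the population-rate variance and the mean single-unit
    variance: ~1 for globally synchronous firing, ~1/N for asynchronous firing.
    `raster` has shape (time steps, neurons)."""
    pop = raster.mean(axis=1)
    return pop.var() / raster.var(axis=0).mean()

def isi_cv(spike_times):
    """Coefficient of variation of inter-spike intervals:
    0 for perfectly regular trains, ~1 for Poisson-like irregular trains."""
    isi = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    return isi.std() / isi.mean()
```

High synchrony with low CV maps to SR, low synchrony with CV near 1 to AI, and so on for the mixed cases.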
When leakage is introduced into the model through μ > 0, the voltages of the neurons become exponentially distributed due to the slow recovery of the membrane potential after the spike [36]. This generates a chain of delayed firings intercalated by long periods of silence that depend on μ. The firings form sparse, relatively large avalanches intertwined with small avalanches, which Brunel [5] called slow oscillations (SO, bottom panel of Fig. 4).
This behavior occurs for external input Y very close to the critical line Y_c = 1 − μ. Large inputs saturate the activity either through regular or irregular states, depending on the value of g.

Self-organized critical balanced networks
For the brain to reach and maintain the balanced state with g = g_c and Y = Y_c without fine tuning, there has to exist a self-organizing mechanism. While inhibition frequently increases together with excitation after the stimulation of a neuron, the reverse does not seem to happen; that is, excitation does not compensate for inhibition when the neuron is suppressed [26]. To model this feature, we introduce an adaptive mechanism on the weights of the inhibitory synapses, inspired by the Levina-Herrmann-Geisel (LHG) dynamics [19]:

W_ij[t + 1] = W_ij[t] + (A − W_ij[t])/τ_W + u_W W_ij[t] X_j[t],    (15)

applied to the inhibitory weights W^{EI}_{ij} and W^{II}_{ij}. Here, τ_W is a (large) recovery time, A is a recovery level and u_W is the fraction of the synaptic strength facilitated when a presynaptic neuron fires. This mechanism potentiates inhibition by a factor u_W when the presynaptic neuron fires and then relaxes it with a time scale given by τ_W. The dynamics in the W_ij generates a response along the g axis of the phase diagram. However, our model has two degrees of freedom, requiring another independent mechanism to reach the balanced condition autonomously. This is achieved through firing rate adaptation [42], which we model by a homeostatic dynamics in the firing threshold of the neurons:

θ_i[t + 1] = θ_i[t] − θ_i[t]/τ_θ + u_θ θ_i[t] X_i[t],    (16)

where the parameter u_θ is the fractional increase in the neuron threshold after it fires, and τ_θ is a recovery time scale towards a null threshold, measured relative to the external current. This dynamics is sufficient to drive the system along the Y axis, because Y = I/θ. When the neuron fires, its threshold adapts to prevent new firings; otherwise, it decays with the large time scale τ_θ. In the long term, the proposed dynamics generates adaptation relative to the input level of the network around the mean activity, characterized by the averages ḡ_{II/EI} and Ȳ. The system self-organizes very close to the critical balanced point of the static model (Fig. 5b). However, note that ḡ_{II/EI} and Ȳ are slightly deviated towards the AI region (i.e., ḡ_{II/EI} ≳ g_c and Ȳ ≳ Y_c; compare Fig. 5b with the phase diagram in Fig. 4a). This is confirmed by visual inspection of the network activity (Fig. 5c), showing a globally desynchronized state with a locally irregular firing pattern.
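The two homeostatic loops can be sketched as scalar update functions; the relaxation target of the threshold (zero) is our reading of the text, so treat these signatures as assumptions rather than the paper's exact equations:

```python
def lhg_inhibition_step(W, A, tau_W, u_W, x_pre):
    """Inhibitory weight: relaxes towards the recovery level A with time scale
    tau_W and is potentiated by a fraction u_W when the presynaptic unit fires."""
    return W + (A - W) / tau_W + u_W * W * x_pre

def threshold_step(theta, tau_theta, u_theta, x):
    """Firing threshold: decays with time scale tau_theta and grows by a
    fraction u_theta whenever the neuron itself fires (firing rate adaptation)."""
    return theta - theta / tau_theta + u_theta * theta * x
```

Under this reading, a stationary threshold requires the mean firing ⟨X⟩ ≈ 1/(u_θ τ_θ), so the adaptation pins the average activity at a small value, consistent with hovering near the critical balanced point.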

DISCUSSION
We presented a model of an E/I network that has a balanced condition between excitation and inhibition when g = g_c. The balanced point, g_c, may be shifted away from the previous theoretical value g_c ≈ 4 due to the soft threshold of the stochastic neurons, while keeping the same fraction of excitatory to inhibitory neurons in the network (p = 0.8 for excitatory neurons, similar to the average fraction of glutamate-activated synapses in the brain [27]).
Other authors studied E/I network models and also found continuous phase transitions happening at points resembling criticality [30][31][32][33], and also connected criticality to a synchronization phase transition [34]. However, all of these models have limitations that are naturally solved in our proposed framework. The model by Poil et al. [30] does not seem to present true criticality, since its exponents vary widely with the model's parameters [33] and do not follow the proper scaling law for critical systems [33,37,39,41]. In other models, the avalanche distributions follow power laws only for very small inhibition [32]. Di Santo et al. [34] studied a phenomenological model of a spiking network and found AI and critical states, but they do not consider the E/I populations independently, nor do they derive the rate equations from a microscopic model [34].
In contrast, our model: a) generalizes the balance point of the standard Brunel model [5], showing that it belongs to the expected DP universality class of SOC systems [37]; b) displays the power-law scaling of avalanche sizes and durations at the balanced critical point, matching experiments [28]; c) contains all four synchronicity states predicted by Brunel [5] when external input is considered; d) self-organizes towards the critical point via the adaptation of inhibitory synapses [26] and firing rates [42], both of which are known to occur in the brain; and e) its self-organized point displays quasi-critical avalanches and a microscopic dynamics that resembles, by visual inspection, AI activity. Thus, we unify in a single framework all the observed features of critical neuronal network states with all the features of balanced networks, giving rise to a self-organized (quasi-)critical balanced network.
The levels of the adaptation mechanisms are controlled by their respective time scales and fractions of potentiation. The two required dynamics give the system a good deal of freedom in the display of macroscopic activity. For instance, low levels of firing rate adaptation could drive the system either towards a switching between different synchronicity-regularity states for Y > 1 (say, switching between SR and AR, AR and AI, or even AI and SI), or towards a self-organized bistable [43] dynamics due to the discontinuous transition for Y < 1. On the other hand, low levels of inhibition adaptation could generate long periods of high activity intertwined with periods of low activity. Therefore, our model could explain how drugs that modulate these mechanisms can change the global activity of the considered system; and it could also explain why different regions of the brain display different activity patterns, as discussed in [5] and [26] and in references therein, due to different levels of adaptability.
Nevertheless, if both mechanisms are strong, the system will be driven towards the balanced critical point.

Even though we study a complete graph and Brunel studies a sparse random network [5], both models share the same type of results, because the author assumes that there are no correlations in the system other than those induced by a global firing rate [5]. His results are thus also in the mean-field limit, due to the high dimensionality of the network, and the critical exponents we found also apply to his model at the balanced point. In fact, the universality of critical phenomena guarantees that our results apply to any network of high dimensionality that keeps the same collective behavior, symmetries and conservation laws of our model [44].
The simulation of large systems with the self-organizing mechanism is limited mainly by memory, due to the number of variables that need to be kept at every time step of the dynamics (of the order of T N², where T is the total number of time steps in the simulation). However, our results serve as a proof of concept, since increasing N is known to enhance the convergence to the critical (and balanced) point [20,22]. Another limitation is the purely stochastic description of the spikes. Nevertheless, if the time scales and dynamical features of the neurons and synapses are obeyed, our results are robust due to universality [44].
We have unified the balanced networks framework and the neuronal avalanches SOC framework into a single formalism. Even though the model needs deeper exploration, the general scenario is clear: standard balanced networks are a special case of absorbing-state SOC models in the DP universality class (those with large ΓJ), and SOC models in DP are a particular case of standard balanced networks (those with field Y = 1). Homeostatic mechanisms in the system's inhibition and firing rate adaptation drive the network towards a quasi-critical state that displays power-law avalanches and an AI-like firing pattern.
METHODS

A. The mean-field firing rate

We average the spike variables over each population, recalling that Φ(V) is the conditional probability of firing given V:

P(X_i[t + 1] = 1 | V_i[t + 1]) = Φ(V_i[t + 1]).

This relation is sufficient for deriving the fixed-point equation for ρ when μ = 0.
The evolution of the firing rates can also be described in terms of the neuronal firing ages in the case μ > 0 [21,36]. This is possible because the reset of the potential causes a population of neurons that fire together to also evolve together until they fire again. We call U^E_k[t] the potential of the population of excitatory neurons that fired k time steps before time t, and η^E_k[t] the proportion of such neurons with respect to the excitatory population. Then, the proportion of firing excitatory neurons evolves as:

η^E_0[t + 1] = Σ_k η^E_k[t] Φ(U^E_k[t]).

A neuron with firing age k at time t can, at time t + 1, either fire with probability Φ(U^E_k[t]) or become part of the population with firing age k + 1, of proportion η^E_{k+1}[t + 1]. In the latter case, potentials and proportions evolve as:

U^E_{k+1}[t + 1] = μ U^E_k[t] + I + p W^{EE} η^E_0[t] − q W^{EI} η^I_0[t],    (22)
η^E_{k+1}[t + 1] = η^E_k[t] (1 − Φ(U^E_k[t])),    (23)

where U^E_0[t] = 0 for any t (since the reset potential is zero). By writing the equivalent relations for the inhibitory population, we obtain recurrence equations that describe the evolution of the whole system after a reasonably short transient that guarantees that every neuron has fired (or has had zero potential) at least once, allowing us to perform numerical studies of the system in the mean-field framework.
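For μ = 0 the age-based recurrence must collapse onto the one-dimensional density map, which provides a direct numerical check; in the sketch below the populations are merged (Model A) and ages beyond K are lumped into one bin, an approximation that is exact here since all non-reset potentials coincide when μ = 0:

```python
import numpy as np

Gamma, theta = 1.0, 1.0
mu, I, W = 0.0, 1.1, 1.0                 # mu = 0 so the simple map is recovered
K = 60                                   # oldest tracked firing age (lumped)

def phi(v):
    return np.clip(Gamma * (v - theta), 0.0, 1.0)

eta = np.zeros(K + 1); eta[0] = 1.0      # all neurons have just fired
U = np.zeros(K + 1)                      # U[k]: potential of the age-k population
for t in range(4000):
    rho = eta[0]                         # fraction firing at the current step
    u = mu * U + I + W * rho             # candidate potentials one step ahead
    p = phi(u)
    new_eta = np.zeros_like(eta)
    new_eta[0] = np.dot(eta[1:], p[1:])  # those that fire at the next step
    new_eta[1] = eta[0]                  # just-fired neurons: reset, refractory
    survive = eta[1:] * (1.0 - p[1:])
    new_eta[2:] = survive[:-1]
    new_eta[K] += survive[-1]            # lump ages beyond K together
    new_U = np.zeros_like(U)
    new_U[2:] = u[1:-1]                  # lumped-bin potential: exact for mu = 0
    eta, U = new_eta, new_U
```

For Γ = 1, W = 1, h = I − θ = 0.1, the stationary η_0 must equal the positive root of ρ² + 0.1ρ − 0.1 = 0.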

B. The absorbing quiescent state
Let us determine the stability boundary of the Q (ρ_0) phase. In the stationary state of equation (22), the distribution of voltages has a single Dirac peak. When V < θ, we have the inactive state, reducing equation (3) to:

V[t + 1] = μ V[t] + I,

which has the stationary solution V[t + 1] = V[t] ≡ V_1 given by

V_1 = I/(1 − μ).

Since equations (22) and (23) take the same form for the two populations, the onset of activity can be studied from this state, where the V^{E/I}_1[t] are given by the corresponding stationary solutions. The firing densities ρ^E[t] and ρ^I[t] then evolve as:

ρ^{E/I}[t + 1] = (1 − ρ^{E/I}[t]) Φ(V^{E/I}_1[t]).

For large input current I, we may have V^{E/I}_1 > V_S, so that Φ = 1 and

ρ^{E/I}[t + 1] = 1 − ρ^{E/I}[t].

This admits marginally stable cycle-2 solutions (i.e., they have no basins of attraction) [36].
When $V^{E/I}_1 < V_S$, the subthreshold branch of $\Phi$ applies. For Brunel's Model A [5] ($W_{EE} = W_{IE} = J$, $W_{II} = W_{EI} = gJ$), and using the weighted synaptic strength $W = pJ - qgJ$, we get the one-dimensional map $\rho[t+1] = \left(1 - \rho[t]\right)\Gamma\left(W\rho[t] + h\right)$, where we defined the field (suprathreshold current) $h = I - \theta$.
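To make the balance point concrete, the snippet below evaluates the weighted synaptic strength and the critical inhibition level implied by the transcritical condition $\Gamma W = 1$ of the $h = 0$ mean field (an assumption consistent with piecewise-linear models of the type in [36]); the fractions $p = 0.8$, $q = 0.2$ and the values of $\Gamma$, $J$ are illustrative.

```python
p, q = 0.8, 0.2              # illustrative excitatory/inhibitory fractions (p/q = 4)
Gamma, J = 1.0, 10.0         # illustrative gain and synaptic strength

def W_of(g, J=J, p=p, q=q):
    """Weighted synaptic strength W = p*J - q*g*J for Brunel's Model A."""
    return p * J - q * g * J

# Perfect excitation/inhibition balance: W = 0 exactly at g = p/q
g_balance = p / q

# Critical point of the h = 0 map (Gamma*W = 1): slightly below the balance ratio
g_c = p / q - 1.0 / (q * Gamma * J)

# For Gamma*J >> 1 the critical point approaches g_c ~= p/q = 4
g_c_large = p / q - 1.0 / (q * 1.0 * 1e6)
```

In the limit $\Gamma J \gg 1$ the critical inhibition approaches the balance ratio $p/q = 4$, which is how the synaptic balance point and the absorbing-state critical point coincide.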

The bistable phase
The bistable phase is composed of a stable branch H ($\rho_+$), an unstable branch I ($\rho_-$) and the quiescent branch Q ($\rho_0$). In the phase diagram, the bistable phase is separated from a phase where the quiescent state is unique through a fold bifurcation. To determine this phase boundary, we return to the solution for the $\rho_\pm$ fixed points, $\rho_\pm = \left[\Gamma W - \Gamma h - 1 \pm \sqrt{\Delta}\right]/(2\Gamma W)$, with $\Delta = (1 + \Gamma h - \Gamma W)^2 + 4\Gamma^2 W h$. We locate the transition line $g_1(Y)$ where the two branches appear by using the condition $\Delta = 0$. This enters the phase diagram $(g, Y)$ in Fig. 3 as a first order transition line: there are no H or L states for $g > g_1$ and $Y < 1 - \mu$, only the quiescent Q state, which is fully compatible with Brunel's phase diagram [5].
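The fold condition can be checked numerically. The sketch below uses the fixed-point quadratic of the assumed map $\rho \to (1-\rho)\Gamma(W\rho + h)$ for a subthreshold field $h < 0$, with $\Gamma = 1$ and illustrative values; the closed form `W_fold` is the one implied by setting the discriminant $\Delta$ to zero under these assumptions.

```python
import math

def fixed_points(W, h, Gamma=1.0):
    """Real fixed points rho of rho = (1 - rho)*Gamma*(W*rho + h)."""
    a, b, c = Gamma * W, 1.0 + Gamma * h - Gamma * W, -Gamma * h
    disc = b * b - 4.0 * a * c          # the discriminant Delta
    if disc < 0.0:
        return []
    r = math.sqrt(disc)
    return sorted([(-b - r) / (2.0 * a), (-b + r) / (2.0 * a)])

def W_fold(h, Gamma=1.0):
    """Fold line for h < 0 implied by Delta = 0: Gamma*W = (1 + sqrt(-Gamma*h))**2."""
    return (1.0 + math.sqrt(-Gamma * h)) ** 2 / Gamma

h = -0.04                              # subthreshold field (Y < 1)
W1 = W_fold(h)                         # = (1 + 0.2)**2 = 1.44 for Gamma = 1
above = fixed_points(W1 + 1e-3, h)     # two real branches just above the fold
below = fixed_points(W1 - 1e-3, h)     # no real branches just below it
```

As $h \to 0^-$ the fold line approaches the critical coupling $\Gamma W = 1$, connecting the first order line to the continuous transition.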
The first order transition line can also be written as a function $Y_1(g)$, such that we recover the diagram of Brunel [5], with an almost vertical line $Y_1$ crossing $Y = 1$ at $g_1 = g_c$ for large $\Gamma J$ (see Fig. 3c).
The corresponding first order transition lines can also be written in the SOC notation $(W, h)$. The last relation is fully compatible with equation (32), which can be read as $W_1 = qJ(p/q - g_1)$. At the bifurcation point, the first order transition step is $\rho^* = \sqrt{-\Gamma h}\big/\big(1 + \sqrt{-\Gamma h}\big)$, which vanishes as $h \to 0$ and is thus consistent with the continuous phase transition in that limit (see Fig. 3a).

Transition between regular spiking states
The asynchronous regular ($\rho_+$, or AR) phase turns into a cycle-2 (synchronous regular, SR) phase if its activity reaches the value $\rho = 1/2$ as the excitation is increased. This occurs because the neurons have a refractory period of one time step, so the fastest possible firing sequence of a single neuron is $X = \{\ldots, 1, 0, 1, 0, 1, \ldots\}$, i.e., a maximum average firing rate of $\langle X \rangle = 1/2$.
We can obtain the transition line $W_{SR}(h)$ [or $g_{SR}(Y)$] where the cycle-2 appears. Inserting the condition $\rho_{SR} = 1/2$ in equations (10) and (11) for $\rho_+$, we get $W_{SR} = 2\left(1/\Gamma - h\right)$, the drive at which $\Phi$ saturates at $\rho = 1/2$. The result for $W_{SR}$ is coherent with previous results for cycle-2 obtained in [36]. If $\Gamma J \gg 1$, the SR phase occurs for $g < g_c \approx 4$, independently of $Y$, which is similar to Brunel's phase diagram [5]. For moderate $\Gamma J$, the transition line depends on the ratio $\gamma = q/p$ (see Fig. 3d).
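The onset of the cycle-2 can be seen directly by iterating the assumed map $\rho \to (1-\rho)\,\Phi(W\rho + I)$ with the piecewise-linear $\Phi(V) = \min(\max(\Gamma(V - \theta), 0), 1)$; all parameter values below are illustrative.

```python
def phi(V, Gamma=1.0, theta=0.0):
    """Assumed piecewise-linear firing function, clipped to [0, 1]."""
    return min(max(Gamma * (V - theta), 0.0), 1.0)

def iterate(rho, W, I, steps):
    """Iterate the mean-field map rho -> (1 - rho) * phi(W*rho + I)."""
    traj = [rho]
    for _ in range(steps):
        rho = (1.0 - rho) * phi(W * rho + I)
        traj.append(rho)
    return traj

# Strong drive: Phi saturates at 1, the map becomes rho -> 1 - rho, and the
# activity hops around the refractory bound <rho> = 1/2 (marginal cycle-2).
sr = iterate(0.3, W=1.0, I=1.5, steps=6)

# Moderate drive: a stable fixed point below 1/2 (here rho* = sqrt(2) - 1).
ar = iterate(0.3, W=0.5, I=0.5, steps=200)
```

The saturated orbit alternates between $\rho$ and $1-\rho$, so the population splits into two groups firing on alternate steps, which is the SR phenomenology described above.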

Transition between irregular spiking states
The synchronous irregular (SI) phase appears as inhibition is increased from the asynchronous irregular (AI) phase through a flip bifurcation with small amplitude. For a map of the form $\rho[t+1] = f(\rho[t])$, the bifurcation condition is $f'(\rho_+) = -1$, where $\rho_+$ is the fixed point given by equation (10). Simplifying the terms in the last equation, we obtain the flip bifurcation lines shown in Fig. 3d.
Solving this equation, we find that this cycle appears for $g > g_0$, with $g_0$ given by the root of the flip condition $f'(\rho_+) = -1$.
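The flip point can be located numerically. The sketch below assumes the map $f(\rho) = (1-\rho)\,\Gamma(W\rho + h)$ in its linear (non-saturated) region, so that $f'(\rho) = \Gamma(W - 2W\rho - h)$, and bisects on $f'(\rho_+) = -1$ in the inhibition-dominated regime $W < 0$; $\Gamma = 1$, $h = 0.5$ and the bracketing interval are illustrative choices.

```python
import math

Gamma, h = 1.0, 0.5        # illustrative gain and suprathreshold field

def rho_plus(W):
    """Physical fixed point (in [0, 1]) of f(rho) = (1 - rho)*Gamma*(W*rho + h)."""
    a, b, c = Gamma * W, 1.0 + Gamma * h - Gamma * W, -Gamma * h
    r = math.sqrt(b * b - 4.0 * a * c)
    roots = ((-b + r) / (2.0 * a), (-b - r) / (2.0 * a))
    return min(x for x in roots if 0.0 <= x <= 1.0)

def slope(W):
    """Map derivative f'(rho) evaluated at the fixed point rho_+."""
    rho = rho_plus(W)
    return Gamma * (W - 2.0 * W * rho - h)

# Bisection on slope(W) = -1 between a stable (W = -0.8) and an
# unstable (W = -1.2) value of the effective weight.
lo, hi = -0.8, -1.2
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if slope(mid) > -1.0:
        lo = mid
    else:
        hi = mid
W_flip = 0.5 * (lo + hi)
```

Beyond `W_flip` the fixed point $\rho_+$ loses stability and a small-amplitude period-2 oscillation of the population activity appears, while individual spike trains remain irregular: the SI phenomenology.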

Low activity state with respect to the external current
We can also examine the behavior for fixed $g > g_c$ as a function of the input $Y$. We obtain an almost linear behavior similar to that found by Brunel [5] (see Fig. 3b). A first order expansion for large $J$ in equation (11) gives equation (48), which has the correct maximum limit $\rho \to 1$ when $Y \to \infty$. For moderate values of $Y$, equation (48) presents the linear behavior found in equation (24) of Brunel [5].
Due to leakage, the curves of Brunel [5] start at $Y = 1 - \mu$ and have some curvature, in contrast to our linear $\rho \propto (Y - 1)\,\Theta(Y - 1)$ behavior (since we used $\mu = 0$ in Fig. 3b). It is possible, however, to obtain curves that start at $Y = 1 - \mu$ with curvature similar to Brunel's [5] by using the solution for $\mu > 0$ presented below.
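The near-linear low-activity branch can be checked numerically. The sketch below evaluates the active fixed point of the assumed map $\rho \to (1-\rho)\Gamma(W\rho + h)$ with $h = \theta(Y - 1)$; $\Gamma = 1$, $\theta = 1$ and $W = -10$ are illustrative stand-ins for the strongly inhibited, large-$J$ regime.

```python
import math

def rho_plus(Y, W=-10.0, Gamma=1.0, theta=1.0):
    """Active fixed point of rho = (1 - rho)*Gamma*(W*rho + h), h = theta*(Y - 1)."""
    h = theta * (Y - 1.0)
    a, b, c = Gamma * W, 1.0 + Gamma * h - Gamma * W, -Gamma * h
    r = math.sqrt(b * b - 4.0 * a * c)
    roots = ((-b + r) / (2.0 * a), (-b - r) / (2.0 * a))
    return min(x for x in roots if 0.0 <= x <= 1.0)

# For strong inhibition, rho grows almost linearly above threshold,
# roughly as theta*(Y - 1)/|W|.
r_lo, r_mid, r_hi = rho_plus(1.0), rho_plus(1.5), rho_plus(2.0)
```

The midpoint value `r_mid` sits very close to the average of the endpoints, confirming the near-linear $\rho \propto (Y-1)$ behavior above $Y = 1$.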

D. Active state for nonzero leakage
The stationary firing rates for general $\mu$ can be obtained in two limits: either close to the critical region or close to the saturating regime, i.e., when the potential of the first peak in the distribution $P_t(V)$ approaches $V_S = 1/\Gamma + \theta$.
When $\mu > 0$, the active phases must be described by the recurrence equations for both excitatory and inhibitory populations. Using the simplified weight, we can rewrite the recurrence equations (19), (20) and (21) in the stationary state as
$$U_k = \mu U_{k-1} + W\rho + I = (W\rho + I) \sum_{n=0}^{k-1} \mu^n = (W\rho + I)\,\frac{1 - \mu^k}{1 - \mu}, \qquad (51)$$
with $U_0 = 0$, because we consider the resting potential to be equal to the reset potential.
When the system is close to the critical point, we can consider the last term to be negligible for $\mu < 1$, because $\Phi(U_1)$ goes to zero as $U_1$ approaches $\theta$. When $h = 0$, there is a second order phase transition because the solutions to equation (54) are the quiescent branch $\rho_- = \rho^* = 0$ and a nontrivial branch $\rho^* > 0$. This relation can also be rewritten, to a good approximation close to the critical point ($g \lesssim g_c$), in a simpler closed form.
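The geometric sum in equation (51) can be verified directly against the recursion; the parameter values below are illustrative (any $0 \le \mu < 1$ works).

```python
mu, W, rho, I = 0.9, 1.5, 0.2, 0.3   # illustrative values, mu < 1
drive = W * rho + I                  # constant stationary input W*rho + I

# Recursion U_k = mu*U_{k-1} + W*rho + I with U_0 = 0 ...
U_rec = [0.0]
for k in range(1, 20):
    U_rec.append(mu * U_rec[-1] + drive)

# ... equals the closed form (W*rho + I)*(1 - mu**k)/(1 - mu)
U_closed = [drive * (1.0 - mu ** k) / (1.0 - mu) for k in range(20)]
```

For old firing ages ($k \to \infty$) the potential saturates at $(W\rho + I)/(1-\mu)$, which is what makes the tail of the age distribution easy to handle in the mean-field iteration.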