Emergence of Irregular Activity in Networks of Strongly Coupled Conductance-Based Neurons

Cortical neurons are characterized by irregular firing and a broad distribution of rates. The balanced state model explains these observations with a cancellation of mean excitatory and inhibitory currents, which makes fluctuations drive firing. In networks of neurons with current-based synapses, the balanced state emerges dynamically if coupling is strong, i.e., if the mean number of synapses per neuron K is large and synaptic efficacy is of order 1/√K. When synapses are conductance-based, current fluctuations are suppressed when coupling is strong, questioning the applicability of the balanced state idea to biological neural networks. We analyze networks of strongly coupled conductance-based neurons and show that asynchronous irregular activity and broad distributions of rates emerge if synaptic efficacy is of order 1/log(K). In such networks, unlike in the standard balanced state model, current fluctuations are small and firing is maintained by a drift-diffusion balance. This balance emerges dynamically, without fine-tuning, if inputs are smaller than a critical value, which depends on synaptic time constants and coupling strength, and it is significantly more robust to connection heterogeneities than the classical balanced state model. Our analysis makes experimentally testable predictions of how the network response properties should evolve as inputs increase.


I. INTRODUCTION
Each neuron in cortex receives inputs from hundreds to thousands of pre-synaptic neurons. If these inputs were to sum to produce a large net current, the central limit theorem argues that fluctuations should be small compared to the mean, leading to regular firing, as observed in vitro under constant current injection [1,2]. Cortical activity, however, is highly irregular, with a coefficient of variation of interspike intervals (CV of ISI) close to one [3,4]. To explain the observed irregularity, it has been proposed that neural networks operate in a balanced state, where strong feedforward and recurrent excitatory inputs are canceled by recurrent inhibition and firing is driven by fluctuations [5,6]. At the single neuron level, input currents must satisfy two constraints for this state to emerge. First, excitatory and inhibitory currents must be fine-tuned so as to produce an average input below threshold. Specifically, if K and J represent the average number of input connections per neuron and the synaptic efficacy, respectively, the difference between excitatory and inhibitory presynaptic inputs must be of order 1/(KJ). Second, input fluctuations should be large enough to drive firing.
It has been shown that the balanced state emerges dynamically (without fine tuning) in randomly connected networks of binary units [7,8] and networks of current-based spiking neurons [9,10], provided that coupling is strong and recurrent inhibition is powerful enough to counterbalance instabilities due to recurrent excitation. However, these results have all been derived assuming that the firing of a presynaptic neuron produces a fixed amount of synaptic current, hence neglecting the dependence of synaptic current on the membrane potential, a key aspect of neuronal biophysics. In real synapses, inputs are mediated by changes in conductance, due to the opening of synaptic receptor-channels on the membrane, and synaptic currents are proportional to the product of the synaptic conductance and a driving force which depends on the membrane potential. Models that incorporate this description are referred to as 'conductance-based synapses'.
Large synaptic conductances have been shown to have major effects on the stationary [11] and dynamical [12] response of single cells, and form the basis of the 'high-conductance state' [13-19] that has been argued to describe in vivo data well [20-22] (but see [23] and Discussion). At the network level, conductance modulation plays a role in controlling signal propagation [24], input summation [25], and firing statistics [26]. However, most of the previously mentioned studies rely exclusively on numerical simulations, and in spite of a few attempts at analytical descriptions of networks of conductance-based neurons [17,27-31], an understanding of the behavior of such networks when coupling is strong is still lacking.
Here, we investigate networks of strongly coupled conductance-based neurons. We find that, for synapses of order 1/√K, fluctuations are too weak to sustain firing, questioning the relevance of the balanced state idea to cortical dynamics. Our analysis, on the other hand, shows that stronger synapses (of order 1/log(K)) generate irregular firing when coupling is strong. We characterize the properties of networks with such a scaling, showing that they match properties observed in cortex, and discuss constraints induced by synaptic time constants. The model generates qualitatively different predictions compared to the current-based model, which could be tested experimentally.

II. MODELS OF SINGLE NEURON AND NETWORK DYNAMICS
Membrane potential dynamics. We study the dynamics of networks of leaky integrate-and-fire (LIF) neurons with conductance-based synaptic inputs. The membrane potential V_j of the j-th neuron in the network follows the equation

C_j dV_j/dt = -g_L^j (V_j - E_L) - g_E^j (V_j - E_E) - g_I^j (V_j - E_I),   (1)

where C_j is the neuronal capacitance; E_L, E_E and E_I are the reversal potentials of the leak, excitatory and inhibitory currents; while g_L^j, g_E^j and g_I^j are the leak, excitatory and inhibitory conductances. Assuming instantaneous synapses (the case of finite synaptic time constants is discussed at the end of the results section), excitatory and inhibitory conductances are given by

g_{E,I}^j(t) = g_L^j τ_j Σ_{m ∈ E,I} a_{jm} Σ_n δ(t - t_m^n).   (2)

In Eq. (2), τ_j = C_j/g_L^j is the single neuron membrane time constant, a_{jm} are dimensionless measures of synaptic strength between neuron j and neuron m, and Σ_n δ(t - t_m^n) represents the sum of all the spikes generated at times t_m^n by neuron m. Every time the membrane potential V_j reaches the firing threshold θ, the j-th neuron emits a spike, its membrane potential is set to a reset value V_r, and stays at that value for a refractory period τ_rp; after this time the dynamics resumes, following Eq. (1).
We use a_{jm} = a for all excitatory synapses and a_{jm} = ag for all inhibitory synapses. In the homogeneous case, each neuron receives synaptic inputs from K_E = K excitatory and K_I = γK inhibitory cells. In the network case, each neuron receives additional K_X = K excitatory inputs from an external population firing with Poisson statistics at rate ν_X. We use excitatory and inhibitory neurons with the same biophysical properties, hence the above assumptions imply that the firing rates of excitatory and inhibitory neurons are equal, ν = ν_E = ν_I. Models taking into account the biophysical diversity between the excitatory and inhibitory populations are discussed in Appendix D. When heterogeneity is taken into account, the above defined values of K_{E,I,X} represent the means of Gaussian distributions. We use the following single neuron parameters: τ_rp = 2 ms, θ = -55 mV, V_r = -65 mV, E_E = 0 mV, E_I = -75 mV, E_L = -80 mV, τ_j = τ_L = 20 ms. We explore various scalings of a with K and, in all cases, we assume that a ≪ 1. When a ≪ 1, an incoming spike produced by an excitatory presynaptic neuron produces a jump in the membrane potential of amplitude a(E_E - V), where V is the voltage just before spike arrival. In cortex, V ∼ -60 mV and average amplitudes of post-synaptic potentials are of order 0.5-1.0 mV [32-38]. Thus, we expect realistic values of a to be of order 0.01.
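As a quick sanity check of the last estimate, the relation between PSP amplitude and the dimensionless efficacy a can be inverted numerically; the sketch below is our own illustration (not part of the original analysis) and recovers a ∼ 0.01 from the quoted voltage and PSP ranges.

```python
# Back-of-the-envelope check (illustration only): with conductance-based
# synapses, an excitatory spike deflects the membrane potential by a*(E_E - V),
# so PSP amplitudes of 0.5-1.0 mV at V ~ -60 mV imply a of order 0.01.
E_E = 0.0    # excitatory reversal potential (mV)
V = -60.0    # typical membrane potential in vivo (mV)

def efficacy_from_psp(psp_mv, v=V, e_rev=E_E):
    """Dimensionless synaptic strength a implied by a PSP of psp_mv millivolts."""
    return psp_mv / (e_rev - v)

a_lo = efficacy_from_psp(0.5)   # lower end of the PSP range
a_hi = efficacy_from_psp(1.0)   # upper end of the PSP range
print(a_lo, a_hi)
```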
Diffusion and effective time constant approximations. We assume that each cell receives projections from a large number of cells (K ≫ 1), neurons are sparsely connected and fire approximately as Poisson processes, each incoming spike provides a small change in conductance (a ≪ 1), and temporal correlations in synaptic inputs can be neglected. Under these assumptions, we can use the diffusion approximation and approximate the conductances as

g_E^j(t) ≈ g_L^j τ_j a [K r_E + √(K r_E) ζ_E(t)],
g_I^j(t) ≈ g_L^j τ_j a g [γK r_I + √(γK r_I) ζ_I(t)],   (3)

where r_E and r_I are the firing rates of pre-synaptic E and I neurons, respectively, and ζ_E and ζ_I are independent Gaussian white noise terms with zero mean and unit variance density. In the single neuron case, we take r_E = ν_X and r_I = ην_X, where η represents the ratio of I to E input rates. In the network case, r_E = ν_X + ν and r_I = ν, where ν_X is the external rate, while ν is the firing rate of excitatory and inhibitory neurons in the network, determined self-consistently (see below). We point out that, for some activity levels, the assumption of Poisson presynaptic firing made in the derivation of Eq. (3) breaks down, as neurons in the network show interspike intervals with CV significantly different from one (e.g. see Fig. 3C). However, comparisons between mean field results and numerical simulations (see Appendix E) show that neglecting non-Poissonianity (as well as the other contributions discussed above Eq. (3)) generates quantitative but not qualitative discrepancies, with a magnitude that decreases with coupling strength. Moreover, in Appendix B, we show that if a ≪ 1 the firing of neurons in the network matches that of a Poisson process with refractory period and hence, when ν ≪ 1/τ_rp, deviations from Poissonianity become negligible.
Using the diffusion approximation, Eq. (1) reduces to

τ dV/dt = -(V - µ) + σ(V) √τ ζ(t),   (4)

where ζ is a white noise term with zero mean and unit variance density, while

τ = τ_L / [1 + a τ_L (K r_E + g γ K r_I)],
µ = (τ/τ_L) [E_L + a τ_L (K r_E E_E + g γ K r_I E_I)],   (5)
σ²(V) = a² τ [K r_E (V - E_E)² + g² γ K r_I (V - E_I)²].

In Eq. (4), τ is an effective membrane time constant, while µ and σ²(V) represent the average and the variance of the synaptic current generated by incoming spikes, respectively. The noise term in Eq. (4) can be decomposed into an additive and a multiplicative component. The latter has an effect on membrane voltage statistics that is of the same order as the contribution coming from synaptic shot noise [39], a factor which has been neglected in deriving Eq. (3). Therefore, for a consistent analysis, we neglect the multiplicative component of the noise in the above derivation; this leads to an equation of the form of Eq. (4) with the substitution

σ²(V) → σ² = a² τ [K r_E (µ - E_E)² + g² γ K r_I (µ - E_I)²].   (6)

This approach has been termed the effective time constant approximation [39]. Note that the substitution of Eq. (6) greatly simplifies mathematical expressions, but it is not a necessary ingredient for the results presented in this paper. In fact, all our results can be obtained without resorting to this approximation (see Appendices A, B and D).
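As a concrete illustration, the effective parameters of the approximation can be evaluated numerically. The sketch below is our own code, assuming effective-parameter expressions of the form written above; the reversal potentials and τ_L are from the text, while a, K, g, γ and the presynaptic rates are example assumptions. For realistic values, µ sits between E_I and threshold while τ is much shorter than τ_L.

```python
import math

# Effective time constant approximation, evaluated numerically.
# E_L, E_E, E_I, tau_L are from the text; a, K, g, gamma, r_E, r_I are
# example assumptions (not values used in the paper's figures).
E_L, E_E, E_I = -80.0, 0.0, -75.0   # reversal potentials (mV)
tau_L = 0.020                        # leak membrane time constant (s)

def effective_params(a, K, g, gamma, r_E, r_I):
    """Return effective time constant tau (s), mean mu (mV), noise sigma (mV)."""
    g_tot = 1.0 + a * tau_L * (K * r_E + g * gamma * K * r_I)  # total conductance / g_L
    tau = tau_L / g_tot
    mu = (E_L + a * tau_L * (K * r_E * E_E + g * gamma * K * r_I * E_I)) / g_tot
    sigma2 = a**2 * tau * (K * r_E * (mu - E_E)**2
                           + g**2 * gamma * K * r_I * (mu - E_I)**2)
    return tau, mu, math.sqrt(sigma2)

tau, mu, sigma = effective_params(a=0.01, K=1000, g=30.0, gamma=0.25, r_E=5.0, r_I=5.0)
print(tau, mu, sigma)
```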
Current-based model. The previous definitions and results translate directly to current-based models, with the only exception that the dependency of excitatory and inhibitory synaptic currents on the membrane potential is neglected (see [10] for more details). Therefore, Eq. (1) becomes

τ_L dV_j/dt = -(V_j - E_L) + I_E^j(t) + I_I^j(t),   (7)

where

I_{E,I}^j(t) = ± τ_L Σ_{m ∈ E,I} J_{jm} Σ_n δ(t - t_m^n)

represent the excitatory (+) and inhibitory (-) input currents. Starting from Eq. (7), making assumptions analogous to those discussed above and using the diffusion approximation [10], the dynamics of current-based neurons is given by an equation of the form of Eq. (4) with

τ = τ_L,   µ = E_L + K J τ_L (r_E - γ g r_I),   σ² = K J² τ_L (r_E + γ g² r_I).   (8)

Note that, unlike what happens in conductance-based models, τ is a fixed parameter and does not depend on the network firing rate or external drive. Another difference between the current-based and conductance-based models is that in the latter, but not the former, σ depends on V; as discussed above, this difference is neglected in the main text, where we use the effective time constant approximation.

III. BEHAVIOR OF SINGLE NEURON RESPONSE FOR LARGE K
We start our analysis by investigating the effects of synaptic conductance on single neuron response. We consider a neuron receiving K excitatory and γK inhibitory inputs, each with synaptic efficacy J and gJ respectively, from cells firing with Poisson statistics with rates

r_E = ν_X,   r_I = η ν_X,   (9)

and analyze its membrane potential dynamics in the frameworks of the current-based and conductance-based models. In both models, the membrane potential V follows a stochastic differential equation of the form of Eq. (4); differences emerge in the dependency of τ, µ and σ on the parameters characterizing the connectivity, K and J. In particular, in the current-based model, the different terms in Eq. (8) can be written as

τ = τ_0^curr,   µ - E_L ∼ K J (1 - γgη) µ_0^curr,   σ² ∼ K J² (1 + γg²η) (σ_0^curr)²,

where τ_0^curr, µ_0^curr, and σ_0^curr are independent of J and K. In the conductance-based model, the efficacy of excitatory and inhibitory synapses depends on the membrane potential as J = a(E_{E,I} - V); the different terms in Eq. (4), under the assumption that Ka ≫ 1, become of order

τ ∼ 1/(a K ν_X),   µ ∼ O(1),   σ² ∼ a.

All these terms depend on parameters in a completely different way than in the current-based case. As we will show below, these differences drastically modify how the neural response changes as K and J are varied, and hence the size of J ensuring a finite response for a given value of K.
The dynamics of a current-based neuron is shown in Fig. 1Ai, with parameters leading to irregular firing. Because of the chosen parameter values, the mean excitatory and inhibitory inputs approximately cancel each other, generating a subthreshold average input and fluctuation-driven spikes, which leads to irregular firing. If all parameters are fixed while K is increased (J ∼ K⁰), the response changes drastically (Fig. 1Aii), since the mean input becomes much larger than threshold and firing becomes regular. To understand this effect, we analyze how the terms in Eq. (4) are modified as K increases. The evolution of the membrane potential in time is determined by two terms: a drift term -(V - µ)/τ, which drives the membrane potential toward its mean value µ, and a noise term σ/√τ, which leads to fluctuations around this mean value. Increasing K modifies the equilibrium value µ of the drift force and the input noise, which increase proportionally to KJ(1 - γgη) and KJ²(γg²η + 1), respectively (Fig. 1B,C).
This observation suggests that, to preserve irregular firing as K is increased, two ingredients are needed. First, the rates of excitatory and inhibitory inputs must be fine-tuned to maintain a mean input below threshold; this can be achieved by choosing γgη - 1 ∼ 1/(KJ). Second, the amplitude of input fluctuations should be preserved; this can be achieved by scaling synaptic efficacy as J ∼ 1/√K. Once these two conditions are met, irregular firing is restored (Fig. 1Aiii). Importantly, in a network with J ∼ 1/√K, irregular firing emerges without fine tuning, since rates dynamically adjust to balance excitatory and inhibitory inputs and maintain mean inputs below threshold [7,8]. We now show that this solution does not work once synaptic conductance is taken into account. The dynamics of a conductance-based neuron in response to the inputs described above is shown in Fig. 1Di. As in the current-based neuron, it features irregular firing, with mean input below threshold and spiking driven by fluctuations, and firing becomes regular for larger K, leaving all other parameters unchanged (Fig. 1Dii). However, unlike the current-based neuron, the input remains below threshold at large K; regular firing is produced by large fluctuations, which saturate the response and produce spikes that are regularly spaced because of the refractory period. These observations can be understood by looking at the equation for the membrane potential dynamics: increasing K leaves the equilibrium value µ of the membrane potential invariant but increases the drift force and the input noise amplitude as Ka and √K a, respectively (Fig. 1E,F). Since the equilibrium membrane potential is fixed below threshold, response properties are determined by the interplay between drift force and input noise, which have opposite effects on the probability of spike generation. The response saturation observed in Fig.
1Dii shows that, as K increases at fixed a, fluctuations dominate over the drift force. On the other hand, using the scaling a ∼ 1/√K leaves the amplitude of fluctuations unchanged, but generates a restoring force of order √K (Fig. 1E) which dominates and completely abolishes firing at strong coupling (Fig. 1Diii). The results in Fig. 1 show that the response of a conductance-based neuron when K is large depends on the balance between drift force and input noise. The scalings a ∼ O(1) and a ∼ 1/√K each let one of the two contributions dominate, suggesting that an intermediate scaling could keep a balance between them. Below we derive such a scaling, showing that it preserves the firing rate and CV of ISI when K becomes large.
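The two failure modes can be made concrete with an order-of-magnitude sketch (our illustration; the proportionalities drift ∼ Ka and noise ∼ √K a follow the text, while the numerical values are arbitrary):

```python
import math

# Order-of-magnitude scalings from the text: in the conductance-based model,
# the drift force grows as K*a and the input noise as sqrt(K)*a (arbitrary units).
def drift_noise(K, a):
    return K * a, math.sqrt(K) * a

K1, K2 = 1_000, 100_000

# a ~ K^0: the noise amplitude grows as sqrt(K), saturating the response.
_, noise_small_K = drift_noise(K1, 0.01)
_, noise_large_K = drift_noise(K2, 0.01)

# a ~ 1/sqrt(K): the noise stays fixed, but the drift grows as sqrt(K),
# abolishing firing at strong coupling.
drift_small_K, fixed_noise_1 = drift_noise(K1, 0.01)
drift_large_K, fixed_noise_2 = drift_noise(K2, 0.01 * math.sqrt(K1 / K2))
print(noise_large_K / noise_small_K, drift_large_K / drift_small_K)
```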

IV. A SCALING RELATION THAT PRESERVES SINGLE NEURON RESPONSE FOR LARGE K
We analyze under what conditions the response of a single conductance-based neuron is preserved when K is large. For a LIF neuron described by Eqs. (4,5,6), the single cell transfer function, i.e. the dependency of the firing rate ν on the external drive ν_X, is given by [40,41]

ν = [τ_rp + τ √π ∫_{u_min}^{u_max} e^{u²} (1 + erf(u)) du]⁻¹,   (10)

where u_max = (θ - µ)/σ and u_min = (V_r - µ)/σ. In the biologically relevant case of a ≪ 1, Eq. (10) simplifies significantly. In fact, the distance between the equilibrium membrane potential and threshold, measured in units of noise, u_max, is of order 1/√a (except for inputs ν_X ≪ 1/(aKτ_L), where it is of order 1/(a√(Kν_X τ_L)) ≫ 1/√a). Therefore u_max is large when a is small; in this limit, the firing rate is given by the Kramers escape rate [42], and Eq. (10) becomes

ν ≈ [τ_rp + (√π τ/u_max) e^{u_max²}]⁻¹,   (11)

or, equivalently,

ν = [τ_rp + Q/ν_X]⁻¹,   Q = (√π τ̃ / (√a ṽ K)) e^{ṽ²/a},   (12)

where we have defined ṽ² = a u_max² and τ̃ = a K ν_X τ. The motivation to introduce ṽ and τ̃ is that they remain of order 1 in the small a limit, provided the external inputs ν_X are at least of order 1/(aKτ_L). When the external inputs are such that ν_X ≫ 1/(aKτ_L), these quantities become independent of ν_X, a and K and are given by

µ = (E_E + gγη E_I)/(1 + gγη),   τ̃ = 1/(1 + gγη),
ṽ² = (θ - µ)² (1 + gγη) / [(E_E - µ)² + g²γη (E_I - µ)²].   (13)

The firing rate given by Eq. (12) remains finite when a is small and/or K is large if Q remains of order one; this condition leads to the following scaling relationship

a = ṽ² / log[√a ṽ K / (√π τ̃)],   (14)

i.e. a should be of order 1/log(K).
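The implicit scaling relation can be solved numerically. The sketch below is our own code, assuming the relation takes the form a = ṽ²/log(c·√a·K) with c an order-one constant and an example value of ṽ; it confirms that the solution decays roughly as 1/log(K):

```python
import math

# Fixed-point solution of the scaling relation, assuming the form
# a = v2 / log(c * sqrt(a) * K), with v2 = v~^2 and c an order-one constant
# (both example assumptions).  The solution decays roughly as 1/log(K).
def solve_a(K, v2=0.09, c=1.0, n_iter=200):
    a = 0.01
    for _ in range(n_iter):
        a = v2 / math.log(c * math.sqrt(a) * K)
    return a

a_vals = {K: solve_a(K) for K in (10**3, 10**6, 10**9)}
print(a_vals)
```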
In Appendix C, we show that expressions analogous to Eq. (12) can be derived for integrate-and-fire neuron models which feature additional intrinsic voltage-dependent currents, as long as synapses are conductance-based and input noise is small (a ≪ 1). Examples of such models include the exponential integrate-and-fire neuron, with its spike-generating exponential current [43], and models with voltage-gated subthreshold currents [23]. Moreover, we show that, in these models, firing remains finite if a ∼ 1/log(K), and voltage-dependent currents generate corrections to the logarithmic scaling which are negligible when coupling is strong.
Since ṽ and τ̃ vary with ν_X, Eq. (14) can be satisfied, and hence firing can be supported, only if the inputs span a small range of values, such that τ̃ and ṽ are approximately constant, or if ν_X ≫ 1/(aKτ_L). Note that, while in the strong coupling limit (i.e. when K goes to infinity) only the second of these two possibilities can be implemented with input rates spanning physiologically relevant values, both are admissible when coupling is moderate (i.e. when K is large but finite, a condition consistent with experimental data on cortical networks [44,45]). In what follows, with the exception of the section on finite synaptic time constants, we focus on the case ν_X ≫ 1/(aKτ_L), and investigate how different properties evolve with K using the scaling defined by Eq. (14) with ṽ and τ̃ given by Eq. (13). Importantly, all the results discussed below hold for inputs outside the region ν_X ≫ 1/(aKτ_L), as long as ν_X is at least of order 1/(aKτ_L) (a necessary condition for the derivation of Eq. (12) to be valid) and the inputs span a region small enough for the variations of ṽ and τ̃ to be negligible.
In Fig. 2A, we compare the scaling defined by Eq. (14) with the a ∼ 1/√K scaling of current-based neurons. At low values of K, the values of a obtained with the two scalings are similar; at larger values of K, the synaptic strength defined by Eq. (14) decays as a ∼ 1/log(K), i.e. synapses are stronger in the conductance-based model than in the current-based model. Examples of the single neuron transfer function computed from Eq. (10) for different coupling strengths are shown in Fig. 2B,C. Responses are nonlinear at onset and close to saturation. As predicted by the theory, scaling a with K according to Eq. (14) preserves the firing rate over a region of inputs that increases with coupling strength (Fig. 2C,D), while the average membrane potential remains below threshold (Fig. 2D). The quantity ṽ/√a represents the distance from threshold of the equilibrium membrane potential in units of input fluctuations; Eq. (14) implies that this distance increases with coupling strength. When K is very large, the effective membrane time constant, which is of order τ ∼ 1/(aKν_X), becomes small and firing is driven by fluctuations that, on the time scale of this effective membrane time constant, are rare.
We next investigated whether the above scaling preserves irregular firing, by analyzing the CV of interspike intervals. This quantity is given by [10]

CV² = 2π ν² τ² ∫_{u_min}^{u_max} e^{x²} dx ∫_{-∞}^{x} e^{y²} (1 + erf(y))² dy,   (15)

and, for the biologically relevant case of a ≪ 1 and µ < θ, reduces to (see Appendix B for details)

CV = 1 - ν τ_rp,   (16)

i.e. the CV is close to one at low rates and decays monotonically as the neuron approaches saturation. Critically, Eq. (16) depends on coupling strength only through ν, hence any scaling relation preserving the firing rate will also produce a CV of order one at low rates. We validated this result numerically in Fig. 2E. We now examine how Eq. (14) preserves irregular firing in conductance-based neurons. We have shown that increasing K at fixed a produces large input and membrane fluctuations, which saturate firing; the scaling a ∼ 1/√K preserves input fluctuations but, because of the strong drift force, suppresses membrane potential fluctuations, and hence firing. The scaling of Eq. (14), at every value of K, yields the value of a that balances the contributions of drift and input fluctuations, so that membrane fluctuations are of the right size to preserve the rate of threshold crossing. Note that, unlike what happens in the current-based model, both input fluctuations and drift force increase with K (Fig. 2F,G), while the membrane potential distribution, which is given by [46]

P(V) = (2ντ/σ) e^{-(V-µ)²/σ²} ∫_{max(V,V_r)}^{θ} (dV'/σ) e^{(V'-µ)²/σ²} + ν τ_rp δ(V - V_r),   (17)

slowly becomes narrower (Fig. 2H). This result can be understood by noticing that, when a ≪ 1, and neglecting the contribution due to the refractory period, Eq. (17) reduces to

P(V) = e^{-(V-µ)²/σ²} / (σ√π).   (18)

Hence, the probability distribution becomes Gaussian when coupling is strong, with a variance proportional to σ² ∼ a. We note that, since a is of order 1/log(K), the width of the distribution becomes small only for unrealistically large values of K.
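The relation CV = 1 - ντ_rp can be checked directly by simulation: for a Poisson process with an absolute refractory period, each ISI is τ_rp plus an exponential waiting time. The sketch below is our illustration; the rate parameters are arbitrary.

```python
import random
import statistics

# CV of ISI for a Poisson process with refractory period: ISI = tau_rp + Exp(lam),
# so CV = 1 - nu * tau_rp (close to 1 at low rates, 0 at saturation nu -> 1/tau_rp).
random.seed(0)
tau_rp = 0.002   # refractory period (s)
lam = 100.0      # escape rate out of the refractory state (spk/s), arbitrary

isis = [tau_rp + random.expovariate(lam) for _ in range(200_000)]
nu = 1.0 / statistics.mean(isis)
cv = statistics.stdev(isis) * nu
predicted = 1.0 - nu * tau_rp
print(cv, predicted)
```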

V. ASYNCHRONOUS IRREGULAR ACTIVITY IN NETWORK RESPONSE AT STRONG COUPLING
We have so far considered the case of a single neuron subjected to stochastic inputs.We now show how the above results generalize to the network case, where inputs to a neuron are produced by a combination of external and recurrent inputs.
We consider networks of recurrently connected excitatory and inhibitory neurons, firing at rate ν, stimulated by an external population firing with Poisson statistics at rate ν_X. Using again the diffusion approximation, the response of a single neuron in the network is given by Eq. (10) (and hence Eq. (12)) with

r_E = ν_X + ν,   r_I = ν.   (19)

If all neurons in a given population are described by the same single cell parameters and the network is in an asynchronous state in which cells fire at a constant rate, Eq. (10) provides an implicit equation whose solution is the network transfer function. Example solutions are shown in Fig. 3B (numerical validation of the mean field results is provided in Appendix E). In Appendix D, we prove that firing in the network is preserved when coupling is strong if parameters are rescaled according to Eq. (14). Moreover, we show that response nonlinearities are suppressed and the network response in the strong coupling limit (i.e. when K goes to infinity) is given, up to saturation, by

ν = ρ ν_X.   (20)
FIG. 3. Response of networks of conductance-based neurons for large K. (A) Scaling relation defined by the self-consistency condition given by Eqs. (14) and (19). (B,C) Network transfer function and CV of the ISI obtained from Eqs. (10) and (15) (colored lines), together with the strong coupling limit solutions of Eqs. (20) and (16) (black lines). (D) Probability distribution of the membrane potential obtained from Eq. (17). In panels B-D, dotted and dashed lines represent quantities obtained with the scalings J ∼ K⁰ and J ∼ 1/√K, respectively, for the values of K and J indicated in panel A (black dots). Parameters: γ = 1/4 and g = 30.
The parameter ρ, which is obtained by solving Eq. (12) self-consistently (see Appendix D for details), is the response gain in the strong coupling limit. Finally, our derivation implies that Eq. (14) preserves irregular firing and produces a probability distribution of membrane potentials whose width decreases only logarithmically as K increases (Fig. 3C,D and numerical validation in Appendix E), as in the single neuron case. While this logarithmic decrease is a qualitative difference with the current-based balanced state, in which the width stays finite in the large K limit, in practice, for realistic values of K, realistic fluctuations of the membrane potential (a few mV) can be observed in both cases.
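The self-consistency underlying the network transfer function can be illustrated with a toy fixed-point computation (our sketch; the transfer function φ below is a generic saturating stand-in, not the actual formula of Eq. (10), and all parameters are arbitrary): the network rate is the ν at which ν = φ(ν_X + ν, ν).

```python
import math

# Toy self-consistent network rate: nu solves nu = phi(r_E, r_I) with
# r_E = nu_X + nu and r_I = nu.  phi is a generic saturating stand-in for
# the single neuron transfer function; all parameters are arbitrary.
tau_rp = 0.002   # refractory period (s) -> maximal rate 1/tau_rp

def phi(r_E, r_I, gain=0.02, g=30.0, gamma=0.25):
    drive = gain * (r_E - gamma * g * r_I)        # inhibition-dominated recurrence
    return (1.0 / tau_rp) / (1.0 + math.exp(4.0 - drive))

def network_rate(nu_X, n_iter=500):
    nu = 1.0
    for _ in range(n_iter):
        nu = 0.5 * nu + 0.5 * phi(nu_X + nu, nu)  # damped fixed-point iteration
    return nu

print(network_rate(100.0), network_rate(200.0))
```

The damping factor of 0.5 keeps the iteration stable even when the recurrent feedback is strong; inhibition dominating the recurrence (γg > 1 here) ensures a unique fixed point.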
We now turn to the question of what happens in networks with different scalings between a and K. Our analysis of single neuron response described above shows that scalings different from that of Eq. (14) fail to preserve firing for large K, as they let either the input noise or the drift dominate. However, the situation in networks might be different, since recurrent interactions could in principle adjust the statistics of input currents such that irregular firing at low rates is preserved when coupling becomes strong. Thus, we turn to the analysis of the network behavior when a scaling a ∼ K^(-α) is assumed. For α ≤ 0, the dominant contribution of input noise at the single neuron level (Figs. 1 and 2) generates saturation of response and regular firing in the network (Fig. 3). This can be understood by noticing that, for large K, the factor Q in Eq. (12) becomes negligible and the self-consistency condition defining the network rate is solved by ν = 1/τ_rp. For α > 0, the network response for large K is determined by two competing elements. On the one hand, input drift dominates and tends to suppress firing (Figs. 1 and 2). On the other hand, for the network to be stable, inhibition must dominate recurrent interactions [9]. Hence, any suppression in network activity reduces recurrent inhibition and tends to increase neural activity. When these two elements conspire to generate a finite network response, the factor Q in Eq. (12) must be of order one and ṽ ∼ a ∼ K^(-α). In this scenario, the network activity exhibits the following features (Fig. 3): (i) the mean inputs drive neurons very close to threshold (θ - µ ∼ aσ ∼ K^(-α)); (ii) the response of the network to external inputs is linear, up to corrections that vanish for large K; (iii) firing is irregular (because of Eq. (16)); (iv) the width of the membrane potential distribution is of order a ∼ K^(-α) (because of Eq. (18)). Therefore, scalings different from that of Eq.
(14) can produce asynchronous irregular activity in networks of conductance-based neurons, but this leads to networks with membrane potentials narrowly distributed close to threshold, a property which seems at odds with what is observed in cortex [47-52].

VI. ROBUST LOGNORMAL DISTRIBUTION OF FIRING RATES IN NETWORKS WITH HETEROGENEOUS CONNECTIVITY
Up to this point, we have assumed that the number of connections is the same for all neurons. In real networks, however, this number fluctuates from cell to cell. The goal of this section is to analyze the effects of heterogeneous connectivity in networks of conductance-based neurons.
We investigated the effects of connection heterogeneity numerically as follows. We chose a Gaussian distribution of the number of connections per neuron, with mean K and variance ΔK² for excitatory connections, and mean γK and variance γ²ΔK² for inhibitory connections. The connectivity matrix was constructed by first drawing random E and I in-degrees K^i_{E,X,I} from these Gaussian distributions for each neuron, and then selecting at random K^i_{E,X,I} E/I pre-synaptic neurons. We then simulated the network dynamics and measured the distributions of rates and CVs of the ISI in the population. Results for different values of CV_K = ΔK/K are shown in Fig. 4A-C. For small and moderate values of connection heterogeneity, increasing CV_K broadens the distributions of rates and CVs of the ISI, but both distributions remain peaked around a mean rate that is close to that of homogeneous networks (Fig. 4A,B). For larger CV_K, on the other hand, the distribution of rates changes shape, with a large fraction of neurons moving to very low rates while others increase their rates (Fig. 4A), and the distribution of CVs of the ISI becomes bimodal, with a peak at low CV corresponding to the high rate neurons, while the peak at CV close to 1 corresponds to neurons with very low firing rates (Fig. 4B).
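The connectivity construction just described can be sketched in a few lines (our illustration; the network size, K, and CV_K are example values):

```python
import random

# Heterogeneous connectivity: draw each neuron's in-degree from a Gaussian
# with mean K and std CV_K * K, then pick that many presynaptic partners at
# random without repetition.  N, K and CV_K are example values.
random.seed(2)
N, K, CV_K = 1000, 100, 0.2

def draw_in_degrees(n, mean_k, cv):
    """In-degrees from a Gaussian, clipped to the admissible range [0, n-1]."""
    return [min(n - 1, max(0, round(random.gauss(mean_k, cv * mean_k))))
            for _ in range(n)]

degrees = draw_in_degrees(N, K, CV_K)
presyn = [random.sample(range(N), d) for d in degrees]   # partners per neuron
print(sum(degrees) / N)
```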
To characterize the change in the distribution of rates with CV_K more systematically, we measured, for each value of CV_K, the fraction of quiescent cells, defined as the number of cells that did not spike during 20 seconds of the simulated dynamics (Fig. 4C). This analysis shows that the number of quiescent cells, and hence the distribution of rates, changes abruptly once CV_K exceeds a critical value CV*_K. Importantly, unlike our definition of the fraction of quiescent cells, this abrupt change is a property of the network that is independent of the duration of the simulation.
To understand these numerical results, we performed a mean field analysis of the effects of connection heterogeneity on the distribution of rates (Appendix F). This analysis captures the numerical simulations quantitatively (Fig. 4A) and shows that, in the limit of small CV_K and a, rates in the network are given by

ν_i = ν_0 exp(Ω CV_K z_i / a),   (22)

where ν_0 is the population average in the absence of heterogeneity, z_i is a Gaussian random variable with zero mean and unit variance, and the prefactor Ω is independent of a, K and ν_X. The exponent in Eq. (22) represents a quenched disorder in the value of ṽ_i, i.e. in the distance from threshold of the single cell mean input µ_i in units of input noise. As shown in Appendix F, Eq. (22) implies that the distribution of rates is lognormal, a feature consistent with experimental observations [53-55] and with the distributions of rates in networks of current-based LIF neurons [56]. It also implies that the width of the distribution Δν/ν should increase linearly with CV_K, a prediction which is confirmed by numerical simulations (Fig. 4C). The derivation in Appendix F also provides an explanation for the change in the shape of the distribution at larger CV_K. In fact, for larger heterogeneity, the small CV_K approximation is no longer valid and fluctuations in input connectivity produce cells for which µ_i is far from θ, which are either firing at extremely low rates (µ_i < θ) or regularly (µ_i > θ). The latter generate the peak at low values of the CV of the ISI seen for large values of CV_K.
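The lognormal prediction can be illustrated by direct sampling. The sketch below is our own code, assuming rates of the lognormal form ν_i = ν_0 exp(Ω CV_K z_i / a) with example values of ν_0, Ω and a; it checks that the relative width Δν/ν grows linearly with CV_K when CV_K is small.

```python
import math
import random
import statistics

# Lognormal rate distribution: assuming nu_i = nu_0 * exp(Omega * CV_K * z_i / a)
# with z_i standard Gaussian (nu_0, Omega, a are example values), the relative
# width std(nu)/mean(nu) grows linearly with CV_K when CV_K is small.
random.seed(1)
nu_0, Omega, a = 5.0, 0.05, 0.01

def rel_width(cv_k, n=100_000):
    s = Omega * cv_k / a
    nus = [nu_0 * math.exp(s * random.gauss(0.0, 1.0)) for _ in range(n)]
    return statistics.stdev(nus) / statistics.mean(nus)

w1, w2 = rel_width(0.002), rel_width(0.004)
print(w1, w2)
```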
The quantity CV*_K represents the level of connection heterogeneity above which significant deviations from the asynchronous irregular state emerge, i.e. large fractions of neurons show extremely low or regular firing. Eq. (22) suggests that CV*_K should increase linearly with a. We validated this prediction with our mean field model, by computing the minimal value of CV_K at which 1% of the cells fire at a rate of 10⁻³ spk/s (Fig. 4D). Note that the derivation of Eq. (22) only assumes a to be small and does not depend on the scaling relation between a and K. On the other hand, the fact that CV*_K increases linearly with a makes the state emerging in networks of conductance-based neurons with a ∼ 1/log(K) significantly more robust to connection fluctuations than that emerging with a ∼ K^(-α), for which CV*_K ∼ K^(-α), and than that of current-based neurons, for which CV*_K ∼ 1/√K. Note that, while in randomly connected networks CV_K ∼ 1/√K, a larger degree of heterogeneity has been observed in cortical networks [51,57-61]. Our results show that networks of conductance-based neurons could potentially be much more robust to such heterogeneities than networks of current-based neurons.

VII. COMPARISON WITH EXPERIMENTAL DATA
The relation between synaptic efficacy and the number of connections per neuron has recently been studied experimentally using a culture preparation [62]. In that study, it was found that cultures in which K was larger had weaker synapses than cultures with smaller K (Fig. 5).
In what follows, we compare these data with the scalings expected in networks of current-based and conductance-based neurons, and discuss the implications for in vivo networks.
In the current-based model, the strengths of excitatory and inhibitory postsynaptic potentials as a function of K can be written as J_E = J_0/√K and J_I = g J_E. In the conductance-based model, these quantities become J_E = (V − E_E)a and J_I = g(V − E_I)a, where a = a(K, v) is given by Eq. (14) while, for the dataset of [62], V ∼ −60 mV, J_E ∼ J_I, E_E ∼ 0 mV and E_I ∼ −80 mV. For each model, we inferred the free parameters from the data with a least-squares optimization in logarithmic scale (best fit: g = 1.1 and J_0 = 20 mV in the current-based model; g = 3.4 and v = 0.08 in the conductance-based model) and computed the expected synaptic strength as a function of K (lines in Fig. 5A). Our analysis shows that the performances of the current-based and the conductance-based model in describing the data, over the range of K explored in the experiment, are similar, with the former being slightly better than the latter (root mean square error 2.2 mV vs 2.4 mV). This result is consistent with the observation made in [62] that, when fitted with a power law J ∼ K^-β, the data are best described by β = 0.59 but are compatible with a broad range of values (95% confidence interval: [0.47, 0.70]). Note that even though both models give similar results for PSP amplitudes in the range of values of K present in cultures (∼50-1000), they give significantly different predictions for larger values of K. For instance, for K = 10,000, J_E is expected to be ∼0.2 mV in the current-based model and ∼0.7 mV in the conductance-based model.
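The log-scale least-squares procedure can be sketched as follows on synthetic data. The power-law form J ∼ K^-β is the one used in [62]; all numerical values here are hypothetical, not the actual measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic culture-like data: K in the experimental range (~50-1000) and
# PSP amplitude following J ~ K^(-beta) with multiplicative noise.
K = np.geomspace(50.0, 1000.0, 40)
beta_true, J_ref = 0.59, 1.0          # hypothetical values, not the data of [62]
J = J_ref * (K / 100.0) ** (-beta_true) * np.exp(0.2 * rng.standard_normal(K.size))

# Least squares in logarithmic scale = linear regression on (log K, log J)
slope, intercept = np.polyfit(np.log(K), np.log(J), 1)
print(f"fitted beta = {-slope:.2f}")   # recovers a value close to beta_true
```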
In Fig. 5B, we plot the distance between the equilibrium membrane potential µ and the threshold θ in units of input fluctuations, v/√a, as a function of K, using the value of v obtained above, and find that the expected value in vivo, where K ∼ 10^3-10^4, is in the range 2-3. In Fig. 5C,D, we plot how the total excitatory synaptic conductance and the effective membrane time constant change as a function of K. Both quantities change significantly faster under the conductance-based scaling (g_E/g_L ∼ K/log(K); τ/τ_L ∼ log(K)/K) than expected from the scaling of the current-based model. For K in the range 10^3-10^4 and mean firing rates in the range 1-5 spk/s, the total synaptic conductance is found to be about 2 to 50 times the leak conductance, while the effective membrane time constant is found to be smaller than the membrane time constant by a factor of 2 to 50.
We compare these values with available experimental data in the Discussion.

VIII. EFFECT OF FINITE SYNAPTIC TIME CONSTANTS
Results shown in Fig. 5 raise the question of whether the assumption of negligible synaptic time constants made in our analysis is reasonable. In fact, synaptic decay time constants of experimentally recorded postsynaptic currents range from a few ms (for AMPA and GABA_A receptor-mediated currents) to tens of ms (for GABA_B and NMDA receptor-mediated currents, see e.g. [63]), i.e. they are comparable to the membrane time constant already at weak coupling, where τ ∼ τ_L is typically in the range 10-30 ms [64,65]. In the strong coupling limit, the effective membrane time constant goes to zero, so this assumption clearly breaks down. In this section, we investigate the range of validity of this assumption, and what happens once it no longer holds.
With finite synaptic time constants, the temporal evolution of the conductances in Eq. (2) is replaced by Eq. (23); the single-neuron membrane potential dynamics is then described by Eqs. (1) and (23). Here, for simplicity, we take excitatory and inhibitory synaptic currents to have the same decay time constant τ_S. Fig. 6A shows how increasing the synaptic time constant modifies the mean firing rate of single integrate-and-fire neurons in response to K (γK) excitatory (inhibitory) inputs with synaptic strength a (ga) and rate ν_X (ην_X). The figure shows that, though the mean firing rate is close to predictions obtained with instantaneous synapses for small τ_S/τ, deviations emerge for τ_S/τ ∼ 1, and firing is strongly suppressed as τ_S/τ becomes larger. To understand these numerical results, we resort again to the diffusion approximation [67], together with the effective time constant approximation [11,68], to derive a simplified expression of the single neuron membrane potential dynamics with finite synaptic time constant (details in Appendix G); this is given by Eq. (24), where τ, µ and σ are as in the case of negligible synaptic time constants (Eq. (5)), whilst z is an Ornstein-Uhlenbeck process with correlation time τ_S. It follows that, with respect to Eq. (4), input fluctuations with frequencies larger than 1/τ_S are suppressed and, for large τ_S/τ, the membrane potential dynamics is given by Eq. (25), i.e. the membrane potential is essentially slaved to a time-dependent effective reversal potential corresponding to the r.h.s. of Eq. (25) [14]. Note that Eq. (25) is valid only in the subthreshold regime. When the r.h.s. of Eq. (25) exceeds the threshold, the neuron fires a burst of action potentials whose frequency, in the strong coupling limit, is close to the inverse of the refractory period [69]. As ν_X increases, the equilibrium value µ remains constant while τ decreases, leading to a suppression of membrane fluctuations (Fig. 6D), and in turn to the suppression of response observed in Fig. 6A.
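A minimal simulation of the filtered dynamics makes this suppression of fluctuations explicit. The normalization of the Ornstein-Uhlenbeck drive below is our assumption (chosen so that the white-noise case is recovered as τ_S → 0); with it, the stationary fluctuations obey σ_V = σ√(τ/(τ + τ_S)), which decays as σ√(τ/τ_S) for τ_S ≫ τ.

```python
import numpy as np

def sigma_V(tau, tau_S, sigma=1.0, mu=0.0, T=2000.0, dt=0.01, seed=0):
    """Stationary std of V for dV/dt = -(V - mu)/tau + sigma*sqrt(2/tau)*z(t),
    where z is an Ornstein-Uhlenbeck process with correlation time tau_S,
    normalized (our assumption) to recover unit-density white noise as tau_S -> 0."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    xi = rng.standard_normal(n) * np.sqrt(dt)
    amp = sigma * np.sqrt(2.0 / tau)
    V, z, trace = mu, 0.0, np.empty(n)
    for i in range(n):
        z += (-z * dt + xi[i]) / tau_S          # filtered white noise
        V += dt * (-(V - mu) / tau + amp * z)   # membrane low-pass filter
        trace[i] = V
    return trace[n // 10:].std()                # discard initial transient

for tau_S in (0.1, 1.0, 5.0):   # in units of tau = 1
    print(tau_S, sigma_V(1.0, tau_S), np.sqrt(1.0 / (1.0 + tau_S)))
```

The printed pairs show the simulated std approaching the analytical value σ√(τ/(τ + τ_S)) in each regime.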
In Appendix G, we use existing analytical expansions [67,69,70], as well as numerical simulations, to show that neural responses obtained with finite τ_S are in good agreement with predictions obtained with instantaneous synapses as long as τ_S/τ ≲ 0.1. It follows that the single neuron properties we discussed in the case of instantaneous synapses hold in the region of inputs for which τ_S/τ ≲ 0.1 (i.e. ν_X ≲ 0.1/aKτ_S) and the derivation of Eq. (14) is valid (i.e. ν_X at least of order 1/aKτ_L). Thus, there is at best a narrow range of inputs for which these properties carry over to the finite synaptic time constant case. Interestingly, when biologically relevant parameters are considered (e.g. Fig. 6), inputs within this region generate firing rates in the range observed experimentally in cortical networks [23,47-55]. The analysis of Appendix G also shows that, when τ_S/τ ∼ 1, i.e. once the input rate ν_X is of order 1/aKτ_S, firing is suppressed exponentially. The scaling relation of Eq. (14) does not prevent this suppression, which emerges for external rates of order 1/aKτ_S ∼ log(K)/Kτ_S. Scalings of the form a ∼ K^-α, with α > 0, on the other hand, create a larger region of inputs for which τ_S/τ ≲ 1 but, as we showed when studying the neural dynamics with instantaneous synapses, fail to generate a response for large K. We next asked whether another scaling relation between a and K could prevent the suppression of neural response when τ_S is finite. The single neuron response computed in Appendix G is a nonlinear function of the input ν_X, which depends parametrically on a and K. It follows that, in order to preserve the single neuron response, a should scale differently with K for different values of ν_X. Since in cortical networks input rates, i.e. ν_X, change dynamically on a time scale much shorter than that over which plasticity can modify synaptic connections, we conclude that a biologically realistic scaling between a and K that prevents suppression of neural response for finite τ_S over a broad range of external inputs does not exist. Moreover, the membrane potential dynamics for large K and τ_S/τ (Eq. (25)) becomes independent of a. This shows that rescaling synaptic efficacy with K cannot prevent the suppression of response.
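The narrowness of the validity window can be made concrete: the ratio of its upper bound (0.1/aKτ_S) to its lower bound (1/aKτ_L) is 0.1 τ_L/τ_S, independent of a and K. All order-one prefactors are set to one below, an assumption for illustration.

```python
# Validity window of the instantaneous-synapse description:
#   1/(a*K*tau_L)  <~  nu_X  <~  0.1/(a*K*tau_S)
# Its width (ratio of upper to lower bound) is 0.1*tau_L/tau_S, independent
# of a and K; all order-one prefactors are set to one (an assumption).
tau_L = 20e-3                         # membrane time constant, ~10-30 ms
for tau_S in (2e-3, 5e-3, 10e-3):     # fast (AMPA/GABA_A) to slower synapses
    print(f"tau_S = {1e3 * tau_S:.0f} ms: window width ratio = {0.1 * tau_L / tau_S:.2f}")
```

Only for fast synapses does the ratio approach one, consistent with the "at best narrow" range stated above.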
We next examined the effect of the synaptic time constant on the network response. Numerically computed responses in networks of neurons with finite synaptic time constants are shown in Fig. 6B,C. The network response is close to the prediction obtained with instantaneous synapses for small τ_S/τ, and deviations emerge for τ_S/τ ∼ 1. Hence, analogously to the single neuron case, the network properties discussed in the case of instantaneous synapses remain valid for low inputs. However, unlike the single neuron case, no suppression appears for larger τ_S/τ. This lack of suppression in the network response, analogous to the one we discussed in networks with instantaneous synapses and a ∼ K^-α, is a consequence of the fact that, to have stable dynamics when K is large, inhibition must dominate recurrent interactions [9]. In this regime, any change which would produce suppression of the single neuron response (e.g. an increase of ν_X or τ_S) lowers recurrent inhibition and increases the equilibrium value of the membrane potential µ (Fig. 6B,C,E). The balance between these two effects determines the network firing rate and, when τ_S/τ ≫ 1, generates a response which (see derivation in Appendix G), up to corrections of order 1/√(Kτ_S), is given by Eq. (21) (dashed line in Fig. 6B). Similarly to what happens in networks with instantaneous synapses and a ∼ K^-α, this finite response emerges because recurrent interactions set µ very close to threshold, at a distance θ − µ ∼ 1/√K that matches the size of the membrane potential fluctuations (Eq. (25), σ√(τ/τ_S) ∼ 1/√K). In addition, the network becomes much more sensitive to connection heterogeneity, with CV*_K ∼ 1/√K. However, here the dynamics of the single neuron membrane potential is correlated over a timescale τ_S (Fig. 6E), and firing is bursty, with periods of regular spiking randomly interspersed in time. Moreover, the properties discussed here are independent of the scaling of a with K, since they always emerge once τ_S/τ ≫ 1, a condition that is met for any scaling once ν_X ≫ 1/aKτ_S. The specific scaling relation, on the other hand, determines the input strength at which τ_S/τ ∼ 1 is reached.

FIG. 6. (A) Single neuron case. Colored bars below the first and fourth rows indicate inputs for which 0.1 < τ_S/τ < 1, i.e. the region where deviations of the neural response from the case of instantaneous synapses emerge. Firing rates (first row) match predictions obtained for instantaneous synapses for small τ_S/τ; significant deviations and response suppression emerge for larger τ_S/τ. The effective membrane time constant (τ, second row) decreases with input rate, is independent of τ_S, and reaches the value τ_S/τ ∼ 1 at different levels of external drive (dashed lines represent the different values of τ_S). The equilibrium value of the membrane potential (µ, third row) increases with input rate and is independent of τ_S (dotted line represents the spiking threshold). The fluctuations of the membrane potential (σ_V, fourth row) depend non-monotonically on input rate and peak at a value of ν_X for which τ is of the same order as τ_S. (B) Analogous to panel A but for the network case. Firing rates are no longer suppressed as τ_S/τ increases, but approach the response scaling predicted by Eq. (21).
In the previous sections, we have shown that networks of conductance-based neurons with instantaneous synapses present features similar to those observed in cortex if synaptic efficacy is of order a ∼ 1/log(K), while other scalings generate network properties that are at odds with experimental data (see Tab. I for a summary). In this section, we have found that, when the synaptic time constant is taken into account, these properties are preserved in the model for low inputs. As the input increases, the structure of the network response evolves gradually and, for large inputs (ν_X ≳ 1/aKτ_S), significant deviations from the case of instantaneous synapses emerge (see Tab. I for a summary). In particular, our analysis shows that, as the input to the network increases: the membrane potential approaches threshold while its fluctuations become smaller and temporally correlated; firing becomes more bursty; the network becomes more sensitive to heterogeneity in the in-degree and, if the heterogeneity is larger than that of random networks, significant fractions of neurons become quiescent or fire regularly. These features of the model provide a list of predictions which could be tested experimentally by measuring how the membrane potential dynamics of cells in cortex evolves with the intensity of inputs to the network.

IX. DISCUSSION
In this work, we analyzed networks of strongly coupled conductance-based neurons. The study of this regime is motivated by the experimental observation that in cortex K is large, with single neurons receiving inputs from hundreds or thousands of pre-synaptic cells. We showed that the classical balanced state idea [5,6], which was developed in the context of current-based models and features synaptic strength of order 1/√K [7,8], results in current fluctuations of very small amplitude, which can generate firing only if the mean membrane potential is extremely close to threshold. This seems problematic, since intracellular recordings in cortex show large membrane potential fluctuations (see e.g. [47-52]). To overcome this problem, we introduced a new scaling relation which, in the case of instantaneous synaptic currents, maintains firing by preserving the balance of input drift and diffusion at the single neuron level. With this scaling, the network response automatically shows multiple features that are observed in cortex in vivo: irregular firing, a wide distribution of rates, and a membrane potential with non-negligible distance from threshold and non-negligible fluctuations. When finite synaptic time constants are included in the model, we showed that these properties are preserved for low inputs, but are gradually modified as inputs increase: the mean membrane potential approaches threshold while its fluctuations decrease in size and develop non-negligible temporal correlations. These properties, summarized in Tab. I, provide a list of predictions that could be tested experimentally by analyzing the membrane potential dynamics as a function of input strength in cortical neurons.
When synaptic time constants are negligible with respect to the membrane time constant, our theory shows properties analogous to those of the classical balanced state model: a linear transfer function, a CV of order one, and a distribution of membrane potentials with finite width. However, these properties emerge from a different underlying dynamics than in the current-based model. In current-based models, the mean input current is at a distance of order one from threshold in units of input fluctuations. In conductance-based models, this distance increases with coupling strength, and firing is generated by large fluctuations at strong coupling. The different operating mechanism manifests itself in two ways: the strength of synapses needed to sustain firing, and the robustness to connection heterogeneity, as we discuss in the next paragraphs.
The scaling relation determines how strong synapses should be to allow firing at a given rate, for a given value of K. In current-based neurons, irregular firing is produced as long as synaptic strengths are of order 1/√K. In conductance-based neurons, stronger synapses are needed, with a scaling which approaches 1/log(K) for large K. We showed that both scaling relations are in agreement with data obtained from culture preparations [62], which are limited to relatively small networks, and argued that the differences might be important in vivo, where K should be larger.
In current-based models, the mean input current must be set at an appropriate level to produce irregular firing; this constraint is realized by the recurrent dynamics in networks with random connectivity and strong enough inhibition [7-9]. However, in networks with structural heterogeneity, i.e. with connection heterogeneity larger than 1/√K, the variability in mean input currents produces significant departures from the asynchronous irregular state, with large fractions of neurons becoming silent or firing regularly [57]. This problem is relevant in cortical networks [57], where significant heterogeneity of in-degrees has been reported [51,58-61], and different mechanisms have been proposed to solve it [57]. Here we showed that networks of conductance-based neurons also generate irregular activity without any need for fine tuning and, furthermore, can support irregular activity with substantial structural heterogeneity, up to order 1/log(K). Therefore, these networks are more robust to connection heterogeneity than the current-based model, and do not require additional mechanisms to sustain the asynchronous irregular state.
The strength of coupling in a network, both in the current-based model [71,72] and in the conductance-based model (e.g. Fig. 3), determines the structure of its response and hence the computations it can implement. Recent theoretical work, analyzing experimental data in the framework of current-based models, has suggested that cortex operates in a regime of moderate coupling [44,45], where response nonlinearities are prominent. In conductance-based models, the effective membrane time constant can be informative about the strength of coupling in a network, as it decreases with coupling strength. Results from in vivo recordings in cat parietal cortex [21] showed evidence that single neuron response is sped up by network interactions. In particular, the measurements are compatible with an inhibitory conductance approximately 3 times larger than the leak conductance and support the idea that cortex operates in a "high-conductance state" [22] (but see [23] and the discussion below). This limited increase in conductance supports the idea of moderate coupling in cortical networks, in agreement with findings of previous work [44,45].
When the synaptic time constant is much larger than the membrane time constant, we showed that, regardless of synaptic strength, the size of membrane potential fluctuations decreases, and firing in the network is preserved by a reduction of the distance from threshold of the mean membrane potential. Moreover, the robustness to heterogeneity in connection fluctuations decreases substantially (the maximum supported heterogeneity becomes of order 1/√K), and the membrane potential dynamics becomes correlated over a time scale set by the synaptic time constant. For very strong coupling, the regime of large synaptic time constant is reached at low input rates. For moderate coupling, which is consistent with experimental data on cortical networks [44,45], the network response at low rates is well approximated by that of networks with instantaneous synapses, and the regime of large synaptic time constant is reached gradually, as the input to the network increases (Fig. 6). This observation provides a list of predictions, testable experimentally, on how the properties of cortical networks should evolve with input strength (summary in Tab. I).
Experimental evidence suggests that the response to multiple inputs in cortex is nonlinear (for an overview, see [73]). Such nonlinearities, which are thought to be fundamental to performing complex computations, cannot be captured by the classical balanced state model, as it features a linear transfer function [7,8]. Alternative mechanisms have been proposed [71,73,74], but their biophysical foundations [71,73] or their ability to capture experimentally observed nonlinearities [74] are still not fully understood. We have recently shown [72] that, in networks of current-based spiking neurons, nonlinearities compatible with those used in [71,73] to explain the phenomenology of input summation in cortex emerge at moderate coupling. Here we have shown that, as in networks of current-based neurons [72], nonlinear responses appear in networks of conductance-based neurons at moderate coupling, both at response onset and close to single neuron saturation. In addition, we have found that synaptic time constants provide an additional source of nonlinearity, with nonlinear responses emerging as the network transitions between the two regimes described above. A full classification of the nonlinearities generated in these networks is outside the scope of this work, but could be performed by generalizing the approach developed in [72].
Recent whole-cell recordings have reported that an intrinsic voltage-gated conductance, whose strength decreases with membrane potential, contributes to the modulation of neuronal conductance of cells in the primary visual cortex of awake macaques and anesthetized mice [23]. During spontaneous activity, this intrinsic conductance is the dominant contribution to the cell conductance and drives its (unexpected) decrease with increased depolarization. During activity driven by sensory stimuli, on the other hand, modulations coming from synaptic interactions overcome the effect of the intrinsic conductance, and neuronal conductance increases with increased depolarization. Our analysis shows that voltage-dependent currents, such as those produced by voltage-gated channels [23] or during spike generation [43], affect quantitatively, but not qualitatively, the single neuron response and the scaling relation allowing firing. Therefore, the results described in this contribution appear to be a general property of networks of strongly coupled integrate-and-fire neurons with conductance-based synapses.
Understanding the dynamical regime of operation of the cortex is an important open question in neuroscience, as it constrains which computations can be performed by a network [71]. Most theories of neural networks have been derived using rate models or current-based spiking neurons. Our work provides the first theory of the dynamics of strongly coupled conductance-based neurons; thanks to its biological detail, it can easily be related to measurable quantities, and it suggests predictions that could be tested experimentally.
Appendix A: Calculations in the multiplicative noise case

In the main text, we analyze the distribution of membrane potentials, the firing rate, and the CV using the effective time constant approximation, which neglects the dependence of the noise amplitude on the membrane potential. This approximation is motivated by the fact that corrections to it are of the same order as the shot noise corrections to the diffusion approximation used to describe synaptic inputs [75]. In this section, we derive results without resorting to the effective time constant approximation (i.e. keeping the voltage dependence of the noise term), and show that the results derived in the main text remain valid, even though the calculations become more involved. The inclusion of shot noise corrections is outside the scope of this contribution.

Equations for arbitrary drift and diffusion terms
In this section, we compute the probability distribution of the membrane potential, the firing rate, and the CV of the ISI of a neuron whose membrane potential follows Eq. (A1); Eq. (4) of the main text is a special form of Eq. (A1), with drift and diffusion terms given by Eq. (A2). The Fokker-Planck equation associated with Eq. (A1), in the Stratonovich regularization scheme, is given by Eq. (A3), where P is the probability of finding a neuron with membrane potential V and J is the corresponding probability current. We are interested in the stationary behavior of the system, in which P does not depend on time and the current J is piecewise constant. In particular, for V between the activation threshold θ and the resting potential V_r, J is equal to the neuron firing rate ν, and the normalization condition fixes ν in terms of the refractory period τ_rp.
To derive the probability distribution of the membrane potential, we introduce an integrating factor in Eq. (A3) and, using the boundary condition P(θ) = 0, obtain general expressions for the distribution, the firing rate, and the CV (Eqs. (A4), (A5), and (A7)), together with two auxiliary variables with voltage dimensions. The terms −(V − µ)/τ and σ(V)ζ/√τ of Eq. (4) represent the input drift and noise to the membrane dynamics, respectively. The voltage dependence of these terms is sketched in Fig. 7.
In the large K limit, the different terms in Eq. (4) scale with a and K in a way that leaves the values of ω, µ, E_S, and E_D independent of K. It follows that the noise term σ√τ and the time constant τ in Eq. (4) become small in the strong coupling limit. This result is analogous to what we obtained in the main text with the effective time constant approximation, since this approximation does not change how these terms scale with a and K.
We now insert the drift and diffusion terms of the conductance-based LIF neuron into Eqs. (A4), (A5), and (A7), and obtain Eqs. (A12)-(A14); Eqs. (A12) and (A13) are analogous to those derived in [11]. To simplify the following analysis, we neglect the contribution of the term a²Kτ/2χ, which derives from the regularization scheme. This assumption is justified by the fact that, for large K, τ ∼ 1/aK, and the factor a²Kτ/2χ is of order a ≪ 1.
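For comparison, the current-based analogue of the rate formula Eq. (A13), i.e. the mean first-passage-time expression of [11], can be evaluated numerically as follows (parameter values illustrative):

```python
import numpy as np
from math import erf

def lif_rate(mu, sigma, tau=20e-3, theta=-50.0, V_r=-70.0, tau_rp=2e-3):
    """Mean first-passage-time (Ricciardi) firing rate of a current-based LIF
    neuron, the analogue of Eq. (A13) under the effective time constant
    approximation [11]. Voltages in mV, times in s; values illustrative."""
    u = np.linspace((V_r - mu) / sigma, (theta - mu) / sigma, 4001)
    f = np.exp(u ** 2) * (1.0 + np.array([erf(x) for x in u]))
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(u))  # trapezoid rule
    return 1.0 / (tau_rp + tau * np.sqrt(np.pi) * integral)

# Firing drops steeply as mu moves below threshold in units of sigma
print([round(lif_rate(m, 5.0), 2) for m in (-60.0, -55.0, -50.0)])
```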
Appendix B: Calculations in the strong coupling regime – Single neurons

In the main text, we derived a simplified expression for the single neuron response neglecting the dependence of the noise on the membrane potential. In this section we generalize this result to the case in which the full noise expression is considered. We compute simplified expressions of the single neuron transfer function and CV, both in the subthreshold regime µ < θ and in the suprathreshold regime µ > θ. These expressions are validated numerically in Fig. 8 and used in the last part of this section to define a scaling relation between a and K which preserves single neuron firing in the strong coupling limit.

FIG. 8. Transfer function and CV obtained from Eqs. (A13) and (A14) for different values of a and K (colored dots). For the two regimes µ < θ (first row) and µ > θ (second row), the transfer function saturates as K increases. Note that the same change in a has a more drastic effect if µ < θ; this is due to the exponential dependence that appears in Eq. (B6). The approximated expressions (continuous lines) capture the properties of the transfer function (A, Eq. (B6) and E, Eq. (B4)) and CV (C, Eq. (B17) and G, Eq. (B9)). For small inputs (F), Eq. (B4) fails to describe the transfer function for some values of K because the corresponding µ is below threshold. Simulation parameters: g = 12; γ = 1/4; η = 1.5 (top) or 0.6 (bottom).

Single neuron transfer function at strong coupling
The starting point of our analysis is the observation that the integrand in Eq. (A13) depends exponentially on 1/a ≫ 1. This suggests performing the integration with a perturbative expansion of the exponent. We show below that, since the exponent has a stationary point at x = v = α (see Fig. 9), the integration gives two qualitatively different results depending on whether α is larger or smaller than the upper bound of the integral, v_max. Moreover, since the condition α ≶ v_max corresponds to θ ≶ µ, the two behaviors correspond to the supra- and sub-threshold regimes, respectively.

Suprathreshold regime v_max < α (θ < µ). The exponent in Eq. (A13) is negative for every value of x, except for x = v where it is zero. The integral in x can be written as in Eq. (B1). With the change of variable z = (x − v)/a we obtain Eq. (B2) and, neglecting all terms of order a, Eq. (B3). Performing the integration in v, we obtain Eq. (B4), which is the transfer function of a deterministic conductance-based neuron with the addition of the refractory period. This is not surprising, since the noise term becomes negligible compared to the mean inputs in the small a limit.
In Fig. 8E we show that Eq. (B4) gives a good description of the transfer function predicted by the mean field theory in the suprathreshold regime.

Subthreshold regime v_max > α (θ > µ). First we consider α < v_min (µ < V_r). For every value of v, the integral in x in Eq. (A13) has a maximum in the integration interval; hence it can be performed through the saddle-point method. In the resulting expression, the exponent of the integrand has a minimum at v = α and is maximal at v = v_max; we expand the exponent around v = v_max and, keeping terms up to first order, obtain Eq. (B6). In the regime v_min < α < v_max, the integral in v of Eq. (A13) can be divided into three parts; the third integral is analogous to the case α < v_min, hence it has an exponential dependence on the parameters and dominates the other terms. In Fig. 8A we show that Eq. (B6) gives a good description of the transfer function predicted by the mean field theory for µ < θ.
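Assuming Eq. (B4) takes the textbook form of the deterministic LIF rate with refractory period (an assumption on our part, since the equation itself is not reproduced here), written with the effective parameters τ and µ of Eq. (5) carrying the dependence on a, K and the inputs, it can be evaluated as:

```python
import numpy as np

def det_rate(mu, tau, theta=-50.0, V_r=-70.0, tau_rp=2e-3):
    """Deterministic LIF rate with refractory period: the textbook formula we
    assume Eq. (B4) reduces to, with the effective tau and mu of Eq. (5)
    carrying all the dependence on a, K and the inputs (values illustrative)."""
    if mu <= theta:
        return 0.0                 # no deterministic firing below threshold
    t_rise = tau * np.log((mu - V_r) / (mu - theta))  # reset-to-threshold time
    return 1.0 / (tau_rp + t_rise)

print(det_rate(-45.0, 10e-3), det_rate(-55.0, 10e-3))
```

As τ shrinks in the strong coupling limit, the rate saturates at 1/τ_rp, consistent with the burst frequency argument above.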

Single neuron CV of ISI at strong coupling
In this section we provide details of the derivation of the approximate expressions for the CV of the response. Starting from the mean field result of Eq. (A14), we compute the integrals using the approach discussed above.
Suprathreshold regime v_max < α (θ < µ). In the small a limit, the inner integral in Eq. (A14) simplifies, and we obtain Eq. (B9); hence the rescaling needed to preserve the deterministic component, a ∼ 1/K, produces CV² ∼ a ≪ 1. We validated this result numerically in Figs. 8H and 10F.
Subthreshold regime v_max > α (θ > µ). The integral defining the CV, Eq. (A14), can be split into two parts. Evaluating the first integral, changing variables in the second, and then integrating in z and in v, we obtain, using Eq. (B6), Eq. (B17), which corresponds to the CV of the ISIs of a Poisson process with dead time, with rate ν and refractory period τ_rp. We validated this result numerically in Figs. 8D and 10C.
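The limiting statistics can be checked directly: for a Poisson process with dead time, CV = 1 − ν τ_rp, as a short simulation confirms (parameter values illustrative):

```python
import numpy as np

def cv_dead_time(rate, tau_rp, n=200_000, seed=0):
    """CV of ISIs of a Poisson process with dead time: each interval is the
    refractory period plus an exponential waiting time; `rate` is the
    resulting mean firing rate (values illustrative)."""
    lam = 1.0 / (1.0 / rate - tau_rp)      # rate of the exponential part
    isi = tau_rp + np.random.default_rng(seed).exponential(1.0 / lam, n)
    return isi.std() / isi.mean()

nu, tau_rp = 50.0, 2e-3
print(cv_dead_time(nu, tau_rp))            # close to 1 - nu*tau_rp = 0.9
```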

Scaling relations preserving firing in the strong coupling limit
In this section we use the simplified expressions derived above to define scaling relations of a with K which preserve the neural response in the strong coupling limit. Importantly, the scaling defined here depends on the operating regime of the neuron, i.e. on the asymptotic value of µ.
In the limit of large K, terms in Eq. (A8) can be written as while µ, E D , E S , v max , α and the function F(x) are independent of K, a and ν E .We have shown in the previous section that the single neuron transfer function is given by For µ > θ, the parameters a and K in Eq. (B20) appear only in the combination aK.It follows that a rescaling leaves invariant the neural response for large K.For µ < θ, Eq. (B20), and hence the transfer function, is invariant under the rescaling aK 1/τ L ν X is automatically implemented.To solve the self consistency condition, we express the firing rate ν with a Taylor expansion Note that in Eq. (D7) we assumed ρ 0 = 0, we will come back to this point at the end of the section.Under this assumption y := ν/ν X = k=∞ k=1 ρ k x k−1 and the function Q depends only on powers of the dimensionless variable x.Keeping only terms up to first order in x, Eq. (D4) becomes from which we find The solution of Eq. (D9) provides the linear component of the network response, i.e. its gain; we will discuss this function in more detail at the end of this section.From Eq. (D9) we find that the network gain ρ 1 is preserved in the strong coupling limit if the factor is constant.Eq. (D10) uniquely defines a scaling between a and K (see Fig. 11C for an example of the scaling function).We test the validity of the scaling in Fig. 11 as follows: given a set of parameters a, K and ρ 1 , we compute numerically the transfer function from Eq. (A13), then we increased K, determined the corresponding change in a using Eq.(D10) and compute again the transfer function; results of this procedure are shown in Fig. 
11A. The numerical analysis shows that, as K increases, our scaling relation prevents saturation and the network response remains finite. As in the case of the diffusion approximation, the shape of the transfer function is not preserved by the scaling and an increasingly linear response is observed. We can understand this suppression of nonlinearities by looking at the second order terms in the expansion of Eq. (D4); we find an expression for ρ2 which, keeping the dominant contribution in 1/a at the denominator, reduces to

ρ2 ∼ −a ρ1 [ dF(v_max(y), y)/dy |_{y=ρ1} + dF(α(y), y)/dy |_{y=ρ1} ] .    (D12)
Hence ρ2 goes to zero as a decreases, producing a linear transfer function. The nonlinearities at low rate in Fig. 11A (e.g., red and yellow lines) show that our assumption ρ0 = 0 is not valid in general. However, it turns out that the scaling relation defined above also suppresses these nonlinearities in the limit of strong coupling (e.g., blue and cyan lines).
We now characterize the dependency of the transfer function gain, i.e. its slope, on network parameters. For fixed network parameters, the network gain ρ1 is defined as the solution of Eq. (D9); solutions as a function of a and g are shown in Fig. 11E. At fixed values of a, the gain initially decreases as g increases and, for g large enough, the opposite trend appears. This behavior is due to two different effects produced by the increase of g: on one hand, it increases the strength of recurrent inhibition; on the other hand, it decreases the equilibrium membrane potential µ and brings it closer to the inhibitory reversal potential Ei, which in turn weakens inhibition (see Fig. 11F). Fig. 11E shows that the gain is finite only for a finite range of the parameter g; divergences appear because recurrent inhibition is not sufficiently strong to balance excitation. At small g, the imbalance is produced by the weak efficacy of inhibitory synapses; at large g, inhibition is suppressed by the approach of the membrane potential to the inhibitory reversal potential. Increasing the value of a produces an upward shift in the curve and, at the same time, decreases the range of values in which the gain is finite. The decrease in gain at low values of g is also observed in networks of current-based neurons [10], where the gain is found to be 1/(gγ − 1). Finally, we note that the difference between the conductance-based and current-based models decreases with a.
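The current-based gain 1/(gγ − 1) quoted above can be illustrated with a minimal, self-contained sketch (a toy threshold-linear rate unit with balanced-state input scaling J ∼ 1/√K, not the equations of this paper): the self-consistent gain approaches 1/(gγ − 1) as K grows.

```python
import numpy as np

def balanced_gain(K, g, gamma, J0=1.0):
    """Self-consistent gain nu/nu_X of a threshold-linear rate unit receiving
    balanced-state input mu = sqrt(K)*J0*(nu_X - (g*gamma - 1)*nu).
    Solving nu = [mu]_+ for mu > 0 gives the closed form below."""
    sK = np.sqrt(K)
    return sK * J0 / (1.0 + sK * J0 * (g * gamma - 1.0))

# The gain approaches 1/(g*gamma - 1) as K grows (here g*gamma - 1 = 1).
for K in (1e2, 1e4, 1e6):
    print(K, balanced_gain(K, g=4.0, gamma=0.5))
```

For gγ = 2 the limiting gain is 1; the approach is from below, with corrections of order 1/√K.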
To conclude this analysis, we give an approximate expression for the probability distribution of the membrane potential of Eq. (A12) which, in the strong coupling limit, reduces to an expression in which Vmax is the value of the membrane potential V that maximizes the integrand of Eq. (A12), while the function u() has been defined in Eq. (A15). Examples of the probability distribution and the corresponding approximate expressions are given in Fig. 11D.

Model B, multiplicative noise
In this section we generalize the results obtained so far to the case of networks with excitatory and inhibitory neurons with different biophysical properties.

a. Model definition
Here we take into account the diversity of the two types of neurons, with different parameters for excitatory and inhibitory neurons. With this parametrization, Eq. (1) becomes a two-population system. The expressions given refer to excitatory neurons; analogous expressions are valid for inhibitory neurons. The firing rates are given by solving a system of two equations, with (D21) for the excitatory population and an analogous expression for the inhibitory population. The probability distribution of the membrane potential and the CV are straightforward generalizations of Eqs. (A12) and (A14).
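A coupled two-population self-consistency of this kind can be solved by damped fixed-point iteration. The sketch below uses a hypothetical threshold-linear rate function and illustrative effective couplings (not the actual Eqs. (D20)-(D21)):

```python
import numpy as np

def two_population_rates(nu_x, w, lr=0.05, n_iter=20000):
    """Damped fixed-point solver for a coupled E/I self-consistency
       nu_E = phi(w_EX*nu_X + w_EE*nu_E - w_EI*nu_I)
       nu_I = phi(w_IX*nu_X + w_IE*nu_E - w_II*nu_I)
    with threshold-linear phi. The effective couplings w are hypothetical."""
    phi = lambda u: max(u, 0.0)
    nu_e = nu_i = 0.0
    for _ in range(n_iter):
        ne = phi(w['EX'] * nu_x + w['EE'] * nu_e - w['EI'] * nu_i)
        ni = phi(w['IX'] * nu_x + w['IE'] * nu_e - w['II'] * nu_i)
        nu_e += lr * (ne - nu_e)   # damping keeps the iteration stable
        nu_i += lr * (ni - nu_i)
    return nu_e, nu_i

w = dict(EX=1.0, EE=0.5, EI=1.0, IX=1.0, IE=1.0, II=0.5)
nu_e, nu_i = two_population_rates(1.0, w)   # exact solution here: (2/7, 6/7)
```

For these couplings the fixed point is interior (both rates positive), so the threshold never activates and the solution can be checked against the linear system by hand.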

b. Scaling analysis
We parametrize inputs to the two populations as νEX and νIX = ηνEX. Using an analysis analogous to the one described above, we obtain a simplified expression for the self-consistency Eq. (D20). Connection heterogeneities are expected to produce a distribution of rates in the population, characterized by mean and variance ν and ∆ν². As a result, the rates of incoming excitatory and inhibitory spikes differ from cell to cell and become (F1), where rE,I are the average presynaptic rates and z_E^i, z_I^i are realizations of a quenched normal noise with zero mean and unit variance, fixed in a given realization of the network connectivity. Starting from Eq. (F1), the rate νi of the cell is derived as in the case without heterogeneities; the main difference is that it is now a function of the particular realizations of z_E^i and z_I^i. The quantities ν and ∆ν² are obtained from population averages through the self-consistency relations ν = ⟨ν(zE, zI)⟩, ∆ν² = ⟨ν(zE, zI)²⟩ − ν², (F2) where ⟨·⟩ represents the Gaussian average over the variables zE and zI. Once ν and ∆ν² are known, the probability distribution of firing rates in the population is given by Eq. (F3). As shown in the main text (Fig. 4A), Eq. (F3) captures quantitatively the heterogeneity in rates observed in numerical simulations.
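The Gaussian averages in Eq. (F2) can be evaluated numerically, e.g. by Gauss-Hermite quadrature. The sketch below solves a toy version of the self-consistency, with a hypothetical threshold-linear single-cell rate and illustrative couplings J, g and quenched-noise amplitude c (stand-ins for the paper's expressions):

```python
import numpy as np

# Gauss-Hermite nodes/weights, rescaled to average over a standard normal.
x, w = np.polynomial.hermite.hermgauss(40)
z, w = np.sqrt(2.0) * x, w / np.sqrt(np.pi)
ZE, ZI = np.meshgrid(z, z)
W = np.outer(w, w)

def solve_heterogeneous_rates(nu_x, J=0.5, g=3.0, c=0.2, lr=0.3, n_iter=200):
    """Toy Eq. (F2): the mean rate is the Gaussian average of a
    threshold-linear single-cell rate whose recurrent inputs carry
    quenched fluctuations c*z_E, c*z_I. All couplings are illustrative."""
    nu_bar = nu_x
    for _ in range(n_iter):
        rates = np.maximum(
            nu_x + J * (1 + c * ZE) * nu_bar - g * J * (1 + c * ZI) * nu_bar, 0.0)
        nu_bar += lr * ((W * rates).sum() - nu_bar)   # damped fixed point
    rates = np.maximum(
        nu_x + J * (1 + c * ZE) * nu_bar - g * J * (1 + c * ZI) * nu_bar, 0.0)
    dnu = np.sqrt((W * rates ** 2).sum() - nu_bar ** 2)
    return nu_bar, dnu

nu_bar, dnu = solve_heterogeneous_rates(1.0)
```

Ignoring the (rarely active) rectification, the toy model is linear, so one can check the solver against nu_bar = nu_x/(1 − J + gJ) = 0.5 and ∆ν/ν = cJ√(1 + g²) ≈ 0.316.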
In the large K (small a) limit, the mathematical expressions derived above simplify significantly. First, as long as the parameter µi of the i-th neuron is below threshold, its rate is given by an expression analogous to Eq. (12) which, for small ∆E,I, can be written in terms of a variable zi drawn from a Gaussian distribution with zero mean and unit variance. Moreover, if responses are far from saturation, the single-cell rate can be written in the form of Eq. (F5), where ν0 is the rate in the absence of quenched noise (i.e. Eq. (20) of the main text). It is easy to show that, in Eq. (F5), Ω² is independent of a, K and νX in the large K (small a) limit. Finally, as noted in [56], if the single neuron rate can be expressed as an exponential function of a quenched variable z, Eq. (F3) can be integrated exactly and the distribution of rates is lognormal. Therefore, when the derivation of Eq. (F5) is valid, rates in the network should follow a lognormal distribution. For Γ² ≪ 1, we find ∆ν/ν ≈ Γ/2, which scales linearly with CVK, consistent with the numerical results shown in Fig. 4C.
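The lognormal property used here is easy to verify numerically. The sketch below assumes a generic exponential dependence ν = ν0 exp(Ω z) with z standard normal (ν0 and Ω are placeholders for the parameters of Eq. (F5), not values from the paper) and checks the sampled mean and relative dispersion against the lognormal expressions:

```python
import numpy as np

# If nu = nu0 * exp(Omega * z) with z ~ N(0, 1), then nu is lognormal with
# mean nu0*exp(Omega^2/2) and Delta_nu/nu = sqrt(exp(Omega^2) - 1) ~ Omega
# for Omega << 1. nu0 and Omega below are illustrative placeholders.
rng = np.random.default_rng(0)
nu0, omega = 5.0, 0.2
nu = nu0 * np.exp(omega * rng.standard_normal(2_000_000))

mean_pred = nu0 * np.exp(omega ** 2 / 2)
ratio_pred = np.sqrt(np.exp(omega ** 2) - 1.0)
print(nu.mean(), mean_pred)              # sample mean vs lognormal mean
print(nu.std() / nu.mean(), ratio_pred)  # relative dispersion, close to Omega
```

The near-equality of ∆ν/ν and the lognormal width for small Ω is what produces the linear scaling of ∆ν/ν with the amplitude of the quenched noise.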

Appendix G: Finite synaptic time constants
In this section, we discuss the effect of the synaptic time constant on single neuron and network responses. First, we derive an approximate expression for the single neuron membrane time constant; we then compute approximate expressions for the response which are valid for different values of the ratio τS/τ; at the end of the section, we discuss the response of networks of neurons with large τS/τ.
The single neuron membrane potential dynamics is given by Eq. (G1). Using the effective time constant approximation [39], we have Eq. (G2), where gAF represents the fluctuating component of the conductance gA, i.e.
gA(t) = gA0 + gAF(t) , (G3) and so on for the other conductances. Since we are interested in the stationary response, we introduce the corresponding stationary term. Since our goal is to understand the effect of an additional time scale, we can simplify the analysis by assuming a unique synaptic time scale τE = τI = τS. To have the correct limit for τS → 0, we impose aA = aA0 τL/τS, where aA0 is the value of the synaptic efficacy in the limit of an instantaneous synaptic time scale. With these assumptions, the system of equations becomes Eq. (G8). One can check that, in the limit τS → 0, the equations become analogous to those of the main text with η = z/√τS. In what follows, we provide approximate expressions for the single neuron transfer function in three regimes: small time constant [67], large time constant [69], and intermediate values [70]. We also note that a numerical procedure to compute the firing rate exactly for any value of the synaptic time constant was introduced recently, using Fredholm theory [79].

Single neuron transfer function for different values of τS/τ
For τS/τ ≪ 1, as shown in [67], the firing rate can be computed with a perturbative expansion. For τS/τ ≈ 1, as shown in [70] using the Rice formula [80], the single neuron firing rate is well approximated by the rate of upward threshold crossings of the membrane potential dynamics without reset. Starting from Eq. (G8) and using the results of [70], we obtain Eq. (G11). In particular, solutions of the implicit equation generated by Eq. (G11) give the network response in the region of inputs for which τS/τ ≳ 1. In this region of inputs, assuming coupling to be strong, the implicit equation simplifies further.
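The upward-crossing approximation can be checked directly in a simplified setting: simulate a no-reset membrane potential driven by synaptically filtered Gaussian noise and compare the empirical rate of upward threshold crossings with the Rice prediction r = (2π)⁻¹ (ττS)⁻¹/² exp(−(θ − µ)²/(2σV²)) for this doubly filtered process. All parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

tau, tau_s = 20e-3, 5e-3       # membrane and synaptic time constants (s)
dt, T = 0.1e-3, 100.0          # Euler step and total duration (s)
n = int(T / dt)
xi = np.random.default_rng(1).standard_normal(n)

# No-reset membrane potential: white noise -> OU synaptic variable s
# (unit variance) -> passive membrane filtering with time constant tau.
v_trace = np.empty(n)
s = v = 0.0
amp = np.sqrt(2.0 * dt / tau_s)
for i in range(n):
    s += -s * dt / tau_s + amp * xi[i]
    v += (-v + s) * dt / tau
    v_trace[i] = v

sigma_v = v_trace.std()                  # theory: sqrt(tau_s/(tau + tau_s))
theta = 2.0 * sigma_v                    # threshold two SDs above the mean
r_emp = np.sum((v_trace[:-1] < theta) & (v_trace[1:] >= theta)) / T
r_rice = np.exp(-theta ** 2 / (2 * sigma_v ** 2)) / (2 * np.pi * np.sqrt(tau * tau_s))
print(r_emp, r_rice)                     # about 2 Hz for these parameters
```

The factor (ττS)⁻¹/² follows from the spectral moments of the doubly filtered process (λ2/λ0 = 1/(ττS)); the agreement degrades as the threshold approaches the mean, where the no-reset approximation breaks down.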

FIG. 1 .
FIG. 1. Effects of coupling strength on the firing behavior of current-based and conductance-based neurons. (A) Membrane potential of a single current-based neuron for (i) J = 0.3 mV, K = 10^3, g = γ = 1, η such that 1 − gγη = 0.075; (ii) with K = 5·10^4; (iii) with K = 5·10^4 and scaled synaptic efficacy (J ∼ 1/√K, which gives J = 0.04 mV) and input difference 1 − gγη = 0.01. (B,C) Effect of coupling strength on drift force and input noise in a current-based neuron. (D) Membrane potential of a single conductance-based neuron for fixed input difference (g1 − γη = −2.8) and (i) a = 0.01, K = 10^3; (ii) K = 5·10^4; (iii) K = 5·10^4 and scaled synaptic efficacy (a ∼ 1/√K, a = 0.001). (E,F) Effect of coupling strength on drift force and input noise in a conductance-based neuron. In panels A and D, dashed lines represent threshold and reset (black) and the equilibrium value of the membrane potential (green). In panels Aii and Dii, light purple traces represent dynamics in the absence of a spiking mechanism. Input fluctuations in C and F represent input noise per unit time, i.e. the integral of σ√τ ζ of Eq. (4) computed over an interval ∆t and normalized by ∆t.

FIG. 2 .
FIG. 2. The scaling of Eq. (14) preserves the response of a single conductance-based neuron for large K. (A) Scaling relation preserving firing in conductance-based neurons (Eq. (14), solid line); constant scaling (a ∼ K^0, dotted line) and the scaling of the balanced state model (a ∼ 1/√K, dashed line) are shown for comparison. Colored dots indicate values of a, K used in the subsequent panels. (B-H) Response of conductance-based neurons for different values of coupling strength and synaptic efficacy (colored lines). The scaling of Eq. (14) preserves how the firing rate (B,C), the equilibrium value of the membrane potential (D), and the CV of the inter-spike interval distribution (E) depend on the external input rate νX. This invariance is achieved by increasing the drift force (F) and input fluctuations (G) in a way that weakly decreases (logarithmically in K) membrane potential fluctuations (H). Different scalings either saturate or suppress the response (B, black lines correspond to K = 10^5 and a values as in panel A). Parameters: a = 0.01 for K = 10^3, g = 12, η = 1.8, γ = 1/4.
FIG. 3. Response of networks of conductance-based neurons for large K. (A) Scaling relation defined by the self-consistency condition given by Eqs. (14) and (19) (black line); values of parameters used in panels B-D (colored dots). Constant scaling (a ∼ K^0, dotted line) and the scaling of the balanced state model (a ∼ 1/√K, dashed line) are shown for comparison. (B,C) Firing rate and CV of ISI as a function of external input, obtained from Eqs. (10) and (15) (colored lines), with the strong coupling limit solution of Eqs. (20) and (16) (black line). (D) Probability distribution of the membrane potential obtained from Eq. (17). In panels B-D, dotted and dashed lines represent quantities obtained with the scalings J ∼ K^0 and J ∼ 1/√K, respectively, for values of K and J indicated in panel A (black dots). Parameters: γ = 1/4 and g = 30.

FIG. 4 .
FIG. 4. Effects of heterogeneous connectivity on the network response. (A-B) Distribution of ν and CV of ISI computed from network simulations (dots) and from the mean field analysis (A, black lines) for different values of CVK (values are indicated by dots in panel C). (C) ∆ν/ν (green, left axis) and fraction of quiescent cells (brown, right axis) computed from network simulations as a function of CVK. For CVK ≲ CV*K, ∆ν/ν increases linearly, as predicted by the mean field analysis; deviations from linear scaling emerge for CVK ≳ CV*K, when a significant fraction of cells becomes quiescent. The deviation from linear scaling at low CVK is due to sampling error in estimating the firing rate from simulations. (D) CV*K as a function of K computed from the mean field theory (green, left axis), with a rescaled according to Eq. (14). For large K, CV*K decays proportionally to a (brown, right axis). When K is too low, the network is silent and CV*K = 0. In panels A-C, K = 10^3, g = 20, a = 1.6·10^−3, NE = NX = NI/γ = 10K, νX = 0.05/τrp. In network simulations, the dynamics was run for 20 seconds using a time step of 50 µs. Parameters in panel D are as in Fig. 3.

FIG. 5 .
FIG. 5. Comparison of predictions given by the current-based and the conductance-based models in describing experimental data from cultures. (A) Strengths of excitatory (EPSP) and inhibitory (IPSP) post-synaptic potentials recorded in [62] are compared with best fits using scaling relationships derived from networks with current-based synapses (dashed line) and conductance-based synapses (continuous line). Root mean square (RMS) errors and best fit parameters are: RMS = 2.2 mV, g = 1.1, J0 = 20 mV for the current-based model; and RMS = 2.4 mV, g = 3.4, v = 0.08 for the conductance-based model. (B) Value of v/√a predicted by the conductance-based model as a function of K. (C) Ratio between excitatory and leak conductance as a function of K, for νE = νI = νX = 1 spk/s (black) and νE = νI = νX = 5 spk/s (gray), obtained with a rescaled according to Eq. (14) (continuous line) and as 1/√K (dashed line). (D) Ratio between τ and τL as a function of K; parameters and scaling as in panel C.

FIG. 6 .
FIG. 6. Effects of synaptic time constant on single neuron and network response. (A) Single neuron response as a function of input rate νX, computed numerically from Eqs. (1), (23) for different synaptic time constants τS (indicated in the bottom right of the figure). In all panels, black lines correspond to the prediction obtained with instantaneous synapses. Colored bars below the first and the fourth row indicate inputs that give 0.1 < τS/τ < 1, i.e. the region where deviations of the neural response from the case of instantaneous synapses emerge. Firing rates (first row) match predictions obtained for instantaneous synapses for small τS/τ; significant deviations and response suppression emerge for larger τS/τ. The effective membrane time constant (τ, second row) decreases with input rate, is independent of τS, and reaches the value τS/τ ∼ 1 for different levels of external drive (dashed lines represent the different values of τS). The equilibrium value of the membrane potential (µ, third row) increases with input rate and is independent of τS (dotted line represents the spiking threshold). The fluctuation of the membrane potential (σV, fourth row) has a non-monotonic relationship with input rate, and peaks at a value of νX for which τ is of the same order as τS. (B) Analogous to panel A but in the network case. Firing rates are no longer suppressed as τS/τ increases, but approach the response scaling predicted by Eq. (21) (dashed line). As discussed in the text, high firing rates are obtained by increasing the value of µ towards threshold. (C) Zoom of panel B in the neurobiologically relevant region of low rates. (D,E) Examples of membrane potential dynamics for a single neuron (D) and a network (E) in the absence of spiking mechanisms (νX = 5 spk/s in D and 20 spk/s in E). High frequency fluctuations are suppressed at large τS. In the network case, increasing τS reduces recurrent inhibition and produces membrane potential trajectories which are increasingly closer to the firing threshold. Single neuron parameters: a = 0.01, K = 10^3, g = 8, η = 1.5, γ = 1/4. Network parameters: a = 0.0016, K = 10^3, g = 20, γ = 1/4. Simulations were performed with the simulator BRIAN2 [66], with neurons receiving inputs from NEX,IX = 10K independent Poisson units firing at rates νX, ηνX in the single neuron case, or νX in the network case. Network simulations used NE,I = 10K excitatory and inhibitory neurons.

FIG. 7 .
FIG. 7. Drift and diffusion terms of Eq. (4) as a function of voltage. (A) Input drift as a function of membrane potential V produced with both inhibitory and excitatory inputs (black line), excitatory inputs only (red dotted line), or inhibitory inputs only (blue dotted line). The drift term decreases monotonically with V and is zero at V = µ, which is a stable fixed point of the deterministic dynamics. (B) The noise variance is quadratic in V. Its minimum, at V = ES, is equal to ED a^2 K/χ. Note that the minimum amplitudes of drift and variance are obtained at different values of V.

FIG. 8 .
FIG. 8. Response of a single conductance-based neuron to noisy inputs. Estimates of firing rate (A, B, E, F), µ (C, G) and CV (D, H) obtained by numerical integration of Eqs. (A13), (13) and (A14) for different values of a and K (colored dots). For the two regimes µ < θ (first row) and µ > θ (second row), the transfer function saturates as K increases. Note that the same change in a has a more drastic effect if µ < θ; this is due to the exponential dependence that appears in Eq. (B6). The approximate expressions (continuous lines) capture the properties of the transfer function (A, Eq. (B6) and E, Eq. (B4)) and CV (C, Eq. (B17) and G, Eq. (B9)). For small inputs (F), Eq. (B4) fails to describe the transfer function for some values of K because the corresponding µ is below threshold. Simulation parameters are: g = 12; γ = 1/4; η = 1.5 (top) or 0.6 (bottom).

FIG. 9 .
FIG. 9. Graphical representation of the exponent in Eq. (A13). The function F(v) − F(x) is stationary at x = v = α; this point is a maximum for x and a minimum for v. Parameters are as in Fig. 7. In this figure, α = 1.2 (black diamond).

FIG. 12 .
FIG. 12. Limit of large K for networks, model B. Firing rate and CV of excitatory and inhibitory neurons in a network predicted by the mean field model for different values of inputs and K; the expected asymptotic behavior is shown in black. On the left (C, F), we show the corresponding scaling relations, with dots associated to the connectivity parameters. Simulation parameters: the two populations have ge = 20.0 and gi = 19.0; for both populations, a = 0.0005 for K = 10^5; other parameters as in Fig. 8.

FIG. 13.
FIG. 13. Comparison of mean field theory and numerical simulations. Network transfer function (first row), CV of ISI distribution (second row) and probability distribution of the membrane potential at νE = 0.05/τrp (third row). In every panel we show the mean field prediction (green), results from numerical simulations (red) and the value expected in the strong coupling limit (black). Different columns correspond to different values of K and a, scaled according to Eq. (D10). The agreement between network simulations (red) and mean field predictions (green) improves as a decreases, as expected, since we used the diffusion approximation to derive the results. Simulation parameters are: g = 20, NE = NI = NEX = NIX = 100K.

FIG. 15 .
FIG. 15. Approximation of the network response for large τS/τ. Plots analogous to Fig. 6B,C of the main text. Dots represent the network response as a function of input rate νX, computed numerically from Eqs. (1), (23) for τS = 1 ms (green) and τS = 10 ms (red). Continuous lines correspond to the prediction obtained with instantaneous synapses (black) and for large synaptic time constant (Eqs. (G12), (5), (G13); colored lines). As explained in the text, the latter predictions are valid only for large τS/τ; because of this, we plotted only values obtained for τS/τ > 1. For τS/τ ≪ 1, the network response is well described by Eq. (21) of the main text.

TABLE I .
Overview of the properties of networks of current-based and conductance-based neurons. Synaptic strength and time constant strongly affect response properties in networks of conductance-based neurons. Properties similar to what is observed in cortex emerge in these networks if a ∼ 1/log K and input rates are lower than a critical value, which is fixed by the synaptic time constant and the coupling strength. The model predicts that these properties should gradually change as the input to the network increases and, for large inputs, should coincide with those indicated in the last line of the table. In the table, the different quantities related to the membrane potential represent: the mean distance from threshold (θ − µ); the size of temporal fluctuations (σV); the membrane potential correlation time constant (τV).