Exact Analysis of the Subthreshold Variability for Conductance-Based Neuronal Models with Synchronous Synaptic Inputs

The spiking activity of neocortical neurons exhibits a striking level of variability, even when these networks are driven by identical stimuli. The approximately Poisson firing of neurons has led to the hypothesis that these neural networks operate in the asynchronous state. In the asynchronous state, neurons fire independently from one another, so that the probability that a neuron experiences synchronous synaptic inputs is exceedingly low. While models of asynchronous neurons account for the observed spiking variability, it is not clear whether the asynchronous state can also account for the level of subthreshold membrane potential variability. We propose a new analytical framework to rigorously quantify the subthreshold variability of a single conductance-based neuron in response to synaptic inputs with prescribed degrees of synchrony. Technically, we leverage the theory of exchangeability to model input synchrony via jump-process-based synaptic drives; we then perform a moment analysis of the stationary response of a neuronal model with all-or-none conductances that neglects the post-spiking reset. As a result, we produce exact, interpretable closed forms for the first two stationary moments of the membrane voltage, with explicit dependence on the input synaptic numbers, strengths, and synchrony. For biophysically relevant parameters, we find that the asynchronous regime yields realistic subthreshold variability (voltage variance ≃ 4–9 mV²) only when driven by a restricted number of large synapses, compatible with strong thalamic drive. By contrast, we find that achieving realistic subthreshold variability with dense cortico-cortical inputs requires including weak but nonzero input synchrony, consistent with measured pairwise spiking correlations. We also show that, without synchrony, the neural variability averages out to zero for all scaling limits with vanishing synaptic weights, independent of any balanced state hypothesis.
This result challenges the theoretical basis for mean-field theories of the asynchronous state.


I. INTRODUCTION
A common and striking feature of cortical activity is the high degree of neuronal spiking variability [1]. This high variability is notably present in sensory cortex and motor cortex, as well as in regions with intermediate representations [2][3][4][5]. The prevalence of this variability has made it a major constraint for modeling cortical networks. Cortical networks may operate in distinct regimes depending on species, cortical area, and brain states. In asleep or anesthetized states, neurons tend to fire synchronously, with strong correlations between the firing of distinct neurons [6][7][8]. In the awake state, although synchrony has been reported as well, stimulus drive, arousal, or attention tend to promote an irregular firing regime whereby neurons spike in a seemingly random manner, with decreased or little correlation [1,8,9]. This has led to the hypothesis that cortex primarily operates asynchronously [10][11][12]. In the asynchronous state, neurons fire independently from one another, so that the probability that a neuron experiences synchronous synaptic inputs is exceedingly low. That said, the asynchronous state hypothesis appears at odds with the high degree of observed spiking variability in cortex. Cortical neurons are thought to receive a large number of synaptic inputs (≃ 10⁴) [13]. Although the impact of these inputs may vary across synapses, the law of large numbers implies that variability should average out when inputs are integrated at the soma. In principle, this would lead to clock-like spiking responses, contrary to experimental observations [14].
A number of mechanisms have been proposed to explain how high spiking variability emerges in cortical networks [15]. The prevailing approach posits that excitatory and inhibitory inputs converge on cortical neurons in a balanced manner. In balanced models, the overall excitatory and inhibitory drives cancel each other, so that transient imbalances in the drive can bring the neuron's membrane voltage across the spike-initiation threshold. Such balanced models result in spiking statistics that match those found in the neocortex [16,17]. However, these statistics can emerge in distinct dynamical regimes depending on whether the balance between excitation and inhibition is tight or loose [18]. In tightly balanced networks, whereby the net neuronal drive is negligible compared to the antagonizing components, activity correlation is effectively zero, leading to a strictly asynchronous regime [19][20][21]. By contrast, in loosely balanced networks, the net neuronal drive remains of the same order as the antagonizing components, which allows for strong neuronal correlations during evoked activity, compatible with a synchronous regime [22][23][24].
While high spiking variability is an important constraint for cortical network modeling, there are other biophysical signatures that may be employed. We now have access to the subthreshold membrane voltage fluctuations that underlie spikes in awake, behaving animals (see Fig. 1). Membrane voltage recordings reveal two main deviations from the asynchronous hypothesis: first, membrane voltage does not hover near spiking threshold and is modulated by the synaptic drive; second, it exhibits state- or stimulus-dependent non-Gaussian fluctuation statistics with positive skewness [25][26][27][28]. In this work, we further argue that membrane voltage recordings reveal much larger voltage fluctuations than predicted by balanced cortical models [29,30].

[Fig. 1 caption (partial): Below the membrane voltage traces are records of horizontal and vertical eye movements, illustrating that the animal was fixating during the stimulus. Red and green traces indicate different trials under the same conditions. Adapted from [27].]
How could such large subthreshold variations in membrane voltage emerge? One way that fluctuations could emerge, even for large numbers of inputs, is if there is synchrony in the driving inputs [31]. In practice, input synchrony is revealed by the presence of positive spiking correlations, which quantify the propensity of distinct synaptic inputs to coactivate. Measurements of spiking correlations between pairs of neurons vary across reports, but have generally been shown to be weak [10][11][12]. That said, even weak correlations can have a large impact when the population of correlated inputs is large [32,33]. Further, the existence of input synchrony, supported by weak but persistent spiking correlations, is consistent with at least two other experimental observations. First, intracellular recordings from pairs of neurons in both anesthetized and awake animals reveal a high degree of membrane voltage correlations [7,34,35]. Second, excitatory and inhibitory conductance inputs are highly correlated with each other within the same neuron [35,36]. These observations suggest that input synchrony could explain the observed level of subthreshold variability.
While our focus is on achieving realistic subthreshold variability, other challenges to asynchronous networks have been described. In particular, real neural networks exhibit distinct regimes of activity depending on the strength of their afferent drives [37]. In that respect, Zerlaut et al. showed that asynchronous networks can exhibit a spectrum of realistic regimes of activity if they have moderate recurrent connections and are driven by strong thalamic projections (see also [17]). Furthermore, it has been a challenge to identify the scaling rule that should apply to synaptic strengths for asynchrony to hold stably in idealized networks [38]. Recently, Sanzeni et al. proposed that a realistic asynchronous regime is achieved for a particular large-coupling rule, whereby synaptic strengths scale with the logarithm of the network size. Both studies consider balanced networks with conductance-based neuronal models, but neither focuses on the role of synchrony, consistent with the asynchronous state hypothesis. The asynchronous state hypothesis is theoretically attractive because it represents a naturally stable regime of activity in infinite-size, balanced networks of current-based neuronal models [16,17,20,21]. Such neuronal models, however, neglect the voltage dependence of conductances, and it remains unclear whether the asynchronous regime is asymptotically stable for infinite-size, conductance-based network models.
Here, independent of the constraint of network stability, we ask whether biophysically relevant neuronal models can achieve the observed subthreshold variability under realistic levels of input synchrony. To answer this question, we derive exact analytical expressions for the stationary voltage variance of a single conductance-based neuron in response to synchronous shot-noise drives [39,40]. A benefit of shot-noise models compared to diffusion models is to allow for individual synaptic inputs to be temporally separated in distinct impulses, each corresponding to a transient positive conductance fluctuation [41][42][43]. We develop our shot-noise analysis for a variant of classically considered neuronal models. We call this variant the all-or-none-conductance-based (AONCB) model, for which synaptic activation occurs as an all-or-none process rather than as an exponentially relaxing process. To perform an exact treatment of these models, we develop original probabilistic techniques inspired by Marcus' work on shot-noise-driven dynamics [44,45]. To model shot-noise drives with synchrony, we develop a statistical framework based on the property of input exchangeability, which assumes that no synaptic inputs play a particular role. In this framework, we show that input drives with varying degrees of synchrony can be rigorously modeled via jump processes, while synchrony can be quantitatively related to measures of pairwise spiking correlations.
Our main results are biophysically interpretable formulas for the voltage mean and variance in the limit of instantaneous synapses. Crucially, these formulas explicitly depend on the input numbers, weights, and synchrony, and hold without any form of diffusion approximation. This is in contrast with analytical treatments that elaborate on the diffusion and effective-time-constant approximations [37,38,46,47]. We leverage these exact, explicit formulas to determine under which synchrony conditions a neuron can achieve the experimentally observed subthreshold variability. For biophysically relevant synaptic numbers and weights, we find that achieving realistic variability is possible in response to a restricted number of large asynchronous connections, compatible with the dominance of thalamo-cortical projections in the input layers of the visual cortex. However, we find that achieving realistic variability in response to a large number of moderate cortical inputs, as in superficial cortical visual layers, necessitates nonzero input synchrony in amounts that are consistent with the weak levels of spiking correlations measured in vivo.
In practice, persistent synchrony may spontaneously emerge in large but finite neural networks, as nonzero correlations are the hallmark of finite-dimensional interacting dynamics. The network structural features responsible for the magnitude of such correlations remain unclear, and we do not address this question here (see [48,49] for review). The persistence of synchrony is also problematic for theoretical approaches that consider networks in the infinite-size limit. Indeed, our analysis shows that in the absence of synchrony, and for all scalings of the synaptic weights, subthreshold variability must vanish in the limit of arbitrarily large numbers of synapses. This suggests that, independent of any balanced condition, the mean-field dynamics that emerge in infinite-size networks of conductance-based neurons will not exhibit Poisson-like spiking variability, at least in the absence of additional constraints on the network structure or on the biophysical properties of the neurons. In current-based neuronal models, by contrast, variability is not dampened by a conductance-dependent effective time constant. These findings therefore challenge the theoretical basis for the asynchronous state in conductance-based neuronal networks.
Our exact analysis, as well as its biophysical interpretation, is only possible at the cost of several caveats. First, we neglect the impact of the spike-generating mechanism (and of the post-spiking reset) in shaping the subthreshold variability. Second, we quantify synchrony under the assumption of input exchangeability, that is, for synapses having a typical strength as opposed to being heterogeneous. Third, we consider input drives that implement an instantaneous form of synchrony with temporally precise synaptic coactivations. Fourth, we do not consider slow temporal fluctuations in the mean synaptic drive. Fifth, and perhaps most concerning, we do not account for the stable emergence of a synchronous regime in network models. We argue in the discussion that all the above caveats but the last one can be addressed without impacting our findings. Addressing the last caveat remains an open problem.
For reference, we list in Table I the main notations used in this work. These notations utilize the subscripts e and i to refer to excitation and inhibition, respectively. The subscript e/i means that the subscript can be either e or i. The subscript ei is used to emphasize that a quantity depends jointly on excitation and inhibition.

II. STOCHASTIC MODELING AND ANALYSIS
A. All-or-none-conductance-based neurons

We consider the subthreshold dynamics of an original neuronal model, which we call the all-or-none-conductance-based (AONCB) model. In this model, the membrane voltage V obeys the first-order stochastic differential equation

C dV/dt = G(V_L − V) + g_e(V_e − V) + g_i(V_i − V) + I,  (1)

where randomness arises from the stochastically activating excitatory and inhibitory conductances, respectively denoted by g_e and g_i (see Fig. 2a). These conductances result from the action of K_e excitatory and K_i inhibitory synapses: g_e(t) = Σ_{k=1}^{K_e} g_{e,k}(t) and g_i(t) = Σ_{k=1}^{K_i} g_{i,k}(t). In the absence of synaptic inputs, i.e., when g_e = g_i = 0, and of external current I, the voltage exponentially relaxes toward its leak reversal potential V_L with passive time constant τ = C/G, where C denotes the cell's membrane capacitance and G denotes the cellular passive conductance [50]. In the presence of synaptic inputs, the transient synaptic currents I_e = g_e(V_e − V) and I_i = g_i(V_i − V) cause the membrane voltage to fluctuate. Conductance-based models account for the voltage dependence of synaptic currents via the driving forces V_e − V and V_i − V, where V_e and V_i denote the excitatory and inhibitory reversal potentials, respectively. Without loss of generality, we assume in the following that V_L = 0 and that V_i < V_L = 0 < V_e.
We model the spiking activity of the K_e + K_i upstream neurons as shot noise [39,40], which can be generically modeled as a (K_e + K_i)-dimensional stochastic point process [51,52]. Let us denote by {N_{e,k}(t)}_{1≤k≤K_e} its excitatory component and by {N_{i,k}(t)}_{1≤k≤K_i} its inhibitory component, where t denotes time and k is the neuron index. For each neuron k, the process N_{e/i,k}(t) is specified as the counting process registering the spiking occurrences of neuron k up to time t. In other words, N_{e/i,k}(t) = Σ_n 1_{{T_{e/i,k,n} ≤ t}}, where {T_{e/i,k,n}}_{n∈Z} denotes the full sequence of spiking times of neuron k and where 1_A denotes the indicator function of the set A. Note that by convention, we label spikes so that T_{e/i,k,0} ≤ 0 < T_{e/i,k,1} for all neurons k. Given a point-process model for the upstream spiking activity, classical conductance-based models consider that a single input to a synapse causes an instantaneous increase of its conductance, followed by an exponential relaxation with synaptic time constant τ_s. By contrast, in the AONCB model, synaptic activation is all-or-none:

g_{e/i,k}(t) = (C/τ_s) w_{e/i,k} Σ_n 1_{{T_{e/i,k,n} ≤ t < T_{e/i,k,n} + τ_s}},  (2)

where w_{e/i,k} ≥ 0 is the dimensionless synaptic weight.
The above equation prescribes that the n-th spike delivery to synapse k at time T_{e/i,k,n} is followed by an instantaneous increase of that synapse's conductance by an amount C w_{e/i,k}/τ_s for a period τ_s. Thus, the synaptic response prescribed by Eq. (2) is all-or-none, as opposed to being graded as in classical conductance-based models. Moreover, just as in classical models, Eq. (2) allows synapses to multi-activate, thereby neglecting nonlinear synaptic saturation (see Fig. 2b).
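To make the dynamics concrete, the following minimal simulation sketches Eq. (1) under all-or-none conductance pulses. All parameter values (τ = 15 ms, τ_s = 2 ms, the weights, rates, and reversal potentials) are illustrative assumptions rather than values used in this work, and a single aggregate Poisson drive per polarity stands in for the K_e + K_i individual input trains.

```python
import random

def simulate_aoncb(T=1.0, dt=1e-4, tau=15e-3, tau_s=2e-3,
                   w_e=0.1, w_i=0.2, r_e=1000.0, r_i=1000.0,
                   V_e=60.0, V_i=-10.0, seed=0):
    """Euler scheme for the AONCB subthreshold voltage (V_L = 0, no spiking
    mechanism). Each synaptic event raises the reduced conductance h = g/C
    by w/tau_s for a duration tau_s (all-or-none activation). The rates
    r_e, r_i are aggregate input rates in Hz (illustrative values)."""
    rng = random.Random(seed)
    V, trace = 0.0, []
    off_e, off_i = [], []                      # switch-off times of active pulses
    for i in range(int(T / dt)):
        t = i * dt
        if rng.random() < r_e * dt:            # excitatory synaptic event
            off_e.append(t + tau_s)
        if rng.random() < r_i * dt:            # inhibitory synaptic event
            off_i.append(t + tau_s)
        off_e = [s for s in off_e if s > t]    # drop expired pulses
        off_i = [s for s in off_i if s > t]
        h_e = len(off_e) * w_e / tau_s         # reduced conductance g_e/C
        h_i = len(off_i) * w_i / tau_s
        V += (-V / tau + h_e * (V_e - V) + h_i * (V_i - V)) * dt
        trace.append(V)
    return trace

trace = simulate_aoncb()
mean_V = sum(trace) / len(trace)
```

Since the drive always pulls V toward a conductance-weighted average of the reversal potentials, the trace remains confined between V_i and V_e.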
To be complete, AONCB neurons must in principle include a spike-generating mechanism. A customary choice is the integrate-and-fire mechanism [53,54]: a neuron emits a spike whenever its voltage V exceeds a threshold value V_T, and resets instantaneously to some value V_R afterwards. Such a mechanism impacts the neuronal subthreshold voltage dynamics via the post-spiking reset, which implements a nonlinear form of feedback. However, in this work we focus on the variability that is generated by fluctuating, possibly synchronous, synaptic inputs. For this reason, we will neglect the influence of the spiking reset in our analysis and, in fact, ignore the spike-generating mechanism altogether. Finally, although our analysis of AONCB neurons will apply to positive synaptic activation times τ_s > 0, we will only discuss our results in the limit of instantaneous synapses. This corresponds to taking τ_s → 0+ while adopting the scaling g_{e/i} ∝ 1/τ_s in order to maintain the charge transfer induced by a synaptic event. We will see that this limiting process preserves the response variability of AONCB neurons.

[Fig. 3 caption (partial): ... where X_{k,i} = 1 if input k activates in time bin i. Under assumptions of exchangeability, the input spiking correlation is entirely captured by the count statistics of how many inputs coactivate within a given time bin. In the limit K_e → ∞, the distribution of the fraction of coactivating inputs coincides with the directing de Finetti measure, which we consider as a parametric choice in our approach. In the absence of correlation, synapses tend to activate in isolation: ρ_e = 0 in (a). In the presence of correlation, synapses tend to coactivate, yielding disproportionately large synaptic activation events: ρ_e = 0.1 in (b). Considering the associated cumulative counts specifies discrete-time jump processes that can be generalized to the continuous-time limit, i.e., for time bins of vanishing duration ∆t → 0+.]

B. Quantifying the synchrony of exchangeable synaptic inputs
Our goal here is to introduce a discrete model for synaptic inputs whereby synchrony can be rigorously quantified. To this end, let us suppose that the neuron under consideration receives inputs from K_e excitatory neurons and K_i inhibitory neurons, chosen from arbitrarily large pools of N_e ≫ K_e excitatory neurons and N_i ≫ K_i inhibitory neurons. Adopting a discrete-time representation with elementary bin size ∆t, we denote by {{x_{1,n}, ..., x_{K_e,n}}, {y_{1,n}, ..., y_{K_i,n}}} in {0,1}^{K_e} × {0,1}^{K_i} the input state within the n-th bin. Our main simplifying assumption consists in modeling the N_e excitatory inputs and the N_i inhibitory inputs as separately exchangeable random variables {X_{1,n}, ..., X_{N_e,n}} and {Y_{1,n}, ..., Y_{N_i,n}} that are distributed identically over {0,1}^{N_e} and {0,1}^{N_i}, respectively, and independently across time. This warrants dropping the dependence on the time index n. By separately exchangeable, we mean that no subset of excitatory inputs or inhibitory inputs plays a distinct role, so that at all times, the respective distributions of {X_{1,n}, ..., X_{N_e,n}} and {Y_{1,n}, ..., Y_{N_i,n}} are independent of the input labeling. In other words, for all permutations σ_e of {1, ..., N_e} and σ_i of {1, ..., N_i}, the joint distribution of {X_{σ_e(1)}, ..., X_{σ_e(N_e)}} and {Y_{σ_i(1)}, ..., Y_{σ_i(N_i)}} is identical to that of {X_1, ..., X_{N_e}} and {Y_1, ..., Y_{N_i}} [55,56]. By contrast with independent spiking variables, exchangeable ones can exhibit a nonzero correlation structure. By symmetry, this structure is specified by three correlation coefficients

ρ_e = C[X_k, X_l]/√(V[X_k]V[X_l]),  ρ_i = C[Y_k, Y_l]/√(V[Y_k]V[Y_l]),  ρ_ei = C[X_k, Y_l]/√(V[X_k]V[Y_l]),

where C[X, Y] and V[X] denote the covariance and the variance of the binary variables X and Y, respectively. Interestingly, a more explicit form for ρ_e, ρ_i, and ρ_ei can be obtained in the limit of infinite-size pools N_e, N_i → ∞. This follows from de Finetti's theorem [57], which states that the probability of observing a given input configuration for K_e excitatory neurons and K_i inhibitory neurons is given by

P(X_1 = x_1, ..., X_{K_e} = x_{K_e}, Y_1 = y_1, ..., Y_{K_i} = y_{K_i}) = ∫∫ Π_{k=1}^{K_e} θ_e^{x_k}(1 − θ_e)^{1−x_k} Π_{l=1}^{K_i} θ_i^{y_l}(1 − θ_i)^{1−y_l} dF_ei(θ_e, θ_i),

where F_ei is the directing de Finetti measure, defined as a bivariate distribution over the unit square [0,1] × [0,1]. In the equation above, the numbers θ_e and θ_i represent the (jointly fluctuating) probabilities that an excitatory neuron and an inhibitory neuron spike in a given time bin, respectively. The core message of de Finetti's theorem is that the spiking activity of neurons from infinite exchangeable pools is obtained as a mixture of conditionally independent binomial laws. This mixture is specified by the directing measure F_ei, which fully parametrizes our synchronous input model. Independent spiking corresponds to choosing F_ei as a point-mass measure concentrated on some probabilities π_{e/i} = r_{e/i}∆t, where r_{e/i} denotes the individual spiking rate of a neuron: dF_ei(θ_e, θ_i) = δ(θ_e − π_e)δ(θ_i − π_i) dθ_e dθ_i (see Fig. 3a). By contrast, a dispersed directing measure F_ei corresponds to the existence of correlations among the inputs (see Fig. 3b). Accordingly, we show in Appendix A that the pairwise spiking correlations ρ_{e/i} take the explicit form

ρ_{e/i} = V[θ_{e/i}] / (E[θ_{e/i}](1 − E[θ_{e/i}])),  (3)

whereas ρ_ei, the correlation between excitation and inhibition, is given by

ρ_ei = C[θ_e, θ_i] / √(E[θ_e](1 − E[θ_e]) E[θ_i](1 − E[θ_i])).  (4)

In the above formulas, E[θ_{e/i}], V[θ_{e/i}], and C[θ_e, θ_i] denote the expectation, variance, and covariance of (θ_e, θ_i) ∼ F_ei, respectively. Note that these formulas show that nonzero correlations ρ_{e/i}
correspond to nonzero variances V[θ_{e/i}], as is always the case for dispersed distributions. Independence between excitation and inhibition, for which ρ_ei = 0, corresponds to a directing measure F_ei with product form, i.e., F_ei(θ_e, θ_i) = F_e(θ_e)F_i(θ_i), where F_e and F_i denote the marginal distributions. Alternative forms of the directing measure F_ei generally lead to nonzero cross correlations ρ_ei, which necessarily satisfy 0 ≤ ρ_ei ≤ √(ρ_e ρ_i). In this exchangeable setting, a reasonable parametric choice for the marginals F_e and F_i is given by beta distributions Beta(α, β), where α and β denote shape parameters [58]. Practically, this choice is motivated by the ability of beta distributions to efficiently fit correlated spiking data generated by existing algorithms [59]. Formally, this choice is motivated by the fact that beta distributions are conjugate priors for binomial likelihood functions, so that the resulting probabilistic models can be studied analytically [60][61][62]. For instance, for F_e ∼ Beta(α_e, β_e), the probability that k_e synapses among the K_e inputs are jointly active within the same time bin follows the beta-binomial distribution

P_{e,k} = (K_e choose k) B(k + α_e, K_e − k + β_e) / B(α_e, β_e),  (5)

where B denotes the beta function. Accordingly, the mean number of active excitatory inputs is E[k_e] = K_e α_e/(α_e + β_e) = K_e r_e ∆t. Utilizing Eq. (3), we also find that ρ_e = 1/(1 + α_e + β_e). Note that the above results show that by changing de Finetti's measure, one can modify not only the spiking correlation but also the mean spiking rate.
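As a sanity check on this parametrization, the following sketch draws exchangeable input counts through the de Finetti construction (θ ~ Beta(α_e, β_e), then K_e conditionally independent activations) and recovers both the mean count K_e α_e/(α_e + β_e) and the correlation ρ_e = 1/(1 + α_e + β_e) empirically. The specific values α_e = 0.2, β_e = 9.8, K_e = 100 are arbitrary illustrations.

```python
import random

def sample_counts(K, alpha, beta, trials, seed=0):
    """Beta-binomial counts: draw theta ~ Beta(alpha, beta) (the directing
    de Finetti measure), then K conditionally independent coin flips."""
    rng = random.Random(seed)
    counts = []
    for _ in range(trials):
        theta = rng.betavariate(alpha, beta)
        counts.append(sum(rng.random() < theta for _ in range(K)))
    return counts

K, alpha, beta = 100, 0.2, 9.8          # illustrative shape parameters
counts = sample_counts(K, alpha, beta, trials=20000)
mean_k = sum(counts) / len(counts)      # approaches K*alpha/(alpha+beta) = 2

# Pairwise spiking correlation recovered from count statistics alone:
# rho = (E[k(k-1)]/(K(K-1)) - p^2) / (p(1-p)) with p = E[k]/K.
p = mean_k / K
m2 = sum(k * (k - 1) for k in counts) / (len(counts) * K * (K - 1))
rho_hat = (m2 - p * p) / (p * (1 - p))
rho_theory = 1.0 / (1.0 + alpha + beta)  # = 1/11
```

This makes tangible how a dispersed directing measure converts into weak but nonzero pairwise correlation at the level of binary spiking variables.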
In the following, we will exploit the above analytical results to illustrate that taking the continuous-time limit ∆t → 0+ specifies synchronous input drives as compound Poisson processes [51,52]. To do so, we will consider both excitation and inhibition, which in a discrete setting corresponds to considering bivariate probability distributions P_{ei,kl} defined over {0, ..., K_e} × {0, ..., K_i}. Ideally, these distributions P_{ei,kl} should be such that their marginals P_{e,k} and P_{i,l} have distributions given by Eq. (5). Unfortunately, there does not seem to be a simple low-dimensional parametrization for such distributions P_{ei,kl}, except in particular cases. To address this point, at least numerically, one can resort to a variety of methods, including copulas [63,64]. For analytical calculations, we will only consider two particular cases for which the marginals of F_ei are given by beta distributions: (i) the case of zero correlation, for which θ_e and θ_i are independent, i.e., F_ei(θ_e, θ_i) = F_e(θ_e)F_i(θ_i), and (ii) the case of maximum positive correlation, for which θ_e = θ_i, i.e., dF_ei(θ_e, θ_i) = δ(θ_e − θ_i) dF(θ_e) dθ_i with F_e = F_i = F.

C. Synchronous synaptic drives as compound Poisson processes
Under the assumption of input exchangeability, and given typical excitatory and inhibitory synaptic weights w_{e/i}, the overall synaptic drive to a neuron is determined by (k_e, k_i), the numbers of active excitatory and inhibitory inputs at each discrete time step. As AONCB dynamics unfolds in continuous time, we need to consider this discrete drive in the continuous-time limit as well, i.e., for vanishing time bins ∆t → 0+. When ∆t → 0+, we show in Appendix B that the overall synaptic drive specifies a compound Poisson process Z with bivariate jumps (W_e, W_i). Specifically, we have

Z(t) = Σ_{n=1}^{N(t)} (W_{e,n}, W_{i,n}),  (6)

where (W_{e,n}, W_{i,n}) are i.i.d. samples with bivariate distribution denoted by p_ei, and where the overall driving Poisson process N registers the number of synaptic events without multiple counts (see Fig. 4). By synaptic events, we mean those times for which at least one excitatory synapse or one inhibitory synapse activates. We say that N registers these events without multiple counts as it counts one event independently of the number of possibly coactivating synapses. Similarly, we denote by N_e and N_i the counting processes registering synaptic excitatory events and synaptic inhibitory events alone, respectively. These processes N_e and N_i are Poisson processes that are correlated in the presence of synchrony, as both N_e and N_i may register the same event. Note that this implies that max(N_e(t), N_i(t)) ≤ N(t) ≤ N_e(t) + N_i(t). More generally, denoting by b and b_{e/i} the rates of N and N_{e/i}, respectively, the presence of synchrony implies that max(b_e, b_i) ≤ b ≤ b_e + b_i and r_{e/i} ≤ b_{e/i} ≤ K_{e/i} r_{e/i}, where r_{e/i} is the typical activation rate of a single synapse.
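The ordering relations between the event rates b, b_e, b_i and the spiking rates hold pathwise, so they can be checked directly on simulated discrete-time drives. The sketch below uses the maximally correlated construction (one shared θ per bin; all parameter values are illustrative) and verifies max(N_e, N_i) ≤ N ≤ N_e + N_i together with the count-level version of r_e ≤ b_e ≤ K_e r_e.

```python
import random

def correlated_input_counts(K_e, K_i, alpha, beta, n_bins, seed=1):
    """Discrete-time synchronous drive: one shared theta per bin (the
    maximally correlated E/I case), then conditionally independent
    activations. Returns per-bin counts (k_e, k_i)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_bins):
        theta = rng.betavariate(alpha, beta)
        k_e = sum(rng.random() < theta for _ in range(K_e))
        k_i = sum(rng.random() < theta for _ in range(K_i))
        out.append((k_e, k_i))
    return out

K_e, K_i = 50, 20
bins = correlated_input_counts(K_e, K_i, alpha=0.05, beta=4.95, n_bins=50000)
N_e = sum(1 for k, l in bins if k > 0)    # bins with an excitatory event
N_i = sum(1 for k, l in bins if l > 0)    # bins with an inhibitory event
N = sum(1 for k, l in bins if k + l > 0)  # bins with any synaptic event
S_e = sum(k for k, l in bins)             # total excitatory spike count
# Dividing these counts by the total duration gives the rates b, b_e, b_i
# and K_e r_e; the bounds max(b_e, b_i) <= b <= b_e + b_i and
# r_e <= b_e <= K_e r_e then read as the count inequalities tested below.
```

The first inequality is immediate because every excitatory or inhibitory event is an event, while a synchronous joint event is counted once by N but once each by N_e and N_i.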
For simplicity, we explain how to obtain such limit compound Poisson processes by reasoning on the excitatory inputs alone. To this end, let us denote the marginal jump distribution of W_e by p_e. Given a fixed typical synaptic weight w_e, the jumps are quantized as W_e = k w_e, with k distributed on {1, ..., K_e}, as by convention jumps cannot have zero size. These jumps are naturally defined in the discrete setting, i.e., with ∆t > 0, and their discrete distribution is given via conditioning as P_{e,k}/(1 − P_{e,0}).

[Fig. 4 caption (partial): Under the assumption of partial exchangeability, synaptic inputs can only be distinguished by whether they are excitatory or inhibitory, which is marked by coloring them red or blue in the discrete representation of correlated synaptic inputs with bin size ∆t = 1 ms. Accordingly, considering excitation and inhibition separately specifies two associated input-count processes and two cumulative counting processes. For nonzero spiking correlation ρ = 0.03, these processes are themselves correlated, as captured by the joint distribution of excitatory and inhibitory input counts P_{ei,kl} (center) and by the joint distribution of excitatory and inhibitory jumps P_{ei,kl}/(1 − P_{ei,00}) (right). (b) The input count distribution P_{ei,kl} is a finite-size approximation of the bivariate directing de Finetti measure F_ei, which we consider as a parameter. For a smaller bin size ∆t = 0.1 ms, this distribution concentrates at (0, 0), as an increasing proportion of time bins does not register any synaptic events, be they excitatory or inhibitory. In the presence of correlation, however, the conditioned jump distribution remains correlated but also dispersed. (c) In the limit ∆t → 0+, the input-count distribution concentrates at (0, 0), consistent with the fact that the average number of synaptic activations remains constant while the number of bins diverges. By contrast, the distribution of synaptic event sizes conditioned on being distinct from (0, 0) converges toward a well-defined distribution: p_{ei,kl} = lim_{∆t→0+} P_{ei,kl}/(1 − P_{ei,00}). This distribution characterizes the jumps of a bivariate compound Poisson process, obtained as the limit of the cumulative count process when considering ∆t → 0+.]

For beta-distributed marginals F_e, we show in Appendix B that considering ∆t → 0+ yields the jump distribution

p_{e,k} = lim_{∆t→0+} P_{e,k}/(1 − P_{e,0}) = (K_e choose k) B(k, β_e + K_e − k) / (ψ(β_e + K_e) − ψ(β_e)),  (7)

where ψ denotes the digamma function. In the following, we will explicitly index discrete count distributions, e.g., P_{e,k}, to distinguish them from the corresponding jump distributions, e.g., p_{e,k}. Eq. (7) follows from observing that the probability to find a spike within a bin is E[X_i] = α_e/(α_e + β_e) = r_e ∆t, so that for fixed excitatory spiking rate r_e, α_e → 0+ when ∆t → 0+. As a result, the continuous-time spiking correlation is ρ_e = 1/(1 + β_e), so that we can interpret β_e as a parameter controlling correlations. More generally, we show in Appendix C that the limit correlation ρ_e only depends on the count distribution p_{e,k} via

ρ_e = E_e[k(k − 1)] / ((K_e − 1) E_e[k]),  (8)

where E_e[•] denotes expectation with respect to p_{e,k}. This shows that zero spiking correlation corresponds to single synaptic activations, i.e., to an input drive modeled as a Poisson process, as opposed to a compound Poisson process. For Poisson-process models, the overall rate of synaptic events is necessarily equal to the sum of the individual spiking rates: b_e = K_e r_e. This is no longer the case in the presence of synchronous spiking, when nonzero input correlation ρ_e > 0 arises from coincidental synaptic activations. Indeed, as the population spiking rate is conserved when ∆t → 0+, the rate of excitatory synaptic events is given by

b_e = K_e r_e / E_e[k].  (9)

Let us reiterate for clarity that if k synapses activate synchronously, this only counts as one synaptic event, which can come in variable size k. Consistently, we have in general r_e ≤ b_e ≤ K_e r_e. When β_e → 0, we have perfect synchrony with ρ_e = 1 and b_e → r_e, whereas the independent spiking regime with ρ_e = 0 is attained for β_e → ∞, for which we have b_e → K_e r_e. It is possible to generalize the above construction to mixed excitation and inhibition, but a closed-form treatment is only possible in special cases. For the independent case (i), in the
limit ∆t → 0+, jumps are either excitatory alone or inhibitory alone, i.e., the jump distribution p_ei has support on {1, ..., K_e} × {0} ∪ {0} × {1, ..., K_i}. Accordingly, we show in Appendix D that

p_{ei,kl} = lim_{∆t→0+} P_{ei,kl}/(1 − P_{ei,00}) = (b_e p_{e,k} 1_{{l=0}} + b_i p_{i,l} 1_{{k=0}})/(b_e + b_i),  (10)

where p_{e/i,k} and b_{e/i} are specified by Eq. (7) and Eq. (9) with parameters β_{e/i} and K_{e/i}, respectively. This shows that, as expected, in the absence of synchrony the driving compound Poisson process Z with bidimensional jumps is obtained as the direct sum of two independent compound Poisson processes. In particular, the driving processes are such that N = N_e + N_i, with rates satisfying b = b_e + b_i. By contrast, for the maximally correlated case (ii) with r_e = r_i = r, we show in Appendix D that the jumps are given as (W_e, W_i) = (k w_e, l w_i), with (k, l) distributed on {0, ..., K_e} × {0, ..., K_i} \ {(0, 0)} (see Fig. 4b and c) according to

p_{ei,kl} = lim_{∆t→0+} P_{ei,kl}/(1 − P_{ei,00}) = (K_e choose k)(K_i choose l) B(k + l, β + K_e + K_i − k − l) / (ψ(β + K_e + K_i) − ψ(β)).  (11)

Incidentally, the driving Poisson process N has a rate b determined by adapting Eq. (9), for which one can check that r ≤ b ≤ (K_e + K_i)r.
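The relations around Eqs. (7)-(9) can be probed numerically. Under our reading of the beta parametrization, the limit jump weights are proportional to C(K_e, k) B(k, β_e + K_e − k), the digamma normalizer equals Σ_{j=0}^{K_e−1} 1/(β_e + j) by the recurrence ψ(x + 1) = ψ(x) + 1/x, and the count-based correlation E_e[k(k − 1)]/((K_e − 1) E_e[k]) collapses to 1/(1 + β_e); the sketch below checks these identities for one illustrative pair (K_e, β_e).

```python
from math import comb, exp, lgamma

def beta_fn(a, b):
    """Euler beta function computed via log-gamma for stability."""
    return exp(lgamma(a) + lgamma(b) - lgamma(a + b))

def jump_distribution(K, beta):
    """Limit jump distribution p_k proportional to C(K, k) B(k, beta + K - k),
    for jump sizes k = 1..K (our reading of Eq. (7))."""
    weights = [comb(K, k) * beta_fn(k, beta + K - k) for k in range(1, K + 1)]
    Z = sum(weights)
    return [w / Z for w in weights], Z

K, beta = 30, 9.0
p, Z = jump_distribution(K, beta)
# psi(beta + K) - psi(beta) evaluated through the digamma recurrence:
Z_digamma = sum(1.0 / (beta + j) for j in range(K))

E_k = sum((k + 1) * q for k, q in enumerate(p))        # E_e[k]
E_kk = sum((k + 1) * k * q for k, q in enumerate(p))   # E_e[k(k-1)]
rho_e = E_kk / ((K - 1) * E_k)    # should equal 1/(1 + beta) = 0.1
b_over_Kr = 1.0 / E_k             # b_e/(K_e r_e), between 1/K_e and 1
```

The last quantity expresses the event-rate reduction b_e = K_e r_e / E_e[k]: stronger synchrony means larger mean jump size and proportionally fewer distinct synaptic events.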
All the closed-form results so far have been derived for a synchrony parametrization in terms of beta distributions. There are other possible parametrizations, and these would lead to different count distributions p_{ei,kl}, but without known closed form. To address this limitation, in the following all our results will hold for arbitrary distributions p_ei of the jump sizes (W_e, W_i) on the positive orthant (0, ∞) × (0, ∞). In particular, our results will be given in terms of expectations with respect to p_ei, still denoted by E_ei[•]. Nonzero correlation between excitation and inhibition will correspond to those choices of p_ei for which W_e W_i > 0 with nonzero probability, which indicates the presence of synchronous excitatory and inhibitory inputs. Note that this modeling setting restricts nonzero correlations to be positive, which is an inherent limitation of our synchrony-based approach. When considering an arbitrary p_ei, the main caveat is understanding how such a distribution may correspond to given input numbers K_{e/i} and spiking correlations ρ_{e/i} and ρ_ei. For this reason, we will always consider that k_e = W_e/w_e and k_i = W_i/w_i follow beta-distributed marginal distributions when discussing the roles of w_{e/i}, K_{e/i}, ρ_{e/i}, and ρ_ei in shaping the voltage response of a neuron. In that respect, we show in Appendix C that the coefficient ρ_ei can always be deduced from the knowledge of a discrete count distribution p_{ei,kl} on {0, ..., K_e} × {0, ..., K_i} \ {(0, 0)} via

ρ_ei = E_ei[kl] / √(K_e K_i E_ei[k] E_ei[l]),

where the expectations are with respect to p_{ei,kl}.
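Assuming the cross-correlation coefficient reads ρ_ei = E_ei[kl]/√(K_e K_i E_ei[k] E_ei[l]) (our reading of the Appendix C result), the following sketch evaluates it on two extreme count distributions and recovers the expected endpoints: 1 when every event recruits all synapses, and 0 when jumps are purely excitatory or purely inhibitory. Both toy distributions are hypothetical illustrations.

```python
from math import sqrt

def rho_ei(p, K_e, K_i):
    """Cross correlation from a joint jump-count distribution p, given as
    a dict {(k, l): prob} on {0..K_e} x {0..K_i} minus {(0, 0)}."""
    E_k = sum(k * q for (k, l), q in p.items())
    E_l = sum(l * q for (k, l), q in p.items())
    E_kl = sum(k * l * q for (k, l), q in p.items())
    return E_kl / sqrt(K_e * K_i * E_k * E_l)

K_e, K_i = 40, 10
# Perfect synchrony: every event recruits all synapses.
p_sync = {(K_e, K_i): 1.0}
# Independent drive: jumps are excitatory alone or inhibitory alone.
p_indep = {(3, 0): 0.5, (0, 2): 0.5}
```

In this setting, ρ_ei > 0 requires events with k l > 0, i.e., coactivating excitatory and inhibitory synapses, mirroring the W_e W_i > 0 condition stated above.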

D. Instantaneous synapses and Marcus integrals
We are now in a position to formulate the mathematical problem at stake within the framework developed by Marcus to study shot-noise-driven systems [44,45]. Our goal is to quantify the subthreshold variability of an AONCB neuron subjected to synchronous inputs. Mathematically, this amounts to computing the first two moments of the stationary process solving the stochastic dynamics of Eq. (12), where V i < 0 < V e are constants and where the reduced conductances h e = g e /C and h i = g i /C follow stochastic processes defined in terms of a compound Poisson process Z with bivariate jumps. Formally, the compound Poisson process Z is specified by b, the rate of its governing Poisson process N, and by the joint distribution of its jumps p ei. Each point of the Poisson process N represents a synaptic activation time T n, where n is in Z with the convention that T 0 ≤ 0 ≤ T 1. At all these times, the synaptic input sizes are drawn as i.i.d. random variables (W e,n, W i,n) in R + × R + with probability distribution p ei.
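As a concrete illustration, the driving process Z can be sampled directly from its two ingredients: the rate b and the jump distribution p ei. The sketch below (Python, with hypothetical parameter values) draws activation times from a Poisson process and i.i.d. jump pairs; in the asynchronous case, each jump is purely excitatory or purely inhibitory, so that W e W i = 0:

```python
import numpy as np

rng = np.random.default_rng(0)

def compound_poisson_drive(b, sample_jump, t_max):
    """Sample one realization of the driving process Z on [0, t_max]:
    activation times T_n from a Poisson process of rate b, with i.i.d.
    bivariate jump sizes (W_e, W_i) drawn from the jump distribution p_ei."""
    times = []
    t = 0.0
    while True:
        t += rng.exponential(1.0 / b)  # inter-event gaps are Exp(b)
        if t > t_max:
            break
        times.append(t)
    jumps = np.array([sample_jump() for _ in times])  # shape (n_events, 2)
    return np.array(times), jumps

# Hypothetical jump sampler for the asynchronous case: excitation and
# inhibition act independently, so each event is purely one or the other.
def sample_jump_async(b_e=4000.0, b_i=1000.0, w_e=0.001, w_i=0.004):
    if rng.random() < b_e / (b_e + b_i):
        return (w_e, 0.0)   # excitatory-only event: W_e W_i = 0
    return (0.0, w_i)       # inhibitory-only event

T, W = compound_poisson_drive(b=5000.0, sample_jump=sample_jump_async, t_max=1.0)
```

Synchrony would enter solely through `sample_jump`, e.g., by drawing jointly positive pairs (W e, W i) from a correlated count distribution.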
At this point, it is important to observe that the driving process Z is distinct from the conductance process h = (h e, h i). The latter process is formally defined for AONCB neurons in terms of the weights W e,n, where the dimensionless parameter ϵ = τ s /τ > 0 is the ratio of the duration of synaptic activation relative to the passive membrane time constant. Note that the amplitude of h scales in inverse proportion to ϵ in order to maintain the overall charge transfer during synaptic events of varying durations. Such a scaling ensures that the voltage response of AONCB neurons has finite, nonzero variability for small or vanishing synaptic time constants, i.e., for ϵ ≪ 1 (see Fig. 5). The simplifying limit of instantaneous synapses is obtained for ϵ = τ s /τ → 0 +, which corresponds to infinitely fast synaptic activation. By virtue of its construction, the conductance process h becomes a shot noise in the limit ϵ → 0 +, which can be formally identified with dZ/dt. This is consistent with the definition of shot-noise processes as temporal derivatives of compound Poisson processes, i.e., as collections of randomly weighted Dirac-delta masses. Due to their high degree of idealization, shot-noise models are often amenable to exact stochastic analysis, albeit with some caveats. For equations akin to Eq. (12) in the limit of instantaneous synapses, such a caveat follows from the multiplicative nature of the conductance shot noise h. In principle, one might expect to solve Eq.
(12) with shot-noise drive via stochastic calculus, as for diffusion-based drive. This would involve interpreting the stochastic integral representations of solutions in terms of Stratonovich representations [65]. However, Stratonovich calculus is not well defined for shot-noise drives [66]. To remedy this point, Marcus has proposed to study stochastic equations subjected to regularized versions of shot noises, whose regularity is controlled by a nonnegative parameter ϵ [44,45]. For ϵ > 0, the dynamical equations admit classical solutions, whereas the shot-noise-driven regime is recovered in the limit ϵ → 0 +. The hope is to be able to characterize analytically the shot-noise-driven solution, or at least some of its moments, by considering regular solutions in the limit ϵ → 0 +. We choose to refer to the control parameter as ϵ by design in the above. This is because AONCB models represent Marcus-type regularizations that are amenable to analysis in the limit of instantaneous synapses, i.e., when ϵ = τ s /τ → 0 +, for which the conductance process h converges toward a form of shot noise. The Marcus interpretation of stochastic integration has practical implications for numerical simulations with shot noise [41]. According to this interpretation, shot-noise-driven solutions shall be conceived as limits of regularized solutions for which standard numerical schemes apply. Correspondingly, shot-noise-driven solutions to Eq. (12) can be simulated via a limit numerical scheme. We derive such a limit scheme in Appendix E.
Specifically, we show that the voltage of shot-noise-driven AONCB neurons exponentially relaxes toward the leak reversal potential V L = 0, except when subjected to synaptic impulses at times {T n} n∈Z. At these times, the voltage V updates discontinuously according to the Marcus rule given in Appendix E. Observe that this Marcus rule directly implies that no jump can cause the voltage to exit (V i, V e), the allowed range of variation for V. Moreover, note that this rule specifies an exact event-driven simulation scheme given knowledge of the synaptic activation times and sizes {T n, W e,n, W i,n} n∈Z [67]. We adopt this Marcus-type numerical scheme in all the simulations that involve instantaneous synapses.
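A minimal event-driven sketch of such a scheme is given below. The jump rule used here is an assumption consistent with the constraints stated above (each jump relaxes the voltage toward a conductance-weighted average of the reversal potentials, so the update can never leave (V i, V e)); the exact rule is the one derived in Appendix E.

```python
import numpy as np

def marcus_update(v, w_e, w_i, V_e=60.0, V_i=-10.0):
    """Event-driven voltage jump at a synaptic activation time (assumed form):
    the voltage relaxes toward the weighted average of the reversal
    potentials with total strength W_e + W_i, so the update stays
    strictly inside the interval (V_i, V_e)."""
    w = w_e + w_i
    v_star = (w_e * V_e + w_i * V_i) / w  # weighted average of reversals
    return v_star + (v - v_star) * np.exp(-w)

def simulate(T, W, tau=0.015, v0=0.0):
    """Exponential leak relaxation toward V_L = 0 between activation
    times T, interrupted by Marcus jumps of sizes W = (W_e, W_i)."""
    v, t_prev, trace = v0, 0.0, []
    for t, (w_e, w_i) in zip(T, W):
        v *= np.exp(-(t - t_prev) / tau)  # leak relaxation between events
        v = marcus_update(v, w_e, w_i)    # discontinuous voltage update
        trace.append(v)
        t_prev = t
    return np.array(trace)
```

Because the scheme only evaluates the voltage at event times, it is exact for instantaneous synapses given the activation times and sizes, with no discretization error between events.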

E. Moment calculations
When driven by stationary compound Poisson processes, AONCB neurons exhibit ergodic voltage dynamics. As a result, the typical voltage state, obtained by sampling the voltage at a random time, is captured by a unique stationary distribution. Our main analytical results, which we give here, consist of exact formulas for the first two voltage moments with respect to that stationary distribution. Specifically, we derive the stationary mean voltage Eq. (14) in Appendix F and the stationary voltage variance Eq. (16) in Appendix G. These results are obtained by a probabilistic treatment exploiting the properties of compound Poisson processes within Marcus' framework. This treatment yields compact, interpretable formulas in the limit of instantaneous synapses ϵ = τ s /τ → 0 +. Readers who are interested in the method of derivation for these results are encouraged to go over the calculations presented in Appendices F, G, H, I, J, K, and L.
[Fig. 6: comparison of the analytical expressions (14) and (16) with numerical estimates obtained via Monte Carlo simulations for the synchrony conditions considered in (a).]
In the limit of instantaneous synapses, ϵ → 0 +, we find
that the stationary voltage mean is given by Eq. (14), where we define the first-order synaptic efficacies in Eq. (15). Note that E ei [•] refers to the expectation with respect to the jump distribution p ei in Eq. (15), whereas E [•] refers to the stationary expectation in Eq. (14). Eq. (14) has the same form as for deterministic dynamics with constant conductances, in the sense that the mean voltage is a weighted sum of the reversal potentials V e, V i, and V L = 0. One can check that for such deterministic dynamics, the synaptic efficacies involved in the stationary mean simply read a e/i,1 = K e/i r e/i τ w e/i. Thus, the impact of synaptic variability, and in particular of synchrony, entirely lies in the definition of the efficacies in Eq. (15). In the absence of synchrony, one can check that accounting for the shot-noise nature of the synaptic conductances leads to synaptic efficacies of exponential form: a e/i,1 = K e/i r e/i τ (1 − e −w e/i ). In turn, accounting for input synchrony leads to synaptic efficacies expressed as expectations of these exponential forms in Eq. (15), consistent with the stochastic nature of the conductance jumps (W e, W i). Our other main result, the formula for the stationary voltage variance, involves synaptic efficacies of similar form. Specifically, we find the stationary variance given by Eq. (16), where the second-order synaptic efficacies a e/i,2 are defined analogously to Eq. (15). Eq. (16) also prominently features auxiliary second-order efficacies defined by a e/i,12 = a e/i,1 − a e/i,2, which take explicit exponential forms analogous to Eq. (15). The other quantity of interest featuring in Eq. (16) is the crosscorrelation coefficient c ei, which entirely captures the (nonnegative) correlation between excitatory and inhibitory inputs and shall be seen as an efficacy as well.
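The structure of the mean formula can be sketched numerically. The snippet below is a sketch under two assumptions flagged in the comments: the no-synchrony efficacy takes the exponential form a 1 = K r τ (1 − e −w ) quoted above, and the mean is the weighted sum of reversal potentials E [V ] = (a e,1 V e + a i,1 V i)/(1 + a e,1 + a i,1), consistent with the description of Eq. (14) as a weighted sum of V e, V i, and V L = 0 (the exact expression is derived in Appendix F). Parameter values are those used in Sec. III.

```python
import numpy as np

# Parameter values from Sec. III: tau = 15 ms, V_e = 60 mV, V_i = -10 mV.
tau, V_e, V_i = 0.015, 60.0, -10.0

def efficacy_1(K, r, w):
    """Assumed first-order efficacy without synchrony:
    a_1 = K r tau (1 - exp(-w)); reduces to K r tau w for small w."""
    return K * r * tau * (1.0 - np.exp(-w))

def mean_voltage(a_e1, a_i1):
    """Assumed structure of Eq. (14): stationary mean as a weighted sum
    of the reversal potentials V_e, V_i, and V_L = 0."""
    return (a_e1 * V_e + a_i1 * V_i) / (1.0 + a_e1 + a_i1)

# Aggregate weights K_e w_e = K_i w_i = 1 with moderate weights,
# at the upper input rates r_e = r_i = 50 Hz considered in Fig. 7:
a_e1 = efficacy_1(K=1000, r=50.0, w=0.001)
a_i1 = efficacy_1(K=250, r=50.0, w=0.004)
print(round(mean_voltage(a_e1, a_i1), 2))  # close to 15 mV, within the 20 mV range
```

With these values the mean stays within the biophysical range ∆E [V ] ≤ 20 mV quoted in Sec. III, which is consistent with the stated choice of aggregate weights.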
To conclude, let us stress that for AONCB models, establishing the above exact expressions does not require any approximation other than taking the limit of instantaneous synapses. In particular, we neither resort to any diffusion approximation [37,38] nor invoke the effective-time-constant approximation [41-43]. We give in Appendix L an alternative factorized form for V [V ] to justify the nonnegativity of expression (16). In Fig. 6, we illustrate the excellent agreement of the analytically derived expressions (14) and (16) with numerical estimates obtained via Monte Carlo simulations of the AONCB dynamics for various input synchrony conditions. Discussing and interpreting quantitatively Eq. (14) and Eq. (16) within a biophysically relevant context will be the main focus of the remainder of this work.

III. COMPARISON WITH EXPERIMENTAL DATA A. Experimental measurements and parameter estimations
Cortical activity typically exhibits a high degree of variability in response to identical stimuli [68,69], with individual neuronal spiking exhibiting Poissonian characteristics [3,70]. Such variability is striking because neurons are thought to typically receive a large number (≃ 10^4) of synaptic contacts [13]. As a result, in the absence of correlations, neuronal variability should average out, leading to quasi-deterministic neuronal voltage dynamics [71]. To explain how variability seemingly defeats averaging in large neural networks, it has been proposed that neurons operate in a special regime, whereby inhibitory and excitatory drive nearly cancel one another [16,17,19-21]. In such balanced networks, the voltage fluctuations become the main determinant of the dynamics, yielding Poisson-like spiking activity [16,17,19-21]. However, depending upon the tightness of this balance, networks can exhibit distinct dynamical regimes with varying degrees of synchrony [18].
In the following, we will exploit the analytical framework of AONCB neurons to argue that the asynchronous picture predicts voltage fluctuations an order of magnitude smaller than experimental observations [1,26-28]. Such observations indicate that the neuronal membrane voltage exhibits typical variance values of ≃ 4–9 mV². We will then argue that achieving such variability requires input synchrony within the setting of AONCB neurons. Experimental estimates of spiking correlations are typically thought of as weak, with coefficients ranging from 0.01 to 0.04 [10-12]. Such weak values do not warrant neglecting correlations, owing to the typically high number of synaptic connections. Actually, if K denotes the number of inputs, all assumed to play exchangeable roles, an empirical criterion for deciding whether a correlation coefficient ρ is weak is that ρ < 1/K [32,33]. Assuming the lower estimate of ρ ≃ 0.01, this criterion is met for K ≃ 100 inputs, which is well below the typical number of excitatory synapses for cortical neurons. In the following, we will only consider the response of AONCB neurons to synchronous drive with biophysically realistic spiking correlations (0 ≤ ρ ≤ 0.03).
Two key parameters for our argument are the excitatory and inhibitory synaptic weights, denoted by w e and w i, respectively. Typical values for these weights can be estimated via biophysical considerations within the framework of AONCB neurons. To develop these considerations, we assume the values V i = −10 mV < V L = 0 < V e = 60 mV for the reversal potentials and τ = 15 ms for the passive membrane time constant. Given these assumptions, we set the upper range of excitatory synaptic weights so that, when delivered to a neuron close to its resting state, unitary excitatory inputs cause peak membrane fluctuations of ≃ 0.5 mV at the soma, attained after a peak time of ≃ 5 ms. Such fluctuations correspond to typically large in vivo synaptic activations of thalamocortical projections in rats [72]. Although activations of similar amplitude have been reported for cortico-cortical connections [73,74], recent large-scale in vivo studies have revealed that cortico-cortical excitatory connections are typically much weaker [75,76]. At the same time, these studies have shown that inhibitory synaptic conductances are about fourfold larger than excitatory ones, but with similar timescales. Fitting these values within the framework of AONCB neurons for ϵ = τ s /τ ≃ 1/4 reveals that the largest possible synaptic inputs correspond to dimensionless weights w e ≃ 0.01 and w i ≃ 0.04. Following [75,76], we consider that the comparatively moderate cortico-cortical recurrent connections are an order of magnitude weaker than typical thalamocortical projections, i.e., w e ≃ 0.001 and w i ≃ 0.004. Such a range is in keeping with estimates used in [38].

B. The effective-time-constant approximation holds in the asynchronous regime
Let us consider that neuronal inputs have zero (or negligible) correlation structure, which corresponds to assuming that all synapses are driven by independent Poisson processes. In particular, excitation and inhibition act independently. Within the framework of AONCB neurons, this latter assumption corresponds to choosing a joint jump distribution expressed in terms of the Dirac delta function δ(•), so that W e W i = 0 with probability one. Moreover, b e and b i are independently specified via Eq. (9), and the overall rate of synaptic events is purely additive: b = b e + b i. Consequently, the crosscorrelation efficacy c ei in Eq. (16) vanishes and the dimensionless efficacies simplify accordingly. Further assuming that individual excitatory and inhibitory synapses act independently leads to considering that p e and p i depict the sizes of individual synaptic inputs, as opposed to aggregate events. This corresponds to taking β e → ∞ and β i → ∞ in our parametric model based on beta distributions. Then, as intuition suggests, the overall rates of excitatory and inhibitory activation are recovered as b e = K e r e and b i = K i r i, where r e and r i are the individual spiking rates. Individual synaptic weights are small in the sense that w e, w i ≪ 1, which warrants neglecting exponential corrections in the evaluation of the synaptic efficacies, at least in the absence of synchrony-based correlations. Accordingly, we have a e,1 ≃ K e r e τ w e and a e,12 ≃ K e r e τ w e ²/2, as well as symmetric expressions for the inhibitory efficacies. Plugging these values into Eq.
(16) yields the classical mean-field estimate for the stationary variance, Eq. (20), which is exactly the same expression as that derived via the diffusion and effective-time-constant approximations in [46,47]. However, observe that the only approximation we made in obtaining this expression is to neglect exponential corrections due to the relative weakness of biophysically relevant synaptic weights, which we hereafter refer to as the small-weight approximation.
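The quality of the small-weight approximation can be checked directly: the relative error of replacing 1 − e −w by w grows roughly as w/2, and stays below a few percent across the biophysically relevant weight range quoted above.

```python
import numpy as np

# Relative error of the small-weight approximation 1 - exp(-w) ~ w,
# evaluated over the biophysically relevant weight range of Sec. III.
for w in (0.001, 0.004, 0.01, 0.04):
    exact = 1.0 - np.exp(-w)
    rel_err = (w - exact) / exact   # approximately w/2 for small w
    print(f"w = {w}: relative error = {100 * rel_err:.2f}%")
```

Even for the largest admissible inhibitory weight w i = 0.04, the error is only about 2%, which supports neglecting exponential corrections in the asynchronous regime.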

C. Asynchronous inputs yield exceedingly small neural variability
In Fig. 7, we represent the stationary mean E [V ] and variance V [V ] as functions of the input spiking rates r e and r i, for distinct values of the synaptic weights w e and w i. In Fig. 7a, we consider synaptic weights as large as biophysically admissible based on recent in vivo studies [75,76], i.e., w e = 0.01 and w i = 0.04. By contrast, in Fig. 7b, we consider moderate synaptic weights w e = 0.001 and w i = 0.004, which yield somatic postsynaptic deflections of typical amplitude. In both cases, we consider input numbers K e and K i such that the mean voltage E [V ] covers the same biophysical range of values as r e and r i vary between 0 Hz and 50 Hz. Given a zero resting potential, we set this biophysical range to be bounded by ∆E [V ] ≤ 20 mV, as typically observed experimentally in electrophysiological recordings. These conditions correspond to constant aggregate weights set to K e w e = K i w i = 1. This implies that the AONCB neurons under consideration do not reach the high-conductance regime, for which the passive conductance can be neglected, i.e., K e r e w e + K i r i w i ≫ 1/τ [77]. Away from the high-conductance regime, the variance magnitude is controlled by the denominator of Eq. (20). Accordingly, the variance in both cases depends primarily on the excitatory rate r e since, for K e w e = K i w i = 1, the effective excitatory driving force F e = K e w e ²(V e − E [V ])² dominates the effective inhibitory driving force F i = K i w i ²(V i − E [V ])².
This is because the neuronal voltage typically sits close to the inhibitory reversal potential but far from the excitatory reversal potential. For instance, when close to rest, E [V ] ≃ 0, the ratio of the effective driving forces is (K e w e ² V e ²)/(K i w i ² V i ²) ≃ 9 in favor of excitation. Importantly, the magnitude of the variance is distinct for moderate synapses and for large synapses. This is because for constant aggregate weights K e w e = K i w i = 1, the ratio of effective driving forces for large and moderate synapses scales in keeping with the ratio of the weights, and so does the ratio of variances away from the high-conductance regime. Thus, the variance decreases by one order of magnitude from large weights in Fig. 7a to moderate weights in Fig. 7b.
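The ninefold ratio quoted above follows from the constraint K e w e = K i w i = 1, under which K e w e ² V e ² reduces to w e V e ², as the short check below confirms:

```python
# Driving-force ratio at rest (E[V] ~ 0) under constant aggregate
# weights K_e w_e = K_i w_i = 1, so that K_e = 1/w_e and K_i = 1/w_i.
V_e, V_i = 60.0, -10.0
w_e, w_i = 0.001, 0.004  # moderate weights from Sec. III

F_e = (1.0 / w_e) * w_e**2 * V_e**2   # K_e w_e^2 V_e^2 = w_e V_e^2
F_i = (1.0 / w_i) * w_i**2 * V_i**2   # K_i w_i^2 V_i^2 = w_i V_i^2
print(round(F_e / F_i, 6))            # ninefold dominance of excitation
```

The same ratio holds for the large weights w e = 0.01 and w i = 0.04, since only the weight ratio and the squared reversal potentials enter.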
The above numerical analysis reveals that achieving realistic levels of subthreshold variability for a biophysical mean range of variation requires AONCB neurons to be exclusively driven by large synaptic weights. This is confirmed by considering the voltage mean E [V ] and variance V [V ] in Fig. 8 as functions of the number of inputs K e and of the synaptic weight w e for a given level of inhibition. We choose this level of inhibition to be set by K i = 250 moderate synapses (w i = 0.004) with r i = 20 Hz in Fig. 8a and by K i = 25 large synapses (w i = 0.04) with r i = 20 Hz in Fig. 8b. As expected, assuming that r e = 20 Hz in the absence of input correlations, the voltage mean E [V ] only depends on the product K e w e, which yields similar mean ranges of variation for K e varying up to 2000 in Fig. 8a and up to 200 in Fig. 8b. Thus, it is possible to achieve the same range of variation with a smaller number of larger synaptic weights as with many moderate ones. By contrast, the voltage variance V [V ] only achieves realistic levels for large synaptic weights in both conditions, with w e ≥ 0.015 for moderate inhibitory background synapses in Fig. 8a and w e ≥ 0.01 for large inhibitory background synapses in Fig. 8b.

D. Including input correlations yields realistic subthreshold variability
Without synchrony, achieving the experimentally observed variability necessitates an excitatory drive mediated via synaptic weights w e ≃ 0.01, which corresponds to the upper bound of the biophysically admissible range and is in agreement with numerical results presented in [38]. Albeit possible, this is unrealistic given the wide distribution of amplitudes observed experimentally, whereby the vast majority of synaptic events are small to moderate, at least for cortico-cortical connections [75,76]. In principle, one can remedy this issue by allowing for the synchronous activation of, say, k e = 10 synapses with moderate weight w e = 0.001, as this amounts to the activation of a single synapse with large weight k e w e = 0.01. A weaker assumption that yields a similar increase in neural variability is to only ask for synapses to tend to synchronize probabilistically, which amounts to requiring k e to be a random variable with some distribution mass on {k e > 1}. This exactly amounts to modeling the input drive via a jump process as presented in Section II, with a jump distribution p e that probabilistically captures this degree of input synchrony. In turn, this distribution p e corresponds to a precise input correlation ρ e via Eq. (8).
We quantify the impact of nonzero correlations in Fig. 9, where we consider the cases of moderate weights w e = 0.001 and w i = 0.004 and large weights w e = 0.01 and w i = 0.04, as in Fig. 7, but for ρ e = ρ i = 0.03. Specifically, we consider an AONCB neuron subjected to two independent beta-binomial-derived compound Poisson process drives with rates b e and b i, respectively. These rates b e and b i are obtained via Eq. (9) by setting β e = β i = 1/ρ e − 1 = 1/ρ i − 1 for given input numbers K e and K i and spiking rates r e and r i. This ensures that the mean numbers of synaptic activations b e E e [k e ] = K e r e and b i E i [k i ] = K i r i remain constant when compared with Fig. 7. As a result, the mean response of the AONCB neuron is essentially left unchanged by the presence of correlations, with a virtually identical biophysical range of variation ∆E [V ] ≃ 10–20 mV. This is because for correlations ρ e = ρ i ≃ 0.03, the aggregate weights still satisfy k e w e, k i w i < 1 with probability close to one given that K e w e = K i w i = 1. Then, in the absence of crosscorrelation, i.e., ρ ei = 0, the second-order efficacies still satisfy a relation in which we recognize K e r e τ w e ²/2 = a e,12 | ρ e =0 as the second-order efficacy in the absence of correlations from Fig.
7. A similar statement holds for a i,12. This shows that correlations increase neural variability whenever ρ e > 1/K e or ρ i > 1/K i, which coincides with our previously given criterion for assessing the relative weakness of correlations. Accordingly, when excitation and inhibition act independently, i.e., ρ ei = 0, we find that the increase in variability due to input synchrony, ∆ ρ e/i V [V ], admits a simple closed form. This relation follows from the fact that the small-weight approximation for E [V ] is independent of correlations and from neglecting the exponential corrections due to the nonzero size of the synaptic weights. This formula remains valid as long as the correlations ρ e and ρ i are weak enough that the aggregate weights satisfy k e w e, k i w i < 1 with probability close to one. To inspect the relevance of exponential corrections, we estimate in Appendix N the error incurred by neglecting them. Focusing on the case of excitatory inputs, we find that for correlation coefficients ρ e ≤ 0.05, neglecting exponential corrections incurs less than a 3% error if the number of inputs is smaller than K e = 1000 for the moderate synaptic weight w e = 0.001 or K e = 100 for the large synaptic weight w e = 0.01.
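The criterion ρ > 1/K can be illustrated with the beta-binomial count model underlying the jump distribution: for a directing measure F ∼ B(α, β) with ρ = 1/(1 + α + β), the synchronous count variance exceeds the independent (binomial) variance by the standard beta-binomial overdispersion factor 1 + (K − 1)ρ, which is close to one precisely when ρ ≪ 1/K. A Monte Carlo sketch with hypothetical parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)

K, rho, p = 100, 0.03, 0.2   # hypothetical: K inputs, correlation rho, activation prob. p
ab = 1.0 / rho - 1.0         # alpha + beta = 1/rho - 1
alpha, beta = p * ab, (1.0 - p) * ab

# Sample synchronous counts: draw F ~ Beta(alpha, beta) per event,
# then a count k ~ Binomial(K, F) of coactivating inputs.
F = rng.beta(alpha, beta, size=200_000)
k = rng.binomial(K, F)

var_binomial = K * p * (1 - p)                      # independent inputs
var_predicted = var_binomial * (1 + (K - 1) * rho)  # beta-binomial overdispersion
print(k.var() / var_binomial)                       # close to 1 + (K - 1) * rho
```

With K = 100 and ρ = 0.03 > 1/K, the count variance is inflated roughly fourfold relative to independent inputs, in line with the correlation-induced increase in the second-order efficacies.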

E. Including correlations between excitation and inhibition reduces subthreshold variability
The voltage variance estimated for realistic excitatory and inhibitory correlations, e.g., ρ e = ρ i = 0.03 and ρ ei = 0, exceeds the typical levels measured in vivo, i.e., 4–9 mV², for large synaptic weights. The inclusion of correlations between excitation and inhibition, i.e., ρ ei > 0, can reduce the voltage variance to more realistic levels. We confirm this point in Fig. 10, where we consider the cases of moderate weights w e = 0.001 and w i = 0.004 and large weights w e = 0.01 and w i = 0.04, as in Fig. 9, but for ρ e = ρ i = ρ ei = 0.03. Positive crosscorrelation between excitation and inhibition only marginally impacts the mean voltage response. This is due to the fact that exponential corrections become slightly more relevant, as the presence of crosscorrelation leads to larger aggregate weights W e + W i, with W e and W i possibly being jointly positive. By contrast with this marginal impact on the mean response, the voltage variance is significantly reduced when excitation and inhibition are correlated. This is in keeping with the intuition that the net effect of such crosscorrelation is to cancel excitatory and inhibitory synaptic inputs with one another before they can cause voltage fluctuations. The amount by which the voltage variance is reduced can be quantified in the small-weight approximation. In this approximation, we show in Appendix M that the efficacy c ei capturing the impact of crosscorrelations takes a simplified form. Using this simplified expression and invoking the fact that the small-weight approximation for E [V ] is independent of correlations shows that the variance decreases by a corresponding amount. Despite this reduction in variance, we also show in Appendix M that positive input correlations always cause an overall increase of neural variability. Note that the reduction of variability due to ρ ei > 0 crucially depends on the instantaneous nature of correlations between excitation and inhibition. To see this, observe that the Marcus rule Eq.
(13) specifies instantaneous jumps via a weighted average of the reversal potentials V e and V i, which represent extreme values for voltage updates. Thus, perfectly synchronous excitation and inhibition update the voltage toward an intermediary value rather than extreme ones, leading to smaller jumps on average. Such an effect can vanish or even reverse when synchrony breaks down, e.g., when inhibition substantially lags behind excitation.

F. Asynchronous scaling limits require fixed-size synaptic weights

Our analysis reveals that correlations must significantly impact the voltage variability whenever the numbers of inputs are such that K e > 1/ρ e or K i > 1/ρ i. Spiking correlations are typically measured in vivo to be larger than 0.01. Therefore, synchrony must shape the response of neurons that are driven by more than 100 active inputs, which is presumably allowed by the typically high number of synaptic contacts (≃ 10^4) in cortex [13].
In practice, we find that synchrony can explain the relatively high level of neural variability observed in subthreshold neuronal responses. Beyond these practical findings, we predict that input synchrony also has significant theoretical implications with respect to modeling spiking networks. Analytically tractable models for cortical activity are generally obtained by considering spiking networks in the infinite-size limit. Such infinite-size networks are tractable because the neurons they comprise only interact via population averages, erasing any role for nonzero correlation structure. Distinct mean-field models assume that synaptic weights vanish according to distinct scalings with respect to the number of synapses, i.e., w e/i → 0 as K e/i → ∞. In particular, classical mean-field limits consider the scaling w e/i ∼ 1/K e/i, balanced mean-field limits consider the scaling w e/i ∼ 1/√K e/i, with K e w e − K i w i = O(1), and strong-coupling limits consider the scaling w e/i ∼ 1/ ln K e/i, with K e w e − K i w i = O(1) as well. Importantly, all these mean-field limits assume no correlations and, in particular, no synchrony.
Our analysis of AONCB neurons shows that the neglect of synchrony-based correlations is incompatible with the maintenance of neural variability in the infinite-size limit. Indeed, Eq. (20) shows that for any scaling with 1/w e = o(K e ) and 1/w i = o(K i ), as for all the mean-field limits mentioned above, we have V [V ] → 0. Thus, in the absence of correlations and independent of the synaptic weight scaling, the subthreshold voltage variance of AONCB neurons must vanish in the limit of arbitrarily large numbers of synapses. We expect such decay of the voltage variability to be characteristic of conductance-based models in the absence of input correlations. Indeed, dimensional analysis suggests that the voltage variance for both current-based and conductance-based models is generically obtained via normalization by the reciprocal of the membrane time constant. However, by contrast with current-based models, the reciprocal of the membrane time constant for conductance-based models, i.e., 1/τ + K e w e r e + K i w i r i, involves contributions from synaptic conductances. Thus, to ensure nonzero asymptotic variability, the denominator scaling O(K e w e ) + O(K i w i ) must be balanced by the natural scaling of the Poissonian input drives, i.e., O(K e w e ²) + O(K i w i ²). In the absence of input correlations, this is only possible for fixed-size weights, which is incompatible with any scaling assumptions.
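The vanishing of the variance can be seen from the two scalings just described: holding the aggregate drive K w fixed, the Poissonian numerator scale O(K w²) reduces to O(w) and decays, while the conductance loading O(K w) stays fixed. A minimal numerical sketch of this bookkeeping:

```python
# Holding the aggregate weight K * w = 1 fixed while w -> 0 (so K -> infinity):
# the numerator term K w^2 = w vanishes, while the denominator term K w = 1
# is unchanged, so the variance scale decays linearly with w.
ratios = []
for w in (1e-2, 1e-3, 1e-4, 1e-5):
    K = 1.0 / w
    numerator = K * w**2    # variance scale of the Poissonian input drive
    denominator = K * w     # conductance loading of the effective time constant
    ratios.append(numerator / denominator)
print(ratios)  # decays linearly with w
```

Any scaling with 1/w = o(K) behaves the same way, which is why no balanced-state assumption can rescue the variance without correlations.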

G. Synchrony allows for variability-preserving scaling limits with vanishing weights
Infinite-size networks with fixed-size synaptic weights are problematic for restricting modeled neurons to operate in the high-conductance regime, whereby the intrinsic conductance properties of the cell play no role. Such a regime is biophysically unrealistic, as it implies that the cell would respond to perturbations infinitely fast. We propose to address this issue by considering a new type of variability-preserving limit model obtained for the classical scaling but in the presence of synchrony-based correlations. For simplicity, let us consider our correlated input model with excitation alone in the limit of an arbitrarily large number of inputs K e → ∞. When ρ e > 0, the small-weight approximation Eq. (20) suggests that adopting the scaling w e ∼ Ω e /K e, where Ω e denotes the aggregate synaptic weight, yields a nonzero contribution when K e → ∞. In this limit, the jump distribution converges toward a density that has infinite mass over [0, Ω e ], owing to its diverging behavior at zero, and is referred to as a degenerate beta distribution. In spite of its degenerate nature, it is known that densities of this form define well-posed processes, the so-called beta processes, which have been studied extensively in the field of nonparametric Bayesian inference [61,62]. These beta processes represent generalizations of our compound Poisson process drives insofar as they allow for a countable infinity of jumps to occur within a finite time window. This is a natural requirement to impose when considering an infinite pool of synchronous synaptic inputs, the overwhelming majority of which have nearly zero amplitude.
The above arguments show that one can define a generalized class of synchronous input models that can serve as drives for AONCB neurons as well. Such generalizations are obtained as limits of compound Poisson processes and are specified via their Lévy-Khintchine measures, which formalize the role of ν e [78,79]. Our results naturally extend to this generalized class. Concretely, for excitation alone, our results extend by replacing all expectations of the form b e E e [•] by integrals with respect to the measure ν e. One can easily check that these expectations, which feature prominently in the definitions of the various synaptic efficacies, all remain finite for Lévy-Khintchine measures. In particular, the voltage mean and variance of AONCB neurons remain finite. Thus, considering the classical scaling limit w e ∝ 1/K e preserves nonzero subthreshold variability in the infinite-size limit K e → ∞ as long as ν e puts mass away from zero, i.e., for β e < ∞ ⇔ ρ e > 0. Furthermore, we show in Appendix O that V [V ] = O(ρ e ), so that voltage variability consistently vanishes in the absence of spiking correlations, for which ν e concentrates at zero, i.e., when β e → ∞ ⇔ ρ e = 0.
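The finiteness of the efficacies under the degenerate limit can be checked numerically. Assuming, for illustration, a Lévy-Khintchine density behaving like ν e (w) ∝ 1/w near zero (the diverging behavior of the degenerate beta distribution described above), the total mass ∫ dν e diverges while efficacy-type integrals ∫ (1 − e −w ) dν e (w) remain finite, because 1 − e −w ≃ w tames the 1/w singularity:

```python
import numpy as np

def integrate(f, a, b, n=1_000_000):
    """Simple midpoint-rule quadrature on [a, b]."""
    x = np.linspace(a, b, n, endpoint=False) + (b - a) / (2 * n)
    return f(x).sum() * (b - a) / n

# Illustrative density with the assumed 1/w divergence at zero on (0, 1].
nu = lambda w: 1.0 / w

for eps in (1e-2, 1e-4, 1e-6):
    mass = integrate(nu, eps, 1.0)                                      # diverges as eps -> 0
    efficacy = integrate(lambda w: (1 - np.exp(-w)) * nu(w), eps, 1.0)  # stays finite
    print(f"eps = {eps}: mass = {mass:.1f}, efficacy integral = {efficacy:.4f}")
```

The efficacy integral converges to a fixed value as the cutoff shrinks, while the total mass grows without bound, mirroring the countable infinity of mostly tiny jumps allowed by beta processes.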

A. Synchrony modeling
We have presented a parametric representation of the neuronal drives resulting from a finite number of asynchronous or (weakly) synchronous synaptic inputs. Several parametric statistical models have been proposed for generating correlated spiking activity in a discrete setting [59,80-82]. Such models have been used to analyze the activity of neural populations via Bayesian inference methods [83-85], as well as maximum entropy methods [86,87]. Our approach is not to simulate or analyze complex neural dependencies but rather to derive from first principles the synchronous input models that could drive conductance-based neuronal models. This approach primarily relies on extending the definition of discrete-time correlated spiking models akin to [59] to the continuous-time setting. To do so, the main tenet of our approach is to realize that input synchrony and spiking correlation represent equivalent measures under the assumption of input exchangeability.
Input exchangeability posits that the driving inputs form a subset of an arbitrarily large pool of exchangeable random variables [55,56]. In particular, this implies that the main determinant of the neuronal drive is the number of active inputs, as opposed to the magnitude of these synaptic inputs. Then, de Finetti's theorem [57] states that the probability of observing a given input configuration can be represented in the discrete setting under an integral form (see Eq. (3)) involving a directing probability measure F. Intuitively, F represents the probability distribution of the fraction of coactivating inputs at any discrete time. Our approach identifies the directing measure F as a free parameter that captures input synchrony. The more dispersed the distribution F, the more synchronous the inputs, as previously noted in [88,89]. Our work elaborates on this observation to develop computationally tractable statistical models for synchronous spiking in the continuous-time limit, i.e., for vanishing discrete time step ∆t → 0 +. We derive our results using a discrete-time directing measure chosen as a beta distribution F ∼ B(α, β), where the parameters α and β can be related to the individual spiking rate r and the spiking correlation ρ via r∆t = α/(α + β) and ρ = 1/(1 + α + β). For this specific choice of distribution, we are able to construct statistical models of the correlated spiking activity as generalized beta-binomial processes [60], which play an important role in statistical Bayesian inference [61,62]. This construction allows us to fully parametrize the synchronous activity of a finite number of inputs via the jump distribution of a compound Poisson process, which depends explicitly on the spiking correlation. Being continuously indexed in time, stationary compound Poisson processes can naturally serve as drives to biophysically relevant neuronal models. The idea of utilizing compound Poisson processes to model input synchrony was originally proposed in
[90–92], but without constructing these processes as limits of discrete spiking models and without providing explicit functional forms for their jump distributions. More generally, our synchrony modeling can be interpreted as a limit case of the formalism proposed in [93, 94] to model correlated spiking activity via multidimensional Poisson processes.
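The discrete-time construction above can be illustrated with a minimal sampler (a sketch with illustrative parameters; the variable names are ours): in each time bin, a shared coactivation probability θ ~ Beta(α, β) is drawn, and each of the K inputs spikes independently with probability θ, which reproduces the announced relations r∆t = α/(α + β) and ρ = 1/(1 + α + β).

```python
import random

# Discrete-time correlated spiking model: in each bin, a shared probability
# theta ~ Beta(alpha, beta) is drawn, and each of the K inputs spikes
# independently with probability theta (conditional independence).
alpha, beta_, K, n_bins = 0.1, 9.9, 50, 20000
r_dt = alpha / (alpha + beta_)       # predicted per-bin spiking probability
rho = 1.0 / (1.0 + alpha + beta_)    # predicted pairwise spiking correlation

rng = random.Random(0)
n_spikes = 0
pair_acc = 0.0                       # pooled estimator of E[X1 * X2]
for _ in range(n_bins):
    theta = rng.betavariate(alpha, beta_)
    k = sum(1 for _ in range(K) if rng.random() < theta)
    n_spikes += k
    pair_acc += k * (k - 1) / (K * (K - 1))

p_hat = n_spikes / (K * n_bins)      # estimates r * dt
rho_hat = (pair_acc / n_bins - p_hat ** 2) / (p_hat * (1.0 - p_hat))
```

The empirical rate and pairwise correlation recovered from the raster match the two analytic relations, confirming that the dispersion of the directing measure controls synchrony.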

B. Moment analysis
We analytically characterize the subthreshold variability of a tractable conductance-based neuronal model, the AONCB neuron, when driven by synchronous synaptic inputs. The analytical characterization of a neuron's voltage fluctuations has been the focus of intense research [46, 47, 95–97]. These attempts have considered neuronal models that already incorporate some diffusion scaling hypotheses [98, 99], formally obtained by assuming an infinite number of synaptic inputs. The primary benefit of these diffusion approximations is that one can treat the corresponding Fokker-Planck equations to quantify neuronal variability in conductance-based integrate-and-fire models, while also including the effect of post-spiking reset [37, 38]. In practice, subthreshold variability is often estimated in the effective-time-constant approximation, while neglecting the multiplicative noise contributions due to voltage-dependent membrane fluctuations [46, 95, 96], although an exact treatment is also possible without this simplifying assumption [38]. By contrast, the analysis of conductance-based models has resisted exact treatments when driven by shot noise, as for compound Poisson input processes, rather than by Gaussian white noise, as in the diffusion approximation [41–43].
The exact treatment of shot-noise-driven neuronal dynamics is primarily hindered by the limitations of the Itô/Stratonovich integrals [65, 100] in capturing the effects of point-process-based noise sources, even without including a reset mechanism. These limitations were originally identified by Marcus, who proposed to approach the problem via a new type of stochastic equation [44, 45]. The key to the Marcus equation is to define shot-noise dynamics as the limit of regularized, well-behaved approximations of that shot noise, for which classical calculus applies [66]. In practice, these approximations are canonically obtained as the solutions of shot-noise-driven Langevin equations with relaxation time scale τ_s, and shot noise is formally recovered in the limit τ_s → 0+. Our assertion here is that all-or-none conductances implement such a form of shot-noise regularization, for which a natural limiting process can be defined when synapses operate instantaneously, i.e., τ_s → 0+. The main difference with the canonical Marcus approach is that our regularization is all-or-none, substituting each Dirac delta impulse with a finite step-like impulse of duration τ_s and magnitude 1/τ_s, thereby introducing a synaptic timescale but without any relaxation mechanism. The above assertion is the basis for introducing AONCB neurons, and it is supported by our ability to obtain exact formulas for the first two moments of their stationary voltage dynamics (see Eq. (14) and Eq. (16)). For τ_s > 0, these moments can be expressed in terms of synaptic efficacies that take exact but rather intricate integral forms. Fortunately, these efficacies drastically simplify in the instantaneous synapse limit τ_s → 0+, for which the canonical shot-noise drive is recovered. The resulting formulas mirror those obtained in the diffusion and effective-time-constant approximations [46, 47], except that they involve synaptic efficacies whose expressions are original in three ways (see Eq. (15), Eq. (G4), Eq.
(G7), and Eq. (G8)): First, independent of input synchrony, these efficacies all have exponential forms and saturate in the limit of large synaptic weights. Such saturation is a general characteristic of shot-noise-driven, continuously relaxing systems [101–103]. Second, these efficacies are defined as expectations with respect to the jump distribution p_ei of the driving compound Poisson process (see Eq. (11) and Appendix B). A nonzero dispersion of p_ei, indicating that synaptic activation is truly modeled via random variables W_e and W_i, is the hallmark of input synchrony [91, 92]. Third, these efficacies involve the overall rate of synaptic events b (see Eq. (12)), which also depends on input synchrony. Such dependence can be naturally understood within the framework of Palm calculus [104], a form of calculus specially developed for stationary point processes.
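The exponential saturation of the efficacies can be illustrated with a minimal sketch based on the Marcus-type update discussed in Appendix E, assuming excitation alone and a resting initial condition (values are illustrative):

```python
import math

V_e = 60.0  # excitatory reversal potential relative to leak (mV)

def jump_from_rest(w):
    # Voltage increment caused by a single excitatory all-or-none event of
    # aggregate weight w, starting from the leak potential V = 0, under the
    # Marcus-type update V -> V * exp(-w) + V_e * (1 - exp(-w)).
    return V_e * (1.0 - math.exp(-w))

small = jump_from_rest(0.01)  # near-linear regime: close to V_e * 0.01
large = jump_from_rest(10.0)  # saturated regime: close to V_e
```

For small weights the jump is linear in w, whereas for large weights it saturates at the reversal potential, which is the exponential saturation referred to above.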

C. Biophysical relevance
Our analysis allows us to investigate quantitatively how subthreshold variability depends on the numbers and strengths of the synaptic contacts. This approach requires that we infer synaptic weights from the typical peak time and peak amplitude of the somatic membrane fluctuations caused by post-synaptic potentials [72, 75, 76]. Within our modeling framework, these weights are dimensionless quantities that we estimate by fitting the AONCB neuronal response to a single all-or-none synaptic activation at rest. For biophysically relevant parameters, this yields typically small synaptic weights, in the sense that w_e, w_i ≪ 1. These small values warrant adopting the small-weight approximation, for which the expressions Eq. (14) and Eq. (16) simplify.
In the small-weight approximation, the mean voltage becomes independent of input synchrony, whereas the simplified voltage variance Eq. (20) only depends on input synchrony via the spiking correlation coefficients ρ_e, ρ_i, and ρ_ei, as opposed to depending on a full jump distribution. Spike-count correlations have been experimentally shown to be weak in cortical circuits [10–12], and for this reason most theoretical approaches have argued for asynchronous activity [17, 105–109]. A putative role for synchrony in neural computations remains a matter of debate [110–112]. In modeled networks, although the tightly balanced regime implies asynchronous activity [19–21], the loosely balanced regime is compatible with the establishment of strong neuronal correlations [22–24]. When distributed over large networks, weak correlations can still give rise to precise synchrony, once information is pooled from a large enough number of synaptic inputs [32, 33]. In this view, and assuming that distinct inputs play comparable roles, correlations measure the propensity of distinct synaptic inputs impinging on a neuron to coactivate, which represents a clear form of synchrony. Our analysis shows that considering synchrony in amounts consistent with the levels of observed spiking correlation is enough to account for the surprisingly large magnitude of subthreshold neuronal variability [1, 26–28]. In contrast, the asynchronous regime yields unrealistically low variability, an observation that challenges the basis for the asynchronous state hypothesis.
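The contrast between the asynchronous and weakly synchronous regimes can be probed with an event-driven Monte Carlo sketch of the excitation-only dynamics, assuming the Marcus-type update of Eq. (13) and the beta-binomial limit jump distribution of Appendix B (our reconstruction, p_{e,k} ∝ C(K, k) (k−1)! Γ(β+K−k)/Γ(β+K); all parameter values are illustrative):

```python
import bisect, math, random

tau, V_e = 0.020, 60.0     # membrane time constant (s), excitatory reversal (mV)
K, r, w = 200, 10.0, 0.01  # inputs, rate per input (Hz), synaptic weight

def stationary_moments(rho, T, rng):
    beta_ = 1.0 / rho - 1.0
    # Rate of synaptic events (digamma difference as a harmonic sum).
    b = r * beta_ * sum(1.0 / (beta_ + j) for j in range(K))
    # Limit jump distribution over k coactive inputs (log-space, normalized).
    logp = [math.lgamma(K + 1) - math.lgamma(k + 1) - math.lgamma(K - k + 1)
            + math.lgamma(k) + math.lgamma(beta_ + K - k) - math.lgamma(beta_ + K)
            for k in range(1, K + 1)]
    m = max(logp)
    p = [math.exp(x - m) for x in logp]
    z = sum(p)
    cdf, acc = [], 0.0
    for x in p:
        acc += x / z
        cdf.append(acc)
    t = V = s1 = s2 = 0.0
    while t < T:
        dt = rng.expovariate(b)
        d = math.exp(-dt / tau)
        s1 += tau * V * (1.0 - d)                # exact integral of V over the interval
        s2 += 0.5 * tau * V * V * (1.0 - d * d)  # exact integral of V^2
        V *= d
        idx = min(bisect.bisect(cdf, rng.random()), K - 1)
        W = w * (idx + 1)
        V = V * math.exp(-W) + V_e * (1.0 - math.exp(-W))  # Marcus-type jump
        t += dt
    mean = s1 / t
    return mean, s2 / t - mean * mean

rng = random.Random(0)
_, var_async = stationary_moments(1e-6, 200.0, rng)  # rho ~ 0: asynchronous
_, var_sync = stationary_moments(0.03, 200.0, rng)   # weak synchrony
```

With these illustrative numbers, weak synchrony (ρ_e = 0.03) yields a several-fold larger voltage variance than the asynchronous drive, in line with the small-weight multiplier 1 + ρ_e(K_e − 1).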
Recent theoretical works [37, 38] have also noted that the asynchronous state hypothesis seems at odds with certain features of cortical activity, such as the emergence of spontaneous activity or the maintenance of significant average polarization during evoked activity. Zerlaut et al. have analyzed under which conditions conductance-based networks can achieve a spectrum of asynchronous states with realistic neural features. In their work, a key variable to achieve this spectrum is a strong afferent drive that modulates a balanced network with moderate recurrent connections. Moderate recurrent conductances are inferred from allowing for up to 2 mV somatic deflections at rest, whereas the afferent drive is provided via even stronger synaptic conductances that can activate synchronously. These inferred conductances appear large in light of recent in vivo measurements [72, 75, 76], and the corresponding synaptic weights all satisfy w_e, w_i ≥ 0.01 within our framework. Correspondingly, the typical connectivity numbers considered are small, with K_e = 200, K_i = 50 for recurrent connections and K_e = 10 for the coactivating afferent projections. Thus, results from [37] appear consistent with our observation that realistic subthreshold variability can only be achieved asynchronously for a restricted number of large synaptic weights. Our findings, however, predict that these results follow from connectivity sparseness and will not hold in denser networks, for which the pairwise spiking correlation will exceed the empirical criteria for asynchrony, e.g., ρ_e > 1/K_e (ρ_e < 0.005 ≤ 1/K_e in [37]). Sanzeni et al. have pointed out that implementing the effective-time-constant approximation in conductance-based models suppresses subthreshold variability, especially in the high-conductance state [77]. As mentioned here, this suppression causes the voltage variability to decay as O(w_e) + O(w_i) in any scaling limit with vanishing synaptic weights. Sanzeni et al.
observe that such decay is too fast to yield realistic variability for the balanced scaling, which assumes w_e ∼ 1/√K_e and w_i ∼ 1/√K_i. To remedy this point, these authors propose to adopt a slower scaling of the weights, i.e., w_e ∼ 1/ln K_e and w_i ∼ 1/ln K_i, which can be derived from the principle of rate conservation in neural networks. Such a scaling is sufficiently slow for variability to persist in networks with large connectivity numbers (≃ 10^5). However, as with any scaling with vanishing weights, our exact analysis shows that such a scaling must eventually lead to decaying variability, thereby challenging the basis for the asynchronous state hypothesis.
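The eventual decay under both scalings can be illustrated with the textbook effective-time-constant estimate Var[V] ≈ (τ_eff/2) K r w² (V_e − μ)², with g = K r w τ, μ = V_e g/(1 + g), and τ_eff = τ/(1 + g); this is a stand-in for the exact expressions Eq. (14) and Eq. (16), evaluated with illustrative parameters:

```python
import math

tau, V_e, r = 0.020, 60.0, 10.0  # membrane constant (s), reversal (mV), rate (Hz)

def var_estimate(K, w):
    # Effective-time-constant / Campbell estimate of the voltage variance.
    g = K * r * w * tau            # mean dimensionless conductance
    mu = V_e * g / (1.0 + g)       # stationary mean voltage
    tau_eff = tau / (1.0 + g)      # effective membrane time constant
    return 0.5 * K * r * w * w * (V_e - mu) ** 2 * tau_eff

# Both scalings normalized so that w = 0.01 at K = 1000.
w_sqrt = lambda K: 0.01 * math.sqrt(1000.0 / K)          # balanced: w ~ 1/sqrt(K)
w_log = lambda K: 0.01 * math.log(1000.0) / math.log(K)  # rate-conserving: w ~ 1/ln K

Ks = (10**3, 10**4, 10**5)
v_sqrt = [var_estimate(K, w_sqrt(K)) for K in Ks]
v_log = [var_estimate(K, w_log(K)) for K in Ks]
```

Under both scalings, the estimated variance decreases monotonically as K grows, consistent with the statement that any scaling with vanishing weights eventually suppresses subthreshold variability.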
Both of these studies focus on the network dynamics of conductance-based networks under the diffusion and effective-time-constant approximations. The effective-time-constant approximation follows from the prior assumption that the diffusion approximation is valid [46, 47, 95–97]. In turn, diffusive behaviors only rigorously emerge under some scaling limit with vanishing weights [98, 99]. By focusing on the single-cell level rather than the network level, we are able to demonstrate that the effective-time-constant approximation holds exactly for shot-noise-driven, conductance-based neurons, without any diffusion approximation. Consequently, suppression of variability must occur independently of any scaling choice, except in the presence of input synchrony. Although this observation poses a serious theoretical challenge to the asynchronous state hypothesis, observe that it does not invalidate the practical usefulness of the diffusion approximation. For instance, we show in Fig. 11 that the mean spiking response of a shot-noise-driven AONCB neuron with an integrate-and-fire mechanism can be satisfactorily captured via the diffusion approximation. In addition, our analysis allows one to extend the diffusion approximation to include input synchrony.

D. Limitations of the approach
A first limitation of our analysis is that we neglect the spike-generating mechanism as a source of neural variability. Most diffusion-based approaches model spike generation via the integrate-and-fire mechanism, whereby the membrane voltage resets to a fixed value upon reaching a spike-initiation threshold [37, 38, 46, 47, 95–97]. Accounting for such a mechanism can impact our findings in two ways: (i) By confining the voltage below the spiking threshold, the spiking mechanism may suppress the mean response enough for the neuron to operate well in the high-conductance regime for large input drives. Such a scenario will still produce exceedingly low variability due to variability quenching in the high-conductance regime, consistent with [1]. (ii) The additional variability due to post-spiking resets may dominate the synaptic variability, so that a large overall subthreshold variability can be achieved in spite of low synaptic variability. This possibility also seems unlikely, as dominant yet stereotypical resets would imply a quasi-deterministic neural response [71]. Addressing the above limitations quantitatively requires extending our exact analysis to include the integrate-and-fire mechanism using techniques from queueing theory [104]. This is beyond the scope of this work. We note, however, that implementing a post-spiking reset to a fixed voltage level yields simulated trajectories that markedly differ from physiological ones (see Fig. 1), for which the post-spiking voltage varies across conditions [26–28].
A second limitation of our analysis is our assumption of exchangeability, which is the lens through which we establish a link between spiking correlations and input drives. Taken literally, the exchangeability assumption states that synapses all have a typical strength and that conductance variability primarily stems from the variable numbers of coactivating synapses. This is certainly an oversimplification, as synapses exhibit heterogeneity [113], which likely plays a role in shaping neural variability [114]. Distinguishing between heterogeneity and correlation contributions, however, is a fundamentally ambiguous task [115]. For instance, considering K_e synchronous inputs with weight w_e at rate b_e and with jump probabilities p_{e,k} (see Eq. (5) and Eq. (9)) is indistinguishable from considering K_e independent inputs with heterogeneous weights {w_e, 2w_e, . . ., K_e w_e} and rates K_e r_e p_{e,k}. Within our modeling approach, accounting for synaptic heterogeneity, with a dispersed distribution q_e(w) for the synaptic weights, can be done by taking the jump distribution p_e as p_e(w) = Σ_{k=1}^{K_e} q_e^{(⋆k)}(w) p_{e,k}, where q_e^{(⋆k)} refers to the k-fold convolution of q_e(w). This leads to an overdispersion of the jump distribution p_e, and thus to increased subthreshold neural variability. Therefore, while we have assumed exchangeability, our approach can accommodate weight heterogeneity. The interpretation of our results in terms of synchrony rather than heterogeneity is supported by recent experimental evidence that cortical response selectivity derives from strength in numbers of synapses, rather than from differences in synaptic weights [116].
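The heterogeneity construction p_e(w) = Σ_k p_{e,k} q_e^{(⋆k)}(w) can be sketched with toy discrete distributions (all values illustrative); by the law of total variance, the heterogeneous mixture is overdispersed relative to the homogeneous case with the same mean weight:

```python
# Toy illustration: mixing k-fold convolutions of a weight distribution q_e
# with the jump probabilities p_{e,k} overdisperses the jump distribution.
q = {1.0: 0.7, 2.0: 0.3}        # illustrative weight distribution q_e
p_k = {1: 0.6, 2: 0.3, 3: 0.1}  # illustrative jump probabilities p_{e,k}

def convolve(a, b):
    # Convolution of two finitely supported weight distributions.
    out = {}
    for wa, pa in a.items():
        for wb, pb in b.items():
            out[wa + wb] = out.get(wa + wb, 0.0) + pa * pb
    return out

# Heterogeneous jump distribution p_e(w) = sum_k p_{e,k} q^{(*k)}(w).
p_het, conv = {}, {0.0: 1.0}
for k in sorted(p_k):
    conv = convolve(conv, q)
    for wv, pv in conv.items():
        p_het[wv] = p_het.get(wv, 0.0) + p_k[k] * pv

def moments(dist):
    m1 = sum(wv * pv for wv, pv in dist.items())
    m2 = sum(wv * wv * pv for wv, pv in dist.items())
    return m1, m2 - m1 * m1

mean_w = sum(wv * pv for wv, pv in q.items())
# Homogeneous reference: same p_{e,k}, every synapse carrying weight mean_w.
p_hom = {k * mean_w: pk for k, pk in p_k.items()}
m_het, v_het = moments(p_het)
m_hom, v_hom = moments(p_hom)
```

The two jump distributions share the same mean, but the heterogeneous one has a strictly larger variance, which is the announced overdispersion.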
A third limitation of our analysis is to consider a perfect form of synchrony, with exactly simultaneous synaptic activations. Although seemingly unrealistic, we argue that perfect input synchrony can still yield biologically relevant estimates of the voltage variability. This is because voltage fluctuations result from the integration of inputs over a time scale set by the passive membrane time constant τ ∼ 20 ms. As a result, synaptic activation times that differ by significantly less than τ can be considered as synchronous inputs. To illustrate this point, we show in Fig. 12 the dependence of the voltage variance on the degree of synchrony by gradually jittering initially synchronous synaptic inputs. Assuming K_e = 1000 excitatory inputs alone with spiking correlation ρ_e = 0.03, one can check that the neural variability is left unchanged by jittering synaptic activations over time scales σ ≤ 2 ms. One can also check that jittering over timescales larger than the synaptic time constant yields neural variability similar to that obtained in the absence of correlation in the inputs. This supports that our findings are robust to including temporal variability on timescales σ ≤ 2 ms, which is consistent with typical heterogeneities in axonal or dendritic conduction delays. Extending our analytic results to include jittering will require modeling spiking correlations via multidimensional Poisson processes rather than via compound Poisson processes [93, 94]. However, this is beyond the scope of this work. Here, we point out that weakened forms of synchrony will yield lower variability, so that our challenge to the asynchronous state will remain. One remaining limitation of our synchrony modeling is that our analysis can only account for instantaneous correlations between excitation and inhibition, while in reality such correlations are expected to peak at a nonzero time lag.
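The robustness to jittering can be checked with a direct Monte Carlo sketch (illustrative parameters, smaller than those of Fig. 12): synchronous events are jittered by Gaussian delays of standard deviation σ, binned, and filtered with an exponential kernel of time scale τ = 20 ms, and the variance of the filtered trace is compared across jitter levels.

```python
import math, random

tau, rate, K = 0.020, 10.0, 100  # filter (s), event rate (Hz), inputs per event
T, dt = 200.0, 0.001             # simulated duration (s) and grid step (s)

def filtered_variance(sigma, rng):
    # Jitter each of the K spikes of every synchronous event by N(0, sigma),
    # bin the spikes, and filter with an exponential kernel of time scale tau.
    n = int(T / dt)
    counts = [0.0] * n
    t = rng.expovariate(rate)
    while t < T:
        for _ in range(K):
            i = int((t + rng.gauss(0.0, sigma)) / dt)
            if 0 <= i < n:
                counts[i] += 1.0
        t += rng.expovariate(rate)
    decay = math.exp(-dt / tau)
    s = acc = acc2 = 0.0
    for c in counts:
        s = s * decay + c   # recursive exponential filter
        acc += s
        acc2 += s * s
    return acc2 / n - (acc / n) ** 2

v_exact = filtered_variance(0.000, random.Random(1))  # perfect synchrony
v_small = filtered_variance(0.002, random.Random(2))  # jitter << tau
v_large = filtered_variance(0.100, random.Random(3))  # jitter >> tau
```

Jittering on a 2 ms scale leaves the filtered variance nearly unchanged, whereas jittering on a scale much larger than τ destroys most of the synchrony-driven variance, as in Fig. 12.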
A fourth limitation of our analysis is that it is restricted to a form of synchrony that ignores temporal heterogeneity. This is a limitation because a leading hypothesis for the emergence of variability is that neurons generate spikes as if through a doubly stochastic process, i.e., as a Poisson process with a temporally fluctuating rate [117]. To better understand this limitation, let us interpret our exchangeability-based modeling approach within the framework of doubly stochastic processes [51, 52]. This can be done most conveniently by reasoning on the discrete correlated spiking model specified by Eq. (3). Specifically, given a fixed bin size ∆t > 0, one can interpret the collection of i.i.d. variables θ ∼ F as an instantaneously fluctuating rate. In this interpretation, nonzero correlations can be seen as emerging from a doubly stochastic process for which the rate fluctuates as uncorrelated noise, i.e., with zero correlation time. This zero correlation time is potentially a serious limitation, as it has been argued that shared variability is best modeled by a low-dimensional latent process evolving with slow, or even smooth, dynamics [82]. Addressing this limitation will require developing limit spiking models with nonzero correlation time using probabilistic techniques that are beyond the scope of this work [56].
A final limitation of our analysis is that it does not explain the consistent emergence of synchrony in network dynamics. It remains conceptually unclear how synchrony can emerge and persist in neural networks that are fundamentally plagued by noise and exhibit large degrees of temporal and cellular heterogeneity. It may well be that carefully taking into account the finite size of networks will be enough to produce the desired level of synchrony-based correlation, which is rather weak after all. Still, one would have to check whether achieving a given degree of synchrony requires the tuning of certain network features, such as the degree of shared input, the propensity of certain recurrent motifs [118], or the relative width of recurrent connections with respect to feedforward projections [119]. From a theoretical standpoint, the asynchronous state hypothesis answers the consistency problem by assuming no spiking correlations, and thus no synchrony. One can justify this assumption in idealized mathematical models by demonstrating the so-called "propagation-of-chaos" property [120], which rigorously holds for certain scaling limits with vanishing weights and under the assumption of exchangeability [107–109]. In this light, the main theoretical challenge posed by our analysis is extending the latter exchangeability-based property to include nonzero correlations [121], and hopefully characterize irregular synchronous states in some scaling limits.

Appendix A: Spiking correlations of exchangeable inputs

We first consider K_e excitatory inputs modeled in discrete time as {0, 1}-valued random variables. We assume the spiking activity to be stationary across time bins, so that we can drop the time index and consider the population vector {X_k}_{1≤k≤K_e}. Consequently, given the individual spiking rate r_e, we have E[X_k] = P[X_k = 1] = r_e∆t, where ∆t is the duration of the time step over which a spike may or may not occur. Under the assumption that {X_k}_{1≤k≤K_e} belongs to an infinitely exchangeable set of random variables, de Finetti's theorem states that there exists a probability measure F_e on [0, 1] such that

P[X_1 = x_1, . . ., X_{K_e} = x_{K_e}] = ∫_0^1 θ_e^{Σ_k x_k} (1 − θ_e)^{K_e − Σ_k x_k} dF_e(θ_e) .

Assuming the directing measure F_e known, we can compute the spiking correlation
attached to our model. To see this, first observe that specializing the above probabilistic model to K_e = 1, we have

r_e∆t = E[X_1] = ∫_0^1 θ_e dF_e(θ_e) .
Then, using the total law of covariance and specializing the above probabilistic model to K_e = 2, we have

Cov(X_1, X_2) = Var_{F_e}[θ_e] ,

since X_1 and X_2 are independent conditionally on θ_e. This directly yields that the spiking correlation reads

ρ_e = Var_{F_e}[θ_e] / (E_{F_e}[θ_e](1 − E_{F_e}[θ_e])) .

The exact same calculations can be performed for the partially exchangeable case of mixed excitation and inhibition. The assumption of partial exchangeability requires that, when considered separately, the {0, 1}-valued vectors {X_1, . . ., X_{K_e}} and {Y_1, . . ., Y_{K_i}} each belong to an infinitely exchangeable sequence of random variables. Then, de Finetti's theorem states that the probability to find the full vector of inputs {X_1, . . ., X_{K_e}, Y_1, . . ., Y_{K_i}} in any particular configuration is given by an integral form with respect to a bivariate directing measure F_ei on [0, 1]^2, which fully parametrizes our probabilistic model. Performing similar calculations as for the case of excitation alone within this partially exchangeable setting yields

ρ_ei = Cov_{F_ei}[θ_e, θ_i] / √(E[θ_e](1 − E[θ_e]) E[θ_i](1 − E[θ_i])) .

Appendix B: Compound Poisson processes as continuous-time limits

Let us consider the discrete-time model specified by Eq. (A2), which is obtained under the assumption of partial infinite exchangeability. Under this assumption, the probability law of the inputs is entirely determined by the distribution of (k_e, k_i), where k_e denotes the number of active excitatory inputs and k_i denotes the number of active inhibitory inputs. This distribution can be computed as

P_{ei,kl} = C(K_e, k) C(K_i, l) ∫ θ_e^k (1 − θ_e)^{K_e−k} θ_i^l (1 − θ_i)^{K_i−l} dF_ei(θ_e, θ_i) ,

where C(·, ·) denotes the binomial coefficient. It is convenient to choose the directing measure as beta distributions, since these are conjugate to the binomial distributions. Such a choice yields a class of probabilistic models referred to as beta-binomial models, which have been studied extensively [61, 62]. In this appendix, we always assume that the marginals F_e and F_i have the form F_e ∼ Beta(α_e, β_e) and F_i ∼ Beta(α_i, β_i). Then, direct integrations show that the marginal distributions for the numbers of active excitatory and inhibitory inputs are

P_{e,k} = C(K_e, k) B(α_e + k, β_e + K_e − k)/B(α_e, β_e)  and  P_{i,l} = C(K_i, l) B(α_i + l, β_i + K_i − l)/B(α_i, β_i) ,

where B denotes the beta function. Moreover, given individual spiking rates r_e and r_i within a time step ∆t, we have r_e∆t = α_e/(α_e + β_e) and r_i∆t = α_i/(α_i + β_i). The continuous-time limit is obtained by taking ∆t → 0+, which
implies that the parameters α_e and α_i jointly vanish. When α_e, α_i → 0+, the beta distributions F_e and F_i become degenerate and we have P_{e,0}, P_{i,0} → 1. In other words, time bins of size ∆t almost surely have no active inputs in the limit ∆t → 0+. Actually, one can show that

1 − P_{e,0} ∼ α_e [ψ(β_e + K_e) − ψ(β_e)]  and  1 − P_{i,0} ∼ α_i [ψ(β_i + K_i) − ψ(β_i)] ,

where ψ denotes the digamma function. This indicates that, in the limit ∆t → 0+, the times at which some excitatory inputs or some inhibitory inputs are active define a point process. Moreover, owing to the assumption of independence across time, this point process will actually be a Poisson point process. Specifically, consider a time T > 0 and set ∆t = T/N for some large integer N. Define the sequences of times {T_{e,n}}_{n≥1} and {T_{i,n}}_{n≥1} at which at least one excitatory input, respectively at least one inhibitory input, is active. Considered separately, these sequences of times constitute binomial approximations of Poisson processes, which we denote by N_e and N_i, respectively. It is a classical result that these limit Poisson processes are recovered exactly when N → ∞ and that their rates are respectively given by

b_e = lim_{∆t→0+} (1 − P_{e,0})/∆t = r_e β_e [ψ(β_e + K_e) − ψ(β_e)]  and  b_i = lim_{∆t→0+} (1 − P_{i,0})/∆t = r_i β_i [ψ(β_i + K_i) − ψ(β_i)] .

For all integers K > 1, the function β ↦ β[ψ(β + K) − ψ(β)] is an increasing analytic function on the domain R+ with range [1, K]. Thus, we always have r_e ≤ b_e ≤ K_e r_e and r_i ≤ b_i ≤ K_i r_i, and the extreme cases are achieved for perfect or zero correlations. Perfect correlations are achieved when ρ_e = 1 or ρ_i = 1, which corresponds to β_e → 0 or β_i → 0.
This implies that b_e = r_e and b_i = r_i, consistent with all synapses activating simultaneously. Zero correlations are achieved when ρ_e = 0 or ρ_i = 0, which corresponds to β_e → ∞ or β_i → ∞. This implies that b_e = K_e r_e and b_i = K_i r_i, consistent with all synapses activating asynchronously, so that no inputs simultaneously activate. Observe that, in all generality, the rates b_e and b_i are such that the mean number of spikes over the duration T is conserved in the limit ∆t → 0+. For instance, one can check that b_e E_e[k_e] = K_e r_e, where k_e is distributed according to the jump distribution p_e defined below. When excitation and inhibition are considered separately, the limit process ∆t → 0+ specifies two compound Poisson processes, where N_e and N_i are Poisson processes with rates b_e and b_i and where the jumps {k_{e,n}}_{n≥1} are i.i.d. according to p_e and {k_{i,n}}_{n≥1} are i.i.d. according to p_i. Nonzero correlations between excitation and inhibition emerge when the Poisson processes N_e and N_i are not independent. This corresponds to the processes N_e and N_i sharing times, so that excitation and inhibition occur simultaneously at these times. To understand this point intuitively, let us consider the limit Poisson process N obtained by considering synaptic events without distinguishing excitation and inhibition. For perfect correlation, i.e., ρ_ei = 1, all synapses activate synchronously and we have N = N_e = N_i: all times are shared. By contrast, for zero correlation, i.e., ρ_ei = 0, no synapses activate simultaneously and we have N = N_e + N_i: no times are shared. For intermediary regimes of correlation, a nonzero fraction of times will be shared, resulting in a driving Poisson process N whose rate b satisfies b ≤ b_e + b_i. Let us conclude this appendix by recapitulating the general form of the limit compound process Z obtained in the continuous-time limit ∆t → 0+ when jointly considering excitation and inhibition. This compound Poisson process can be represented via the Poisson process N registering all synaptic events without distinguishing excitation and inhibition, together with the pairs (W_{e,n}, W_{i,n}), which
are i.i.d. random jumps in R × R \ {(0, 0)}. Formally, such a process is specified by the rate of N, denoted by b, and the bivariate distribution of the jumps (W_{e,n}, W_{i,n}), denoted by p_ei. These are defined as

b = lim_{∆t→0+} (1 − P_{ei,00})/∆t  and  p_{ei,kl} = lim_{∆t→0+} P_{ei,kl}/(1 − P_{ei,00}) for (k, l) ≠ (0, 0) ,

where P_{ei,00} is the probability to register no synaptic activation during a time step ∆t. According to these definitions, b is the infinitesimal likelihood that an input is active within a time bin, whereas p_{ei,kl} is the probability that k excitatory inputs and l inhibitory inputs are active given that at least one input is active. One can similarly define the excitatory and inhibitory rates of events b_e and b_i, as well as the excitatory jump distribution p_e and the inhibitory jump distribution p_i. Specifically, we have

b_e = lim_{∆t→0+} (1 − P_{e,0})/∆t  and  p_{e,k} = lim_{∆t→0+} P_{e,k}/(1 − P_{e,0}) for k ≠ 0 , (B2)
b_i = lim_{∆t→0+} (1 − P_{i,0})/∆t  and  p_{i,l} = lim_{∆t→0+} P_{i,l}/(1 − P_{i,0}) for l ≠ 0 ,

with P_{e,k} = Σ_{l=0}^{K_i} P_{ei,kl} and P_{i,l} = Σ_{k=0}^{K_e} P_{ei,kl}. Observe that, thus defined, the jump distributions p_e and p_i are specified as conditional marginal distributions of the joint jump distribution p_ei on the events {k_e > 0} and {k_i > 0}, respectively. The expressions Eq. (A1) and Eq. (A3) carry over to the continuous-time limit ∆t → 0+, and all the stationary expectations appearing in these expressions can be computed via the jump distribution of the limit point process emerging in the limit ∆t → 0+ [104]. Because this limit process is a compound Poisson process with discrete bivariate jumps, the resulting jump distribution p_ei is specified over {0, . . ., K_e} × {0, . . ., K_i} \ {(0, 0)}. Then, denoting by b the overall rate of synaptic events and invoking the partial exchangeability of the {0, 1}-valued population vectors {X_k}_{1≤k≤K_e} and {Y_l}_{1≤l≤K_i}, the relevant stationary expectations can be evaluated with respect to the bivariate jump (k_e, k_i) distributed as p_ei.
To further proceed, it is important to note the relation between the expectation E_ei[•], which is tied to the overall input process with rate b, and the expectation E_e[•], which is tied to the excitatory input process with rate b_e. This relation is best captured by remarking that the jump probabilities p_e are not defined as the marginals of p_ei, but only as conditional marginals on the event {k_e > 0}. In other words, we have

b_e p_{e,k} = b Σ_{l=0}^{K_i} p_{ei,kl} for k ≥ 1 ,

with similar expressions for the inhibition-related quantities. Injecting Eq. (C2), Eq. (C3), and Eq. (C4) in Eq. (C1) yields the announced expressions for the limit spiking correlations.
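The bounds r ≤ b ≤ Kr can be checked numerically from the rate formula b = rβ[ψ(β + K) − ψ(β)] reconstructed in Appendix B, using the fact that for integer K the digamma difference reduces to a harmonic sum (parameter values are illustrative):

```python
def event_rate(r, K, beta_):
    # b = r * beta * [psi(beta + K) - psi(beta)]; for integer K, the digamma
    # difference psi(beta + K) - psi(beta) equals the harmonic sum below.
    return r * beta_ * sum(1.0 / (beta_ + j) for j in range(K))

r, K = 10.0, 100
b_sync = event_rate(r, K, 1e-9)   # rho -> 1 (beta -> 0): b -> r
b_async = event_rate(r, K, 1e9)   # rho -> 0 (beta -> infinity): b -> K * r
b_mid = event_rate(r, K, 19.0)    # rho ~ 0.05: intermediate rate
```

The two extreme cases recover the perfectly synchronous rate r and the fully asynchronous rate Kr, with intermediate correlations interpolating between them.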

Appendix D: Two examples of limit compound Poisson processes
The probability P_{ei,00}, which plays a central role in Appendix B, can be easily computed for zero correlation, i.e., ρ_ei = 0, by considering a directing measure under product form F_ei(θ_e, θ_i) = F_e(θ_e)F_i(θ_i). Then integration with respect to the separable variables θ_e and θ_i yields P_{ei,00} = P_{e,0}P_{i,0}. In turn, the limit compound Poisson process can be obtained in the limit ∆t → 0+ by observing that

b = lim_{∆t→0+} (1 − P_{e,0}P_{i,0})/∆t = b_e + b_i ,

and that for all k ≥ 1, we have p_{ei,kl} = b_e/(b_e + b_i) 1_{l=0} p_{e,k}. A similar calculation shows that for all l ≥ 1, we have p_{ei,kl} = b_i/(b_e + b_i) 1_{k=0} p_{i,l}. Thus p_{ei,kl} = 0 whenever k, l ≥ 1, so that the support of p_ei is {1, . . ., K_e} × {0} ∪ {0} × {1, . . ., K_i}. This is consistent with the intuition that excitation and inhibition happen at distinct times in the absence of correlations.
Let us now consider the case of maximum correlation for F_e = F_i = F, where F is a beta distribution with parameters α and β. Moreover, let us assume the deterministic coupling θ_e = θ_i, such that F_ei(θ_e, θ_i) = F(θ_e)δ(θ_i − θ_e). Then, the joint distribution of the jumps (k_e, k_i) can be evaluated via direct integration. As excitation and inhibition are captured by the same marginal distribution F, we have r_e∆t = r_i∆t = α/(α + β), and we refer to the common spiking rate as r. Then the overall rate of synaptic events is obtained as

b = lim_{∆t→0+} (1 − P_{ei,00})/∆t = rβ[ψ(β + K_e + K_i) − ψ(β)] ,

and one can check that b differs from the excitatory- and inhibitory-specific rates b_e and b_i, which satisfy

b_e = rβ[ψ(β + K_e) − ψ(β)]  and  b_i = rβ[ψ(β + K_i) − ψ(β)] .

To characterize the limit compound Poisson process, it remains to exhibit p_{ei,kl}, the joint distribution of the jumps (k_e, k_i), which follows from a similar calculation as for the case of excitation alone. Remember that within our model, spiking correlations do not depend on the number of neurons and that, by construction, we have ρ_ei ≤ √(ρ_e ρ_i). Thus, for the symmetric case under consideration, maximum correlation corresponds to ρ_ei = ρ_e = ρ_i = 1/(1 + β). In particular, perfect correlation between excitation and inhibition can only be attained for β → 0. When β > 0, i.e., for partial correlations, the Poisson processes N_e and N_i only share a fraction of their times, yielding an aggregate Poisson process N such that min(b_e, b_i) < b < b_e + b_i. The relations between b, b_e, and b_i can be directly recovered from the knowledge of p_ei by observing that the fraction of times with nonzero excitation is given by b_e/b, so that we consistently recover the value of b_e already obtained in Eq. (9) and Eq. (D2).

Appendix E: Marcus jump rule
The goal of this appendix is to justify the Marcus-type update rule given in Eq. (13). To do so, let us first remark that, given a finite time interval [0, T], the number of synaptic activation times {T_n}_{n∈Z} falling in this interval is almost surely finite. In particular, we have ∆ = inf_{0≤T_n≠T_m≤T} |T_n − T_m| > 0 almost surely. Consequently, taking ϵ < ∆/τ_s ensures that synaptic activation events do not overlap in time, so that it is enough to consider a single synaptic activation triggered, without loss of generality, at T_0 = 0. Let us denote the voltage just before the impulse onset as V(T_0−) = V_0, which will serve as the initial condition for the ensuing voltage dynamics. As the synaptic conductances remain equal to W_e/(ϵτ) and W_i/(ϵτ) for a duration ϵτ, the voltage V_ϵ satisfies

τ dV_ϵ/dt = −V_ϵ + (W_e/(ϵτ))(V_e − V_ϵ) + (W_i/(ϵτ))(V_i − V_ϵ) , 0 ≤ t ≤ ϵτ ,

where we assume I = 0 for simplicity. This linear equation admits a unique solution satisfying V_ϵ(0) = V_0, and the Marcus-type rule follows from evaluating the jump update as the limit

V(T_0+) = lim_{ϵ→0+} V_ϵ(ϵτ) = V_0 e^{−(W_e+W_i)} + ((W_e V_e + W_i V_i)/(W_e + W_i)) (1 − e^{−(W_e+W_i)}) ,

which has the same form as the rule announced in Eq. (13). Otherwise, at fixed ϵ > 0, the fraction of time for which the voltage V_ϵ is exponentially relaxing toward the leak reversal potential V_L = 0 is larger than 1 − Nϵτ/T, where N denotes the almost surely finite number of synaptic activations, which does not depend on ϵ. Thus, the voltage V = lim_{ϵ→0+} V_ϵ exponentially relaxes toward V_L = 0, except at its jump discontinuities in {T_n}_{n∈Z}.
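The limit update can be verified numerically by integrating the regularized window dynamics at small ϵ and comparing with the closed-form limit V_0 e^{−W} + (W_e V_e + W_i V_i)/W (1 − e^{−W}), with W = W_e + W_i (our reading of Eq. (13); all values are illustrative):

```python
import math

tau, V_e, V_i = 0.020, 60.0, -10.0   # time constant (s), reversal potentials (mV)
W_e, W_i, V0, eps = 0.2, 0.1, 5.0, 1e-4

def f(v):
    # Regularized dynamics during the activation window [0, eps * tau].
    return (-v + (W_e / eps) * (V_e - v) + (W_i / eps) * (V_i - v)) / tau

def integrate(n_steps=2000):
    # Fourth-order Runge-Kutta integration of the window dynamics.
    dt = eps * tau / n_steps
    V = V0
    for _ in range(n_steps):
        k1 = f(V)
        k2 = f(V + 0.5 * dt * k1)
        k3 = f(V + 0.5 * dt * k2)
        k4 = f(V + dt * k3)
        V += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return V

W = W_e + W_i
V_limit = (V0 * math.exp(-W)
           + (W_e * V_e + W_i * V_i) / W * (1.0 - math.exp(-W)))
V_num = integrate()
```

At ϵ = 10^{-4}, the numerically integrated window endpoint agrees with the closed-form jump to within the O(ϵ) correction contributed by the leak term.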

Appendix F: Stationary voltage mean
For a positive synaptic activation time t > 0, the classical method of variation of constants applies to solve Eq. (1). This yields an expression for V_ϵ(t) in terms of regular Riemann–Stieltjes integrals, where the conductance traces h_e(t) and h_i(t) are treated as a form of deterministic quenched disorder. Specifically, given an initial condition V_ϵ(0), the resulting expression for V_ϵ(t) depends on ϵ only via the all-or-none conductance processes h_e and h_i. As usual, the stationary dynamics of the voltage V_ϵ is recovered by considering the limit of arbitrarily large times t → ∞, for which one can neglect the influence of the initial condition V_ϵ(0). Introducing the cumulative input processes H = (H_e, H_i), defined by

H_e(t) = (1/τ) ∫_0^t h_e(u) du  and  H_i(t) = (1/τ) ∫_0^t h_i(u) du ,

and satisfying τ dH_e(t) = h_e(t) dt and τ dH_i(t) = h_i(t) dt, and expanding the resulting integrand, we obtain an expression for the stationary expectation of the voltage E[V_ϵ] as a sum of stationary expectations involving H_e and H_i. Our primary task is to evaluate the various stationary expectations appearing in this formula. Such a goal can be achieved analytically for AONCB models. As the involved calculations tend to be cumbersome, we only give a detailed account of them in a dedicated appendix. Here we recount the key steps of the calculation, which ultimately produces an interpretable, compact formula for E[V_ϵ] in the limit of instantaneous synapses, i.e., when ϵ → 0+.
In order to establish this compact formula, it is worth introducing the stationary bivariate function

Q_ϵ(t, s) = E[e^{H_e(t) + H_i(s)}], t, s ≤ 0,

which naturally depends on ϵ via H_e(t) and H_i(s). The function Q_ϵ is of great interest because all the stationary expectations at stake in Eq. (F2) can be derived from it. Before justifying this point, an important observation is that the expectation defining Q_ϵ(t, s) only bears on the cumulative input processes H_e and H_i, which specify bounded, piecewise continuous functions with probability one, independently of ϵ. As a result of this regular behavior, the expectation commutes with the limit of instantaneous synapses, allowing one to write

Q(t, s) = lim_{ϵ→0^+} Q_ϵ(t, s) = E[e^{Z_e(t) + Z_i(s)}],

where we exploit the fact that the cumulative input processes H_e and H_i converge toward the coupled compound Poisson processes Z_e and Z_i when ϵ → 0^+. The above remark allows one to compute the term due to current injection I in Eq. (F2), where the expectation can be identified with Q_ϵ(t, t). Indeed, utilizing the standard form of the moment-generating function of compound Poisson processes [51], we find that

Q(t, t) = e^{a_{ei,1} t/τ}, t ≤ 0, (F3)

where we introduce the first-order aggregate efficacy

a_{ei,1} = bτ E_ei[1 − e^{−(W_e + W_i)}].
Remember that in the above definition, E_ei[·] denotes the expectation with respect to the joint probability of the conductance jumps, i.e., p_ei. It remains to evaluate the expectations associated with the excitatory and inhibitory reversal potentials in Eq. (F2). These terms differ from the current-associated term in that they involve expectations of stochastic integrals with respect to the cumulative input processes H_{e/i}. This is by contrast with evaluating Eq. (F3), which only involves expectations of functions that depend on H_{e/i}. In principle, one could still hope to adopt a similar route as for the current-associated term, exploiting the compound Poisson process Z obtained in the limit of instantaneous synapses. However, such an approach would require that the operations of taking the limit of instantaneous synapses and evaluating the stationary expectation still commute. This is a major caveat, as such a commuting relation generally fails for point-process-based stochastic integrals. Therefore, one has to analytically evaluate the expectations at stake for positive synaptic activation time ϵ > 0, without resorting to the simplifying limit of instantaneous synapses. This analytical requirement is the primary motivation for considering AONCB models.
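The moment-generating-function identity for compound Poisson processes invoked above, E[e^{−θZ(t)}] = e^{bt(E[e^{−θW}]−1)}, can be illustrated numerically by conditioning on the Poisson number of jumps. A minimal sketch with a deterministic jump size w (an illustrative choice of jump distribution):

```python
import math

def mgf_closed_form(b, t, theta, w):
    # E[exp(-theta * Z(t))] for a compound Poisson process of rate b
    # with deterministic jump sizes W = w
    return math.exp(b * t * (math.exp(-theta * w) - 1.0))

def mgf_by_conditioning(b, t, theta, w, kmax=150):
    # condition on the Poisson number k of jumps in (0, t] and sum the series
    lam = b * t
    return sum(math.exp(-lam) * lam**k / math.factorial(k)
               * math.exp(-theta * w * k) for k in range(kmax + 1))
```

Both routes agree to machine precision, since the truncated Poisson series converges superexponentially.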
The first step in the calculation is to realize that for ϵ > 0, the conductance traces h_e(t) = τ dH_e(t)/dt and h_i(t) = τ dH_i(t)/dt are bounded, piecewise continuous functions with probability one. Under these conditions, the expectations of the stochastic integrals can be evaluated in the limit of instantaneous synapses in terms of the effective first-order synaptic efficacies, defined via Eq. (15) as

a_{e,1} = bτ E_ei[(W_e/(W_e + W_i))(1 − e^{−(W_e + W_i)})] and a_{i,1} = bτ E_ei[(W_i/(W_e + W_i))(1 − e^{−(W_e + W_i)})].
Observe that, by definition, a_{e,1} and a_{i,1} satisfy a_{e,1} + a_{i,1} = a_{ei,1}.
Altogether, upon evaluation of the integrals featured in Eq. (F2), these results allow one to produce the compact expression Eq. (14) for the stationary voltage mean in the limit of instantaneous synapses:

E[V] = (a_{e,1} V_e + a_{i,1} V_i + I/G)/(1 + a_{ei,1}).
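As a consistency check of the compact mean formula (in our reading, E[V] = (a_{e,1}V_e + a_{i,1}V_i)/(1 + a_{ei,1}) for I = 0), one can compare it against an independent renewal computation for the limit jump process with Marcus-type updates. The sketch below specializes to deterministic conductance jumps (a degenerate choice of p_ei); the regeneration argument at Poisson event times is our own cross-check, not part of the paper's derivation.

```python
import math

def mean_compact(b, tau, We, Wi, Ve, Vi):
    # compact stationary mean E[V] = (a_e1*Ve + a_i1*Vi)/(1 + a_ei1), I = 0,
    # with efficacies specialized to deterministic jumps (We, Wi)
    S = We + Wi
    a_ei1 = b * tau * (1.0 - math.exp(-S))
    a_e1 = a_ei1 * We / S
    a_i1 = a_ei1 * Wi / S
    return (a_e1 * Ve + a_i1 * Vi) / (1.0 + a_ei1)

def mean_renewal(b, tau, We, Wi, Ve, Vi):
    # independent check: between Poisson events (rate b) the voltage relaxes as
    # V -> V*exp(-Delta/tau); at events it jumps per the Marcus-type rule
    S = We + Wi
    c = math.exp(-S)
    V_inf = (We * Ve + Wi * Vi) / S
    q = b * tau / (b * tau + 1.0)                  # E[exp(-Delta/tau)], Delta ~ Exp(b)
    m_pre = q * V_inf * (1.0 - c) / (1.0 - q * c)  # stationary pre-jump mean
    m_post = V_inf * (1.0 - c) + c * m_pre         # stationary post-jump mean
    return q * m_post                              # time average over renewal cycles
```

Between events the voltage relaxes by a factor e^{−∆/τ} with ∆ exponentially distributed, so the pre-jump, post-jump, and time-averaged means are related by a linear fixed point.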

Appendix G: Stationary voltage variance
The calculation of the stationary voltage variance is more challenging than that of the stationary voltage mean. However, in the limit of instantaneous synapses, this calculation produces a compact, interpretable formula as well. Adopting a similar approach as for the stationary mean calculation, we start by expressing V_ϵ² in the stationary limit in terms of stochastic integrals involving the cumulative input processes H_e and H_i. Specifically, using Eq. (F1), we obtain the representation Eq. (G1). Our main goal is to compute the stationary expectation of this quantity. As for the stationary voltage mean, our strategy is (i) to derive the exact stationary expectation of the integrands for finite synaptic activation time, (ii) to evaluate these integrands in the simplifying limit of instantaneous synapses, and (iii) to rearrange the terms obtained after integration into an interpretable final form. Enacting this strategy is a rather tedious task, and as for the calculation of the mean voltage, we only present the key steps in the following.
The integrand terms at stake are obtained by expanding Eq. (G1), which yields a quadratic expression for the stationary second moment of the voltage, Eq. (G2), whose various coefficients need to be evaluated. These coefficients are conveniently specified in terms of the following symmetric random function:

E_ei(t, s) = e^{H_e(t) + H_i(t) + H_e(s) + H_i(s)},
which features prominently in Eq. (G1). Moreover, drawing on the calculation of the stationary mean voltage, we anticipate that the quadrivariate version of E_ei(t, s) will play a central role in the calculation via its stationary expectation. Owing to this central role, we denote this expectation by

R_ϵ(t, u, s, v) = E[e^{H_e(t) + H_i(u) + H_e(s) + H_i(v)}],
where we make the ϵ-dependence explicit. As a mere expectation with respect to the cumulative input processes (H_e, H_i), this quantity can be evaluated in closed form for AONCB models. This again requires careful manipulations of the processes H_e and H_i, which need to be split into independent contributions arising from spiking events occurring in nonoverlapping intervals. By contrast with the bivariate case, the quadrivariate case requires considering nine contiguous intervals. There is no loss of generality in taking these interval bounds to be determined by the two following time orderings, where O stands for off-diagonal ordering and D for diagonal ordering. The reason to only consider the O/D-orders is that all the relevant calculations will be made in the limit (u, v) → (t, s). By symmetry of R_ϵ(t, u, s, v), it is then enough to restrict our consideration to the limit (u, v) → (t^−, s^−), which leaves the choice of t, s ≤ 0 to be determined. By symmetry, one can always choose t > s, so that the only remaining alternative is to decide whether (t, s) belongs to the diagonal region D_ϵ = {t, s ≤ 0 | ϵτ ≥ |t − s|} or the off-diagonal region O_ϵ = {t, s ≤ 0 | ϵτ < |t − s|}. For the sake of completeness, we give the two expressions of R_ϵ(t, u, s, v) on the regions O_ϵ and D_ϵ in Appendix I. Owing to their tediousness, we do not give the detailed calculations leading to these expressions, which are lengthy but straightforward elaborations on those used in Appendix H. Here we stress that for ϵ > 0, these expressions reveal that R_ϵ(t, u, s, v) is a twice-differentiable quadrivariate function.
With these remarks in mind, the coefficients featured in Eq. (G2) can be categorized into three classes. 1. There is a single current-dependent inhomogeneous coefficient, in which we recognize the expectation R_ϵ(t, t, s, s). As R_ϵ(t, t, s, s) is merely a stationary expectation with respect to the cumulative input processes (H_e, H_i), it can be directly evaluated in the limit of instantaneous synapses. In other words, step (ii) can be performed before step (i), similarly as for the stationary voltage mean calculation. Moreover, having a general analytical expression for R_ϵ(t, u, s, v) on O_ϵ (see Appendix I), we can directly evaluate for all t ≠ s that

R(t, s) = lim_{ϵ→0^+} R_ϵ(t, s) = e^{(2a_{ei,2} max(t,s) − a_{ei,1}|t−s|)/τ},

where we define the second-order aggregate efficacy

a_{ei,2} = (bτ/2) E_ei[1 − e^{−2(W_e + W_i)}].
It is clear that the continuous function R(t, s) is smooth everywhere except on the diagonal, where it admits a slope discontinuity. As we shall see, this slope discontinuity is the reason why one needs to consider the D_ϵ region carefully, even when only concerned with the limit ϵ → 0^+. That being said, the diagonal behavior plays no role here, and straightforward integration of R(t, s) on the negative orthant gives

(1/τ²) ∫_{−∞}^0 ∫_{−∞}^0 e^{(t+s)/τ} R(t, s) dt ds = 1/((1 + a_{ei,1})(1 + a_{ei,2})).
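Our reading of the elided orthant integral, with the exponential weight inherited from the stationary representation Eq. (F1), can be checked by direct quadrature of R(t, s) = e^{(2a_{ei,2} max(t,s) − a_{ei,1}|t−s|)/τ}. The sketch below uses illustrative efficacy values obtained for deterministic jumps W_e = 0.1, W_i = 0.05 at rate bτ = 2:

```python
import math

b, tau = 2.0, 1.0
S = 0.15                                        # We + Wi for deterministic jumps
a1 = b * tau * (1.0 - math.exp(-S))             # a_ei,1
a2 = 0.5 * b * tau * (1.0 - math.exp(-2 * S))   # a_ei,2

def R(t, s):
    # limit quadrivariate expectation evaluated on diagonal argument pairs
    return math.exp((2.0 * a2 * max(t, s) - a1 * abs(t - s)) / tau)

# midpoint rule on the truncated orthant [-L, 0]^2
L, n = 15.0, 600
h = L / n
num = 0.0
for i in range(n):
    t = -L + (i + 0.5) * h
    for j in range(n):
        s = -L + (j + 0.5) * h
        num += math.exp((t + s) / tau) * R(t, s)
num *= h * h / tau**2

exact = 1.0 / ((1.0 + a1) * (1.0 + a2))
```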
2. There are two current-dependent linear coefficients, where the second-order synaptic efficacies are defined as

a_{e,2} = (bτ/2) E_ei[(W_e/(W_e + W_i))(1 − e^{−2(W_e + W_i)})] and a_{i,2} = (bτ/2) E_ei[(W_i/(W_e + W_i))(1 − e^{−2(W_e + W_i)})].

Observe that these efficacies satisfy the familiar relation a_{e,2} + a_{i,2} = a_{ei,2}. Taking the limits of Eq. (G3) when ϵ → 0^+ specifies two bivariate functions that are continuous everywhere except on the diagonal t = s, where these functions present a jump discontinuity. This behavior is still regular enough to discard any potential contributions from diagonal terms, so that we can restrict ourselves to the region O_ϵ. Then, taking the limit ϵ → 0^+ after integration over O_ϵ, we find the two linear coefficients in closed form.
3. There are four quadratic coefficients associated with the reversal potentials V_e and V_i, including two diagonal terms. As before, the analytical knowledge of R_ϵ(t, u, s, v) on the O_ϵ region (see Appendix I) allows one to evaluate the relevant limits, which involve the combinations a_{e,1}(2a_{e,2} − a_{e,1}) for the diagonal terms and a_{e,1}(2a_{i,2} − a_{i,1}) + a_{i,1}(2a_{e,2} − a_{e,1}) for the cross term.
The above closed-form expressions allow one to compute A'_{e,ϵ} and B'_{ei,ϵ}, the parts of the coefficients A_{e,ϵ} and B_{ei,ϵ} resulting from integration over the off-diagonal region O_ϵ, which admit well-defined limit values A'_e = lim_{ϵ→0^+} A'_{e,ϵ} and B'_ei = lim_{ϵ→0^+} B'_{ei,ϵ}.
However, for quadratic terms, one also needs to include the contributions arising from the diagonal region D_ϵ, as suggested by the first-order jump discontinuity of R(t, s) = lim_{ϵ→0^+} R_ϵ(t, s) on the diagonal t = s. To confirm this point, one can show from the analytical expression of R_ϵ(t, u, s, v) on D_ϵ (see Appendix I) that all relevant second-order derivative terms scale as 1/ϵ over D_ϵ. This scaling leads to nonzero contributions A''_{e,ϵ} and B''_{ei,ϵ} resulting from the integration of these second-order derivative terms over the diagonal region D_ϵ, even in the limit ϵ → 0^+. Actually, we find that these contributions also admit well-defined limit values, A''_e = lim_{ϵ→0^+} A''_{e,ϵ} and B''_ei = lim_{ϵ→0^+} B''_{ei,ϵ}, given in Eqs. (G5) and (G6). Remembering that the expression of A''_i can be deduced from that of A''_e by symmetry, Eq. (G5) defines A''_e, and thus A''_i, in terms of the useful auxiliary second-order efficacies a_{e,12} = a_{e,1} − a_{e,2} and a_{i,12} = a_{i,1} − a_{i,2}. These efficacies will feature prominently in the final variance expression, and it is worth mentioning their explicit definitions:

a_{e,12} = (bτ/2) E_ei[(W_e/(W_e + W_i))(1 − e^{−(W_e + W_i)})²] and a_{i,12} = (bτ/2) E_ei[(W_i/(W_e + W_i))(1 − e^{−(W_e + W_i)})²].

The other quantity of interest is the coefficient c_ei, which appears in both Eq. (G5) and Eq. (G6). This nonnegative coefficient, defined as

c_ei = (bτ/2) E_ei[(W_e W_i/(W_e + W_i)²)(1 − e^{−(W_e + W_i)})²],

entirely captures the (nonnegative) correlation between excitatory and inhibitory inputs and shall be seen as an efficacy as well. Keeping these definitions in mind, the full quadratic coefficients are finally obtained as A_e = A'_e + A''_e, A_i = A'_i + A''_i, and B_ei = B'_ei + B''_ei. From there, injecting the analytical expressions of the various coefficients into the quadratic form Eq. (G2) leads to an explicit formula for the stationary voltage variance in the limit of instantaneous synapses. Then, one is only left with step (iii), which aims at exhibiting a compact, interpretable form for this formula. We show in Appendix K that lengthy but straightforward algebraic manipulations lead to the simplified form given in Eq.
(16).

Appendix H: Evaluation of Q_ϵ(t, s) for ϵ > 0

The goal here is to justify the closed-form expression of Q_ϵ(t, s) = E[e^{H_e(t) + H_i(s)}] via standard manipulations of exponential functionals of Poisson processes. By definition, assuming without loss of generality the ordering 0 ≥ t ≥ s, we have a representation of H_e(t) + H_i(s) as a sum of contributions from disjoint time intervals, Eq. (H1). We will evaluate Q_ϵ(t, s) = E[e^{H_e(t) + H_i(s)}] as a product of independent integral contributions.
Isolating these independent contributions from Eq. (H1) requires establishing two preliminary results about quantities of the form I = Σ_k X_k, where N denotes a Poisson process, the X_k denote i.i.d. nonnegative random variables, and ∆ is a positive activation duration. For 0 ≥ t ≥ s, the five contributions I_1, I_2(t), I_3(t, s), I_4(s, t), and I_5(t, s) correspond to independent numbers of i.i.d. draws over the intervals (−ϵτ, 0], (t, −ϵτ], (s, t], (t − ϵτ, s], and (s − ϵτ, t − ϵτ]. Then, we have

Q_ϵ(t, s) = E[e^{H_e(t) + H_i(s)}] = E[e^{−I_1/(ϵτ)}] E[e^{−I_2(t)/(ϵτ)}] E[e^{−I_3(t,s)/(ϵτ)}] E[e^{−I_4(s,t)/(ϵτ)}] E[e^{−I_5(t,s)/(ϵτ)}],

where all expectation terms can be computed via standard manipulations of the moment-generating function of Poisson processes [51]. The trick is to remember that, for all t ≥ s, given that a Poisson process admits K = N(t) − N(s) points in (s, t], all these K points are i.i.d. uniformly distributed over (s, t]. This trick allows one to represent all integral terms in terms of uniform random variables, whose expectations are easily computable. To see this, let us consider I_3(t, s) for instance. We obtain a representation in terms of variables {U_k}_{N(s)+1≤k≤N(t)} that are i.i.d. uniformly distributed
on [0, 1]. From the knowledge of the moment-generating function of Poisson random variables [51], one can then evaluate each factor, where (W_e, W_i) denotes an exemplary pair of conductance jumps and U denotes an independent uniform random variable. Similar calculations yield the remaining factors.

Appendix I: Expression of R_ϵ(t, u, s, v) on O_ϵ and D_ϵ

Using similar calculations as in Appendix H, we can evaluate the quadrivariate expectation R_ϵ(t, u, s, v) on the region O_ϵ, for which the O-order holds. This requires isolating nine independent contributions, corresponding to the nine contiguous intervals specified by the O-order. We find an expression for ln R_ϵ(t, u, s, v) as a sum of nonnegative terms A_1, ..., A_9. One can check that A_3(t, t) = A_5(t, t) = 0 and A_7(s, s) = A_9(s, s) = 0, and that A_1, A_4(u, t), and A_8(v, s) are all uniformly O(ϵ) on the region O_ϵ. This implies the announced off-diagonal limit for all (t, s) in O_ϵ. Using similar calculations, we can evaluate the quadrivariate expectation R_ϵ(t, u, s, v) on the region D_ϵ, for which the D-order holds. This again requires isolating nine independent contributions, corresponding to the nine contiguous intervals specified by the D-order. We find

ln R_ϵ(t, u, s, v) = B_1 + B_2(t) + ... + B_9(s), (I1)

where the nonnegative terms making up the above sum are defined analogously. Observe that B_1 = A_1 and B_2(t) = A_2(t), and that B_3(t, t) = B_7(t, t, s, v) = 0 and B_5(t, u, s, s) = B_9(s, s) = 0. Moreover, one can see that R(t, s) is continuous over the whole negative orthant by checking the matching of the relevant limit values. Actually, by computing the appropriate limit values of the relevant first- and second-order derivatives of R_ϵ(t, u, s, v), one can check that for ϵ > 0, all the integrands involved in specifying the coefficients of the quadratic form Eq. (G2) define continuous functions.
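The uniformization trick above — given the number of Poisson points in an interval, the points are i.i.d. uniform — can be illustrated by Monte Carlo: simulating arrival times directly and averaging an exponential functional reproduces the closed form exp(bL(E[e^{−U}] − 1)) obtained by conditioning. A sketch with illustrative parameters:

```python
import math
import random

random.seed(12345)
b, L = 2.0, 1.0          # Poisson rate and interval length (illustrative)
n_trials = 200000

# Monte Carlo: simulate arrival times on (0, L] and average exp(-sum(T_k / L))
acc = 0.0
for _ in range(n_trials):
    t, total = random.expovariate(b), 0.0
    while t <= L:
        total += t / L                 # each point contributes its rescaled position
        t += random.expovariate(b)
    acc += math.exp(-total)
mc = acc / n_trials

# Closed form: conditioning on the point count makes positions i.i.d. uniform,
# so E[exp(-sum U_k)] = exp(b*L*(E[exp(-U)] - 1)) with E[exp(-U)] = 1 - 1/e
closed = math.exp(b * L * ((1.0 - math.exp(-1.0)) - 1.0))
```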
Appendix J: Integrals of the quadratic terms on D_ϵ

Here, we only treat the quadratic term A_e, as the other quadratic terms A_i and B_ei involve a similar treatment. The goal is to compute A''_e, which is defined as the contribution to A_e resulting from integrating lim_{(u,v)→(t,s)^−} ∂_t∂_s R_ϵ(t, u, s, v) over the diagonal region D_ϵ = {t, s ≤ 0 | τϵ ≥ |t − s|}, in the limit ϵ → 0^+. Injecting the analytical expression Eq. (I1) into this definition and evaluating I_ϵ(t, s) = lim_{(u,v)→(t,s)^−} ∂_t∂_s R_ϵ(t, u, s, v) reveals that I_ϵ(t, s) scales as 1/ϵ, so that the function ϵ e^{−ϵx/τ} I_ϵ(t, t + ϵx) remains of order one on D_ϵ in the limit of instantaneous synapses, and its limit value can actually be computed. Then, as we are dealing with positive, continuous, uniformly bounded functions, one can safely exchange the integral and limit operations to obtain A''_e. A similar calculation yields the quadratic cross term B''_ei.
In order to express A''_e in terms of c_ei, we need to introduce the quantity a_{e,12} = a_{e,1} − a_{e,2}, which satisfies

a_{e,12} − c_ei = (bτ/2) E_ei[(W_e²/(W_e + W_i)²)(1 − e^{−(W_e + W_i)})²].

With this observation, we obtain the following compact expression for the quadratic diagonal term:

A''_e = (a_{e,12} − c_ei)/(1 + a_{ei,2}).

Appendix K: Compact variance expression
Our goal is to find a compact, interpretable formula for the stationary variance V[V] from the knowledge of the quadratic form Eq. (G2). Let us first assume no current injection, I = 0, so that one only has to keep track of the quadratic terms. We specify the quadratic coefficients by collecting separately all the terms containing the coefficient c_ei, and by using the facts that, by definition, a_{e,12} = a_{e,1} − a_{e,2}, a_{i,12} = a_{i,1} − a_{i,2}, and a_{ei,1} = a_{e,1} + a_{i,1}. Expanding and simplifying the coefficients of V_e² and V_i², and utilizing the resulting expression for E[V²] together with the stationary mean formula, the c_ei-independent part of the variance reads

[a_{e,12}(1 + a_{i,1})² + a_{i,12} a_{e,1}²]/[(1 + a_{ei,1})²(1 + a_{ei,2})] V_e²
− 2 [a_{e,12} a_{i,1}(1 + a_{i,1}) + a_{i,12} a_{e,1}(1 + a_{e,1})]/[(1 + a_{ei,1})²(1 + a_{ei,2})] V_e V_i
+ [a_{i,12}(1 + a_{e,1})² + a_{e,12} a_{i,1}²]/[(1 + a_{ei,1})²(1 + a_{ei,2})] V_i².

To factorize the above expression, let us reintroduce a_{e,12} = a_{e,1} − a_{e,2} and a_{i,12} = a_{i,1} − a_{i,2} and collect the terms where these two coefficients occur. This yields

a_{e,12} [((1 + a_{i,1})V_e − a_{i,1}V_i)/(1 + a_{ei,1})]²/(1 + a_{ei,2}) + a_{i,12} [((1 + a_{e,1})V_i − a_{e,1}V_e)/(1 + a_{ei,1})]²/(1 + a_{ei,2}).

Finally, injecting the expression of the stationary mean Eq. (K1) in both parentheses above, and restoring the c_ei-dependent terms, produces the compact formula

V[V] = [a_{e,12}(V_e − E[V])² + a_{i,12}(V_i − E[V])² − c_ei(V_e − V_i)²]/(1 + a_{ei,2}), (K2)

which is the same as the one given in Eq. (16).
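The compact variance formula (in our reading of Eq. (16), V[V] = [a_{e,12}(V_e − E[V])² + a_{i,12}(V_i − E[V])² − c_ei(V_e − V_i)²]/(1 + a_{ei,2}) for I = 0) can be cross-checked against an independent renewal computation for the limit jump process, here with deterministic conductance jumps (a degenerate choice of p_ei). The fixed-point algebra for the pre- and post-jump moments below is our own consistency check, not part of the paper's derivation.

```python
import math

def moments_compact(b, tau, We, Wi, Ve, Vi):
    # stationary mean and variance from the compact formulas Eq. (14) and Eq. (16),
    # with efficacies specialized to deterministic jumps (We, Wi), I = 0
    S = We + Wi
    u1, u2 = 1.0 - math.exp(-S), 1.0 - math.exp(-2.0 * S)
    a_ei1, a_ei2 = b * tau * u1, 0.5 * b * tau * u2
    a_e1, a_i1 = a_ei1 * We / S, a_ei1 * Wi / S
    a_e12 = 0.5 * b * tau * (We / S) * u1**2
    a_i12 = 0.5 * b * tau * (Wi / S) * u1**2
    c_ei = 0.5 * b * tau * (We * Wi / S**2) * u1**2
    EV = (a_e1 * Ve + a_i1 * Vi) / (1.0 + a_ei1)
    VV = (a_e12 * (Ve - EV)**2 + a_i12 * (Vi - EV)**2
          - c_ei * (Ve - Vi)**2) / (1.0 + a_ei2)
    return EV, VV

def moments_renewal(b, tau, We, Wi, Ve, Vi):
    # independent check: exponential relaxation between Poisson events (rate b),
    # Marcus jump V -> V_inf + (V - V_inf)*exp(-S) at events
    S = We + Wi
    c = math.exp(-S)
    V_inf = (We * Ve + Wi * Vi) / S
    A = V_inf * (1.0 - c)
    q1 = b * tau / (b * tau + 1.0)   # E[exp(-Delta/tau)]
    q2 = b * tau / (b * tau + 2.0)   # E[exp(-2*Delta/tau)]
    m_pre = q1 * A / (1.0 - q1 * c)                             # pre-jump mean
    m_post = A + c * m_pre                                      # post-jump mean
    s_post = (A**2 + 2.0 * A * c * m_pre) / (1.0 - q2 * c**2)   # post-jump 2nd moment
    EV = q1 * m_post                 # time-averaged mean
    E2 = q2 * s_post                 # time-averaged second moment
    return EV, E2 - EV**2
```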

Appendix L: Factorized variance expression
In this appendix, we reshape the variance expression given in Eq. (K2) into a form that is manifestly nonnegative. To this end, let us first remark that the calculation in Appendix J shows that

a_{e,12} − c_ei = (bτ/2) E_ei[(W_e²/(W_e + W_i)²)(1 − e^{−(W_e + W_i)})²]. (L1)
Note that the above quantity is clearly nonnegative, as any variance-like term shall be. From there, one can include the impact of the injected current I by further considering all the terms in Eq. (K1), including the linear and inhomogeneous current-dependent terms. Similar algebraic manipulations confirm that Eq. (L1) remains valid, so that the only impact of I is via altering the expression of E[V]. We ultimately obtain the following explicit compact form:

V[V] = (bτ/2)/(1 + a_{ei,2}) E_ei[((1 − e^{−(W_e + W_i)})/(W_e + W_i))² (W_e(V_e − E[V]) + W_i(V_i − E[V]))²].

The above expression shows that, as expected, V[V] ≥ 0, and that the variability vanishes if and only if W_e/W_i = (E[V] − V_i)/(V_e − E[V]) with probability one. In turn, plugging this relation into the mean voltage expression and solving for E[V] reveals that we necessarily have E[V] = I/G. This is consistent with the intuition that variability can only vanish if excitation and inhibition perfectly cancel one another.
Appendix M: Variance in the small-weight approximation

In this appendix, we compute the simplified expression for the variance V[V] obtained via the small-weight approximation. First, the small-weight approximation of the cross-correlation efficacy c_ei amounts to computing the expectation of the cross-product of the jumps k_e and k_i. To estimate this approximation, it is important to remember that p_e and p_i are not defined as the marginals of p_ei, but as conditional marginals. Second, a similar computation for the second-order efficacy yields

a_{e,12} ≃ (τ/2)[r_e K_e(1 + ρ_e(K_e − 1)) w_e² + ρ_ei √(r_e r_i)(K_e w_e)(K_i w_i)],

which simplifies to a_{e,12} = (r_e τ/2) K_e(1 + ρ_e(K_e − 1)) w_e² when excitation and inhibition act independently. A symmetric expression holds for the inhibitory efficacy a_{i,12}. Plugging the above expressions for the synaptic efficacies into the variance expression Eq. (16) yields the small-weight approximation

V[V] ≃ [(1 + ρ_e(K_e − 1)) K_e r_e w_e² (V_e − E[V])² + (1 + ρ_i(K_i − 1)) K_i r_i w_i² (V_i − E[V])² + 2ρ_ei √(r_e r_i)(K_e w_e)(K_i w_i)(V_e − E[V])(V_i − E[V])] / (2(1/τ + K_e r_e w_e + K_i r_i w_i)).
Let us note that the first term in the right-hand side above represents the small-weight approximation of the voltage variance in the absence of correlation between excitation and inhibition, i.e., for ρ_ei = 0. Denoting the latter approximation by V[V]|_{ρ_ei=0} and using the fact that the small-weight expression for the mean voltage,

E[V] = (K_e r_e w_e V_e + K_i r_i w_i V_i)/(1/τ + K_e r_e w_e + K_i r_i w_i),

is independent of correlations, we observe that, as intuition suggests, synchrony-based correlation between excitation and inhibition results in a decrease of the neural variability:

V[V] − V[V]|_{ρ_ei=0} = ρ_ei √(r_e r_i)(K_e w_e)(K_i w_i)(V_e − E[V])(V_i − E[V])/(1/τ + K_e r_e w_e + K_i r_i w_i) ≤ 0.
However, the overall contribution of correlations is to increase variability in the small-weight approximation. This can be shown under the assumptions that K_e ≫ 1 and K_i ≫ 1, by observing that

V[V] − V[V]|_{ρ_e=ρ_i=ρ_ei=0} ≃ [√(ρ_e r_e) K_e w_e (V_e − E[V]) − √(ρ_i r_i) K_i w_i (E[V] − V_i)]² / (2(1/τ + K_e r_e w_e + K_i r_i w_i)) + (√(ρ_e ρ_i) − ρ_ei) √(r_e r_i)(K_e w_e)(K_i w_i)(V_e − E[V])(E[V] − V_i)/(1/τ + K_e r_e w_e + K_i r_i w_i) ≥ 0,

where both terms are nonnegative since we always have 0 ≤ ρ_ei ≤ √(ρ_e ρ_i).
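The small-weight formulas can be evaluated for biophysically inspired parameters. The sketch below assumes illustrative values consistent with the figure captions (K_e = 1000, w_e = 0.001, K_i = 250, w_i = 0.004, τ = 15 ms) together with assumed rates r_e = r_i = 20 Hz and reversal potentials V_e = 60 mV, V_i = −10 mV relative to leak; it reproduces the qualitative claims that asynchronous moderate-weight drive yields subrealistic variance, that weak synchrony ρ_e = ρ_i = 0.03 restores the observed 4–9 mV² range, and that cross-correlation ρ_ei reduces the variance.

```python
import math

def mean_sw(Ke, re, we, Ki, ri, wi, Ve, Vi, tau):
    # small-weight stationary mean (independent of correlations)
    den = 1.0 / tau + Ke * re * we + Ki * ri * wi
    return (Ke * re * we * Ve + Ki * ri * wi * Vi) / den

def var_sw(Ke, re, we, Ki, ri, wi, Ve, Vi, tau, rho_e, rho_i, rho_ei):
    # small-weight stationary variance with input synchrony parameters
    EV = mean_sw(Ke, re, we, Ki, ri, wi, Ve, Vi, tau)
    den = 2.0 * (1.0 / tau + Ke * re * we + Ki * ri * wi)
    num = ((1.0 + rho_e * (Ke - 1)) * Ke * re * we**2 * (Ve - EV)**2
           + (1.0 + rho_i * (Ki - 1)) * Ki * ri * wi**2 * (Vi - EV)**2
           + 2.0 * rho_ei * math.sqrt(re * ri) * (Ke * we) * (Ki * wi)
             * (Ve - EV) * (Vi - EV))
    return num / den

# illustrative moderate cortico-cortical weights (assumed rates/reversals)
P = dict(Ke=1000, re=20.0, we=0.001, Ki=250, ri=20.0, wi=0.004,
         Ve=60.0, Vi=-10.0, tau=0.015)
EV = mean_sw(**P)
v_async = var_sw(**P, rho_e=0.0, rho_i=0.0, rho_ei=0.0)
v_sync = var_sw(**P, rho_e=0.03, rho_i=0.03, rho_ei=0.0)
v_sync_ei = var_sw(**P, rho_e=0.03, rho_i=0.03, rho_ei=0.03)
```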
Appendix N: Validity of the small-weight approximation

Biophysical estimates of the synaptic weights, w_e < 0.01 and w_i < 0.04, and of the synaptic input numbers, K_e < 10000 and K_i < 2500, suggest that neurons operate in the small-weight regime. In this regime, we claim that exponential corrections due to finite-size effects can be neglected in the evaluation of the synaptic efficacies, as long as the spiking correlations remain weak. Here, we make this statement quantitative by focusing on the first-order efficacies in the case of excitation alone, via the relative error incurred by neglecting exponential corrections. For a correlation coefficient ρ_e ≤ 0.05, neglecting exponential corrections incurs less than a ≃3% error if the number of inputs satisfies K_e ≤ 1000 for moderate synaptic weight w_e = 0.001, or K_e ≤ 100 for large synaptic weight w_e = 0.01.
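The quoted thresholds follow from the small-weight error estimate E ≃ x/(1 − x) with x = w_e(1 + ρ_e(K_e − 1))/2; a minimal sketch (the closed form is itself an approximation, not an exact error):

```python
def relative_error(we, Ke, rho_e):
    # small-weight estimate of the relative error incurred by neglecting
    # exponential corrections in the first-order efficacy (excitation alone)
    x = we * (1.0 + rho_e * (Ke - 1)) / 2.0
    return x / (1.0 - x)
```

For ρ_e = 0.05, this gives ≈2.6% at (w_e, K_e) = (0.001, 1000) and ≈3.1% at (0.01, 100), while doubling K_e at moderate weight pushes the error past 5%, consistent with the stated limits of validity.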
Appendix O: Infinite-size limit with spiking correlations

The computation of the first two moments E[V] and E[V²] requires evaluating various efficacies as expectations. Upon inspection, these expectations are all of the form bE_ei[f(W_e, W_i)], where f is a smooth positive function that is bounded on R_+ × R_+ with f(0, 0) = 0. Just as for the Lévy–Khintchine decomposition of stable jump processes [78, 79], this observation allows one to generalize our results to processes that exhibit a countable infinity of jumps over finite, nonzero time intervals. For our parametric forms based on beta distributions, such processes emerge in the limit of an arbitrarily large number of inputs, i.e., for K_e, K_i → ∞. Let us consider the case of excitation alone for simplicity. Then, we need to make sure that all expectations of the form b_e E_e[f(W_e)] remain well posed in the limit K_e → ∞ for smooth, bounded test functions f with f(0) = 0. To check this, observe that for all 0 < k ≤ K_e, Eq. (7) and Eq. (9) specify the weighted jump probabilities b_e p_{e,k}, whose K_e → ∞ limit defines a measure ν_e that is deficient for admitting a pole at zero. This singular behavior indicates that the limit jump process obtained when K_e → ∞ has a countable infinity of jumps within any finite, nonempty time interval. Generic stationary jump processes with independent increments, as is the case here, are entirely specified by their Lévy–Khintchine measure ν_e [78, 79]. Moreover, one can check that, given knowledge of ν_e, one can consistently estimate the corresponding pairwise spiking correlation.

FIG. 1. Large trial-by-trial membrane voltage fluctuations. Membrane voltage responses are shown using whole-cell recordings in awake behaving primates for both fixation-alone trials (left) and visual stimulation trials (right). A drifting grating was presented for 1 s, beginning at the arrow. Below the membrane voltage traces are records of horizontal and vertical eye movements, illustrating that the animal was fixating during the stimulus. Red and green traces indicate different trials under the same conditions. Adapted from [27].

FIG. 2. All-or-none-conductance-based models. (a) Electrical diagram of the conductance-based model, for which the neuronal voltage V evolves in response to fluctuations of the excitatory and inhibitory conductances g_e and g_i. (b) In all-or-none models, inputs delivered as Poisson processes transiently activate the excitatory and inhibitory conductances g_e and g_i during a finite, nonzero synaptic activation time τ_s > 0. Simulation parameters: K_e = K_i = 50, r_e = r_i = 10 Hz, τ = 15 ms, τ_s = 2 ms.

FIG. 3. Parametrizing correlations via exchangeability. The activity of K_e = 100 exchangeable synaptic inputs collected over N consecutive time bins can be represented as a {0, 1}-valued array {X_{k,i}}_{1≤k≤K_e, 1≤i≤N}, where X_{k,i} = 1 if input k activates in time bin i. Under assumptions of exchangeability, the input spiking correlation is entirely captured by the count statistics of how many inputs coactivate within a given time bin. In the limit K_e → ∞, the distribution of the fraction of coactivating inputs coincides with the directing de Finetti measure, which we consider as a parametric choice in our approach. In the absence of correlation, synapses tend to activate in isolation: ρ_e = 0 in (a). In the presence of correlation, synapses tend to coactivate, yielding disproportionately large synaptic activation events: ρ_e = 0.1 in (b). Considering the associated cumulative counts specifies discrete-time jump processes that can be generalized to the continuous-time limit, i.e., for time bins of vanishing duration ∆t → 0^+.
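The exchangeable parametrization described in this caption can be sketched numerically with a beta directing measure. In the sketch below (illustrative parameters; the beta shape parameters are an assumption chosen so that the pairwise correlation 1/(α + β + 1) equals the target ρ_e = 0.03), binary input patterns are drawn by first sampling a coactivation probability p from the beta measure and then activating each of the K_e inputs independently with probability p; the empirical pairwise correlation recovered from the count statistics matches the target.

```python
import random

random.seed(7)
Ke, n_bins = 100, 30000
rho_target = 0.03
alpha = 0.05                              # assumed small shape parameter (rate-related)
beta = 1.0 / rho_target - 1.0 - alpha     # so that 1/(alpha + beta + 1) = rho_target

sum_c = sum_pairs = 0.0
for _ in range(n_bins):
    p = random.betavariate(alpha, beta)   # directing de Finetti draw for this bin
    c = sum(1 for _ in range(Ke) if random.random() < p)   # coactivation count
    sum_c += c
    sum_pairs += c * (c - 1)

rate = sum_c / (n_bins * Ke)                         # per-bin activation probability
pair = sum_pairs / (n_bins * Ke * (Ke - 1))          # pairwise coactivation probability
rho_hat = (pair - rate**2) / (rate * (1.0 - rate))   # empirical pairwise correlation
```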

FIG. 4. Limit compound Poisson process with excitation and inhibition. (a) Under the assumption of partial exchangeability, synaptic inputs can only be distinguished by the fact that they are either excitatory or inhibitory, which is marked by coloring in red or blue in the discrete representation of correlated synaptic inputs with bin size ∆t = 1 ms. Accordingly, considering excitation and inhibition separately specifies two associated input-count processes and two cumulative counting processes. For nonzero spiking correlation ρ = 0.03, these processes are themselves correlated, as captured by the joint distribution of excitatory and inhibitory input counts P_{ei,kl} (center) and by the joint distribution of excitatory and inhibitory jumps P_{ei,kl}/(1 − P_{00}) (right). (b) The input-count distribution P_{ei,kl} is a finite-size approximation of the bivariate directing de Finetti measure F_{ei}, which we consider as a parameter as usual. For a smaller bin size ∆t = 0.1 ms, this distribution concentrates at (0, 0), as an increasing proportion of time bins does not register any synaptic events, be they excitatory or inhibitory. In the presence of correlation, however, the conditioned jump distribution remains correlated but also dispersed. (c) In the limit ∆t → 0, the input-count distribution is concentrated at (0, 0), consistent with the fact that the average number of synaptic activations remains constant while the number of bins diverges. By contrast, the distribution of synaptic event sizes conditioned to being distinct from (0, 0) converges toward a well-defined distribution: p_{ei,kl} = lim_{∆t→0^+} P_{ei,kl}/(1 − P_{ei,00}). This distribution characterizes the jumps of a bivariate compound Poisson process, obtained as the limit of the cumulative count process when ∆t → 0^+.
the rate b_e of synaptic events governing N_e satisfies K_e r_e = b_e E_e[k], so that b_e = K_e r_e/E_e[k] = r_e β_e (ψ(β_e + K_e) − ψ(β_e)).
p_{ei,kl} = (b_e/(b_e + b_i)) p_{e,k} 1_{l=0} + (b_i/(b_e + b_i)) p_{i,l} 1_{k=0},

FIG. 5. Limit of instantaneous synapses. The voltage trace and the empirical voltage distribution are only marginally altered by taking the limit ϵ → 0^+ for short synaptic time constants: τ_s = 2 ms in (a) and τ_s = 0.02 ms in (b). In both (a) and (b), we consider the same compound-Poisson-process drive with ρ_e = 0.03, ρ_i = 0.06, and ρ_ei = 0, and the resulting fluctuating voltage V is simulated via a standard Euler discretization scheme. The corresponding empirical conductance and voltage distributions are shown on the right. The latter voltage distribution asymptotically determines the stationary moments of V.

FIG. 7. Voltage mean and variance in the absence of input correlations. Column (a) depicts the stationary subthreshold response of an AONCB neuron driven by K_e = 100 and K_i = 25 synapses with large weights w_e = 0.01 and w_i = 0.04. Column (b) depicts the stationary subthreshold response of an AONCB neuron driven by K_e = 10³ and K_i = 250 synapses with moderate weights w_e = 0.001 and w_i = 0.004. For synaptic weights w_e, w_i ≪ 1, the mean response is identical, as K_e w_e = K_i w_i = 1 for both (a) and (b). By contrast, for ρ_e = ρ_i = ρ_ei = 0, the variance is at least an order of magnitude smaller than that experimentally observed (4–9 mV²) for moderate weights, as shown in (b). Reaching the lower range of realistic neural variability requires driving the cell via large weights, as shown in (a).

FIG. 8. Dependence on the number of inputs and the synaptic weights in the absence of correlations. Column (a) depicts the stationary subthreshold response of an AONCB neuron driven by a varying number of excitatory synapses K_e with varying weight w_e at rate r_e = 20 Hz, with a background inhibitory drive given by K_i = 250 synapses with moderate weights w_i = 0.004 and r_i = 20 Hz. Column (b) depicts the same as column (a), but for a background inhibitory drive given by K_i = 25 synapses with large weights w_i = 0.04 and r_i = 20 Hz. For both conditions, achieving realistic levels of variance, i.e., V[V] ≃ 4–9 mV², while ensuring a biophysically relevant mean range of variation, i.e., ∆E[V] ≃ 10–20 mV, is only possible for large weights: w_e ≥ 0.015 for moderate inhibitory weights in Fig. 8(a) and w_e ≥ 0.01 for large inhibitory weights in Fig. 8(b).

FIG. 9. Voltage mean and variance in the presence of excitatory and inhibitory input correlations but without correlation across excitation and inhibition: ρ_e = ρ_i > ρ_ei = 0. Column (a) depicts the stationary subthreshold response of an AONCB neuron driven by K_e = 100 and K_i = 25 synapses with large weights w_e = 0.01 and w_i = 0.04. Column (b) depicts the stationary subthreshold response of an AONCB neuron driven by K_e = 10³ and K_i = 250 synapses with moderate dimensionless weights w_e = 0.001 and w_i = 0.004. For synaptic weights w_e, w_i ≪ 1, the mean response is identical, as K_e w_e = K_i w_i = 1 for (a) and (b). By contrast with the case of no correlation in Fig. 7, for ρ_e = ρ_i = 0.03 and ρ_ei = 0, the variance achieves levels similar to those experimentally observed (4–9 mV²) for moderate weights, as shown in (b), but slightly larger levels for large weights, as shown in (a).
a_{e,1} = b_e τ E_e[1 − e^{−k_e w_e}] ≃ b_e τ w_e E_e[k_e] = K_e r_e τ w_e, as well as a_{i,1} ≃ K_i r_i τ w_i by symmetry. However, for both moderate and large synaptic weights, the voltage variance V[V] now exhibits slightly larger magnitudes than observed experimentally. This is because, as we show in Appendix M, in the small-weight approximation

a_{e,12} = (b_e τ/2) E_e[(1 − e^{−k_e w_e})²] ≃ (1 + ρ_e(K_e − 1)) K_e r_e τ w_e²/2,

FIG. 10. Voltage mean and variance in the presence of excitatory and inhibitory input correlations and with correlation across excitation and inhibition: ρ_e = ρ_i = ρ_ei > 0. Column (a) depicts the stationary subthreshold response of an AONCB neuron driven by K_e = 100 and K_i = 25 synapses with large weights w_e = 0.01 and w_i = 0.04. Column (b) depicts the stationary subthreshold response of an AONCB neuron driven by K_e = 10³ and K_i = 250 synapses with moderate dimensionless weights w_e = 0.001 and w_i = 0.004. For synaptic weights w_e, w_i ≪ 1, the mean response is identical, as K_e w_e = K_i w_i = 1 for (a) and (b). Compared with the case of no cross-correlation in Fig. 9, for ρ_e = ρ_i = ρ_ei = 0.03, the variance is reduced to a biophysical range similar to that experimentally observed (4–9 mV²) for moderate weights, as shown in (b), as well as for large weights, as shown in (a).

FIG. 12. Impact of jittering synchronous inputs. Voltage mean and variance of AONCB neurons in response to synchronous inputs that have been jittered, for parameters K_e = 1000, w_e = 0.001, ρ_e = 0.03. Specifically, each input timing has been independently shifted in time by a centered Gaussian random variable with standard deviation σ. (a) While the mean response is largely independent of jittering, the variance steadily decreases with jittering, which erases synchrony-based correlations over the timescale σ. Accordingly, for large timescales, σ ≥ τ, we recover the variance values obtained for an asynchronous drive with ρ_e = ρ_i = ρ_ei = 0. (b) Variability estimates are reliable for jittering at timescales σ ≤ 2 ms.
Furthermore, we have

E[e^{−((t−s)/(ϵτ)) U W_e − (W_e + W_i)}] = E[E[e^{−((t−s)/(ϵτ)) U W_e − (W_e + W_i)} | W_e, W_i]] = E_ei[e^{−(W_e + W_i)} E[e^{−((t−s)/(ϵτ)) U W_e} | W_e]],

from which we finally obtain the closed-form expression for ln E[e^{−I_3(t,s)/(ϵτ)}].

p_{e,k} = (b/b_e) Σ_{l=0}^{K_i} p_{ei,kl} and p_{i,l} = (b/b_i) Σ_{k=0}^{K_e} p_{ei,kl}. Then, by the definition of the correlation coefficient ρ_ei in Eq. (4), we have

ρ_ei = bE_ei[k_e k_i]/√(K_e bE_ei[k_e] K_i bE_ei[k_i]) = bE_ei[k_e k_i]/√(K_e b_e E_e[k_e] K_i b_i E_i[k_i]) = bE_ei[k_e k_i]/(K_e K_i √(r_e r_i)),

as the rates b_e and b_i are such that b_e E_e[k_e] = K_e r_e and b_i E_i[k_i] = K_i r_i. As a result, we obtain a simplified expression for the cross-correlation efficacy:

c_ei = (ρ_ei √(r_e r_i) τ/2)(K_e w_e)(K_i w_i).

Observe that, as expected, c_ei vanishes when ρ_ei = 0. Second, let us compute the small-weight approximation of the second-order efficacy

a_{e,12} = (bτ/2) E_ei[(W_e/(W_e + W_i))(1 − e^{−(W_e + W_i)})²] ≃ (bτ/2) E_ei[W_e(W_e + W_i)] = (bτ/2)(w_e² E_ei[k_e²] + w_e w_i E_ei[k_e k_i]).

To estimate the above approximation, we use the definition of the correlation coefficient ρ_e in Eq. (8),

ρ_e = b_e E_e[k_e(k_e − 1)]/(b_e E_e[k_e](K_e − 1)) = bE_ei[k_e(k_e − 1)]/(K_e(K_e − 1) r_e),

as the rate b_e is such that b_e E_e[k_e] = K_e r_e. This directly implies that

bE_ei[k_e²] = bE_ei[k_e(k_e − 1)] + bE_ei[k_e] = ρ_e K_e(K_e − 1) r_e + K_e r_e = K_e r_e(1 + ρ_e(K_e − 1)),

so that we evaluate

a_{e,12} ≃ (τ/2)(r_e K_e(1 + ρ_e(K_e − 1)) w_e² + ρ_ei √(r_e r_i)(K_e w_e)(K_i w_i)),

E = (E_e[W_e] − E_e[1 − e^{−W_e}])/E_e[1 − e^{−W_e}] ≥ 0.

Let us evaluate this relative error, assumed to be small, when correlations are parametrized via beta distributions with parameter β_e = 1/ρ_e − 1. Assuming correlations to be weak, ρ_e ≪ 1, amounts to assuming β_e ≫ 1. Under the assumption of small error, we can compute E_e[1 − e^{−W_e}] ≃ E_e[W_e] = w_e E_e[k_e] and E_e[W_e − 1 + e^{−W_e}] ≃ E_e[W_e²]/2 = w_e² E_e[k_e²]/2. By the calculations carried out in Appendix N, we have b_e E_e[k_e] = K_e r_e and b_e E_e[k_e²] = K_e r_e(1 + ρ_e(K_e − 1)). Remembering that β_e = 1/ρ_e − 1, this implies that we have

E ≃ (w_e(1 + ρ_e(K_e − 1))/2)/(1 − w_e(1 + ρ_e(K_e − 1))/2),

By Eq. (7) and Eq. (9), we have

b_e p_{e,k} = β r_e (K_e choose k) B(k, β + K_e − k) = β r_e [Γ(K_e + 1)/(Γ(k + 1)Γ(K_e − k + 1))][Γ(k)Γ(β + K_e − k)/Γ(β + K_e)],

where we have introduced the Gamma function Γ. Rearranging terms and using the fact that Γ(z + 1) = zΓ(z) for all z > 0, we obtain

b_e p_{e,k} = (β r_e/k)[Γ(K_e + 1)/Γ(β + K_e)][Γ(β + K_e − k)/((K_e − k)Γ(K_e − k))],

whose asymptotic behavior is uniform in k and follows from the fact that, for all a > 0, lim_{z→∞} Γ(z + a)/(Γ(z) z^a) = 1. From there, given a test function f, let us consider

b_e E_e[f(W_e)] = Σ_{k=1}^{K_e} b_e p_{e,k} ∫ δ(W_e − kΩ_e/K_e) f(W_e) dW_e.

The leading term above can be interpreted as a Riemann sum, so that one has

lim_{K_e→∞} b_e E_e[f(W_e)] = r_e β ∫_0^1 θ^{−1}(1 − θ)^{β−1} f(θΩ_e) dθ = ∫ f(w) ν_e(dw).

Thus, the jump density is specified via the Lévy–Khintchine measure ν_e(dw).

TABLE I. Main notations.