Dynamics of stochastic integrate-and-fire networks

The neural dynamics generating sensory, motor, and cognitive functions are commonly understood through field theories for neural population activity. Classic neural field theories are derived from highly simplified models of individual neurons, while biological neurons are highly complex cells. Integrate-and-fire neuron models balance biophysical detail and analytical tractability. Here, we develop a statistical field theory for networks of integrate-and-fire neurons with stochastic spike emission. This reveals an exact mapping to a self-consistent renewal process and a new mean field theory for the activity in these networks. The mean field theory has a rate-dependent leak, approximating the spike-driven resets of the membrane voltage. This gives rise to bistability between quiescent and active states in homogeneous and excitatory-inhibitory pulse-coupled networks. The field-theoretic framework also exposes fluctuation corrections to the mean field theory. We find that due to the spike reset, fluctuations suppress activity. We then examine the roles of spike resets and recurrent inhibition in stabilizing network activity. We calculate the phase diagram for inhibitory stabilization and find that an inhibition-stabilized regime occurs in wide regions of parameter space, consistent with experimental reports of inhibitory stabilization in diverse brain regions. Fluctuations narrow the region of inhibitory stabilization, consistent with their role in suppressing activity through spike resets.


I. INTRODUCTION
The activity of neuronal populations underlies sensory, motor, and cognitive functions. Mathematical theories for predicting the macroscopic activity of neural populations are a core tool of computational neuroscience, psychology, and psychiatry [1][2][3][4]. These theories typically rely on neural activity equations, with variants also called rate equations, neural mass equations or, if placed on a spatial domain, neural field equations:

τ ∂_t v = −v + E + J * ϕ(v),   (1)

where bold terms denote a vector or matrix-valued function, ϕ is a single-unit nonlinearity applied elementwise, ∂_t is the derivative with respect to time, and * is a matrix convolution: (J * ϕ(v))_i = Σ_j ∫ ds J_ij(s) ϕ(v_j(t − s)).
These and similar equations are commonly understood as a coarse-grained model for large populations of neurons [5][6][7][8][9]. Formally, they are a mean field theory for populations of neurons that switch between discrete active and quiescent states [10][11][12][13], or for generalized linear point process models (Eq. 9). Biological neurons' membrane voltages, however, have complex nonlinear dynamics [14]. Neural field equations have been supplemented with some biophysical detail in an ad hoc fashion [3]. A principled mean field theory of more biophysical neuron models would expose how single-neuron biophysics shape macroscopic population activity [15].

* gkocker@bu.edu
Integrate-and-fire models, which replace the nonlinear dynamics of spike generation by a simple fire-and-reset rule for the membrane voltage, are fruitful tools for investigating how network structure and synaptic and neuronal biophysics shape macroscopic activity [16,17]. The classic mean field theory of integrate-and-fire networks focuses on the density of membrane voltages across a population [18]. If the net recurrent input to each neuron is a white Gaussian process, the membrane voltage density obeys a Fokker-Planck partial differential equation [19]. Numerical or special-function solutions of that Fokker-Planck equation expose steady-state and weakly nonequilibrium population firing rates and pairwise statistics [20][21][22][23][24].
The assumption of white Gaussian input currents is, however, inconsistent with the resulting temporally colored spike train statistics [25]. In some cases, the Fokker-Planck approach for the population voltage density can be extended to temporally structured fluctuations [26][27][28]. Alternatively, for generalized integrate-and-fire neurons with stochastic spike emission, population firing rates and pairwise statistics can be predicted from the density of inter-spike times rather than the density of membrane voltages [29][30][31][32][33][34]. Population density approaches expose approximate low-dimensional dynamics through eigenfunctions of the density evolution operator [35,36].
The joint density functional exposes a new, simple deterministic mean field theory for stochastic integrate-and-fire networks: activity equations like Eq. 1 with an additional rate-dependent leak. This novel nonlinearity qualitatively shapes networks' macroscopic dynamics. We study networks in an increasing order of complexity, progressing from uncoupled neurons to single-population recurrent networks and then networks with multiple cell types. Spike resets can stabilize strongly coupled excitatory networks with unbounded spike intensity functions. We uncover bistable regimes in homogeneous and excitatory-inhibitory networks. Examining the impact of fluctuations on the activity using renewal theory and a self-consistent Gaussian approximation with colored noise, we find that due to the nonlinearity of the spike reset, fluctuations suppress activity.
In the classic neural activity equations, inhibitory feedback is necessary to stabilize strong recurrent excitation [8,59]. A paradoxical reduction of inhibitory activity after inhibitory stimulation is a signature of an inhibition-stabilized regime [60] and is observed in diverse mammalian cortices [61][62][63][64]. We find that the phase diagram for excitatory-inhibitory networks includes wide regions of paradoxical responses, suggesting a generic mechanism for their widespread experimental observation. Spiking fluctuations narrow the region of inhibitory stabilization, consistent with their intrinsically stabilizing effect through resets of the membrane voltage.

II. STOCHASTIC INTEGRATE-AND-FIRE MODEL
We introduce the stochastic leaky integrate-and-fire (LIF) model in discrete time first and then take a continuous-time limit. At each small time step t ∈ [T], of width dt, neuron i ∈ [N] generates dn_it ∈ {0, 1} spikes. (n_it is the cumulative spike count of neuron i at time t.) Neuron i receives inputs dn through weighted synaptic filters J. It also has a resting voltage E_i, which may depend on external applied currents. We take dn_it to be generated as a Bernoulli random variable with spike probability f(v_it) dt, for some intensity function 0 ≤ f(v) ≤ dt⁻¹. After a spike is emitted, that neuron's membrane voltage is reset to within O(dt) of the reset value r. If f(v) = θ(v − b)/dt, where θ(x) is the Heaviside step function, the deterministic LIF neuron with threshold b is recovered [65]. In the continuous-time limit (Appendix A),

τ ∂_t v_i(t) = −v_i(t) + E_i + Σ_j (J_ij * ṅ_j)(t) − (v_i(t) − r) ṅ_i(t+).   (2)

Here, ṅ_i(t+) ≡ ∂_t n_i(t+), and t+ = t + ϵ for an infinitesimal ϵ > 0, so the membrane potential is reset at time t by an immediately preceding spike. Each ṅ_i(t) is an inhomogeneous Poisson process with intensity f(v_i(t)). The Poisson spike emission arises as the continuous-time limit of the discrete-time Bernoulli spike train. The last term in Eq. 2 is the reset of the membrane voltage after a spike. This nonlinear coupling between the spike train and membrane voltage is the key feature of this model compared to generalized linear models. (See Appendix B for a discussion of absolute refractory periods in this model.) We will non-dimensionalize the model, measuring time relative to τ and shifting v and E by the reset r.
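The discrete-time construction above can be simulated directly. The sketch below is our own minimal implementation (parameter values and function name are ours, not from the text): Bernoulli spike emission with a threshold-linear intensity f(v) = ⌊v − 1⌋₊, hard reset to the non-dimensionalized reset value 0, and leaky integration toward the drive E.

```python
import numpy as np

def simulate_stochastic_lif(E, T=500.0, dt=1e-3, seed=0):
    """Monte Carlo simulation of a single stochastic LIF neuron in
    discrete time (Eq. 2 with no coupling, non-dimensionalized so
    tau = 1 and the reset value is 0).  Spikes are Bernoulli with
    probability f(v)*dt, with threshold-linear f(v) = max(v - 1, 0)."""
    rng = np.random.default_rng(seed)
    v, spikes = 0.0, 0
    for _ in range(int(T / dt)):
        f = max(v - 1.0, 0.0)          # threshold-linear intensity
        if rng.random() < f * dt:      # Bernoulli spike emission
            spikes += 1
            v = 0.0                    # hard reset of the membrane voltage
        else:
            v += dt * (E - v)          # leaky integration toward E
    return spikes / T                  # empirical firing rate

empirical_rate = simulate_stochastic_lif(E=4.0)
mean_field_rate = np.sqrt(4.0) - 1.0   # Sec. III mean field prediction, f = sqrt(E) - 1
```

For E = 4 the empirical rate falls below the mean field prediction √E − 1 derived in Sec. III, illustrating the fluctuation-induced suppression of activity discussed there.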
Eq. 2 is a set of coupled stochastic differential equations with multiplicative Poisson noise. The expected trajectory obeys

∂_t ⟨v_i⟩ = −⟨v_i⟩ + E_i + Σ_j J_ij * ⟨ṅ_j⟩ − ⟨v_i⟩⟨ṅ_i⟩ − ⟨⟨v_i ṅ_i⟩⟩,   (3)

where ⟨⟩ denotes a moment and ⟨⟨⟩⟩ denotes a cumulant. (Here we suppress the explicit time dependencies, as well as the infinitesimal time shift in the reset term. Moving forwards, we will often continue to suppress those.) To compute those cumulants requires the joint density functional of the membrane voltages and spike trains. In the response-variable path integral formalism, it is (Appendix A):

p[n, v] ∝ ∫ D[ñ, ṽ] e^{−S[ṽ, v, ñ, n]},  S = ṽᵀ(∂_t v + v − E − J * ṅ + v ṅ(t+)) + ñᵀṅ − (e^ñ − 1)ᵀ f.   (4)

Here, xᵀy = Σ_i ∫ dt x_i(t) y_i(t) is the functional inner product and f_i(t) = f(v_i(t)). S is the action functional. ñ, ṽ are purely imaginary auxiliary variables, called the response variables because joint moments with them measure responses to fluctuations in the activity.
This density has the N-dimensional deterministic mean field theory

∂_t v_i = −v_i + E_i + Σ_j J_ij * f(v_j) − v_i f(v_i).   (5)

The mean field value of ṅ_i is f_i = f(v_i). The N-dimensional mean field theory is an approximation of Eq. 3. If we assume that (1) the expectation of the spike trains is ⟨ṅ⟩ = f(⟨v⟩), and (2) the spikes and membrane voltage are independent, so that the joint cumulant ⟨⟨ṅ v⟩⟩ = 0, then Eq. 3 reduces to Eq. 5. Assumption (1) is only correct if f is linear. Assumption (2) is generally incorrect, although it may be a good approximation if ⟨v⟩⟨ṅ⟩ ≫ ⟨⟨ṅ v⟩⟩. Formally, we expand the configuration variables v, ṅ around their mean values. The mean field theory is then the result of a saddle point approximation for integrals over the fluctuations (Appendix E). This corresponds to assuming that fluctuations are negligible, so that the density concentrates on the mean field trajectory or, equivalently, truncating the action at linear order in ñ. This implies assumptions (1) and (2), so Eq. 3 reduces to Eq. 5. This approach also exposes fluctuation corrections to the mean field theory. We will see in the next section that the two nonlinearities in Eq. 2 impart different fluctuation corrections to the mean field theory. First, we compare the mean field theory, Eq. 5, to the classic activity equations, Eq. 1.
The mean field dynamics of Eq. 5 differ from Eq. 1 in two ways. The first is the presence of the reset term −f(v_i) v_i. The second is in the interpretation of the nonlinearity f. Here, f determines the instantaneous spike emission probability as a function of the membrane voltage and is typically required to be non-saturating, so that the neuron is guaranteed to spike if v_i → ∞. (This is not mathematically necessary; f could be chosen to saturate at a finite value. In discrete time, f must be bounded by 1/dt so the spike probability does not exceed 1.) In the microscopic binary switching model underlying Eq. 1, the nonlinearity ϕ determines the single-neuron transition rates from quiescence to activity and is typically chosen as a sigmoid to prevent unbounded activity. In either case, the nonlinearity f or ϕ is a property of individual neurons.
Can we map the new mean field theory, Eq. 5, onto the classic activity equations, Eq. 1, with an effective nonlinearity ϕ that includes the effect of the rate-dependent leak? Requiring J * ϕ = J * f − vf, with (vf)_i = v_i f(v_i), we find that if the coupling J has a left inverse J⁺,

ϕ(v) = f(v) − J⁺ (vf).   (6)

So to map the mean field theory of Eq. 5 onto the classic activity equations, the effective nonlinearity ϕ depends explicitly on the coupling J; it is no longer a single-neuron nonlinearity. If there are linear self-interactions and inter-neuronal coupling is weak, so that J is diagonally dominant, the effective nonlinearity will be approximately a single-neuron property.
The other classic form of rate equation is

τ ∂_t v = −v + ϕ(J * v + E).

This is also a mean field theory of binary switching neurons [11][12][13]. Here, v is commonly understood as a mean field description of the firing rate or proportion of active neurons in a population, rather than the membrane potential or synaptic drive [8,9]. The two types of activity equation differ in their assumptions about the dominant synaptic or neuronal timescales [13,66].
To map Eq. 5 onto this form would require ϕ(J * v + E) to reproduce J * f(v) − vf(v). In general, this may require the nonlinearity to be a function of the coupling operator, activity variable, and baseline drive separately, rather than a function of their sum.
Mapping Eq. 5 onto the classic activity equations can thus introduce nonlinearities tailored to a specific LIF network, rather than single-neuron input-rate functions. This mapping is, however, not necessary. The mean field dynamics of Eq. 5 are amenable to direct analysis.

III. IMPACT OF SPIKE RESET AND FLUCTUATIONS ON SINGLE-NEURON ACTIVITY
We now examine the steady-state input-rate transfer of a single neuron or, equivalently, an uncoupled population. The mean field firing rate, f, is given by equilibria of Eq. 5 with J = 0 and constant E. We consider neurons with threshold-power-law spike probability functions, f(v) = ⌊v − 1⌋₊^a, which match the effective nonlinearity of mechanistic spiking models and biological neurons in fluctuation-driven regimes [67][68][69][70][71]. (The membrane voltage has been non-dimensionalized to set the threshold for spike generation at v = 1.) For simplicity, we take a threshold-linear neuron with f(v) = ⌊v − 1⌋₊, so the equilibrium solution to the mean field equation is

v = √E,  f = √E − 1  for E > 1.   (7)

The mean field theory for the stochastic LIF neuron predicts its equilibrium firing rate as a function of its membrane voltage (Fig. 1b, black line vs dots). At higher rates, Eq. 7 overpredicts the true firing rates. Since the mean field theory neglects all fluctuations, fluctuations suppress activity in the stochastic LIF model. For comparison, consider a stochastic LIF model with a linear reset: each spike causes a decrease in the membrane voltage of size r (Fig. 1a, blue) [72]. The action for that model replaces the nonlinear reset term of Eq. 4 with the linear reset −r ṅ (Eq. 8), with the N-dimensional mean field theory

∂_t v_i = −v_i + E_i + Σ_j J_ij * f(v_j) − r f(v_i).   (9)

This has a similar form to the classic activity equation, Eq. 1, and can be directly mapped onto it with the substitution J_ii(s) → J_ii(s) − rδ(s). For this reason, we say that Eq. 1 is a mean field theory for a stochastic LIF neuron with linear resets, which is an example of a generalized linear model or 0th-order spike response model [73]. The mean field firing rate of the uncoupled linear-reset model, with r = 1, is

f_L = (E − 1)/2.   (10)

[Fig. 1, caption excerpt: dotted: one-loop (Eq. 12); solid: exact renewal theory prediction from Eq. 17; d) difference between the one-loop and mean-field rates for the three models.]
For a peri-threshold stimulus, E = 1 + ϵ in Eqs. 7 and 10, f = ϵ/2 + O(ϵ²) ≈ f_L, and the mean field theories of the stochastic LIF and linear-reset models match for infinitesimal firing rates. At finite rates, however, the linear-reset model provides a poor prediction for the stochastic LIF neuron (Fig. 1b, blue vs black). Instead of matching the intensity functions of the two models, we could match their mean-field membrane voltages by giving the linear-reset model the intensity function f_M(v) = v f(v). (f(v) is the intensity function of the stochastic LIF neuron.) The mean-field rate of this matched linear-reset model is

f_M = E − √E,  with mean-field voltage v = √E.   (11)

For the matched linear-reset model, the mean field firing rate underpredicts the true activity level (Fig. 1b, orange line vs dots), so fluctuations promote activity. Why do fluctuations suppress activity in the stochastic LIF model but promote activity in the matched linear-reset model? In ∂_t⟨v⟩, we need to account for (1) the nonlinearity in the intensity function and (2) the nonlinear spike reset. To that end, we expand the membrane voltage and spike trains around their means to derive an expansion of the action that self-consistently accounts for the impact of fluctuations (the loop expansion of the effective action; Appendix E). This allows us to derive diagrammatic corrections to the mean field theory. Loop diagrams measure the influence of higher-order activity statistics on lower-order statistics. There are loop corrections to the mean field theory whenever the model has a nonlinearity.
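The three mean-field rate predictions can be compared numerically. This is a minimal sketch under the assumptions above (threshold-linear f, reset size r = 1; the closed forms follow from the equilibrium conditions v = √E for the stochastic LIF and matched models and v = E − f for the plain linear-reset model; the function name is ours):

```python
import numpy as np

def mean_field_rates(E):
    """Equilibrium mean-field rates of the three uncoupled models for
    drive E > 1, with threshold-linear f and reset size r = 1."""
    f_lif = np.sqrt(E) - 1.0   # stochastic LIF (Eq. 7): v* = sqrt(E)
    f_lin = (E - 1.0) / 2.0    # linear reset (Eq. 10): v* = (E + 1)/2
    f_mat = E - np.sqrt(E)     # matched linear reset (Eq. 11): f_M = v* f(v*)
    return f_lif, f_lin, f_mat
```

Near threshold (E = 1 + ϵ) all three reduce to ϵ/2, while at finite drive they diverge from one another, as discussed in the text.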
The stochastic LIF has two nonlinearities: the intensity function and the nonlinear spike reset. These give rise to fluctuation corrections in the mean voltage and rate, collected in the one-loop equations of motion (Eq. 12). Without the one-loop diagrams, these reduce to the mean field theory of Eq. 5 with J = 0. The one-loop diagrams measure the impact of two-point fluctuations on the mean through the two nonlinearities of the intensity function and spike reset.
In field-theoretic terms, these loop diagrams represent proper vertex corrections to the effective action (Appendix E). We can also understand them by comparing Eq. 3 and Eq. 12. In their first lines, the loop diagram is an approximation of ⟨⟨ṅ v⟩⟩. In the second line of Eq. 12, the loop diagram originates in a Taylor expansion of f(v) around f(⟨v⟩) in Eq. 3; the loop diagram in the second line approximates (f^(2)/2) ⟨⟨v²⟩⟩. We next discuss these approximations.
The edges in the Feynman diagrams correspond to factors of the linear response of the configuration to a fluctuation, ∆, also called a propagator. Since the model has two configuration variables, each with a corresponding response variable, there are four types of propagator: 1. the spike response to a spike fluctuation, ∆_{n,ñ}; 2. the voltage response to a spike fluctuation, ∆_{v,ñ}; 3. the spike response to a voltage fluctuation, ∆_{n,ṽ}; and 4. the voltage response to a voltage fluctuation, ∆_{v,ṽ}.
We represent each propagator with its own edge type. [Diagrammatic edge notation for ∆_{n,ñ}, ∆_{v,ñ}, ∆_{n,ṽ}, and ∆_{v,ṽ} not reproduced.] Only the first two of these edges appear in the one-loop equations of motion for the mean voltage and rate. One vertex represents the intensity, f(v). Each diagram also has a second vertex; these vertices have different origins in the two diagrams, corresponding to either the spike reset or the intensity function. (The two types of vertex can be distinguished by their incoming edges.) The definitions of the linear response functions for the stochastic LIF model are given in Appendix D 1, along with the Feynman rules for perturbative corrections to the mean field theory (see also Appendix E). In Eq. 3, at a stationary state, the loop diagram approximates the cumulant ⟨⟨ṅ(t) v(t+)⟩⟩ (Eq. 14). We have written the cumulant for a single neuron, dropping the neuron index implicit in Eq. 3, and leveraged the stationarity assumption so that ∆(t, t′) = ∆(t − t′), with the Fourier transform convention F[∆(s)] = ∫ ds ∆(s) e^{−iωs}. The limit t+ → t is taken from the right, t+ > t. f^(p) is the order-p derivative of the intensity function f, evaluated at v. This diagram, and the vertex in it, arise from the spike reset term of Eq. 2. If the neuron is in an active steady state, v > 1 and f(v), ⟨ṅ⟩ > 0. If f^(1) > 0, every term in this approximate cumulant is positive. It thus decreases ∂_t v in Eq. 12.
The equation of motion governing n also has a loop correction. In the second line of Eq. 3, we expand f(v) around the mean. Truncating at second order in a stationary state yields Eq. 15. This diagram exists in both the stochastic LIF and linear-reset models. It arises from the nonlinear intensity function, f, and vanishes almost everywhere if f is threshold-linear. The vertex in it carries the factor f^(2)/2. This contribution impacts the mapping from membrane potential to firing rate. Similarly to above, the approximate cumulant ⟨⟨v(t) v(t)⟩⟩ is positive in an active state with non-decreasing f. So if f^(2) > 0, fluctuations will promote activity, and vice versa. The curvature also determines the magnitude of this contribution.
Evaluating the one-loop predictions at an equilibrium of v, the stochastic LIF with a threshold-linear intensity function has the one-loop equilibrium of Eq. 16. Comparing the one-loop and mean field predictions, we see that the negative vertex factor leads to a suppression of activity (Fig. 1c, dotted vs dashed; Fig. 1d, black). This occurs because the nonlinear spike reset negatively couples the mean membrane voltage to joint fluctuations in the spikes and membrane voltage.
We can also calculate the rate of the threshold-linear stochastic LIF neuron exactly. Due to the nonlinear reset mechanism, the spike train is a renewal process. Standard results of renewal theory expose its rate [74]. With a constant drive E, the membrane voltage evolves after a spike at time t as v(t + s) = E (1 − e^{−s}), with v(t) = 0. The time-averaged firing rate is the inverse of the mean interspike interval: ⟨ṅ⟩ = 1/⟨s⟩. For threshold-linear f, the mean interspike interval is

⟨s⟩ = ln(E/(E − 1)) + e^{E−1} (E − 1)^{1−E} γ(E − 1, E − 1),   (17)

where γ(x, y) is the lower incomplete gamma function. The term ln(E/(E − 1)) is the time for v(t) to reach the threshold value of 1; the second term is the mean first spike time after that. The one-loop prediction matches the true f-I curve better than the mean field theory (Fig. 1c).
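The exact rate can be evaluated with standard special functions. A sketch (our own function names; note that SciPy's `gammainc` is the regularized lower incomplete gamma, so the unregularized γ(a, x) of Eq. 17 is `gamma(a) * gammainc(a, x)`):

```python
import numpy as np
from scipy.special import gamma, gammainc

def mean_isi(E):
    """Mean interspike interval of the threshold-linear stochastic LIF
    neuron at constant drive E > 1, following the renewal-theory
    expression of Eq. 17 as reconstructed above."""
    a = E - 1.0
    t_to_threshold = np.log(E / a)                        # time for v to reach 1
    t_first_spike = np.exp(a) * a**(-a) * gamma(a) * gammainc(a, a)
    return t_to_threshold + t_first_spike

exact_rate = 1.0 / mean_isi(4.0)
mean_field_rate = np.sqrt(4.0) - 1.0   # Eq. 7 overpredicts the exact rate
```

For E = 4 the exact rate is about 0.87, below the mean field prediction of 1, consistent with the fluctuation suppression discussed above.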
In the uncoupled linear-reset model, the one-loop equations of motion are given in Eq. 18. The linear-reset model has the same four types of propagator as the stochastic LIF, although their definitions differ between the two models due to the different spike reset mechanisms (Appendix F). The matched linear-reset model (f_M(v) = v f(v)) has the one-loop equilibrium of Eq. 19. The term (1/4)⌊v − 1⌋₊ is the result of the loop correction to the firing rate, leading to an increase in the rate compared to the mean-field rate. The expected membrane potential still reaches the threshold value v = 1 at E = 1, and the one-loop membrane voltage is greater than the mean-field membrane voltage √E. Close to threshold (E = 1 + ϵ), the one-loop voltage is v = 1 + 4ϵ/9 + O(ϵ²), while for the mean-field theory, v = 1 + ϵ/2 + O(ϵ²). Similarly, for large E the one-loop membrane potential is approximately 1 + √E.
In summary: fluctuations suppress activity in the stochastic LIF neuron because the nonlinear spike reset negatively couples the mean membrane voltage to joint spike-voltage fluctuations. In the linear-reset model, the only nonlinearity arises from the intensity function. To match the mean-field voltage of the stochastic LIF neuron, the linear neuron's intensity function is f_M(v) = v f(v), where f(v) is the stochastic LIF intensity function. Since f(v) has non-negative curvature, f_M has positive curvature. So, fluctuations of the membrane voltage promote spiking activity in the matched linear-reset model. A stochastic LIF network with a nonlinear intensity function may have contributions from both diagrams, so that the two nonlinearities compete to determine whether fluctuations suppress or promote activity.

IV. HOMOGENEOUS NETWORKS
Biological neural networks are coupled. We will seek a low-dimensional description of the population activity that accounts for synaptic coupling. Here, we study the simplest case: networks where the connectivity between neurons is homogeneous, so we take the synaptic weights between neurons from a distribution with negligible second- and higher-order cumulants. We assume that the mean synaptic weight is O(1/N), so the total synaptic weight onto a neuron is O(1). An exemplar of this case is a network with weak (J_ij ∼ 1/N) but potentially dense (connection probability ∼ 1) connections (Appendix C). To examine the interaction between synaptic connectivity, subthreshold dynamics, and stochastic spike emission in shaping network activity, we average the partition functional for the activity (equivalently, the moment generating functional) over realizations of the synaptic connectivity (Appendix C). In the limit of large N, the density factorizes over the neurons, yielding the partition functional of Eq. 20: a population of independent stochastic LIF neurons, each receiving a self-consistent mean field input J * ⟨ṅ⟩, where ⟨ṅ⟩ is the population-averaged spike train. For self-averaging connectivity, the result describes the typical behavior of an individual network, and the population average ⟨ṅ⟩ matches the ensemble-averaged rate. Since the density factorizes, we drop the neuron index. Robert & Touboul proved convergence to these mean field dynamics [75]. The connectivity has been reduced to its mean, J, which would be equivalent to assuming a network with all-to-all connectivity. J can be either positive or negative. If the connectivity had non-negligible higher cumulants, these would give rise to corresponding fluctuations in the membrane potential (Appendix C). This population-averaged mean field theory is one-dimensional not because the neurons are synchronized, but because they spike independently given a self-consistent mean field input.
If the network is in an asynchronous state, so ⟨ṅ⟩ is constant in time, then after a spike at time t the membrane voltage obeys

v(t + s) = (E + J⟨ṅ⟩)(1 − e^{−s}),   (21)

and the spike train is a renewal process. (We write J for the integral of the coupling kernel J(s).) With a threshold-linear intensity function, the mean interspike interval is

⟨s⟩ = ln(C/(C − 1)) + e^{C−1} (C − 1)^{1−C} γ(C − 1, C − 1),   (22)

where C = E + J⟨ṅ⟩. In a stationary state, the rate is the inverse of the mean interspike interval, ⟨ṅ⟩ = 1/⟨s⟩, which allows us to find self-consistent solutions of Eq. 22 numerically.
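The self-consistency condition can be solved by fixed-point iteration: the rate determines the total drive C = E + J⟨ṅ⟩, which in turn determines the rate through the mean interspike interval. A minimal sketch, assuming the renewal-theory expression for the mean ISI given above (function and parameter names are ours):

```python
import numpy as np
from scipy.special import gamma, gammainc

def mean_isi(C):
    """Mean ISI at total constant drive C > 1 (Eq. 22 as reconstructed;
    gammainc is SciPy's regularized lower incomplete gamma)."""
    a = C - 1.0
    return np.log(C / a) + np.exp(a) * a**(-a) * gamma(a) * gammainc(a, a)

def self_consistent_rate(E, J, r0=1.0, n_iter=200):
    """Fixed-point iteration for the population rate in the homogeneous
    mean field limit: rate = 1/<s(C)> with C = E + J*rate.  The starting
    point r0 selects a branch when the network is bistable."""
    r = r0
    for _ in range(n_iter):
        C = E + J * r
        if C <= 1.0:          # subthreshold total drive: quiescent state
            return 0.0
        r = 1.0 / mean_isi(C)
    return r

rate = self_consistent_rate(E=1.2, J=1.0)   # suprathreshold, monostable example
```

Starting from a high initial rate converges to the active branch; starting from zero rate with subthreshold baseline drive returns the quiescent state.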
The mean field (tree-level) equation of motion for the membrane voltage is

∂_t v = −v + E + J * f(v) − v f(v),   (23)

with f(v) the mean field approximation of ⟨ṅ⟩. As in the N-dimensional mean field theory of Eq. 5, this neglects all fluctuations, so we expect that it will not be quantitatively correct. Since the spike trains are conditionally Poisson, those fluctuations are driven by the expected intensity. We thus expect that Eq. 23 should be a good approximation when the true firing rate is low. As we will see below, it can provide a good qualitative description of the population dynamics, including bifurcations from quiescence. The leading-order description of fluctuations is given by the one-loop equations of motion (Eq. 24), with the one-loop contributions given by Eqs. 14 and 15.

V. BISTABLE ACTIVITY IN HOMOGENEOUS NETWORKS
With a threshold-linear f, f(v) = ⌊v − 1⌋₊, and pulse coupling, J(s) = Jδ(s), there are three possible steady states of Eq. 23. The first is v = E, which exists if E < 1. The other two possible steady states lie at v > 1,

v± = J/2 ± √(J²/4 − J + E),   (25)

which both exist (for E < 1) if J > 2 and E ≥ J(4 − J)/4. Whenever it exists, v− (v+) is unstable (stable). If E > 1, only v+ exists. With J > 2 and J(4 − J)/4 ≤ E < 1, both steady states exist and the firing rates are thus bistable, with v− providing a separatrix between the attractors v → E and v → v+. The mean field theory has two saddle-node bifurcation curves, where the unstable fixed point v− meets either v = E or v+ (Fig. 2a).
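The fixed-point structure is easy to enumerate numerically. A sketch (our own function name; for v > 1 the equilibrium condition 0 = −v + E + (J − v)(v − 1) rearranges to the quadratic v² − Jv + (J − E) = 0 used below):

```python
import numpy as np

def mean_field_fixed_points(E, J):
    """Steady states of the mean field equation (Eq. 23) with
    threshold-linear f and pulse coupling J(s) = J*delta(s)."""
    pts = []
    if E < 1.0:
        pts.append(E)                         # quiescent state, rate 0
    disc = J * J / 4.0 - J + E                # quarter-discriminant of the quadratic
    if disc >= 0.0:
        for v in (J / 2.0 - np.sqrt(disc), J / 2.0 + np.sqrt(disc)):
            if v > 1.0:
                pts.append(v)                 # supra-threshold state, rate v - 1
    return sorted(pts)
```

In the bistable window (e.g., E = 0.9, J = 3) this returns three fixed points, with the middle one, v−, acting as the separatrix; with J = 0 and E > 1 it recovers the single-neuron equilibrium v = √E of Eq. 7.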
These bifurcations also appear in the underlying stochastic spiking model. We simulated a network of 100 stochastic LIF neurons (Eq. 2) with Erdős-Rényi connectivity (p = 0.5) with different values of the baseline drive E and coupling strength J (marked in Fig. 2a). At times 5 and 15, we applied pulse perturbations to the baseline drive and observed monostable or bistable behavior matching the predictions of the phase diagram (Fig. 2b-d).
The mean field theory neglects all fluctuations in the spiking activity. Due to the nonlinear spike-voltage coupling imparted by the reset mechanism, those fluctuations can impact the firing rate. To determine the magnitude of fluctuation corrections, we computed bifurcation diagrams of the exact firing rate (Eq. 22; Fig. 2e, f). The mean field theory systematically overestimates the true firing rates. This implies that fluctuations in the activity suppress firing.
Similarly to the uncoupled neuron, the impact of fluctuations can be explicitly described by loop corrections to the mean field dynamics (Eq. 24); the one-loop equilibria are shifted accordingly (Eq. 27). In the model with linear resets and a threshold-linear intensity function, the mean field theory is linear in both the sub- and supra-threshold regimes and does not exhibit bistability. The classic activity equations can have bistable regimes so long as the nonlinearity saturates, e.g., [8]. Here, bistability is due to the nonlinear coupling between the spiking and membrane voltage.
The stochastic spiking network may not exhibit true bistability in the bistable regime of the deterministic mean field or one-loop approximations. Rather, the quiescent state should be truly stable, while the active state is metastable. Fluctuations in the spiking activity may drive the network into the quiescent state. In the quiescent state, there are no fluctuations, since all n-point correlation functions are sourced by the intensity f(v), which we took to be 0 for v < 1. If the intensity f(v) were small but finite for v < 1, then fluctuations could be maintained in the quiescent state and both states would be metastable. The slope of the intensity function at threshold can also play a key role in metastability of the population activity [75].

VI. MULTIPLE CELL TYPES
Biological neural networks are composed of diverse types of neurons with cell-type-specific connectivity, e.g., [76][77][78][79][80][81][82][83]. Motivated by this, we consider a network with M populations, which impose a block structure on the connectivity matrix J. The average over the connectivity proceeds as for the single population, with an order parameter for each population's mean activity. This yields an M-dimensional mean field theory. In the large-N limit, the partition functional again factorizes: for self-averaging networks, the density factorizes over the populations and neurons, so the neurons again spike independently given a self-consistent mean field input.
The typical spike train of population α (α ∈ [M]) is an inhomogeneous Poisson process. If the population-averaged activities ⟨ṅ_α⟩ are constant in time, the mean first passage times are

⟨s_α⟩ = ln(C_α/(C_α − 1)) + e^{C_α−1} (C_α − 1)^{1−C_α} γ(C_α − 1, C_α − 1),

where C_α = E + Σ_β J_αβ ⟨ṅ_β⟩. In a stationary state, the rate is ⟨ṅ_α⟩ = 1/⟨s_α⟩. The mean field approximation of the membrane voltages is

∂_t v_α = −v_α + E + Σ_β J_αβ * f(v_β) − v_α f(v_α).

The one-loop equations of motion, similarly, are given by accounting for the input across populations in Eq. 24.

VII. BISTABLE ACTIVITY IN EXCITATORY-INHIBITORY NETWORKS
Here, we consider the classic excitatory-inhibitory network with pulse coupling and mean connection strengths as in [20,21] (Fig. 3a). With input E to both populations, the mean rates of the excitatory and inhibitory populations are equal, since they receive the same external and recurrent inputs. The self-consistent fixed points with positive rates are the same as those in the single-population network with the replacement J → J(1 − g) (Eq. 25 for the mean field theory, Eq. 27 to one loop). In the mean field theory, both fixed points exist if E < 1 and J > 2 and

g ≤ 1 − (2 + 2√(1 − E))/J.

Here we have highlighted the requirement on g as a function of the other model parameters; J > 2 is a necessary condition for bistability in the mean field theory, but depending on the values of g and E, the greatest lower bound for J may be above 2. With both population voltages under threshold, there is the stable fixed point v = E if E ≤ 1. If E > 1, only v+ exists. The Jacobian eigenvalue J(1 − g) − 2v is positive for v− and negative for v+. So if these fixed points exist, the one at higher v is stable and the other is a saddle. Similarly, in the one-loop theory, both fixed points exist if E < 1 and J > 9/4, with an analogous bound on g. As for the single-population network, the existence conditions for these fixed points define saddle-node bifurcation curves for the mean field and one-loop theories (Fig. 3b-d). If the inhibitory coupling strength is sufficiently low, we have the same types of bifurcation curves as in the single-population network (Fig. 3c). If the inhibitory coupling g is too strong, the only stable equilibrium is the low-rate state (Fig. 3b, d).
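The mean-field bistability boundary in g follows from the single-population condition with the effective coupling J_eff = J(1 − g): requiring J_eff > 2 and E ≥ J_eff(4 − J_eff)/4 rearranges to J_eff ≥ 2 + 2√(1 − E) for E < 1 (our rearrangement; the function name is ours):

```python
import numpy as np

def g_max_bistable(E, J):
    """Largest relative inhibitory strength g at which the mean-field
    E-I network remains bistable (valid for E < 1), from requiring
    J_eff = J*(1 - g) >= 2 + 2*sqrt(1 - E)."""
    return 1.0 - (2.0 + 2.0 * np.sqrt(1.0 - E)) / J
```

At the returned g, the two supra-threshold fixed points merge in a saddle-node bifurcation: the discriminant J_eff²/4 − J_eff + E of the fixed-point quadratic vanishes exactly there.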
These bifurcations also appear in the stochastic spiking network with block-Erdős-Rényi connectivity (Fig. 3e, f; network parameters are given in the caption).

VIII. FLUCTUATIONS
The temporal structure of fluctuations can shape sensory codes [84][85][86] and determine neural circuit structures through spike timing-dependent plasticity [87][88][89][90][91][92][93]. The classic Fokker-Planck mean field theory of integrate-and-fire networks assumes that the membrane voltages experience white Gaussian noise [20][21][22]. The resulting predictions for the spike trains' power spectra are not white, however, so these predictions are not self-consistent [25]. In the stochastic integrate-and-fire model, the output spike trains are also not white. In the excitatory-inhibitory network, for example, the population-averaged power spectrum exhibits a high-pass shape with a slight resonance (Fig. 4a, dots). This is similar to the shape of the power spectrum of networks of deterministic integrate-and-fire neurons with white noise inputs [23]. The one-loop equations of motion account for Gaussian fluctuations but do not make any assumptions about their temporal structure. We will next discuss the temporal structure of fluctuations in this Gaussian approximation and the full prediction from renewal theory.
The exact mean field theory, Eq. 29, describes an inhomogeneous Poisson process receiving the self-consistent mean input Σ_β J_αβ ⟨ṅ_β⟩. Substituting this into Eq. 21 yields the post-spike membrane voltage, which defines the intensities f(v_α(t)). The interspike interval density is

p(s) = (C_α − 1)(1 − e^{−(s−s_α)}) exp(−(C_α − 1)(s − s_α − 1 + e^{−(s−s_α)}))  for s ≥ s_α,   (35)

where C_α = E + Σ_β J_αβ ⟨ṅ_β⟩ and s_α = ln(C_α/(C_α − 1)) is the time for the voltage to reach threshold. This provides an exact prediction for the interspike interval density in the limit N → ∞, accurate for populations of a few hundred neurons (Fig. 4b). The interspike interval distribution defines the spike train power spectrum C(ω) of a renewal process [94]:

C(ω) = ⟨ṅ⟩ (1 − |p̃(ω)|²) / |1 − p̃(ω)|²,   (36)

where p̃(ω) is the Fourier transform of the interspike interval density. Together, Eqs. 35 and 36 provide an exact prediction for the typical power spectrum in a large homogeneous network. Computing the Fourier transform p̃(ω) numerically, we see that these predictions are quantitatively accurate in simulations of a few hundred neurons (Fig. 4a, dots vs solid).
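The renewal-theory prediction can be evaluated numerically: build the ISI density from the post-spike voltage trajectory, Fourier transform it, and apply the renewal spectrum formula. A sketch under the assumptions above (threshold-linear intensity, constant total drive C > 1; function names are ours):

```python
import numpy as np

def isi_density(s, C):
    """ISI density of the threshold-linear stochastic LIF renewal
    process at total constant drive C > 1 (the form of Eq. 35 derived
    here): zero until the threshold-crossing time s0 = log(C/(C-1)),
    then hazard (C-1)*(1 - exp(-(s - s0))) times the survival factor."""
    s0 = np.log(C / (C - 1.0))
    u = np.clip(s - s0, 0.0, None)
    hazard = (C - 1.0) * (1.0 - np.exp(-u))
    survival = np.exp(-(C - 1.0) * (u - 1.0 + np.exp(-u)))
    return np.where(s > s0, hazard * survival, 0.0)

def renewal_spectrum(omega, C, s_max=50.0, ds=1e-3):
    """Spike train power spectrum of a renewal process (Eq. 36):
    rate * (1 - |p(w)|^2) / |1 - p(w)|^2, with p(w) the Fourier
    transform of the ISI density, computed by direct Riemann summation."""
    s = np.arange(0.0, s_max, ds)
    p = isi_density(s, C)
    rate = 1.0 / (np.sum(s * p) * ds)          # 1 / mean ISI
    pw = np.sum(p * np.exp(-1j * omega * s)) * ds
    return rate * (1.0 - np.abs(pw) ** 2) / np.abs(1.0 - pw) ** 2
```

At high frequencies the spectrum approaches the firing rate, reproducing the high-pass shape seen in Fig. 4a. (Eq. 36 is singular at ω = 0, where p̃(0) = 1; the zero-frequency limit should be taken analytically.)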
For analytic approximations of the power spectrum, we turn to the field-theoretic formulation. If the fluctuations or the nonlinearity are weak, we can expand the density perturbatively around a solution of the deterministic mean field theory (Appendix D). The connected two-point function of the spike trains can then be calculated diagrammatically; with a threshold-linear intensity function, it is given by the diagrammatic expansion of Eq. 37. The expansion may contain terms with up to infinitely many loops, inducing dependence on n-point correlation functions of all orders. With an intensity function nonlinear at the mean voltage there would be additional diagrams, containing internal vertices with multiple incoming edges. The same is true for any cumulant of the activity. The simplest approximation of the two-point correlation is the tree-level approximation given by the first diagram of Eq. 37, where v̄_α is a solution to the mean field equation for population α. At ω = 0, this yields ⟨⟨ṅ²_α⟩⟩(0) ≈ f(v̄_α)/4. For ω → ∞, ⟨⟨ṅ²_α⟩⟩(ω) → f(v̄_α), the mean-field approximation to the intensity. This simple approximation captures the high-pass nature of the power spectrum but is not quantitatively accurate (Fig. 4a, dotted line).
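The two quoted limits can be checked numerically. The sketch below assumes the threshold-linear intensity f(v) = v − 1 above threshold (so f' = 1, and a suprathreshold mean field fixed point satisfies n̄ = f(v̄) = v̄ − 1) and reads the first diagram of Eq. 37 as C(ω) ≈ f(v̄) |∆_{n,ñ}(ω)|² with the propagator from Table II; this reading reproduces both the ω = 0 and ω → ∞ limits above, but is an assumption of this sketch rather than a formula quoted from the text.

```python
import numpy as np

# Tree-level spectrum sketch, assuming f(v) = v - 1 (f' = 1) and the
# Table II propagator Delta_{n,ntilde}(w) = (1 + nbar + iw) / (1 + nbar + f' vbar + iw),
# with nbar = vbar - 1 at a suprathreshold mean field fixed point.

def tree_level_spectrum(vbar, omega):
    nbar = vbar - 1.0                      # stationary rate, f(vbar)
    fprime = 1.0                           # threshold-linear slope
    prop = (1 + nbar + 1j * omega) / (1 + nbar + fprime * vbar + 1j * omega)
    return nbar * np.abs(prop) ** 2

vbar = 1.5
low = tree_level_spectrum(vbar, 0.0)       # should approach f(vbar) / 4
high = tree_level_spectrum(vbar, 1e6)      # should approach f(vbar)
```

The high-pass shape follows directly: the propagator's magnitude rises monotonically from 1/2 at zero frequency to 1 at high frequency.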
The one-loop predictions for the mean membrane voltage and rate account for second-order fluctuations to tree level. For the spike train power spectrum this again corresponds to Eq. 38, but with v̄ a solution to the one-loop equations of motion. This provides a more accurate prediction of the power spectrum (Fig. 4a, dashed line) due to the improved estimate of the intensity, f(v̄_α).
As the coupling or input strength brings the network to a bifurcation, the spike train variance ⟨⟨ṅ²_α⟩⟩₀(0) undergoes a sharp transition from 0 in the quiescent state to positive values in the active state (Fig. 4c-f). The transition in the spike train variance follows that in the rate, since all correlation functions are sourced by the intensity f(v̄).

IX. INHIBITORY STABILIZATION
In recent years, a body of work has emerged suggesting that mammalian cortex resides in an inhibition-stabilized regime [8,59,[61][62][63][64]. There are two requirements for an excitatory-inhibitory network to be inhibition-stabilized: the network must occupy a stable attractor, but the excitatory population would be unstable on its own. These requirements are difficult to test directly in experiments. Fortunately, inhibition-stabilized fixed points have another signature: paradoxical responses to inhibitory neuron stimulation. In an inhibition-stabilized network, stimulation of the inhibitory neurons leads to a paradoxical reduction of their firing rates [60]. If there are multiple inhibitory subtypes, the net inhibitory input to excitatory neurons decreases upon inhibitory neuron stimulation [95]. The widespread experimental observation of paradoxical responses, and other response patterns consistent with inhibition-stabilized networks, raises the question: is inhibitory stabilization a generic property, or does it require fine-tuned parameters towards which cortical networks develop?
The inhibition-stabilized regime, and paradoxical responses as its signature, are predictions of the classic activity equations, Eq. 1. Does an inhibition-stabilized regime exist in the mean field theory of Eqs. 31 and 32? The stability requirements are determined from the Jacobian matrix of those equations, where f_α = f(v̄_α) and f^(1)_α is the derivative of the intensity at v̄_α. For a fixed point to be inhibition-stabilized, the first element of its Jacobian must be positive (the excitatory-only subnetwork would be unstable), but the maximum real part of its eigenvalues negative (the full network is stable). For the threshold-linear intensity function, f^(1)(v) = θ(v − 1), where θ(x) is the Heaviside step function. This leads to the requirement that for the excitatory subnetwork to be linearly unstable with a positive firing rate, 1 < v̄_e < J/2. Do paradoxical responses to inhibitory stimulation occur in the stochastic LIF network? To investigate this, we return to the tractable threshold-linear intensity function. We allow the external input to vary between the two populations, E = (E, hE) (Fig. 5a); h controls the relative strength of the input to the inhibitory population. When both population voltages are above threshold, the mean field excitatory and inhibitory nullclines v*_α, α ∈ {e, i}, can be written in closed form. The suprathreshold inflection point of the excitatory nullcline is at v̄_e = J/2. An inhibition-stabilized fixed point must thus be on the increasing branch of the excitatory nullcline. h does not affect the excitatory nullcline but shifts the inhibitory nullcline. An increase in h will lead to a paradoxical reduction in firing rates if it shifts a stable fixed point to lower v̄_i. For example, consider the case when there is a single fixed point on the increasing side of the excitatory nullcline, to the left of its peak (Fig. 5b). Increasing h shifts the inhibitory nullcline up and to the left, moving that fixed point to a lower (v̄_e, v̄_i). Depending on the magnitude of the shift, it may also take the dynamics through a bifurcation into a bistable regime. A sufficiently large increase in h can shift the network into a regime with no excitatory activity, which can also lead to a net decrease in inhibitory rates (Fig. 5b, c).
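The nullcline-shift argument can be checked numerically. The sketch below assumes the threshold-linear intensity f(v) = v − 1 above threshold and mean-field dynamics of the form dv_e/dt = −v_e + E + J f(v_e) − gJ f(v_i) − v_e f(v_e) (and similarly for v_i with drive hE); the nullcline formulas follow from those assumed dynamics (e.g., the excitatory nullcline peaks at v_e = J/2 with the v_i value quoted below), and the parameter values are illustrative.

```python
import numpy as np

# Mean-field paradoxical response sketch for the threshold-linear model.
# Assumed suprathreshold nullclines (derived from the dynamics stated above):
#   excitatory: v_i = 1 + (E + J v_e - J - v_e^2) / (g J)
#   inhibitory: positive root of v_i^2 + gJ v_i - (hE + J v_e - J + gJ) = 0
J, g, E = 4.0, 2.0, 2.0

def exc_nullcline(ve):
    return 1.0 + (E + J * ve - J - ve ** 2) / (g * J)

def inh_nullcline(ve, h):
    return (-g * J + np.sqrt((g * J) ** 2
                             + 4 * (h * E + J * ve - J + g * J))) / 2.0

def fixed_point(h):
    # bisection on the rising branch of the excitatory nullcline, 1 < v_e < J/2
    lo, hi = 1.0, J / 2.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if exc_nullcline(mid) > inh_nullcline(mid, h):
            lo = mid
        else:
            hi = mid
    return lo, inh_nullcline(lo, h)

ve0, vi0 = fixed_point(1.0)    # baseline input to inhibition
ve1, vi1 = fixed_point(1.1)    # increased input to inhibition
paradoxical = vi1 < vi0        # inhibitory voltage (hence rate) decreases
```

With these parameters the fixed point sits on the increasing branch, and raising h moves it to lower (v_e, v_i), the paradoxical response.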
In what regions of parameter space does an inhibition-stabilized fixed point exist? As discussed above, the excitatory subnetwork is unstable with a non-zero excitatory rate only if the fixed point lies on the middle branch of the excitatory nullcline: 1 < v̄_e < J/2. The inhibitory nullcline is an increasing function of v̄_e. The excitatory nullcline increases for v*_e close to 1 and decreases for sufficiently large v*_e. At threshold (v̄_e = 1) the inhibitory nullcline must be below the excitatory nullcline (Eq. 41). If h = 1, this requirement imposes that E > 1; at E = 1 the two sides are equal, and the difference of the two sides grows as √E. The peak of the excitatory nullcline is at v*_e = J/2, where v̄_i = (J²/4 − J(1 − g) + E)/(gJ). At v̄_e = J/2, the inhibitory nullcline should be above the excitatory nullcline (Eq. 42). Together, Eqs. 41 and 42 provide sufficient conditions for a paradoxical response to inhibitory stimulation in the mean field theory. At fixed drive E, they predict a paradoxical response for sufficiently large J or g. For stronger E, these minimal couplings increase (Fig. 6a, b, dashed).
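Both defining properties of an inhibition-stabilized fixed point can also be verified directly from the Jacobian. The sketch below assumes the threshold-linear intensity f(v) = v − 1 and mean-field dynamics of the form dv_a/dt = −v_a + E_a + J f(v_e) − gJ f(v_i) − v_a f(v_a) for a ∈ {e, i}; differentiating gives the Jacobian used here, [[J − 2v_e, −gJ], [J, −gJ − 2v_i]], whose (1,1) element is positive exactly when 1 < v_e < J/2. Parameters are illustrative.

```python
import numpy as np

# Inhibition-stabilization check at the mean-field fixed point, assuming the
# threshold-linear model and the Jacobian [[J - 2 v_e, -gJ], [J, -gJ - 2 v_i]].
J, g, E, h = 4.0, 2.0, 2.0, 1.0
exc = lambda ve: 1.0 + (E + J * ve - J - ve ** 2) / (g * J)
inh = lambda ve: (-g * J + np.sqrt((g * J) ** 2
                                   + 4 * (h * E + J * ve - J + g * J))) / 2.0

lo, hi = 1.0, J / 2.0              # bracket the rising branch, 1 < v_e < J/2
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if exc(mid) > inh(mid) else (lo, mid)
ve, vi = lo, inh(lo)

jac = np.array([[J - 2 * ve, -g * J], [J, -g * J - 2 * vi]])
exc_unstable_alone = jac[0, 0] > 0                       # ISN requirement 1
fully_stable = np.linalg.eigvals(jac).real.max() < 0     # ISN requirement 2
```

For J = 4, g = 2, E = 2 the fixed point satisfies both requirements, placing it inside the inhibition-stabilized region of Fig. 6.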
To estimate how fluctuations impact inhibitory stabilization, we compute the one-loop nullclines (each with v̄_i as a function of v̄_e). The inflection point of the one-loop v̄_e nullcline is at v*_e = (1 + 4J)/10. At one loop, for the inhibitory nullcline to be below the excitatory nullcline at threshold requires 10E > 10 + gJ(−9 − 4gJ + √(1 + 80Eh + 8gJ(9 + 2gJ))) (44), and for the inhibitory nullcline to be above the excitatory nullcline at v̄_e = (1 + 4J)/10 requires 1 + 80E < 8J(9 − 2J) − g(9 + 4gJ − √(1 + 80Eh + 8J(−9 + 4J + g(9 + 2gJ)))) (45). For fixed E and J, this one-loop boundary requires a higher g (stronger inhibition) than the mean field boundary, better matching the transition observed in simulations (Fig. 6a, dashed vs solid). Similarly, for fixed E and g, the one-loop boundary is at higher J (stronger coupling) than the mean field boundary (Fig. 6b, dashed vs solid). Together, this comparison indicates that fluctuations shift the region of paradoxical responses to more strongly coupled networks. This comports with the role of fluctuations in suppressing activity.
A paradoxical response could also occur from other dynamical regimes than the single fixed point on the increasing branch of the excitatory nullcline, such as from a bistable regime. To test whether the underlying spiking model exhibits paradoxical responses, we simulated excitatory-inhibitory networks while varying J and g. For each network, we applied a perturbation of amplitude 0.1 to the inhibitory population's input and computed the inhibitory population's average firing rate before and after the perturbation. With J fixed and varying g, we observed paradoxical responses for sufficiently large g (Fig. 6a, blue). Similarly, with g fixed and varying J, we observed paradoxical responses for sufficiently large J (Fig. 6b, blue). The one-loop predictions better match the region of paradoxical responses than the mean field predictions (Fig. 6).
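The simulation protocol above can be sketched as follows. This is a minimal, illustrative implementation, not the paper's exact simulation code: spikes are Bernoulli events with probability f(v) dt, the voltage resets to 0 at a spike, and the inhibitory drive steps from hE = E to hE = E + 0.1 halfway through; the network size, duration, and seed are assumptions of the sketch.

```python
import numpy as np

# Minimal stochastic E-I LIF network with the inhibitory perturbation protocol.
rng = np.random.default_rng(0)
N, p, J, g, E = 100, 0.5, 4.0, 2.0, 2.0
dt, T = 1e-3, 50.0
steps = int(T / dt)

W = np.zeros((2 * N, 2 * N))             # columns: [excitatory | inhibitory]
W[:, :N], W[:, N:] = J / (p * N), -g * J / (p * N)
W *= rng.random((2 * N, 2 * N)) < p      # Erdos-Renyi connectivity

f = lambda v: np.maximum(v - 1.0, 0.0)   # threshold-linear intensity
v = np.zeros(2 * N)
drive = np.full(2 * N, E)
inh_spikes = [0, 0]                      # inhibitory counts, before/after step

for step in range(steps):
    if step == steps // 2:
        drive[N:] += 0.1                 # stimulate the inhibitory population
    spikes = rng.random(2 * N) < f(v) * dt
    v += dt * (-v + drive) + W @ spikes - v * spikes   # leak, input, reset
    inh_spikes[step >= steps // 2] += int(spikes[N:].sum())

rate_before, rate_after = (2 * s / (N * T) for s in inh_spikes)
```

Comparing `rate_after` to `rate_before` over many seeds and (J, g) values maps out the simulated paradoxical-response region of Fig. 6; a single run only demonstrates the protocol.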

X. DISCUSSION
We constructed a path integral representation for the joint probability density functional of the membrane voltage and spike trains of a network of stochastic LIF neurons, Eq. 4. This exposed a simple deterministic mean field theory for spiking networks: activity equations with an additional rate-dependent leak arising from the spike resetting (Eq. 5). It also exposed fluctuation corrections to the mean field theory, arising from the two nonlinearities of the intensity function and spike reset (Eq. 24), the latter of which suppresses activity (Fig. 1). These N-dimensional systems expose predictions for the activity that depend on a particular connectivity J. Large-scale electron microscopy is now revealing such wiring diagrams, e.g., [96][97][98][99][100][101][102][103][104][105][106][107][108]. Predicting the microscopic dynamics of even deterministic, threshold-linear neuronal networks is challenging [109][110][111][112]. Statistical approaches focusing on stochastic models allow the prediction of correlations between specific neurons' activity through linear response or Hawkes theory and its generalizations [55,[113][114][115][116][117][118]. Eq. 4 provides a starting point for making such predictions in a stochastic LIF network.
Here, we instead used the path integral representation to derive a population-averaged stochastic field theory for large networks with homogeneous coupling, including multi-population systems like excitatory-inhibitory networks. That stochastic field theory was of the form of a renewal process with a self-consistent input (Eqs. 20, 29). Robert & Touboul studied the homogeneous stochastic LIF network rigorously [75]. They proved that the mean field process, Eq. 20, can have one or several invariant densities depending on the form of the firing function. The stochastic field theory admits low-dimensional mean field and loop approximations of the voltage and rate as simple functions of the model's parameters. Using these, we demonstrated bistability of the deterministic mean field theory and its extension to the stochastic system, and studied the contributions of recurrent inhibition and spike resetting to stabilizing network activity. We also found that fluctuations suppress activity through the spike reset in coupled networks (Figs. 2e-f, 4c-e). Excitatory-inhibitory networks of deterministic integrate-and-fire neurons can also exhibit bistable equilibrium rates if the inhibition is not too strong [21,119]. The field-theoretic description here does not rely on a white noise approximation for the membrane voltages, but exposes a systematic method for calculating their statistics. It requires here, however, a model with stochastic spike emission.
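The bistability of the deterministic mean field theory can be checked directly. The sketch below assumes the threshold-linear intensity f(v) = max(v − 1, 0) and a homogeneous stationary condition of the rate-dependent-leak form 0 = −v + E + J f(v) − v f(v) (the form attributed to Eq. 23 here is an assumption of the sketch); parameters are chosen inside the bistable region of Fig. 2a.

```python
import numpy as np

# Suprathreshold (v > 1), the assumed stationary condition
#   0 = -v + E + J f(v) - v f(v),   f(v) = v - 1,
# reduces to v^2 - J v + (J - E) = 0; subthreshold, the fixed point is v = E.
J, E = 4.0, 0.5                        # illustrative, inside the bistable region

quiescent = E                          # stable subthreshold state (E < 1)
disc = J ** 2 - 4 * (J - E)
active = (J + np.sqrt(disc)) / 2       # upper suprathreshold root
middle = (J - np.sqrt(disc)) / 2       # lower suprathreshold root

slope = lambda v: J - 2 * v            # d/dv of the suprathreshold drift
# The active state is stable (negative slope); the middle root is unstable,
# separating the quiescent and active attractors.
```

Three fixed points with this stability pattern are exactly the bistable regime (B) of Fig. 2a: a quiescent and an active state separated by an unstable branch.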
Deterministic integrate-and-fire networks can also exhibit spatial, temporal, and spatiotemporal transitions [21,120,121]. Such bifurcations are often understood through the classic activity equations [3]. The field theory developed here provides a route to uncovering bifurcations in networks of stochastic integrate-and-fire neurons with more temporal or spatial structure in their interactions, as well as to investigating the impact of spiking fluctuations on such transitions.
In the classic activity equations (e.g., Eq. 1), recurrent inhibition is necessary to stabilize strongly coupled networks [8,59]. An inhibition-stabilized regime can be exposed by a paradoxical reduction of inhibitory activity after inhibitory stimulation [60]. We calculated the phase diagram for paradoxical responses in stochastic LIF networks, and found that an inhibition-stabilized regime exists in wide regions of parameter space (Figs. 5, 6). This suggests a generic mechanism underlying the widespread observation of paradoxical responses in mammalian cortex [61][62][63][64].
There are complementary approaches to our focus on the density functional of sample paths, p[v(t), ṅ(t)], for the stochastic LIF model; these focus instead on the time-dependent probability density function of the membrane voltages, p(v, t), across a population of neurons [18]. In the N → ∞ limit and with J_ij ∼ 1/N, the population density of membrane voltages in a stochastic LIF network obeys a Volterra integral equation [29,30]. That integral equation can also be written as a partial differential equation, which rigorously exposes the stochastic stability of the population densities in a mean field limit [122][123][124][125]. A finite-size analysis introduces a stochastic term to the population density equations [34,126]. Alternatively, moments for finite-size networks can be analyzed through a replica mean-field approach [127][128][129]. The path integral approach also exposes a finite-size mean field theory (Eq. 5). Fluctuation corrections to that finite-N mean field theory can be obtained in the same way as for the large-N, connectivity-averaged system.
The field-theoretic approach is practical and flexible. It exposes simple analytic approximations for any cumulant of the membrane voltages and/or spike trains via diagrammatic methods, is amenable to finite-size corrections, and applies readily to other models such as those with temporal synaptic interactions, spatially dependent connectivity, conductance-based or strong O(1/√N) synapses, and additional nonlinearities in the single-neuron dynamics.

XI. ACKNOWLEDGEMENTS
I thank Michael A. Buice, Brent Doiron, and Stefano Recanatesi for helpful feedback.

Appendix A: Joint probability density functional
We will construct the joint probability density of the membrane voltages and spike trains using the response-variable path integral formalism [37][38][39][40], reviewed in [130][131][132]. We will use boldface lowercase variables for vectors and boldface capital letters for matrices/operators. Given the membrane voltages v_it, we will require that the spikes generated in the network are conditionally independent across neurons i and time points t. Here we take the model to be already nondimensionalized, so that time is measured in units of the membrane time constant τ and the voltage resets to 0 after a spike. The joint probability density of the membrane voltages v and the spikes dn is constructed by conditioning on the stochastic spike generation, with η_it ∼ Bernoulli(f(v_it) dt). Introducing the Fourier representation of the delta functions and marginalizing over η yields the joint density. The measures are Dñ = Π_{i,t} dñ_it/(2πi) and Dṽ = Π_{i,t} dṽ_it/(2πi). The integrals over the response variables, ñ and ṽ, are along the imaginary axis. The logarithmic term in the exponent is the cumulant-generating function of the Bernoulli spikes. Galves & Löcherbach proved the existence and uniqueness of stationary densities for the discrete-time model with strictly positive intensity function [133].
We next take a continuous-time limit, dt → 0, T → ∞, with their product fixed. This defines the functional integration measures D. With dt ≪ 1, we expand the natural logarithm in its Taylor series around 1: ln(1 + f(v_it) dt (e^{ñ_it} − 1)) = f(v_it)(e^{ñ_it} − 1) dt + O(dt²). This yields Eq. 4, with the infinitesimal shift in the reset term in Eq. 2.

Appendix B: Absolute refractory period
With an absolute refractory period of T_r time steps in the discrete-time dynamics, during which the membrane voltage is clamped within O(dt) of 0, the joint density of the spike trains and membrane voltages instead obeys a modified version of the discrete-time density of Appendix A. This presents some complication in the continuous-time limit: the refractory term diverges when written as a convolution.
One alternative would be to incorporate a strong, negative self-coupling in the diagonal elements of J. While not strictly an absolute refractory period, this may mimic its effects. This would affect the definition of the mean field theory and propagators, but would not give rise to new types of fluctuation correction (no new vertices; Appendix D 1).
Another alternative is to consider an absolute refractory period in which the membrane voltage is not clamped at the reset voltage. Rather, we can require that 1) during the absolute refractory period, the spike probability is 0, and 2) at the end of the absolute refractory period, the membrane voltage is reset to the reset voltage. This yields a discrete-time density from which a continuum limit can be taken straightforwardly, yielding a spike reset term ṅ_i(t − τ_r) v_i(t) and spike intensity f(v_i)(1 − ṅ_i * B), with the rectangular function B(t) = θ(t) − θ(t − τ_r) and refractory period τ_r = T_r dt. This introduces a new state-dependence to the intensity, which would give rise to new types of fluctuation correction.

Appendix C: Connectivity-averaged density
To examine the interaction between synaptic connectivity, subthreshold dynamics, and stochastic spike emission in shaping network activity, we will average the partition functional for the activity (equivalently, the moment generating functional) over the synaptic connectivity. This is a standard exercise in statistical field theory [132], relying on the assumption that the system is self-averaging with respect to the connectivity: that is, that the average over realizations of J will give us an accurate description of a single large system. Since the action S is linear in J, a cumulant generating function for J appears in the connectivity-averaged partition functional Z, with W_J(ṽᵀṅ) = ln ∫ DJ p(J) exp( ∫ dt ds Σ_{i,j} ṽ_i(t) J_ij(s) ṅ_j(t − s) ).

Due to this, each cumulant of J gives rise to a corresponding cumulant in the connectivity-averaged partition functional for the activity. We have overloaded notation here, writing p(J) for the distribution of the synaptic weight matrix while also letting J be a function of the time lag. This notation assumes that J_ij(s) = J_ij G_ij(s) for some matrix of unit-norm kernels G, which we leave implicit. The connectivity gives rise to an effective noise in the membrane voltage. Each cumulant of the connectivity gives rise to a cumulant of the same order in the effective noise. For example, consider an Erdős-Rényi network with connection probability p and synaptic weight J for the non-zero connections, with J(s) = J δ(s). The distribution p(J) factorizes over the weights; the cumulant generating function for an individual synaptic weight is ln(1 + p(e^{Jk} − 1)), and the cumulants of the synaptic weights J_ij obey a corresponding recursion relation. If the connection probability p and weight J are both of order 1, the synaptic weights will have non-negligible cumulants of all orders. If the synaptic weights are of order one and the connectivity sparse, p ∼ 1/N, the cumulants are ⟨⟨J^n_ij⟩⟩ = J^n p + O(1/N²). If J > 1, higher cumulants of the connectivity will dominate, giving rise to higher-order cumulants in the effective noise of the membrane voltage. In contrast, if J ∼ 1/N and p ∼ 1, then ⟨⟨J^n_ij⟩⟩ ∼ N^{−n}, so in a large network the first cumulant of the connectivity dominates.
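The first two Bernoulli-weight cumulants can be checked by Monte Carlo. The sketch below samples weights J_ij = J · Bernoulli(p) and compares the empirical mean and variance to pJ and p(1 − p)J², the first two cumulants implied by the generating function ln(1 + p(e^{Jk} − 1)); the parameter values are illustrative.

```python
import numpy as np

# Monte Carlo check of the Erdos-Renyi synaptic weight cumulants:
# for J_ij = J * Bernoulli(p), kappa_1 = p J and kappa_2 = p (1 - p) J^2.
rng = np.random.default_rng(1)
p, Jw = 0.5, 2.0
weights = Jw * (rng.random(1_000_000) < p)

kappa1 = weights.mean()          # expect p * Jw
kappa2 = weights.var()           # expect p * (1 - p) * Jw**2
```

With p and J both of order 1, the higher cumulants κ_n are of the same order as κ_1 and κ_2, which is the regime in which the effective noise is non-Gaussian.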
Here, we consider the simple case where J has only a first cumulant. Let ⟨J_ij(s)⟩_J = J(s)/N. Averaging over the connectivity then replaces the coupling term by its mean, where * represents a scalar temporal convolution. We would like to examine this partition functional in the limit of a large network. Let R = (1/N) Σ_j J * ṅ_j; we will enforce this by integrating against δ(NR − J * Σ_j ṅ_j).
With the Fourier representation of that delta function, we obtain a generating functional for the auxiliary fields R, R̃, where ⟨ṅ⟩(t) is the population-averaged firing rate. Inserting the saddle-point solutions for R and R̃ yields the partition functional, Eq. 20.

Appendix D: Perturbative expansion
If fluctuations or nonlinearities are weak, a perturbative expansion around the mean field theory can provide accurate estimates of fluctuation effects. For ease of notation, let x = (v, ṅ)ᵀ and x̃ = (ṽ, ñ)ᵀ. We expand x around a background field and collect terms up to linear order in the fluctuations in the free action S₀, with higher-order terms in the interacting part of the action S_V (Eq. D2). (We should also expand the response variable x̃ around a background field; we skip that here since in Appendix E we will constrain the background fields to be the mean trajectories, and the mean of the response variables is 0.) Expanding exp(−S_V) in a functional Taylor series around a solution to the mean field theory yields an expansion of the moment in terms of Gaussian integrals with respect to the free density exp(−S₀). Due to Wick's theorem, these integrals yield products of the propagators, ∆. These expansions can be efficiently organized diagrammatically.
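The role of Wick's theorem here can be illustrated on an ordinary Gaussian vector: the fourth moment decomposes into a sum over pairings of second moments, which is exactly why the Gaussian integrals above reduce to products of propagators. The covariance below is an arbitrary positive-definite example, not taken from the model.

```python
import numpy as np

# Monte Carlo check of Wick's theorem for zero-mean Gaussians:
#   <x1 x2 x3 x4> = C12 C34 + C13 C24 + C14 C23.
rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))
C = M @ M.T / 4.0 + 0.5 * np.eye(4)       # arbitrary positive-definite covariance
x = rng.multivariate_normal(np.zeros(4), C, size=1_000_000)

mc = (x[:, 0] * x[:, 1] * x[:, 2] * x[:, 3]).mean()
wick = C[0, 1] * C[2, 3] + C[0, 2] * C[1, 3] + C[0, 3] * C[1, 2]
```

In the field theory the second moments are the propagator components of Table II, and the pairings are enumerated by the diagrams.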

Feynman rules
Here we give the Feynman rules for a perturbative expansion of statistics of the population-averaged system, Eq. 29, around the mean field theory. This provides a graphical algorithm for computing arbitrary cumulants of the spike train ṅ or membrane voltage v. Moments can be composed from the cumulants by the appropriate Bell polynomials. We give the rules in the temporal frequency domain, for an expansion around a stationary point. Each cumulant can be decomposed into a sum of terms, each represented by a connected diagram. Those diagrams are composed of the vertices and edges in Tables I and II (Feynman diagrams generated with [134]).

TABLE I. Vertices corresponding to the interacting action S_V in Eq. D2, listed with their factors, in-degrees (δv, δn), and out-degrees (ṽ, ñ). f^(q) is the qth derivative of the intensity function f, evaluated at the expansion point v̄. The intensity function vertex, f^(q)/q!, also has the constraint that the sum of its in- and out-degrees must be at least three, since the linear and bilinear terms in (ñ, δv) went into the definitions of the background field and the propagators (Eq. D2).
The source vertex emits factors of the response variable ñ corresponding to spike fluctuations. Each internal vertex receives configuration variables δn, δv and emits response variables ñ, ṽ. In any connected diagram, each pair of vertices will be linked by at least one pair of configuration and response variables, e.g., (δv, ñ). Due to Wick's theorem, each pair of configuration and response variables is replaced by the corresponding propagator edge. For example, the pair (δv, ñ) gives rise to the edge ∆_{v,ñ}.

TABLE II. Edges corresponding to the components of the propagator from S₀ in Eq. D2:
∆_{n,ñ}(ω) = (1 + n̄ + iω) / (1 + n̄ + f^(1) v̄ + iω)
∆_{v,ñ}(ω) = −v̄ / (1 + n̄ + f^(1) v̄ + iω)
∆_{n,ṽ}(ω) = f^(1) / (1 + n̄ + f^(1) v̄ + iω)
∆_{v,ṽ}(ω) = 1 / (1 + n̄ + f^(1) v̄ + iω)
Each measures the linear response of one configuration variable to a perturbation of another. For example, ∆_{v,ñ} measures the linear response of the voltage to a spike fluctuation.
2. Using the internal vertices and edges in Tables I and II, construct all connected graphs such that each external vertex has one incoming propagator edge. Each edge has its own frequency variable, ω_i.
3. To evaluate a diagram, multiply the factors of every edge and vertex together. Additionally, the sum of external frequencies (those on the external vertices' incoming edges) is zero: also multiply by δ(Σ_{i=1}^{a+b} ω_i). Finally, integrate over all of the internal frequencies: for each internal frequency ω_i, integrate ∫ dω_i/2π.

4. Evaluate each connected diagram constructed in step 2, and add the contributions of the diagrams.
We perform the integrals over internal frequencies analytically using the residue theorem. For a thorough introduction see, e.g., [132,135]. For an introduction to diagrammatic methods in the Poisson generalized linear model without resets (no self-coupling) see [55]. See [58] for detailed analytical calculations of the integrals over internal frequencies in that model.
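A typical internal frequency integral, and its residue-theorem evaluation, can be sketched with the generic rational form that arises from products of propagator denominators. The specific values of a and b below are illustrative; the closed form follows from closing the contour in the upper half-plane around the pole at ω = ia.

```python
import numpy as np

# Residue check:
#   integral dw / (2 pi) of 1 / ((a + i w)(b - i w)) = 1 / (a + b),  a, b > 0,
# verified against direct numerical quadrature on a wide frequency grid.
a, b = 1.3, 0.7
w = np.linspace(-2000.0, 2000.0, 2_000_001)
dw = w[1] - w[0]
integrand = 1.0 / ((a + 1j * w) * (b - 1j * w))
numeric = (integrand.sum() * dw).real / (2 * np.pi)
residue = 1.0 / (a + b)
```

Integrals with more propagator factors evaluate the same way, as sums over the residues of the poles in one half-plane.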
Eqs. 12 and 24 are self-consistent one-loop equations of motion for the mean voltage and rate. The approximate joint cumulants appearing in them can also be calculated using the perturbative Feynman rules above; each is given by a tree diagram with the same two edges as in the loop diagram.
The corresponding perturbative corrections to the mean field values of the voltage and rate are given by one-loop tadpole diagrams. For example, the perturbative one-loop correction to the mean field voltage corresponding to Eq. 14 is a tadpole diagram whose internal vertex carries a factor of −1. This corresponds to the sign this diagram appears with in Eqs. 12 and 24, opposite the sign in front of the second loop diagram. There is also a perturbative one-loop correction to the mean field voltage arising from the intensity vertex.
Both nonlinearities also give rise to perturbative corrections to the mean field rate. This is a difference from the self-consistent approach of the main text, where only one one-loop correction arises in each equation of motion (Appendix E).

Appendix E: Effective action
Here we briefly derive the effective action. For a more detailed presentation see, e.g., [135] Ch. 7 or [132] Ch. 11-14. For ease of notation, let x = (v, ṅ)ᵀ and x̃ = (ṽ, ñ)ᵀ. The cumulant-generating functional is

exp(W[j, j̃]) = ∫ Dx Dx̃ exp( (1/h)( −S[x, x̃] + j̃ᵀx + jᵀx̃ ) ).  (E1)

We have introduced a scale h into the exponent on the right-hand side. For physical calculations we will set h = 1. (Here, h has no relation to that used in Section IX.) We expand x around a background field x̄ (Eq. D1) and similarly for the response variable, x̃ = x̃* + δx̃.

The loop expansion for the effective action is a diagrammatic equivalent of the saddle point expansion of the integrals over δx, δx̃ in Eq. E6, without requiring that h be a bona fide small parameter [136,137]. The diagrams contributing to the equations of motion for x̄, x̃* are one-line-irreducible vacuum diagrams (those that cannot be disconnected by cutting one edge; see, e.g., [132], Ch. 11.4, 13.3), with vertices carrying the appropriate factors of x̄ or x̃*.
The components of the propagator for this model are given in Table III. It has the same source and intensity vertices as the stochastic LIF model (the first two entries in Table I), but lacks the reset vertex.

FIG. 2. Bistable activity in homogeneous networks. a) Phase diagram of the mean field theory, Eq. 23, in the input (E) vs coupling (J) plane. There are three possible states: low activity (L), high activity (H), and bistability (B). b-d) Raster plots of a homogeneous network's activity at the parameter locations marked in panel a. At t = 5 and t = 15, perturbations of amplitude 2 and duration 2 are applied to the drive E (top). e) Bifurcation curve in J with E = 1/2. f) Bifurcation plot in E with J = 4. Grey circles: simulation. Black dashed: the mean field theory of Eq. 23. Black solid: the exact rate of the disorder-averaged system, using the numerical self-consistent solution of Eq. 22. The simulated network has Erdős-Rényi connectivity. Simulated network parameters: N = 100, p = 0.5. All non-zero connections have the same weight, J/(pN).

FIG. 6. Phase diagrams for paradoxical responses to inhibitory stimulation. a) Boundaries of the paradoxical response region with J = 4. Dashed line: mean field theory, Eq. 42. Solid line: one-loop theory, Eq. 45. Color: simulation. Each simulation lasts for 200 time units; at time 100, the inhibitory drive switches from hE = E to hE = E + 0.1. b) As in a, with g = 2. Simulated block Erdős-Rényi network parameters as in Fig. 3.
(C7) Note that the generating function for the neural dynamics factorizes over the neurons; Z_i[R, R̃] does not contain any other indices. So, we will drop the neuron indices and write N ln Z[R, R̃] instead of Σ_i ln Z_i[R, R̃]. For large N, we evaluate the integrals over the auxiliary fields R, R̃ by a saddle point approximation.

exp( (1/h)( hW[j, j̃] − j̃ᵀx̄ − jᵀx̃* ) ) = ∫ Dδx Dδx̃ exp( (1/h)( −S + j̃ᵀδx + jᵀδx̃ ) ). (E2) We now require that our background field be the mean, x̄ = ⟨x⟩, so that 0 = ⟨δx_i⟩ = (∂/∂j̃_i) exp( (1/h)( hW[j, j̃] − j̃ᵀx̄ − jᵀx̃* ) ) (E3) and similarly we require x̃* = ⟨x̃⟩. These requirements can only be satisfied at a stationary point of the exponent: 0 = (∂/∂j_i)( hW[j, j̃] − j̃ᵀx̄ − jᵀx̃* ), 0 = (∂/∂j̃_i)( hW[j, j̃] − j̃ᵀx̄ − jᵀx̃* ), (E4) which defines a Legendre transform from −hW to the effective action Γ: Γ[x̄, x̃*] = sup_{j, j̃} ( j̃ᵀx̄ + jᵀx̃* − hW[j, j̃] ). (E5) exp( (1/h)( −Γ[x̄, x̃*] + S̄[x̄, x̃*] ) ) = ∫ Dδx Dδx̃ exp( (1/h)( −S̃[δx, δx̃] + j̃ᵀδx + jᵀδx̃ ) ), (E6) where S̄[x̄, x̃*] contains the terms in S that depend only on x̄, x̃* and not δx, δx̃, and S̃[δx, δx̃] contains the remaining terms of S. This has the form of a generating functional for x̄, x̃*. The mean is a stationary point of Γ; it obeys the equations of motion 0 = ∂Γ/∂x̃*_i (and similarly for ∂/∂x̄_i). Only those diagrams with a vertex carrying the appropriate factor of x̃*_i will contribute to the equation of motion 0 = ∂Γ/∂x̃*_i, which is why the one-loop equations of motion each have only one loop correction.