Theory of spike-train power spectra for multidimensional integrate-and-fire neurons

Multidimensional stochastic integrate-and-fire (IF) models are a standard spike-generator model in studies of firing variability, neural information transmission, and neural network dynamics. Most popular is a version with Gaussian noise and adaptation currents that can be described via Markovian embedding by a set of d + 1 stochastic differential equations corresponding to a Fokker-Planck equation (FPE) for one voltage and d auxiliary variables. For the specific case d = 1, we find a set of partial differential equations that govern the stationary probability density, the stationary firing rate, and, central to our study, the spike-train power spectrum. We numerically solve the corresponding equations for various examples by a finite-difference method and compare the resulting spike-train power spectra to those obtained by numerical simulations of the IF models. Examples include leaky IF models driven by either high-pass-filtered (green) or low-pass-filtered (red) noise (surprisingly, already in this case, the Markovian embedding is not unique), white-noise-driven IF models with spike-frequency adaptation (deterministic or stochastic), and models with a bursting mechanism. We extend the framework to general d and study as an example an IF neuron driven by a narrow-band noise (leading to a three-dimensional FPE). The many examples illustrate the validity of our theory but also clearly demonstrate that different forms of colored noise or adaptation entail a rich repertoire of spectral shapes. The framework developed so far provides the theory of the spike statistics of neurons with known sources of noise and adaptation. In the final part, we use our results to develop a theory of spike-train correlations when noise sources are not known but emerge from the nonlinear interactions among neurons in sparse recurrent networks such as found in cortex.
In this theory, network input to a single cell is described by a multidimensional Ornstein-Uhlenbeck process with coefficients that are related to the output spike-train power spectrum. This leads to a system of equations which determine the self-consistent spike-train and noise statistics. For a simple example, we find a low-dimensional numerical solution of these equations and verify our framework by simulation results of a large sparse recurrent network of integrate-and-fire neurons.


I. INTRODUCTION
Sequences of stereotypic events occur in many fields of physics and beyond: the emissions from a radioactive source, the shot noise in a semiconductor diode, the occurrences of avalanches, earthquakes, and floods, crashes in the stock market, and the generations of action potentials in a nerve cell are all excellent examples of events that recur at apparently random instants in time. Often, the statistics of these random sequences is much richer than the textbook example of a Poisson process; events might be strongly interdependent and the intervals between them are correlated as well. Stochastic models that can capture such statistical features are often noise-driven excitable systems in which certain variables reach a threshold for event generation; in neuroscience, the canonical example is the integrate-and-fire (IF) neuron, in which the event is triggered when the voltage variable v reaches a threshold. A highlight of this kind of analysis was the stochastic mean-field theory for sparse networks of recurrently connected IF neurons [12,[14][15][16][17][18][19], in which the coefficients of the Fokker-Planck equation depend on the resulting density itself in a self-consistent way (input spikes are here approximated by Poisson processes), providing a tool for the analysis of neural "phase transitions" (e.g., the transition from an asynchronous to a synchronous network state upon change of a cellular or network parameter). Under the crucial assumption that neurons are mainly subject to temporally uncorrelated noise, in particular, the asynchronous state with its weak cross correlations among neurons can be well described by linear response theory [20][21][22][23] (for a related problem in an Ising-type spin system, see [24]). For white-noise-driven neurons, there exist also a number of exact results for the firing rate [5], the power spectrum [25], and the linear [12,[26][27][28] and nonlinear [15,29] response functions to periodic stimulation.
Moreover, an efficient numerical scheme, the threshold-integration method [30,31], has been developed for the swift computation of these statistics for IF models with arbitrary voltage dependence.
Unfortunately, in many situations the one-dimensional model with white noise does not suffice to capture the statistical and information-theoretic features of real neurons. This model class is, for instance, unable to reproduce nonvanishing interspike-interval correlations [32,33] or high-pass information filtering of sensory signals [34][35][36][37], distinct features that are seen experimentally in many sensory neurons. The good news is that the extension by just one or a few degrees of freedom for the subthreshold dynamics yields excellent models of the various spike patterns seen in vitro [38][39][40], of the spike response upon noisy current injections [41][42][43][44][45], and of the above-mentioned nonrenewal [46,47] and information-filtering [35,36,48] effects. Further degrees of freedom arise from modeling correlated fluctuations [49][50][51][52][53]: colored noise can be represented by means of a Markovian embedding [54], the most popular example of which is the Ornstein-Uhlenbeck process (OUP) that serves in many studies as a low-pass-filtered Gaussian noise. In statistical physics the idea of a Markovian embedding goes back to Mori [55] and has found many applications, e.g., in studies of escape problems [56][57][58] and anomalous diffusion [59,60].
Although special cases of multidimensional IF neurons have been studied over the past two decades, either numerically [61][62][63][64], by analytical approximations [8,26,47,[49][50][51][52][65][66][67][68][69][70], or by a reduction to mesoscopic equations [71,72], a general framework for calculating their essential statistics is lacking. We note that although density equations for two-dimensional IF neurons with instantaneous reset have been formulated [49,50,61,70], a refractory state of finite duration has not been incorporated in the multidimensional Fokker-Planck equation (for an exception in a special case, see [63]). More importantly, previous studies have focused on the calculation of the firing rate and voltage distribution only but did not address a key characteristic of the neural firing statistics, the spike-train power spectrum.
In particular in the presence of colored noise and/or spike-frequency adaptation, the spike-train power spectrum gives a more complete description than the single interspike interval (ISI) distribution because in general ISIs are correlated. Aside from the stationary firing rate (the first moment of the spike train), the next important characteristic is the second-order statistics, which is the temporal correlation function of the spike train or its Fourier transform, the power spectrum. Spectra have been measured in experiments [37,[73][74][75][76] and reveal timescales of oscillations by showing peaks at the firing rate, at the frequency of subthreshold oscillations, at the frequency of an external periodic driving, or due to narrow-band noise, and at respective sidebands. Spectra display reduced power at low frequencies due to refractoriness or due to mechanisms of long-term variability suppression (such as slow spike-triggered inhibitory currents); their zero-frequency and high-frequency limits are conveniently given by the Fano factor and the stationary firing rate, respectively. Spectra indicate processes with extremely slow timescales by a strong increase of power at low frequencies [77]. Last but not least, in populations of neurons, the spike-train power spectrum is also a main contributor to the power spectrum of the population activity [78,79]. Spike-train power spectra are thus a key measure of neural variability and their calculation for a given stochastic model is a key challenge for any theory of neural firing variability.
There is another reason why the power spectrum of a multidimensional IF neuron is of particular importance in neurophysics. The aforementioned theories of recurrent networks employ a white-noise approximation (effectively amounting to treating all spike trains in the network as Poisson processes), and this approximation fails if physiologically realistic synaptic coupling amplitudes are used [77,80,81]. Although this was recognized early on [82] and has led to self-consistent single-neuron simulation schemes for the spike-train power spectrum [81][82][83], a mean-field theory for a sparse recurrent network of spiking neurons that is self-consistent with respect to second-order fluctuations is still missing. Such a theory requires a Markovian embedding for the (network-generated) colored noise and thus a self-consistent formulation in terms of a multidimensional IF neuron, the coefficients of which are determined by the spike-train power spectrum itself. Knowing how to determine the spectrum for a given multidimensional IF neuron is the corresponding open-loop problem and thus the first step in a self-consistent theory of recurrent networks of spiking neurons. Note that existing multidimensional Fokker-Planck descriptions of recurrent networks (e.g., in [61,84]) take into account slower variables (such as adaptation currents) but do not incorporate this self-consistency condition.
In this paper, we derive the determining equations for the stationary density, the firing rate, and the spike-train power spectrum of a multidimensional IF neuron. The inspected model class includes the cases of neurons driven by low-pass-filtered noise (as emerges by synaptic filtering [49,52] or due to slow channel noise [67,68]), driven by high-pass-filtered noise (as emerges by presynaptic input spikes with a refractory period [53,62]), endowed with spike-triggered adaptation (as emerges from slow calcium dynamics and calcium-dependent ion channels [85]), and endowed with a bursting mechanism [38,40,41]. For all these distinct cases, we focus in this paper predominantly on the correlation statistics, i.e., the spike-train power spectrum. We demonstrate the correctness of our theory for a large number of examples for several reasons. First of all, the numerical solution of our general equations is far from trivial, and it is therefore important to test it in many different situations. Second, the resulting spectra reveal a richness of shapes with distinct maxima, minima, shoulders, etc., resulting from the interplay of nonlinear neural dynamics and correlated fluctuations and/or adaptation currents. Third, the spike-train power spectra obtained can also serve as a reference for comparison with experimentally measured spike-train power spectra.
Our Fokker-Planck framework for the spectral statistics of integrate-and-fire models paves the way for many more applications and generalizations. For instance, it sets the stage for calculating other important features, among them the response to external stimuli and the corresponding information flow. In addition, our framework may serve for modeling some of the problems outside neuroscience which involve stochastic event sequences mentioned in the beginning. Most importantly, as already mentioned above, it allows us to approach the problem of the self-consistent spiking statistics in a sparse recurrent network. In the last part of the paper, we pursue this important problem and present a mean-field theory of the self-consistent second-order statistics. For a special case, we present an approximate numerical solution of the resulting equations and demonstrate the agreement of spike-train power spectra with simulations of a large neural network.
The paper is organized as follows. In Sec. II we introduce our generic two-dimensional IF neuron model and the spike-train statistics of interest. The Fokker-Planck equation (FPE) and a related equation for the power spectrum are presented in Sec. III. In Sec. IV we apply the theory to the standard leaky IF neuron driven by a colored noise that can be represented by a sum of a white noise and a (correlated) OUP. We show that the Markovian embedding is not unique even in this very simple case and inspect the spike-train power spectra. In Sec. V we use an exponential IF neuron with an adaptation current, the so-called AdEx model [41], driven by white noise and also including stochastic adaptation [67,68]. We outline in Sec. VI how the theory is extended to arbitrary dimensions (covering more complicated input noise correlations [53,69] or multidimensional adaptation processes) and study as an example the power spectrum of a leaky IF neuron driven by a narrow-band noise. In Sec. VII, we employ our results to develop a theory of self-consistent spike correlations in a large sparse recurrent network, derive a low-dimensional approximate solution in a special case, and verify this result by comparison with a network simulation. We conclude in Sec. VIII with a summary of the paper's main achievements and a discussion of a few open problems. Appendices give details on the numerical solution of our key equations (Appendix A) by a finite-difference scheme [86] and on the ambiguity of the Markovian embedding (Appendix B).

II. NEURON MODEL AND SPIKE-TRAIN STATISTICS
We consider a generalized two-dimensional neuron model. The membrane voltage v and an auxiliary variable a evolve according to the Langevin equations

τ_m dv/dt = f(v, a) + β*_1 ξ_1(t),
τ_a da/dt = g(v, a) + β_1 ξ_1(t) + β_2 ξ_2(t).

The time constants of the membrane voltage and the auxiliary variable are denoted by τ_m and τ_a, respectively. Furthermore, ξ_i(t) is Gaussian white noise with zero mean, obeying the correlation ⟨ξ_i(t)ξ_j(τ)⟩ = δ_ij δ(t − τ). The parameters β*_1 and β_1 quantify the intensity by which the noise process ξ_1(t) may enter both of the two equations. In the equation for the auxiliary process we apply an additional source of noise ξ_2(t) with the strength β_2. The two variables affect each other via the functions f(v, a) and g(v, a), for which we assume reasonable shapes such that the existence of a stationary solution for the probability density in v and a is guaranteed. If the membrane voltage crosses the threshold v_th, the neuron fires an action potential (spike emission), i.e., it is clamped to the value v_ref for an absolute refractory period τ_ref and then reset to the value v_r; at the time of spike emission t_k, the auxiliary variable a(t) may be incremented by a constant amount δ_a and evolves freely during the absolute refractory period. Within the framework of the IF model, our usage of a finite pulse width τ_ref (cf. Fig. 1) is uncommon. It allows for a more realistic description of the auxiliary variable's evolution during an action potential that is finite in width and amplitude. Our model can capture two important classes of multidimensional IF neurons. First, assuming the auxiliary variable does not depend on membrane voltage and spike history [f(v, a) = F(v) + a and δ_a = 0], it represents an external source of colored noise that drives a one-dimensional neuron. The most prominent example of this kind is the OUP with the linear function g(v, a) = −a.
In this case, the two-dimensional white-noise-driven process [v(t), a(t)] is a Markovian embedding of a one-dimensional dynamics v(t) driven by a colored (temporally correlated) noise η(t) = a(t) + β*_1 ξ_1(t). The second class of multidimensional IF neurons described by our model is a neuron with an adaptation current. A dependence of the auxiliary process on either the spike history or the membrane voltage effectively captures a feedback from v(t) to a(t). A typical choice, f(v, a) = F(v) − a with δ_a > 0, leads to spike-frequency adaptation [41,46,70,85,87]. Here, we distinguish between deterministic adaptation (β_1 = β_2 = 0), in which fluctuations of the adaptation channels are neglected (the case considered in almost all papers cited above), and stochastic adaptation, for which the dynamics of a(t) is subject to additional white noise [67,68]. Furthermore, subthreshold adaptation can be taken into account by the first term in g(v, a) = Av − a [41,88], i.e., with a nonvanishing value of A. Such two-dimensional neuron models already have rich deterministic dynamics, exhibiting bursting or even chaotic firing without noise [89,90].
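As a concrete illustration (a minimal sketch of our own, not the authors' code), the model and fire-and-reset rule above can be integrated with a simple Euler-Maruyama scheme. All names (`simulate_if2d`, parameter defaults) are hypothetical; the noise prefactors follow the Langevin equations exactly as written, without additional scaling conventions.

```python
import numpy as np

def simulate_if2d(f, g, beta1s, beta1, beta2, tau_m=10.0, tau_a=100.0,
                  v_th=1.0, v_r=0.0, v_ref=0.0, tau_ref=2.0, delta_a=0.0,
                  T=10_000.0, dt=0.01, seed=0):
    """Euler-Maruyama integration of
        tau_m dv/dt = f(v,a) + beta1s*xi1(t),
        tau_a da/dt = g(v,a) + beta1*xi1(t) + beta2*xi2(t),
    with fire-and-reset: at v >= v_th the voltage is clamped to v_ref for
    tau_ref, then reset to v_r; a is incremented by delta_a at each spike
    and evolves freely during the refractory period. Times in ms."""
    rng = np.random.default_rng(seed)
    v, a = v_r, 0.0
    refr_left = 0.0                      # remaining refractory time
    spikes = []
    sqdt = np.sqrt(dt)
    for k in range(int(T / dt)):
        xi1, xi2 = rng.standard_normal(2)
        # the auxiliary variable always evolves, also while refractory
        a += g(v, a) * dt / tau_a + (beta1 * xi1 + beta2 * xi2) * sqdt / tau_a
        if refr_left > 0.0:
            refr_left -= dt
            if refr_left <= 0.0:
                v = v_r                  # reinsert at the reset voltage
            continue
        v += f(v, a) * dt / tau_m + beta1s * xi1 * sqdt / tau_m
        if v >= v_th:                    # threshold crossing: spike emission
            spikes.append(k * dt)
            a += delta_a
            v = v_ref                    # clamp during the refractory period
            refr_left = tau_ref
    return np.array(spikes)

# example: LIF driven by an OUP auxiliary variable, f = mu - v + a, g = -a
spikes = simulate_if2d(lambda v, a: 1.5 - v + a, lambda v, a: -a,
                       beta1s=0.3, beta1=0.0, beta2=0.3, T=2000.0)
```

Note that the auxiliary variable is updated in every time step, so it continues to evolve while the voltage is clamped, as required by the model definition.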
The main goal of our paper is the determination of the spike train's temporal correlations or, equivalently, the spike-train power spectrum. The mathematically convenient abstract description of a sequence of action potentials is given by the sum of δ functions at the spike times t_k constituting the spike train

x(t) = Σ_k δ(t − t_k).

The mean value of the spike train is the instantaneous firing rate

r(t) = ⟨x(t)⟩,

where the angular brackets denote an ensemble average over different realizations of the noise. In case of a stationary averaging ensemble, we obtain the stationary firing rate r_0. However, other preparations are possible and will be of importance in the following. For instance, if we consider an ensemble of neurons right after firing with the proper distribution of the auxiliary variable a(t), we obtain the important conditional firing rate m(t), which is the essential part of the spike train's autocorrelation function [91,92]:

C(τ) = r_0 [δ(τ) + m(|τ|) − r_0].

According to its definition, the function m(t) can also be determined without any knowledge of a(t) by measuring the probability for a spike given that there was a spike at t = 0. Instead of the autocorrelation function we can also consider the spike-train power spectrum, which is the Fourier transform of C(τ):

S(ω) = ∫ dτ e^{iωτ} C(τ) = r_0 [1 + 2 Re ∫_0^∞ dt e^{iωt} (m(t) − r_0)]   (6)

[here we have used that C(τ) is an even function]. This relation is central to our approach because it connects the power spectrum with the Fourier transform of the conditional firing rate, which can be extracted from the Fokker-Planck equation. We can also measure the spike-train power spectrum in numerical simulations of the IF model. Generally, we define a finite-time-window Fourier transform of the time series y(t) as

ỹ(ω) = ∫_0^T dt e^{iωt} [y(t) − ⟨y⟩],

where we subtracted the mean value, resulting in Fourier transforms without a DC peak.
The power spectrum of y(t) is then obtained as the simple ensemble average

S_yy(ω) = lim_{T→∞} ⟨|ỹ(ω)|²⟩ / T.

If y(t) is the spike train x(t) sampled with a time resolution Δt, we approximate the time series as a binary sequence of height 1/Δt for time bins with a spike and zero elsewhere. In the following, to ease the notation, we will use S(ω) in our calculations and S(f) in the figures for the spike-train power spectrum, where ω and f = ω/(2π) denote the angular and regular frequency, respectively.
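The estimator described above (binning at height 1/Δt, mean subtraction, windowed Fourier transform divided by the window length) can be sketched as follows; function names and parameter values are our own. A Poisson process, whose spectrum is flat at S(f) = r_0, provides a convenient sanity check.

```python
import numpy as np

def spike_train_spectrum(spike_times, T, dt=1.0):
    """Single-window estimate of S(f): bin the spike train at resolution dt
    (height 1/dt in bins containing a spike), subtract the mean, take the
    finite-window Fourier transform, and divide by the window length T."""
    n_bins = int(round(T / dt))
    x = np.zeros(n_bins)
    idx = np.minimum((np.asarray(spike_times) / dt).astype(int), n_bins - 1)
    np.add.at(x, idx, 1.0 / dt)            # sequence of height 1/dt per spike
    xt = np.fft.rfft(x - x.mean()) * dt    # discrete approximation of the FT
    f = np.fft.rfftfreq(n_bins, dt)
    return f, np.abs(xt) ** 2 / T          # periodogram of one realization

# sanity check: a Poisson process has a flat spectrum, S(f) = r0
rng = np.random.default_rng(1)
r0, T = 0.1, 4000.0                        # rate in 1/ms, window in ms
S_trials = []
for _ in range(50):
    t = np.cumsum(rng.exponential(1.0 / r0, size=int(2 * r0 * T)))
    f, S = spike_train_spectrum(t[t < T], T)
    S_trials.append(S)
S_avg = np.mean(S_trials, axis=0)          # ensemble average over realizations
```

Away from f = 0 (where the mean subtraction removes the DC contribution), the trial-averaged periodogram should scatter around r_0.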

III. FOKKER-PLANCK EQUATION
The Fokker-Planck equation (FPE) describes the time evolution of an ensemble of processes driven by white Gaussian noise [13]. For the introduced neuron model, this equation contains an uncommon nonlocal term due to the fire-and-reset rule. It reads as

∂_t P(v, a, t) = [−(1/τ_m) ∂_v f(v, a) − (1/τ_a) ∂_a g(v, a) + (β*_1²/(2τ_m²)) ∂_v² + (β*_1 β_1/(τ_m τ_a)) ∂_v ∂_a + ((β_1² + β_2²)/(2τ_a²)) ∂_a²] P(v, a, t) + R̂P(v, a, t),   (9)

where the nonlocal operator R̂ mediates the fire-and-reset condition and the evolution of the probability density during the refractory period (see below). The density obeys natural boundary conditions for a → ±∞ and v → −∞, i.e.,

P(v, a → ±∞, t) = 0,   P(v → −∞, a, t) = 0,

and (because we have included a nonvanishing white noise in the v dynamics [93]) an absorbing boundary condition at v_th:

P(v_th, a, t) = 0.

It is important to mention that a vanishing white noise in the voltage dynamics yields a different boundary condition, as described in [49]. However, an additional weak white noise is a reasonable model for different kinds of fluctuations, e.g., due to fast ion channels. If we think of the probability density as a population of uncoupled neurons, we can relate the probability current in v to the instantaneous firing rate. On the one hand, the fraction of neurons with a voltage that crosses the threshold in a small time interval (t, t + Δt) and at a certain value of the auxiliary variable a is given by the probability current J_v(v_th, a, t) Δt.

FIG. 2. (a) Fire-and-reset rule without refractory period or spike-triggered adaptation: the flux of probability that is absorbed at the threshold v_th (red line) is reinserted at the reset voltage v_r (red arrows) without a change in the auxiliary variable a. (b) Fire-and-reset rule with refractory period but in the absence of spike-triggered adaptation: the absorbed probability is transferred to the refractory voltage v_ref (red arrows). The probability density (red line directly after firing) evolves in the auxiliary variable until the refractory period has passed (blue line after the refractory period). Subsequently, the evolved density is reinserted at v_r (blue arrows). (c) Spike-triggered adaptation without refractory period: the probability is reinserted at the reset voltage and, simultaneously, shifted by δ_a along the a axis (red arrows). (d) Combination of spike-triggered adaptation and refractory period: the probability is shifted along a and inserted at v_ref. After the refractory period has passed, the evolved probability is reinserted at v_r. In addition to the stationary probability currents (strength indicated by arrow density and background color), we also show the stationary probability densities in the four situations.
If we integrate over a, this corresponds to the total probability current through the threshold. The latter, on the other hand, determines the firing probability r(t) Δt. From the definition of the current and the absorbing boundary condition, it follows that

r(t) = ∫ da J_v(v_th, a, t).   (14)

This is an important relation between the subthreshold membrane-potential statistics and the spike-train statistics of the neuron model [8,65]. The set of equations (FPE and its boundary conditions) is completed by the normalization condition

∫ da ∫^{v_th} dv P(v, a, t) + ∫_{t−τ_ref}^{t} dt′ r(t′) = 1.

The second term on the left-hand side captures the part of the probability that is currently in the refractory state.
We turn now to the definition of the fire-and-reset operator in the FPE [Eq. (9)]. Because this term is surprisingly cumbersome, let us start with the simple case without refractory period and incrementation [τ_ref = 0 and δ_a = 0, see Fig. 2(a)]. Here, trajectories that reach (v_th, a) are immediately reset to (v_r, a); the term R̂P(v, a, t) mediates a source of probability at v_r (leading to a delta-function contribution) with a source strength corresponding to the efflux of probability at the threshold, which is related to the first derivative of the probability density at the threshold. With an incrementation in the auxiliary variable [δ_a > 0, see Fig. 2(c)], there is an additional shift between the efflux point (v_th, a) and the source point (v_r, a + δ_a), implying that we have to take the derivative not at a but at a − δ_a. Taken together, we can describe the effect of the nonlocal operator R̂ for τ_ref = 0 as follows:

R̂P(v, a, t) = δ(v − v_r) J_v(v_th, a − δ_a, t).

This definition has been used, e.g., in Refs. [49,61,70]. If τ_ref > 0, we have to deal with the complicating feature that the evolution of a(t) and its corresponding probability density does not stop during the refractory period. Thus, the density of the auxiliary variable evolves during the refractory period according to an FPE in a alone [with v clamped to v_ref, cf. Fig. 2(b)]. The evolved probability density ρ(a, t) is given in terms of the formal solution for the transition probability density P_tr of this dynamics by [13]

ρ(a, t) = ∫ da′ P_tr(a, t | a′, t − τ_ref) ρ(a′, t − τ_ref) ≡ Ê ρ(a, t − τ_ref).

By means of the so-defined operator Ê, we can write the source term at the reset point as follows:

R̂P(v, a, t) = δ(v − v_r) Ê J_v(v_th, a − δ_a, t − τ_ref),

which is nonlocal both in (v, a) space and in time t.
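For the colored-noise case g(v, a) = −a, the free evolution during the refractory period is an Ornstein-Uhlenbeck relaxation, so the transition probability entering the operator Ê has a well-known closed form: a Gaussian with mean a′ e^{−τ_ref/τ_a} and variance (β_1² + β_2²)/(2τ_a) · (1 − e^{−2τ_ref/τ_a}). The sketch below (our own construction, with illustrative parameter values) checks these propagated moments against a direct Euler-Maruyama simulation of an ensemble.

```python
import numpy as np

# free OU evolution of a during the refractory period,
# tau_a da/dt = -a + beta1*xi1 + beta2*xi2 (illustrative parameter values)
tau_a, beta1, beta2, tau_ref = 20.0, 0.4, 0.3, 5.0
rng = np.random.default_rng(2)

# an ensemble of auxiliary-variable values "absorbed" at the threshold
a = rng.normal(0.5, 0.2, size=50_000)

# Monte Carlo propagation over the refractory period (Euler-Maruyama)
dt = 0.01
amp = np.sqrt(beta1**2 + beta2**2) * np.sqrt(dt) / tau_a
for _ in range(int(tau_ref / dt)):
    a += -a * dt / tau_a + amp * rng.standard_normal(a.size)

# the exact OU transition density is Gaussian; the propagated moments are
decay = np.exp(-tau_ref / tau_a)
mean_exact = 0.5 * decay
var_exact = 0.2**2 * decay**2 + (beta1**2 + beta2**2) / (2 * tau_a) * (1 - decay**2)
```

In this special case Ê is thus a Gaussian convolution combined with an exponential shrinking of the a axis, which is convenient for numerical implementations.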
The first problem of interest is to find the stationary solution P_0(v, a) of Eq. (9) and the corresponding stationary firing rate r_0. In the time-independent case (∂_t P_0 = 0), the equation reads as

0 = [−(1/τ_m) ∂_v f − (1/τ_a) ∂_a g + (β*_1²/(2τ_m²)) ∂_v² + (β*_1 β_1/(τ_m τ_a)) ∂_v ∂_a + ((β_1² + β_2²)/(2τ_a²)) ∂_a²] P_0 + R̂P_0,   (20)

and the solution satisfies the same boundary conditions as the time-dependent density. Specifically, the normalization condition simplifies to

∫ da ∫^{v_th} dv P_0(v, a) + r_0 τ_ref = 1.

The unique solution for the density and the firing rate can be found by the numerical procedure discussed in Appendix A.
Our main interest is the calculation of the spike-train power spectrum, for which, according to Eq. (6), we have to compute the Fourier transform of the conditional firing rate m(t). The latter was given by the probability of a spike given that there was a spike at t = 0, corresponding to an integrated conditioned probability current [cf. Eq. (14)]. The condition is that v(t = τ_ref) = v_r, but we do not know the value of a(t = τ_ref) or, more precisely, its proper initial distribution upon reset ρ_UR(a). However, in a stationary situation, the number of realizations that start at the reset point is proportional to the stationary probability current into (v_r, a), R̂P_0, which leads upon normalization to

ρ_UR(a) = (1/r_0) ∫ dv R̂P_0(v, a).

If we use this as an initial condition for the time-dependent FPE (9) at t = τ_ref (no probability in the system before that time), we obtain m(t) as the integrated current through the threshold,

m(t) = ∫ da J_v(v_th, a, t),   t > τ_ref.

With a Fourier transform of this function, we can determine the spike-train power spectrum by virtue of Eq. (6).
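Because m(t) is, by its definition, the probability (density) of observing a spike at lag t given a spike at t = 0, it can also be estimated directly from simulated spike times without any knowledge of a(t). Below is a minimal estimator of our own (the function name is hypothetical), together with a sanity check on a Poisson process, for which m(t) = r_0 at all lags.

```python
import numpy as np

def conditional_rate(spike_times, T, t_max, dt):
    """Estimate m(t): histogram of spike occurrences at lag t after each
    reference spike, normalized per reference spike and per bin width."""
    s = np.asarray(spike_times)
    refs = s[s < T - t_max]                # avoid edge bias at the window end
    lags = []
    for t0 in refs:
        later = s[(s > t0) & (s - t0 < t_max)]
        lags.append(later - t0)
    lags = np.concatenate(lags)
    counts, _ = np.histogram(lags, bins=int(round(t_max / dt)), range=(0.0, t_max))
    t = (np.arange(counts.size) + 0.5) * dt
    return t, counts / (refs.size * dt)

# for a Poisson process the conditional rate is flat: m(t) = r0 at all lags
rng = np.random.default_rng(3)
r0, T = 0.05, 100_000.0                    # rate in 1/ms, observation time in ms
spikes = np.cumsum(rng.exponential(1.0 / r0, size=int(2 * r0 * T)))
spikes = spikes[spikes < T]
t, m = conditional_rate(spikes, T, t_max=100.0, dt=5.0)
```

For a refractory or adapting neuron, the same estimator would show the dip of m(t) at short lags that shapes the low-frequency part of the spectrum via Eq. (6).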
Alternatively to the approach in time domain, one can reduce the computational complexity of the problem by a temporal Fourier transformation, a method that specifically pays off if we want to know the spike-train power spectrum only at a few frequency values. The approach can be regarded as an extension of the work in [31] to multidimensional neuron models. Through the Fourier transformation, the problem is described by a partial differential equation in only two, instead of three, variables, though for a complex-valued function. In addition, the nonlocal time dependence due to the refractory state is turned into a local dependence in the frequency domain.
In order to deal with well-behaved functions in the Fourier domain, we transform the deviations of the probability density and of the conditional firing rate from their respective steady states:

Q̃(v, a, ω) = ∫_0^∞ dt e^{iωt} [P(v, a, t) − P_0(v, a)],
m̃(ω) = ∫_0^∞ dt e^{iωt} [m(t) − r_0] = −(β*_1²/(2τ_m²)) ∫ da ∂_v Q̃(v, a, ω)|_{v=v_th}.   (24)

The last line is the important relation of the Fourier-transformed rate deviation [appearing in the spectral formula (6)] to the derivative of the Fourier-transformed density Q̃(v, a, ω). Fourier transforming the FPEs for P(v, a, t) and P_0(v, a) and taking their difference, we obtain an equation for Q̃ [Eq. (25)] that contains the drift and diffusion operators of Eq. (9), the fire-and-reset operator weighted by the phase factor e_τ = exp(iωτ_ref) of the refractory period, and an inhomogeneity proportional to the stationary density P_0. This equation has to be solved with natural boundary conditions for a → ±∞, v → −∞ and an absorbing boundary condition at v = v_th. Once we have found Q̃(v, a, ω), we can determine the spike-train power spectrum at frequency ω by Eqs. (6) and (24):

S(ω) = r_0 [1 + 2 Re m̃(ω)].   (26)

This formula does not necessarily provide a computationally superior way to calculate the spike-train power spectrum compared to Langevin simulations, although it may be helpful in special cases, for instance, for the determination of the Fano factor. Our main aim was to develop the spectrum's general theory, i.e., analytical relations that do not require Monte Carlo simulations. The general theory is particularly useful for more involved applications such as the correlation statistics in recurrent neural networks (see Sec. VII below). It may also be the starting point of novel analytical approximations in specific situations. Since no analytical solution is known in general for Eqs. (20) and (25), both equations are solved numerically by approximating the differential operators with finite differences, as explained in detail in Appendix A.

IV. LEAKY INTEGRATE-AND-FIRE NEURON DRIVEN BY COLORED NOISE
We show that our method can reproduce spectra of the standard leaky integrate-and-fire (LIF) neuron driven by colored Gaussian noise (no adaptation, δ_a = 0). The qualitative spiking behavior of such a neuron model was already studied in [62], although with vanishing refractory period (here τ_ref = 2 ms if not stated otherwise). To generate the correlated noise, we use an OUP [94] as the auxiliary noise variable. For simplicity we set β_2 = 0, i.e., the LIF neuron and the OU process are driven by the same white-noise source. In this special case, our generic model reads as

τ_m dv/dt = μ − v + a + β* ξ(t),   τ_a da/dt = −a + β ξ(t).   (27)

The stochastic input to the membrane voltage is given by η(t) = a(t) + β* ξ(t) with the input power spectrum

S_ηη(ω) = β*² + (β² + 2ββ*)/(1 + ω²τ_a²).   (28)

Note that an additional noise source ξ_2(t) does not permit a different shape of the input spectrum. The input spectrum (28) is a constant (white noise) plus or minus a Lorentzian function with a maximum at 0 (red noise); the full width at half maximum is given by 2τ_a^{−1}. We distinguish between high-pass-filtered noise if −2 < β/β* < 0 [see green region in Fig. 3(a)], which we refer to as green noise (white minus red), and low-pass-filtered noise otherwise [see red region in Fig. 3(a)], which we refer to as white-plus-red noise. In the case that β = 0 or β/β* = −2, we generate white noise. Remarkably, a given input spectrum can be generated by two different values of β, except if we want the spectrum at zero frequency to be zero, which is the case if β = −β*. Put differently, in general, the Markovian embedding is not unique (for a similar ambiguity of the Markovian embedding of a generalized Langevin equation, see [60]). Although the Fokker-Planck equation and its stationary solution are different for the two representations of the same colored noise [cf. Figs. 3(b) and 3(c)], they yield the same spike-train power spectrum [cf. Fig. 3(d)].
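The non-uniqueness of the embedding can be checked numerically from Eq. (28): requiring a given zero-frequency power (β + β*)² = S_ηη(0) is a quadratic condition with two roots in β, and both roots give the same value of β² + 2ββ*, hence identical input spectra at every frequency. A short sketch (parameter values are illustrative, function name is ours):

```python
import numpy as np

def input_spectrum(f, beta_star, beta, tau_a):
    """S_eta(f) = beta_star^2 + (beta^2 + 2*beta*beta_star)/(1 + (2*pi*f*tau_a)^2),
    a white part plus/minus a Lorentzian, consistent with the limits
    S_eta(0) = (beta + beta_star)^2 and S_eta(f -> inf) = beta_star^2."""
    lorentz = 1.0 / (1.0 + (2.0 * np.pi * f * tau_a) ** 2)
    return beta_star**2 + (beta**2 + 2.0 * beta * beta_star) * lorentz

beta_star, tau_a, s0 = 0.5, 10.0, 0.09      # target zero-frequency power s0
# (beta + beta_star)^2 = s0 is quadratic in beta and has two roots:
beta_a = -beta_star + np.sqrt(s0)
beta_b = -beta_star - np.sqrt(s0)

f = np.linspace(0.0, 2.0, 400)
S_a = input_spectrum(f, beta_star, beta_a, tau_a)
S_b = input_spectrum(f, beta_star, beta_b, tau_a)
# two distinct parameter sets, one and the same input spectrum
```

The degeneracy disappears only at s0 = 0, i.e., β = −β*, in line with the statement in the text.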
In the following, we test our theory for green and white-plus-red noise both in a fluctuation-dominated regime and in a mean-driven regime. We vary the timescale and intensity of the input fluctuations by changing τ_a and β*, respectively. The influence of the colored noise on the resulting spike-train power spectrum depends on the ratio between the characteristic frequency f_c = (2πτ_a)^{−1} and r_0. If f_c ≪ r_0, the increase or lack of input power is felt only at very low frequencies; otherwise, the effect of the noise is mainly restricted to that of its white component acting with the strength β*². In the other case, f_c ≫ r_0, the input noise acts like a white noise with the strength (β + β*)².

A. Green noise
Here, we choose input spectra with strongly reduced power at low frequencies. We start in the fluctuation-dominated regime (μ/v_th = 0.75 < 1; see Table I for all parameters). Our theory is capable of predicting the power spectra of all considered parameter sets (cf. agreement between theory and simulations in Fig. 4). The spike-train power spectrum adopts the key feature of the input spectrum: it exhibits lower power up to frequencies where the input spectrum reaches its high-frequency limit. We obtain an absolute minimum at f = 0 in all considered cases. Decreasing τ_a (Fig. 4, from left to right), and thereby increasing f_c, reduces the stationary firing rate (the high-frequency limit of the spectrum) and shifts the low-frequency dip to higher frequencies.
Increasing the noise strength β* (Fig. 4, from top to bottom) increases the stationary firing rate and changes the spectral shapes. Most remarkably, a new maximum [Figs. 4(d), 4(g), and 4(h)] is formed around the frequency at which the input spectrum starts saturating, f_∞ ≈ (2τ_a)^{−1}. This maximum can be understood by considering the white-noise-driven LIF neuron's power spectrum [25], which for large noise intensity exhibits a maximum at f = 0 and a minimum at the firing rate. Reducing the white-noise input at low frequencies (corresponding to our green noise with sufficiently large τ_a) likewise reduces low-frequency power in the output spectrum, thus resulting in the maximum. We would like to emphasize that there is therefore no oscillation mechanism responsible for this peak.
The theory also predicts the spike-train power spectra in the mean-driven regime well (cf. Fig. 5; see Table I for all parameters). In both regimes, we also show one example of a stationary density, the function that enters our essential Eq. (25) as the inhomogeneity. For the green noise considered here, the auxiliary variable and the voltage are clearly negatively correlated, especially in the fluctuation-dominated regime at larger noise [note the pronounced tilt in Fig. 4(j)]. If the mean drive is stronger and fluctuations are weaker, this anticorrelation is not as pronounced [Fig. 5(j)].

B. White-plus-red noise
Here, we increase power at low frequencies, choosing β = ( √ 2 − 1)β * , which keeps the ratio of low- and high-frequency power constant at S ηη (0)/S ηη ( f → ∞) = 2, irrespective of the choice of τ a and β * . As before, our theory shows excellent agreement with the simulation results for a variety of parameter sets (cf. Figs. 6 and 7).
As in the case of green noise, the spike-train power spectrum inherits a main feature of the input spectrum. For the red input noise, low-frequency power is increased also in the output spectrum, yielding a maximum at f = 0. Decreasing τ a (Fig. 6, from left to right) yields an increase in the firing rate and shifts the low-frequency peak to higher frequencies.
Increasing the noise strength β * (Fig. 6, from top to bottom; see Table I for all parameters) increases r 0 and generally leads to more power at low frequencies. Interestingly, white-plus-red noise with sufficiently low f c and small strength forms an additional minimum [Figs. 6(a), 6(b), 7(a), and 7(b)]. The stationary densities in Figs. 6(j) and 7(j) reveal a positive correlation between the auxiliary variable and the voltage variable for white-plus-red noise.
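The qualitative setup can be reproduced with a minimal Euler-Maruyama sketch of an LIF neuron driven by white-plus-red (OU) noise. The parameters below are illustrative assumptions, not the values of Table I; the sketch also checks the generic property used throughout the paper that the spike-train power spectrum approaches the stationary firing rate r 0 at high frequencies.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 5e-4, 200.0
tau_m, tau_a = 0.02, 0.1           # membrane and red-noise time constants (s)
mu, vth, vr = 0.8, 1.0, 0.0        # mean drive, threshold, reset
sig_w, sig_a = 0.1, 0.2            # white-noise strength, stationary std of the OU part
n = int(T / dt)
xi_w = rng.standard_normal(n)
xi_a = rng.standard_normal(n)

v, a = 0.0, 0.0
spikes = np.zeros(n)
for k in range(n):
    # OU ("red") noise component
    a += dt * (-a / tau_a) + sig_a * np.sqrt(2 * dt / tau_a) * xi_a[k]
    # LIF voltage driven by white-plus-red noise
    v += dt * (mu - v + a) / tau_m + (sig_w / tau_m) * np.sqrt(dt) * xi_w[k]
    if v >= vth:
        v = vr
        spikes[k] = 1.0 / dt       # delta spike approximated by 1/dt in one bin

r0 = spikes.sum() * dt / T         # stationary firing rate (Hz)

# segment-averaged power spectrum of the spike train
nseg = 200
m = n // nseg
segs = spikes[:nseg * m].reshape(nseg, m)
S = np.mean(np.abs(np.fft.rfft(segs, axis=1))**2, axis=0) * dt / m
f = np.fft.rfftfreq(m, dt)

# the high-frequency limit of the spike-train spectrum is the firing rate r0
band = (f > 200) & (f < 400)
print(r0, np.mean(S[band]) / r0)
```

Note that the binning of spikes introduces a mild sinc² distortion near the Nyquist frequency, which is why the high-frequency band is chosen well below it.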

C. Varying refractory period
Different types of neurons display a wide range of action potential widths [2], which sets a lower bound for the absolute refractory period in our IF description. Generally, it is of interest how the spiking statistics changes as we increase τ ref .
Furthermore, we can regard the variation of the refractory period as a further validation of our theory: we recall that the incorporation of the absolute refractory period is not completely trivial and thus worth testing.
In Fig. 8 we inspect the effect of changing the refractory period for a neuron in the fluctuation-dominated regime for both green [Fig. 8(a)] and white-plus-red noise [Fig. 8(b); see Table I for all parameters]. Increasing the refractory period to the maximal biophysically plausible value of 100 ms [95] changes the spectral shape drastically. Power at low frequencies is strongly reduced and sharp peaks emerge at the firing rate and its higher harmonics, indicating more regular firing. We note that even for a long refractory period the spectra for the two differently colored inputs still show significant differences, e.g., in the ratio of the first to the second spectral peak.

V. EXPONENTIAL INTEGRATE-AND-FIRE MODEL WITH ADAPTATION
An important extension of the LIF neuron that can be described by Eq. (1) is the popular exponential IF (EIF) neuron with spike-frequency adaptation (AdEx model). Here, an additional exponential term in the membrane-voltage dynamics yields a strong positive feedback once the membrane voltage exceeds the effective threshold v T , such that the model generates an abrupt rise from the effective threshold v T to the true threshold v th , a dynamical feature that significantly improves the similarity to experimentally observed action potentials. To model spike-frequency adaptation, the auxiliary variable is increased by a constant amount δ a whenever the membrane voltage crosses v th (spike-triggered adaptation). Furthermore, so-called subthreshold adaptation is incorporated by a linear term with positive coefficient A, which yields a delayed feedback. The model displays a large variety of spike patterns, even in the deterministic case with β 1 = β 2 = 0 (see [90] for a complete overview).
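The structure of the AdEx dynamics described above can be sketched with a simple Euler-Maruyama integration. This is a minimal sketch in the paper's nondimensional spirit; all parameter values are illustrative assumptions, not those of Table I, and the reset check caps the exponential runaway at each step.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T = 5e-5, 20.0
tau_m, tau_a = 0.01, 0.1       # membrane / adaptation time constants (s)
mu, sig = 1.2, 0.1             # mean drive and white-noise strength
vT, dT = 1.0, 0.05             # effective threshold and spike slope factor
vth, vr = 1.5, 0.0             # true threshold and reset
delta_a, A = 0.5, 0.0          # spike-triggered jump, subthreshold coupling
n = int(T / dt)
xi = rng.standard_normal(n)

v, a, spike_times = 0.0, 0.0, []
for k in range(n):
    # exponential spike-generating term plus adaptation current -a
    dv = (-v + dT * np.exp((v - vT) / dT) + mu - a) / tau_m
    v += dt * dv + (sig / tau_m) * np.sqrt(dt) * xi[k]
    a += dt * (-a + A * v) / tau_a
    if v >= vth:               # fire-and-reset with spike-triggered adaptation
        v = vr
        a += delta_a
        spike_times.append(k * dt)

r0 = len(spike_times) / T
print(r0)
```

With A > 0 the same sketch covers the subthreshold-adaptation variant; adding a noise term to the a dynamics would give the stochastic adaptation discussed below.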
Here, we show by means of a few examples that the introduced theory can predict spike-train power spectra of the nonlinear AdEx model: regular spiking with purely deterministic adaptation (β 2 = A = 0), additional stochastic adaptation (β 2 > 0), subthreshold adaptation (A > 0), and bursting (v r > v T ).

A. Deterministic adaptation
Here, we consider the case of a regularly firing EIF neuron without noise in the adaptation variable (see Table I for all parameters), driven by white current noise. Several examples of spike-train power spectra are presented in Fig. 9, where our theory is in excellent agreement with the simulation results.
When we increase the time constant τ a [Fig. 9(a); see Table I for other parameters], the first effect is a drastic reduction of the firing rate: a larger τ a reduces the time derivative of the adaptation variable except for the jumps upon firing, which are not affected by the choice of τ a in our scaling of the model. Consequently, the time average of a increases with τ a , yielding a stronger mean inhibition for longer time constants, which in turn leads to reduced firing. A second effect of a growing value of τ a is the shrinkage of the frequency range of reduced power due to the adaptation.
Varying the strength of the spike-triggered adaptation δ a [Fig. 9(b)], we find similarly strong effects on the spectral shape: the firing rate drops with growing δ a , and a plateau around the inverse adaptation time constant τ a −1 = 10 Hz is formed (orange and green lines) that merges with the high-frequency limit for strong δ a (red line). Generally, we note that with stronger spike-triggered adaptation the ratio between low- and high-frequency power decreases, which is due to the reduction of long-term variability.

B. Stochastic adaptation
In Ref. [68] it was shown that the finite-size noise of adaptation channels can dominate the firing statistics in sensory neurons. To incorporate such adaptation channel noise, we include a nonvanishing noise term in the auxiliary process (β 2 > 0). This corresponds to a model that is not only subject to a deterministic adaptation current but in addition is driven by a white-plus-red input noise. To see this, we follow Ref. [67] and split the adaptation current into a = a 1 + a 2 , obeying two equations in which the increment only applies to a 1 , i.e., upon firing a 1 → a 1 + δ a while a 2 remains unchanged. As a result, a 1 and a 2 + β 1 ξ 1 (t ) correspond to the deterministic adaptation and the white-plus-red input noise, respectively. We note that the original model is obviously simpler to treat numerically, but the separation is conceptually useful and helps to understand the effects of parameter changes on the power spectrum. The effects of the stochastic adaptation can be understood as a mixture of the effects that we have seen for white-plus-red noise (Fig. 6) and for deterministic adaptation (Fig. 9). As for the colored noise, we obtain a pronounced increase in spectral power for τ a = 40 ms but not for much higher values. Similar to the case of deterministic adaptation, a longer adaptation time yields a reduced firing rate, i.e., a reduced high-frequency limit of the spike-train power spectrum. Hence, for the parameters chosen, the two aspects of the stochastic adaptation, the feedback and the fluctuations, become manifest mainly in different (low- and high-, respectively) frequency bands. This is also supported by the variation of the noise intensity in the adaptation variable [Fig. 10(b)], which modifies mainly the low-frequency power spectrum but leaves the high-frequency limit unchanged.
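Because the auxiliary dynamics is linear, the split a = a 1 + a 2 is exact path by path for any given spike sequence. A short numerical check makes this explicit; here a fixed (externally prescribed) Poisson spike train plays the role of the firing times, and all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, T, tau_a = 1e-3, 10.0, 0.05
beta2, delta_a = 0.3, 0.4
n = int(T / dt)

# fixed (externally given) spike times, here a Poisson train at 20 Hz
jumps = rng.random(n) < 20 * dt
xi = rng.standard_normal(n)

a = np.zeros(n + 1)      # full stochastic adaptation variable (jumps + noise)
a1 = np.zeros(n + 1)     # deterministic adaptation part (jumps only)
a2 = np.zeros(n + 1)     # OU part (noise only) -> the "red" input noise
for k in range(n):
    noise = beta2 * np.sqrt(dt) * xi[k] / tau_a
    a[k + 1] = a[k] - dt * a[k] / tau_a + noise + delta_a * jumps[k]
    a1[k + 1] = a1[k] - dt * a1[k] / tau_a + delta_a * jumps[k]
    a2[k + 1] = a2[k] - dt * a2[k] / tau_a + noise

# linearity: the split reproduces the full process path by path
print(np.allclose(a, a1 + a2))
```

In the full model the spike times of course depend on the voltage (and thus on a), but the decomposition remains exact for whatever spike sequence is realized, which is the conceptual point of the split.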

C. Subthreshold adaptation
Incorporating subthreshold adaptation (A > 0) in the IF model may change the subthreshold dynamics drastically [40]: new fixed points may emerge and one of them may turn into a stable focus, which is accompanied by subthreshold oscillations [65,96]. It is known that such oscillations can become apparent in the spike-train power spectrum [48]. This is also seen in our theoretical and simulation results [cf. Fig. 11(a); see Table I for all parameters]. For sufficiently large values of A, a peak emerges close to the resonance frequency [roughly given by f res = √ (1 + A)/(τ m τ a )/(2π )]; this is particularly pronounced for A = 100. Another effect of the subthreshold-adaptation term is a general reduction of the firing rate.

D. Bursting
Here, we tune the parameters of the AdEx model to evoke bursting, i.e., the neuron fires several spikes in rapid succession and is subsequently silent for a longer period of time [cf. the spike pattern in Fig. 12(a)]. Bursting behavior occurs if the reset voltage is higher than the effective threshold (v r > v T ) and the adaptation is slow and weak. In this case, the exponential term in Eq. (29) yields a rapid increase of the voltage directly after reset and, hence, a rapid firing sequence or, equivalently, short intraburst intervals. Bursting continues until the adaptation variable has grown sufficiently to bring the membrane voltage below the effective threshold. The following silent period persists until the adaptation variable has slowly decayed again, resulting in a long interburst interval. The white current noise that we have included in the model randomizes both the intraburst and interburst intervals as well as the number of spikes within a burst [cf. Fig. 12(a)].

FIG. 11. Exponential IF neuron with spike-triggered and subthreshold adaptation. Various values of A as indicated are used. For comparison, we show the same data for A = 8 in both panels. The vertical black line represents the inverse of the time constant 1/τ a and the dotted lines represent the firing rates consistent with the color encoding. See Table I for parameters.
The stationary density in Fig. 12(b) reflects the bursting mechanism. The fast spiking and resetting with increased a forms the undulating structure close to threshold. Probability that is reset above the voltage null cline (red dashed line) drifts toward lower voltages and lower a. The probability is maximal just beneath the voltage null cline because of the low velocity and the funneling of trajectories in this region. Finally, voltage and adaptation variables reach the initial state of a burst again and the cycle repeats.
Corresponding to the two types of intervals, the resulting spike-train power spectrum [Fig. 12(c)] is characterized by two prominent peaks. The first peak, at around 3.5 Hz, is associated with the interburst interval and dominates the shape of the spectrum. The second peak is less pronounced, is found around 200 Hz, and corresponds to the intraburst intervals. The input noise affects the relative variability of the intraburst intervals much more strongly than that of the longer interburst intervals. In addition, the low number of spikes within a burst is another reason for the smallness of the high-frequency peak. As before, our theoretical results are in good agreement with the numerical simulations.

VI. EXTENSION TO HIGHER-DIMENSIONAL NEURON MODELS
One may wonder whether more complicated colored noise or adaptation processes can be incorporated in our theory. Here, we outline how the framework can be extended to arbitrarily high-dimensional Markovian embeddings and demonstrate for one example that the numerical solution of the corresponding equations is feasible.
The essential step is to replace the scalar process a(t ) by a d-dimensional auxiliary process a(t ) with additive white noise: The d components of ξ (t ) are independent Gaussian white noises obeying ⟨ξ i (t )ξ j (t ′ )⟩ = δ i j δ(t − t ′ ) and B is a d × d matrix with components B kl .
The main achievement of this paper, namely, the analytical relation between the solution of the FPE in the Fourier domain [Eq. (25)] and the spike-train power spectrum [Eq. (26)], can be carried over to the multidimensional model in Eq. (31). The corresponding FPE in the time domain can be written as where the operator L̂ describes the drift and diffusion terms in all dimensions: Analogously to the two-dimensional FPE, the fire-and-reset mechanism is incorporated via the operator R̂, which comprises multiple steps: it measures the probability flux through the d-dimensional threshold manifold, shifts the out-flowing current into the refractory state, evolves the probability density in the refractory state, and reinserts the evolved probability at the reset voltage. As in the two-dimensional case, the operator Ê can be defined by its effect on a probability density P(v, a, t − τ ref ) during the refractory period, where the operator L̂ ref describes the dynamics in the refractory state. The probability density P(v, a, t ) obeys natural boundary conditions for all components a i → ±∞ and v → −∞. Due to the white noise in the voltage dynamics, we have an absorbing boundary at the threshold manifold. To calculate the firing rate, that is, the flux of probability that crosses the threshold, we integrate the flux over all components of our auxiliary process over the entire manifold M a . The Fourier transform of the two-dimensional FPE in Eq. (25) has to be changed by replacing the scalar argument a with the vector a in P 0 (v, a) and Q̂(v, a, ω); apart from this replacement, Eq. (25) remains the central equation to be solved. As in the two-dimensional case, the spike-train power spectrum can be computed from the solution Q̂(v, a, ω) as the integrated flux of probability that crosses the threshold. It is clear that the difficulty of finding a solution increases with the dimension of the Markovian embedding. In the following, we discuss the case of a two-dimensional embedding.

A. Harmonic noise
The extension to a multidimensional auxiliary process makes it possible to calculate spike-train power spectra of neurons driven by noise with a more complex power spectrum. As an example, we calculate the spike-train power spectrum of a LIF neuron driven by a harmonic noise; a similar model (lacking the leak term and with the white noise replaced by an OU process with short correlation time) has been proposed in [97] for the spiking of certain electroreceptor cells and has been studied by the Fokker-Planck method in [69]. Here, we consider the following version: For simplicity, we set τ ref = 0. The auxiliary process y(t ) corresponds to the position of a harmonically bound Brownian particle with velocity s(t ) that is driven by thermal noise ξ 2 (t ) and subject to Stokes friction with coefficient Γ. The input spectrum exhibits a peak at ω p = √ (ω 0 2 − Γ 2 /2) with a width that is controlled by the friction coefficient Γ.

FIG. 13. Leaky IF neuron with a narrow-band noise, represented by a higher-dimensional Markovian embedding. Time evolution of the voltage (a) and driving harmonic noise (b); spike-train power spectrum (c) with peaks at the firing rate, the noise peak frequency, and side bands; input noise spectrum (d). Parameters in Table I. Important frequencies are indicated by vertical lines.
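The location of the spectral peak of such harmonic noise can be verified with a minimal simulation (semi-implicit Euler scheme; parameter values are illustrative assumptions, including the friction symbol Γ and noise strength q):

```python
import numpy as np

rng = np.random.default_rng(4)
dt, T = 2e-4, 100.0
w0, Gam, q = 2 * np.pi * 30.0, 40.0, 1.0   # ~30 Hz oscillator, friction, noise strength
n = int(T / dt)
xi = rng.standard_normal(n)

# harmonically bound Brownian particle: y' = s, s' = -Gam*s - w0^2*y + q*xi
y = np.zeros(n)
s = 0.0
for k in range(n - 1):
    s += dt * (-Gam * s - w0**2 * y[k]) + q * np.sqrt(dt) * xi[k]
    y[k + 1] = y[k] + dt * s          # semi-implicit update for stability

# segment-averaged power spectrum of the noise y(t)
nseg = 100
m = n // nseg
segs = y[:nseg * m].reshape(nseg, m)
S = np.mean(np.abs(np.fft.rfft(segs, axis=1))**2, axis=0) * dt / m
f = np.fft.rfftfreq(m, dt)

f_peak = f[1:][np.argmax(S[1:])]                 # measured peak (skip DC bin)
f_p = np.sqrt(w0**2 - Gam**2 / 2) / (2 * np.pi)  # predicted peak frequency
print(f_peak, f_p)
```

The measured peak should sit at the predicted ω p = √(ω 0 ² − Γ²/2), slightly below the undamped frequency ω 0 .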
The generated input noise [Fig. 13(b)] takes the form of a noisy oscillation characterized by a peaked power spectrum [see Fig. 13(d)]. Unsurprisingly, the resulting spike-train power spectrum in Fig. 13(c) likewise displays a peak at ω = ω p . In the mean-driven regime and for ω p ≈ π r 0 (peak frequency of the harmonic noise roughly equal to half of the firing rate), the spectrum shows additional peaks at f ± = r 0 ± ω p /(2π ), as is also observed in experimental data [69,76]. Applying the general theory (25) to the two-dimensional Markovian embedding (40) yields a spectrum (red circles) that is close to the numerical simulations. Hence, our theory is applicable to a nontrivial higher-dimensional model with a rich correlation structure.

VII. THEORY OF SPARSE RECURRENT NETWORKS
The computation of the power spectrum of a colored-noise-driven IF neuron is the first step toward a correlation-consistent theory of sparse recurrent networks. In such a network, each neuron is subject to the temporally correlated output of other neurons, which is statistically identical to its own output. If we are able to calculate the power spectrum for an arbitrarily correlated input noise, we can ask which kind of input noise will evoke the same correlations in the output spike train. This question can be addressed numerically within an iterative scheme for a single neuron [81-83]. In contrast to this numerical approach, below we provide explicit equations for the self-consistent spectrum in a sparse recurrent IF network with random connectivity.
To be specific, we address the problem for the well-known standard network model proposed by Brunel [17] and often considered since (see, e.g., [1,77,80]). The network consists of two neuronal populations with N E excitatory and N I inhibitory LIF neurons, all obeying the same dynamics: The neuronal input RI (t ) is generated by presynaptic spikes and a constant external current: where t k i denotes the kth spike time of the ith presynaptic excitatory neuron, t l j the lth spike time of the jth presynaptic inhibitory neuron, and D is the synaptic delay time (the index of the postsynaptic cell enters through the specific random choice of the C E + C I presynaptic neurons). Excitatory and inhibitory synaptic weights are given by J and gJ, respectively. Every neuron has fixed numbers of C E excitatory and C I inhibitory input connections, where C E /N E , C I /N I ≪ 1, i.e., the network has a sparse connectivity.
This simple model was shown to exhibit a rich variety of distinct firing patterns and corresponding states of synchronization or desynchronization [17,80]. For dominating recurrent inhibition, a large part of the parameter space is occupied by the asynchronous irregular state, which resembles the spike statistics often observed in awake behaving animals [98]. In that case it is convenient to consider one representative neuron and approximate its neural input, being the sum of many independent spike trains, as a stochastic process η(t ) with Gaussian statistics and constant mean input μ = ⟨RI (t )⟩: The constant mean input depends only on the firing rate r 0 of the presynaptic neurons: As an approximation, the temporal correlations of spike trains were neglected in [17] and η(t ) was approximated by a white-noise process that is fully determined by the firing rate of the presynaptic neurons. Hence, the output firing rate of a neuron depends only on the input firing rate. In a homogeneous network, input and output firing rates have to coincide (mean-field condition for the self-consistency of the first-order statistics). Assuming white Gaussian noise, one can solve for the corresponding firing rate [17]. However, as seen in all of our examples, neuronal spike trains are characterized by temporal correlations with nonflat power spectra, and the neuronal input, being the sum of many spike trains, inherits these correlations [99]. Hence, the condition of self-consistency has to be extended to the spike-train power spectrum: The solution of this equation can be determined using the iterative scheme proposed in [81-83]: simulating Eq.
(44) with an arbitrary initial choice for the power spectrum of the noise η(t ), one measures the power spectrum of the output spike train and generates a new surrogate noise with a power spectrum equal to that of the output spike train; the LIF neuron is then driven by the surrogate noise, the new output spectrum is measured, and the procedure is repeated until input and output spectra agree to within the desired accuracy [i.e., until Eq. (46) is approximately satisfied]. This numerical scheme agrees well with simulations of sparse recurrent networks but still constitutes a Monte Carlo approach rather than a theoretical solution for the self-consistent correlation statistics. Here, our solution of the open-loop problem is used to formulate this theory.
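The iterative scheme can be sketched in a few lines. This toy version is heavily simplified: the neuron is chosen mean-driven so that the loop cannot die out, the proportionality factor phi between output and input spectra is a made-up stand-in for the amplitude factor of Eq. (46), and the surrogate-noise normalization is only approximate.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, Tseg, nseg = 5e-4, 1.0, 100
m = int(Tseg / dt)
f = np.fft.rfftfreq(m, dt)
tau_m, mu, vth, vr = 0.02, 1.2, 1.0, 0.0   # mean-driven LIF (mu > vth)
phi = 2e-4                                  # assumed input/output coupling factor

def surrogate_noise(S):
    """Gaussian noise segment with (approximately) the prescribed spectrum S(f)."""
    amp = np.sqrt(np.maximum(S, 0.0) * m / (2.0 * dt))
    z = amp * (rng.standard_normal(len(S)) + 1j * rng.standard_normal(len(S)))
    return np.fft.irfft(z, n=m)

def output_spectrum(S_in):
    """Drive the LIF with surrogate noise; return spike spectrum and firing rate."""
    spec, nspikes, v = np.zeros(len(f)), 0, 0.0
    for _ in range(nseg):
        eta = surrogate_noise(S_in)
        x = np.zeros(m)
        for k in range(m):
            v += dt * (mu - v + eta[k]) / tau_m
            if v >= vth:
                v, x[k] = vr, 1.0 / dt
                nspikes += 1
        spec += np.abs(np.fft.rfft(x))**2 * dt / m
    return spec / nseg, nspikes / (nseg * Tseg)

S_in = np.full(len(f), 0.005)      # arbitrary white initial guess
for _ in range(4):
    S_out, r0 = output_spectrum(S_in)
    S_in = phi * S_out             # feed the output statistics back as input

print(r0)
```

After a few iterations the rescaled input and output spectra track each other; the theory developed in the text replaces this Monte Carlo loop by an explicit system of equations.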

A. General theory of Markovian embedding of recurrent network noise
In line with our previous representation of colored noise by a Markovian embedding, we write η(t ) as the summed components of an Ornstein-Uhlenbeck process of arbitrarily high dimensionality: Here, 1 denotes the vector in which all components are 1. The matrix A has to ensure that all components of a stay finite. This process is a special case of Eq. (31) with linear functions f (v, a) = −v + 1 · a and g(v, a) = −A a and without adaptation. The crucial difference from the cases considered in Secs. II-VI is that we do not know the matrices A, B and the vector β.
Formally, for given A, B, and β the input-noise power spectrum reads as (see Eq. 4.4.58 in [100]) Without loss of generality, all components of β except one can be set to zero (see Appendix B for details and more discussion of this ambiguity). The unknown coefficients also determine the output power spectrum via the generalizations of the functions Q̂(v, a, ω) and P 0 (v, a) in Eqs. (20) and (25). Combined, we obtain a system of equations for A, B, β, Q̂, P 0 , S, and r 0 , which we first write in its simplest version: This system has to be complemented by the definition of the firing rate, the normalization of the stationary density P 0 , and the numerous boundary conditions for Q̂ and P 0 . We emphasize again that this set of equations determines not only the two functions Q̂ and P 0 but also the coefficients of the noise process η(t ). The full complexity of the mathematical problem becomes apparent if we write this system of equations out explicitly [here ∂ a = (∂ a 1 , ∂ a 2 , . . . , ∂ a d )]. This set of equations constitutes a mean-field theory of sparsely connected networks of spiking LIF neurons that takes into account the self-consistent temporal correlations of spike trains on all timescales. Put differently, the self-consistency should be satisfied at all frequencies, at least in the limit case of an infinite-dimensional Markovian embedding. Showing the existence, uniqueness, and stability of the solution in A, B, β, Q̂, P 0 , and r 0 is a challenging problem for the theory and may reveal novel dynamical regimes of spiking networks, of which the heterogeneous asynchronous state found in [80] is but one example. In the following, we simply assume that this problem has a solution and test numerical methods to find this solution for a simple example. We stress, however, that this set of equations deserves much more attention in future investigations.
In the above derivation we have assumed that the summed components of a multidimensional OUP may exhibit an arbitrary power spectrum. In general, however, such a process has to be infinite-dimensional, which is difficult to implement numerically. In what follows, we restrict ourselves to finite- (and in fact rather low-) dimensional approximations of the self-consistency problem.

B. Finite-dimensional Markovian embedding of the network noise
For a d-dimensional embedding, the input spectrum in Eq. (49) can be expressed as the following rational function: where the coefficients X k and Y ℓ are related to the matrices A, B and the vector β [adj(A) is the adjoint matrix of A].
We recall that Eq. (52) approximates the input power spectrum of summed spike trains multiplied by current amplitudes. The constant term β 2 can then readily be identified as the input firing rate with an additional amplitude factor [defined in Eq. (46)]. The equality of input and output firing rates, already used in [17], can thus be interpreted as the consistency of spike-train and input spectra at infinite frequency. In the diffusion approximation, input fluctuations are assumed to be uncorrelated, and the corresponding white input spectrum can be regarded as a zero-dimensional "embedding" (d = 0) of the network noise. With a finite nontrivial Markovian embedding (0 < d < ∞), self-consistency can be achieved at several frequencies. A particularly simple relation is obtained in the zero-frequency limit (ω = 0): For a given output spectrum S(ω), we can write 2d − 1 linear equations for the remaining 2d − 1 unknown coefficients X k (k = 1, . . . , d − 1) and Y ℓ (ℓ = 1, . . . , d): The optimal choice of the corresponding 2d − 1 frequency values is not obvious. A reasonable frequency value is certainly the firing rate (ω 1 = 2π r 0 ), which exhausts the number of possible values for d = 1. For d = 2 we additionally choose ω 2 = π r 0 and ω 3 = 4π r 0 [for details on the numerical solution of Eqs. (54)-(56), see Appendix B]. Figure 14 displays our numerical solution for embeddings with d = 1 (blue) and d = 2 (red). We also compare with Brunel's theory, corresponding to d = 0 (green), and with simulation results for a large recurrent network (black).

FIG. 14. Self-consistent solutions of the spike-train power spectrum for low-dimensional Markovian embeddings and comparison to a network simulation. The black line represents the spike-train power spectrum in the network (cf. Table II for parameters), obtained by averaging raw spectra over all neurons. Colored lines are self-consistent input S ηη (ω)/φ (dashed) and output (solid) power spectra; colors indicate the dimensionality of the Markovian embedding. Frequency values at which the self-consistency is enforced are indicated by vertical dotted lines. See Tables III and II for parameters.

The Brunel theory (green) displays a clear discrepancy between the white input spectrum S ηη /φ (dashed) and the high-pass spike-train power spectrum (solid) except in the high-frequency limit. For d = 1 (blue), dashed and solid lines, i.e., rescaled input and output spectra, are closer together and intersect at the three frequencies ω = 0, 2π r 0 , ∞. With d = 2 (red), consistency of input and output spectra can be achieved at five distinct frequencies, and the dashed and solid lines become nearly indistinguishable. Moreover, the resulting spectra are close to the power spectrum measured in the recurrent network (black line); the remaining small discrepancy is most likely due to the shot-noise character of the input [101], which we have neglected in the Gaussian approximation. Our example demonstrates the correctness of our general theory and its numerical feasibility for low-dimensional Markovian embeddings, at least in special cases.

VIII. SUMMARY AND OPEN PROBLEMS
In this paper we have developed a theory for the power spectrum of a multidimensional stochastic integrate-and-fire neuron, i.e., a nerve cell that is subject to a nontrivial voltage dynamics, to a temporally correlated noise, or to spike-frequency adaptation. The theory consists of a system of partial differential equations including a nonlocal operator for the resetting and the evolution of the probability in the refractory state. We also provided in the Appendices a numerical implementation for the solution of the equations and tested it on a large number of cases. In all cases, we found good agreement between the theory and the results of stochastic simulations. We demonstrated that already with a relatively small number of auxiliary variables, various features in the spike-train power spectrum emerge, some of which are also known from the experimental literature. Interestingly, we found that the Markovian embedding of colored noise is not unique: there are distinct stochastic differential equations that yield the same Gaussian noise. This ambiguity might be even larger in higher dimensions (see also [60]) and is thus worth exploring in more detail. It remains to be seen whether these different representations of the same colored noise have a meaning and whether some of them are numerically simpler to find than others.
Our framework can be applied and extended in a number of directions. First of all, the theory may become the starting point of analytical approaches beyond the existing ones, which are limited to IF models with weak or short-correlated Ornstein-Uhlenbeck noise. Second, with more efficient algorithms (efficient tools for the solution of multidimensional FPEs [102,103] might be useful here), also higher-dimensional situations, e.g., an adapting neuron with narrow-band noise input (corresponding to a system of four stochastic differential equations), might become tractable; possible candidates are eigenfunction expansions [104,105] and the matrix-continued-fraction method [13]. Third, similar to the one-dimensional white-noise case [30,31], the calculation of the firing-rate modulation in response to a time-dependent stimulus (other than noise) will follow a very similar mathematical framework to the one presented here for the calculation of the power spectrum. Knowledge of the rate modulation and of the spontaneous power spectrum gives us access to information-theoretic measures like the coherence or the lower bound on the mutual information and enables the systematic study of the information-filtering properties of sensory neurons [36].
The theory as developed here is applicable to many interesting cases: neurons subject to colored channel noise [68], narrow-band driving by other cells [76], and other situations in which we have a clear idea about the correlations of the underlying noise sources, e.g., for some cell types in the sensory periphery. However, neurons beyond the periphery operate in large and sparse recurrent networks in which most of the noise emerges from the nonlinear interactions among the nerve cells [77,80,106-108]. The dynamics of such networks can be investigated by the self-consistent mean-field theory introduced here. The corresponding set of equations poses a number of open problems for future studies: exploring the existence, uniqueness, and stability of the solution (or solutions) may reveal novel dynamical regimes and thus yield a deeper understanding of the complex dynamics of such networks. Interestingly, the approximation by a finite-dimensional Markovian embedding includes in a systematic way the theory by Brunel [17], which implements the lowest-order self-consistency, i.e., a self-consistency between input and output firing rates or, in our Fourier formulation, a self-consistency of input and output power spectra at infinite frequency. Convincingly, with increasing dimensionality of the embedding, self-consistency of input and output spectra can be achieved at an increasing number of distinct frequencies, and the resulting spectrum approaches the true spectrum observed in network simulations (Fig. 14).
The results on the recurrent network complement an approach to the correlation problem that was recently put forward in Ref. [109]. In the latter study, the power spectra of IF neurons in a Brunel network were considered in the limit case of a strong mean input drive, in which spectra display pronounced peaks at the firing rate. The spectrum that we found as the self-consistent solution (cf. Fig. 14) looks very different and displays a pronounced reduction of power at low frequencies, similar to spectra measured experimentally in vivo [74].
Specifically with regard to the mean-field-theoretical applications of the model, one could be worried about the approximation of the spike-train input by a Gaussian noise process. Indeed, for stronger synaptic amplitudes (or amplitude distributions with a fat tail), one has to consider synaptically filtered shot noise instead of Gaussian fluctuations. Although exact results for the firing rate, power spectrum, and response functions exist if the input noise is a Poisson process [101], it is a difficult problem to derive and solve similar density equations for general point processes (for an exception, in which the input spike train can be described by a renewal point process, see [110]). However, the self-consistent numerical simulation schemes developed and tested in [81-83] for large recurrent networks of pulse-coupled IF neurons suggest that the Gaussian approximation (a basic assumption of our theory) is a reasonable one in many situations of interest, especially in networks with moderate synaptic amplitudes. Hence, we are confident that the framework developed here will be a useful step toward a better theoretical understanding of the spike statistics and of the information processing in the neural networks of the brain.

ACKNOWLEDGMENT
This paper was developed within the scope of the IRTG 1740/TRP 2015/50122-0, funded by the DFG/FAPESP.

APPENDIX A: NUMERICAL SOLUTION OF SPECTRUM AND DENSITY FOR THE MULTIDIMENSIONAL IF MODEL
Except for a few special cases, analytical solutions of multidimensional FPEs are lacking. Here, we outline the numerical methods to solve Eqs. (20), (21), and (25) in order to evaluate the power spectrum (26). We follow standard procedures for the numerical integration of partial differential equations [86].

Discretization and boundary conditions
We approximate the derivatives to second order using finite differences and discretize v, a, and time,

v_i = v_0 + i Δv, a_j = a_0 + j Δa, t_k = k Δt,

with i, j ∈ {0, 1, . . . , N − 1} and k ∈ {0, 1, . . .} (see Table IV for all parameters for d = 1 and Table V for the higher-dimensional model). The last discrete element of v is located right in front of the threshold, i.e., the element v_0 + N Δv = v_th is not included. The numerical values v_0, a_0, and N are chosen to properly resolve the relevant region of phase space, which is done by visual inspection of the stationary density. Put differently, one has to ensure that the final result (the power spectrum) does not depend (or depends only weakly) on the specific values of v_0, a_0, and N. If the boundaries other than the one at the threshold are placed at a sufficient distance from the main location of the probability, the kind of boundary condition imposed there (reflecting, absorbing, or periodic) should become unimportant. We opt for absorbing boundary conditions, which are implemented via the operators (see below). We have also tested reflecting boundary conditions and did not find any change in our results.
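As an illustration, the grid construction above can be sketched in a few lines of Python (all numerical values are hypothetical placeholders, not the parameters of Tables IV and V):

```python
import numpy as np

# Illustrative grid parameters (placeholders, not the values of Tables IV/V)
N = 200                  # grid points per dimension
v0, vth = -1.0, 1.0      # lower voltage boundary and firing threshold
a0, a_max = -2.0, 2.0    # range of the auxiliary variable

dv = (vth - v0) / N      # spacing chosen so that v_{N-1} = vth - dv
da = (a_max - a0) / N

v = v0 + dv * np.arange(N)   # v_i = v0 + i*dv; the threshold itself is excluded
a = a0 + da * np.arange(N)   # a_j = a0 + j*da
```

The last grid point v[N − 1] = v_th − Δv then lies directly in front of the threshold, as required.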
The linear operators L̂ and R̂ appearing in Eqs. (20) and (25) are approximated by large and sparse matrices of dimension N² × N² that act on P and Q. Discretized, the FPE now reads as a matrix recursion for the probability vector P^k, where, for simplicity, we use for the absolute refractory period an integer multiple of the time step, τ_ref = N_ref Δt.

Subthreshold dynamics
We first discuss the evolution operator of the subthreshold dynamics, L̂. We use symmetric approximations of the derivatives in v and a, yielding for the operator L̂ a block-tridiagonal matrix. By means of δ̄_{k,m} = 1 − δ_{k,m} we have incorporated the absorbing boundary conditions into the operator. The sparse matrix has only nine nonzero components in each row, and even fewer if the corresponding element is located next to a boundary.
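A minimal sketch of how such a sparse finite-difference operator can be assembled with SciPy (the drift μ − v − a, the Ornstein-Uhlenbeck-like auxiliary dynamics, and all parameter values are illustrative assumptions, not the models of the main text; cross-derivative terms are omitted for brevity):

```python
import numpy as np
import scipy.sparse as sp

def fpe_operator(v, a, mu=0.8, D=0.1, tau_a=1.0, Da=0.05):
    """Central-difference discretization of a 2D Fokker-Planck operator
    L P = -d/dv[(mu - v - a) P] + D d^2P/dv^2
          + d/da[(a/tau_a) P] + Da d^2P/da^2.
    Truncating the stencils at the grid edges implements absorbing
    boundaries (flux beyond the boundary is simply dropped)."""
    N, dv = len(v), v[1] - v[0]
    M, da = len(a), a[1] - a[0]
    I_v, I_a = sp.identity(N), sp.identity(M)

    d1v = sp.diags([-1.0, 1.0], [-1, 1], shape=(N, N)) / (2 * dv)
    d2v = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N)) / dv**2
    d1a = sp.diags([-1.0, 1.0], [-1, 1], shape=(M, M)) / (2 * da)
    d2a = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(M, M)) / da**2

    # state-vector ordering P_{i + j*N}: the v index runs fastest
    L = (sp.kron(I_a, -d1v @ sp.diags(mu - v) + D * d2v)   # v drift/diffusion
         + sp.kron(sp.diags(a), d1v)                       # coupling: +a dP/dv
         + sp.kron(d1a @ sp.diags(a) / tau_a + Da * d2a, I_v))  # a dynamics
    return L.tocsr()
```

Each row of the resulting matrix couples only a handful of neighboring grid points, so sparse storage and sparse solvers are essential at realistic resolutions.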

Firing, evolution in refractory state, and reset
The fire-and-reset operator R̂ performs four different steps. The operator (i) absorbs the probability that crosses the threshold and transfers it into the refractory state, (ii) shifts it along the a axis (spike-triggered adaptation), (iii) evolves the probability during the refractory period, and (iv) finally reinserts the probability at v_r. Numerically, we perform these steps by subsequent matrix multiplications. The (N × N²)-dimensional matrix F̂ extracts the efflux of probability crossing the threshold and puts it into the refractory state (an N-dimensional vector). The probability that crosses the threshold within a small time interval Δt [being proportional to the derivative with respect to v in Eq. (13)] is here approximated by the finite difference between the vanishing value at the threshold and P^k_{N−1+jN} (the latter terms are exactly the ones picked out by the matrix).
To account for spike-triggered adaptation, this probability is shifted along the a axis by a constant amount δ_a. In the likely case that δ_a is not a multiple of the bin size Δa, we shift the content of each probability bin into two neighboring bins at distances n_a = M(δ_a, Δa) = δ_a/Δa − (δ_a mod Δa)/Δa and n_a + 1, respectively [the function M(z, Δ) returns the rounded-down fraction of z/Δ]. The fractions of probability that end up in the (j + n_a)th and (j + n_a + 1)th bins are given by κ_a = 1 − (δ_a mod Δa)/Δa and 1 − κ_a, respectively. We can represent the shift operator Ŝ_{m′,m} accordingly, where we look at the influx into the jth bin rather than the efflux from the jth bin. The part of the flux that is shifted from outside or out of the considered range of j is neglected. The probability evolves in the refractory state with the reduced tridiagonal N × N Fokker-Planck operator acting only on the a variable. In case of a stochastic auxiliary process, the whole evolution during the refractory period can be approximated by numerical integration using the forward-time centered-space scheme [86], yielding the operator Ê. To obtain sufficiently accurate results, we used an estimate for a reliable time step for a pure diffusion process, N_ref = 4τ_ref(β₁² + β₂²)/(2τ_a² Δa²) (see [86]). If the auxiliary process is completely deterministic (as in Sec. V D), we can simplify the evolution operator and recast it as a simple shift operator, similar to the effect of spike-triggered adaptation, where a_det(t; a(0)) is the solution of the deterministic equation τ_a ȧ = g(v_ref, a) with the initial condition a(0). To reinsert the evolved probability, we approximate the δ function in Eq. (19) that transfers the flux to the reset voltage by Kronecker deltas.
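The two-bin splitting described above amounts to a linear interpolation of the shift. A sketch (the function name and the grid values in the usage example are our own; the weights implement the κ_a rule directly):

```python
import numpy as np

def shift_matrix(M_bins, delta_a, da):
    """Shift operator along the a axis: moves probability by delta_a,
    splitting each bin between the bins at distance n_a and n_a + 1
    with weights kappa_a and 1 - kappa_a (linear interpolation).
    Probability shifted beyond the grid is neglected, as in the text."""
    n_a = int(delta_a // da)              # rounded-down fraction M(delta_a, da)
    kappa_a = 1.0 - (delta_a % da) / da   # weight of the nearer target bin
    S = np.zeros((M_bins, M_bins))
    for j in range(M_bins):               # column j: efflux from bin j
        if j + n_a < M_bins:
            S[j + n_a, j] = kappa_a
        if j + n_a + 1 < M_bins:
            S[j + n_a + 1, j] = 1.0 - kappa_a
    return S
```

For instance, with δ_a = 2.5 Δa a unit of probability in bin 0 ends up half in bin 2 and half in bin 3; away from the upper boundary the columns sum to one, so probability is conserved.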

Stationary solution
With the approximated operators, Eq. (20) attains the form of a large and sparse linear system,

((L̂ + R̂)P⁰)_{i,j} = 0; (A12)

additionally, the density is normalized. To solve the above system, we first set an arbitrary element to one and solve the inhomogeneous system

((L̂ + R̂)P⁰)_{i,j} + P⁰_{n_r + N²/2} = 1, (A13)

using either a direct approach with the LU decomposition performed with the SUPERLU package in Python [111], or an indirect one with the ILU decomposition as preconditioner and the stabilized biconjugate gradient method, both provided in the Eigen library [112] (the methods used for the respective models are listed in Tables IV and V).
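One common variant of this pinning idea can be illustrated with SciPy's SuperLU interface on a toy generator matrix whose stationary density is known (the 3 × 3 example, the choice of pinned index, and the pin-one-row formulation are ours; this is a sketch of the idea, not the production solver):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def stationary_density(L_total):
    """Solve the singular system L_total @ P0 = 0 for the stationary
    density: add the pinned element to one equation, which makes the
    system inhomogeneous and uniquely solvable, then normalize."""
    n = L_total.shape[0]
    pin = n // 2                        # arbitrary pinned element (cf. n_r + N^2/2)
    A = L_total.tolil(copy=True)
    A[pin, pin] += 1.0                  # adds P0[pin] to that row's equation
    b = np.zeros(n)
    b[pin] = 1.0
    P0 = spla.splu(A.tocsc()).solve(b)  # direct sparse LU (SuperLU)
    return P0 / P0.sum()                # impose normalization afterwards

# toy example: symmetric 3-state Markov generator -> uniform stationary density
G = sp.csr_matrix(np.array([[-1.0, 0.5, 0.5],
                            [0.5, -1.0, 0.5],
                            [0.5, 0.5, -1.0]]))
P0 = stationary_density(G)
```

For the symmetric toy generator, the normalized solution is the uniform density, as expected.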
From the unnormalized solution, we can calculate the stationary firing rate using Eqs. (14) and (21).

FPE solution in Fourier domain and spike-train power spectrum
With the stationary density, we know the inhomogeneity in Eq. (25), and by replacing the operators by matrices, similarly as done above, we obtain a linear system, Eq. (A16). We find a solution Q̃(ω) for a given ω with the same method as used for solving Eq. (A13). With Q̃(ω) we can calculate the power spectrum via Eq. (26). In the limit of low frequencies, ω → 0, we observe numerical instabilities because Q̃(ω → 0) is not uniquely determined by Eq. (A16): every Q̃′(ω → 0) = Q̃(ω → 0) + γ P⁰(v, a) is a solution as well. We find an additional condition for Q̃(ω → 0) from the normalization condition for the time-dependent probability density, which can be expressed by the temporally integrated efflux. After a number of manipulations, we arrive at the relation (A18). We can find the correct Q̃(ω → 0) = Q̃′(ω → 0) − γ P⁰(v, a) by inserting this expression into Eq. (A18), solving for γ, and reinserting this solution into the expression for Q̃(ω → 0). Expressing the integrals by simple sums, we obtain γ from the sum Σ_{i,j} [(1 + τ_ref R̂)Q̃′(ω → 0)]_{i+jN} [Eq. (A19)]. In this paper, this procedure is applied for all frequencies lower than 1 Hz.
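The γ correction has the generic structure of removing a null-space component with one extra linear constraint; a numerical sketch (the vectors and the constraint functional are illustrative stand-ins for the actual efflux condition (A18)):

```python
import numpy as np

def remove_null_component(Q_prime, P0, c):
    """Given a solution Q_prime that is determined only up to an additive
    multiple gamma * P0 (the null vector of the singular system), fix
    gamma by demanding that the linear functional c . Q = 0 holds for the
    corrected solution Q = Q_prime - gamma * P0."""
    gamma = (c @ Q_prime) / (c @ P0)
    return Q_prime - gamma * P0
```

Any spurious admixture of the stationary density is thereby projected out, which is what stabilizes the spectrum at frequencies below 1 Hz.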

APPENDIX B: MEANINGFUL CHOICE OF THE EMBEDDING AND APPLICATION TO THE SELF-CONSISTENCY PROBLEM
As we have studied in detail in the one-dimensional case, the Markovian embedding in Eq. (49) that yields a given power spectrum is ambiguous; thus, the number of free variables can be reduced without loss of generality. If the dimensionality is d > 1, we may reduce the ambiguity by exploiting the invariance of the summed OUP Eq. (48) under a rotation of the auxiliary components, where Sᵀ = S⁻¹ denotes an arbitrary rotation matrix. Consequently, all components may be rotated into the first dimension [β⃗ = (β, 0, . . . , 0)], and any β⃗ and B can be replaced by this β⃗ and a corresponding matrix B without restricting the power spectrum's shape. For d = 2, the remaining ambiguity in A and B can be further reduced by the choice

A = [[a, a₁], [a₂, a]], B = [[b₁, 0], [0, b₂]], β⃗ = (β, 0)ᵀ.
(B2)

For the particular choice b₂ = 0, Eq. (53) can be inverted (here W = X₀ + β² − β), and the only remaining freedom is the choice of the sign in the formula for a₂. This apparently satisfying choice (b₂ = 0), leading to the simple system above, however turns out to be disadvantageous for the numerical solution of the FPE (second-order derivatives stabilize the solution of the PDE; see [86], Sec. 20.5). Instead, we numerically solve Eq. (53) with the additional constraints (B2) and obtain a (nonunique) set of parameters A, B, β⃗ that are given in Table III. The resulting self-consistent spectra show a slight dependence on the specific constraint because of the limited number of grid points. For the iterative determination of the self-consistent solution in Sec. VII B, the coefficients of the OUP for the next iteration are calculated from the spike-train power spectrum at zero frequency, S_ηη(0) = φS(0), and at the c_max = 2d − 1 frequencies ω_c [S_ηη(ω_c) = φS(ω_c)]. First, inserting the given points of the spectrum into Eq. (52) yields a linear system in the coefficients X_k and Y. Then, the OUP coefficients for the next iteration follow from solving Eq. (53).
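The iterative determination of the self-consistent coefficients is a fixed-point problem. A generic damped iteration can serve as a sketch of the scheme (the update map here is a stand-in for the actual pipeline — solve the FPE, read off S(0) and S(ω_c), refit the OUP via Eqs. (52) and (53) — and the damping value is our own choice):

```python
import numpy as np

def self_consistent_iteration(update, x0, damping=0.5, tol=1e-10, max_iter=1000):
    """Damped fixed-point iteration x_{n+1} = (1 - d) x_n + d * update(x_n).
    In the self-consistency problem, x would collect the OUP coefficients
    and update(x) would return the coefficients refitted from the
    spike-train power spectrum computed with the current noise model."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = (1.0 - damping) * x + damping * np.asarray(update(x))
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x
```

Damping is the standard remedy when the bare update map overshoots; the iteration converges whenever the damped map is contractive near the solution.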