Conditions for wave trains in spiking neural networks

Spatiotemporal patterns such as traveling waves are frequently observed in recordings of neural activity. The mechanisms underlying the generation of such patterns are largely unknown. Previous studies have investigated the existence and uniqueness of different types of waves or bumps of activity using neural-field models, phenomenological coarse-grained descriptions of neural-network dynamics. But it remains unclear how these insights can be transferred to more biologically realistic networks of spiking neurons, where individual neurons fire irregularly. Here, we employ mean-field theory to reduce a microscopic model of leaky integrate-and-fire (LIF) neurons with distance-dependent connectivity to an effective neural-field model. In contrast to existing phenomenological descriptions, the dynamics in this neural-field model depends on the mean and the variance of the synaptic input, both of which determine the amplitude and the temporal structure of the resulting effective coupling kernel. For the neural-field model we employ linear stability analysis to derive conditions for the existence of spatial and temporal oscillations and wave trains, that is, temporally and spatially periodic traveling waves. We first prove that wave trains cannot occur in a single homogeneous population of neurons, irrespective of the form of distance dependence of the connection probability. Compatible with the architecture of cortical neural networks, wave trains emerge in two-population networks of excitatory and inhibitory neurons as a combination of delay-induced temporal oscillations and spatial oscillations due to distance-dependent connectivity profiles. Finally, we demonstrate quantitative agreement between predictions of the analytically tractable neural-field model and numerical simulations of both networks of nonlinear rate-based units and networks of LIF neurons.


Introduction
Experimental recordings of neural activity frequently reveal spatiotemporal patterns such as traveling waves propagating across the cortical surface [1][2][3][4][5][6][7][8] or within other brain regions such as the thalamus [3,9] or the hippocampus [10]. These large-scale dynamical phenomena are detected in local-field potentials (LFP) [11] and in the spiking activity [12] recorded with multi-electrode arrays, by voltage-sensitive dye imaging [13], or by two-photon imaging monitoring the intracellular calcium concentration [14]. They have been reported in in-vitro and in-vivo experiments, in both anesthetized and awake states, and during spontaneous as well as stimulus-evoked activity [3].
Previous modeling studies have shown that networks of spiking neurons with distance-dependent connectivity, extending in one- or two-dimensional space, can exhibit a variety of such spatiotemporal patterns [15][16][17][18]. For illustration, consider the example in Fig 1. Depending on the choice of transmission delays, the spatial reach of connections, and the strength of inhibition, a network of leaky integrate-and-fire (LIF) model neurons generates asynchronous-irregular activity (A), spatial patterns that are persistent in time (B), spatially uniform temporal oscillations (C), or propagating waves (D); for the remaining parameters, see Table 4. Distance-dependent connectivity is a prominent feature of biological networks: in the neocortex, local connections are established within a radius of about 500 µm around a neuron's cell body [19], and the probability of two neurons being connected decays with distance [20][21][22].
So far, the formation of spatiotemporal patterns in neural networks has mainly been studied by means of phenomenological neural-field models describing network dynamics at a macroscopic spatial scale [23][24][25]. Such models can describe patterns in recorded brain activity that are related to movement [26] or occur in response to a visual stimulus [27]. Neural-field models are formulated as continuous nonlinear integro-differential equations for a spatially and temporally resolved activity variable and usually possess an effective distance-dependent connectivity kernel. These models provide insights into the existence and uniqueness of diverse patterns which are stationary or nonstationary in space and time, such as waves, wave fronts, bumps, pulses, and periodic patterns (reviewed in [28][29][30][31][32][33][34]). There are two main techniques for analyzing spatiotemporal patterns in neural-field models [32]: First, in the constructive approach introduced by Amari [25], bump or wave solutions are explicitly constructed by relating the spatial and temporal coordinates of a nonlinear system (reviewed in [28, Section 7] and [32]). Second, the emergence of periodic patterns is studied with bifurcation theory, as in the seminal works of Ermentrout and Cowan [35][36][37][38]. In this latter framework, linear stability analysis is often employed to detect pattern-forming instabilities and to derive conditions for the onset of pattern formation (see for example [39,40] or the reviews [28, Section 8] and [32, Section 5]). There are four general classes of states that can linearly bifurcate from a homogeneous steady state: a new uniform stationary state, temporal oscillations (spatially uniform and periodic in time, also known as global 'bulk oscillations' [41]), spatial oscillations (spatially periodic and stationary in time), and wave trains (spatially and temporally periodic; a special type of traveling waves), see [28, Section 8] and [42][43][44]. The analysis of these states is often called '(linear) Turing instability analysis' [29,44,45], referring to the work of Turing on patterns in reaction-diffusion systems [46]. The respective instabilities leading to these states are termed firing-rate instability, Hopf instability [47], Turing instability, and Turing-Hopf [42] or 'wave' [40] instability. The instabilities generating temporally periodic patterns (Hopf and Turing-Hopf instabilities) are known as 'dynamic' [44] or 'nonstationary' [48] instabilities, in contrast to 'static' [44] or 'stationary' [48] instabilities generating temporally stationary patterns.
The emergence of pattern-forming instabilities has been investigated with respect to system parameters such as the spatial reach of excitation and inhibition in an effective connectivity profile [28]; specifically without transmission delays [49,50], or with constant [42,51], distance-dependent [40,41,43,45,52-56], or both types [57,58] of delays. Faye and Faugeras [59] show existence and uniqueness of solutions and provide conditions for asymptotic stability of the trivial homogeneous steady state of the corresponding linearized system using a Lyapunov functional. The principle of linearized stability for such models was proven by Veltz and Faugeras [60]. Dijkstra et al. [61] provide a rigorous analysis of a one-dimensional neural-field model revealing pitchfork-Hopf bifurcations. The existence of standing waves emerging from a Turing bifurcation of the trivial homogeneous steady state, in a linearized neural-field model with space-dependent delays on a sphere, was shown by Visser et al. [62].
Neural-field models treat neural tissue as a continuous excitable medium and describe neural activity in terms of a space- and time-dependent real-valued quantity. Throughout the current work the spatial coordinate refers to physical space, although in general it could also be interpreted as feature space. At the microscopic scale, in contrast, neural networks are composed of discrete units (neurons), which interact via occasional short stereotypical pulses (spikes) rather than continuous quantities like firing rates. In the neocortex, spiking activity is typically highly irregular and sparse [63,64], with weak pairwise correlations [65]. To date, a rigorous link between this microscopic level and the macroscopic description by neural-field models is lacking [31,33,66,67]. While randomly connected spiking networks have been extensively analyzed using mean-field approaches [64,68-71], the theoretical understanding of spatially structured spiking networks is still deficient. A recent work in this direction is Esnaola-Acebes et al. [72], who investigate ring networks of quadratic integrate-and-fire model neurons and provide bifurcation diagrams showing temporal oscillations and bump states, supported by both mathematical analysis and simulation. But in general it remains unclear how to qualitatively transfer insights on the formation of spatiotemporal patterns from neural fields to networks of spiking neurons. Moreover, it is unknown how the multitude of neuron, synapse, and connectivity parameters of spiking neural networks relates to the effective parameters in neural-field models. A quantitative link between the two levels of description is, for example, required for adjusting parameters in a network of spiking neurons such that it generates a specific type of spatiotemporal pattern, and to enable model validation by comparison with experimental data.
Different efforts have already been undertaken to match spiking and time-continuous rate models with spatial structure. Certain assumptions and approximations allow the application of techniques for analyzing spatiotemporal patterns developed for neural-field models. The above-mentioned constructive approach [25], for example, can be applied to networks of spiking neurons under the assumption that every neuron spikes at most once, thus ignoring the sustained spike generation and after-spike dynamics of biological neurons [73][74][75]. A related simplification substitutes a spike train by an ansatz for a wave front. This leads to a mean-field description of single-spike activity often applied to a spike-response model [76][77][78][79]. Traveling-wave solutions have also been proposed for a network of coupled oscillators and a corresponding continuum model [80]. In the framework of bifurcation theory, Roxin et al. [42,51] demonstrate a qualitative agreement between a neural-field model and a numerically simulated network of Hodgkin-Huxley-type neurons in terms of emerging spatiotemporal patterns. However, the authors do not observe stable traveling waves in the spiking network, even though the neural-field model predicts their occurrence. In the limit of slow synaptic interactions, spiking dynamics can be reduced to a mean-firing-rate model for studying bifurcations [81][82][83]. An example is the lighthouse model [84,85], defined as a hybrid between a phase oscillator and a firing-rate model, which reduces to a pure rate model for slow synapses [86]. Laing and Chow [87] demonstrate a bump solution in a spiking network and discuss a corresponding rate model. In a sequence of recent studies, the group around Doiron and Rosenbaum explored spatially structured networks of LIF neurons without transmission delays in the continuum limit with respect to the spatial widths of connectivity. The authors focus on the existence of the balanced state [88], the structure of correlations in the spiking activity [89], and bifurcations in the linearized dynamics in relation to network computations [90]. Spreizer et al. [91] further demonstrate that spatiotemporal activity sequences can be induced by anisotropic but spatially correlated connectivity. Kriener et al. [92] employ static mean-field theory and extend the linearization of a network of LIF neurons with constant delays, as described by Brunel [69], to spatially structured networks. That work derives conditions for the appearance of spontaneous symmetry breaking that leads to stationary periodic bump solutions (spatial oscillations), and distinguishes between the mean-driven and the fluctuation-driven regime. A coarse-graining procedure for a ring network of modified binary neurons with refractoriness was presented by Avitabile and Wedgwood [93]. By combining analytical and numerical analysis they show the existence of bumps and traveling waves.
Despite these previous works on spatially structured network models of spiking neurons and attempts to link them with neural-field models, there still exists no systematic way of mapping parameters between these models. Furthermore, none of these studies focuses on uncovering the underlying mechanism of wave trains in spiking networks. In the present work we establish the so far missing quantitative link between a sparsely connected network of spiking LIF neurons with spatial structure and a typical neural-field model. An explicit parameter mapping between the two levels of description allows us to study the origin of spatiotemporal patterns analytically in the neural-field model using linear stability analysis, and to reproduce the predicted patterns in spiking activity. We employ mean-field theory to derive the neural-field model as an effective rate model that depends on the dynamical working point of the network, characterized by both the mean and the variance of the synaptic input. The rate model accounts for biological constraints such as a static weight that is either positive (excitatory) or negative (inhibitory) and a spatial profile that can be interpreted as a distance-dependent connection probability. Given these constraints, we show that wave trains cannot occur in a single homogeneous population, irrespective of the shape of the distance-dependent connection probability. For two-population networks of excitatory and inhibitory neurons, in contrast, wave trains emerge for specific types of spatial profiles and for sufficiently large delays, as shown in Fig 1D. The remainder of the study is structured as follows: In Results we derive the conditions for the existence of wave trains for a typical neural-field model by linear stability analysis, present an effective model corresponding to the microscopic description of spiking neurons, compare the two models, and show simulation results for validation. In Discussion we put our results in the context of previous literature. Finally, Appendix contains details on our approach. An account of the presented work has previously been published in abstract form in [94].

Results
We aim to establish a mapping between two different levels of description for spatially structured neural systems to which we refer as 'neural-field model' and 'spiking model' based on the initial model assumptions. While the neural-field model describes neural activity as a quantity that is continuous in space and time, the spiking model assumes a network of recurrently connected spiking model neurons in discrete space. Our methodological approach for mapping between these two models, as well as the structure of this section, are illustrated in Fig 2. (1) We start in Sections 2.1-2.3 with linear stability analysis of a typical neural-field model that is a well-known and analytically tractable rate equation. This approach builds on existing literature (cf. [28, Section 8] and [32, Section 5]) and introduces the concepts of our study with modest mathematical effort. We analyze the neural-field model for one and two populations and derive conditions for the occurrence of wave trains based on spatial connectivity profiles and transmission delays. (2) In Section 2.4 we continue with simulations of a discrete version of the neural-field model, a network of nonlinear rate-based units, and show that the results from our linear analysis indeed accurately predict transitions between network states (homogeneous steady state, spatial oscillations, temporal oscillations, waves). (3) Then, in Section 2.5 we linearize the population dynamics of networks of discrete spiking leaky integrate-and-fire (LIF) neurons using mean-field theory and derive expressions similar to the neural-field model. (4) Thus, both the linearized neural-field and spiking models can be treated in a conceptually similar manner, with the exception of an effective coupling kernel which is mathematically more involved for the spiking model. In Section 2.6 we perform a parameter mapping between the biophysically motivated parameters of the spiking model and the effective parameters of the neural-field model. (5) Finally, in Section 2.7 we demonstrate that the insights obtained in the analysis of the neural-field model apply to networks of simulated LIF neurons: the bifurcations indeed appear at the theoretically predicted parameter values.

Figure 3. Effective profile yields conditions for wave trains. A: Boxcar-shaped spatial profile p of width R = 1 mm for a single population. B: Effective profile c (blue curve), the Fourier transform of the spatial profile p times the positive weight w_E = 1. Gray crosses indicate the maximum c_max and the minimum c_min. The same spatial profile with negative weight (w_I = −w_E) yields the mirrored curve (red, dashed line). C: Spatial profiles of different widths for the two populations E (R_E = 1 mm, blue) and I (R_I = 0.5 mm, red). E: Transition curve given by Eq 10 for the Hopf bifurcation, indicating the onset of delay-induced oscillations (appearing in the purple region), with time constant τ and delay d. F: Transition curves for the relative width ρ = R_I/R_E and the relative weight η = −w_I/w_E. Colored regions indicate which extremum, the minimum c_min or the maximum c_max, has the larger absolute value and whether the dominant one occurs at k = 0 or at k > 0. Purple (1): c_min appears at k_min > 0. Light blue (2): c_min appears at k_min = 0. Dark gray (3): c_max appears at k_max = 0. Green (4): c_max appears at k_max > 0.
In summary, the mapping of a microscopic spiking network model to a continuum neural-field model (bottom up) allows us to transfer analytically derived insights from the neural-field model directly to the spiking model (top down).

Linear stability analysis of a neural-field model
We first consider a neural-field model with a single population defined as a continuous excitable medium with a translation-invariant interaction kernel and delayed interaction in one spatial dimension. The dynamics follows an integro-differential equation

τ ∂u(x, t)/∂t = −u(x, t) + ∫_{−∞}^{∞} m(x − y) ψ(u(y, t − d)) dy.   (1)

The variable u describes the activity of the neural population at position x and time t; τ denotes the time constant, m the connectivity kernel, ψ the nonlinear gain function, and d the transmission delay. Earlier studies show that specific choices for the connectivity m and the nonlinear transformation ψ result in spatiotemporal patterns such as waves or bumps [28][29][30][31][32][33][34].
Here, we assume that the connectivity m is isotropic and define m(r) := w p(r). The scalar weight w can either be positive (excitatory) or negative (inhibitory). The spatial profile p(r) is a symmetric probability density function with the properties p(r) = p(−r), p(r) > 0 for r ∈ (−∞, ∞), and ∫_{−∞}^{∞} p(r) dr = 1. Fig 3A shows, as an example, a boxcar-shaped spatial profile of width R, defined by p(r) = Θ(R − |r|)/(2R), where Θ denotes the Heaviside function.
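For the boxcar profile, the Fourier transform has the closed form p̂(k) = sin(kR)/(kR). The following minimal Python sketch (our illustration, not part of the original analysis; the parameter values are placeholders) evaluates the resulting effective profile c(k) = w p̂(k) of Fig 3B:

```python
import numpy as np

def p_hat_boxcar(k, R):
    """Fourier transform of the boxcar profile p(r) = Theta(R - |r|)/(2R)."""
    return np.sinc(k * R / np.pi)   # np.sinc(x) = sin(pi x)/(pi x)

k = np.linspace(-20.0, 20.0, 2001)  # wave numbers in 1/mm
R, w_E = 1.0, 1.0                   # profile width (mm) and excitatory weight
c = w_E * p_hat_boxcar(k, R)        # effective profile, cf. Fig 3B
print(c.max(), c.min())             # c_max = 1 at k = 0, c_min ~ -0.217
```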
Throughout this study we investigate bifurcations of the system Eq 1 between a state of spatially and temporally homogeneous activity u(x, t) = u_0 and states where the activity shows structure in the temporal domain, in the spatial domain, or both. For this purpose we use Turing instability analysis [29,39,40]. Initially we assume that the model parameters are chosen such that the homogeneous solution is locally asymptotically stable, implying that small perturbations away from u_0 relax back to this baseline. We ask the question: In which regions of the parameter space (R, d, w, ψ) is the stability of the homogeneous solution lost? To this end we linearize around the steady state and denote deviations by δu(x, t) = u(x, t) − u_0. Without loss of generality we assume the slope ψ′(u_0) of the gain function to be unity; a non-unit slope can be absorbed into a redefinition of w. We here use a gain function that allows u to become negative. Likewise, one can treat nonlinear gain functions ψ that are strictly positive (see, e.g., [51]); these two conventions can be mapped to one another by a suitable shift of variables. In either case, after linearization the deviation δu does not have a definite sign. Because the resulting linear system is invariant with respect to translations in time and space, its eigenmodes are Fourier-Laplace modes of the form

δu(x, t) = e^{ikx} e^{λt},   (2)

where the wave number k ∈ R is real and the temporal eigenvalue λ ∈ C is complex. Solutions constructed from these eigenmodes can oscillate in time and space, and exponentially grow or decay in time. The characteristic equation (see Eq 33 in Appendix)

(λτ + 1) e^{λd} = c(k)   (3)

comprises the effective profile c(k) := m̂(k) = w p̂(k). The Fourier transform of the spatial profile is denoted by p̂(k) which, by its definition as a probability density, is maximal at k = 0 with p̂(0) = 1 (see Eqs 40 and 41 in Appendix). The effective profile for the boxcar-shaped spatial profile is shown in Fig 3B, for excitatory and inhibitory weights with absolute magnitudes of unity. We next extend the system to two populations, an excitatory one denoted by E and an inhibitory one by I. Time constants τ and delays d are assumed to be equal for both populations, but u becomes a vector, u = (u_E, u_I)^T, and the connectivity m(r) a matrix with entries

m_{αβ}(r) = w_{αβ} p_{αβ}(r),  α, β ∈ {E, I}.   (4)

The linearized system again possesses the same symmetries as the counterpart for a single population, so that the eigenmodes for the deviation from the stationary state are of the form δu(x, t) = v e^{ikx} e^{λt} with v denoting a constant vector. Hence, we arrive at an auxiliary eigenvalue problem (see Eq 34 in Appendix) with the two eigenvalues

c_{1,2}(k) = (1/2) [ĉ_{EE}(k) + ĉ_{II}(k)] ± (1/2) √( [ĉ_{EE}(k) − ĉ_{II}(k)]² + 4 ĉ_{EI}(k) ĉ_{IE}(k) ),   (5)

where

ĉ_{αβ}(k) := w_{αβ} p̂_{αβ}(k).   (6)

These two eigenvalues play the same role as the effective profile c in the one-population case above. As a consequence, the same characteristic equation Eq 3 holds for both the one- and the two-population system, as a scalar and a two-dimensional vector equation, respectively. In the following example we restrict the weights and the spatial profiles to be uniquely determined by the source population alone, denoted by w_{αE} =: w_E, w_{αI} =: w_I for α ∈ {E, I}. An illustration of the two spatial profiles of different widths R_E and R_I is shown in Fig 3C. The respective effective profile Eq 5 then reduces to c_1(k) = w_E p̂_E(k) + w_I p̂_I(k) =: c(k), shown in Fig 3D, while c_2(k) = 0. The characteristic equation Eq 3 can be solved for the eigenvalues λ by using the Lambert W function, defined as z = W(z) e^{W(z)} for z ∈ C [95].
The Lambert W function has infinitely many branches, indexed by b; the branch that yields the eigenvalue with the largest real part is the principal branch (b = 0), see Eqs 37-38 in Appendix for a proof. The characteristic equation determines the temporal eigenvalues (see Eq 39 in Appendix and compare with [58])

λ_b(k) = −1/τ + (1/d) W_b( c(k) (d/τ) e^{d/τ} ).   (7)

As shown by Veltz and Faugeras [60], linearized stability of the homogeneous steady state of Eq 1 is fully determined by the eigenvalues Eq 7. These authors assume an open bounded domain and provide an example of a one-dimensional ring network; the added value of their approach is the possibility to justify all steps in a mathematically rigorous way. In our theoretical analysis, we decide to work with the infinite domain for technical convenience. Formulating the problem on a ring with periodic boundary conditions would not change any of our conclusions: in our approach, the temporal eigenvalue λ_b is a continuous function of the wave number k, whereas on a bounded domain with periodic boundary conditions one obtains a discrete set of wave numbers k. Since the temporal eigenvalue λ_b varies on a much slower scale compared to the resolution of discrete wave numbers k, this change does not have any qualitative effect on the resulting dynamics. The infinite domain, however, allows us to easily incorporate spatial profiles with unbounded support. For profiles with unbounded support that decay to zero fast enough, the theoretical prediction of the frequency of oscillation can be regarded as an approximation of the dynamics on a ring with periodic boundary conditions.
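Eq 7 can be evaluated numerically with standard tools; scipy exposes all branches of the Lambert W function. A minimal sketch (ours; the values of τ, d, and c(k) are illustrative only):

```python
import numpy as np
from scipy.special import lambertw

def eigenvalue(c_k, tau, d, b=0):
    """Temporal eigenvalue lambda_b(k) of Eq 7 on branch b of the Lambert W."""
    arg = c_k * (d / tau) * np.exp(d / tau)
    return -1.0 / tau + lambertw(arg, k=b) / d

tau, d = 10.0, 3.0                          # time constant and delay (ms)
lam0 = eigenvalue(c_k=-1.5, tau=tau, d=d)   # principal branch decides stability
print(lam0.real, lam0.imag / (2 * np.pi))   # growth rate (1/ms), frequency (kHz)
```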

Conditions for spatial and temporal oscillations, and wave trains
The homogeneous steady state of our system is locally asymptotically stable if the real parts of the eigenvalues λ_b are negative for all branches b of the Lambert W function. The system loses stability when the real part of the eigenvalue λ_0 on the principal branch becomes positive at a certain k = k*. Such instabilities may occur either for a positive or a negative argument of the Lambert W function.
We denote the maximum of c as c_max and the minimum as c_min, occurring at k_max and k_min, respectively, as indicated in Fig 3B. For a positive argument of the Lambert W function, λ_0 is real, and it becomes non-negative if

c(k) ≥ 1,   (8)

because the principal branch is real and monotonically increasing on the positive real axis, and W((d/τ) e^{d/τ}) = d/τ by the definition of the Lambert W function; so equality in Eq 8 yields λ_0 = 0 independent of the values of d and τ. The imaginary part of λ_0 is zero at such a transition because the principal branch of the Lambert W function has real values for positive real arguments. If the instability appears at a wave number k* = 0, the population activity is collectively destabilized. This transition corresponds in networks of binary neurons and of spiking neurons to the transition between the asynchronous irregular (AI) state and the synchronous regular (SR) state, where the system ceases to be stabilized by negative feedback and leaves the balanced state [69,96]. If this transition appears at a wave number k* > 0, it follows from Eq 2 that the activity shows spatial oscillations that grow exponentially in time.
For a negative argument of W less than −1/e, the eigenvalues Eq 7 come in complex conjugate pairs. The real part of λ_0 becomes positive if the condition

Re[ W_0( c(k) (d/τ) e^{d/τ} ) ] > d/τ   (9)

is fulfilled, which requires a negative c_min < −1. Because the eigenvalues have non-zero imaginary parts, this transition corresponds to a Hopf bifurcation and the onset of temporal oscillations. The condition for this bifurcation has been derived earlier [97]:

d_crit = τ (π − arctan√((c_crit_min)² − 1)) / √((c_crit_min)² − 1).   (10)

Here, d_crit denotes the critical delay and c_crit_min a critical minimum of the effective profile for points on the transition curve. The system is stable for c_min > −1 for all delays. For larger absolute values of c_min, the bifurcation point is given by the critical value of the ratio between the time constant and the delay, shown in Fig 3E. If the transition occurs at k* = 0, temporal oscillations emerge in which all neurons of the population oscillate in phase ('bulk oscillations' [41]). In spiking networks this Hopf bifurcation corresponds to the transition from the AI regime to the state termed 'synchronous irregular fast (SI fast)' [64]. If the transition appears for k* > 0, spatial and temporal oscillations occur simultaneously. This phenomenon is known as 'wave trains', see [28, Section 8] and [42][43][44]. For the case that the system becomes unstable due to c_max reaching unity, the transition curve in Fig 3E also provides a lower bound c_crit_min(τ/d_crit) above which temporal oscillations do not occur prior to the transition due to c_max.

Table 1. Conditions for the onset of spatial and temporal oscillations, and wave trains. Each column combines the condition required for the instability causing the bifurcation with the condition for the respective other bifurcation not to occur; the last row indicates whether the bifurcation happens for zero or nonzero wave number k*. Here d_crit and c_crit_min, as defined in Eq 10 and shown in Fig 3E, denote the critical delay and the minimum of the effective profile on the transition curve for a Hopf bifurcation.

  spatial oscillations:   c_max ≥ 1 attained at k_max > 0;  c_min > c_crit_min(τ/d_crit);  k* = k_max > 0
  temporal oscillations:  c_max < 1;  c_min ≤ c_crit_min(τ/d_crit) attained at k_min = 0;  k* = 0
  wave trains:            c_max < 1;  c_min ≤ c_crit_min(τ/d_crit) attained at k_min > 0;  k* = k_min > 0
A Hopf bifurcation can give rise to either an asymptotically stable or an unstable limit cycle, in the super- or subcritical case, respectively. In our analysis we only identify the Hopf bifurcation point by checking when a complex conjugate pair of eigenvalues crosses the imaginary axis, and therefore cannot predict the stability of the emerging limit cycle. If, however, we observe in simulations a transition from an asymptotically stable homogeneous steady state (corresponding to the asynchronous irregular regime) to a spatiotemporal pattern (corresponding to a stable limit cycle), and make sure that the initial conditions are close enough to the homogeneous steady state, we know that the bifurcation is indeed a supercritical Hopf bifurcation. The analytical conditions for wave trains that we derive are therefore necessary, but not sufficient.
In summary, the system is stable if c_max < 1 and c_min > c_crit_min(τ/d_crit). For transitions occurring at either c_max = 1 or c_min = c_crit_min(τ/d_crit) we distinguish between solutions with k* = 0 and k* > 0. In Table 1 we provide an overview of the conditions for bifurcations leading to spatial, temporal, or spatiotemporal oscillatory states. These conditions imply that a one-population neural-field model does not permit wave trains, which follows from the fact that the absolute value of p̂ is strictly maximal at k = 0 (see Eqs 40-41 in Appendix). For a purely excitatory population (w > 0) the critical minimum c_crit_min(τ/d_crit) therefore cannot be reached while keeping the maximum c_max stable, as c_max > |c_min|. For a purely inhibitory population (w < 0), the condition k_min > 0 is not fulfilled because c_min occurs at k = 0, as p̂ has its global maximum at the origin.
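Together with Eq 10, the conditions summarized in Table 1 condense into a small decision procedure. The sketch below is our illustration with placeholder inputs; d_crit implements Eq 10, and competing mixed states (cf. Section 2.7) are not resolved:

```python
import numpy as np

def d_crit(c_min, tau):
    """Critical delay of the Hopf bifurcation (Eq 10); requires c_min < -1."""
    s = np.sqrt(c_min**2 - 1.0)
    return tau * (np.pi - np.arctan(s)) / s

def classify(c_max, k_max, c_min, k_min, tau, d):
    """Predicted state according to Table 1."""
    if c_max >= 1.0:
        return 'spatial oscillations' if k_max > 0 else 'firing-rate instability'
    if c_min < -1.0 and d > d_crit(c_min, tau):
        return 'wave trains' if k_min > 0 else 'temporal oscillations'
    return 'stable homogeneous state'

print(classify(c_max=0.8, k_max=4.5, c_min=-1.4, k_min=3.0, tau=10.0, d=25.0))
```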
For a neural-field model accounting for both excitation and inhibition, however, we can select shapes and parameters of the spatial profiles, weights and the delay that fulfill the conditions for the onset of wave trains as demonstrated by example in the next section.

Application to a network with excitatory and inhibitory populations
Based on the conditions derived in the previous section, the minimal network in conformity with Dale's principle in which wave trains can occur consists of one excitatory (E) and one inhibitory (I) population. As in the example in Section 2.1, we assume that the connection weights and the widths of the boxcar-shaped spatial profiles only depend on the source population. The effective profile Eq 5 in this case is

c(k) = w_E sin(kR_E)/(kR_E) + w_I sin(kR_I)/(kR_I),   (11)

and positive and negative peaks of the profile are responsible for bifurcations to spatial or temporal oscillations or wave solutions, respectively. The previous section shows that in particular the position and height of the minima and maxima of the effective profile are decisive. To assess parameter ranges in which the peaks of the effective profile Eq 11 change qualitatively, we introduce the relative width ρ := R_I/R_E > 0 and the relative weight η := −w_I/w_E > 0, divide c(k) by w_E, and introduce the rescaled wave number κ = R_E k to arrive at the dimensionless reduced profile

c̃(κ) = sin(κ)/κ − η sin(ρκ)/(ρκ),   (12)

which simplifies the following analysis.
Our aim is to divide the parameter space (ρ, η) into regions that have qualitatively similar shapes of the effective profile. The Appendix describes the derivation of the transition curves, and Fig 3F illustrates the resulting parameter space. Above the first transition curve η_t1(ρ) (dashed curve, see Eq 48 in Appendix), the absolute value of c_min is larger than c_max (regions 1 and 2), and vice versa below this curve (regions 3 and 4). The second transition curve η_t2(ρ) (solid curve, see Eq 51 in Appendix) indicates whether the extremum with the largest absolute value occurs at k = 0 (regions 2 and 3) or at k > 0 (regions 1 and 4). The diagram provides the necessary conditions and corresponding parameter combinations required for both spatial and spatiotemporal patterns, purely based on the relative weights and the relative widths which determine the effective profile. The analysis shows that wave trains require wider excitation than inhibition, ρ < 1, because only this relation simultaneously realizes a minimum at a non-zero wave number k* and a maximum with a peak below unity (see Table 1).
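Which extremum of the reduced profile Eq 12 dominates, and at which rescaled wave number, is easily checked numerically. A minimal sketch (ours; the grid and the sample point (ρ, η) = (0.5, 1.2), chosen inside region 1, are arbitrary):

```python
import numpy as np

def c_tilde(kappa, rho, eta):
    """Reduced profile of Eq 12; np.sinc(x) = sin(pi x)/(pi x)."""
    return np.sinc(kappa / np.pi) - eta * np.sinc(rho * kappa / np.pi)

kappa = np.linspace(0.0, 50.0, 50001)   # rescaled wave numbers kappa = R_E * k
rho, eta = 0.5, 1.2                     # a point inside region 1 of Fig 3F
c = c_tilde(kappa, rho, eta)
k_at_max, k_at_min = kappa[np.argmax(c)], kappa[np.argmin(c)]
dominant = 'c_min' if np.abs(c.min()) > c.max() else 'c_max'
print(dominant, 'dominates; extrema at kappa =', k_at_min, 'and', k_at_max)
```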
A neural-field model exhibiting wave trains can therefore be constructed at will: first, one selects a point within region 1 of Fig 3F, where ρ < 1 and η ensures that |c_min| > c_max. Next, c is fixed by scaling c̃ with the absolute weight w_E such that c_max < 1, precluding a static instability, and c_min < −1, enabling a Hopf bifurcation. Finally, a delay d > d_crit specifies a point below the bifurcation curve shown in Fig 3E, given by the condition for the Hopf bifurcation in Eq 10. Likewise, solutions with purely temporal oscillations appear in region 2, where c_min < −1 is attained at a vanishing wave number k and a delay d > d_crit; in addition, c_max < 1 ensures the absence of the competing bifurcation into spatial oscillations. For purely spatial oscillations, however, the comparison of the absolute values of c_min and c_max alone does not decide the outcome; it is hence not sufficient to rely on the dashed curve separating regions 2 and 4 in Fig 3F. A loss of stability due to c_max > 1 can emerge not only in region 4 but also in region 2, because even if |c_min| > c_max, stability with respect to c_min can be ensured by a sufficiently short delay d < d_crit, as shown in Table 1.

Network simulation with nonlinear rate neurons
So far we have only considered a mathematical description of the nonlinear system with time and space represented by continuous variables, and analyzed its properties using linear stability analysis. Next, we test the derived conditions for the onset of oscillations, summarized in Table 1, for a nonlinear, discrete system in the continuum limit. We here consider a network of N_E = 4000 excitatory (E) and N_I = 1000 inhibitory (I) rate neurons described by a discrete version of the neural-field equation Eq 1 (see Table 3 for details). The model neurons within each population are positioned on a ring of perimeter L = 1 mm as described in Section 4. We choose periodic boundary conditions, i.e., the ring topology, due to the inevitably finite size of the discrete network, although our theoretical considerations assume the real line as domain. This rate-neuron network constitutes an intermediate step towards a network of spiking neurons. Each neuron has a fixed in-degree K_X (fixed number of incoming connections) per source population X ∈ {E, I}, with connections drawn randomly within a distance R_X. A normalization of the weights with the in-degree, w′_X = w_X/K_X, allows us to interpret p as a connection probability. The time constant τ and the delay d are the same as in the neural-field model. As nonlinear gain function in Eq 1 we choose ψ(u) = tanh(u).
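To make the discretization concrete, the following sketch shows the core of such a simulation: Euler integration of the discrete version of Eq 1 on a ring with fixed in-degrees, delayed interaction, and ψ = tanh. This is our illustration with scaled-down placeholder values, not the values of Table 3, and not the simulation code of the study (which uses the framework of [109]):

```python
import numpy as np

rng = np.random.default_rng(0)
L_ring, N_E, N_I = 1.0, 400, 100    # perimeter (mm), reduced neuron counts
tau, d, dt = 10.0, 3.0, 0.1         # time constant, delay, step (ms)
R = {'E': 0.2, 'I': 0.1}            # connection radii (mm)
K = {'E': 40, 'I': 10}              # fixed in-degrees
w = {'E': 2.0, 'I': -4.0}           # population weights

pos = np.concatenate([np.linspace(0, L_ring, N_E, endpoint=False),
                      np.linspace(0, L_ring, N_I, endpoint=False)])
pop = np.array(['E'] * N_E + ['I'] * N_I)
N = N_E + N_I

# fixed in-degree: each neuron draws K_X sources within distance R_X (ring metric)
W = np.zeros((N, N))
for i in range(N):
    dist = np.abs(pos - pos[i])
    dist = np.minimum(dist, L_ring - dist)
    for X in ('E', 'I'):
        cand = np.where((pop == X) & (dist <= R[X]))[0]
        src = rng.choice(cand, size=K[X], replace=True)   # multapses allowed
        np.add.at(W[i], src, w[X] / K[X])                 # w'_X = w_X / K_X

steps, delay = 3000, int(round(d / dt))
u = np.zeros((steps, N))
u[0] = 0.01 * rng.standard_normal(N)    # small perturbation of u_0 = 0
for t in range(1, steps):
    u_delayed = u[max(t - 1 - delay, 0)]
    u[t] = u[t - 1] + dt / tau * (-u[t - 1] + W @ np.tanh(u_delayed))
```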
The neuron activity of four rate-network simulations with different parameter combinations is shown in Fig 4. The system simulated in Fig 4A is stable according to the corresponding conditions: the square marker in the lower panels shows that c_max < 1 (panel E), and although c_min < −1, the delay is small such that the system is far away from the bifurcation (panel F). Indeed, the activity does not exhibit any spatial or temporal structure. Fig 4B illustrates a case where c_max > 1 causes an instability (diamond marker in panel E). The Hopf bifurcation is remote in the parameter space (panel F), and panel G confirms k_max > 0. A simulation of the corresponding rate-model network again confirms the predictions and exhibits stationary spatial oscillations (or periodic bumps). The predicted spatial frequency is k_max/(2π) ≈ 3.74 mm⁻¹ and we expect L·k_max/(2π) bumps to emerge. In this finite-sized system with periodic boundary conditions, the bumps are homogeneously distributed across the domain and the admissible wave numbers are integer multiples of 2π/L; here we observe four stripes. Fig 4C demonstrates temporal oscillations at the parameter combination indicated by the circular marker. We here choose c_max < 1 and c_min < −1 (panel E). The latter condition leads to an entire range of delays that are beyond the bifurcation in panel F; we choose a delay slightly larger than the critical delay, lying to the left of the bifurcation curve. As inferred from panel G, k_min = 0 and, as expected from the analytical prediction, the oscillations observed in simulations of the rate-neuron network are purely temporal. Based on the temporal eigenvalue with the largest real part, we predict a temporal frequency of Im[λ_min]/(2π) ≈ 66.68 Hz, which fits well to the simulated oscillation frequency.
Finally, Fig 4D depicts wave trains (star marker), as predicted by the analytically tractable neural-field model. The instability results from c_min < c_crit_min (panel F) and occurs at k_min > 0 (panel G), while c_max remains stable (panel E). With a spatial frequency of k_min/(2π) ≈ 3.02 mm⁻¹ and a temporal frequency of Im[λ_min]/(2π) ≈ 121.01 Hz, the predicted wave-propagation speed is Im[λ_min]/k_min ≈ 0.04 mm/ms, in agreement with the simulated propagation speed of the wave train.
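The quoted numbers follow directly from the principal-branch eigenvalue Eq 7 evaluated at k_min. A minimal sketch (ours; c_min, k_min, τ, and d are placeholders rather than the parameters of Fig 4D):

```python
import numpy as np
from scipy.special import lambertw

tau, d = 10.0, 4.0                        # ms (illustrative)
c_min, k_min = -1.6, 2 * np.pi * 3.02     # k_min/(2 pi) ~ 3.02 per mm
lam = -1.0 / tau + lambertw(c_min * d / tau * np.exp(d / tau), k=0) / d
f_temp = lam.imag / (2 * np.pi)           # temporal frequency in kHz (time in ms)
print(f_temp * 1e3, 'Hz;', lam.imag / k_min, 'mm/ms')  # speed = Im[lambda]/k_min
```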

Linearization of spiking network model
To assess the validity of the predictions obtained from the analytical model for biologically more realistic spiking-neuron networks, we next linearize the dynamics of spiking leaky integrate-and-fire (LIF) neurons and derive a linear system similar to the neural-field model above. The sub-threshold dynamics of a single LIF neuron i with exponentially decaying synaptic currents is described by a set of differential equations for the time evolution of the membrane potential V_i and its synaptic current I_i,

τ_m dV_i/dt = −V_i(t) + I_i(t),
τ_s dI_i/dt = −I_i(t) + τ_m Σ_j J_ij s_j(t − d),   (13)

where we follow the convention of [98] (see Eq 62 in Appendix for the relation to physical units). This definition, with both quantities V_i and I_i having the same unit, conserves the total integrated charge per impulse flowing into the membrane independent of the choice of the synaptic time constant τ_s. The membrane time constant, defined as τ_m = R_m C_m with membrane resistance R_m and membrane capacitance C_m, couples the current to the capacitance. We here assume τ_s to be much smaller than τ_m. The term s_j(t) = Σ_k δ(t − t_j^k) denotes the spike train of neuron j, which is connected to neuron i with a constant connection strength J_ij and transmission delay d. Whenever V_i reaches the threshold V_θ, a spike is emitted and the membrane potential is reset to the resting potential V_r and voltage-clamped for the refractory period τ_ref.
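To make the model definition concrete, the following minimal Euler-scheme sketch integrates Eq 13 for a single neuron driven by a fixed presynaptic spike train (our illustration with placeholder parameters; the network simulations in this study use the framework of [109]):

```python
import numpy as np

tau_m, tau_s, tau_ref = 10.0, 0.5, 2.0     # ms
V_th, V_r, d, dt = 15.0, 0.0, 1.5, 0.1     # mV, mV, ms, ms
J = 0.1                                    # synaptic weight (mV)

rng = np.random.default_rng(1)
in_spikes = np.sort(rng.uniform(0.0, 100.0, 5000))   # presynaptic spike times (ms)

V, I, last_spike, out_spikes = V_r, 0.0, -np.inf, []
for step in range(int(100.0 / dt)):
    t = step * dt
    # number of delayed presynaptic spikes arriving in this time bin
    n_in = np.count_nonzero((in_spikes + d >= t) & (in_spikes + d < t + dt))
    I += dt * (-I / tau_s) + (tau_m / tau_s) * J * n_in   # second line of Eq 13
    if t - last_spike >= tau_ref:                         # clamped if refractory
        V += dt / tau_m * (-V + I)                        # first line of Eq 13
        if V >= V_th:
            out_spikes.append(t)
            V, last_spike = V_r, t
print(len(out_spikes), 'output spikes in 100 ms')
```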
We now assume that, conditioned on the time-dependent spike emission rate ν_i(t) of neuron i, spikes are generated independently, thus with Poisson statistics (see, e.g., [64, Section 3.5] for a discussion of this approximation). A neuron then receives a superposition of many such uncorrelated and Poisson-distributed input spikes, so that the probability distribution p(V, I, t) follows a Chapman-Kolmogorov equation. We further assume the amplitudes of postsynaptic potentials to be small and perform a Kramers-Moyal expansion [99,100] up to second order, which yields a Fokker-Planck equation for p(V, I, t) in which the first and second infinitesimal moments appear as

µ_i(t) = τ_m Σ_j J_ij ν_j(t − d),  σ_i²(t) = τ_m Σ_j J_ij² ν_j(t − d).   (14)

Here (µ_i, σ_i) can be thought of as the first two moments of a Gaussian white noise in the diffusion approximation [68,99,101]. This synaptic noise in the input to neuron i depends on the receiving neuron index i and hence on its position; therefore µ_i and σ_i also depend on the column J_ij ∀j of the connectivity matrix. Such a mean-field approach has been employed previously to study networks of spiking neurons without spatial structure [64,68-70], where the ν_j = ν are identical for all j, given by the population-averaged firing rate. We extend this approach by assuming that the neurons are placed uniformly with density ρ_x on a one-dimensional domain and apply the established procedure to obtain a continuum limit [24]: a volume element dx of the one-dimensional domain contains ρ_x dx neurons. We further assume that an incoming connection from a neuron at position y to a neuron at position x is drawn independently and identically distributed (i.i.d.) with probability proportional to a spatial profile p(x − y). Hence the J_ij in Eq 14 are i.i.d. Bernoulli variables that take the value J with a probability ∝ p(x − y) and are zero otherwise. The expressions for the first and second infinitesimal moments in Eq 14 of a neuron at position x under expectation of the random connectivity are then

µ(x, t) = τ_m J K ∫ p̄(x − y) ν(y, t − d) dy,  σ²(x, t) = τ_m J² K ∫ p̄(x − y) ν(y, t − d) dy,   (15)

where we find it convenient to introduce the normalized profile p̄(x − y) = p(x − y)/∫ p(x′) dx′ and to define the number of incoming connections per neuron as K := ρ_x ∫ p(x′) dx′.
In the following we formally write down an evolution equation for the rate ν(x, t). We denote by F[µ(•), σ(•)](t) the firing rate at time t of a LIF neuron at position x described by Eq 13 that is driven by a white noise with mean µ(x, t) and variance σ²(x, t). Clearly, F[µ, σ](t) is a causal functional of its two arguments, which are functions of time (temporal argument denoted by •): the firing rate of the neuron at time point t only depends on the statistics of its input up to this point, hence on µ(x, s) ∀s ≤ t and σ(x, s) ∀s ≤ t. Thus we can define this functional as F[µ(•), σ(•)](t) := ⟨Σ_k δ(t − t_k)⟩_ξ, where the t_k are the time points of the threshold crossings of Eq 13 under the expectation ⟨•⟩_ξ over realizations of the white noise ξ with moments Eq 15, and δ is the Dirac distribution. Since the statistics of the input, the functions µ(t) and σ(t), are direct functions of the delayed firing rate ν(y, t − d) by Eq 15, the evolution equation takes the form

ν(x, t) = F[µ(x, •), σ(x, •)](t),   (16)

where, by Eq 15, µ and σ² depend on the delayed rate D_d ν, with the delay operator D_d defined to act on the second (temporal) argument of the function as

(D_d ν)(x, t) := ν(x, t − d).   (17)

In principle, the functional F can be computed, for example by solving the mean first-passage time problem for the membrane potential V to exceed the threshold: for that purpose, we would drive the neuron with a Gaussian noise with given, time-dependent statistics parameterized by µ and σ². Powerful numerical methods are available for this purpose [102]. For the purpose of the present work, however, we do not need to determine F in complete generality, since we are only interested in a linear stability analysis of a spatially and temporally homogeneous state ν(x, t) = ν_0. Hence it is sufficient to study the stability of Eq 16 with respect to spatio-temporal deviations of the form ν(x, t) = ν_0 + δν(x, t). Linearizing Eq 16, we obtain by a functional Taylor expansion (Volterra expansion) to first order

ν_0 + δν(x, t) = F[µ_0, σ_0²](t) + ∫ (δF/δµ)(t, s) δµ(x, s) ds + ∫ (δF/δσ²)(t, s) δσ²(x, s) ds,   (18)

where we introduce the shorthands µ_0 = τ_m J K ν_0 and σ_0² = τ_m J² K ν_0. With ν_0 = F[µ_0, σ_0²], the first term of Eq 18 cancels the corresponding term on the left-hand side and we obtain a linear convolution equation for the rate deflection δν, whose spectral properties we need to analyze. The stationary firing rate ν_0 can be determined self-consistently from this condition (see Eq 52 in Appendix). The functional derivatives δF/δµ (and analogously δF/δσ²) are, by the right-hand side of this definition, the responses of the system with respect to an impulse-like perturbation of µ and σ², respectively. We denote these as

h_µ(t − s) := (δF/δµ)(t, s),  h_{σ²}(t − s) := (δF/δσ²)(t, s),   (19)

which are causal functions of t − s only, since we linearize around a time-translation invariant state, and causality clearly requires both kernels to vanish for s > t. These functions can be computed analytically in the Fourier domain for LIF models with instantaneous synapses [64,103], for fast colored noise [104], and in the adiabatic limit for slow synapses [105,106]; see Section 4.4 for details. The form of the response kernels Eq 19 is given in Eqs 53-55 in Appendix. These expressions are obtained by a perturbative calculation on the level of the Fokker-Planck equation that is correct to leading order in O(√(τ_s/τ_m)) [104], and thus constitute good approximations for sufficiently short synaptic time constants. With this notation, the linearized dynamics Eq 18 obeys a convolution equation in space and time,

δν(x, t) = ∫∫ [h_µ(t − s) τ_m J K + h_{σ²}(t − s) τ_m J² K] p̄(x − y) δν(y, s − d) dy ds,   (20)

whose stability properties can be analyzed in the Fourier domain by standard methods.
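For the homogeneous state, this self-consistency condition can be solved numerically using the standard first-passage-time (Siegert) rate of the LIF model. The sketch below is our illustration: it omits the O(√(τ_s/τ_m)) boundary shift of [98], assumes an additional constant external drive ν_ext to set the working point, and uses placeholder parameters:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfcx
from scipy.optimize import brentq

tau_m, tau_ref = 10.0, 2.0             # ms
V_th, V_r = 15.0, 0.0                  # mV
J, K, nu_ext = 0.2, 1000.0, 0.008      # mV, in-degree, external rate (1/ms)

def siegert(mu, sigma):
    """Stationary LIF rate for white-noise input with mean mu and s.d. sigma."""
    # e^{u^2} (1 + erf(u)) rewritten via erfcx for numerical stability
    integrand = lambda u: erfcx(-u) if u <= 0 else 2.0 * np.exp(u**2) - erfcx(u)
    integral, _ = quad(integrand, (V_r - mu) / sigma, (V_th - mu) / sigma)
    return 1.0 / (tau_ref + tau_m * np.sqrt(np.pi) * integral)

def self_consistency(nu):
    mu = tau_m * J * K * (nu + nu_ext)            # cf. Eq 14, homogeneous state
    sigma = np.sqrt(tau_m * J**2 * K * (nu + nu_ext))
    return siegert(mu, sigma) - nu

nu_0 = brentq(self_consistency, 1e-7, 0.5)        # stationary rate (1/ms)
print(nu_0 * 1e3, 'spikes/s')
```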
In the following section we will ignore the kernel h_{σ²}, because its contribution is usually small [104]. Eq 20 provides a linearized system for the spiking model that is continuous in space and time and enables a direct comparison with the neural-field model in the following section.

Comparison of neural-field and spiking models
The linearization of the LIF model presented in the preceding section is the analogue of taking the derivative ψ′ of the gain function in the linear stability analysis of the neural-field model in Section 2.1. By the assumption of conditional independence of spike trains given their firing rates, we achieve that the state of the spiking network is described by the time-dependent firing rate profile ν(x, t), whose temporal evolution follows Eq 16. This function therefore conceptually plays the same role as u(x, t) in the neural-field model, and the results for the neural-field model carry over to the spiking case. To expose the similarities between the linearized systems of the spiking model and the neural-field model, we may bring the equations for the deviation from baseline activity to the form of the convolution equation

δo(x, t) = ∫∫ h(t − s) p̄(x − y) δi(y, s − d) dy ds,   (21)

where the only difference between the two models is the temporal convolution kernel h relating the deviation δi of the input to the deviation δo of the output,

δo(t) = (h ∗ δi)(t),   (22)

defined as

h(t) = (w/τ) e^{−t/τ} Θ(t)   (neural-field model),
h(t) = τ_m J K h_µ(t)   (spiking model).   (23)

The kernel on the first line is the fundamental solution (Green's function) of the linear differential operator appearing on the left-hand side of Eq 1, including the coupling weight w. As a consequence, the characteristic equations for both models result from the Fourier-Laplace ansatz δo(x, t) = e^{ikx} e^{λt}, which relates the eigenvalues λ to the wave number k as

1 = H(λ) p̂(k) e^{−λd}.   (24)

The effective transfer function H describes the linear input-output relationship Eq 22 in the Laplace domain. It is obtained as the Laplace transform of the respective kernel in Eq 23, h_s(t) for the spiking model and h_nf(t) for the neural-field model. As a result we obtain the transfer function for the neural-field model

H_nf(λ) = w/(1 + λτ).   (25)

The corresponding expression for the effective spiking transfer function H_s(λ) results from Eqs 53-55 in Appendix.

Parameter mapping
So far the stability analysis shows that the characteristic equations for both the neural-field and the spiking model have the same form Eq 24, given a proper definition of the respective transfer functions. The transfer function characterizes the transmission of a small fluctuation in the input to the output of the neuron model. Because the transfer functions differ between the two models, it is a priori unclear whether their characteristic equations have qualitatively similar solutions. To provide evidence that this is indeed the case, in the following we devise a procedure that identifies solutions of the characteristic equations Eq 24 for the rate model and the spiking model, and develop a practical method to obtain one solution from the other. To this end we use the fact that the transfer function of the LIF model in the fluctuation-driven regime investigated here can be approximated by a first-order low-pass (LP) filter [97,103,107] (see in particular [107, Fig 1]) with effective parameters H_0 and τ,

H_LP(λ) = H_0/(1 + λτ),   (26)

where H_µ denotes the Fourier transform of h_µ, defined in Eq 19. This simplified transfer function is similar to the transfer function Eq 25 of the neural-field model and thereby relates the phenomenological parameters w and τ of the neural-field model to the biophysically motivated parameters of the spiking model. We perform a least-squares fit between H_LP(λ) and H_µ(λ) to obtain values for the parameters τ and H_0. According to Eq 23, H_0 directly relates to w as

w = τ_m J K H_0,   (27)

which follows by noting that ∫ h_µ(t) dt = H_µ(0) ≈ H_0. The goodness of the fit of this transfer function to the first-order low-pass filter depends on the mean µ and variance σ² of the synaptic input, as shown in Fig 5A. The color-coded error of the fit combines the relative errors from both fit parameters, ε = √(ε_τ² + ε_{H_0}²). For the majority of working points (µ, σ) the error is < 1%, but the relative errors increase abruptly towards the mean-driven regime. In this regime input fluctuations are small and the mean input predominantly drives the membrane potential towards threshold, so that the model fires regularly and the transfer function exhibits a peak close to the firing frequency [103,107]. We here fix the working point to the parameters indicated by the white cross (see Eq 61 in Appendix).
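The fit itself is routine. In the sketch below (ours), the sampled transfer-function values H_vals are placeholders standing in for H_µ evaluated at the working point via Eqs 53-55; the residual combines real and imaginary parts:

```python
import numpy as np
from scipy.optimize import least_squares

omega = 2 * np.pi * np.logspace(-1, 2, 50) * 1e-3   # angular frequency (rad/ms)
H_vals = 1.8 / (1.0 + 1j * omega * 6.0)             # placeholder 'measurement'

def residuals(params):
    H0, tau_eff = params
    H_lp = H0 / (1.0 + 1j * omega * tau_eff)        # Eq 26 on the Fourier axis
    r = H_lp - H_vals
    return np.concatenate([r.real, r.imag])         # fit real and imaginary parts

fit = least_squares(residuals, x0=[1.0, 1.0])
H0, tau_eff = fit.x
print('H_0 =', H0, ', tau =', tau_eff, 'ms')        # w then follows from Eq 27
```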

Linear interpolation between the transfer functions
Evaluating the characteristic equation for the neural-field model yields an exact solution for each branch of the Lambert W function, given by Eq 7. For this model we have already established that the principal branch is the most unstable one. An equivalent condition is not known for the general response kernel of the LIF neuron.
To assess whether we may transfer this result for the neural-field model to the spiking case, we investigate the correspondence between the two characteristic equations, which are both of the form Eq 24 but with different transfer functions. For this purpose, we define an effective transfer function

H_α(λ) := (1 − α) H_nf(λ) + α H_s(λ),  α ∈ [0, 1],   (28)

that interpolates linearly between the neural-field model (α = 0) and the spiking model (α = 1), and insert it into Eq 24 to obtain the combined characteristic equation

1 = H_α(λ) p̂(k) e^{−λd}.   (29)

We follow two approaches. The first results from computing the derivative ∂λ/∂α (see Eqs 57-59 in Appendix) from the combined characteristic equation and integrating it numerically, with the exact solution of the neural-field model at α = 0 for each branch b as initial condition:

λ_b(α) = λ_b(0) + ∫_0^α (∂λ/∂α′) dα′,   (30)

with

∂λ/∂α = −(H_s(λ) − H_nf(λ)) / (∂H_α/∂λ − d H_α(λ)),   (31)

evaluated along the solution. The spatial profile only enters the initial condition, and the derivative Eq 31 is independent of the wave number k.
As an alternative approach, we directly solve the combined characteristic equation Eq 29 numerically with the known initial condition. Fig 6A and B indicate that only the real part of the eigenvalue on the principal branch (b = 0) becomes positive, while the other branches remain stable. The branches come in complex conjugate pairs. For the numerical solution of the characteristic equation, we fix the wave number to the value of k that corresponds to the maximum real eigenvalue.
The analysis shows that we may ignore the danger of branch crossing, since the different branches remain clearly separated in Fig 6A and B. In addition, the eigenvalue on the principal branch is mostly independent of α, even if the system is close to the bifurcation (when the real part of λ_0 is close to zero). Thus for all values of α, including α = 1, we expect qualitatively similar bifurcations. This justifies transferring the rigorous results from the bifurcation analysis of the neural-field model in Section 2.2 and Section 2.3, together with the corresponding effective parameters, to the spiking model.
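A minimal sketch of this continuation (ours; H_s is a placeholder stand-in for the spiking transfer function, and the derivative dH/dλ is taken numerically, whereas the paper evaluates Eqs 57-59 in Appendix analytically):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import lambertw

tau, d, w = 10.0, 4.0, -1.6                     # neural-field parameters (ms, ms, -)
H_nf = lambda lam: w / (1.0 + lam * tau)        # Eq 25
H_s = lambda lam: 1.1 * w / (1.0 + lam * 8.0)   # placeholder spiking transfer fn

def dlam_dalpha(alpha, y):
    lam = y[0] + 1j * y[1]
    H_a = lambda l: (1.0 - alpha) * H_nf(l) + alpha * H_s(l)   # Eq 28
    eps = 1e-6
    dH = (H_a(lam + eps) - H_a(lam - eps)) / (2.0 * eps)       # numerical dH/dlam
    dl = -(H_s(lam) - H_nf(lam)) / (dH - d * H_a(lam))         # Eq 31
    return [dl.real, dl.imag]

# exact eigenvalue at alpha = 0 (principal branch, wave number with p_hat(k) = 1)
lam0 = -1.0 / tau + lambertw(w * d / tau * np.exp(d / tau), k=0) / d
sol = solve_ivp(dlam_dalpha, (0.0, 1.0), [lam0.real, lam0.imag], rtol=1e-8)
print('lambda at alpha = 1:', sol.y[0, -1] + 1j * sol.y[1, -1])
```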

Validation by simulation of spiking neural network
The Introduction illustrates spatiotemporal patterns emerging in a spiking network simulation in Fig 1 and the subsequent sections derive a theory describing the mechanisms underlying such patterns. Finally, the parameter mapping between the spiking and the neural-field model explains the origin of the spike patterns by transferring the conditions found for the abstract neural-field model in Section 2.2 and Section 2.3 to the spiking case. This section validates that the correspondence between network parameters in the two models is not incidental but covers the full phase diagram.
In the following, we simulate a network with the same neural populations and spatial connectivity as used in the nonlinear rate network in Fig 4, but replace the rate-model neurons by spiking neurons and map the parameters as described in Section 2.6.1. The network model characterizes all neurons by the same working point (see Eq 61 in Appendix), which means that the connectivity matrix for the excitatory-inhibitory network has equal rows; the entries in Eq 4 depend on the presynaptic population alone. Therefore the relative in-degree γ = K_I/K_E and the relative synaptic strength g = −J_I/J_E parametrize the spiking-network connectivity matrix as

J_E K_E (1  −gγ; 1  −gγ).   (32)

The rightmost panels of Fig 7 show the positions of the simulated networks relative to the theoretical transition curves. Choosing the synaptic delay d as a bifurcation parameter highlights the onset of temporal oscillations for the case k = 0 (sequence in Fig 7B, circular markers) and spatiotemporal oscillations for the case k > 0 (sequence in Fig 7C, star markers). In contrast to the case of purely spatial waves in panel A, this procedure preserves the effective spatial profile (fixed positions in panels D and F), and the system crosses the transition curve in panel E due to the increasing delay alone, which decreases the ratio τ/d. Fig 7C illustrates the gradual transition to wave trains: c_max remains in the theoretically stable regime at all times, but is close to the critical value of 1 (see the star marker in panel D). As a result, we observe spatial oscillations with a spatial frequency given by k_max before and even after the Hopf bifurcation. For delays longer than the critical delay, mixed states occur in which the different instabilities due to c_max and c_min compete, and the two spatial frequencies k_max/(2π) and k_min/(2π) become visible. For delay values well past the bifurcation, this mixed state is lost, leaving a dependence on c_min alone and wave trains with a spatial frequency determined by k_min.

Discussion
The present study employs mean-field theory [64] to rigorously map a spiking network model of leaky integrate-and-fire (LIF) neurons with constant transmission delay to a neural-field model. We use a linearization conceptually similar to that of Kriener et al. [92], combined with an analytical expression for the transfer function in the presence of colored synaptic noise [104]. The insight that this transfer function in the fluctuation-driven regime resembles that of a simple first-order low-pass filter facilitates the parameter mapping between the two models. The resulting analytically tractable effective rate model depends on the dynamical working point of the spiking network, which is characterized by both the mean and the variance of the synaptic input. By means of bifurcation theory, in particular linear Turing instability analysis [29,44,45], we investigate the origin of spatiotemporal patterns such as temporal and spatial oscillations and, in particular, wave trains emerging in spiking activity. The mechanism underlying these waves combines delay-induced fast global oscillations, as described by Brunel and Hakim [64], with spatial oscillations due to a distance-dependent effective connectivity profile. We derive analytical conditions for pattern formation that are exclusively based on general characteristics of the effective connectivity profile and the delay. The profile is split into a static weight that is either excitatory or inhibitory for a given neural population, and a spatial modulation that can be interpreted as a distance-dependent connection probability. Given the biological constraint that connection probabilities depend on distance but weights do not, wave trains cannot occur in a single homogeneous population, irrespective of the shape of the distance-dependent connection probability. Only the effective connectivity profile of two populations (excitatory and inhibitory) permits solutions where a mode with finite non-zero wave number is the most unstable one, a prerequisite for the emergence of nontrivial spatial patterns such as wave trains. We therefore establish a relation between the anatomically measurable connectivity structure and observable patterns in spiking activity. The predictions of the analytically tractable neural-field model are validated by means of simulations of nonlinear rate-unit networks [108] and of networks composed of LIF model neurons, both using the same simulation framework [109]. In our experience, the ability to switch from a model class with continuous real-valued interaction to a model class with pulse coupling by changing a few lines in the formal high-level model description increases the efficiency and reliability of the research.
The presented mathematical correspondence between these a priori distinct classes of models for neural activity has several implications. First, as demonstrated by the application in the current work, it facilitates the transfer of results from the well-studied domain of neural-field models to spiking models. The insight thus allows the community to arrive at a coherent view of network phenomena that appear robustly and independently of the chosen model. Second, the quantitative mapping of the spiking model to an effective rate model in particular reduces the parameters of the former to the set of fewer parameters of the latter; single-neuron and network parameters are reduced to just a weight and a time constant. This dimensionality reduction of the parameter space conversely implies that entire manifolds of spiking models are equivalent with respect to their bifurcations. Such a reduction supports systematic data integration: Assume a researcher wants to construct a spiking model that reproduces a certain spatiotemporal pattern. The presented expressions permit the scientist to restrict further investigations to the manifold in parameter space in line with these observations. Variations of parameters within this manifold may lead to phenomena beyond the predictions of the initial bifurcation analysis. Additional constraints, such as firing rates, degree of irregularity, or correlations, can then further reduce the set of admissible parameters.
To keep the focus on the transferability of results from a neural-field to a spiking model, the present study restricts the analysis to a rather simple network model. In many cases, extensions to more realistic settings are straightforward. As an example, we perform our analysis in one-dimensional space. In two dimensions, the wave number becomes a vector, and bifurcations to periodic patterns in time and space can be constructed (see [28, Section 8.4] and [29]). Likewise, we restrict ourselves to a constant synaptic delay, like Roxin et al. [42,51], because it enables a separation of a spatial component, the shape of the spatial profile, and a temporal component, the delay. A natural next step is the inclusion of an axonal distance-dependent delay term, as for instance in [40], to study the interplay of both delay contributions [58]. For simplicity, we use here a boxcar-shaped spatial connectivity profile in the demonstrated application of our approach. For the emergence of spatiotemporal patterns, however, the same conditions on the connectivity structure and the delays hold for more realistic exponentially decaying or Gaussian-shaped profiles [20][21][22]. If the spatial connectivity profiles are monotonically decaying in the Fourier domain (as is the case for exponential or Gaussian shapes), the Fourier transform of the effective profile of a network composed of an excitatory and an inhibitory population exhibits at most one zero-crossing. Either the minimum or the maximum is attained at a non-zero and finite wave number k, but not both. With a cosine-shaped effective profile, only a single wave number dominates by construction [42,51]. Here, we chose the boxcar shape because its oscillating Fourier transform allows us to study the competition between two spatial frequencies corresponding to the two extrema.
Similar to our approach, previous neural-field studies describe the spatial connectivity profile as a symmetric probability density function (see, for example, [49]). For our aim of establishing a link to networks of discrete neurons, the interpretation as a connection probability and the separation from a weight are a crucial addition. This assumption enables us to distinguish between different neural populations, to analyze the shape of the profile based on parameters for the excitatory and the inhibitory contribution, and to introduce biophysically motivated parameters for the synaptic strength. Starting directly with an effective profile that includes both excitation and inhibition, such as (inverse) Mexican-hat connectivity, is mathematically equivalent and a common approach in the neural-field literature [29,40,42,53]. But it neglects the biological separation of neurons into excitatory and inhibitory populations according to their effect on postsynaptic targets (Dale's law [110]) and their different spatial reach of connectivity [111]. As a result of this simplification, such models can produce waves even with a single homogeneous population [42][43][44], whereas we show that, with homogeneous stationary external drive, at least two populations are required.
Local excitation and distant inhibition are often invoked to support stationary patterns such as bumps, while local inhibition and distant excitation are associated with non-stationary patterns such as traveling waves [28,40,112]. For sufficiently long synaptic delays, we also observe wave trains with local inhibition and distant excitation, as often observed in cortex [111]. However, we show that the reason for this is the specific shape of the effective spatial profile, and not the spatial reach alone. Our argument is therefore in line with Hutt et al. [48,54], who demonstrate that wave instabilities can occur even with local excitation and distant inhibition for specific spatial interactions. The spatial connectivity structure and the related possible activity states are, in addition, important factors for the computational performance or function of model networks [90,113].
In Section 2.4, we compute the spatial and temporal oscillation frequencies as well as the wave-propagation speed. Such quantities are directly comparable with experimental measurements. The conduction speed in unmyelinated fibers is on the order of 0.1 mm/ms, and the propagation speed of mesoscopic waves is of a similar order of magnitude [114,115]. Our prediction of 0.04 mm/ms, based on the current choice of model parameters, lies in the same order of magnitude.
The parameter mapping between a neural-field and a spiking model in this study relies on the insight that the transfer function of the LIF neuron in the fluctuation-driven regime resembles that of a simple first-order low-pass filter. Since this approximation holds not only for LIF neurons but also for other spiking neuron models, our results are transferable. A further candidate model with this property is the exponential integrate-and-fire model [116]. Other examples include Nordlie et al. [117], who characterize the firing-rate responses of LIF neurons with strong alpha-shaped synaptic currents, and similarly Heiberg et al. [118] for a previously proposed LIF neuron model with conductance-based synapses and potassium-mediated afterhyperpolarization currents [119].
In the literature, the time constant of neural-field models is often associated with the membrane or the synaptic time constant [33,79,90]. Here, we observe that the time constant of the neural-field model derived from the network of spiking neurons falls in between the two. In line with [117,120], we suggest reconsidering the meaning of the time constant in neural-field models.
A limitation of the approach employed here is that the linear theory is exact only at the onset of waves. Beyond the bifurcation, nonlinearities in the spiking model may govern the dynamics and lead to prevailing wave numbers or wave frequencies different from those predicted. Roxin et al. [51] report that the stability of traveling waves depends crucially on the nonlinearity; nevertheless, they do not observe traveling waves in their spiking-network simulations. In the present work, however, we identify biophysically motivated neuron and network parameters that allow wave trains to establish in a spiking network. Still, we had to increase the delay beyond the predicted bifurcation point to obtain a stable wave pattern.
Furthermore, the theory underlying the mapping of the spiking network to the neural-field model is based on the diffusion approximation and is therefore only applicable for sufficiently small synaptic weights. Widely distributed synaptic weights, for example, may lead to larger deviations. We here primarily target a wave-generating mechanism for cortical networks. Since the neuron types, connectivity structures, and input characteristics involved differ in other brain regions, other mechanisms for pattern formation not covered in this work need to be taken into account [3].
The working-point dependence of the neural-field models derived here offers a new interpretation of propagating activity measured in vivo [8,12]. Even if the anatomical connectivity remains unchanged during a period of observation, the stability of the neural system can be temporarily altered due to changes in activity. The transfer function of a LIF neuron depends on the mean and the variance of its input, and we have shown that stability is related to its parametrization. In particular, local changes of activity, for example due to a spatially confined external input, can affect stability and hence influence whether a signal remains local or travels across the cortical surface. In other words, we relate the tendency of a neural network to exhibit spatiotemporal patterns not only to its connectivity, but also to its activity state, which can change over time.

Derivation of the characteristic equation
With the Fourier-Laplace ansatz $u(x,t) = e^{ikx} e^{\lambda t}$ for the integro-differential equation in Eq 1 linearized around $u_0$, and the choice to set the slope of the gain function to unity, the characteristic equation in Eq 3 results from
$$\tau\lambda\, e^{ikx} e^{\lambda t} = -e^{ikx} e^{\lambda t} + m\, e^{\lambda(t-d)} \int p(x-y)\, e^{iky}\, \mathrm{d}y = -e^{ikx} e^{\lambda t} + m\, e^{\lambda(t-d)}\, e^{ikx} \int p(r)\, e^{-ikr}\, \mathrm{d}r .$$
In the last row, we recognize the Fourier transform $\hat{p}$ of the spatial profile $p$, so that $\tau\lambda + 1 = m\,\hat{p}(k)\, e^{-\lambda d}$.

Effective connectivity profile for two populations
While the connectivity m is a scalar in the one-population model, it is a matrix M in the case of two populations (given in Eq 4). The ansatz for deriving the characteristic equation in the latter case reads $\delta u(x,t) = v\, e^{ikx} e^{\lambda t}$, with v denoting a vector of constants. This leads to the auxiliary eigenvalue problem
$$\hat{M}(k)\, v = c\, v ,$$
where c denotes an eigenvalue and $\hat{M}$ is an auxiliary matrix containing the Fourier transforms of the entries of M:
$$\hat{M}(k) = \begin{pmatrix} \hat{M}_{EE}(k) & \hat{M}_{EI}(k) \\ \hat{M}_{IE}(k) & \hat{M}_{II}(k) \end{pmatrix}.$$
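For illustration, the most unstable spatial mode can be located by evaluating the eigenvalues of the auxiliary matrix on a grid of wave numbers. The sketch below assumes boxcar profiles whose width depends only on the presynaptic population, as in the network model described later; the weight matrix and widths are hypothetical placeholders, not the values of Table 4:

```python
import numpy as np

def p_hat(k, R):
    """Fourier transform sin(kR)/(kR) of a boxcar profile of half-width R."""
    return np.sinc(k * R / np.pi)

# Illustrative static weights (rows: postsynaptic E/I, columns: presynaptic E/I).
m = np.array([[1.0, -4.0],
              [1.0, -4.0]])
R_E, R_I = 1.0, 0.5  # presynaptic-population-dependent profile widths (placeholders)

def eigenvalues(k):
    """Eigenvalues c(k) of the auxiliary matrix; columns share the presynaptic
    profile because the profile width depends only on the sending population."""
    M_hat = m * np.array([p_hat(k, R_E), p_hat(k, R_I)])  # column-wise product
    return np.linalg.eigvals(M_hat)

for k in (0.0, 2.0, 5.0):
    print(f"k = {k}: c(k) = {eigenvalues(k)}")
```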

Largest real part on principal branch of Lambert W function
We demonstrate that the branch of the Lambert W function with the largest real part is the principal branch. Considering only real-valued arguments x ∈ ℝ, we write $W(x) = |W(x)|\, e^{i\varphi} = \alpha + i\beta$, where $\varphi \in [-\pi, \pi]$ is the principal value of the argument. Inserting this into the defining equation $W(x)\, e^{W(x)} = x$ (Eq 36) yields $|W(x)|\, e^{\alpha}\, e^{i(\varphi + \beta)} = x$ (Eq 37). Since x is real, the phase must satisfy $\varphi + \beta = q\pi$; we index the branches by q ∈ ℤ according to the number of half-cycles of the exponential in Eq 37. The branch number is equal to $b = \lfloor q/2 \rfloor$, with ⌊·⌋ denoting the floor function. The principal branch is therefore given by the index q = 0 for x ≥ 0 and by q = 1 for x < 0.
Taking the absolute square of Eq 36 yields the real equation
$$x^2 e^{-2\alpha} = \alpha^2 + \beta^2 .$$
Without loss of generality we may assume β ≥ 0; this is certainly true for the real solutions with β = 0, and it also holds for one member of each pair of complex solutions. Complex solutions come in conjugate pairs due to the symmetry (φ, β) → (−φ, −β) exhibited by Eq 37 and Eq 38. Since both members of a pair have by definition the same real part, it is sufficient to consider only the member with positive imaginary part β > 0.
To prove that the real part α of W is maximal for b = 0, we show that α is a decreasing function of β along the solutions of Eq 36. Investigating the intersections of the left-hand side and the right-hand side of Eq 38 as functions of α illustrates how increasing the imaginary part β affects the real part α. The left-hand side is a decaying function of α with intercept x². The right-hand side is a parabola with offset β². An intersection therefore occurs either at a positive real part α ≥ 0 if x² ≥ β², or at a negative real part α < 0 if x² < β². Increasing β moves the parabola upwards and hence the intersection to the left, meaning that α decreases with increasing β.
For x ∈ [−e⁻¹, 0), we distinguish the cases β = 0 and β > 0, both of which only have solutions with α < 0. First, the two real solutions (q = ±1) existing in this interval correspond to two simultaneously occurring intersections; a third intersection is created by the squaring in Eq 38, but it is not an actual solution of Eq 36. The intersection at the larger real part by definition corresponds to the principal branch with index q = 1. Second, the complex solutions are indexed by odd numbers q with |q| > 1. Taking into account the interval on which φ is defined, the imaginary part is bounded from below such that β ≥ 2π for non-principal branches. Analogous to the previously discussed interval of x, there exists only one intersection between the exponential function and the parabola for large values of β (in particular, x² < β²), and it moves towards smaller values of α with increasing β.
In summary, we have shown that for real x, the principal branch harbors the solutions with maximal real part α.
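This statement is easy to verify numerically with scipy's multi-branch implementation of the Lambert W function:

```python
import numpy as np
from scipy.special import lambertw

# Numerical check: for real arguments, the principal branch (b = 0) of the
# Lambert W function has the largest real part among all branches.
for x in (-0.2, 0.5, 3.0):  # includes a value in [-1/e, 0)
    real_parts = {b: lambertw(x, b).real for b in range(-3, 4)}
    b_max = max(real_parts, key=real_parts.get)
    print(f"x = {x:5.2f}: branch with largest real part -> b = {b_max}")
    assert b_max == 0
```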

Characteristic equation with Lambert W function
The characteristic equation in Eq 3 can be rewritten in terms of the Lambert W function to Eq 7 using the transformation
$$\begin{aligned} (\tau\lambda + 1)\, e^{\lambda d} &= m\,\hat{p}(k) \\ \frac{d}{\tau}\,(\tau\lambda + 1)\, e^{(\tau\lambda + 1)\, d/\tau} &= \frac{d}{\tau}\, m\,\hat{p}(k)\, e^{d/\tau} \\ \frac{d}{\tau}\,(\tau\lambda + 1) &= W\!\left(\frac{d}{\tau}\, m\,\hat{p}(k)\, e^{d/\tau}\right). \end{aligned}$$
The last step collects terms using the definition of the Lambert W function, z = W(z) e^{W(z)} with z ∈ ℂ.
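Given this form, the rightmost eigenvalue of the linearized dynamics can be evaluated directly. The sketch below assumes the delay form of Eq 3 reconstructed above, λ = W(m p̂(k) d/τ · e^{d/τ})/d − 1/τ on the principal branch, together with a boxcar profile; all parameter values are placeholders:

```python
import numpy as np
from scipy.special import lambertw

tau, d, m_w = 10.0, 3.0, -5.0  # time constant, delay, static weight (placeholders)

def p_hat(k, R=1.0):
    """Fourier transform sin(kR)/(kR) of a boxcar profile of half-width R."""
    return np.sinc(k * R / np.pi)

def growth_rate(k, branch=0):
    """Eigenvalue lambda(k) on the principal branch (cf. Eq 7):
    lambda = W(m * p_hat(k) * d/tau * exp(d/tau)) / d - 1/tau."""
    z = m_w * p_hat(k) * d / tau * np.exp(d / tau)
    return lambertw(z, branch) / d - 1.0 / tau

k = np.linspace(0.0, 10.0, 501)
lam = np.array([growth_rate(ki) for ki in k])
k_star = k[np.argmax(lam.real)]
print(f"most unstable wavenumber k* = {k_star:.2f}, Re(lambda) = {lam.real.max():.4f}")
```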

Properties of the spatial profile
We assume that the spatial profile p is a symmetric probability density function, which implies that its Fourier transform $\hat{p}$, also called the characteristic function, is real-valued and even: $\hat{p}(k) = \int p(r)\, e^{-ikr}\, \mathrm{d}r = \int p(r) \cos(kr)\, \mathrm{d}r$. Further, we can prove that $\hat{p} \in (-1, 1]$ and that $\hat{p}$ attains 1 only at the origin in two steps:
• $|\hat{p}(k)| \le 1$ for all k ∈ ℝ: $|\hat{p}(k)| \le \int p(r)\, |\cos(kr)|\, \mathrm{d}r \le \int p(r)\, \mathrm{d}r = 1$.
• $|\hat{p}(k)| < 1$ for all k ≠ 0: |cos(kr)| < 1 except on a set of measure zero in r if k ≠ 0, which does not influence the value of the integral, so the first inequality above becomes strict.
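A quick numerical check of both properties for a boxcar density (a sketch; any symmetric density could be substituted):

```python
import numpy as np
from scipy.integrate import quad

def p(r, R=1.0):
    """Boxcar probability density of half-width R."""
    return np.where(np.abs(r) <= R, 1.0 / (2.0 * R), 0.0)

def p_hat(k, R=1.0):
    """Characteristic function: integral of p(r) * cos(k r)."""
    val, _ = quad(lambda r: p(r, R) * np.cos(k * r), -R, R)
    return val

for k in (0.0, 0.5, 2.0, 10.0):
    print(f"k = {k:5.1f}: p_hat = {p_hat(k):+.4f}")  # equals 1 only at k = 0
```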

Transition curves for reduced profile
We here use a graphical approach to derive the transition curves shown first in Fig 3F. A necessary condition for an extreme value of the reduced profile ĉ(κ) from Eq 12 located at κ* is $\partial \hat{c}(\kappa)/\partial\kappa\,\big|_{\kappa^*} = 0$ (Eq 42). This condition can be rewritten in the form of Eq 43, where a₁ and a₂ are the absolute values of the two complex numbers involved and φ₁ and φ₂ their phases, given by Eq 44. The vanishing right-hand side of Eq 43 implies that the term in the square brackets is purely imaginary. An example solution for the case a₁ < a₂ is illustrated in Fig 8A in the complex plane. Note that a₁ and φ₁ are independent of the parameters ρ and η in this representation. In our graphical analysis, Eq 43 is interpreted as the sum of two vectors in the complex plane. As shown in Fig 8A, we determine φ₁ as the angle at which the tip of the second vector ends on the imaginary axis, which follows from elementary trigonometry as Eq 45. The locations of extrema are then given by the intersections of $\varphi_1^{\pm}$ with the second row of Eq 44; here φ₂ is determined from the last equation in Eq 43.

Fig 8B reproduces Fig 3F. The white bars connect points given by parameter combinations (ρ, η) on both sides of the transition curves; the parameters are specified in panels C and D. The first transition curve η_t1(ρ) (dashed line in Fig 8B) is determined by $\hat{c}(\kappa_{\max}) = |\hat{c}(\kappa_{\min})|$, that is, by parameters (ρ, η) for which the absolute values of the positive and the negative extremum of the profile are equal. The top panel of Fig 8C compares two reduced profiles obtained for a fixed value of ρ and two values of η. The line colors correspond to the colored regions in the diagram in Fig 8B for the respective parameter combination: $|\hat{c}_{\min}| > \hat{c}_{\max}$ for the purple profile and vice versa for the dark gray profile. The point with the maximum absolute value of each profile is indicated with a cross. Exactly at the transition, either κ_max or κ_min is zero (say κ₀ = 0) and the other one is non-zero (say κ₁ > 0). This condition, with Eq 12, yields the absolute value of both extrema at the transition, where they must be equal: $|\hat{c}(\kappa_0)| = |\hat{c}(\kappa_1)| = |1 - \eta|$. Any point on the transition curve is a unique triplet of parameters (ρ, η, κ₁), and with the condition $\partial \hat{c}(\kappa)/\partial\kappa\,\big|_{\kappa_1} = 0$ we obtain the two equations of Eq 46, which need to be fulfilled at each point for κ = κ₁; the lower equation is obtained by identifying ĉ(κ) in its derivative in Eq 42. We solve both equations for η and equate them to obtain Eq 47. For a given value of ρ, we compute the roots of the left-hand-side expression of Eq 47, which defines κ(ρ).

The bottom panel of Fig 8C shows φ₁ from Eq 44 as a black line and $\varphi_1^{\pm}$ from Eq 45 for the parameters of the two effective profiles (same color coding as in the top panel). The intersections corresponding to the relevant extrema are highlighted by crosses. This visual analysis allows us to identify the interval of κ in which zero-crossings of the left-hand side of Eq 47, as a function of κ, can correspond to the extrema, namely κ ∈ (0, 4.49341), where the lower limit corresponds to φ₁ = π/2 and the upper limit to φ₁ = 3π/2. The zero-crossing at the smallest non-zero κ indicates the extremum at κ₁. Finally, the transition curve is given by
$$\eta_{t1}(\rho) = \frac{1 + \cos(\kappa(\rho))}{1 + \cos(\rho\,\kappa(\rho))},$$
where κ(ρ) is given by the roots of Eq 47. The second transition curve η_t2(ρ) (solid line in Fig 8B) indicates whether the extremum with the largest absolute value occurs at κ = 0 or at κ > 0.
Fig 8D shows in the top panel two reduced profiles for a fixed value of η but two values of ρ, such that the minimum $\hat{c}_{\min}$ occurs once at κ_min = 0 (light blue as in Fig 8B) and once at κ_min > 0 (purple as in Fig 8B), indicated by cross markers.
Graphical analysis using the bottom panel of Fig 8D indicates that this transition happens when $\varphi_1^{-}$ at κ₀ switches from lying slightly above (light blue line) to below (purple line) the parameter-independent function φ₁ (black line). We observe that decreasing ρ moves the intersection point, and with it the location of the extremum, up the black line, starting from κ = 0 towards larger values of κ.
Close to the transition, the intersection point comes arbitrarily close to κ = 0, which permits a local analysis by a Taylor expansion of φ₁ for small κ. A comparison of the coefficients of the resulting third-order polynomials then gives the transition curve η_t2(ρ), because the third-order coefficient decides for small κ whether φ₁ (black line) or $\varphi_1^{-}$, as a function of the parameters (ρ, η), has the larger slope and lies on top.
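To make the construction of the first transition curve concrete, the following sketch traces η_t1(ρ) numerically. It assumes that the reduced profile of Eq 12 has the boxcar form ĉ(κ) = sin(κ)/κ − η sin(ρκ)/(ρκ); this closed form is our assumption here, chosen because it is consistent with the transition curve η_t1(ρ) = (1 + cos κ)/(1 + cos ρκ) stated above:

```python
import numpy as np
from scipy.optimize import brentq

def sinc(x):
    """sin(x)/x with the limit 1 at x = 0."""
    return np.sinc(x / np.pi)

def root_fn(kappa, rho):
    """Eliminating eta from the two extremum conditions
    1 + sinc(k) = eta * (1 + sinc(rho*k)) and
    cos(k) - sinc(k) = eta * (cos(rho*k) - sinc(rho*k))
    yields this function; its smallest root in (0, 4.49341) defines kappa(rho)."""
    lhs = (1 + sinc(kappa)) * (np.cos(rho * kappa) - sinc(rho * kappa))
    rhs = (np.cos(kappa) - sinc(kappa)) * (1 + sinc(rho * kappa))
    return lhs - rhs

def eta_t1(rho):
    """First transition curve, eta_t1 = (1 + cos(kappa)) / (1 + cos(rho*kappa))."""
    kappa = brentq(lambda k: root_fn(k, rho), 1e-4, 4.49341)
    return (1 + np.cos(kappa)) / (1 + np.cos(rho * kappa))

for rho in (0.25, 0.5, 0.75):
    print(f"rho = {rho:.2f}: eta_t1 = {eta_t1(rho):.4f}")
```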

Fast synaptic noise
The stationary firing rate of a LIF neuron model subject to fast synaptic noise has been derived in [107,121]. The linear response of the model to time-dependent stimuli has been derived in [104, Eq 29] by application of a general reduction technique to a white-noise system with displaced boundary conditions.

Transfer function
The linear response of the firing rate is described by the transfer function, here denoted by $H_\mu$, that relates the modulation of the firing rate δν(ω) to the modulation of the mean δµ(ω) as $\delta\nu(\omega) = H_\mu(\omega)\,\delta\mu(\omega)$. It is computed based on the first term of [104, Eq 29], denoted $H_G(\omega)$ and proportional to the stationary rate ν₀, evaluated for the oscillation frequency ω and the boundaries $x_{\{\theta,r\}} = \sqrt{2}\, y_{\{\theta,r\}}$. The function $\Phi_\omega(x) = e^{x^2/4}\, U\!\left(i\omega\tau_m - \tfrac{1}{2},\, x\right)$ is defined via parabolic cylinder functions U [103,122], and $\Phi_{x_\theta}$ is a short-hand notation for $\Phi_\omega(x_r) - \Phi_\omega(x_\theta)$. Due to the exponential time course of our synaptic currents, we need to multiply this transfer function by the transfer function of a first-order low-pass filter,
$$H_\mu(\omega) = H_G(\omega)\, \frac{1}{1 + i\omega\tau_s} \qquad (\text{Eq 53}).$$
We then obtain $h_\mu$ by an inverse Fourier transform and a Laplace transform, because λ is a complex frequency whereas ω is real in the present context; these relations amount to the replacement iω → λ in Eq 53. For completeness, we also provide the term $H_{\sigma^2}$ due to the modulation of the variance [104, Eq 29], cf. also Eq 19. In the fluctuation-driven regime, $H_\mu$ and $H_{\sigma^2}$ both have a maximum at vanishing frequency. We compare these two contributions in Fig 5B, which shows that $H_{\sigma^2}$ can be neglected compared to $H_\mu$ at the cost of only a small error.
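The parabolic cylinder function U with the complex order required here is available in mpmath; a minimal sketch of evaluating Φ_ω at the scaled boundaries (placeholder values) could read:

```python
import mpmath as mp

def Phi(omega, x, tau_m):
    """Phi_omega(x) = exp(x^2/4) * U(i*omega*tau_m - 1/2, x), with U the
    parabolic cylinder function (complex order is supported by mpmath.pcfu)."""
    a = 1j * omega * tau_m - 0.5
    return mp.exp(x ** 2 / 4) * mp.pcfu(a, x)

tau_m = 0.01             # membrane time constant in s (placeholder)
x_r, x_th = -1.0, 2.0    # scaled reset and threshold boundaries (placeholders)
omega = 2 * mp.pi * 10   # 10 Hz modulation frequency
print(Phi(omega, x_r, tau_m) - Phi(omega, x_th, tau_m))
```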

Effective coupling strength
For the numerical evaluation of the transfer function, we show $H_0^{\mathrm{ecs}} = w_{\mathrm{ecs}} / (\tau_m J K)$ as the dashed line in Fig 5B, obtained by calculating the effective coupling strength $w_{\mathrm{ecs}}$ analytically from linear-response theory. The effective coupling strength for a connection from neuron j with rate ν_j to neuron i with rate ν_i is defined in [97, Eqs. A.2 and A.3] (correcting a typo in this previous work), where f and $y_{\{\theta,r\}}$ are defined as in Eq 52. The dashed line in Fig 5B is given by the term ∝ α alone, since we also ignore the small contribution of the variance to the transfer function ($H_{\sigma^2}$) of the LIF neuron [104].

Linear interpolation
To compute the derivative dλ/dα given in Eq 31, we use the derivative of an implicit function: if R(α, λ) = 0, it follows that
$$\frac{\mathrm{d}\lambda}{\mathrm{d}\alpha} = -\frac{\partial R / \partial\alpha}{\partial R / \partial\lambda}.$$
With the characteristic equation for the effective transfer function, Eq 29, we obtain R(α, λ), and the partial derivatives of R with respect to α and λ follow by direct differentiation.
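As a concrete illustration, assume the characteristic equation has the Lambert-W form used above, R(α, λ) = τλ + 1 − α e^{−λd} with α the effective coupling; this is our assumption, since Eq 29 is not restated here. Then ∂R/∂α = −e^{−λd} and ∂R/∂λ = τ + αd e^{−λd}, and the implicit derivative can be cross-checked against a finite difference:

```python
import numpy as np
from scipy.special import lambertw

tau, d = 10.0, 3.0  # time constant and delay (placeholders)

def lam(alpha):
    """Solve R(alpha, lambda) = tau*lambda + 1 - alpha*exp(-lambda*d) = 0
    on the principal branch of the Lambert W function."""
    return lambertw(alpha * d / tau * np.exp(d / tau)) / d - 1.0 / tau

def dlam_dalpha(alpha):
    """Implicit-function-theorem derivative: -(dR/dalpha) / (dR/dlambda)."""
    l = lam(alpha)
    e = np.exp(-l * d)
    return e / (tau + alpha * d * e)

# Cross-check against a central finite difference:
a0, h = -5.0, 1e-6
print(dlam_dalpha(a0), (lam(a0 + h) - lam(a0 - h)) / (2 * h))
```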

Fixing the working point
For the spiking model, we fix the total input to each neuron, in terms of its mean µ* and variance σ*, to given values. To attain a fixed working point (µ*, σ*), we add to the local contribution from the recurrently connected network (see Eq 15) external excitatory and inhibitory input with Poisson-distributed interspike-interval statistics. The excitatory and inhibitory external connection strengths are J and −gJ, respectively; the expressions for the excitatory and inhibitory external rates are given in Eq 61. Eq 61 corrects a small inconsistency in a preliminary report of this study [123], which used Eq E.1 in [97] to fix the working point. Accordingly, the values for ν_{E,ext} and ν_{I,ext} are updated in Table 4, affecting the raster plots in Figs 1 and 7.
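A hypothetical sketch of this computation: assuming the standard diffusion-approximation moments for Poisson input with strengths J and −gJ (our assumption; Eq 61 itself is not restated here), the external rates follow from a 2×2 linear system:

```python
# Hypothetical sketch of fixing the working point (mu*, sigma*) with external
# excitatory/inhibitory Poisson input. The diffusion-approximation moments
#   mu_ext     = tau_m * J   * (nu_E - g    * nu_I)
#   sigma2_ext = tau_m * J^2 * (nu_E + g**2 * nu_I)
# are an assumption here, not a restatement of Eq 61.
def external_rates(mu_target, sigma_target, mu_loc, sigma2_loc, tau_m, J, g):
    """Return Poisson rates (nu_E_ext, nu_I_ext) that deliver the missing
    mean and variance on top of the local recurrent input."""
    mu_ext = mu_target - mu_loc
    sigma2_ext = sigma_target ** 2 - sigma2_loc
    A = mu_ext / (tau_m * J)           # A = nu_E - g * nu_I
    B = sigma2_ext / (tau_m * J ** 2)  # B = nu_E + g**2 * nu_I
    nu_I = (B - A) / (g * (1.0 + g))
    nu_E = (g * A + B) / (1.0 + g)
    return nu_E, nu_I

# Example with placeholder numbers (units: s and V):
print(external_rates(mu_target=0.015, sigma_target=0.005,
                     mu_loc=0.003, sigma2_loc=1e-6,
                     tau_m=0.01, J=1e-4, g=5.0))
```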

Physical units
The subthreshold dynamics of the LIF neuron in Eq 13 are, without loss of generality, given in scaled units. In this formulation, V, J, and I are all quantities with unit volt. For the parameter-wise comparison with numerical network simulations (for example using NEST [124]), it is useful to consider a description where I′ and J′ represent electric currents in units of ampere:
$$C_m \frac{\mathrm{d}V'}{\mathrm{d}t} = \frac{E_L - V'}{R_m} + I'(t).$$
Here, we also introduce a leak reversal potential E_L and shift the threshold and reset potentials to V′_θ = V_θ + E_L and V′_r = V_r + E_L, respectively. The membrane time constant τ_m = R_m C_m relates the membrane resistance R_m and capacitance C_m. In units of ampere, the total input current is I′ = I/R_m and the synaptic weight amplitude is J′ = C_m J/τ_s.
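These relations translate directly into code; a small helper for assembling a NEST-style parameter set from the scaled quantities (all numbers are placeholders) could read:

```python
# Conversion from the scaled formulation (V, J, I in volts) to physical units;
# relations as stated in the text: R_m = tau_m / C_m, J' = C_m * J / tau_s,
# I' = I / R_m, V'_theta = V_theta + E_L, V'_r = V_r + E_L.
def to_physical(J_volt, I_volt, tau_m, tau_s, C_m, E_L, V_theta, V_r):
    R_m = tau_m / C_m  # membrane resistance from tau_m = R_m * C_m
    return {
        "J_prime": C_m * J_volt / tau_s,  # synaptic weight amplitude in amperes
        "I_prime": I_volt / R_m,          # total input current in amperes
        "V_th": V_theta + E_L,            # shifted threshold potential in volts
        "V_reset": V_r + E_L,             # shifted reset potential in volts
    }

print(to_physical(J_volt=1e-4, I_volt=0.015, tau_m=0.01, tau_s=0.002,
                  C_m=250e-12, E_L=-0.065, V_theta=0.015, V_r=0.0))
```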

Network structure and parameters
We simulate recurrently connected neural networks, each consisting of one excitatory and one inhibitory population, using the neural simulation software NEST [126] with either spiking- or rate-neuron models. The support for rate neurons in NEST was added as described in [108]. Tables 2 and 3 provide the complete neuron and network model descriptions.

Table 2. Summary of network models, following the guidelines of Nordlie et al. [125]; the table separates the nonlinear spiking and rate neurons as used in the NEST simulations.

Table 4 lists the simulation and network parameters for the setting that produces wave trains (marked by a black star in Fig 1D, Fig 4D and Fig 7C). Other simulation parameters used to obtain the other network states shown throughout this paper are indicated with a marker in Table 4, and the changed parameters are given in the corresponding figures. The same marker always denotes the same parameter combination across figure panels. The tables distinguish between network properties and parameters valid for both spiking and rate neuron models and those specific to only one neuron model. Irrespective of the choice of neuron model (rate vs. spiking), the neuron parameters are shared between both neuron populations.
The number of excitatory neurons N_E in our network is four times larger than the number of inhibitory neurons N_I [127]. All neurons are positioned on a grid along a one-dimensional path of perimeter L with a grid spacing of ∆x = L/N_I. At each grid position x ∈ [0, L − ∆x], there is one inhibitory neuron and four excitatory neurons. The network activity in Figs 1, 4 and 7 is shown for all inhibitory neurons, but only for one excitatory neuron at each grid position. Connections between neurons are drawn according to a distance-dependent rule with periodic boundary conditions (a "ring" network) using the NEST Topology module. The number of incoming connections, the in-degree K_{E,I}, is proportional to the population size of the presynaptic population, assuming an overall connection probability of 10%. The width of the boxcar-shaped distance-dependent profile R_{E,I} depends on the presynaptic population alone. Within a distance of R_{E,I} around each postsynaptic neuron, potential presynaptic neurons are selected at random and connections are established until the prescribed in-degree is reached. The random component of this connection algorithm may lead to a slight asymmetry between excitation and inhibition in the finite-sized network, which can cause a small drift, visible for example in stationary Turing patterns as in Fig 4B. Multiple connections between the same pair of neurons, termed multapses, are allowed, but self-connections (autapses) are prohibited; a sketch of this connection rule follows below.
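The following plain-numpy sketch mimics the described connection rule; it is an illustration of the algorithm, not the NEST Topology call actually used:

```python
import numpy as np

def connect_ring(x_post, x_pre, R, K, L, rng, same_population=False):
    """Fixed in-degree, distance-dependent connection rule on a ring:
    for each postsynaptic neuron, draw K presynaptic partners at random
    from all neurons within distance R (periodic boundary conditions).
    Drawing with replacement allows multapses; autapses are excluded."""
    sources, targets = [], []
    for i, xi in enumerate(x_post):
        dist = np.abs(x_pre - xi)
        dist = np.minimum(dist, L - dist)  # ring distance (periodic boundaries)
        candidates = np.flatnonzero(dist <= R)
        if same_population:
            candidates = candidates[candidates != i]  # no self-connections
        sources.extend(rng.choice(candidates, size=K, replace=True))
        targets.extend([i] * K)
    return np.array(sources), np.array(targets)

# Usage with placeholder numbers: inhibitory-to-inhibitory connections.
L_ring, N_I = 1.0, 200
x_I = np.arange(N_I) * (L_ring / N_I)
rng = np.random.default_rng(1234)
src, tgt = connect_ring(x_I, x_I, R=0.1, K=20, L=L_ring, rng=rng,
                        same_population=True)
print(src.shape, tgt.shape)  # (4000,) each: N_I * K connections
```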
The leaky integrate-and-fire model with exponential postsynaptic currents is implemented in NEST under the name iaf_psc_exp. The neuron parameters are the same as in the microcircuit model of [128], with the differences that our membrane time constant τ_m is half of theirs and that we here omit the refractory period τ_ref, although our results generalize to non-zero τ_ref. An excitatory and an inhibitory Poisson generator provide external input to all neurons. Their rates ν_{E,I},ext are determined according to Eq 61 to fix the working point (µ, σ).
The dynamics of rate-based units in NEST is specified as stochastic differential equations using the Itô convention [108], except that we here set the stochasticity (the variance of the input) to zero. We use the neuron model tanh_ipn, which employs a hyperbolic tangent as gain function.
Simulations run for a simulation time T_sim with a temporal resolution of dt. During rate simulations, the instantaneous rate is recorded once at each time step dt. Our raster plots from simulations of the spiking model and the image plots from simulations of the rate model show the network activity of all simulated neurons after a start-up transient T_trans.

Network models: distance-dependent connectivity

Neural units j ∈ X at location x_j and i ∈ Y at x_i in the pre- and postsynaptic populations X and Y, respectively. Displacement between units i and j on the ring (periodic boundary conditions): $d_{ij} = \min(|x_i - x_j|,\, L - |x_i - x_j|)$. Boxcar-shaped spatial profile with width R and Heaviside function Θ: $p(d) = \frac{1}{2R}\,\Theta(R - |d|)$.

Spiking model: subthreshold dynamics

The subthreshold dynamics follows Eq 13 (see also the physical-units formulation above). When the membrane potential reaches the threshold V_θ at time t*, a spike is emitted and the potential is reset to V_r; the subthreshold dynamics resumes if t > t* + τ_ref.

Table 4. Simulation and network parameters, according to the setting for wave trains as shown in Fig 1D, Fig 4D and Fig 7C (black-star marker). Parameters deviating from this setting are given in the captions of the respective figures and indicated by different markers.