Modeling electrocortical activity through improved local approximations of integral neural field equations.

Neural field models of firing rate activity typically take the form of integral equations with space-dependent axonal delays. Under natural assumptions on the synaptic connectivity we show how one can derive an equivalent partial differential equation (PDE) model that properly treats the axonal delay terms of the integral formulation. Our analysis avoids the so-called long-wavelength approximation that has previously been used to formulate PDE models for neural activity in two spatial dimensions. Direct numerical simulations of this PDE model show instabilities of the homogeneous steady state that are in full agreement with a Turing instability analysis of the original integral model. We discuss the benefits of such a local model and its usefulness in modeling electrocortical activity. In particular, we are able to treat "patchy" connections, whereby a homogeneous and isotropic system is modulated in a spatially periodic fashion. In this case the emergence of a "lattice-directed" traveling wave predicted by a linear instability analysis is confirmed by the numerical simulation of an appropriate set of coupled PDEs.


I. INTRODUCTION
In many regions of mammalian neocortex, synaptic connectivity patterns follow a laminar arrangement, with strong vertical coupling between layers. Consequently, cortical activity is considered as occurring on a two-dimensional plane, with the coupling between layers ensuring near instantaneous vertical propagation. Hence, it is highly desirable to obtain solutions to fully planar neural field models. The most popular Wilson-Cowan [1] or Amari [2] style neural field models are typically written in the language of integrodifferential or purely integral equations (see [3-5] for recent reviews). In recent years there has been a growing interest in neural field models where the communication between different parts of the domain is delayed due to the finite conduction speed of action potentials [6-10]. The advent of this work can be traced back to that of Nunez [11]. However, the set of mathematical techniques for the analysis of nonlocal models with space-dependent delays is not yet as thoroughly developed as it is for local partial differential equation (PDE) models. As discussed in [12], a local PDE model would offer a number of advantages over its nonlocal counterpart, allowing the use of (i) powerful techniques from nonlinear PDE theory, (ii) standard numerical techniques for the solution of PDEs, and (iii) a more numerically straightforward analysis of the effects of spatial inhomogeneities. To date progress in this area has been made by Jirsa and Haken [13] for neural field models in one spatial dimension with axonal delays, and by Laing and Troy [14] in two spatial dimensions for models lacking axonal delays. In both cases integral transform techniques are exploited and a PDE description is obtained only when the integral models under consideration are defined by spatiotemporal kernels whose Fourier transform has a rational polynomial structure.
It is the goal of this paper to address the physiologically important case of a model in two spatial dimensions with axonal delays and to obtain an equivalent PDE model. Previous work on this problem by Liley et al. [12] has shown that for synaptic connectivity functions that fall off exponentially with distance, there is an equivalent local model consisting of an infinite set of PDEs involving fractional derivative terms. Although not particularly useful in its own right, this system can be approximated by a single hyperbolic PDE. This PDE has been shown to provide a so-called long-wavelength approximation to the underlying integral model, and equations of this type have been used in several EEG modeling studies (see, for example, [15-17]).
In Sec. II we introduce the class of neural field population models that we study in this paper. Next, in Sec. III we derive the equivalent PDE model, and compare it to the model obtained using the long-wavelength approximation. In Sec. IV we present a Turing instability analysis of the original integral model. Importantly we show that numerical simulations of the PDE model are in precise agreement with the behavior predicted at the onset of a Turing instability. The case of spatially modulated synaptic connectivity is treated in Sec. V. Finally, in Sec. VI we discuss natural extensions of the work in this paper.

II. INTEGRAL NEURAL FIELD MODEL
We consider planar neural field models that incorporate delayed synaptic interactions between distinct neuronal populations, where the activity of synapses in population a induced by activity in population b can be written as

u_ab = η_ab ∗ ψ_ab.    (1)
Here u_ab = u_ab(r, t), r = (r, θ) (r ∈ R⁺, θ ∈ [0, 2π), t ∈ R⁺), and a and b label functionally homogeneous neuronal populations. The activity variable u_ab(r, t) can be interpreted as a spatially averaged synaptic activity centered about r. The symbol ∗ represents a temporal convolution in the sense that

(η ∗ ψ)(r, t) = ∫₀^t ds η(s) ψ(r, t − s).    (2)

The variable ψ_ab(r, t) describes the presynaptic input to population a arriving from population b, which we write as

ψ_ab(r, t) = ∫_{R²} dr′ w_ab(r, r′) (f_b ∘ h_b)(r′, t − |r − r′|/v_ab),    (3)

where ∘ denotes composition. The function η_ab(t) [with η_ab(t) = 0 for t < 0] represents a normalized synaptic filter, while w_ab(r, r′) is a synaptic footprint describing the anatomy of network connections. One common choice for the synaptic filter is the so-called delayed difference of exponentials, η_ab(t) = η(t − τ_ab; α_ab, β_ab), where η(t; α, β) = (1/α − 1/β)^{−1} [e^{−αt} − e^{−βt}] Θ(t) and τ_ab is a mean synaptic processing delay between populations a and b. Here, Θ(t) is the Heaviside step function. In the absence of detailed anatomical data it is common practice to consider corticocortical connectivity functions to be homogeneous and isotropic, so that w_ab(r, r′) = w_ab(|r − r′|). The function f_a represents the firing rate of population a, and v_ab is the mean axonal conduction velocity along a fiber connecting population b to population a. For conduction velocities in the range 1.5−7 m/s (typical of white matter axons), axonal delays are significant over scales ranging from a single cortical area (of spatial scale 10 mm) up to the scale of interhemispheric callosal connections. In a Wilson-Cowan or Amari style neural field model the variables h_a are taken to be of the form h_a = Σ_b u_ab + h_a⁰, with h_a⁰ a constant drive term. In more sophisticated models of electroencephalogram (EEG) activity, such as in the work of Liley et al.
[12], h_a is interpreted as the average soma membrane potential of a population and chosen to obey a nonlinear equation of the form

τ_a ∂h_a/∂t = h_a^r − h_a + Σ_b Γ_ab(h_a) u_ab,    (4)

with τ_a a passive membrane time constant and h_a^r a resting potential. Here the activity-dependent functions Γ_ab weight the contributions from the various contributing neuronal populations, and take into account the shunting nature of synaptic interactions (see [12] for details).
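The temporal filtering in Eq. (1) is straightforward to evaluate numerically. The following Python sketch (variable and parameter names are ours, chosen for illustration) implements the difference-of-exponentials filter η(t; α, β) and checks that it is indeed normalized:

```python
import numpy as np

# Difference-of-exponentials synaptic filter
# eta(t; alpha, beta) = (1/alpha - 1/beta)^(-1) [exp(-alpha t) - exp(-beta t)] Theta(t).
def eta(t, alpha, beta):
    t = np.asarray(t, dtype=float)
    norm = 1.0 / (1.0 / alpha - 1.0 / beta)
    return np.where(t >= 0.0, norm * (np.exp(-alpha * t) - np.exp(-beta * t)), 0.0)

# The filter is causal and normalized: its integral over t >= 0 is 1 for any alpha != beta.
t = np.linspace(0.0, 50.0, 200001)
area = float(np.sum(eta(t, alpha=2.0, beta=1.0)) * (t[1] - t[0]))
```

Normalization of η is what allows u_ab to be read as a (weighted) running average of the presynaptic input ψ_ab.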

III. EQUIVALENT PDE MODEL
The numerical solution of the neural field model defined in Sec. II is challenging for two reasons in particular. The first is that the nonlocal presynaptic input term (3) is defined by an integral over a two-dimensional spatial domain; the second is that it involves an argument that is delayed in time. In fact, since this delay term is space dependent, it requires keeping a memory of all previous synaptic activity. One of the key motivations of our work is to circumvent the huge numerical overhead of simulating such a delayed nonlocal system.
Introducing G_ab(r, t), which by isotropy depends on r only through r = |r|, with

G_ab(r, t) = w_ab(r) δ(t − r/v_ab),    (5)

allows us to rewrite Eq. (3) as

ψ_ab(r, t) = ∫_{R²} dr′ ∫₀^t ds G_ab(r − r′, t − s) ρ_b(r′, s),    (6)

where ρ_a = f_a ∘ h_a. Importantly, the right-hand side of Eq. (6) has a convolution structure. Introducing the 3D Fourier transform according to

ψ(r, t) = (1/2π)³ ∫_{R²} dk ∫_R dω e^{i(k·r + ωt)} ψ(k, ω),    (7)

we find ψ_ab(k, ω) = G_ab(k, ω) ρ_b(k, ω), with

G_ab(k, ω) = 2π ∫₀^∞ dr r w_ab(r) e^{−iωr/v_ab} J₀(rk).    (8)

Here J₀(z) = (2π)^{−1} ∫₀^{2π} dθ e^{iz cos θ} is the Bessel function of the first kind of order zero. We recognize Eq. (8) as the Hankel transform of w_ab(r) e^{−iωr/v_ab}.
If G_ab(k, ω) can be represented in the form R_ab(k², iω)/P_ab(k², iω) then we have that P_ab(k², iω) ψ_ab(k, ω) = R_ab(k², iω) ρ_b(k, ω). By identifying k² ↔ −∇² and iω ↔ ∂_t, a formal inverse Fourier transform will yield a local model in terms of the operators ∇² and ∂_t. However, unless the functions P_ab and R_ab are polynomial in their arguments, the interpretation of functions of these operators is unclear. To illustrate this we revisit the common choice of an exponentially decaying synaptic footprint,

w_ab(r) = (w_ab⁰/2πσ_ab²) e^{−r/σ_ab},    (9)

for which Eq. (8) gives

G_ab(k, ω) = (w_ab⁰/σ_ab²) A_ab(ω)/[A_ab²(ω) + k²]^{3/2},  A_ab(ω) = 1/σ_ab + iω/v_ab.    (10)

The problem then arises of how to interpret [A_ab²(∂_t) − ∇²]^{3/2}. In the long-wavelength approximation one merely expands G_ab(k, ω) around k = 0 for small k, yielding a "nice" rational polynomial structure,

G_ab(k, ω) ≈ (w_ab⁰/σ_ab²)/[A_ab²(ω) + 3k²/2],    (11)

which is then manipulated as described above to give the PDE

[A_ab²(∂_t) − (3/2)∇²] ψ_ab = (w_ab⁰/σ_ab²) ρ_b,  A_ab(∂_t) = 1/σ_ab + v_ab^{−1} ∂_t.    (12)
We refer to Eq. (12) as the long-wavelength model. Higher order approximations can be obtained by expanding to higher powers in k [18], although all resulting higher order PDE models will still be long-wavelength approximations.
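The non-polynomial structure behind this approximation is easy to confirm numerically: the 2D Fourier (Hankel) transform of the normalized exponential footprint e^{−r}/(2π) is (1 + k²)^{−3/2}, which is not a ratio of polynomials in k². A Python sketch (quadrature parameters are illustrative):

```python
import numpy as np
from scipy.special import j0

# 2D Fourier (Hankel) transform of the normalized footprint w(r) = exp(-r)/(2*pi):
#   2*pi * int_0^inf (exp(-r)/(2*pi)) J_0(k r) r dr = (1 + k^2)^(-3/2),
# a non-rational function of k^2 -- the root of the fractional-operator problem.
def hankel_of_exp(k, rmax=60.0, n=200001):
    r = np.linspace(0.0, rmax, n)
    return float(np.sum(np.exp(-r) * j0(k * r) * r) * (r[1] - r[0]))

k = 1.3
numeric = hankel_of_exp(k)
exact = (1.0 + k * k) ** (-1.5)
```

Expanding (1 + k²)^{−3/2} ≈ 1/(1 + 3k²/2) for small k is precisely the long-wavelength step that Eq. (12) rests on.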
To obtain a PDE model that sidesteps the need to make the long-wavelength approximation, we use the observation that e^{−r} can be fitted with a two-parameter function of the form [19]

e^{−r} ≈ E(r) = (ε₁² − ε₂²)^{−1} [K₀(r/ε₁) − K₀(r/ε₂)],  ε₁ > ε₂,    (13)

where K₀ is the modified Bessel function of the second kind of order zero. The prefactor (ε₁² − ε₂²)^{−1} ensures a common normalization for e^{−r} and E(r).
The form of Eq. (13) motivates the approximation

w_ab(r) e^{−iωr/v_ab} ≈ (w_ab⁰/2πσ_ab²) E(A_ab(ω) r),  A_ab(ω) = 1/σ_ab + iω/v_ab.    (14)

This form is particularly useful since K₀(ar) has a simple Hankel transform given by 1/(a² + k²). For this choice G_ab(k, ω) takes the form

G_ab(k, ω) = w_ab⁰ B_ab(ω)/{[A_{ab;ε₁}(ω) + k²][A_{ab;ε₂}(ω) + k²]},  A_{ab;ε}(ω) = A_ab²(ω)/ε²,  B_ab(ω) = A_ab²(ω)/(σ_ab² ε₁² ε₂²).

In this case we obtain the PDE model

[A_{ab;ε₁}(∂_t) − ∇²][A_{ab;ε₂}(∂_t) − ∇²] ψ_ab = w_ab⁰ B_ab(∂_t) ρ_b.    (15)

For want of a better name we shall refer to Eq. (15) as the rational model. Note that the long-wavelength model can also be obtained using the approximation e^{−r} ≈ L(r) = 2K₀(√(2/3) r)/3. However, this approximation is poor for both small and large values of r, since lim_{r→0,∞} L(r)/e^{−r} = ∞. By contrast, lim_{r→0} E(r) = (ε₁² − ε₂²)^{−1} ln(ε₁/ε₂), so E is well behaved at the origin. A plot of E(r) (rational model) and L(r) (long-wavelength model) is shown in Fig. 1. The long-wavelength model is recovered from the rational model with the choice (ε₁, ε₂) = (√(3/2), 0).
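Both properties used here, the rational Hankel transform of K₀ and the finiteness of E at the origin, can be checked numerically. The sketch below assumes the form of E(r) in Eq. (13) with illustrative (not fitted) values of ε₁, ε₂:

```python
import numpy as np
from scipy.special import j0, k0

# (i) Hankel-transform identity behind the rational model:
#     int_0^inf K_0(a r) J_0(k r) r dr = 1/(a^2 + k^2).
def hankel_of_k0(a, k, rmax=80.0, n=400001):
    r = np.linspace(1e-9, rmax, n)   # K_0 diverges logarithmically at r = 0
    return float(np.sum(k0(a * r) * j0(k * r) * r) * (r[1] - r[0]))

a, k = 1.0, 0.7
numeric = hankel_of_k0(a, k)
exact = 1.0 / (a * a + k * k)

# (ii) Small-r behavior of E(r) = (eps1^2 - eps2^2)^(-1) [K_0(r/eps1) - K_0(r/eps2)]:
#      finite at the origin, approaching (eps1^2 - eps2^2)^(-1) ln(eps1/eps2).
#      The values of eps1, eps2 here are illustrative, not the fitted ones.
eps1, eps2 = 2.0, 0.5
r_small = 1e-8
E_small = (k0(r_small / eps1) - k0(r_small / eps2)) / (eps1**2 - eps2**2)
limit = np.log(eps1 / eps2) / (eps1**2 - eps2**2)
```

The logarithmic singularities of the two K₀ terms cancel in the difference, which is what makes E, unlike L, well behaved at the origin.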
We note that the formulation of the rational model (and indeed the long-wavelength model) provides only an approximation to the original relationship G_ab(r, t) = w_ab(r) δ(t − r/v_ab). As a result, synaptic activity does not arrive solely at t = r/v_ab. This is best seen by deriving G_ab(r, t) for the choice (14). Writing the inverse transform of 1/[A_{ab;ε}(ω) + k²] as H_ab(r, t) (calculated in Appendix A), we may express G_ab(r, t) in terms of H_ab. Although the shape of G(r, t) is consistent with our physical expectations, namely, a positive decaying pulse beyond t = r/v_ab, we have as yet not fixed the parameters (ε₁, ε₂) of the rational model. To do this we could try to approximate the complex function (10) of the full model by Eq. (14). However, a simpler approach is to consider fitting a projection of Eq. (10) that describes the linear stability of the homogeneous steady state. We do this in the next section, where we show that the dynamic pattern forming instability borders for the rational model are in closer quantitative agreement with those of the full model than are those of the long-wavelength model. In this sense we argue that the rational model is an improvement over the long-wavelength model (which it recovers as a special case). We can also interpret the above in terms of a distance-dependent distribution of velocities q_ab(v, r) for the spreading of synaptic activity, by writing G_ab(r, t) in the form of Eq. (18) with the normalization ∫₀^∞ dv q_ab(v, r) = 1. The original definition (8) is recovered for a single conduction velocity, q_ab(v, r) = δ(v − v_ab). Rearranging Eq. (18) gives the velocity distribution as Eq. (19). Using Eq. (17) we find that, for both the rational and long-wavelength models, the distribution q_ab(v, r) has a peak at v = v_ab as expected, as well as one for small r and v. This second peak, however, is strongly localized and hence merely introduces an insignificant overall delay to a traveling pulse.

IV. TURING INSTABILITY ANALYSIS
Here we explore the stability of the homogeneous steady state for the choice h_a = Σ_b u_ab + h_a⁰. The extension of this analysis to include the Liley model given by Eq. (4) is straightforward. Let h_a(r, t) = h_a^ss denote the homogeneous steady state, defined by h_a^ss = Σ_b W_ab f_b(h_b^ss) + h_a⁰, where W_ab = ∫_{R²} dr′ w_ab(r′). Linearizing around this solution and considering perturbations of the form h_a(r, t) = h̃_a e^{λt} e^{ik·r} gives the system of equations

h̃_a = Σ_b γ_b η_ab(λ) G_ab(k, −iλ) h̃_b,  γ_b = f_b′(h_b^ss),    (21)

which admits nontrivial solutions when E(k, λ) = det[δ_ab − γ_b η_ab(λ) G_ab(k, −iλ)] = 0.
Note that for a delayed difference of exponentials function we have simply that η_ab(λ) = e^{−λτ_ab} α_ab β_ab/[(α_ab + λ)(β_ab + λ)]. An instability occurs when, for the first time, there are values of k at which the real part of λ is non-negative. A Turing bifurcation point is defined as the smallest value of some order parameter for which there exists some nonzero k_c satisfying Re λ(k_c) = 0. It is said to be static if Im λ(k_c) = 0 and dynamic if Im λ(k_c) ≡ ω_c ≠ 0. The dynamic instability is often referred to as a Turing-Hopf bifurcation and generates a global pattern with wave number k_c, which moves coherently with a speed c = ω_c/k_c, i.e., as a periodic traveling wave train. Generically one expects to see the emergence of doubly periodic solutions that tessellate the plane, namely, traveling waves with hexagonal, square, or rhombic structure. If the maximum of the dispersion curve is instead at k = 0 then the mode that is first excited is another spatially uniform state; if ω_c ≠ 0, we expect the emergence of a homogeneous limit cycle with temporal frequency ω_c.
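The classification above reduces to two questions about the most unstable mode of the dispersion relation. The following sketch applies it to a toy λ(k) (chosen for illustration, not the E/I dispersion of this paper) that is critical at onset:

```python
import numpy as np

# Toy dispersion relation at onset: Re lam touches zero at k = kc_true and
# Im lam(kc_true) != 0, so the instability is dynamic (Turing-Hopf).
kc_true, c0 = 1.5, 2.0

def lam(k, mu):
    return mu - (k * k - kc_true**2) ** 2 + 1j * c0 * k

k = np.linspace(0.0, 4.0, 40001)
mu_crit = 0.0                          # Re lam first touches zero when mu = 0
kc = k[np.argmax(lam(k, mu_crit).real)]
omega_c = lam(kc, mu_crit).imag
kind = "static Turing" if abs(omega_c) < 1e-12 else "dynamic (Turing-Hopf)"
speed = omega_c / kc                   # emergent wave train speed c = omega_c / k_c
```

For this toy dispersion the critical wave number is k_c = 1.5 with ω_c = c0·k_c, giving a periodic traveling wave train of speed c = c0.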
For computational purposes it is convenient to split the dispersion relation E(k, λ) = 0 into real and imaginary parts and write λ = ν + iω to obtain the pair of equations

E_R(ν, ω) = 0,  E_I(ν, ω) = 0,    (23)

where E_R(ν, ω) = Re E(k, ν + iω) and E_I(ν, ω) = Im E(k, ν + iω).
Solving the system of Eqs. (23) gives us a curve in the (ν, ω) plane parametrized by k. A static bifurcation may thus be identified with the tangential intersection of ν = ν(ω) with the line ν = 0 at the point ω = 0. Similarly, a dynamic bifurcation is identified with a tangential intersection at ω ≠ 0; this is equivalent to tracking points where ν′(ω) = 0. For example, consider two populations, one excitatory and one inhibitory, and use the labels a ∈ {E, I}, with w_EE,IE⁰ > 0 and w_II,EI⁰ < 0. In this case

(24)
where G_ab = G_ab(k, −iλ) and η_ab = η_ab(λ). For simplicity we shall set α_aI = β_aI = 1 and α_aE = β_aE = α, and ignore synaptic delays by taking τ_ab = 0. In neocortex the spatial extent σ_aE of excitatory connections is broader than the extent σ_aI of inhibitory connections, and so we take σ_aE > σ_aI. Again for simplicity we set σ_aI = σ_I = 1 and σ_aE = σ_E. For a common choice of firing rate function f_a = f we may also set γ_a = γ. Finally, we focus on only a single axonal conduction velocity and set v_ab = v.
In Fig. 2 we show a plot of the critical curves in the (v, γ) plane above which the homogeneous steady state is unstable to dynamic instabilities with k_c = 0 (bulk oscillations) and k_c ≠ 0 (traveling waves). The upper panel in Fig. 2 shows results for the full model [defined by Eq. (9)], the middle panel for the long-wavelength model, and the lower panel for the rational model. We note that there are no qualitative differences between the models in the sense that, at the linear level, all models support Hopf and Turing-Hopf instabilities, with a switch from one to the other with increasing v. One obvious difference is that, with an appropriate choice of (ε₁, ε₂), the rational model is in far better quantitative agreement with the full model. To test the predictions of our linear stability analysis, and to compare the nonlinear behavior of the two models, we resort to direct numerical simulations. Necessarily this requires the choice of a firing rate function.
In all direct numerical simulations of the PDE models we take the sigmoidal form

f(h) = 1/(1 + e^{−βh}).    (25)

Here β is a gain parameter. Without loss of generality we set the steady state value of h_{E,I} to be zero, giving γ = f′(0) = β/4. The predictions of the linear stability analysis are found to be in excellent agreement with the behavior of the PDE models. Figure 3 shows a pattern seen in the long-wavelength model beyond the Turing-Hopf bifurcation, while Fig. 4 shows a pattern seen in the rational model, also beyond the Turing-Hopf bifurcation. For both models, parallel moving stripes are very commonly seen beyond the Turing-Hopf bifurcation, particularly for small domains, but a variety of other patterns such as those shown here are also possible, i.e., both systems have multiple attractors. Therefore, based upon numerics alone it is not possible to state that there is a qualitative difference between the types of patterns possible in the two models. A short discussion of the numerical techniques used in this paper can be found in Appendix B.
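The relation γ = f′(0) = β/4 used above is a one-line check. A small sketch, assuming the sigmoid in Eq. (25) has its steady state shifted to h = 0:

```python
import numpy as np

# Sigmoidal firing rate of Eq. (25), f(h) = 1/(1 + exp(-beta*h)), and its
# linearized gain gamma = f'(0) = beta/4 (independent of any constant shift).
def f(h, beta):
    return 1.0 / (1.0 + np.exp(-beta * h))

beta = 2.0
h = 1e-6
gamma_numeric = (f(h, beta) - f(-h, beta)) / (2.0 * h)  # central difference at h = 0
gamma_exact = beta / 4.0
```

Because γ enters the dispersion relation only through the product γ η_ab G_ab, the gain β is the natural order parameter for the Turing analysis of Sec. IV.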

V. SPATIAL MODULATION
It is now known that the neocortex has a crystalline microstructure at the millimeter length scale, so that the assumption of isotropic connectivity has to be revised (for a recent discussion see [20]). For example, in visual cortex it has been shown that long range horizontal connections (extending several millimeters) tend to link neurons having common functional properties (as defined by their feature maps). Since the feature maps (for orientation preference, spatial frequency preference, and ocular dominance) are approximately periodic, this leads to patchy connections that break continuous rotation symmetry (but not necessarily continuous translation symmetry). With this in mind we introduce a periodically modulated spatial kernel of the form

w_ab^P(r, r′) = w_ab(|r − r′|) J_ab(r − r′),    (26)

where J_ab(r) varies periodically with respect to a regular planar lattice L. Note that the patchy kernel w_ab^P is homogeneous, but not isotropic. Following recent work of Robinson [21] on patchy propagators, we show how to obtain an equivalent PDE model for an integral neural field equation with a spatial kernel given by Eq. (26).
First we exploit the periodicity of J_ab(r) and represent it with a Fourier series,

J_ab(r) = Σ_q J_ab^q e^{iq·r},    (27)

where the vectors q are the reciprocal lattice vectors of the underlying lattice L and the J_ab^q are Fourier coefficients. The presynaptic input is then given by Eq. (3) with w_ab replaced by w_ab^P, so that in Fourier space

ψ_ab(k, ω) = Σ_q J_ab^q G_ab(|k − q|, ω) ρ_b(k, ω),    (28)

where G_ab(k, ω) is given by Eq. (8). We may then write ψ_ab(r, t) = Σ_q J_ab^q ψ_ab^q(r, t), where ψ_ab^q(k, ω) = G_ab(|k − q|, ω) ρ_b(k, ω). Choosing G_ab(k, ω) according to Eq. (14) we see that ψ_ab^q(r, t) satisfies

[A_{ab;ε₁}(∂_t) − ∇_q²][A_{ab;ε₂}(∂_t) − ∇_q²] ψ_ab^q = w_ab⁰ B_ab(∂_t) ρ_b,    (29)

where ∇_q = ∇ − iq. Hence, we have an infinite set of PDEs for the complex amplitudes ψ_ab^q indexed by the reciprocal lattice vectors q. Since ψ_ab^{−q} = (ψ_ab^q)*, we have ψ_ab(r, t) = Σ_q J_ab^q ψ_ab^q(r, t) ∈ R as required. Assuming that there is a natural cutoff in q, we need only evolve a finite subset of these PDEs to see the effects of patchy connections on solution behavior. Note also that the Turing instability analysis for the patchy model is identical to that of the isotropic model under the replacement of G_ab by Eq. (28) in Eq. (21), so that λ now depends on the direction as well as the magnitude of k. Together with the mode selected by the Turing mechanism, all other modes generated by discrete rotations of the reciprocal lattice will also be selected. Thus periodic patchy connections favor the generation of periodic patterns.
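Building the truncated set of reciprocal lattice vectors is elementary. A sketch for a square lattice of spacing d (the cutoff qmax and all values are illustrative):

```python
import numpy as np

# Truncated reciprocal lattice of a square lattice with spacing d:
# q = (2*pi/d) * (m, n), |m|, |n| <= qmax. Each retained q indexes one PDE for
# psi^q_ab; the operator argument is shifted to |k - q|, so each q contributes a
# shifted copy of the dispersion surface.
def reciprocal_lattice(d, qmax=1):
    return [2.0 * np.pi / d * np.array([m, n])
            for m in range(-qmax, qmax + 1)
            for n in range(-qmax, qmax + 1)]

d = 2.0
qs = reciprocal_lattice(d)            # 9 modes for qmax = 1
k = np.array([np.pi, 0.0])            # a wave vector sitting on the lattice
shifts = [float(np.linalg.norm(k - q)) for q in qs]
```

A wave vector coinciding with a lattice vector gives |k − q| = 0 for that mode, which is why the modulated dispersion surface develops peaks at ±k_{1,2}.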
In Fig. 5 we plot the dispersion surfaces Re λ(k), k = (k_x, k_y), for parameters selected just beyond the instability of the homogeneous steady state. In the limit d → ∞ we recover the unmodulated model. For finite d we find that each lattice wave vector ±k_{1,2} introduces a shifted copy of the peak of the dispersion surface from the unmodulated case [Fig. 5(a)]. When these peaks are widely separated (for lattice spacing d ≲ 3) the interaction between them is weak and the bifurcation parameter portrait is expected to be analogous to that of the unmodulated model (Fig. 2) (at least up to a factor of 4 coming from the particular choice of J_ab^q above). In Fig. 6 we plot the bifurcations for the modulated model. Compared to the unmodulated case, the Hopf bifurcation is transformed to a Turing-Hopf bifurcation with critical wave vectors coinciding with those of the lattice and independent of the axonal velocity v. This is associated with the central peaks at ±k_{1,2} in Fig. 5(c) crossing through zero from below. With increasing v the dominant bifurcation is also of Turing-Hopf type; however, in this case it is a ring of wave vectors surrounding ±k_{1,2} that goes unstable first, as in Fig. 5(d). In both cases this suggests the emergence of traveling waves aligned to the lattice size and direction, which are indeed observed in direct numerical simulations. We shall refer to these as lattice-directed traveling waves. In the regime 3 ≲ d ≲ 6, four wave vectors become unstable with |k_x| = |k_y|, as in Fig. 5(b), and for d ≳ 6 the system is effectively that of the unmodulated case described by Fig. 5(a).
In Fig. 7 we plot the speed of a traveling wave at the Turing-Hopf bifurcation for v = 1. The speed of the wave is seen to increase almost linearly with the spacing d of the square lattice. This reflects the fact that, for small d, the emergent frequency ω_c is independent of d while k_c coincides with |k_{1,2}|. The emergent wave speed predicted by the linear analysis is found to be in excellent agreement with direct numerical simulations. Figure 8 shows a lattice-directed traveling wave created in the Turing-Hopf bifurcation shown in Fig. 6, for v = 1.
In Fig. 9 we show a pattern generated in the Turing-Hopf bifurcation for the system with a periodically modulated kernel for v = 10. It emerged from the spatially uniform state at the value of γ predicted from Fig. 6. The pattern is compatible with the wave number k for which the spatially uniform state is unstable.

VI. CONCLUSIONS
Neural field models of firing rate activity have had a major impact in helping to develop an understanding of the dynamics of EEG [12,22,23]. In this paper we have shown how to write down an equivalent PDE model for a neural field model in two spatial dimensions with a particular form of decaying connectivity and a space-dependent axonal delay. Importantly, this has avoided the so-called long-wavelength approximation that has been used in many EEG models to date. Direct numerical simulations of the equivalent PDE model have been shown to be consistent with a linear instability analysis of the original integral neural field model. Moreover, we have extended our approach to allow for patchy connections and used simulations of an appropriate set of coupled PDEs to confirm the existence of lattice-directed traveling waves.
A number of natural extensions of the work presented here are possible. The first concerns pattern selection; linear stability analysis alone cannot distinguish which of the doubly periodic solutions (hexagon, square, rhombus) will be excited first. To do this requires a further weakly nonlinear analysis. Techniques to do this for integral models in one spatial dimension with axonal delays have recently been developed in [24], and are naturally generalized to two spatial dimensions. To further cope with patchy connections one may well be able to borrow from techniques developed for the study of amplitude equations in anisotropic PDE models [25]. Another extension would be to use the ideas presented here to discover whether axonal delays have any significant effect on the existence and stability of other patterns seen in two-dimensional neural fields, such as spiral waves [26] and spatially localized bumps [14,27]. The treatment of distributed transmission speeds [28-30] is another important area, as is the extension to heterogeneous connection topologies (more realistic of real cortical structures) [20,31] and the treatment of parameter heterogeneity (describing more realistic physiological scenarios) [17]. All of these are topics of ongoing activity and will be reported upon elsewhere.

[Fig. 7 caption: Speed of the traveling wave at the Turing-Hopf bifurcation of Fig. 6 for v = 1, as a function of square lattice spacing d. The speed increases almost linearly with d. Here v = 1 and other parameters are as in Fig. 2; circles denote results from direct numerical simulations.]

APPENDIX A

The inverse transform H_ab(r, t) of 1/[A_{ab;ε}(ω) + k²] is proportional to

Θ(v_ab t − r)/√(v_ab² t² − r²),    (A1)

so that it vanishes for t < r/v_ab.

APPENDIX B
Here we provide some details on the simulation strategy for equations of the form (15). By defining some auxiliary variables and applying the chain rule, Eq. (15) can be written as four first-order (in time) PDEs. For our choice of η, Eq. (1) can be written as

[ (1/α_ab β_ab) ∂²/∂t² + ((α_ab + β_ab)/(α_ab β_ab)) ∂/∂t + 1 ] u_ab = ψ_ab(t − τ_ab).    (B1)
Note that for simplicity we set τ_ab = 0 in our simulations. To solve an equation like Eq. (29) we let ψ̃_ab^q = e^{−iq·r} ψ_ab^q, which converts Eq. (29) to

[A_{ab;ε₁}(∂_t) − ∇²][A_{ab;ε₂}(∂_t) − ∇²] ψ̃_ab^q = e^{−iq·r} w_ab⁰ B_ab(∂_t) ρ_b,    (B2)

and then proceed as above. The square domains were discretized with a regular grid, the spatial derivatives were approximated using finite differences, and periodic boundary conditions were used. The resulting ODEs were integrated using ODE45 in MATLAB with default tolerances. Figure 3 used a discretization of 60×60, Fig. 4 used 51×51, Fig. 8 used 20×20, and Fig. 9 used 50×50.
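The reduction of Eq. (B1) to first-order form is the same at every grid point. The paper used MATLAB's ODE45; an equivalent Python sketch with SciPy's solve_ivp for a single point driven by a constant ψ (parameter values illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Eq. (B1) with tau_ab = 0 as a first-order system: with y = (u, u'),
#   u'' = alpha*beta*(psi - u) - (alpha + beta)*u'.
alpha, beta, psi = 2.0, 1.0, 0.7      # constant drive psi; values illustrative

def rhs(t, y):
    u, up = y
    return [up, alpha * beta * (psi - u) - (alpha + beta) * up]

sol = solve_ivp(rhs, (0.0, 30.0), [0.0, 0.0], rtol=1e-8, atol=1e-10)
u_final = float(sol.y[0, -1])         # u relaxes to the constant drive psi
```

Both eigenvalues of the homogeneous problem (−α and −β) are negative, so u relaxes monotonically in amplitude to the drive; in the full code this ODE is coupled at each grid point to the finite-difference discretization of Eq. (15).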