Guiding synchrony through random networks

Sparse random networks contain substructures that can be viewed as diluted feed-forward networks. Modeling of cortical circuits has shown that feed-forward structures, if strongly pronounced compared to the embedding random network, enable reliable signal transmission by propagating localized (sub-network) synchrony. Such prominent structures, however, are not observed experimentally in local cortical circuits. Here we show that nonlinear dendritic interactions, as discovered in recent single-neuron experiments, naturally enable guided synchrony propagation already in random recurrent neural networks exhibiting only mildly enhanced, biologically plausible substructures.


I. INTRODUCTION
Cortical neural networks generate a ground state of highly irregular spiking activity whose dynamics is sensitive to small perturbations such as missing or additional spikes [1-4]. A robust, reliable transmission of information in the presence of such perturbations and noise is nonetheless assumed to be essential for neural computation. It has been hypothesized that this might be achieved by the propagation of pulses of synchronous spikes along feed-forward chains [5]. In current models, functionally relevant chains require a dense connectivity between the neuronal layers of the network [6] or strongly enhanced synapses and specifically modified response properties of neurons within the chain [7]. Such highly distinguished large-scale structures, however, are not observed experimentally.
Can less structured networks also guide synchrony? Recently, single neuron experiments have revealed a mechanism that nonlinearly promotes synchronous inputs. Upon synchronous dendritic stimulation, neurons are capable of generating fast dendritic spikes.
In the soma, these induce rapid, strong depolarizations [8] that are nonlinearly enhanced compared to the depolarizations expected from linear summation of single inputs. If the dendritic spike induces an action potential in the soma, the latter occurs at a fixed time after the stimulation, with sub-millisecond precision. Other experiments have found slow dendritic spikes, which are comparatively insensitive to input synchrony [9]. These slow dendritic spikes endow single neurons with computational capabilities comparable to multi-layered feed-forward networks of simple rate neurons [10]. Furthermore, they provide a possible mechanism underlying neural bursting and its propagation, which have been shown to enhance the reliability and temporal precision of signal propagation [11,12]. The impact of fast dendritic spikes, which induce non-additive coupling, on collective circuit dynamics has so far not been systematically investigated in a general setting.
In this article, we show how fast dendritic nonlinearities may support guided synchrony propagation in neural circuits. First, we develop an analytical approach to describe such propagation in linearly and nonlinearly coupled networks. In particular, we derive an expression for the critical connectivity above which propagation occurs and for the size of the propagating pulse. We quantify how dendritic nonlinearities reduce the need for dense anatomical connectivity and thereby promote propagation of synchrony. Finally, using large-scale simulations of more detailed recurrent network models, we show that feed-forward networks that occur naturally as parts of random circuits enable persistent guided synchrony propagation due to dendritic nonlinearities.

II. MODELS AND METHODS

A. Analytically tractable model
Model with linear summation of inputs. As a basic model we consider networks of conventional leaky integrate-and-fire neurons that interact by sending and receiving spikes via directed connections. The membrane potential $V_l$ of a neuron $l$ satisfies
$$\dot{V}_l(t) = -\gamma_l V_l(t) + I_l(t),$$
where $\gamma_l$ is the inverse membrane time constant and $I_l(t)$ is the total input current at time $t$. In addition to inputs from the network, the neurons receive excitatory and inhibitory random inputs which emulate an embedding network, i.e.
$$I_l(t) = I^0_l + I^{\text{ext,ex}}_l(t) + I^{\text{ext,in}}_l(t) + I^{\text{net}}_l(t),$$
where $I^0_l$ is a constant input current modeling slow external (from outside the chain) and internal (from within the chain) currents, $I^{\text{ext,ex}}_l(t)$ and $I^{\text{ext,in}}_l(t)$ are the contributions of arriving external excitatory and inhibitory spikes (modeled as independent Poissonian spike trains with rates $\nu^{\text{ext,ex}}$ and $\nu^{\text{ext,in}}$, respectively), and $I^{\text{net}}_l(t)$ is the contribution originating from spikes of neurons of the network. In the absence of any spiking activity, the membrane potential exponentially converges towards its asymptotic value $V^\infty_l := I^0_l/\gamma_l$. When the neuron's membrane potential reaches or exceeds its threshold $\Theta_l$, the membrane potential is reset to $V^{\text{reset}}_l$ and a spike is emitted, which arrives at the postsynaptic neuron $j$ after a delay time $\tau_{jl}$. For a refractory period $t^{\text{ref}}_l$ after the reset, all incoming spikes to neuron $l$ are ignored and the membrane potential is held at $V^{\text{reset}}_l$. We model the fast rise of the membrane potential upon the arrival of a presynaptic spike by an instantaneous jump, such that the contributions of arriving external spikes to the total input current are given by
$$I^{\text{ext,ex}}_l(t) = \epsilon^{\text{ext,ex}} \sum_k \delta\bigl(t - t^{\text{ext,ex}}_{l,k}\bigr), \qquad I^{\text{ext,in}}_l(t) = \epsilon^{\text{ext,in}} \sum_k \delta\bigl(t - t^{\text{ext,in}}_{l,k}\bigr),$$
where $t^{\text{ext,ex}}_{l,k}$ ($t^{\text{ext,in}}_{l,k}$) are the arrival times of the $k$th excitatory (inhibitory) external spike at neuron $l$, $\epsilon^{\text{ext,ex}} > 0$ and $\epsilon^{\text{ext,in}} < 0$ are the strengths of single external spikes, and $\delta(\cdot)$ is the Dirac $\delta$-distribution. Analogously, the contribution of spikes received from neurons of the network is given by
$$I^{\text{net}}_l(t) = \sum_j \sum_k \epsilon_{lj}\,\delta\bigl(t - t^f_{j,k} - \tau_{lj}\bigr),$$
where $\epsilon_{lj}$ is the coupling strength from neuron $j$ to $l$ and $t^f_{j,k}$ is the $k$th spike time of neuron $j$.
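The single-neuron dynamics above can be sketched in a few lines of code. The following is a minimal illustration (not the paper's implementation) of a leaky integrate-and-fire neuron with instantaneous voltage jumps, reset and refractoriness; all parameter values are placeholder assumptions:

```python
def simulate_lif(input_times, eps, gamma=0.1, i0=1.5, theta=20.0,
                 v_reset=0.0, t_ref=2.0, dt=0.01, t_max=100.0):
    """Euler integration of dV/dt = -gamma*V + I0 between input spikes;
    each arriving spike (a delta current of strength eps) causes an
    instantaneous jump of the membrane potential by eps.
    All parameter values are illustrative placeholders, not the paper's."""
    v = v_reset
    last_out = -float("inf")
    out_spikes = []
    times = sorted(input_times)
    k = 0
    for step in range(int(t_max / dt)):
        t = step * dt
        refractory = (t - last_out) < t_ref
        if refractory:
            v = v_reset                     # potential clamped, inputs ignored
        else:
            v += dt * (-gamma * v + i0)     # leaky drift toward V_inf = I0/gamma
        while k < len(times) and times[k] <= t:
            if not refractory:
                v += eps                    # instantaneous jump on spike arrival
            k += 1
        if not refractory and v >= theta:   # threshold crossing: spike and reset
            out_spikes.append(t)
            v, last_out = v_reset, t
    return out_spikes
```

With these placeholder parameters $V^\infty = I^0/\gamma = 15$ lies below the threshold $\Theta = 20$, so the neuron fires only when input jumps push it across threshold.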
Model with nonlinear summation of inputs. In the above model without nonlinear dendrites, the strengths of synchronous inputs are summed linearly (cf. Eq. (5)). We incorporate nonlinear dendrites by modulating this sum for excitatory inputs by a nonlinear function $\sigma$ that can be read off directly from experimental results [8]: $\sigma$ equals the identity for small excitatory inputs, increases steeply when the input exceeds a threshold $\Theta_b$, and saturates for larger inputs. We define the dendritic modulation function as
$$\sigma(x) = \begin{cases} x, & x < \Theta_b,\\ \kappa, & x \ge \Theta_b, \end{cases}$$
where $\kappa > \Theta_b$ is the saturated dendritic response. For simplicity, we consider only exactly simultaneous spikes as synchronous. Accordingly, conduction delays are chosen homogeneously, $\tau_{ij} \equiv \tau$, so that synchronous presynaptic spiking can be amplified. In this scenario the detection of synchronous events is straightforward.
However, systems with heterogeneous delays and a finite dendritic integration window exhibit qualitatively the same phenomena [13]. The contribution of spikes received from the network is then given by
$$I^{\text{net}}_l(t) = \sum_{t^f} \delta(t - t^f - \tau)\Bigl[\sigma\Bigl(\sum_{j \in M^{\text{ex}}(t^f)} \epsilon_{lj}\Bigr) + \sum_{j \in M^{\text{in}}(t^f)} \epsilon_{lj}\Bigr],$$
where the sum runs over all firing times $t^f$ in the network. The sets $M^{\text{ex}}(t^f)$ and $M^{\text{in}}(t^f)$ denote the sets of indices of neurons sending an excitatory and inhibitory spike at time $t^f$, respectively.
Networks with linear dendrites can be described by setting σ(ǫ) = ǫ.
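As a concrete sketch, the modulation function can be written as a step nonlinearity: identity below the dendritic threshold, saturated response above it. The values of `theta_b` and `kappa` below are placeholder assumptions, not parameters fitted to the experiments of [8]:

```python
def sigma(x, theta_b=10.0, kappa=15.0):
    """Dendritic modulation of the summed synchronous excitatory input x.
    Inputs below the dendritic threshold theta_b sum linearly (identity);
    at or above the threshold the response jumps to the saturated value
    kappa. theta_b and kappa are placeholder values, not fitted parameters."""
    return x if x < theta_b else kappa
```

Setting `theta_b` unreachably high recovers the linear case $\sigma(\epsilon) = \epsilon$.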
B. Biologically more detailed model

Conductance-based model. In the last part of the article we employ a biologically more detailed neuron model to highlight the generality of our findings on propagation enhancement.
The neuron model is a conductance-based leaky integrate-and-fire neuron that is augmented by terms introducing the impact of dendritic spikes (see also [14]). The subthreshold dynamics of the membrane potential $V_l$ of neuron $l$ obeys the differential equation
$$C^m_l \dot{V}_l(t) = g^L_l\bigl(V^{\text{rest}}_l - V_l(t)\bigr) + g^A_l(t)\bigl(E^{\text{Ex}} - V_l(t)\bigr) + g^G_l(t)\bigl(E^{\text{In}} - V_l(t)\bigr) + I^{DS}_l(t) + I^0_l.$$
Here, $C^m_l$ is the membrane capacitance, $g^L_l$ is the resting (leak) conductance, $V^{\text{rest}}_l$ is the resting membrane potential, $E^{\text{Ex}}$ and $E^{\text{In}}$ are the reversal potentials, and $g^A_l(t)$ and $g^G_l(t)$ are the conductances of the excitatory and inhibitory synaptic populations, respectively. $I^{DS}_l(t)$ models the current pulses caused by dendritic spikes and $I^0_l$ is a constant current gathering slow external and internal currents. The time course of a single synaptic conductance contributing to $g^A_l(t)$ or $g^G_l(t)$ is given by the difference of two exponential functions (e.g. [15]). Whenever the membrane potential reaches the spike threshold $\Theta_l$, the neuron sends a spike to its postsynaptic neurons, is reset to $V^{\text{reset}}_l$ and becomes refractory for a period $t^{\text{ref}}_l$.

To account for dendritic spike generation, we consider the sum $g_{l,\Delta t}$ of excitatory input strengths (characterized by the coupling strengths) arriving at an excitatory neuron $l$ within the time window $\Delta t$ for nonlinear dendritic interactions,
$$g_{l,\Delta t}(t) = \sum_j \sum_k g^{\max}_{lj}\, \chi_{[t-\Delta t,\,t]}\bigl(t^f_{j,k} + \tau\bigr),$$
where $\chi_{[t-\Delta t,\,t]}$ is the characteristic function of the interval $[t-\Delta t, t]$, $t^f_{j,k}$ is the $k$th firing time of excitatory neuron $j$ and $\tau$ denotes the synaptic delay. We denote the peak conductance (coupling strength) of a connection from neuron $j$ to neuron $l$ by $g^{\max}_{lj}$. If $g_{l,\Delta t}$ exceeds a threshold $g_\Theta$, a dendritic spike is initiated and the dendrite becomes refractory for a time window $t^{DS,\text{ref}}$.

[Fig. 1(c): The input sequences (black lines, strength $g^{\text{ex}} = 2.3\,$nS, closeups given by insets) and the sum $g_{l,\Delta t}(t)$ of excitatory inputs received within the dendritic integration window $[t-\Delta t, t]$ (gray), cf. Eq. (9). At the first spike arrival around $t = 1\,$ms, three inputs are received within $\Delta t$, so that $g_{l,\Delta t}(t)$ reaches $6.9\,$nS. This sum is smaller than the dendritic threshold $g_\Theta = 8.65\,$nS (red horizontal line), so no dendritic spike is generated and the membrane potential of a neuron with a mechanism for dendritic spike generation does not differ from that of a neuron without one. Around $t = 50\,$ms, four spikes arrive within $\Delta t$, $g_{l,\Delta t}(t)$ exceeds the dendritic threshold and a dendritic spike is generated. Around $t = 100\,$ms, four spikes arrive at the neuron, but the temporal difference between the first and the last spike is slightly larger than $\Delta t$; consequently, $g_{l,\Delta t}(t)$ does not exceed the dendritic threshold and no dendritic spike is initiated.]

The effect of the dendritic spike is incorporated into the model by a current pulse that reaches the soma a time $\tau^{DS}$ thereafter. This current pulse is modeled as the sum of three exponential functions, with prefactors $A > 0$, $B > 0$, $C > 0$, decay time constants $\tau^{DS,1}$, $\tau^{DS,2}$, $\tau^{DS,3}$, and a dimensionless correction factor $c(g_{\Delta t})$, where $g_{\Delta t}$ is the summed excitatory input at the initiation time of the dendritic spike as given by Eq. (9). The factor $c(g_{\Delta t})$ modulates the pulse strength, ensuring that the peak of the excitatory postsynaptic potential (pEPSP) reaches the experimentally observed region of saturation. At very high excitatory inputs, the conventionally generated depolarization exceeds the level of saturation, and the pEPSP increases again (cf. Fig. 1a).
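The sliding-window criterion for dendritic spike initiation can be sketched as follows. The input strength and threshold reproduce the numbers from the example above ($g^{\text{ex}} = 2.3\,$nS, $g_\Theta = 8.65\,$nS), while the window length and dendritic refractory period are assumed values:

```python
def dendritic_spike_times(arrival_times, weights, g_theta=8.65,
                          window=2.0, t_ds_ref=5.0):
    """Scan excitatory spike arrivals (sorted times in ms, peak conductances
    in nS) and report the times at which the summed input within the sliding
    window [t - window, t] exceeds the dendritic threshold g_theta, while
    respecting a dendritic refractory period t_ds_ref. The values of
    window and t_ds_ref are assumptions for illustration."""
    events, last_ds = [], -float("inf")
    for t in arrival_times:
        if t - last_ds < t_ds_ref:
            continue                      # dendrite still refractory
        # sum all inputs that arrived within [t - window, t]
        g = sum(w for s, w in zip(arrival_times, weights)
                if t - window <= s <= t)
        if g > g_theta:
            events.append(t)              # dendritic spike initiated
            last_ds = t
    return events
```

Three inputs of 2.3 nS within the window sum to 6.9 nS and stay subthreshold; four inputs sum to 9.2 nS and trigger a dendritic spike, whereas four inputs spread over more than the window length do not, matching Fig. 1(c).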
Detection probability. In the last part of the article we investigate recurrent networks in which a feed-forward subnetwork consisting of a certain number of layers (groups) is created by modifying the strengths of existing synaptic connections of the network. To decide whether propagation of synchrony in recurrent networks is successful, we consider the signal-to-noise ratio (SNR): We pick $\omega$ neurons, randomly selected from the network, to be the first group.
After initiation of synchronous activity in this group, we count the number $S_i$ of spikes from neurons of the $i$th group (for details on how the $i$th group is defined, see the section on recurrent networks),
$$S_i = \sum_{j \in \mathrm{Gr}(i)} \sum_k \chi_{[t^{\exp}_i - t_w/2,\; t^{\exp}_i + t_w/2]}\bigl(t^f_{j,k}\bigr),$$
where $\mathrm{Gr}(i)$ is the set of indices of neurons of group $i$, $t^f_{j,k}$ is the $k$th firing time of neuron $j$ and $\chi$ denotes the characteristic function, as before. Here $t^{\exp}_i$ is the expected time for the synchronous pulse to reach layer $i$ and $t_w$ is the expected width of the synchronous pulse; we consider all spikes within the time window of size $t_w$ centered at $t^{\exp}_i$ as part of the synchronous pulse. The times $t^{\exp}_i$ are determined after the simulation such that $\sum_i S_i$ becomes maximal.
To determine the noise level of group $i$, we measure the probability $P^i_{\Delta t_{\text{obs}}, t_w}(k)$ of finding $k$ spikes from neurons of group $i$ within time windows of size $t_w$ over a control time interval in which no synchronous activity is initiated. The noise level $N_i$ of group $i$ is the minimal value $k$ for which
$$\sum_{k' \le k} P^i_{\Delta t_{\text{obs}}, t_w}(k') \ge a,$$
with a constant $a \lesssim 1$.
Finally, we denote the propagation of synchrony up to the $i$th layer as successful if the SNR exceeds a constant $b$,
$$\mathrm{SNR}_i = \frac{S_i}{N_i} > b,$$
where $b \ge 1$. This means in particular that we can distinguish the background (spontaneous) activity from the signal induced by propagation of synchrony in all layers $1, \ldots, i$.
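The detection criterion can be sketched as follows. The reading of the noise-level definition (a quantile of the control-window spike counts at confidence $a$) is an assumption, as is the value of $b$:

```python
import math

def signal_count(spikes, group, t_center, t_w):
    """S_i: number of spikes from neurons in `group` inside the window of
    width t_w centered at the expected pulse arrival time t_center.
    `spikes` is a list of (neuron_id, spike_time) pairs."""
    lo, hi = t_center - t_w / 2.0, t_center + t_w / 2.0
    return sum(1 for j, t in spikes if j in group and lo <= t <= hi)

def noise_level(control_counts, a=0.95):
    """One plausible reading of the noise level N_i: the minimal spike
    count k such that a fraction >= a of control windows (with no
    initiated synchrony) contain at most k group spikes."""
    counts = sorted(control_counts)
    idx = min(len(counts) - 1, math.ceil(a * len(counts)) - 1)
    return counts[idx]

def snr_successful(s, n, b=2.0):
    """Propagation up to a layer counts as successful if SNR = S/N > b."""
    return s / max(n, 1) > b
```

In a full simulation one would evaluate `signal_count` per layer at the optimized arrival times $t^{\exp}_i$ and compare against the layer's noise level.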

III. RESULTS

A. Feed-forward chains with linear coupling
How can diluted feed-forward networks (FFNs) propagate synchrony? FFNs consist of a sequence of layers, each composed of $\omega$ excitatory neurons, with forward connections to neurons in the subsequent layer present at random with probability $p$; present connections have strength $\epsilon$. Synchronous spiking activity is initiated by exciting the neurons of the first layer to spike simultaneously. In the second layer they excite a certain subgroup of neurons to spike simultaneously, which in turn generates a synchronous input to layer three, and so on.
To understand the collective dynamics analytically, we consider networks of leaky integrate-and-fire neurons in the limit of fast synaptic currents (cf. Section "Models and Methods"). In the absence of synchronous activity, each neuron of the FFN receives a large number of inputs from an emulated external network and only very few inputs from the previous layer, such that its dynamics is practically identical to the ground state of balanced networks. If the connections within the FFN are weak and/or the connection probability is low, the spontaneous spiking activity is only weakly influenced by spiking activity of the FFN. We therefore assume that the ground state activity is exclusively governed by the external inputs, effectively setting couplings within the chain to $\epsilon_{ij} = 0$. The external input is balanced, i.e. the mean input is subthreshold and spontaneous spiking is caused by fluctuations in the input. The network's neurons thus spike in an asynchronous and irregular manner [1,2], and the stationary distribution of membrane potentials $P_V(V)$ can be calculated analytically in the diffusion approximation [2,16].
The quantity
$$p_f(x) = \int_{\Theta - x}^{\Theta} P_V(V)\, \mathrm{d}V$$
is the probability of finding a neuron's membrane potential in the interval $[\Theta - x, \Theta]$. We model the fast rise of the membrane potential upon the arrival of (possibly nonlinearly enhanced) presynaptic spikes by an instantaneous jump in the membrane potential (cf. Section "Models and Methods"); thus $p_f(\sigma(h\epsilon))$ specifies the spiking probability of a single neuron after receiving $h$ input spikes of strength $\epsilon$ from the preceding layer.
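For illustration, if one approximates the stationary membrane-potential distribution $P_V$ by a Gaussian (a stand-in for the diffusion-approximation result; the mean and width below are assumptions), $p_f$ can be evaluated in closed form:

```python
import math

def p_f(x, theta=20.0, mu=10.0, sig=4.0):
    """Probability that a neuron spikes upon an instantaneous depolarization
    of size x: the probability mass of the membrane potential in
    [theta - x, theta]. P_V is taken as a Gaussian with mean mu and width
    sig, a stand-in for the diffusion-approximation distribution; all
    parameter values are assumptions."""
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # normal CDF
    lo = (theta - x - mu) / sig
    hi = (theta - mu) / sig
    return Phi(hi) - Phi(lo)
```

By construction $p_f(0) = 0$ and $p_f$ increases monotonically with the input size $x$, saturating at the total probability mass below threshold.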
To assess the propagation of synchrony, we consider the average number of neurons which are activated in each layer in response to the initial synchronous pulse (cf. also [17]). When $g_i$ neurons spike synchronously in layer $i$,
$$p_{\mathrm{sp}}(g_i) = \sum_{h=0}^{g_i} \binom{g_i}{h} p^h (1-p)^{g_i - h}\, p_f\bigl(\sigma(h\epsilon)\bigr)$$
is the probability of spiking of a particular neuron in layer $i+1$, where the number $h$ of simultaneous inputs is binomially distributed, $h \sim B(g_i, p)$. Thus, for layers of size $\omega$, the average number of neurons spiking in layer $i+1$ is
$$\langle g_{i+1} \rangle = \omega\, p_{\mathrm{sp}}(g_i).$$
Substituting the average group size $\langle g_i \rangle$ for the actual size $g_i$ yields the interpolated map $\langle g_{i+1} \rangle = \omega\, p_{\mathrm{sp}}(\langle g_i \rangle)$, whose fixed points qualitatively determine the propagation of synchronous activity, cf. Fig. 2.
The trivial, absorbing fixed point $G_0 = 0$, defining a state of extinguished activity, always exists. For sufficiently small $p$, $\epsilon$ and $\omega$, this is the only fixed point. With increasing connectivity and layer size, a pair of fixed points ($G_1$, unstable, and $G_2$, stable) appears via a tangent bifurcation. Initial pulses in the basin of attraction of $G_2$ (i.e. those larger than $G_1$) typically initiate stable propagation of synchrony with group sizes around $G_2$. For given layer size $\omega$ and connection strength $\epsilon$, the critical connectivity $p^*$ for which $G_1 = G_2$ marks the minimal connectivity that supports stable propagation of synchrony.
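The return map and its fixed-point structure can be explored numerically. The sketch below iterates the interpolated map with a toy piecewise-linear spiking probability (an assumption standing in for the diffusion-approximation $p_f$); depending on the connectivity $p$, an initial full-layer pulse either settles near the stable fixed point $G_2$ or decays to the absorbing fixed point $G_0 = 0$:

```python
from math import comb

def g_map(g, omega, p, eps, p_f):
    """Expected number of synchronously spiking neurons in the next layer
    (linear coupling): g_{i+1} = omega * sum_h B(h; g, p) * p_f(h * eps)."""
    g = int(round(g))
    return omega * sum(comb(g, h) * p**h * (1 - p)**(g - h) * p_f(h * eps)
                       for h in range(g + 1))

def iterate(g0, omega, p, eps, p_f, n_layers=20):
    """Iterate the interpolated map over n_layers layers; propagation is
    stable if the orbit settles near the upper fixed point G2 instead of
    decaying to the absorbing fixed point at 0."""
    g = g0
    for _ in range(n_layers):
        g = g_map(g, omega, p, eps, p_f)
    return g
```

With the toy `pf` used in the test below, a densely connected chain sustains the pulse while a sparse one extinguishes it, reproducing the two regimes qualitatively.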
To elaborate the influence of nonlinear dendritic interactions, we derive the critical connectivity for FFNs. The mechanisms underlying propagation of synchrony are different for networks with and without nonlinear dendritic interactions and thus require different analytical approaches to derive $p^*$. We first consider feed-forward chains with conventional, linear coupling, i.e. $\sigma(x) = x$. To obtain $p^*$, we first expand $p_f(x)$ into a Taylor series up to first order around the mean $p g_i$ of the binomial distribution, such that Eq. (16) simplifies to
$$g_{i+1} = \omega\, p_f(\epsilon p\, g_i).$$
The linear approximation becomes exact in the limit of large layer sizes $\omega$ and small couplings $\epsilon$, where the product $\epsilon\omega$ is kept constant. We obtain an interpolated map from Eq. (20) by replacing $g_i$ by its mean value $\langle g_i \rangle$. At a fixed point $G := \langle g_{i+1} \rangle = \langle g_i \rangle$, the function
$$F(G, p) := \omega\, p_f(\epsilon p G) - G$$
vanishes, $F(G, p^\dagger) = 0$. Here, $G$ and $p^\dagger$ are the average group size and the connection probability at the fixed point for given layer size $\omega$ and coupling strength $\epsilon$. Furthermore, $F$ has a double root at the bifurcation point, so the derivative with respect to $G$ also vanishes, such that the derivative of $p_f$ at the bifurcation point is given by
$$p_f'(\epsilon p^\dagger G) = \frac{1}{\omega \epsilon p^\dagger}.$$
Combining the above equations, we express the derivatives of $F$ at the bifurcation point; applying the implicit function theorem then yields a set of equations for the derivatives of $p^*$, which are solved by
$$p^*_{\mathrm L} = \frac{\lambda}{\omega \epsilon},$$
where $\lambda$ is a constant independent of $\omega$ and $\epsilon$. We note that we did not make explicit assumptions on the distribution of membrane potentials $P_V(V)$, which is determined by the setup of the external network, i.e. the external input current $I^0$, the coupling strengths $\epsilon^{\text{ext,ex}}$ and $\epsilon^{\text{ext,in}}$ as well as the firing rates $\nu^{\text{ext,ex}}$ and $\nu^{\text{ext,in}}$. With a different, lengthier approach based on a second-order expansion of $p_f$ one can derive an analytical estimate of $\lambda$ [13]. Fig. 3 displays this analytical approximation for $p^*_{\mathrm L}$ and its agreement with numerical simulations.
[Fig. 3 (caption): The reduction factor $c = p^*_{\mathrm L}/p^*_{\mathrm{NL}} > 1$ shows that nonlinear dendritic interactions compensate for reduced connectivity. In both scenarios, with linear and with nonlinear coupling, we find $p^* \propto \omega^{-1}$, so the reduction factor is independent of the layer size. In networks with linear couplings the critical connectivity scales as $p^* \propto \epsilon^{-1}$, whereas in networks with nonlinear coupling the dependence on $\epsilon^{-1}$ is nonlinear; the reduction factor therefore increases with decreasing coupling strength. Dashed horizontal lines indicate jumps in the reduction factor where the number of inputs needed for dendritic spike generation changes.]

B. Feed-forward chains with nonlinear coupling
We now consider networks incorporating nonlinear dendritic interactions and show that both the connectivity and the number of active neurons required for propagation of synchrony are smaller. In such networks, the mechanism underlying propagation of synchrony is different, because propagation is supported predominantly by nonlinearly enhanced inputs. This implies that the maximal effective input is bounded by $\kappa$, leading to a saturation in the return map (16) (cf. Fig. 2c).
The saturation enables propagating pulses of a size substantially smaller than $\omega$, in contrast to linearly coupled networks. The discontinuity in the modulation function $\sigma$ induces a discontinuity in $p_f(\sigma(x))$, which precludes applying our previous analytical method. We thus determine the critical connectivity by a self-consistency approach. When a synchronous pulse arrives at a specific layer, the summed excitatory input strength $x$ to a given neuron is either smaller or larger than the dendritic threshold $\Theta_b$. For sufficiently small $\Theta_b$, the spiking probability of a neuron due to a subthreshold input is much smaller than that due to a suprathreshold input, i.e. $p_f(\Theta_b) \ll p_f(\kappa)$.
Thus only a small fraction of the neurons that receive an input smaller than $\Theta_b$ is elicited to spike.
We therefore approximate $p_f(x) = 0$ for $x \le \Theta_b$. When there is persistent propagation of synchrony, the fraction $p_\gamma$ of neurons that receive sufficiently strong input to reach the dendritic threshold is constant throughout the layers. The total spiking probability of a single neuron upon the arrival of the synchronous pulse is then given by the product $p_\gamma\, p_f(\kappa)$.

The probability for $g$ neurons of a layer to spike synchronously follows a binomial distribution with single-neuron probability $p_\gamma p_f(\kappa)$. Combining the total spiking probability with the topological connection probability $p$, we compute the probability that a neuron of the subsequent layer receives exactly $k$ synchronous spikes; $k$ itself is thus binomially distributed, and we denote its mean value by $\delta$ and its standard deviation by $\sigma_\delta$. Using a Gaussian approximation of the binomial distribution yields a self-consistent equation (Eq. (37)), in which
$$n := \frac{\delta - \Theta_b/\epsilon}{\sigma_\delta}$$
(Eq. (38)) is the distance between the average number of inputs and the number needed to reach the onset of the nonlinearity, measured in units of $\sigma_\delta$. Solving definition (38) for $p$, which occurs as an argument of $\delta$ and $\sigma_\delta$, and using Eq. (37) yields the connection probability $p_{\mathrm{NL}}(n)$ in terms of $n$. For a given setup of the FFN with variable connectivity, $p_{\mathrm{NL}}(n)$ is the connectivity for which stationary propagation of synchrony occurs with a particular $n$. Any $p_{\mathrm{NL}}$ above the critical connectivity $p^*_{\mathrm{NL}}$ has two preimages $n$, corresponding to the group sizes $G_1$ and $G_2$; $p^*_{\mathrm{NL}}$ itself has one preimage, and any $p_{\mathrm{NL}}$ below $p^*_{\mathrm{NL}}$ has none, cf. Fig. 2c. Thus $p_{\mathrm{NL}}(n)$ has one global minimum at $n = n^*$, where $\mathrm{d}p_{\mathrm{NL}}(n)/\mathrm{d}n\,|_{n=n^*} = 0$, and the critical connectivity is $p_{\mathrm{NL}}(n^*) = p^*_{\mathrm{NL}}$. The comparison of the results for linearly and nonlinearly coupled FFNs is particularly enlightening in the limit of large layer size ($\omega \gg 1$) and small coupling strengths ($\epsilon \ll \Theta - V^{\text{reset}}$). We fix the maximal input to a neuron from the previous layer, $\epsilon\omega = \mathrm{const}$, to preserve the network state, and expand Eq. (40) in a power series around $\omega \to \infty$ and $\epsilon \to 0$.
Considering the leading terms, we note that propagation of synchrony mediated by dendritic spikes is enabled only if a sufficiently large fraction of neurons in each layer receives a total input larger than or equal to $\Theta_b$; this implies in particular $\Theta_b < \omega\epsilon$. Moreover, if the connectivity within the FFN is low, stable propagation even requires $\Theta_b \ll \omega\epsilon$, in which case $p_{\mathrm{NL}}$ simplifies further. As described above, the critical connectivity is given by the minimum of $p_{\mathrm{NL}}$ as a function of $n$, attained at $n = n^*$; the condition $\mathrm{d}p_{\mathrm{NL}}(n)/\mathrm{d}n\,|_{n=n^*} = 0$ yields $n^*$ as an implicit function of $\Theta_b/\epsilon$ (Eq. (43)). For better readability, we introduce the function $\beta(\Theta_b/\epsilon)$, with $n^* = n^*(\Theta_b/\epsilon)$ as given by Eq. (43). Combining Eqs. (42)-(44) simplifies the critical connectivity $p^*_{\mathrm{NL}}$ to an expression that depends nonlinearly on the number of spikes needed to reach the dendritic threshold ($\Theta_b/\epsilon$) through the function $1/\beta(\cdot)$. One can show that $\beta(\Theta_b/\epsilon)$ increases with decreasing coupling strength $\epsilon$ from $\beta(\Theta_b/\epsilon) = 0.5$ for large $\epsilon$ and becomes maximal in the limit of small couplings, $\lim_{\epsilon\to 0} \beta(\Theta_b/\epsilon) = 1$. Fig. 3 displays the results for $p^*_{\mathrm{NL}}$ together with numerical simulations. As in the linearly coupled network, the critical connectivity decays with layer size and coupling strength, but the dependence on $1/\epsilon$ is nonlinear. The factor by which the nonlinear dendritic interactions reduce the required network connectivity increases with decreasing threshold $\Theta_b$ and increasing enhancement $\kappa$. Fig. 3c illustrates the numerically obtained reduction of connectivity: the critical connectivity $p^*_{\mathrm{NL}}$ is smaller over the whole parameter range; the reduction is most effective for small $\epsilon$ and largely independent of $\omega$.
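The qualitative effect of the dendritic nonlinearity on the critical connectivity can be checked numerically without the analytical machinery: iterate the nonlinear return map and scan the connectivity for the smallest value at which a full-layer pulse still propagates. All parameters and the toy spiking probability `pf` below are assumptions chosen for illustration; the linear case is recovered by an unreachable dendritic threshold:

```python
from math import comb

def g_map_nl(g, omega, p, eps, theta_b, kappa, p_f):
    """Nonlinear return map: each neuron's summed input h*eps is passed
    through the dendritic modulation (identity below theta_b, saturated
    value kappa at or above it) before evaluating the spiking probability."""
    sigma = lambda x: x if x < theta_b else kappa
    return omega * sum(comb(g, h) * p**h * (1 - p)**(g - h)
                       * p_f(sigma(h * eps)) for h in range(g + 1))

def critical_p(omega, eps, theta_b, kappa, p_f, layers=30):
    """Smallest connectivity (on a grid of resolution 1e-3) for which an
    initial full-layer pulse still propagates over `layers` layers; a
    brute-force stand-in for the self-consistency calculation."""
    for p in (i / 1000 for i in range(1, 1000)):
        g = omega
        for _ in range(layers):
            g = int(round(g_map_nl(g, omega, p, eps, theta_b, kappa, p_f)))
        if g > 0:
            return p
    return 1.0
```

For such toy parameters the scan confirms that the nonlinearly coupled chain propagates synchrony at a substantially lower connectivity than the linearly coupled one.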
Nonlinear dendrites thus foster propagation of synchrony. We remark that our model still overestimates the capability of linearly coupled networks to propagate synchrony: Upon synchronous input linearly coupled groups of neurons generate synchronous output (if they generate output at all). This is a consequence of the infinitesimally short synaptic currents.
In neurons with extended synaptic currents the timing of the output strongly depends on the neurons' state and input strength. In contrast, the timing of somatic action potentials elicited by dendritic spikes is largely independent thereof. We therefore expect the effect of nonlinear dendrites to be even stronger in networks of biologically more detailed neurons, as considered in the next section.

C. Recurrent networks
The main findings generalize in two ways: to FFNs occurring in recurrent random networks and to biologically more detailed models. For such systems, we show that in nonlinearly coupled networks stable propagation emerges naturally where it is difficult to achieve in linearly coupled networks. In contrast to the isolated FFNs studied above, we now account for effects of the FFN on the surrounding network and its feedback. Further, we choose a more detailed neuron model (see section "Models and Methods") to ensure that the main assumptions underlying the analytically tractable model are not crucial for stable propagation of synchrony. In particular, we show that systems with temporally extended postsynaptic responses and a temporally extended nonlinear dendritic interaction window exhibit qualitatively the same phenomena as found above.
We consider networks of randomly connected conductance-based leaky integrate-and-fire neurons (cf. Eqs. (8)-(10)). The networks consist of $N_E$ excitatory and $N_I$ inhibitory neurons. A directed connection between two neurons is present with probability $p$. As for the isolated FFNs considered above, we construct the network such that the ground state in the absence of synchronous activity is characterized by balanced excitatory and inhibitory input, which results in asynchronous irregular spiking activity. For simplicity, all neurons have the same parameters, e.g. $C^m_l = C^m$, $g^L_l = g^L$, etc. First we set up a model network where the total excitation and inhibition to the neurons is balanced such that the spiking activity is asynchronous irregular. The external constant current $I^0$, together with the leak conductance $g^L$ and the resting potential $V^{\text{rest}}$, determines the asymptotic membrane potential in the absence of incoming spikes,
$$V^\infty = V^{\text{rest}} + \frac{I^0}{g^L}.$$
Additionally, each neuron receives excitatory and inhibitory random Poissonian spike trains.
The rates are denoted by $\nu^{\text{ext,ex}}$ and $\nu^{\text{ext,in}}$, and their ratio is chosen to equal the ratio of the numbers of excitatory and inhibitory neurons in the network,
$$\frac{\nu^{\text{ext,ex}}}{\nu^{\text{ext,in}}} = \frac{N_E}{N_I}.$$
This ensures that each neuron receives the same ratio of excitatory to inhibitory input from both the network and the external sources when the excitatory and inhibitory network populations spike on average with the same mean rate. All excitatory as well as all inhibitory connections have the same strength, i.e. $g^{\max}_{lj} = g^{\text{ex}}$ for excitatory and $g^{\max}_{lj} = g^{\text{in}}$ for inhibitory connections. The ratio of the peak postsynaptic potentials due to an inhibitory and an excitatory input at the asymptotic membrane potential $V^\infty$ is approximately given by the ratio of the peak conductances weighted by the respective driving forces, $g^{\text{in}}(V^\infty - E^{\text{In}})$ to $g^{\text{ex}}(E^{\text{Ex}} - V^\infty)$. We set this ratio accordingly to obtain balanced activity.
In contrast to the model considered in the first sections, now excitatory neurons have a non-zero time window ∆t for nonlinear dendritic modulation. When the strength of the excitatory input within ∆t exceeds a threshold, a current pulse is injected into the soma, modeling the effect of a dendritic spike. The neuron parameters for this phenomenological model are chosen according to experimental findings to reproduce the time course of the membrane potential in response to a dendritic spike quantitatively (see section "Models and Methods").
Considering a random network, we detect naturally occurring weak feed-forward structures suitable for signal transmission in the following way: We randomly choose a group of $x$ neurons to be the first layer. The second layer is composed of $x$ neurons chosen from those receiving the largest numbers of connections from the initial group. By repeating this selection process $l$ times, we identify an FFN consisting of $l$ layers. In each selection step, we exclude the $x$ neurons of the previous layer, but allow choosing neurons that are members of the layers preceding the previous one. The high-connectivity subnetwork selected in this way from an existing random network has, by construction, a slightly higher than average connection probability; this structure is therefore particularly well suited to enable propagation of synchrony. Alternatively, one can assign neurons randomly to the different layers and compensate for the smaller connectivity by, e.g., larger layer sizes according to Eq. (45).
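The layer-selection procedure can be sketched as follows (a simplified reading in which only the previous layer is excluded in each step, as in the text; the adjacency convention and all parameters are assumptions):

```python
import numpy as np

def select_ffn_layers(adj, x, n_layers, rng=np.random.default_rng(0)):
    """Identify a naturally occurring feed-forward structure in a random
    network: start from x random neurons, then repeatedly pick the x
    neurons receiving the most connections from the current layer,
    excluding only the current layer's members.
    Convention (an assumption): adj[i, j] = 1 if neuron j projects to i."""
    n = adj.shape[0]
    layers = [rng.choice(n, size=x, replace=False)]
    for _ in range(n_layers - 1):
        prev = layers[-1]
        in_counts = adj[:, prev].sum(axis=1)   # inputs from the previous layer
        in_counts[prev] = -1                   # exclude the previous layer only
        layers.append(np.argsort(in_counts)[::-1][:x])
    return layers
```

Because each layer is picked for its high in-degree from its predecessor, the average feed-forward connectivity within the selected subnetwork exceeds the overall connection probability $p$, as stated above.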
The measurements start after an equilibration phase (initially the network is at rest).
In the ground state, the network generates balanced irregular activity. Propagation of synchrony is initiated by exciting the neurons of the first layer to spike within a short time interval, smaller than the time window $\Delta t$ of dendritic integration. This leads to an increased input to the second layer after a delay time $\tau$, which in turn may lead to highly synchronous spiking of a certain number of neurons of the second layer (possibly supported by dendritic spikes), and thereby to synchronous spiking in the third layer after another delay time $\tau$, and so on. Propagation of synchrony requires (i) that the total input of a layer to its successor within the FFN is sufficiently strong and (ii) that the input to the remaining network is sufficiently weak, to avoid exciting too many other neurons to synchronous spiking.
After initiating a propagation of synchrony by exciting the neurons of the first group to spike within a short time interval, we measure the probability of detecting a synchronous pulse in the subsequent groups (see section "Models and Methods"), cf. Fig. 4a,b. Although the average connectivity within the identified FFN is significantly larger than the overall connectivity p, it is still small and propagation of synchronous activity is very unlikely (upper insets of Fig. 4a,b). We find that it is not sufficient to choose high-connectivity subnetworks as FFNs (as described above) to obtain a stable propagation of synchrony, but that the synapses within the FFN have to be strengthened. To study the transition to propagation we strengthen the synapses within the FFN gradually. As suggested by the results on isolated chains, we observe a propagation of synchrony over more and more layers for moderate enhancements (Fig. 4c,d). For very strong enhancements the feedback from the network becomes important: The synaptic amplification leads to an increased spontaneous activity within the FFN and this in turn results in an increased background activity. The overall increased spiking activity causes spontaneous synchronous pulses and a separation of the induced synchronous signal from the background activity is not possible anymore (the detection probability decreases, see Fig. 4a,b and lower insets).
In agreement with previous studies (cf. [7]), we find that in the linearly coupled networks considered a synchronous pulse propagates only over a few layers, even in the optimal enhancement range (Fig. 4a). In contrast, networks incorporating nonlinear dendrites support stable propagation of synchrony (Fig. 4b) in a substantial region of parameter space. In addition the propagation is enabled for enhancements considerably smaller than the optimal enhancement for networks with linear dendrites.

IV. DISCUSSION
In conclusion, we have analyzed strongly diluted networks with linear and nonlinear dendritic interactions. We have shown how nonlinear dendritic interactions may enhance and stabilize synchrony propagation both in isolated feed-forward chains and in recurrent network structures. Moreover, our results show that such local nonlinear interactions support the separation of propagating synchrony from asynchronous background activity. Earlier works [6,7] did not take into account supralinear amplification of synchronous activity. One study [7] used existing connections in recurrent networks to create diluted chains, assuming strongly enhanced synapses and, at the same time, specifically tuned neuron properties; still, synchrony could propagate only over a few groups. In contrast, the results presented above indicate that reliable propagation is achieved with only mildly adapted synapses and without specifically tuning or changing neuron properties or rewiring the network.
In a recent study [18] incorporating nonlinear dendrites it has been shown that synchronous activity can propagate in purely random networks without modified connections.
There, no specific propagation paths exist; rather, neurons are recruited in a quasi-random manner. Our results indicate that specific feed-forward chains that naturally occur in random neural circuits can persistently propagate synchronous signals if their synaptic strengths are increased. The strengths required in the presence of nonlinear interactions are common in biological neural circuits [19] and may well be generated by learning, e.g., through spike-timing-dependent plasticity.
Dendritic (coupling) nonlinearities therefore offer a viable mechanism for guiding synchrony through weakly structured random topologies.
Recently [12], feed-forward chains with slow dendritic (probably calcium) spikes have been simulated to test whether specific spike patterns experimentally observed in the higher vocal center of songbirds can occur. Our theoretical work now yields analytic insights into the collective dynamics of circuits with fast dendritic (sodium) spikes. Fast dendritic spikes have been found in the hippocampus and in the neocortex and may thus be involved in hippocampal replay, memory formation and other computational processes. Experimentally, their influence could be tested directly by selectively blocking dendritic sodium channels (e.g. [20] indicates that the sodium channel types in dendrite and soma differ), thereby distinguishing effects of non-additive coupling via fast dendritic spikes from those induced by other mechanisms.
During the last decade, the number of simultaneously accessible neurons has grown from a few to the order of 10^2, and this rapid trend is ongoing. When the activity of a substantial fraction of neurons of a local circuit is recorded, synchrony propagation should be clearly detectable and analyzable. Our results suggest that synchrony propagation, and thus spike patterns, should change if dendritic sodium channels, and with them fast dendritic spikes, are blocked. Specifically, in the hippocampus, the precision of (replayed) spike patterns should decrease, or the patterns should vanish, after blocking. Such experiments would thus provide a direct test of how non-additive coupling is exploited for the collective dynamics of neural circuits.
Once the connectome, i.e. the structural synaptic connectivity of neural circuits, becomes available in the future [21], the relative impact of synaptic and structural features versus dynamical single-neuron features on circuit dynamics may become well distinguishable.
The basic model of pulse-coupled units considered here applies to a range of systems in nature, not only neural circuits but also, e.g., earthquakes emerging from abruptly relaxing tectonic plates and fireflies interacting by exchanging light flashes (e.g. [22]). We have studied the impact of nonlinear input modulation on collective network dynamics and derived methods for its analysis that may be useful also in non-neuronal settings. Interestingly, very recent results [23] show that fireflies respond more readily to synchronous than to asynchronous flashes, suggesting a direct application of our model.

V. ACKNOWLEDGMENTS
Supported by the BMBF (grant no. 01GQ1005B), the DFG (grant no. TI 629/3-1) and the Swartz Foundation. Simulation results were partly obtained using the simulation software NEST [24].
We consider the propagation for a given setup, specified by ε, ω and p, as 'successful' if a synchronous pulse propagates along the whole FFN in more than 50% of o = 31 realizations of the FFN with different initial conditions. We determine the critical connectivities p*_L and p*_NL up to a resolution of Δp = 5 · 10^−3 by repeatedly bisecting the interval [0, 1] and testing the success of propagation.
For the network simulations (Fig. 4), we employed the simulation software NEST [24] (cf. [25,26]). We use I_0 = 250pA, and the frequencies of the external inputs are ν_ext,ex = 2.4kHz and ν_ext,in = 0.6kHz. The recurrent connectivity in cortical and hippocampal networks is sparse: connection probabilities between 1% and 10%, depending on distance and region, have been estimated (e.g. [19,25,27]); for our simulations we choose p = 0.03.
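The repeated-bisection search for the critical connectivity can be sketched as follows. This is a minimal illustration, not the original code: `propagation_successful` is a hypothetical stand-in for the full network simulation, which should return True if a synchronous pulse traverses the whole FFN in more than 50% of the o = 31 realizations.

```python
# Sketch: locate the critical connectivity p* by bisecting [0, 1]
# down to the stated resolution of 5e-3.

def critical_connectivity(propagation_successful, resolution=5e-3):
    lo, hi = 0.0, 1.0              # bracket for the connectivity p
    while hi - lo > resolution:
        mid = 0.5 * (lo + hi)
        if propagation_successful(mid):
            hi = mid               # propagation succeeds: p* <= mid
        else:
            lo = mid               # propagation fails: p* > mid
    return 0.5 * (lo + hi)
```

With a sharp-threshold stand-in such as `lambda p: p >= 0.3`, the routine returns the threshold to within the chosen resolution.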

B. Parameters for
The time constants of the excitatory (AMPA) conductances are τ_A,1 = 2.5ms and τ_A,2 = 0.5ms [28]. For simplicity, we choose the same time constants for the inhibitory (GABA_A) conductances, τ_G,1 = 2.5ms and τ_G,2 = 0.5ms. The reversal potentials are E_ex = 0mV and E_in = −75mV [15,25]. The strengths of experimentally observed pEPSPs due to single inputs range from small values like 0.1mV to larger values like 2mV [19,25,27]. For non-enhanced couplings, we set g_ex = 0.6nS, which corresponds to a pEPSP of approximately 0.3mV at rest. According to Eq. (51), the coupling strength of the inhibitory synapses is g_in = −6.6nS to maintain balanced input. This configuration results in an asynchronous irregular ground state with a spontaneous firing rate of ν ≈ 1.8Hz.
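A difference-of-exponentials conductance with the two time constants quoted above can be sketched as follows. The normalization (so that the maximum equals the quoted peak conductance g_ex = 0.6nS) is our assumption for illustration; the paper's exact conductance kernel and Eq. (51) are not reproduced here.

```python
import math

TAU1, TAU2 = 2.5e-3, 0.5e-3    # decay and rise time constants (s)

def g_syn(t, g_peak=0.6e-9):
    """Synaptic conductance (S) at time t (s) after a presynaptic spike.

    Difference of exponentials, normalized (by assumption) so that the
    maximum equals g_peak.
    """
    if t < 0:
        return 0.0
    # time of the conductance maximum for a biexponential kernel
    t_peak = TAU1 * TAU2 / (TAU1 - TAU2) * math.log(TAU1 / TAU2)
    norm = 1.0 / (math.exp(-t_peak / TAU1) - math.exp(-t_peak / TAU2))
    return g_peak * norm * (math.exp(-t / TAU1) - math.exp(-t / TAU2))
```

With these constants the conductance peaks roughly 1ms after the presynaptic spike and then decays on the slower 2.5ms time scale.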
The parameters of the dendritic spike current are chosen according to single neuron measurements in hippocampal cells: ∆t = 2ms [8], g_Θ = 8.65nS (corresponding to a pEPSP of about 3.8mV at rest [8]), τ_DS = 2.7ms (such that τ + τ_DS = 4.7ms and the peak of the depolarization is reached approximately 5ms after presynaptic spiking), A = 55nA, B = 64nA, C = 9nA, τ_DS,1 = 0.2ms, τ_DS,2 = 0.3ms, τ_DS,3 = 0.7ms and t_ref,DS = 5.2ms. The correction factor, which modulates the strength of the dendritic spike, is found by fitting a linear correction function, c(g) = max(1.5 − g · 0.053nS^−1, 0), such that the experimentally observed region of saturation is obtained. The dynamics of the neuron model incorporating the mechanism for dendritic spike generation is illustrated in Fig. 1.
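The fitted linear correction function is simple enough to state directly in code; this sketch only evaluates c(g) as given above, with g in nS:

```python
# Correction factor modulating the dendritic spike strength:
# c(g) = max(1.5 - g * 0.053 nS^-1, 0), with g supplied in nS.

def correction_factor(g_ns):
    return max(1.5 - 0.053 * g_ns, 0.0)
```

For the non-enhanced coupling g_ex = 0.6nS this gives c ≈ 1.47, and c reaches zero near g ≈ 28.3nS, producing the saturation of the dendritic spike strength for large inputs.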
For calculating the SNR we use a = 0.99, b = 2, and an expected width of the synchronous pulse of t_w = 10ms; the result is insensitive to changes in these parameters.
The expected interval between successive synchronously active layers, ∆t_exp, is chosen from the interval [2ms, 7ms] such that the signal, Σ_i S_i, is maximized (cf. section "Models and Methods"). The control time interval for the estimation of the noise level is ∆t_obs = 15s. The detection probability shown in Fig. 4a,b is the fraction of successful propagations obtained from 10 different network realizations, where for each network setup propagation of synchrony was tested for 20 initial conditions. All measurements start after an initial equilibration phase of t_0 = 4000ms.
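The scan over the expected inter-layer delay can be sketched as follows. This is a simplified illustration under our own assumptions: each group's signal S_i is taken as its spike count in a window of width t_w centered on its expected activation time, and the delay maximizing Σ_i S_i is kept. Function names are hypothetical, and the paper's SNR-based detection criterion (parameters a, b) is not reproduced.

```python
def layer_signals(spikes_per_group, t_stim, dt_exp, t_w=10e-3):
    """Spike count S_i of each FFN group in a window of width t_w (s)
    centered on its expected activation time t_stim + i * dt_exp."""
    signals = []
    for i, spikes in enumerate(spikes_per_group):
        center = t_stim + i * dt_exp
        lo, hi = center - t_w / 2, center + t_w / 2
        signals.append(sum(lo <= t <= hi for t in spikes))
    return signals

def best_delay(spikes_per_group, t_stim, delays):
    """Delay from `delays` maximizing the total signal sum_i S_i."""
    return max(delays,
               key=lambda d: sum(layer_signals(spikes_per_group, t_stim, d)))
```

For a synthetic pulse traveling at 5ms per layer, scanning delays over [2ms, 7ms] recovers the true inter-layer interval.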