Physics of psychophysics: Large dynamic range in critical square lattices of spiking neurons

Psychophysics tries to relate physical input magnitudes to psychological or neural correlates. Microscopic models that account for macroscopic psychophysical laws, in the sense of statistical physics, are an almost unexplored area. Here we examine a sensory epithelium composed of two connected square lattices of stochastic integrate-and-fire cells. With one square lattice, we obtain a Stevens's law ρ ∝ h^m with Stevens's exponent m = 0.254 and a sigmoidal saturation, where ρ is the neuronal network activity and h is the input intensity (external field). We relate Stevens's power-law exponent to the field critical exponent as m = 1/δ_h = β/σ. We also show that this system pertains to the directed percolation (DP) universality class (or, perhaps, the compact-DP class). With two stacked layers of square lattices and a fraction of connectivity between the first and second layer, we obtain at the output layer ρ_2 ∝ h^{m_2}, with m_2 = 0.08 ≈ m², which corresponds to a huge dynamic range. This enhancement of the dynamic range only occurs when the layers are close to their critical point.


I. INTRODUCTION
Psychophysics is perhaps the oldest experimental branch of Psychology, starting with the pioneering work of Fechner in 1860 [1]. Its main aim is to describe how sensation is related to the input level reaching a sensory organ. Psychophysical laws are emergent properties of neuronal networks [2]. A fundamental problem in these laws is that they relate several orders of magnitude of input to a few orders of magnitude of output. This means that biological sensors have a huge dynamic range (DR), and they should have it, since natural stimuli vary by orders of magnitude in intensity. For example, several experimental results can be fitted by a Hill curve [3]:

S(I) = S_max c I^m / (S_max + c I^m),    (1)

where S(I) is the sensation level (to be measured in some scale), S_max is a saturation level, c is a constant and I is the input level. For moderate input such that cI^m ≪ S_max, we have the famous Stevens's power law [2-7]:

S = c I^m,    (2)

where m is the so-called Stevens's psychophysical exponent. If m < 1 we have a compressive curve with a large dynamic range. It is not clear how to obtain this result, because sensory neurons at the periphery have a small dynamic range. In a sense, we have a problem typical of statistical physics: how to construct a microscopic model (here based on neurons) that explains a high-level phenomenological law as a collective phenomenon.
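To make Eq. (1) concrete, here is a minimal sketch in Python (function and parameter names are ours, purely illustrative) that evaluates the Hill curve and its Stevens's regime:

```python
import numpy as np

def hill(I, c=1.0, m=0.25, S_max=1.0):
    """Hill-type response of Eq. (1); reduces to Stevens's law S = c*I**m
    when c*I**m << S_max."""
    return S_max * c * I**m / (S_max + c * I**m)

I = np.logspace(-6, 4, 11)
print(hill(I))        # compressive curve saturating at S_max
print(1.0 * I**0.25)  # pure Stevens's law, accurate for small I
```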
A result very similar to the Hill curve is predicted by three computational models [14-16], but without a simple analytic form as Eq. (1), suggesting that the use of a Hill curve in Psychophysics is only a phenomenological or fitting procedure that cannot be obtained from first principles. This view of a large dynamic range as a collective or emergent property (critical or not) of networks of excitable cells is relatively new [27-30].
The standard textbook model to account for a large DR constructed from small-DR units is some variation of Recruitment Theory: sensory neurons, which present sigmoidal responses with short DRs but different response thresholds, are combined to produce a total output with a large DR. In these models, the value of the exponent m is not predicted or constrained (see as examples [3,31,32]). Recruitment Theory has a major flaw: for a wide range of stimuli to be perceived, an equally wide variety of receptor expression in sensory cells must occur. Experimentally, however, receptor over-expression is only about two or threefold [31], so it is plausible to assume that this is not the main mechanism responsible for the phenomenon [15]. Also, the model mechanism is somewhat over-simplistic: a simple linear sum of sigmoids, with a distribution of cells fitted by hand to produce a more or less acceptable power law in some input range.
In a sense, Recruitment Theory is a curve-fitting exercise with sigmoids, not a predictive theory. For example, it does not predict the possible values for Stevens's exponent m (as will be done here): this exponent is only a fitting parameter, unable to give deep insights about the underlying neuronal network model. In our theory, on the other hand, m relates to the statistical physics field critical exponent as m = 1/δ_h.
Here we offer a modern view about psychophysical laws: any network of excitable cells, indeed any excitable medium [14,16,27], produces a Stevens's response S ∝ I^m for moderate I (and a Hill-type saturation afterwards). This is an intrinsic and basic collective property of excitable cells. So, we need not devise some tricky mechanism to obtain Stevens's law: it is there from the start, as a basic property of any network of excitable cells.
In this paper, we study stochastic integrate-and-fire neurons interacting in two coupled square lattices, as a toy model for a biological sensor. We assume that the coupling inside each square lattice is done by electric synapses, as observed, say, in the retina or the olfactory bulb. We show that, in each layer, there occurs a continuous phase transition from a silent phase to an active phase. This is called an absorbing phase transition: the absorbing phase corresponds to silence or zero activity, from which the system cannot spontaneously escape. The active phase emerges from the critical point with a given critical exponent.
Almost all such transitions pertain to the ubiquitous Directed Percolation (DP) class [33,34], or perhaps to the so-called Compact DP (C-DP or Manna) class, which, in the square lattice (d = 2), has specific critical exponents that define its universality class.
Our 2d result is new but not surprising, since the mean-field DP exponent m_MF = 1/δ_h = 1/2 has already been found for complete graphs and random networks [14,35,36]. However, our 2d DP exponent m ≈ 0.254 means that S ∝ I^0.254, a huge improvement over the mean-field S ∝ I^0.5 result.
The second square lattice layer is placed after the first one, with a small fraction p of electrical synapses between them. The aim of constructing such a retina-like two-layered sensor is to show that other Stevens's exponents can be obtained by changing the network topology. This has already been demonstrated for random networks [37], where the second layer presented an exponent close to m². Here we obtain a similar result, but with a huge dynamic range given by m_2 = 0.078 ≈ m², that is, an input range of O(10^12) (similar to the difference between luminosity at midnight and at noon) can be mapped onto an O(1) output.

II. MODEL AND METHODS
We use a stochastic leaky integrate-and-fire (LIF) neuron originally proposed by Gerstner and van Hemmen [38], rigorously investigated by Galves and Löcherbach [39] and Ferrari et al. [40], and simplified in [35,41,42]. Time is discrete and updates are done in parallel. For a discussion about discrete-time LIF models, see [43-45].
The membrane potential of a neuron situated at the i-th line and j-th column evolves as:

V_ij[t+1] = μ V_ij[t] + I_ij[t] + Σ_{(kl)∈V} W_ij,kl X_kl[t],  if X_ij[t] = 0,    (3)

V_ij[t+1] = 0,  if X_ij[t] = 1.    (4)

Here, X_ij[t] is a binary state variable (X = 1 means a spike, X = 0 silence). A neuron stays in its active state for only one time step, assumed here as the typical spike width of 1 ms.
If at a given time t the neuron with indexes ij fires, its membrane potential is reset to zero, Eq. (4); otherwise the neuron follows Eq. (3). The parameter μ ∈ [0, 1] is a leakage term which controls how much the neuron remembers of its previous voltage V_ij[t], and I_ij[t] is the external input. Neurons interact in the two-dimensional lattice, where each one is connected to its four nearest neighbors (the von Neumann neighborhood V). The strength of the (electric) synapse between the postsynaptic neuron ij and the presynaptic neuron kl (with k = i and l = j ± 1, or k = i ± 1 and l = j) is denoted by W_ij,kl. We use periodic boundary conditions.
In this stochastic model, the firing of a neuron is probabilistic, governed by a firing function Φ:

P(X_ij[t+1] = 1 | X_ij[t] = 0) = Φ(V_ij[t+1]).    (5)

Notice that this form emphasizes that the neuron has one time step of absolute refractory period, although this is implicit because we assume Φ(V = 0) = 0. The function Φ needs only to be a sigmoidal function. For mathematical convenience, we use the so-called "rational function" [41,42]:

Φ(V) = Γ(V − θ) / (1 + Γ(V − θ)),  for V > θ.    (6)

Here, θ is a firing threshold below which the neuron cannot fire, i.e., Φ(V) = 0 for V < θ.
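The single-lattice dynamics of Eqs. (3)-(6) can be summarized in a short sketch (ours, not the authors' code; the uniform W, θ = 0 and the array layout are simplifying assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

def phi(V, Gamma=1.0, theta=0.0):
    """Rational firing function of Eq. (6): zero for V <= theta."""
    x = np.maximum(Gamma * (V - theta), 0.0)
    return x / (1.0 + x)

def lattice_step(V, X, W=1.74, mu=0.0, I=0.0):
    """One parallel update of an L x L lattice, Eqs. (3)-(5)."""
    # sum of spikes of the four von Neumann neighbors (periodic boundaries)
    neigh = (np.roll(X, 1, 0) + np.roll(X, -1, 0) +
             np.roll(X, 1, 1) + np.roll(X, -1, 1))
    V = mu * V + I + W * neigh                   # integration, Eq. (3)
    V[X == 1] = 0.0                              # reset after a spike, Eq. (4)
    X = (rng.random(V.shape) < phi(V)).astype(np.int8)  # stochastic firing
    return V, X
```

Note that a neuron that has just fired has V = 0 and hence Φ(V) = 0, which implements the one-step refractory period implicitly, as stated above.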
The parameter Γ in Eq. (6) is the neuronal gain. Notice that both the gain Γ and the threshold θ are parameters experimentally related to the phenomenon of firing rate adaptation [46,47].
Although not implemented here, a homeostatic dynamics in the individual synapses W_ij,kl and thresholds θ_ij can be used to self-organize the network towards the critical state [36]. So, in our discussion about maximizing the dynamic range at criticality, we assume that such homeostatic mechanisms can operate in our network.
For a meticulous analysis of the mean-field approximation for this model we refer to [35,41,42]. We will show that the system undergoes a second-order absorbing phase transition if the external field and the firing thresholds are equal, θ_ij = I_ij, see [36]. This apparent fine-tuning condition will be discussed in Sec. IV. If θ_ij ≠ I_ij, the transition is of first order [35,41].
The activity of a network with N neurons at any time can be measured by

ρ[t] = (1/N) Σ_ij X_ij[t],    (7)

with the time average

ρ = (1/(t_f − t_i)) Σ_{t=t_i}^{t_f} ρ[t],    (8)

where (t_i, t_f) marks a large time interval in the simulation, far from transient states. The time average ρ(W, Γ), Eq. (8), is used as our order parameter, and the average synaptic weight W = ⟨W_ij,kl⟩ and average gain Γ = ⟨Γ_ij⟩ are our control parameters. We notice that, if at a given time step we have ρ[t] = 0, then a random site is chosen and its state is set to X = 1. This is done to get the network out of the absorbing state.
We define another quantity, the order parameter fluctuations Δρ, related to the fluctuations of the activity ρ:

Δρ(W, Γ) = N (⟨ρ[t]²⟩ − ⟨ρ[t]⟩²).    (9)

In order to evaluate how the neuron model responds to external stimuli when interacting in a layered system, one first needs to know how a single 2d lattice of neurons works. First, with Γ = 1 fixed, we explore the effect of the control parameter W to determine roughly where the phase transition occurs, i.e., where the sub-critical, critical and super-critical regimes are. Then, we refine our measurements and explore the critical region for systems of different sizes N, using finite-size scaling techniques [48] to better determine the critical point W_c, the order parameter critical exponent β and the susceptibility critical exponent γ′.
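As a sketch of the measurement (our notation; the N-scaling of Δρ is our assumption, chosen so that Eq. (20) below holds at criticality):

```python
import numpy as np

def order_parameter(rho_t, N):
    """Time-averaged activity (Eq. 8) and its fluctuations (Eq. 9),
    given a post-transient series rho_t of instantaneous activities."""
    rho_t = np.asarray(rho_t)
    return rho_t.mean(), N * rho_t.var()
```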
In the vicinity of the critical point, ρ and Δρ should scale as

ρ ∝ w^β,    Δρ ∝ |w|^{−γ′},    (10)

where w = (W − W_c)/W_c is the reduced control parameter.
We then proceed to our bottom-up psychophysics approach, where we build a sensor with two layers of neurons. Each layer is a square lattice. The first one is stimulated by the external stimulus, modeled as a Poisson process. The probability per time step of a neuron being activated by the stimulus is

λ = 1 − exp(−r Δt),    (11)

where r is the stimulation rate (with Δt = 1 ms). Neurons in the first layer can fire either due to synaptic excitation from neighbors or, with probability λ, due to the input Poisson process.
On the second layer, we randomly choose pN neurons (with p = 0.1) to connect to the first layer.
These neurons receive an input J X¹_ij[t], where X¹_ij is the state of the corresponding neuron of the first layer, that is, connected neurons of the first and second layers share the same indexes ij. The value of J was set high enough to guarantee that the connection between layers always passes information: if a neuron in the first layer spikes, then the neuron connected to it in the second layer also forcibly spikes. The electric synapses J are unidirectional, that is, activity in the second layer does not excite back the first layer.
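A minimal sketch of the stimulation and of the inter-layer rule (assuming Eq. (11) with Δt = 1 and a fixed random mask of forced sites; names are ours):

```python
import numpy as np

rng = np.random.default_rng(7)
L, p = 128, 0.1
forced = rng.random((L, L)) < p     # fraction p of matched ij sites

def stimulate(X1, r, dt=1.0):
    """Poisson drive of the first layer: each neuron also fires with
    probability lambda = 1 - exp(-r*dt) per time step, Eq. (11)."""
    lam = 1.0 - np.exp(-r * dt)
    drive = rng.random(X1.shape) < lam
    return np.maximum(X1, drive.astype(np.int8))

def forced_input(X1):
    """Unidirectional forcing synapses: a spike at a masked first-layer
    site makes the matched second-layer neuron spike one step later."""
    return (forced & (X1 == 1)).astype(np.int8)
```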
We ran simulations for various system sizes in the three regimes, always stimulating neurons of the first layer with rates ranging from r = 10⁻⁶ up to r = 0.1. We will show that, in the critical region, the activity ρ in each layer can map stimulation rates r with a large dynamic range. For the first layer,

ρ_1 ∝ r^m,    (12)

and for the second layer,

ρ_2 ∝ r^{m_2}.    (13)

Critical systems in the presence of an external field h have a well-established behavior.
For small field, the order parameter scales as a power law,

ρ ∝ h^{β/σ},    (14)

where β is the order parameter critical exponent and σ is the critical exponent associated with the mean cluster size. If we identify the stimulation rate r with the external field h, it is possible to write Stevens's exponent m of Eq. (12) as:

m = 1/δ_h = β/σ.    (15)

Eq. (14) is valid for asymptotically small fields, h → 0. This means that the relation between exponents (15) is valid as long as the stimulation rate r of the Poisson process (11) is small.
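In practice, m can be read off a simulated response curve by a log-log fit restricted to small rates, e.g. (a sketch; the cutoff r_max is an arbitrary choice):

```python
import numpy as np

def stevens_exponent(r, rho, r_max=1e-3):
    """Least-squares slope of log(rho) versus log(r) for r < r_max,
    estimating m of Eq. (15)."""
    r, rho = np.asarray(r), np.asarray(rho)
    sel = (r > 0) & (r < r_max) & (rho > 0)
    m, _ = np.polyfit(np.log(r[sel]), np.log(rho[sel]), 1)
    return m
```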
To quantify how each layer responds to the stimulation, we follow the dynamic range definition of Kinouchi and Copelli [14]. We measure the individual activity of each layer as a function of the stimulus rate r. We then calculate the dynamic range Δ_h of each layer for the three regimes:

Δ_h = 10 log₁₀(r_0.9 / r_0.1),    (16)

where r_0.9 and r_0.1 are the stimulation rates which evoke the activities ρ_0.9 and ρ_0.1. These activity values are obtained through the equation ρ_x = ρ_0 + x (ρ_max − ρ_0), where ρ_max and ρ_0 are the largest and smallest (not necessarily zero, due to self-sustained supercritical activity) responses of a layer. The dynamic range measures, in decibels, the ratio between the largest and smallest inputs that the system can map in the output.
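The computation of Eq. (16) can be sketched as follows (interpolating log r between simulated points; this assumes ρ increases monotonically with r):

```python
import numpy as np

def dynamic_range(r, rho):
    """Delta_h = 10*log10(r_0.9 / r_0.1), Eq. (16), from a
    stimulus-response curve (r ascending)."""
    r, rho = np.asarray(r), np.asarray(rho)
    rho0, rhomax = rho.min(), rho.max()
    targets = [rho0 + x * (rhomax - rho0) for x in (0.1, 0.9)]
    r10, r90 = 10 ** np.interp(targets, rho, np.log10(r))
    return 10.0 * np.log10(r90 / r10)
```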

III. RESULTS
A. The 2d lattice with leakage μ = 0

For the 2d network, we first present the curves ρ(W; N) for different square and rectangular lattices (from here on we fix Γ = 1 without loss of generality), see Fig. 1a. The solid line has the form

ρ = C w^β,    (17)

with w = (W − W_c)/W_c and the 2d critical point W_c = 1.74. We can see that this curve produces a very good fit of the data for large N if we use the tabulated 2d DP critical exponent β = 0.583 [49]. We also plot the fluctuations Δρ(W; N) as a function of the control parameter W in Fig. 1b.
We obtain a good fit Δρ ∝ |w|^{−γ′} for large networks if we use the 2d DP tabulated exponent γ′ = 0.2998 [49].
The significance of the DP transition for our neuronal network model is the following.
The model admits a silent phase (ρ = 0) but, with increasing coupling, there occurs a change of phase. This could be towards an oscillatory phase, or a coexistence of two fixed points for the same coupling (a non-trivial ρ phase and the trivial ρ_0 = 0 absorbing phase). This last case is reached through a discontinuous (first-order) phase transition.
In our case, we found a continuous (second-order) transition from the absorbing state ρ_0 = 0 to an active phase ρ > 0 at a critical point W_c. As discussed in the Methods section, critical exponents can be defined only for continuous transitions. The exponents found enable us to classify our transition as pertaining to the Directed Percolation (DP) universality class, or perhaps the Compact DP (Manna) class. This is not so surprising, because almost all continuous transitions from a single absorbing state pertain to such classes, following the so-called Janssen-Grassberger conjecture [33,34]. What perhaps is a bit curious is that the conjecture works in a model with somewhat complicated elements like our stochastic LIF neurons. We tried, however, to obtain the critical exponents in an independent way. In the critical region, both ρ_N and Δρ_N are strongly dependent on the system size N. To get around the fact that we are far from the thermodynamic limit, we use finite-size scaling techniques.
First, we re-scale ρ and Δρ according to the finite-size scaling forms:

ρ(w, L) = L^{−β/ν_⊥} G_ρ(L^{1/ν_⊥} w),    (18)

Δρ(w, L) = L^{γ′/ν_⊥} G_Δρ(L^{1/ν_⊥} w),    (19)

where L = √N is the characteristic square network size, ν_⊥ is the spatial correlation length critical exponent, and G_ρ and G_Δρ are scaling functions. Then, we plot ρ and Δρ as functions of the characteristic system size L for different values of W. When W = W_c, the reduced control parameter is w = 0, and ρ and Δρ become power laws that depend only on L:

ρ ∝ L^{−β/ν_⊥},    Δρ ∝ L^{γ′/ν_⊥}.    (20)

The fine determination of W_c, β, γ′ and ν_⊥ can be done with standard finite-size techniques, as delineated above. However, this is not our main concern here, since it would require much more computational effort and is not the main subject of the paper. For now, it is sufficient to show that the critical exponents of our stochastic integrate-and-fire neuronal network are compatible with the 2d DP class or with the 2d Compact-DP (Manna) class, see Table I. First, simulation results for large N, varying W, indicate W_c = 1.74 ± 0.01. Then, a simple fit of the data in Fig. 1a to Eq. (10) yields β ≈ 0.564 ± 0.05. By using Eqs. (20) and the fits of Fig. 2, we obtain β/ν_⊥ ≈ 0.77 and γ′/ν_⊥ = 0.50, that is, ν_⊥ ≈ 0.73 ± 0.06 and γ′ ≈ 0.37 ± 0.04, see Table I. We have also obtained, from the dependence on the external field at the critical point, m = 1/δ_h = 0.254 ± 0.005, see Sec. III C. From the equality δ_h = σ/β [49] we obtain σ = 2.22 ± 0.05. We observe that the errors are not statistical but simple fitting errors, see Table I. The data collapse of the ρ_N(W) and Δρ_N(W) curves and the Janssen-Grassberger conjecture [33,34] seem to be enough reasons to consider that our stochastic leaky integrate-and-fire neuronal network indeed belongs to the DP or the C-DP universality class (the exponents of these two classes are very similar, and it is difficult to determine the class from numerical results alone, see Table I).
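The rescaling of Eqs. (18)-(19) amounts to the following transformation of the raw curves (a sketch using the exponent ratios fitted above):

```python
import numpy as np

def collapse(W, rho, drho, L, Wc=1.74, beta_nu=0.77, gamma_nu=0.50, nu=0.73):
    """Rescale raw curves measured on an L x L lattice; data from
    different L should fall on the scaling functions G_rho, G_drho."""
    w = (np.asarray(W) - Wc) / Wc     # reduced control parameter
    x = L ** (1.0 / nu) * w           # rescaled distance to W_c
    return (x,
            L ** beta_nu * np.asarray(rho),
            L ** (-gamma_nu) * np.asarray(drho))
```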

B. The case with leakage µ > 0
The leakage parameter μ in Eq. (3) is the ingredient that makes our neuron different from a simple binary automaton and defines it as an integrate-and-fire element. The neuron has memory of its previous inputs because it forgets its membrane potential on a time scale set by μ. We found that μ > 0 changes the location of W_c (Fig. 4a). In the mean-field case, we have [35,42]:

W_c(μ) = (1 − μ) W_c(0),    ρ(W; μ) = C(μ) (W − W_c(μ))^β,

where C(μ) is independent of W. This means that, if x = W − W_c(μ), the functions ρ(x; μ) for different leakage values collapse, up to the factor C(μ), onto a single curve. We searched for a similar result for our square lattice, but now with W_c(μ) = (1 − μ) × 1.74. We find that the collapse is not good (not shown), meaning that the mean-field result W_c(μ) = (1 − μ)W_c(0) does not generalize to the 2d case. However, by using the measured 2d W_c(μ) from Fig. 4a, we obtain a very good data collapse, see Fig. 4b.
This data collapse means that the leakage μ > 0 does not change the universality class of our system.
C. Dynamic range of the first layer

To understand how much the different regimes influence the response, consider Fig. 5a. In the sub-critical regime, since the coupling between neurons is small, the external input does not propagate and the system response is linear. The same thing happens in the super-critical regime, but for a different reason: in this case, we have self-sustained activity, and small inputs are lost in a noisy environment. It is only close to criticality (a few percent from W_c) that both small and large stimuli alike can be mapped in the output [14,16-21,24]. The dynamic range Δ_h of the first layer can be seen in Fig. 5b. At criticality, one can obtain Δ_h ≈ 32 dB for L = 256, and this value can be higher for larger networks.

D. Dynamic range of the second layer

Like in the case of the first layer, we present examples of the response function ρ_2 in the three regimes, see Fig. 6a. The fit for the critical power law gives m_2 ≈ 0.078 (the index refers to the second layer). This accords with the expected value for the exponent of Eq. (13), which is m_2 = m² = 0.072, if we assume that the first layer represents the external input for the second layer. Here, a fraction p = 0.1 of the neurons of the first layer are connected randomly to the second layer by forcing synapses, that is, if the corresponding first-layer neuron spikes, the connected neuron in the second layer spikes after one time step.
The dynamic range of the second layer in the sub-critical and super-critical regimes is low, for the same reasons it is small in the first layer. However, in the critical regime, the dynamic range of the second layer is huge (above 40 dB, see Fig. 6b) and sufficient to account for the exquisite performance of biological sensors.

E. The effect of the inter-layer connectivity p
As observed, for small inter-layer connectivity p = 0.1, the second layer exponent seems to preserve the relation m_2 ≈ m² = (1/δ_h)² = 0.072 [48,50]. However, for larger p, we deviate from this behavior, see Fig. 7 for simulations with 0.05 ≤ p ≤ 0.5.
We must recall here the origin of the increase of dynamic range in networks of excitable cells (in contrast to the pools of isolated cells of Recruitment Theory). Networks enable signal amplification, that is, the stimulation of one cell by an input signal can produce a cascade of firings in its neighbors (a branching process). This branching of the original signal means that small signals are amplified, increasing the response to them. On the other hand, the saturation due to very large inputs is delayed because the branching processes interact and, since the cells have refractory periods, the activity is suppressed [27-29]. This occurs both for subcritical and supercritical networks (their DR is always better than that of a pool with the same number of isolated neurons), but the optimal point is the critical one [14,15].
A very low p means that the layers are uncoupled, so the signal amplification mechanism does not work. On the other hand, a large connectivity p means that the activity of the first layer, which is already amplified, is heavily communicated to the second layer. Each neuron that receives a synapse is now the source of a new branching process. This means that a high p induces saturation in the second layer, increasing its Stevens's exponent m_2.
Indeed, from Fig. 7a, it is possible to see that m_2(p) is a monotonically increasing function of the connectivity p. This variation, however, is not so large (0.07 < m_2 < 0.10), see the inset of Fig. 7. From Fig. 7b we see that the dynamic range for various p is not sensitive to such a small variation.

IV. HOMEOSTATIC CRITICALITY
Up to now, we have shown that critical networks have maximal dynamic range. However, we have not discussed how biological neuronal networks could tune themselves towards criticality. In a series of publications [22,35,36,41,42,51] we have explored homeostatic mechanisms that implement the so-called Self-Organized quasi-Criticality (SOqC) scenario [52,53].
Homeostatic criticality means that the critical point turns out to be an attractor of some adaptive dynamics of the system. In contrast to conservative SOC models like sandpiles, where the self-organization depends on dissipation at the borders of the system (without explicit equations for it), SOqC models like forest-fire models or neuronal networks have explicit drive and dissipation mechanisms.
For example, let us consider activity-dependent synapses [22,36,54], which depress by a factor u when the presynaptic neuron fires (due to vesicle depletion) and recover toward a baseline level A with a characteristic time τ:

W_ij,kl[t+1] = W_ij,kl[t] + (A − W_ij,kl[t])/τ − u W_ij,kl[t] X_kl[t],

where kl runs over the neighbors of neuron ij. Here, the drive is the recovery mechanism and the dissipation is due to the short-term depression. It is possible to show that, with this dynamics and for large τ, the average value W[t] = ⟨W_ij,kl[t]⟩ hovers around the critical value W_c. We have noticed before that the second-order phase transition only occurs when θ = I, where θ = ⟨θ_ij⟩ and I = ⟨I_ij⟩ are the average threshold and average input. This condition, for neuronal networks, seems to be a fine tuning. However, if we think of the field h = I − θ as the average suprathreshold current, the condition of zero field, h = 0, is a natural requisite for continuous phase transitions in Statistical Physics.
So, inspired by firing rate adaptation mechanisms that postulate dynamic thresholds [46], we propose the following homeostatic dynamics:

θ_ij[t+1] = θ_ij[t] − θ_ij[t]/τ_θ + u_θ X_ij[t],

where now the signs are inverted: the dissipation is due to the 1/τ_θ decay and the drive (growth of the threshold) occurs when the neuron spikes. It is also possible to show that this adaptive dynamics leads to θ[t] = ⟨θ_ij[t]⟩ → I, that is, h → 0. An experimental prediction of this mechanism is that, in critical neuronal networks with power-law avalanches, neurons mostly adapt their firing thresholds to their external inputs.
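A sketch of the two homeostatic rules side by side (the parameter values A, τ, u, τ_θ, u_θ are illustrative assumptions, not fitted values):

```python
import numpy as np

def depress_recover(W, X_pre, A=2.0, tau=1000.0, u=0.1):
    """Activity-dependent synapses: depression by a factor u at each
    presynaptic spike, slow recovery toward the baseline A."""
    return W + (A - W) / tau - u * W * X_pre

def adapt_threshold(theta, X, tau_theta=1000.0, u_theta=0.1):
    """Dynamic thresholds: 1/tau decay (dissipation) plus an increment
    u_theta at each spike (drive), pushing h = I - theta toward zero."""
    return theta - theta / tau_theta + u_theta * X
```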
So, with these two homeostatic mechanisms, the critical point W = W_c, h = h_c = 0 of the phase diagram turns out to be an attractor of the overall dynamics. Simulations of these homeostatic mechanisms in the 2d lattice are somewhat out of the scope of this paper, and full results will be presented in future work.
V. DISCUSSION AND CONCLUSION

The biological motivation for our model is the retina, where both lateral and vertical coupling by electric synapses (gap junctions) occur, forming neuronal networks with stacked layers [55]. All these electric synapses are plastic, from the millisecond to the minute time scales [56], which opens the possibility of homeostatic tuning to criticality [36,42], as supposed here. Moreover, there is experimental [57,58] and computational [59] evidence that disruption of electric synapses diminishes the sensitivity and degrades the retinal dynamic range. We emphasize that we worked here with stochastic integrate-and-fire neurons, not cellular automata as done in [14-16, 22, 28, 30, 37], thus generalizing those results to biologically more realistic elements.
By studying the critical exponents at the second-order phase transition, we found that 2d lattices of stochastic integrate-and-fire neurons are compatible with the Directed Percolation universality class. We then proposed the topology of two coupled square lattices to increase the dynamic range of a retina-like sensor. The first lattice receives Poisson inputs at rate r and represents them as a neuronal activity ρ_1 ∝ r^m, with m = 1/δ_h = β/σ = 0.268. This activity is passed, through a fraction p of the neurons, to the second layer, which then presents an output activity ρ_2 ∝ r^{m_2}. The final Stevens's exponent of the system is m_2 = 0.078 ≈ m² = (β/σ)² = 0.072. Thus, the exponent relation Eq. (15) proposed in [14] seems to be valid, regardless of topology, as long as the stimulus intensity is moderate: the power-law response holds only before a saturating regime (Hill-like curve) also found in biological sensors.
It is possible to show that 1d systems (a ring of neurons) pertain to the 1d DP class [16,60] (or perhaps the 1d Manna class). In this case, we have a very large dynamic range due to the expected value m = β/σ = 0.276486/2.554216 = 0.108247. This means that an even larger dynamic range is expected for 1d topologies.

FIG. 1: a) Order parameter ρ(W; N) for systems of different sizes close to the critical region. b) Order parameter fluctuations Δρ(W; N) for various system sizes near the critical region.

FIG. 5: a) First-layer activity ρ(r, W) as a function of the stimulation rate r. The red line (W_c = 1.74) is the power law with exponent m = 0.254. Network size N = 256 × 256. b) Dynamic range of the first layer, for several values of W and systems of three different sizes.


FIG. 6: a) Second-layer neuronal activity ρ_2(r, W) as a function of the stimulation rate r at the first layer. The red line (W_c = 1.74) refers to the power law Eq. (13) with m_2 = 0.078. Network size N = 256 × 256. b) Dynamic range of the second layer, for several values of W and systems of three different sizes (the first layer has N = 256 × 256 neurons).

FIG. 7: a) Stimulus-response curves for various values of the inter-layer connectivity fraction p. The inset shows the exponent of the power law ρ ∝ r^{m_2} as a function of p. Simulations used a 128 × 128 lattice for each layer, with fixed synaptic weight close to the critical point, W = 1.75. b) Dynamic range of the second layer for various values of p; each layer of the bi-layered system consists of a 128 × 128 two-dimensional lattice.