Abstract
Autonomous, randomly coupled neural networks display a transition to chaos at a critical coupling strength. Here, we investigate the effect of a time-varying input on the onset of chaos and the resulting consequences for information processing. Dynamic mean-field theory yields the statistics of the activity, the maximum Lyapunov exponent, and the memory capacity of the network. We find an exact condition that determines the transition from stable to chaotic dynamics and the sequential memory capacity in closed form. The input suppresses chaos by a dynamic mechanism, shifting the transition to significantly larger coupling strengths than predicted by local stability analysis. Beyond linear stability, a regime of coexistent locally expansive but nonchaotic dynamics emerges that optimizes the capacity of the network to store sequential input.
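The chaos suppression described above can be probed numerically. The following sketch (not from the paper; network size, gain, and input strength are illustrative choices) estimates the maximal Lyapunov exponent of a driven random network x' = -x + J tanh(x) + xi(t) by evolving a tangent vector alongside the trajectory and averaging its logarithmic growth rate:

```python
import numpy as np

N, g, dt, steps = 200, 1.5, 0.05, 4000  # illustrative parameters

rng = np.random.default_rng(0)
J = rng.normal(0.0, g / np.sqrt(N), (N, N))  # couplings with variance g^2/N

def max_lyapunov(input_std, seed=1):
    """Estimate the maximal Lyapunov exponent of x' = -x + J tanh(x) + xi(t)
    for white-noise input of strength input_std (Euler-Maruyama integration)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, N)   # network state
    v = rng.normal(0.0, 1.0, N)   # tangent (perturbation) vector
    v /= np.linalg.norm(v)
    log_growth = 0.0
    for _ in range(steps):
        xi = input_std * rng.normal(0.0, 1.0, N) / np.sqrt(dt)  # white-noise drive
        x = x + dt * (-x + J @ np.tanh(x) + xi)
        # linearized dynamics: v' = -v + J diag(1 - tanh(x)^2) v
        v = v + dt * (-v + J @ ((1.0 - np.tanh(x) ** 2) * v))
        norm = np.linalg.norm(v)
        log_growth += np.log(norm)
        v /= norm
    return log_growth / (steps * dt)

lam_auto = max_lyapunov(0.0)    # autonomous network, g > 1: chaotic
lam_driven = max_lyapunov(2.0)  # fluctuating input pushes the exponent down
print(lam_auto, lam_driven)
```

The suppression mechanism is visible in the linearized dynamics: the drive inflates the activity, tanh saturates, the local gains 1 - tanh(x)^2 shrink, and the tangent vector grows more slowly.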
- Received 28 September 2017
- Revised 13 August 2018
DOI: https://doi.org/10.1103/PhysRevX.8.041029
Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article’s title, journal citation, and DOI.
Popular Summary
To perform complex tasks, our brains transform inputs in a complicated, nonlinear manner. Such transformations are implemented by large recurrent networks of simple interacting units: the neurons. Corresponding neural network models exhibit a transition to chaotic activity if the overall coupling strength between neurons is increased. This transition is believed to coincide with optimal information-processing capabilities, such as short-term memory. However, we show that this coincidence is not valid for networks receiving time-varying inputs.
We theoretically analyze the stochastic nonlinear dynamics of randomly coupled neural networks in the presence of fluctuating inputs. The underlying theory, known as dynamic mean-field theory, is derived using systematic methods from statistical physics. This approach reveals that fluctuating inputs shape the network’s activity and suppress the emergence of chaos. We discover a previously unreported dynamical regime that amplifies perturbations on short timescales but is not chaotic on long timescales. In this regime, networks optimally memorize their past inputs.
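The notion of memorizing past inputs can be made concrete with a toy measurement in the spirit of reservoir computing. The following sketch (an assumption for illustration: a discrete-time stand-in for the paper's continuous-time model, with arbitrary parameters) drives a random network with a white-noise sequence and trains linear readouts to reconstruct the input k steps in the past; the memory capacity is the sum of the squared reconstruction correlations over delays:

```python
import numpy as np

N, g, T = 200, 0.9, 5000  # illustrative size, gain, and sequence length
rng = np.random.default_rng(2)
J = rng.normal(0.0, g / np.sqrt(N), (N, N))  # recurrent couplings
w_in = rng.normal(0.0, 1.0, N)               # input weights

s = rng.normal(0.0, 1.0, T)   # white-noise input sequence to be memorized
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(J @ x + w_in * s[t])  # driven network update
    X[t] = x

def memory_capacity(max_delay=30, burn=100):
    """Sum of squared correlations between linear readouts of the state x(t)
    and the past inputs s(t - k), k = 1..max_delay."""
    total = 0.0
    for k in range(1, max_delay + 1):
        states, target = X[burn:], s[burn - k:T - k]         # x(t) vs s(t-k)
        w, *_ = np.linalg.lstsq(states, target, rcond=None)  # linear readout
        r = np.corrcoef(states @ w, target)[0, 1]
        total += r ** 2
    return total

print(memory_capacity())
```

In this picture, the paper's result is that the capacity is maximized not at the autonomous edge of chaos but in the input-shaped regime where perturbations are locally amplified yet the driven dynamics remain nonchaotic.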
Our work opens the study of recurrent neural networks to the rich and powerful set of methods developed in statistical physics. This approach will foster progress in the understanding of biological information processing and improve the design, control, and analysis of artificial neural networks.