Abstract
Understanding how the statistical and geometric properties of neural activity relate to performance is a key problem in theoretical neuroscience and deep learning. Here, we calculate how correlations between object representations affect the capacity, a measure of linear separability. We show that for spherical object manifolds, introducing correlations between centroids effectively pushes the spheres closer together, while introducing correlations between the axes effectively shrinks their radii, revealing a duality between correlations and geometry with respect to the problem of classification. We then apply our results to accurately estimate the capacity of deep network data.
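The capacity discussed in the abstract generalizes the classical perceptron capacity, where one asks what fraction of random labelings of P points in N dimensions is linearly separable (separable with high probability up to P/N = 2 for points in general position, per Cover's counting argument). The sketch below is a toy illustration of that zero-radius (point-manifold) special case only, not the paper's mean-field calculation for correlated spherical manifolds; the function names and the linear-programming feasibility test are my own choices, not from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def separable(X, y):
    """Check whether a linear separator w exists with y_i * (w @ x_i) >= 1,
    by testing feasibility of the corresponding linear program."""
    P, N = X.shape
    A_ub = -(y[:, None] * X)          # constraints: -y_i (x_i . w) <= -1
    b_ub = -np.ones(P)
    res = linprog(np.zeros(N), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * N, method="highs")
    return res.status == 0            # status 0 means a feasible w was found

def separable_fraction(alpha, N=20, trials=50, rng=None):
    """Fraction of random dichotomies of P = alpha * N Gaussian points
    in N dimensions that are linearly separable."""
    rng = np.random.default_rng(rng)
    P = int(alpha * N)
    hits = 0
    for _ in range(trials):
        X = rng.standard_normal((P, N))           # random point "manifolds"
        y = rng.choice([-1.0, 1.0], size=P)       # random binary labels
        hits += separable(X, y)
    return hits / trials
```

Running `separable_fraction` for a sweep of `alpha` values traces the separability transition near P/N = 2; the paper's contribution is how this critical load shifts when the objects are extended manifolds whose centroids and axes are correlated.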
- Received 29 November 2022
- Revised 3 March 2023
- Accepted 21 April 2023
DOI: https://doi.org/10.1103/PhysRevLett.131.027301
© 2023 American Physical Society
Viewpoint
Performance Capacity of a Complex Neural Network
Published 12 July 2023
A new theory allows researchers to determine the ability of arbitrarily complex neural networks to perform recognition tasks on data with intricate structure.