Abstract
Perceptual manifolds arise when a neural population responds to an ensemble of sensory signals associated with different physical features (e.g., orientation, pose, scale, location, and intensity) of the same perceptual object. Object recognition and discrimination require classifying the manifolds in a manner that is insensitive to variability within a manifold. How neuronal systems give rise to invariant object classification and recognition is a fundamental problem in brain theory as well as in machine learning. Here, we study the ability of a readout network to classify objects from their perceptual manifold representations. We develop a statistical mechanical theory for the linear classification of manifolds with arbitrary geometry, revealing a remarkable relation to the mathematics of conic decomposition. We show how special anchor points on the manifolds can be used to define novel geometrical measures of radius and dimension, which can explain the classification capacity for manifolds of various geometries. The general theory is demonstrated on a number of representative manifolds, including ellipsoids prototypical of strictly convex manifolds, balls representing polytopes with finite samples, and ring manifolds exhibiting nonconvex continuous structures that arise from modulating a continuous degree of freedom. The effects of label sparsity on the classification capacity of general manifolds are elucidated, displaying a universal scaling relation between label sparsity and the manifold radius. Theoretical predictions are corroborated by numerical simulations using recently developed algorithms to compute maximum margin solutions for manifold dichotomies. Our theory and its extensions provide a powerful and rich framework for applying statistical mechanics of linear classification to data arising from perceptual neuronal responses as well as to artificial deep networks trained for object recognition tasks.
Received 17 October 2017
DOI:https://doi.org/10.1103/PhysRevX.8.031003
Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article’s title, journal citation, and DOI.
Popular Summary
Our brains have the ability to accurately classify objects that we see despite huge variability in parameters such as illumination, pose, and background features. Recent advances in machine learning have resulted in neural networks with similar abilities. However, researchers lack a mathematical understanding of how biological and artificial systems achieve such remarkable recognition accuracy. Here, we show how statistical mechanical theory can be used to explain fundamental principles underlying the ability of a neural circuit to discriminate objects in the face of enormous physical variability.
We geometrically model the variability in the neural representation of a particular object as a manifold. The number of manifolds that can be classified at a particular stage in the network grows in proportion to the dimensionality of the neural representation, but the proportionality depends on the shape of the manifolds. Our theory can be used to analyze the structure of the manifold representations as they transform and propagate through the network, leading ultimately to successful classification.
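The proportionality between capacity and dimensionality has a classic baseline for the special case where each "manifold" is a single point: Cover's function-counting theorem, which gives the exact probability that P randomly labeled points in general position in N dimensions are linearly separable, with capacity P/N = 2 at the separability transition. The sketch below (our own illustrative code, not from the paper) computes this probability exactly; the manifold theory described here generalizes this point-capacity result to extended geometries.

```python
# Cover's function-counting baseline: the probability that P points in
# general position in R^N, assigned random +/-1 labels, admit a
# separating hyperplane through the origin. The capacity alpha = P/N
# crosses 1/2 separability exactly at alpha = 2.
from math import comb

def cover_separability_prob(P, N):
    """Fraction of the 2**P dichotomies of P generic points in R^N
    realizable by a homogeneous linear classifier."""
    return sum(comb(P - 1, k) for k in range(N)) / 2 ** (P - 1)

N = 100
for alpha in (1.0, 2.0, 3.0):
    P = int(alpha * N)
    print(f"alpha = {alpha}: P(separable) = {cover_separability_prob(P, N):.3f}")
```

At alpha = 2 the probability is exactly 1/2 for any N, which is the sense in which point capacity is 2 points per dimension; extended manifolds reduce this capacity in a way that depends on their effective radius and dimension.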
Our theory describes the shape of the neural manifolds using geometrical measures that are able to predict when a randomly labeled set of manifolds can be separated. These measures lead to quantities that characterize manifolds with arbitrary geometry and can be computed efficiently; we use them to analyze prototypical manifold models of neural responses.
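The separability of randomly labeled manifolds can also be probed numerically. The paper uses specialized maximum-margin algorithms for manifold dichotomies; as a rough illustrative sketch only (not the authors' algorithm), a plain perceptron can certify that a small set of randomly labeled ball-like point-cloud manifolds is linearly separable. All function names and parameter values here are our own choices for illustration.

```python
import numpy as np

def perceptron_separable(X, y, max_epochs=1000):
    """Search for w with y_i * (w . x_i) > 0 for all points.
    Returns w if a full pass makes no updates, else None."""
    P, N = X.shape
    w = np.zeros(N)
    for _ in range(max_epochs):
        updated = False
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:
                w += yi * xi
                updated = True
        if not updated:
            return w
    return None

def sample_ball_manifolds(M, K, N, radius, rng):
    """M point-cloud manifolds, each K samples on a sphere of the
    given radius around a random unit-norm center in R^N."""
    centers = rng.standard_normal((M, N))
    centers /= np.linalg.norm(centers, axis=1, keepdims=True)
    noise = rng.standard_normal((M, K, N))
    noise *= radius / np.linalg.norm(noise, axis=2, keepdims=True)
    return centers[:, None, :] + noise

rng = np.random.default_rng(0)
M, K, N, radius = 20, 10, 100, 0.2   # well below capacity for this radius
clouds = sample_ball_manifolds(M, K, N, radius, rng)
labels = rng.choice([-1.0, 1.0], size=M)   # one random label per manifold
X = clouds.reshape(M * K, N)
y = np.repeat(labels, K)                   # all points inherit their manifold's label
w = perceptron_separable(X, y)
print("separable" if w is not None else "not separable")
```

Sweeping M upward at fixed N and radius, and recording the fraction of random labelings that remain separable, traces out an empirical capacity curve of the kind the theory predicts from the manifolds' geometric radius and dimension.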
Our work provides a new theoretical framework to understand and analyze the representations formed by hierarchical neural networks, and it may lead to novel insights into how perceptual systems efficiently code and process sensory information.